content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
5. Time dependent climate sensitivity?
The co-evolution of the global mean surface air temperature (T) and the net energy flux at the top of the atmosphere, in simulations of the response to a doubling of CO[2] with GFDL’s CM2.1 model.
Slightly modified from Winton et al (2010).
Global climate models typically predict transient climate responses that are difficult to reconcile with the simplest energy balance models designed to mimic the GCMs’ climate sensitivity and rate of
heat uptake. This figure helps define the problem.
Take your favorite climate model, instantaneously double the concentration of $CO_2$ in the atmosphere, and watch the model return to equilibrium. I am thinking here of coupled atmosphere-ocean
models of the physical climate system in which $CO_2$ is an input, not models in which emissions are prescribed and the evolution of atmospheric $CO_2$ is itself part of the model output.
Now plot the globally-averaged energy imbalance at the top of the atmosphere $\mathcal{N}$ versus the globally-averaged surface temperature $T$. In the most common simple energy balance models we
would have $\mathcal{N} = \mathcal{F} - \beta T$ where both $\mathcal{F}$, the radiative forcing, and $\beta$, the strength of the radiative restoring, are constants. The result would be a straight
line in the $(T, \mathcal{N})$ plane, connecting $(0, \mathcal{F})$ with $(T_{EQ} \equiv \mathcal{F}/\beta,0)$ as indicated in the figure above. The particular two-box model discussed in post #4
would also evolve along this linear trajectory; the different way in which the heat uptake is modeled in that case just modifies how fast the model moves along the line.
The figure at the top shows the behavior of GFDL’s CM2.1 model. The departure from linearity, with the model falling below the expected line, is common if not quite universal among GCMs, and has
been discussed by Williams et al (2008) and Winton et al (2010) recently — these papers cite some earlier discussions of this issue as well. Our CM2.1 model has about as large a departure from
linearity as any GCM that we are aware of, which is one reason why we got interested in this issue.
As indicated in the somewhat cryptic legend, we use two different types of simulations to make this plot. One is the instantaneous doubling of $CO_2$ referred to above. We show annual means for the
first 10 years (with each cross in the figure an average over 4 realizations to knock down the noise, branching off at different times from a control simulation) and then show 5-year means up till
year 70, again averaging over 4 realizations. Because these integrations do not go out far enough to probe the slower long term evolution, we then append a single realization of the standard
calculation in which $CO_2$ is increased at 1%/year until the time of doubling (year 70) after which it is held fixed. We plot 5-year averages from this calculation, starting in year 70, so all
points in the figure correspond to the same value of $CO_2$. 600 years still isn’t enough to equilibrate, but as long as something fundamentally new doesn’t happen in the model on longer time scales
one can extrapolate to $\mathcal{N} = 0$ to get an estimate of the equilibrium temperature response. The two simulations match up nicely in year 70, as one expects if the 1%/yr case resides during
its ramp-up phase in the intermediate regime (post #3). Because of the curvature of this trajectory, the temperature change at year 70, about 1.5-1.6K (the transient climate response (TCR)) is
smaller than we might expect from the model’s equilibrium sensitivity and the model’s value of $\mathcal{N}$ at that same time.
One’s first reaction might be to say — well, there is nonlinearity in the model in the sense that $\beta$ is effectively a function of $T$. But I think there is agreement that the underlying
dynamics is still best described as linear; it’s just that the global mean energy balance is not a function of the global mean surface temperature. A more general linear model assumes that the
global mean energy balance is a linear functional of the surface temperature field, with different spatial structures in surface temperature perturbations, even if they have the same global mean,
generating different perturbation to the global mean energy balance.
Think of some atmospheric model equilibrated over a prescribed surface temperature distribution. This temperature distribution is the input to the model. The model outputs climate statistics of
interest, including the global mean energy balance. If the relation between input (surface temperatures) and output (global mean energy balance) is linear, we can write
$\mathcal{N}(t) = \mathcal{F}(t) - [\mathcal{B}(\mu)T(\mu,t)]\equiv \mathcal{F}(t) -\frac{1}{4\pi}\int \int \mathcal{B}(\theta, \phi)T(\theta,\phi,t)\cos(\theta)d\theta d\phi$
Brackets denote a spatial average over the surface and $\mu = (\theta,\phi) = (lat,lon)$ is the position on the surface. The scalar radiative restoring constant $\beta$ has been replaced by $\
mathcal{B(\mu)}$. (By the way, I am not assuming here that the top-of-atmosphere energy balance in some small region is only a function of the surface temperature in that same region — the relation
between these two is non-local due to mixing in the atmosphere.)
The simplest case is when temperature evolves in a self-similar manner, i.e., growing with a fixed spatial structure:
$T(\mu,t) = \mathcal{G}(\mu) g(t)$
(I have normalized things so that $[\mathcal{G}] \equiv 1$). The effective radiative forcing for temperature perturbations with this structure is
$\beta_g \equiv [\mathcal{B} \mathcal{G}] \Rightarrow \mathcal{N} = \mathcal{F} - \beta_g [T]$.
If temperatures perturbations have a different structure, $T(x,t) = \mathcal{H}(x) h(t)$, then we need to replace $\beta_g$ with $\beta_h \equiv [\mathcal{B}\mathcal{H}]$. But suppose that the
temperature perturbations are the sum of two patterns with relative contributions varying in time:
$T = \mathcal{G}(x)g(t) + \mathcal{H}(x)h(t)$,
with $[T] = g+h$. This gives us enough freedom to get evolution off the classic linear trajectory. But we haven’t learned anything yet about how and why the ratio of $g$ to $h$ is evolving in time.
One way of analyzing any linear system is through the frequency-dependence of the response to perturbations. Low frequency and high frequency forcing can result in different radiative restoring
strengths if they result in different spatial structures in the response. Evidently, the low frequency component controlling the late time evolution in the response to doubling of CO[2] is
characterized by a structure that is restored less strongly than is the fast, early response. Why would that be?
The story seems to be something like this: The atmosphere tends to be most unstable to vertical mixing in the tropics, where the surface temperature are warmest, but the oceans are most unstable to
vertical mixing at high latitudes, where the surface temperatures are the coldest. It is in the subpolar oceans that the mixing between surface and deeper waters is the strongest. One expects these
regions to be a major source of the difference between fast and slow responses, with the slow responses having larger subpolar ocean warming. This effect tends to mix out to other high latitude
regions, so the high latitude amplification of the response is typically larger in the slow response.
We now have to argue why a pattern with larger high latitude amplification is restored less strongly. This is more complicated. A part of the explanation seems to be that the surface is less
strongly coupled to the atmosphere in high than in low latitudes, so the surface warming has a harder time affecting the radiation escaping to space. But a big part also seems to be played by
different cloud feedbacks that come into play in the fast vs the slow responses, the clouds reacting to the different atmospheric conditions that occur when the subploar ocean warming is held back or
is given time to respond.
One can still try to save the global mean perspective. Winton et al (2010) pursue this line of reasoning by referring to the “efficacy of ocean heat uptake”. The idea here is that the difference in
spatial structure of the fast and slow responses can be attributed to the heat being transferred from shallow to deeper ocean layers. Putting aside the question of how this heat transfer is
controlled, one can try to think of it as a different kind of “forcing” of the near-surface layer, alongside the radiative forcing. The response to heat uptake, being focused in high latitudes,
naturally has a spatial structure that is more polar amplified than the response to CO[2] (with the heat uptake fixed), so it experiences a smaller restoring strength. The effects of the surface
cooling due to heat uptake by deeper layers is amplified, slowing down the initital fast warming more than one might otherwise expect. This picture has the nice feature that it ties the timing of
the change in spatial structure directly to the saturation of the heat uptake. You may want to think about how to capture this effect with a simple modification of the two-box model described in
earlier posts.
One moral of this story is that forcing a global mean perspective on the system can make things look more complicated than they actually are, making the response look superficially nonlinear when it
is still quite linear.
Another moral is that the connection between transient and equilibrium responses may not be as straightforward as we might like, even when only considering the consequences of the physical
equilibration of the deep ocean, leaving aside things like the slow evolution of ice sheets.
[The views expressed on this blog are in no sense official positions of the Geophysical Fluid Dynamics Laboratory, the National Oceanic and Atmospheric Administration, or the Department of Commerce.]
I’m not sure I understand this correctly. The atmospheric “resistance” to radiative loss of heat to space seems to be substantially lower in cold regions than in warmer regions (not surprising in
light of the low atmospheric water vapor in cold climates). The difference in temperature between the surface and the top of the troposphere, divided by the watts per square meter radiated, seems
significantly higher in the tropics than in polar regions.
So I am puzzled that your post appears to suggest radiative loss to space is more inhibited at high latitudes a long time after a step change in forcing. Is this just related to a slow change in the
moisture content of the atmosphere at high latitudes due to very slow ocean surface warming at high latitudes?
• Steve, I guess I am returning the favor in not being sure that I understand your comment. I an not clear what you mean by “atmospheric resistance to radiative loss of heat to space seems to be
substantially lower in cold regions”. It is the resistance (associated with the loss to space) experienced by an attempt to change the surface temperature in the model that is lower in high
latitudes — ie, the atmosphere is providing more of a buffer between the surface and the radiative loss. This is the opposite of what you might expect from water vapor. Part of the answer to what
the model is doing is that surface heating just doesn’t spread as far into the troposphere in high latitudes — you can think of it in large part as remaining below the average level at which
infrared photons escape to space. (This is often referred to as lapse-rate feedback.) But in the specific model being discussed here, the main factor explaining the difference in sensitivity
between fast and slow responses — or between the response to CO2 forcing with and without the effect of ocean heat uptake — is clouds (this would primarily be shortwave) — see table 1 in Winton
et al (2010) referenced above.
□ I had played with the CM2.1 runs available on the GFDL portal in response to your post #27. Data were only available for 200 (doubling) or possibly 300 years (one of the RCP runs). Large
amplification of polar warming was only visible at the North Pole. However looking at the plots in your post 11 (thanks for the response) I can see that somewhere after that, the
amplification at the South Pole begins to catch up. I had not found a picture of this previously though I would guess it to be in several papers I haven’t read. Intuitively it makes sense in
terms of the ice mass being much less strongly coupled to ocean circulation (being mostly on land). I hope this makes sense to you because it helps my mental model.
☆ The asymmetry of polar amplification has been discussed extensively over the years, some early papers are by Bryan et al (1988)) and Stouffer et al (1989). The key is the efficient and
deep uptake of heat in the Southern ocean in these models, creating a very large effective heat capacity locally. It is certainly odd that the lack of trend in Antarctic sea ice is used
by some as a critique of global warming models when this is exactly what most models predict. The rapidity of warming in the Southern Ocean and around Antarctica is, however, one of the
things that could be sensitive to the explicit simulation of mesoscale eddies in ocean models (post #29).
○ I’ve given those papers a quick read, thank you for the links. In an attempt to tie back in to what you wrote to Steve Fitzpatrick regarding cloud feedback, I find it hard at this
moment to relate the two results that 1) net polar amplification increases over time, first over decades to several centuries as the Arctic Ocean adjusts, then over an even longer
time period as the heat uptake in the Southern Ocean relaxes; all of which tends to increase δT/δN due to the greater atmospheric buffer at the poles, while 2) there is also an
increase in positive cloud feedback over time somehow related to relaxation of ocean heat uptake generally as in Winton et al 2010. Is the temporal coevolution of these phenomena
synergistic, or a coincidence? – or perhaps a parallel question, is it a coevolution in latitude and time or merely time?
PS – While writing the questions above it occurred to me that the cloud effect could have something to do with the energy balance of the oceanic mixed layer or top few meters of the
ocean even.
○ Isaac and Bill,
Thanks for the interesting discussion here. I agree with Isaac in his response.
What I want to add is that even in the slab-ocean simulations, i.e., without the change in oceanic heat uptake, the asymmetry of polar amplification still exists ( Figure 1 of the
link ). So the (spatial) asymmetry may also be related to the lack of ice-albedo feedback over the Antarctic continent and the lack of polar atmospheric sensible and latent heat
transport over the Antarctic region ( possibly related to the much weaker stationary wave activity in the southern hemisphere). Accordingly the cloud feedback are also different.
About the temporal asymmetry of polar amplification, not only the ocean area are larger in the southern hemisphere, the vertical mixing of heat over the Southern Ocean can also be
very deep due to the eddies associated with the strongest ocean current – ACC. Both may contribute to the larger heat inertia in the southern hemisphere. Therefore, as Isaac have
said, ” The rapidity of warming … could be sensitive to the explicit simulation of mesoscale eddies in ocean models”.
I’m not a professional at this, so I hope you’ll forgive the amateur question.
My query is to do with the time-dependence of the rate of response to a time-dependent forcing. In your examples, you consider either a step or ramp change in forcing extended over a long period. But
in practice, the forcing varies diurnally and annually (and even inter-annually) with a far greater amplitude, and the step is in the mean. While I can see that averaging this over a year is valid
(in a linear system) when calculating the equilibrium, I’m not so sure about the lagged response due to the thermal inertia. It seems to me that the penetration depth, and hence the surface heat
capacity, are frequency dependent.
By smoothing out the annual variation in the models, I assumed it was implicit that the top-most surface layer that varied in temperature both up and down with the forcing was being neglected, as it
equilibrates on timescales shorter than were being considered, that the ‘surface box’ was the layer below this that was still part of the mixed layer and hence responded within a few years, and the
second box was the deep ocean. In essence, the heat capacity is a continuous function of frequency related to rate of change of penetration depth, ranging from the ‘surface’ heat capacity to the
‘deep ocean’ heat capacity that we’re extracting two slices from, based on our chosen annual time scale. Beta relates to the rate at which heat can escape upwards from this layer.
If the slope of F – beta T versus T is not constant, my immediate naive interpretation would be to suspect that beta was not constant over all time scales. Should we expect it to be?
Just as you can get dependence on frequency with the differing horizontal spatial structure of the changes, wouldn’t the same apply to the vertical spatial structure? Can you clarify for me why (or
if) the changing penetration depth over time is expected to have negligible effect on the local dynamics?
As an aside, I found it useful in thinking about your 2-box model to plot the T versus To direction field. You can then see that the two eigenvectors of the system of differential equations
correspond to the trajectory of the fast and slow responses. The system moves fast parallel to one eigenvector until it lies on the line through the equilibrium parallel to the other, and then moves
slowly along this line towards the equilibrium. I’d be interested to know if anyone has plotted a similar direction field for an actual GCM. Does it have the same structure?
• The reason that the horizontal structure of the response changes as the model equilibrates is precisely that the lower frequencies penetrate more deeply, as you say. In the two box model, one has
to somehow make the outgoing flux a function of $T_0$ to mimic this effect.
Rephrasing your proposal a bit, plotting GCM evolution in the (global mean surface temperature, total ocean heat content) plane in response to different forcing scenarios would be interesting,
but I doubt that a two-box model would be able to mimic this evolution quantitatively.
I would definitely like to see more systematic analyses of the response of GCMs as a function of the frequency of periodic CO2 or total solar irradiance forcing.
I think I understand this but perhaps in slightly different terms:
The important point that you are trying to get across is that the model results can be explained without recourse to non-linearity in $\mathcal{B}$.
That is $\mathcal{B}$ is a function of position only $\mathcal{B(\mu)}$ not of $\mathcal{T}$ or $t$.
Assuming that the forcing due to WMGGs to be separable $\mathcal{F}(\mu,t) = \mathcal{C(\mu)A}(t)$ where $[\mathcal{C}] \equiv 1$,
and considering harmonic forcings $\mathcal{A}(t) = Ae^{\i \omega t}$ and that $\mathcal{T}$ is separable we get:
$\mathcal{T}(\mu,\omega,t) = \mathcal{T^\alpha}(\mathcal{C},\mu,\omega)Ae^{\i \omega t}$ where $\omega$ is the angular frequency and $\mathcal{T^\alpha}$ is a complex function.
The sole reason for including $\mathcal{C}$ is to allow the possibility that $\mathcal{T^\alpha}$ may be dependent on the type of forcing, e.g. WWMG, ice albedo, solar.
So I am saying that WMGGs produces a spatial pattern that is dependent on the angular frequency $\omega$ and that it may take complex values.
I understand you to be saying that for high vlaues of $\omega$, $|\mathcal{T^\alpha}|$ is small overall due to heat storage and is comparatively biased towards the equator and away from the poles and
for low values of $\omega$, $|\mathcal{T^\alpha}|$ is large overall and is comparatively biased towards the poles and away from the equator, and as $\omega$ goes to infinity $\mathcal{T^\alpha}$ goes
to its equilibrium distribution. Also that $\mathcal{B(\mu)}$ is large at the equator and small at the poles.
$\mathcal{T^\alpha}(\mathcal{C},\mu,\omega)Ae^{\i \omega t}$ is analogous to your $\mathcal{G(\mu)}g(t)$ which leads me to $\beta_\alpha(\mathcal{C},\omega) \equiv [\mathcal{B(\mu)}\mathcal{T^\alpha}
(\mathcal{C},\mu,\omega)]$ where $\beta_\alpha(\mathcal{C},\omega)$ is complex valued and tends to decrease with decreasing frequency due to the spatial patterns and frequency dependence of $\mathcal
{B}$ and $\mathcal{T^\alpha}$.
I will make the specific point that whereas I can see that the separation $\mathcal{T^\alpha}(\mathcal{C},\mu,\omega)Ae^{\i \omega t}$ is plausible I cannot see this to be the case more generally.
I.E. $g(t)$ must be a sinusoid.
My analogue to $\beta_g \equiv [\mathcal{B} \mathcal{G}] \Rightarrow \mathcal{N} = \mathcal{F} -\beta_g [{T}]$ is not as straightforward as $[\mathcal{T^\alpha}]$ is not normalised to unity as it
contains information regarding the attenuation of amplitude with increasing frequency. So I would have (something like):
$\beta_\alpha(\mathcal{C},\omega) \equiv [\mathcal{B(\mu)}\mathcal{T^\alpha}(\mathcal{C},\mu,\omega)] \Rightarrow \mathcal{N(\omega)} = \mathcal{F(\omega)} -\beta_\alpha(\mathcal{C},\omega)A$ where $
\mathcal{N}$ and $\mathcal{F}$ are complex.
That seems to have been a lot of fuss to show that in my way of thinking $\beta$ is a complex valued function of $\omega$. So I do get your dependancy on frequency but that the functions being
complex valued is not totally trivial. $\mathcal{N}$ and $\mathcal{F}$ will not normally be in phase and $\mathcal{T^\alpha}(\mathcal{C},\mu,\omega)$ is not necessarily separable into $\mathcal{T^\
beta}(\mathcal{C},\mu,\omega)e^{i \varphi}$ where $\mathcal{T^\beta}$ is real valued as the phase of $\mathcal{T^\alpha}$ might vary with $\mu$.
I am begining to wonder if this was worth saying but it was good to practise a bit of Latex and I am not letting it go to waste. Anyway all my functions are linear and combining the spatial vectors
for temperature and $\beta$ would give the required results.
I did look at what would be needed to calculate the immediate radiative restoring strength and it must be just the integral of the product of the restoring vector and the Fourier transform of the
forcing, but in practical terms that is as good as a useless thing to know.
• Alex, the distinction is only that I am separating off the relationship between top-of atmosphere flux and surface temperature from the rest of the model, as this has no frequency dependence to
speak of on the time scales being discussed here — since this connection is generated on atmospheric time scales (at most a couple of months). The slow physics can influence this relationship by
changing spatial structure of the temperature response, so the relationship between spatial structure and forcing can have time lags, but the relation between this spatial structure and the
energy balance does not (in this simple picture).
□ Isaac,
Sorry I was off on a tangent, trying to dig into the implications of your explanation after the your paragraph that starts:
“The story seems to be something like this:”
Returning to the main thrust. I spotted something that I found rather alarming and I checked Winton et al (2010) to make sure that it was recognised and it is:
“The stabilized forcing warming commitment inherent in a given level of ocean heat uptake is magnified by the efficacy.”
Given that the ocean heat uptake efficacy due to the current “experiment” in the real world is modelled to be ~2.5, the standard method Stored Flux {W/m^2} times Equilibrium Sensitivity {ºC/
(W/m^2)} would only give ~40% of the implied value for committed warming, which is rather scary.
I have also looked at how the N/R – T/Teq curve would incorporate into a simple thermal model in terms of response functions and whereas it can be done easily enough I can not see how it can
be constrained by real world data so it would have to rely on the simulated curves.
All in all, the prospect that the actual curve does lie below the linear approximation is not a pleasant one and I can see that it has many repercussions. Notably that empirical estimates for
the sensitivity will tend to underestimate.
It is not clear to me how much of the apparent efficacy as T approaches Teq is due not to the effect you described but to slow feedbacks in the simulators e.g. ice albedo. I think the
described effect should, provided that the warming is monotonic for all $\mu$, lead the curve to finally reattach to the linear slope close to Teq whereas a slow feedback would not.
However if the warming were not monotonic but ended say with the poles warming whilst the equator cooled any final value of the slope would be a difference and its value would not be so
restricted. It was that sort of worry that lead me to speculate on the effects of various differential warming patterns in my note above. Also it occurred to me that such warming patterns
would give rise to continuing differential warming even if we could hold the global average temperate constant. As you inform us on other threads and your papers, small differentials are
implicated in significant regional climate change, e.g. hurricanes and sahel rainfall patterns. That we cannot unrock the boat and that some patterns of differential warming are already
committed too is not a benign notion.
If I have interpreted the main thrust correctly, it makes for gloomy reading, and I think that these consequences may not be as widely appreciated as they should be.
☆ I am not sure that I follow everything that you are saying — but If we knew the forcing well enough, we could constrain long time scale responses with paleoclimates and faster responses
with the observed warming over the past century, to see if we are getting the ratio of TCR to Teq about right. It’s a challenge. In any case, we need to clearly distinguish between the
constraints on responses on different time scales and not just assume that these ratios are well known. if this picture is right, knowing the heat uptake is not enough to convert TCR into
○ Sorry that am still not being clear (and in parts downright wrong).
I am trying to combine your insight into my thinking regarding simple models and I shall try a different tack.
The curve $\mathcal{N(T)}$ that represents the underlying trajectory of the simulator is also a function of $t$, the $t$ parameter increasing along the curve from top left to bottom
right. $\mathcal{T}(t)$ being the temperature response function for a step forcing.
Let us say that it has an initial slope $d\mathcal{N}/d\mathcal{T}(t=0)$ that is steeper than the curve so when extended as a straight line the curve is always above it. The curve
could then be considered as the sum of this new line and an additional positive slow forcing of some sort $\mathcal{F}s(t)$.
By slow forcing I mean one that cannot be represented as being propotional to the instantaneous value of $\mathcal{T}(t)$ but the result of the historic values of $\mathcal{T}(t)$ and
represented by the convolution $(\mathcal{T}(t) * \mathcal{R}(t))$ for some function $\mathcal{R}(t)$ that captures the dependence on $\mathcal{F}s(t)$ on the historical values.
Provided the system is linear, I believe that this separation can always be achieved.
In that sense the curve can be seen as being due to a lower than equilibrium value sensitivity plus an additional (and perhaps somewhat fictitious) slow forcing.
Now I presumed that the GFDL model also contains some “genuine” slow feedback forcing e.g. albedo which evolves slowly and hence is not proportional to the current temperature but
some function of the temperature history, that is how I view the difference between instantaneous (fast) and slow forcings.
So $\mathcal{F}s(t)$ would be due to the combined effect of several distinquishable slow forcings, of which one would be due to the temporal evolution of spatial warming patterns you
Let this spatial component be $\mathcal{F}\mu(t)$. In one sense this feels like a convenient “fiction” but I can not logically consider it to be any more fictitious than the lapse
rate feedback which is also due to a spatial effect, the variation of warming with height. That said, the lapse rate feedback does differ in that it would be considered to be fast at
these timescales.
When I read your (Soden & Held) paper(s) that compared simulators by way of an analysis based on kernels, I did wonder whether such a spatial aspect could have been considered as it
seemed likely to me that the simulators might have different equilibrium polar amplifications. This case differs that in that we are considering the whole trajectory so the evolution
of the spatial pattern needs to be represented, but the justification for adding an additional spatial feedback would be the same and I think valid. That said it would require some
redefinitions of the individual feedbacks, in particular changing the Planck feedback to represent its “initial” value which would be more negative than otherwise.
I shall try to make a case for this line of thinking. It is based on a consideration of what one might deduce about the restoring strength from short term (say subdecadal)
oberservations of the flux imbalance.
According to my thinking, such an experiment would largely give a measure of the initial slope $d\mathcal{N}/d\mathcal{T}(t=0)$. Now in some of the literature this is at best
considered to differ from the equilibirum slope only as a matter of the “real” slow forcings e.g. ice albedo etc.. Whereas it should also be corrected for the effects of the evolving
spatial feedback due to $\mathcal{F}\mu(t)$ if the spatial signal is still evolving at this timescale (which I presume to be the case). By this I imply that failing to allow for this
effect would result in too low a value for the equilibrium sensitivity.
On a slightly different point, as I understand you, estimates of the restoring strength based on short term tropical data would lead to significantly lower values of equilibrium
sensitivity for spatial reasons.
Even if I could never make a case for rejigging the feedbacks to this way of thinking; I do think that it has broadened my thinking on this topic substantially and hopefully not
To my way of thinking, it does restrict the application of simple models when constrained only by the observational data. I do see your point about scale separation by use of both
instrumental and reconstructed paleoclimatic data but I would now see the sequence to be for the simulators to be informed by the paleodata and for the emulators to be informed by the
simulators, as is the case here. I do not see that the additional degree of freedom given by the curve $\mathcal{N(T)}$ to a simple emulator can be adequately constrained by such a
combination of centenial instrumental and millenial paleodata, as it can do no more than fix two points on the curve, so I feel that the production of candidate curves is best left to
the simulators.
Thanks for this thread and your comments so far, they are appreciated and represent a very worthwhile learning opportunity, to me at least. I hope that by sticking to just a few
points I have been more clear.
Hi Issac,
I was interested in your experiment to instantaneously double co2 and watch the system return to equilibrium. My test is to instantaneously add enough co2 to ultimately add 1C at equilibrium. Now I
want to plot, U (“unrealized temperature increase”) = 1 – R (“realized temperature increase”) over time. What is the formula for U? If the recognition of temperature was proportional to the
unrealized increase then this would simply be U = exp(-rt). From what I understand, due to the diffusive nature of conduction, “r” is not constant and diminishes over time. Can this formula be
expressed as U=exp(f(t))?
The reason I ask is because the “heat in the pipeline” problem, in financial mathematics terms, looks like a future value of an annuity problem. If we regress ln(accumulated co2), we can get a good
match with a second order polynomial. We then take the derivative to get a linear equation representing our rate of deposit into the unrealized account. If our “force of interest” were negative
(withdrawls) and constant then there would come a point where the rate of deposit equaled the rate of withdrawl and our unrealized account would have hit an upper limit. Can a similar approach be
used with a diminishing “r” to demonstrate that there is no upper limit in this scenario?
Thanks, AJ
• There is the potential for thinking that one is closing in on an equilibrium, but then being surprised that slow processes continue to warm the system. But I don’t understand your concern about
there being no upper limit. I am not very good at converting things like “force of interest” into an equation.
Dr. Held,
I’ve recently been looking over some of the outputs from GFDL CM2.1, and going over the Winton et al (2010) paper, so I was glad to see you had posted on this before. Forgive me if this is a dumb
question, but I’ve been a bit stumped by something:
In Soden and Held (2006), the strength of radiative restoring appears to be -1.37 W/m^2/K for GFDL CM2.1, which when combined with the CO2 doubling 3.5 W/m^2 forcing (as shown in Winton et al for
example) for this model would seem to indicate an equilibrium sensitivity of 3.5 / 1.37 = 2.55 K, which is a good deal different from the 3.4 K sensitivity we know for the GFDL CM2.1 model.
Do you suppose that this is because of the changing sensitivity described here, where Soden and Held (2006) uses only the first 100 years to calculate the radiative response to a temperature
increase, which is stronger than in subsequent years (perhaps due to the differing spatial structures of the surface temperature change)? Or are there other factors (or some misunderstanding on my
part) at work here?
• Yes, that is the point I am trying to make here — the strength of radiative restoring, measured by global mean flux change per unit change in global mean temperature, does weaken as the system | {"url":"http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/19/time-dependent-climate-sensitivity/","timestamp":"2014-04-21T08:03:15Z","content_type":null,"content_length":"99870","record_id":"<urn:uuid:189c742d-df53-44fc-adc2-7df861b03e81>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00506-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Dependency Tree
I'm interested in putting together a math dependency tree, and I'm interested in your help. I'm interested in including branches of mathematics as well as non-math like (theoretical) computer science
and physics. My chart so far:
My idea is if someone is interested in quantum computing, or measure theory, or Galois theory, etc., they will be able to look at the chart to see what they need to know first.
Unless anyone has any objections, as this chart expands I will edit the image in this post so that most current chart appears in the first post.
Any and all input is most welcome.
There are 10 types of people in the world, those who understand binary, those who don't, and those who can use induction. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=202668","timestamp":"2014-04-20T00:43:54Z","content_type":null,"content_length":"12679","record_id":"<urn:uuid:94c6b222-e3f1-467b-80e5-7fadfb0f7c8e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00033-ip-10-147-4-33.ec2.internal.warc.gz"} |
Phone numbers
<br />If you were going to tell someone your phone number, how would you do it?<br /><br />Let's say your phone number is: 256-512-1024.<br /><br />Would you write the number as CCLVI-CCCCXII-MXXIV
(or CCLVI-CCCCXII-MXXIIII)?<br /><br />There's a purpose to this queston. It leads up to another question I have which is coming down the pike. <br />
Re:Phone numbers
Errrrr..... Hsitory has never been my best subject, but wasn't Alexander Graham Bell after the fall of the Roman Empire?
I, Lex Llama, super genius, will one day rule this planet! And then you'll rue the day you messed with me, you damned dirty apes!
Re:Phone numbers
[quote author=mariek link=board=3;threadid=751;start=0#7412 date=1064951131]<br />Let's say your phone number is: 256-512-1024.<br /><br />Would you write the number as CCLVI-CCCCXII-MXXIV (or
CCLVI-CCCCXII-MXXIIII)?<br />[/quote]<br /><br />I would write it as II-V-VI V-I-II I-0-II-IV, although I admit the 0 is an anachronism; I don't believe the Romans had zeros.
I, Lex Llama, super genius, will one day rule this planet! And then you'll rue the day you messed with me, you damned dirty apes!
Re:Phone numbers
I'm siding with Lector. It's more conservative with characters, less deciphering, and takes into account that not all countries follow the 1-(234)-567-8900 system.
flebile nescio quid queritur lyra, flebile lingua murmurat exanimis, respondent flebile ripae
Re:Phone numbers
Now I'm curious-what other question?
Re:Phone numbers
<br />The other question in still brewing. I haven't yet had time to form it (the phrase in Latin).<br />
Re:Phone numbers
As has been pointed out, not all countries follow this format for phone numbers, <<256-512-1024 >>; the placement of the dashes is quite abitrary, so what is the Roman way of writing 2,565,121,024?
I've searched the 'net and come up empty handed.<br /><br />BTW mariek, this is a suspiciously binary looking number - is that a clue to your next question?
Re:Phone numbers
[quote author=Phil link=board=3;threadid=751;start=0#7437 date=1064964474]<br />... so what is the Roman way of writing 2,565,121,024? I've searched the 'net and come up empty handed.<br />[/quote]
<br /><br />Let me know when you figure it out. <br />
<br />BTW mariek, this is a suspiciously binary looking number - is that a clue to your next question?<br />
<br /><br />Curious, isn't it? ;D<br /><br />Wow, someone actually noticed that they're powers of 2. Some of my favorite numbers are powers of 2, and others are just plain prime.<br /><br />
Re:Phone numbers
[quote author=mariek link=board=3;threadid=751;start=0#7441 date=1064973331]<br /><br />Let me know when you figure it out. <br /><br />[/quote]<br /><br />OK, I have discovered that if roman
numerals have an 'upper half frame' above them it means multiply the number by 100,000. And that the symbol for 10,000 is an M with 5 legs instead of 3, which can be typed as '((I))'. (However, the
((I)) is archaic, and I'm not sure if you can use both symbols at the same time).<br /><br />(I can't work out how to put an overbar on, so I'll use an underline - turn your monitor upside down to
see it properly | ((I))((I))MMMMMDCLI | ((I))((I))MXXIIII<br /><br /> is (2x10,000+5x1000+500+100+50+1)x100,000 + (2x10,000+1,000+20+4) = 25,651x100,000 + 21,024 = 2,565,121,024!<br />
Re:Phone numbers
Common alphabet mapping with some modification for zero and one would do.<br /><br />with<br />0= O,<br />1= I,<br />2=ABC,<br />3=DEF,<br />4=GH,<br />5=JKL,<br />6=MN,<br />7=PRS,<br />8=TUV and<br
/>9=WXY,<br /><br />256-512-1024 will be ALM-LIB-IOCH<br />
Re:Phone numbers
[quote author=Phil link=board=3;threadid=751;start=0#7445 date=1064976188]<br />So,<br /><br /> | ((I))((I))MMMMMDCLI | ((I))((I))MXXIIII<br /><br /> is (2x10,000+5x1000+500+100+50+1)x100,000 +
(2x10,000+1,000+20+4) = 25,651x100,000 + 21,024 = 2,565,121,024!<br /><br />[/quote]<br /><br />Ouch. I have a headache now. And this is why I decided not to major in math! <br />
Re:Phone numbers
Me, too! I liked the alphabet way better. ;D
Re:Phone numbers
[quote author=mingshey link=board=3;threadid=751;start=0#7447 date=1064976800]<br />Common alphabet mapping with some modification for zero and one would do.<br />...<br />256-512-1024 will be
ALM-LIB-IOCH<br />[/quote]<br /><br />Well, mapping the numbers to the letters on the telephone wasn't quite what I had in mind though.<br /><br /> | {"url":"http://www.textkit.com/greek-latin-forum/viewtopic.php?p=7434","timestamp":"2014-04-21T09:43:19Z","content_type":null,"content_length":"42252","record_id":"<urn:uuid:61a82e13-351f-482e-8671-267ff3f40ad1>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00165-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prosper Math Tutor
Find a Prosper Math Tutor
I am an experienced tutor and instructor in undergraduate physics. I tutored at the University of Texas at Dallas, where I was also a Teaching Assistant. I taught courses at Richland College and
Collin County Community College.
8 Subjects: including algebra 1, algebra 2, calculus, geometry
...I really LOVE science in general, but my main subjects of focus are physics and engineering, plus the math that supports them, such as algebra, geometry, and trigonometry. I like to present
core concepts in a visual manner and apply them to real world experiences. Personally, that helps me understand the concepts best.
28 Subjects: including algebra 1, algebra 2, biology, chemistry
...I pride myself on making your learning experience not only beneficial but fun. Yes, I said fun. I look forward to helping you succeed!
2 Subjects: including algebra 1, chemistry
...During my previous years in college I realized that I could actually tutor someone and have them understand troubling subject(s)in a fun and professional manner. I graduated high school with
an excellent GPA and following this same step into college. I've been on the Honors list and Deans list for the past two years of my college life.
5 Subjects: including algebra 1, algebra 2, prealgebra, spelling
...I'm a financial professional in Dallas who is looking to help students understand math, finance, or accounting. I have a dual undergraduate degree in Finance & Accounting, a MBA in Finance
from Top 50 school, and work for a Fortune 100 Company. I taught finance to more than 100 Freshman at Mizzou for two years and thoroughly enjoyed helping people learn.
20 Subjects: including algebra 1, algebra 2, ACT Math, public speaking
Nearby Cities With Math Tutor
Aubrey, TX Math Tutors
Celina, TX Math Tutors
Cross Roads, TX Math Tutors
Crossroads, TX Math Tutors
Gunter Math Tutors
Howe, TX Math Tutors
Krugerville, TX Math Tutors
Lakewood Village, TX Math Tutors
Sanger, TX Math Tutors
Tioga, TX Math Tutors
Tom Bean Math Tutors
Valley View, TX Math Tutors
Westminster, TX Math Tutors
Weston Lakes, TX Math Tutors
Weston, TX Math Tutors | {"url":"http://www.purplemath.com/prosper_tx_math_tutors.php","timestamp":"2014-04-19T20:06:24Z","content_type":null,"content_length":"23400","record_id":"<urn:uuid:4c6935e3-ed33-4211-beaa-9f47d4fa92e1>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00571-ip-10-147-4-33.ec2.internal.warc.gz"} |
Beyond Bayesians and Frequentists - Less Wrong
(Note: this is cross-posted from my blog and also available in pdf here.)
If you are a newly initiated student into the field of machine learning, it won't be long before you start hearing the words "Bayesian" and "frequentist" thrown around. Many people around you
probably have strong opinions on which is the "right" way to do statistics, and within a year you've probably developed your own strong opinions (which are suspiciously similar to those of the people
around you, despite there being a much greater variance of opinion between different labs). In fact, now that the year is 2012 the majority of new graduate students are being raised as Bayesians (at
least in the U.S.) with frequentists thought of as stodgy emeritus professors stuck in their ways.
If you are like me, the preceding set of facts will make you very uneasy. They will make you uneasy because simple pattern-matching -- the strength of people's opinions, the reliability with which
these opinions split along age boundaries and lab boundaries, and the ridicule that each side levels at the other camp – makes the "Bayesians vs. frequentists" debate look far more like politics than
like scholarly discourse. Of course, that alone does not necessarily prove anything; these disconcerting similarities could just be coincidences that I happened to cherry-pick.
My next point, then, is that we are right to be uneasy, because such debate makes us less likely to evaluate the strengths and weaknesses of both approaches in good faith. This essay is a push
against that --- I summarize the justifications for Bayesian methods and where they fall short, show how frequentist approaches can fill in some of their shortcomings, and then present my personal
(though probably woefully under-informed) guidelines for choosing which type of approach to use.
Before doing any of this, though, a bit of background is in order...
1. Background on Bayesians and Frequentists
1.1. Three Levels of Argument
As Andrew Critch [6] insightfully points out, the Bayesians vs. frequentists debate is really three debates at once, centering around one or more of the following arguments:
1. Whether to interpret subjective beliefs as probabilities
2. Whether to interpret probabilities as subjective beliefs (as opposed to asymptotic frequencies)
3. Whether a Bayesian or frequentist algorithm is better suited to solving a particular problem.
Given my own research interests, I will add a fourth argument:
4. Whether Bayesian or frequentist techniques are better suited to engineering an artificial intelligence.
Andrew Gelman [9] has his own well-written essay on the subject, where he expands on these distinctions and presents his own more nuanced view.
Why are these arguments so commonly conflated? I'm not entirely sure; I would guess it is for historical reasons but I have so far been unable to find said historical reasons. Whatever the reasons,
what this boils down to in the present day is that people often form opinions on 1. and 2., which then influence their answers to 3. and 4. This is not good, since 1. and 2. are philosophical in
nature and difficult to resolve correctly, whereas 3. and 4. are often much easier to resolve and extremely important to resolve correctly in practice. Let me re-iterate: the Bayes vs. frequentist
discussion should center on the practical employment of the two methods, or, if epistemology must be discussed, it should be clearly separated from the day-to-day practical decisions. Aside from the
difficulties with correctly deciding epistemology, the relationship between generic epistemology and specific practices in cutting-edge statistical research is only via a long causal chain, and it
should be completely unsurprising if Bayesian epistemology leads to the employment of frequentist tools or vice versa.
For this reason and for reasons of space, I will spend the remainder of the essay focusing on statistical algorithms rather than on interpretations of probability. For those who really want to
discuss interpretations of probability, I will address that in a later essay.
1.2. Recap of Bayesian Decision Theory
(What follows will be review for many.) In Bayesian decision theory, we assume that there is some underlying world state θ and a likelihood function p(X1,...,Xn | θ) over possible observations. (A
likelihood function is just a conditional probability distribution where the parameter conditioned on can vary.) We also have a space A of possible actions and a utility function U(θ; a) that gives
the utility of performing action a if the underlying world state is θ. We can incorporate notions like planning and value of information by defining U(θ; a) recursively in terms of an identical agent
to ourselves who has seen one additional observation (or, if we are planning against an adversary, in terms of the adversary). For a more detailed overview of this material, see the tutorial by North
What distinguishes the Bayesian approach in particular is one additional assumption, a prior distribution p(θ) over possible world states. To make a decision with respect to a given prior, we compute
the posterior distribution p[posterior](θ | X1,...,Xn) using Bayes' theorem, then take the action a that maximizes $\mathbb{E}_{p_{\mathrm{posterior}}}[U(\theta; a)]$.
In practice, p[posterior](θ | X1,...,Xn) can be quite difficult to compute, and so we often attempt to approximate it. Such attempts are known as approximate inference algorithms.
1.3. Steel-manning Frequentists
There are many different ideas that fall under the broad umbrella of frequentist techniques. While it would be impossible to adequately summarize all of them even if I attempted to, there are three
in particular that I would like to describe, and which I will call frequentist decision theory, frequentist guarantees, and frequentist analysis tools.
Frequentist decision theory has a very similar setup to Bayesian decision theory, with a few key differences. These are discussed in detail and contrasted with Bayesian decision theory in [10],
although we summarize the differences here. There is still a likelihood function p(X1,...,Xn | θ) and a utility function U(θ; a). However, we do not assume the existence of a prior on θ, and instead
choose the decision rule a(X1,...,Xn) that maximizes
$\displaystyle \min\limits_{\theta} \mathbb{E}[U(a(X_1,\ldots,X_n); \theta) \mid \theta]. \ \ \ \ \ (1)$
In other words, we ask for a worst case guarantee rather than an average case guarantee. As an example of how these would differ, imagine a scenario where we have no data to observe, an unknown θ in
{1,...,N}, and we choose an action a in {0,...,N}. Furthermore, U(0; θ) = 0 for all θ, U(a; θ) = -1 if a = θ, and U(a;θ) = 1 if a ≠ 0 and a ≠ θ. Then a frequentist will always choose a = 0 because
any other action gets -1 utility in the worst case; a Bayesian, on the other hand, will happily choose any non-zero value of a since such an action gains (N-2)/N utility in expectation. (I am
purposely ignoring more complex ideas like mixed strategies for the purpose of illustration.).
Note that the frequentist optimization problem is more complicated than in the Bayesian case, since the value of (1) depends on the joint behavior of a(X1,...,Xn), whereas with Bayes we can optimize
a(X1,...,Xn) for each set of observations separately.
As a result of this more complex optimization problem, it is often not actually possible to maximize (1), so many frequentist techniques instead develop tools to lower-bound (1) for a given decision
procedure, and then try to construct a decision procedure that is reasonably close to the optimum. Support vector machines [2], which try to pick separating hyperplanes that minimize generalization
error, are one example of this where the algorithm is explicitly trying to maximize worst-case utility. Another example of a frequentist decision procedure is L1-regularized least squares for sparse
recovery [3], where the procedure itself does not look like it is explicitly maximizing any utility function, but a separate analysis shows that it is close to the optimal procedure anyways.
The second sort of frequentist approach to statistics is what I call a frequentist guarantee. A frequentist guarantee on an algorithm is a guarantee that, with high probability with respect to how
the data was generated, the output of the algorithm will satisfy a given property. The most familiar example of this is any algorithm that generates a frequentist confidence interval: to generate a
95% frequentist confidence interval for a parameter θ is to run an algorithm that outputs an interval, such that with probability at least 95% θ lies within the interval. An important fact about most
such algorithms is that the size of the interval only grows logarithmically with the amount of confidence we require, so getting a 99.9999% confidence interval is only slightly harder than getting a
95% confidence interval (and we should probably be asking for the former whenever possible).
If we use such algorithms to test hypotheses or to test discrete properties of θ, then we can obtain algorithms that take in probabilistically generated data and produce an output that with high
probability depends only on how the data was generated, not on the specific random samples that were given. For instance, we can create an algorithm that takes in samples from two distributions, and
is guaranteed to output 1 whenever they are the same, 0 whenever they differ by at least ε in total variational distance, and could have arbitrary output if they are different but the total
variational distance is less than ε. This is an amazing property --- it takes in random input and produces an essentially deterministic answer.
Finally, a third type of frequentist approach seeks to construct analysis tools for understanding the behavior of random variables. Metric entropy, the Chernoff and Azuma-Hoeffding bounds [12], and
Doob's optional stopping theorem are representative examples of this sort of approach. Arguably, everyone with the time to spare should master these techniques, since being able to analyze random
variables is important no matter what approach to statistics you take. Indeed, frequentist analysis tools have no conflict at all with Bayesian methods --- they simply provide techniques for
understanding the behavior of the Bayesian model.
2. Bayes vs. Other Methods
2.1. Justification for Bayes
We presented Bayesian decision theory above, but are there any reasons why we should actually use it? One commonly-given reason is that Bayesian statistics is merely the application of Bayes'
Theorem, which, being a theorem, describes the only correct way to update beliefs in response to new evidence; anything else can only be justified to the extent that it provides a good approximation
to Bayesian updating. This may be true, but Bayes' Theorem only applies if we already have a prior, and if we accept probability as the correct framework for expressing uncertain beliefs. We might
want to avoid one or both of these assumptions. Bayes' theorem also doesn't explain why we care about expected utility as opposed to some other statistic of the distribution over utilities (although
note that frequentist decision theory also tries to maximize expected utility).
One compelling answer to this is dutch-booking, which shows that any agent must implicitly be using a probability model to make decisions, or else there is a series of bets that they would be willing
to make that causes them to lose money with certainty. Another answer is the complete class theorem, which shows that any non-Bayesian decision procedure is strictly dominated by a Bayesian decision
procedure --- meaning that the Bayesian procedure performs at least as well as the non-Bayesian procedure in all cases with certainty. In other words, if you are doing anything non-Bayesian, then
either it is secretly a Bayesian procedure or there is another procedure that does strictly better than it. Finally, the VNM Utility Theorem states that any agent with consistent preferences over
distributions of outcomes must be implicitly maximizing the expected value of some scalar-valued function, which we can then use as our choice of utility function U. These theorems, however, ignore
the issue of computation --- while the best decision procedure may be Bayesian, the best computationally-efficient decision procedure could easily be non-Bayesian.
Another justification for Bayes is that, in contrast to ad hoc frequentist techniques, it actually provides a general theory for constructing statistical algorithms, as well as for incorporating side
information such as expert knowledge. Indeed, when trying to model complex and highly structured situations it is difficult to obtain any sort of frequentist guarantees (although analysis tools can
still often be applied to gain intuition about parts of the model). A prior lets us write down the sorts of models that would allow us to capture structured situations (for instance, when trying to
do language modeling or transfer learning). Non-Bayesian methods exist for these situations, but they are often ad hoc and in many cases ends up looking like an approximation to Bayes. One example of
this is Kneser-Ney smoothing for n-gram models, an ad hoc algorithm that ended up being very similar to an approximate inference algorithm for the hierarchical Pitman-Yor process [15, 14, 17, 8].
This raises another important point against Bayes, which is that the proper Bayesian interpretation may be very mathematically complex. Pitman-Yor processes are on the cutting-edge of Bayesian
nonparametric statistics, which is itself one of the more technical subfields of statistical machine learning, so it was probably much easier to come up with Kneser-Ney smoothing than to find the
interpretation in terms of Pitman-Yor processes.
2.2. When the Justifications Fail
The first and most common objection to Bayes is that a Bayesian method is only as good as its prior. While for simple models the performance of Bayes is relatively independent of the prior, such
models can only capture data where frequentist techniques would also perform very well. For more complex (especially nonparametric) Bayesian models, the performance can depend strongly on the prior,
and designing good priors is still an open problem. As one example I point to my own research on hierarchical nonparametric models, where the most straightforward attempts to build a hierarchical
model lead to severe pathologies [13].
Even if a Bayesian model does have a good prior, it may be computationally intractable to perform posterior inference. For instance, structure learning in Bayesian networks is NP-hard [4], as is
topic inference in the popular latent Dirichlet allocation model (and this continues to hold even if we only want to perform approximate inference). Similar stories probably hold for other common
models, although a theoretical survey has yet to be made; suffice to say that in practice approximate inference remains a difficult and unsolved problem, with many models not even considered because
of the apparent hopelessness of performing inference in them.
Because frequentist methods often come with an analysis of the specific algorithm being employed, they can sometimes overcome these computational issues. One example of this mentioned already is L1
regularized least squares [3]. The problem setup is that we have a linear regression task Ax = b+v where A and b are known, v is a noise vector, and x is believed to be sparse (typically x has many
more rows than b, so without the sparsity assumption x would be underdetermined). Let us suppose that x has n rows and k non-zero rows --- then the number of possible sparsity patterns is $\binom{n}
{k}$ --- large enough that a brute force consideration of all possible sparsity patterns is intractable. However, we can show that solving a certain semidefinite program will with high probability
yield the appropriate sparsity pattern, after which recovering x reduces to a simple least squares problem. (A semidefinite program is a certain type of optimization problem that can be solved
efficiently [16].)
Finally, Bayes has no good way of dealing with adversaries or with cases where the data was generated in a complicated way that could make it highly biased (for instance, as the output of an
optimization procedure). A toy example of an adversary would be playing rock-paper-scissors --- how should a Bayesian play such a game? The straightforward answer is to build up a model of the
opponent based on their plays so far, and then to make the play that maximizes the expected score (probability of winning minus probability of losing). However, such a strategy fares poorly against
any opponent with access to the model being used, as they can then just run the model themselves to predict the Bayesian's plays in advance, thereby winning every single time. In contrast, there is a
frequentist strategy called the multiplicative weights update method that fairs well against an arbitrary opponent (even one with superior computational resources and access to our agent's source
code). The multiplicative weights method does far more than winning at rock-paper-scissors --- it is also a key component of the fastest algorithm for solving many important optimization problems
(including the network flow algorithm), and it forms the theoretical basis for the widely used AdaBoost algorithm [1, 5, 7].
2.3. When To Use Each Method
The essential difference between Bayesian and frequentist decision theory is that Bayes makes the additional assumption of a prior over θ, and optimizes for average-case performance rather than
worst-case performance. It follows, then, that Bayes is the superior method whenever we can obtain a good prior and when good average-case performance is sufficient. However, if we have no way of
obtaining a good prior, or when we need guaranteed performance, frequentist methods are the way to go. For instance, if we are trying to build a software package that should be widely deployable, we
might want to use a frequentist method because users can be sure that the software will work as long as some number of easily-checkable assumptions are met.
A nice middle-ground between purely Bayesian and purely frequentist methods is to use a Bayesian model coupled with frequentist model-checking techniques; this gives us the freedom in modeling
afforded by a prior but also gives us some degree of confidence that our model is correct. This approach is suggested by both Gelman [9] and Jordan [10].
3. Conclusion
When the assumptions of Bayes' Theorem hold, and when Bayesian updating can be performed computationally efficiently, then it is indeed tautological that Bayes is the optimal approach. Even when some
of these assumptions fail, Bayes can still be a fruitful approach. However, by working under weaker (sometimes even adversarial) assumptions, frequentist approaches can perform well in very
complicated domains even with fairly simple models; this is because, with fewer assumptions being made at the outset, less work has to be done to ensure that those assumptions are met.
From a research perspective, we should be far from satisfied with either approach --- Bayesian methods make stronger assumptions than may be warranted, and frequentists methods provide little in the
way of a coherent framework for constructing models, and ask for worst-case guarantees, which probably cannot be obtained in general. We should seek to develop a statistical modeling framework that,
unlike Bayes, can deal with unknown priors, adversaries, and limited computational resources.
4. Acknowledgements
Thanks to Emma Pierson, Vladimir Slepnev, and Wei Dai for reading preliminary versions of this work and providing many helpful comments.
5. References
[1] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta algorithm and applications. Working Paper, 2005.
[2] Christopher J.C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2:121--167, 1998.
[3] Emmanuel J. Candes. Compressive sampling. In Proceedings of the International Congress of Mathematicians. European Mathematical Society, 2006.
[4] D.M. Chickering. Learning bayesian networks is NP-complete. LECTURE NOTES IN STATISTICS-NEW YORK-SPRINGER VERLAG-, pages 121--130, 1996.
[5] Paul Christiano, Jonathan A. Kelner, Aleksander Madry, Daniel Spielman, and Shang-Hua Teng. Electrical flows, laplacian systems, and faster approximation of maximum flow in undirected graphs. In
Proceedings of the 43rd ACM Symposium on Theory of Computing, 2011.
[6] Andrew Critch. Frequentist vs. bayesian breakdown: Interpretation vs. inference. http://lesswrong.com/lw/7ck/frequentist_vs_bayesian_breakdown_interpretation/.
[7] Yoav Freund and Robert E. Schapire. A short introduction to boosting. Journal of Japanese Society for Artificial Intelligence, 14(5):771--780, Sep. 1999.
[8] J. Gasthaus and Y.W. Teh. Improvements to the sequence memoizer. In Advances in Neural Information Processing Systems, 2011.
[9] Andrew Gelman. Induction and deduction in bayesian data analysis. RMM, 2:67--78, 2011.
[10] Michael I. Jordan. Are you a bayesian or a frequentist? Machine Learning Summer School 2009 (video lecture at http://videolectures.net/mlss09uk_jordan_bfway/).
[11] D. Warner North. A tutorial introduction to decision theory. IEEE Transactions on Systems Science and Cybernetics, SSC-4(3):200--210, Sep. 1968.
[12] Igal Sason. On refined versions of the Azuma-Hoeffding inequality with applications in information theory. CoRR, abs/1111.1977, 2011.
[13] Jacob Steinhardt and Zoubin Ghahramani. Pathological properties of deep bayesian hierarchies. In NIPS Workshop on Bayesian Nonparametrics, 2011. Extended Abstract.
[14] Y.W. Teh. A bayesian interpretation of interpolated Kneser-Ney. Technical Report TRA2/06, School of Computing, NUS, 2006.
[15] Y.W. Teh. A hierarchical bayesian language model based on pitman-yor processes. Coling/ACL, 2006.
[16] Lieven Vandenberghe and Stephen Boyd. Semidefinite programming. SIAM Review, 38(1):49--95, Mar. 1996.
[17] F.~Wood, C.~Archambeau, J.~Gasthaus, L.~James, and Y.W. Teh. A stochastic memoizer for sequence data. In Proceedings of the 26th International Conference on Machine Learning, pages 1129--1136,
Comments (50)
Sort By: Best
I haven't read this in detail but one very quick comment: Cox's Theorem is a representation theorem showing that coherent belief states yield classical probabilities, it's not the same as the
dutch-book theorem at all. E.g. if you want to represent probabilities using log odds, they can certain relate to each other coherently (since they're just transforms of classical probabilities), but
Cox's Theorem will give you the classical probabilities right back out again. Jaynes cites a special case of Cox in PT:TLOS which is constructive at the price of assuming probabilities are twice
differentiable, and I actually tried it with log odds and got the classical probabilities right back out - I remember being pretty impressed with that, and had this enlightenment experience wherein I
went to seeing probability theory as a kind of relational structure in uncertainty.
I also quickly note that the worst-case scenario often amounts to making unfair assumptions about "randomization" wherein adversaries can always read the code of deterministic agents but
non-deterministic agents have access to hidden sources of random numbers. E.g. http://lesswrong.com/lw/vq/the_weighted_majority_algorithm/
Well, Cox's Theorem still assumes that you're representing belief-strengths with real numbers in the first place. Really you should go back to Savage's Theorem... :)
Good catch on Cox's theorem; that is now fixed. Do you know if the dutch book argument corresponds to a named theorem?
I'm not sure exactly how your comment about deterministic vs. non-deterministic agents is meant to apply to the arguments I've advanced here (although I suppose you will clarify after you're done
Separately, I disagree that the assumptions are unfair; I think of it as a particularly crisp abstraction of the actual situation you care about. As long as pseudo-random generators exist and you can
hide your source of randomness, you can guarantee that no adversary can predict your random bits; if you could usefully make the same guarantee about other aspects of your actions without recourse to
a PRG then I would happily incorporate that into the set of assumptions, but in practice it is easiest to just work in terms of a private source of randomness. Besides, I think that the use of this
formalism has been amply validated by its intellectual fruits (see the cited network flow application as one example, or the Arora, Hazan, and Kale reference).
Good catch on Cox's theorem; that is now fixed. Do you know if the dutch book argument corresponds to a named theorem?
There is a whole class of dutch book arguments, so I'm not sure which one you mean by the dutch book argument.
In any case, Susan Vineberg's formulation of the Dutch Book Theorem goes like this:
Given a set of betting quotients that fails to satisfy the probability axioms, there is a set of bets with those quotients that guarantees a net loss to one side.
Yes, that is the one I had in mind. Thanks!
Then you might think you could have inconsistent betting prices that would harm the person you bet with, but not you, which sounds fine.
Rather: "If your betting prices don't obey the laws of probability theory, then you will either accept combinations of bets that are sure losses, or pass up combinations of bets that are sure gains."
I've tried to do something similar with odds once, but the assumption about (AB|C) = F[(A|C), (B|AC)] made me give up.
Indeed, one can calculate O(AB|C) given O(A|C) and O(B|AC) but the formula isn't pretty. I've tried to derive that function but failed. It was not until I appealed to the fact that O(A)=P(A)/(1-P(A))
that I managed to infer this unnatural equation about O(AB|C), O(A|C) and O(B|AC).
And this use of classical probabilities, of course, completely defeats the point of getting classical probabilities from the odds via Cox's Theorem!
Did I miss something?
By the way, are there some other interesting natural rules of inference besides odds and log odds which are isomorphic to the rules of probability theory? (Judea Pearl mentioned something about MYCIN
certainty factor, but I was unable to find any details)
EDIT: You can view the CF combination rules here, but I find it very difficult to digest. Also, what about initial assignment of certainty?
EDIT2: Nevermind, I found an adequate summary ( http://www.idi.ntnu.no/~ksys/NOTES/CF-model.html ) of the model and pdf ( http://uai.sis.pitt.edu/papers/85/p9-heckerman.pdf ) about probabilistic
interpretations of CF. It seems to be an interesting example of not-obviously-Bayesian system of inference, but it's not exactly an example you would give to illustrate the point of Cox's theorem.
When the assumptions of Bayes' Theorem hold, and when Bayesian updating can be performed computationally efficiently, then it is indeed tautological that Bayes is the optimal approach. Even when
some of these assumptions fail, Bayes can still be a fruitful approach. However, by working under weaker (sometimes even adversarial) assumptions, frequentist approaches can perform well in very
complicated domains even with fairly simple models; this is because, with fewer assumptions being made at the outset, less work has to be done to ensure that those assumptions are met.
I've only skimmed this for now (will read soon), but I wanted to point out that I completely agree with this conclusion (without reading the arguments in detail). However, I might frame it
differently: both Bayesian statistics and frequentist statistics are useful only insofar as they approximate the true Bayesian epistemology. In other words, if you know the prior and know the
likelihood, then performing the Bayesian update will give P(A|data) where A is any question you're interested in. However, since we usually don't know our prior or likelihood, there's no guarantee
that Bayesian statistics -- which amounts to doing the Bayesian update on the wrong model of the actual structure of our uncertainty (i.e. our actual prior + likelihood) -- will closely approximate
Bayesian epistemology. So, of course we should consider other methods that, while superficially don't look like the true Bayesian update, may do a better job of approximating the answer we want.
Computational difficulty is a separate reason why we might have to approximate Bayesian epistemology even if we can write down the prior + likelihood and that, once again, might entail using methods
that don't look "Bayesian" in any way.
If you recall, I briefly made this argument to you at the July minicamp, but you didn't seem to find it persuasive. I'll note now that I'm simply not talking about decision theory. So, e.g., when you
It follows, then, that Bayes is the superior method whenever we can obtain a good prior and when good average-case performance is sufficient.
I'm not taking a position on whether we need to consider whether we need average-case performance to be sufficient in order for using Bayesian statistics to be the best or a good option (I have
intuitions going both directions, but nothing fleshed out).
I predict that you'll probably answer my question in the later essay since my position hinges, crucially, one whether Bayesian epistemology is correct, but do you see anything that you disagree with
I predict that you'll probably answer my question in the later essay since my position hinges, crucially, one whether Bayesian epistemology is correct, but do you see anything that you disagree
with here?
Nope, everything you said looks good! I actually like the interpretation you gave:
However, I might frame it differently: both Bayesian statistics and frequentist statistics are useful only insofar as they approximate the true Bayesian epistemology.
I don't actually intend to take a position on whether Bayesian epistemology is correct; I merely plan to talk about implications and relationships between different interpretations of probability and
let people decide for themselves which to prefer, if any. Although if I had to take a position, it would be something like, "Bayes is more correct than frequentist but frequentist ideas can provide
insight into patching some of the holes in Bayesian epistemology". For instance, I think UDT is a very frequentist thing to do.
A nice middle-ground between purely Bayesian and purely frequentist methods is to use a Bayesian model coupled with frequentist model-checking techniques; this gives us the freedom in modeling
afforded by a prior but also gives us some degree of confidence that our model is correct. This approach is suggested by both Gelman [9] and Jordan [10].
Just to pile on a little bit here: A Bayesian might argue that uncertainty about which model you're using is just uncertainty, so put a prior on the space of possible models and do the Bayesian
update. This can be an effective method, but it doesn't entirely get rid of the problem - now you're modeling the structure of your uncertainty about models in a particular way, and that higher level
model could be wrong. You're also probably excluding some plausible possible models, but I'll sidestep that issue for now. The Bayesian might argue that this case is analogous to the previous - just
model your (model of the structure of uncertainty about models) - put a prior on that space too. But eventually this must stop with a finite number of levels of uncertainty, and there's no guarantee
that 1) your model is anywhere near the true model (i.e. the actual structure of your uncertainty) or 2) you'll be able to get answers out of the mess you've created.
On the other hand, frequentist model checking techniques can give you a pretty solid idea of how well the model is capturing the data. If one model doesn't seem to be working, try another instead!
Now a Bayesian might complain that this is "using the data twice" which isn't justified by probability theory, and they would be right. However, you don't get points for acting like a Bayesian, you
get points for giving the same answer as a Bayesian. What the Bayesian in this example should be worried about is whether the model chosen at the end ultimately gives answers that are close to what a
true Bayesian would give. Intuitively, I think this is the case - if a model doesn't seem to fit the data by some frequentist model checking method, e.g. a goodness of fit test, then it's likely that
if you could actually write down the posterior probability that the particular model you chose is true (i.e. it's the true structure of your uncertainty), that probability would be small, modulo a
high degree of prior certainty that the model was true. But I'm willing to be proven wrong on this.
For those who really want to discuss interpretations of probability, I will address that in a later essay.
I am still waiting with bated breath. :-)
Sorry about that! I'm really behind on writing right now, and probably will be for at least the next month as I work on a submission to NIPS (http://nips.cc/). I have a few other writing things to
catch up on before this, but still hope to get to it eventually.
No worries -- good luck with your submission! :-)
As Andrew Critch [6] insightfully points out, the Bayesians vs. frequentists debate is really three debates at once, centering around one or more of the following arguments:...
I think there's a much more important and fundamental debate you're missing in your taxonomy, and one of the wellsprings of LW criticism: the sub-category of frequentist techniques called null
hypothesis testing. There are legitimate & powerful frequentist criticisms of NHST, and these are accepted and echoed as major arguments by many who are otherwise on an opposite side for one of those
other debates.
For my part, I'm sure that NHST is badly misleading and wrong, but I'm not so sure that I can tar all the other frequentist techniques with the same brush.
Bayesian methods make stronger assumptions than may be warranted, and frequentists methods provide little in the way of a coherent framework for constructing models, and ask for worst-case
guarantees, which probably cannot be obtained in general.
As a non-expert in the area, I find that this implies that "Bayesian methods" are unsuitable for FAI research, as preventing UFAI requires worst-case guarantees coupled with the assumption that AI
can read your source code. This must be wrong, otherwise EY would not trumpet Bayesianism so much. What did I miss?
If I apply Frequentist Decision Theory As Described By Jsteinhardt (FDTADBJ) to a real-world decision problem, where θ ranges over all possible worlds (as opposed to a standard science paper where θ
ranges over only a few parameters of some restricted model space), then the worst case isn't "we need to avoid UFAI", it's "UFAI wins and there's nothing we can do about it". Since there is at least
one possible world where all actions have an expected utility of "rocks fall, everyone dies", that's the only possible world that affects worst case utility. So FDTADBJ says there's no point in even
trying to optimize for the case where it's possible to survive.
Yup, exactly. It seems possible to me that you can get around this within the frequentist framework, but most likely it's the case that you need to at least use Bayesian ideas somewhere to get an AI
to work at all.
I plan to write up a sketch of a possible FAI architecture based on some of the ideas paulfchristiano has been developing; hopefully that will clarify some of these points.
Another answer is the complete class theorem, which shows that any non-Bayesian decision procedure is strictly dominated by a Bayesian decision procedure --- meaning that the Bayesian procedure
performs at least as well as the non-Bayesian procedure in all cases with certainty.
I don't understand the connection to the earlier claims about minimizing worst-case performance. To strictly dominate, doesn't this imply that the Bayesian algorithm does as well or better on the
worst-case input? In which case, how does frequentism ever differ? Surely the complete class theorem doesn't show that all frequentist approaches are just a Bayesian approah in disguise?
Trivia: "fairs well against" should be fares.
I don't understand why Bayesians are presented as going for expected value while Frequentists are going for worst-case. These seem kind of orthogonal issues.
These theorems, however, ignore the issue of computation --- while the best decision procedure may be Bayesian, the best computationally-efficient decision procedure could easily be non-Bayesian.
This raises another important point against Bayes, which is that the proper Bayesian interpretation may be very mathematically complex.
if we are trying to build a software package that should be widely deployable, we might want to use a frequentist method because users can be sure that the software will work as long as some
number of easily-checkable assumptions are met.
I think these are the strongest reasons you've raised that we might want to deviate from pure Bayesianism in practice. We usually think of these (computation and understandability-by-humans) as
irritating side issues, to be glossed over and mostly considered after we've made our decision about which algorithm to use. But in practice they often dominate all other considerations, so it would
be nice to find a way to rigorously integrate these two desiderata with the others that underpin Bayesianism.
Support vector machines [2], which try to pick separating hyperplanes that minimize generalization error, are one example of this where the algorithm is explicitly trying to maximize worst-case
Could you expand on this a little? I've always thought of SVMs as minimizing an expected loss (the sum over hinge losses) rather than any best-worst-case approach. Are you referring to the "max min"
in the dual QP? I'm interested in other interpretations...
In fact, now that the year is 2012 the majority of new graduate students are being raised as Bayesians (at least in the U.S.) with frequentists thought of as stodgy emeritus professors stuck in
their ways.
Is this actually true? Where would one get numbers on such a thing?
Data point: One of our Montreal LW meetup members showed us a picture and description pulled from his Bayes stats/analysis class, and the picture shows kiosks with the hippy bayes person and the
straight-suited old-and-set-in-his-ways corporate clone, along with the general idea that frequentist thinking is good for long-term verification and reliability tests, but that people who promote
frequentism over bayes when both are just as good are Doing Something Wrong (AKA sneer at the other tribe).
I don't think anyone needs anecdotes that Bayesian approaches are more popular than ever before or are a bona fide approach; I'm interested in the precise claim that now a majority of grad students
identify as Bayesians. That is the interest.
Ah, sorry for misunderstanding and going off on a tangent.
No, it's not true. This whole F vs B thing is such a false choice too. Does it make sense in computational complexity to have a holy war between average case and worst case analysis of algorithm
running time? Maybe for people who go on holy wars as a hobby, but not as a serious thing.
Does it make sense in computational complexity to have a holy war between average case and worst case analysis of algorithm running time?
Er, yes?
I don't understand why this was linked as a response at all. Randomization is conjectured not to help in the sense that people think P = BPP. But there are cases where randomization does strictly
help (wikipedia has a partial list: http://en.wikipedia.org/wiki/Randomized_algorithm).
My point was about sociology. Complexity theorists are not bashing each other's heads in over whether worst case or average case analysis is "better," they are proving theorems relating the
approaches, with the understanding that in some algorithm analysis applications, it makes sense to take the "adversary view," for example in real time systems that need strict guarantees. In other
applications, typical running time is a more useful quantity. Nobody calls worst case analysis an apostate technique. Maybe that's a good example to follow. Keep religion out of math, please.
Keep religion out of math, please.
I agree with this. That was supposed to be the point of the post.
Randomization is conjectured not to help in the sense that people think P = BPP.
Even if P = BPP, randomization still probably helps; P = BPP just means that randomization doesn't help so much that it separates polynomial from non-polynomial.
Your analogy is imprecise. Average case and worst case analyses are both useful in their own right, and deal with different phenomena; F and B claim to deal with the same phenomena, but F is usually
more vague about what assumptions its techniques follow from.
A more apt analogy, in my opinion, would be between interpretations of QM. All of them claim to deal with the same phenomena, but some interpretations are more vague about the precise mechanism than
Why do you think F is more vague than B? I don't think that's true. LW folks (up to and including EY) are generally a lot more vague and imprecise when talking about statistics than professional
statisticians using F for whatever reason. But still seem to have strong opinions about B over F. It's kinda culty, to be honest.
Here's a book by a smart F:
The section on B stat is fairly funny.
F techniques tend to make assumptions that are equivalent to establishing prior distributions, but because it's easy to forget about these assumptions, many people use F techniques without
considering what the assumptions mean. If you are explicit about establishing priors, however, this mostly evaporates.
Notice that the point about your analogy was regarding area of application, not relative vagueness.
I don't have a strong personal opinion about F/B. This is just based on informal observations about F techniques versus B techniques.
many people use F techniques without considering what the assumptions mean
Can you name three examples of this happening?
Here's one: http://lesswrong.com/lw/f6o/original_research_on_less_wrong/7q1g
Every biology paper released based on a 5% P-value threshold without regard to the underlying plausibility of the connection. There are many effects where I wouldn't take a 0.1% P-value to mean
anything (see: kerfluffle over superluminal neutrinos), and some where I'd take a 10% P-value as a weak but notable degree of confirmation.
I could, but I doubt anything would come of it. Forget about the off-hand vagueness remark; the analogy still fails.
"Area of app" depends on granularity: "analysis of running time" (e.g. "how long will this take, I haven't got all day") is an area of app, but if we are willing to drill in we can talk about
distributions on input vs worst case as separate areas of app. I don't really see a qualitative difference here: sometimes F is more appropriate, sometimes not. It really depends on how much we know
about the problem and how paranoid we are being. Just as with algorithms -- sometimes input distributions are reasonable, sometimes not.
Or if we are being theoretical statisticians, our intended target for techniques we are developing. I am not sympathetic to "but the unwashed masses don't really understand, therefore" kind of
arguments. Math techniques don't care, it's best to use what's appropriate.
edit: in fact, let the utility function u(.) be the running time of an algorithm A, and the prior over theta the input distribution for algorithm A inputs. Now consider what the expectation for F vs
the expectation for B is computing. This is a degenerate statistical problem, of course, but this isn't even an analogy, it's an isomorphism.
The section on B stat is fairly funny.
No doubt about it, Larry Wasserman* is a smart guy. Unfortunately, that section isn't his finest work. The normal prior example compares apples and oranges as discussed here, and the normalizing
constant paradox analysis is just wrong, as LW himself discusses here.
* I'm just a teeny bit jealous that his initials are "LW". How awesome would that be?
I don't have precise numbers but this is my experience after having worked with ML groups at Cambridge, MIT, and Stanford. The next most common thing after Bayesians would be neural nets people if I
had to guess (I don't know what you want to label those as). Note that as a Bayesian-leaning person I may have a biased sample.
I suspect Berkeley might be more frequentist but am unsure.
I see.
Finally, Bayes has no good way of dealing with […] with cases where the data was generated in a complicated way that could make it highly biased […].
Wait, what? If we don't know about the possibility of bias, we're doomed anyway, are we not? If we do know about it, then we just have to adjust our prior, right? Or is this again about the
intractability of true Bayesian computation in complicated cases?
Some random thoughts:
In order to fix "Bayesian Decision Theory" so that it works in multiplayer games, have it search through strategies for the one that leads to maximum utility, rather than just going action by action.
I guess this may be a non-mainstream thing?
Bayes is the superior method whenever we can obtain a good prior and when good average-case performance is sufficient. However, if we have no way of obtaining a good prior, or when we need
guaranteed performance, frequentist methods are the way to go.
If you "need guaranteed performance," just include that information in the utility function.
the Bayesians vs. frequentists debate is really three debates at once, centering around one or more of the following arguments:
Whether to interpret subjective beliefs as probabilities
Whether to interpret probabilities as subjective beliefs (as opposed to asymptotic frequencies)
Whether a Bayesian or frequentist algorithm is better suited to solving a particular problem.
Why are these arguments so commonly conflated?
Given the rest of your article, it looks like "conflated" could be replaced by "correlated" here. Calling the relationship between ideals and algorithms "conflation" already judges the issue a bit :)
In order to fix "Bayesian Decision Theory" so that it works in multiplayer games, have it search through strategies for the one that leads to maximum utility, rather than just going action by
action. I guess this may be a non-mainstream thing?
Nope, that's definitely a standard thing to do. It's what I was referring to when I said:
We can incorporate notions like planning and value of information by defining U(θ; a) recursively in terms of an identical agent to ourselves who has seen one additional observation (or, if we
are planning against an adversary, in terms of the adversary).
However, this doesn't actually work that well, since recursively modeling other agents is expensive, and if the adversary is more complicated than our model can capture, we will do poorly. It is
often much better to just not assume that we have a model of the adversary in the first place.
If you "need guaranteed performance," just include that information in the utility function.
There is a difference between "guaranteed performance" and "optimizing for the worst case". Guaranteed performance means that we can be confident, before the algorithm gets run, that it will hit some
performance threshold. I don't see how you can do that with a Bayesian method, except by performing a frequentist analysis on it.
Given the rest of your article, it looks like "conflated" could be replaced by "correlated" here. Calling the relationship between ideals and algorithms "conflation" already judges the issue a
bit :)
When terminology fails to carve reality at its joints then I think it is fair to refer to it as conflation. If it was indeed the case that ideals mapped one-to-one onto algorithms then I would
reconsider my word choice.
There is a difference between "guaranteed performance" and "optimizing for the worst case". Guaranteed performance means that we can be confident, before the algorithm gets run, that it will hit
some performance threshold.
Ah, okay. Whoops.
I don't see how you can do that with a Bayesian method, except by performing a frequentist analysis on it.
How about a deliberate approximation to an ideal use of the evidence? Or do any approximations with limited ranges of validity (i.e. all approximations) count as "frequentist"? Though then we might
have to divide computer-programming frequentists into "bayesian frequentists" and "frequentist frequentists" depending on whether they made approximations or applied a toolbox of methods.
How about a deliberate approximation to an ideal use of the evidence?
I'm confused by what you are suggesting here. Even a Bayesian method making no approximations at all doesn't necessarily have guaranteed performance (see my response to Oscar_Cunningham).
I'm referring to using an approximation in order to guarantee performance. E.g. replacing the sum of a bunch of independent, well-behaved random variables with a gaussian, and using monte-carlo
methods to get approximate properties of the individual random variables with known resources if necessary.
There is a difference between "guaranteed performance" and "optimizing for the worst case".
I'm not sure what the difference between these two is, could you spell it out for me?
"Guaranteed performance" typically cashes out as "replace the value of an action with the probability that its outcome is better than L, then pick the best" whereas "optimizing for the worst case"
typically cashes out as "replace the value of an action with the value of its worst outcome, then pick the best."
The latter is often referred to as "robustness" and the former as "partial robustness," and which one is applicable depends on the situation. Generally, the latter is used in problems with severe
probabilistic uncertainty, whereas the former needs some probabilistic certainty.
Suppose that there are two possible policies A and B, and in the worst case A gives utility 1 and B gives utility 2, but for the specific problem we care about we require a utility of 3. Then an
algorithm that optimizes for the worst case will choose B. On the other hand, there is no algorithm (that only chooses between policies A and B) that can guarantee a utility of 3. If you absolutely
need a utility of 3 then you'd better come up with a new policy C, or find an additional valid assumption that you can make. The subtlety here is that "optimizing for the worst case" implicitly means
"with respect to the current set of assumptions I have encoded into my algorithm, which is probably a subset of the full set of assumptions that I as a human make about the world".
The notion of guaranteed performance is important because it tells you when you need to do more work and design a better algorithm (for instance, by finding additional regularities of the environment
that you can exploit). | {"url":"http://lesswrong.com/lw/f7t/beyond_bayesians_and_frequentists/","timestamp":"2014-04-21T10:47:59Z","content_type":null,"content_length":"233776","record_id":"<urn:uuid:767c229a-1ec4-4cb7-a745-df2fe181894e>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00413-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Haskell-cafe] Re: [reactive] FRP + physics / status of hpysics
Peter Verswyvelen bugfact at gmail.com
Fri Mar 6 07:01:41 EST 2009
Thanks for the info. With backtracking I actually meant the computation of
the exact collision time, and let (part of the simulation) only go that far,
so it's not really "back tracking" in the physics engine; does that
correspond to your 2nd proposal. I just got this from a physics
implements it that way (at least that why I got from reading it
diagonally, the books contains a lot of advanced math...)
But do you mean that with your proposed methods the simulation will advance
a full "time step" anyway, so the time interval does not need to broken up
into smaller ones, where each sub-interval ends with a collision event? I
wander how this could work since most of the time in a game when a collision
happens, the game logic decides what forces to apply next, so the simulation
can't really advance a full time step anyway (although that could be hacked
I guess). Converting the game logic into differential equations with
constraints seems very hard.
However, I must admit I haven't used any modern physics engines the last 5
years or so... But it's interesting to hear from people that did.
On Fri, Mar 6, 2009 at 11:59 AM, jean-christophe mincke <
jeanchristophe.mincke at gmail.com> wrote:
> Hello Peter,
> The backtraking in time to solve the collision problem you mentionned is
> not, in my opinion, efficient.
> From a previous life as an aerospace engineer, I remember that two other
> solutions exist to handle contact or collision constraints, at least if 2nd
> order diff. equations are used to describe the motion of a solid with mass.
> In any case, you have to use a 'serious' variable time step integration
> algorithm (I.E Runge-Kutta).
> 1. The naive one: introduce a (virtual) spring between every 2 objets that
> may collide. When these objets get closer, the spring is compressed and
> tries to push them back.
> If the mass/velocity are high, that leads to a stiff system and the time
> steps may become very small.
> However, this solution does not require any modification of the equations
> of motion.
> 2. The serious one: modify or augment the equations of motion so that the
> collision constraints are implicitly taken into account. If I remember well,
> the magical trick is to use langrangian multipliers.
> The difficult here (especially in the context of aFRP) is to derive the new
> equations.
> Hope it helps
> Regards
> Jean-Christophe Mincke
> 2009/3/6 Peter Verswyvelen <bugfact at gmail.com>
>> Regarding hpysics, did anybody did some experiments with this? The blog
>> seems to be inactive since december 2008; has development ceased?
>> Do alternatives exist? Maybe good wrappers (hopefully pure...) around
>> existing engines?
>> Integrating hpysics with Grapefruit might be a good topic for the
>> Hackaton, trying to make a simple game (e.g. Pong or Breakout) without using
>> recursive signal functions, but with correct collision response and
>> better-than-Euler integration, all handled by the physics engine. Other FRP
>> engines could be tried, but Grapefruit hacking is already a topic on the
>> Hackaton, so it would combine efforts.
>> It feels as if none of the current FRP engines can handle physics
>> correctly, since a typical physics implementations requires "time
>> backtracking", in the sense that when you want to advance the current
>> simulation time by a timestep, collision events can happen during that time
>> interval, and hence the FRP engine can only advance time until the earliest
>> collision event. So to do physics *with* an FRP engine, the implementation
>> and maybe even semantics of the FRP system might need to be changed. *Using*
>> a physics engine as a blackbox inside an FRP system might make more sense.
>> Thanks to Wolfgang Jeltsch and Christopher Lane Hinson for having a
>> discussion with me that lead to this. Interestingly a similar discussion
>> was help by other people in the Reactive mailing list at the same time :-)
>> Cheers,
>> Peter Verswyvelen
>> _______________________________________________
>> Reactive mailing list
>> Reactive at haskell.org
>> http://www.haskell.org/mailman/listinfo/reactive
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://www.haskell.org/pipermail/haskell-cafe/attachments/20090306/79d107c1/attachment.htm
More information about the Haskell-Cafe mailing list | {"url":"http://www.haskell.org/pipermail/haskell-cafe/2009-March/057169.html","timestamp":"2014-04-19T21:50:48Z","content_type":null,"content_length":"8685","record_id":"<urn:uuid:555ad9a7-079a-4996-9b14-59bc929d9482>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00365-ip-10-147-4-33.ec2.internal.warc.gz"} |
Denumerable sets...
May 2nd 2009, 08:46 PM #1
Denumerable sets...
Let A, B, and C be disjoint denumerable sets. Show that A∪B∪C is also denumerable.
Given: A≈N, B≈N, and C≈N. Goal:A∪B∪C≈N
Suppose A,B, and C are non-empty disjoints sets such that A≈N, B≈N, and C≈N. Then, there exists functions f:A→N, g:B→N, and h:C→N that are one-to-one and onto N.
This is where my understanding kinda falls apart...Do have to set some knda of restriction? I can't find many examples in my book of this kind of stuff. Actaully, I'm probably just not making the
There are three bijections: $\alpha :A \Leftrightarrow \mathbb{N}\;,\;\beta :B \Leftrightarrow \mathbb{N}\;\& \;\chi :C \Leftrightarrow \mathbb{N}$.
Define a new function $\Phi :A \cup B \cup C \mapsto \mathbb{N}\;,\;\Phi (x) = \left\{ {\begin{array}{lr}<br /> {2^{\alpha (x)} ,} & {x \in A} \\<br /> {3^{\beta (x)} ,} & {x \in B} \\<br /> {5^
{\chi (x)} ,} & {x \in C} \\ \end{array} } \right.$
You must show that the function, $\Phi$, is well-defined and is an injection.
Suppose Φ(x)=Φ(y). We see that x and y must have same parity for or else Φ(x)≠Φ(y). So, suppose x,y∈A. Then, plugging into the definition Φ(x)=2^∝(x)=2^∝(y)=Φ(y). ln(2^∝(x))=ln(2^∝(y)) ⇒ ∝ln(2)x=
∝ln(2)y ⇒ x=y. Therefore, Φ(x) is injective when x∈A.
Let y be an element of the codomain. By our definition y=2^∝(x). Solving for x we get x=ln(y)/∝ln(2) when y>0. Now, f(ln(y)/∝ln(2))=2^∝(ln(y)/∝ln(2))=y for y∈N. Thus, Φ(x) is surjective.
Then I proceed to go down the list?
Absolutely not! The function $\Phi$is of course not surjective.
But it does not have to be in order to prove the theorem.
All one has to do to prove a set is denumerable is to show that it is infinite and exhibit an injection from the set to $\mathbb{N}$.
Last edited by Plato; May 3rd 2009 at 12:02 PM.
Plato's method works, of course, but I had a different idea. Since A is countable, there exists a function N->A that is one-to-one and countable so each member of A can be "labeled" $a_n$.
Similarly, members of B and C can be "labeled" $b_n$ and $c_n$. Now define a function from N to the union of A, B, and C by $f(3n)= a_n$, $f(3n+1)= b_n$, and $f(3n+2)= c_n$. It is easy to show
that function is bijective.
Plato's method works, of course, but I had a different idea. Since A is countable, there exists a function N->A that is one-to-one and countable so each member of A can be "labeled" $a_n$.
Similarly, members of B and C can be "labeled" $b_n$ and $c_n$. Now define a function from N to the union of A, B, and C by $f(3n)= a_n$, $f(3n+1)= b_n$, and $f(3n+2)= c_n$. It is easy to show
that function is bijective.
Could you maybe expand on this? I am not sure how I would write out the new function H:N⇒A∪B∪C? Would I just list them like piece wise functions?
May 3rd 2009, 03:47 AM #2
May 3rd 2009, 10:19 AM #3
May 3rd 2009, 10:29 AM #4
May 3rd 2009, 11:49 AM #5
MHF Contributor
Apr 2005
May 5th 2009, 10:26 PM #6 | {"url":"http://mathhelpforum.com/discrete-math/87023-denumerable-sets.html","timestamp":"2014-04-18T13:33:04Z","content_type":null,"content_length":"52326","record_id":"<urn:uuid:1d076b27-f04c-40e0-a2bf-0d077124ac04>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00412-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum - Ask Dr. Math Archives: Middle School Word Problems
This page:
word problems
Dr. Math
See also the
Dr. Math FAQ:
classic problems
word problems
About Math
factoring expressions
graphing equations
factoring numbers
conic sections/
3D and higher
Number Sense
factoring numbers
negative numbers
prime numbers
square roots
Word Problems
Browse Middle School Word Problems
Stars indicate particularly interesting answers or good places to begin browsing.
Selected answers to common questions:
Mixture problems.
How many gallons are in a barrel?
One day four baseball games were played, involving eight teams. Three people each predicted the four winners of the games. Use their predictions to determine which teams played each other.
The mass of a baseball is 50g. What is the mass of a bucket when 2 buckets = 6 blocks; 1 bucket + 1 block = 2 milk cartons; and 2 baseballs = 1 milk carton.
If a straight line is drawn diagonally from one corner of the court to the opposite corner, how many tiles will the diagonal intersect?
In each single bed you can find 7 bedbugs, and in each double bed 13 bedbugs. If there are 106 bedbugs in all, how many double beds are there?
Bill + Ben's age = 91. Bill is twice as old as Ben was when Bill was as old as Ben is now.
Juan can bike twice as fast as he can run...
Two trains are 150 kilometers apart, heading toward each other along a single track...
Four students borrowed books from a bookshelf and eight students returned books. There are 27 books on the shelf. How many books were there in the beginning?
How many boys in a school wear spectacles if there are 1400 pupils, 1/4 of those wear spectacles, and 2/7 of them are boys?
How many goats are there in the herd? What are the sizes of the feeding groups once they have stabilised? Find at least two possible cyclic patterns of sizes.
A candle is lit at 4:30 and burns until 10:30, and a shorter one is lit at 6:00 and goes out at 10:00. If they were the same length at 8:30, what were their original lengths?
Each of 5 students paid a fare with 5 coins, for a total of $21.83. How many pennies did the bus driver receive?
You buy .02 and .15 stamps, paying $1.56 in all. There are 10 more .02 than .15 stamps; how many of each kind did you buy?
Find the heart rate of a 15 year old at 85% intensity.
In order to figure a profit of 10% I would normally take my cost and then multiply by 1.1 to find the selling price. My employer tells me that in order to find a 10% profit I should divide my
cost by 0.9, but I don't understand how this is 10%. Can you please explain?
Raymond gives candy bars away in a specific pattern until he has no candy bars left. How many candy bars did he start with?
On a race track, one car travels 5 laps a minute and another car travels 8 laps a minute. How long will it be before the second car laps the first?
Please tell me how many square yards of carpeting are needed for the following room/closet.
Two cars leave a garage traveling in opposite directions. One car leaves at 8 am and averages 60 mph. The other car leaves at 9 am and averages 50 mph. At what time will they be 225 miles apart?
The census taker says, "I need to know the ages of your children."
A man is three times as old as his son was at the time when the father was twice as old as his son will be two years from now. Find the present age of each if they sum to 55.
A man has nine children whose ages are at an exact interval. The sum of the squares of the ages of each is the square of his own age. What is the age of each child and the man?
In a survey of 270 grade 9 students, 58 percent liked rock music. How many did not like rock music?
The difference of twice one number three times another number is 24. Find the numbers if their product is a minimum.
Our whole class has come up with one answer to a problem, while the teacher's answer book gives a different answer. What have we done wrong?
If a chicken and a half lays an egg and a half in a day and a half, how long does it take to get a dozen eggs?
Patrick has a box of crayons with red, blue, yellow, orange, green and purple. How many different ways can Patrick select 3 colors?
What is the easiest way to do a hard word problem?
If a tree grows a 1/4-inch-thick ring each year for the first 60 years, 1/3-inch-rings for the next 80 years, and 1/2-inch-rings for the next 200 years, what is the circumference of a
123-year-old tree?
If Joe travels at 50 mph, he arrives at his destination 20 minutes faster than if he travels at 45 mph. How far does he travel?
A worm is at the bottom of a well that is 100 feet deep. Each day, he climbs up 5 feet towards the top. At night he falls back 2 feet. How many days will it take him to reach the top of the well?
The minute hand of a clock is 3 cm longer then the hour hand. If the distance between the tips of the hands at nine o'clock is 15 cm, how long is the minute hand?
A clock shows the correct time on May 1 at 1:00 P.M, but loses 20 minutes a day. When will the clock show the correct time again?
Is there more coffee in the tea, or more tea in the coffee, or are they the same?
It takes me 3 hours to paint a house. It takes you 5 hours to paint a house. How long will it take for both of us to paint a house?
Jones takes 12 hours to complete a task. Marco arrives, and they finish in 2 hours. How long would Marco have needed to do the job alone?
An adult wonders how to logically reason through problems that entail combining rates of work. Moved to re-examine the mechanics he learned in high school, Doctor Jerry introduces a constant
tacitly assumed in the wording of such problems.
Mildred receives a 5% commission on her sales of exercise equipment and a 6% commission on her sales of weight training equipment...
Which of these is the better buy: a 6-ounce can of tuna that sells for $1.59, or a package of three 3-ounce cans for $2.19?
Page: [<prev] 1 2 3 4 5 6 7 8 9 10 11 12 [next>] | {"url":"http://mathforum.org/library/drmath/sets/mid_word_problems.html?start_at=81&s_keyid=38676394&f_keyid=38676395&num_to_see=40","timestamp":"2014-04-16T04:42:44Z","content_type":null,"content_length":"24056","record_id":"<urn:uuid:d8a73e29-479b-4614-9ec0-ccd3e0661436>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00113-ip-10-147-4-33.ec2.internal.warc.gz"} |
angle between subspaces
up vote 8 down vote favorite
Let $E$ be a finite dimensional real inner product space. I want to define the angle between two subspaces $E_1$ and $E_2$. This has a fairly obvious meaning if $E_1$ is 1-diemsnional: Take the angle
between any non-zero vector in $E_1$ and its orthogonal projection onto $E_2$.
There are a number of other cases that can be treated ad-hoc, if one is a hyperplane, or the dihedral angle between planes in $R^3$.
In general, it isn't quite clear what the right definition is. I see two possibilities:
1. If $p=\dim E_1\le \dim E_2$, consider the two subspace $\lambda^p(E_1)$ and $\Lambda^p(E_2$ of $\Lambda^p(E)$ (which is also an inner product space, and proceed as above, since $\Lambda^p(E_1)$
is a line.
2. $Hom(E,E)$ is itself an inner product space with the inner product $$ \langle A,B\rangle=trace A^\top B. $$ Let $A_i$ be the orthogonal projection onto $E_i$ and take the angle between $A_1$ and
Are either of these definitions standard? Are they equivalent (I think so)? Is there another definition, perhaps more immediate?
2 Didn't you try Google? If you put "angle between subspaces" into Google you will find a ton of stuff there. – Dick Palais Jul 13 '12 at 17:00
2 I did go to google. I found lots of things, along the lines of principal angles and the product of their cosines. I don't really understand what that measures; maybe it is one (or both) of the
suggestions above. By the way, I have a third possibility: Take the infimum of the angles between pairs of unit vectors, one in $E_1$ and one in $E_2$, and both orthogonal to $$E_1\cap E_2$ (angle
$0$ if this set is empy, i.e., if one subspace is a subspace of the other). – John Hubbard Jul 13 '12 at 17:16
John, what are $\lambda^p$ and $\Lambda^p$? – Vidit Nanda Jul 13 '12 at 19:16
I think the two definitions aren't equivalent. If one space is generated by the $p$ first vectors of an ON basis, and $B$ is the matrix of orthogonal proj. on a second same dimension subspace,
with obvious 2x2 block partition, then the first angle has cosine $\det(B_{11})$ and the second has cosine $tr(B_{11})/p$. In general, I would say that the most general notion of "angle" is the
orbit of the pair of subspaces under the orthogonal group. – BS. Jul 14 '12 at 14:14
I am not sure it can be done in general if $E_1$ and $E_2$ are of different dimensionality. Ideally the angle (or rather its cosine) would be given by the scalar or inner product of $\Lambda_1$
and $\Lambda_2$ where each $\Lambda$ is the exterior product of all the elements in some basis of $E_1$ and $E_2$ respectively, normalised to unity. However the standard definition for the inner
product does not apply if $\Lambda_1$ and $\Lambda_2$ are not of the same grade. – AlexArvanitakis Jul 15 '12 at 1:18
show 1 more comment
2 Answers
active oldest votes
There is a standard answer: Principal angles, see http://en.wikipedia.org/wiki/Principal_angles.
Let $p \ge q$ be the dimensions of the two subspaces $E_1$ and $E_2$. Then there is a unique non-increasing sequence $[c_1,c_2,...,c_q]$ with entries in $[0,1]$ (and a matching
non-decreasing sequence $[s_1,s_2,...,s_q]$) such that one can have an orthonormal basis for $E$, call it $e_1,e_2,...$, in such a way that one subspace is generated by orthonormal vectors
up vote $$e_1,e_2,...,e_p$$ and the other subspace generated by orthonormal vectors $$c_1e_1+s_1e_{p+q},c_2e_2+s_2e_{p+q-1},...,c_qe_q+s_qe_{p+1}.$$ One can see this from the Singular Value Theorem.
5 down The principal angles are obviously those angles whose cosines match the $c_i$ values.
This concept captures all of the geometric invariant information relating the positioning of the two subspaces, so any well-defined definition you care to give must be a deterministic
function of this sequence of principal angles.
add comment
Let me confuse you some more. There is a third possibility that is used frequently in functional analysis. Define
$$\delta(E_1,E_2)= \sup_{x\in E_1,\;|x|=1}{\rm dist}\; (x,E_2). $$
The number $\delta(E_1,E_2)$ is called the gap between $E_1$ and $E_2$. Clearly $\delta(E_1, E_2)\in [0,1]$ so that there exists $\theta\in [0,\frac{\pi}{2}]$ such that
$$\delta(E_1,E_2)=\sin \theta.$$
We define the above $\theta$ to be the angle between $E_1,E_2$. Note that if $\dim E_1=1$, than this definition agrees with your first definition. However
$$\delta(E_1, E_2)\neq \delta(E_2,E_1).$$
up vote 4 Moreover
down vote
$$ \theta <\frac{\pi}{2} \Longleftrightarrow \delta(E_1,E_2)<1 \Longleftrightarrow E_1\cap E_2^\perp= 0. $$
Your first definition of angle has a similar property. Finally let me point out that
$$ \delta(E_1,E_2)= \Vert P_{E_2^\perp}P_{E_1}\Vert, $$
where $P_U$ denotes the orthogonal projection onto the subspace $U$, and for any linner operator $A$ we set
$$ \Vert A\Vert =\sup_{|x|=1} |Ax|. $$
If you take the Hausdorff distance between the intersections of $E_1$ and $E_2$ with the unit sphere, you get a very good metric on the space of all subspaces. It works even for closed
subspaces of Banach spaces. This is similar to a symmetrized version of your $\delta$. It isn't quite the angle I am after, because it returns $\pi/2$ when $E_1\subset E_2$ and $E_1 \ne
E_2$. I would want the angle between a line and a plane in $R^3$ to be $0$ when the line is in the plane. – John Hubbard Jul 15 '12 at 16:02
add comment
Not the answer you're looking for? Browse other questions tagged linear-algebra or ask your own question. | {"url":"http://mathoverflow.net/questions/102153/angle-between-subspaces","timestamp":"2014-04-17T21:41:38Z","content_type":null,"content_length":"62775","record_id":"<urn:uuid:3f322d81-3b08-4fe0-b36a-9d875be84283>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can Nitrogen Offer a Different Perspective on the Hydrogen Storage Problem? (NH3)n-(H2)m Clusters from Cryogenic Storage as Novel Fuels
We have initiated a systematic exploration of the classical minima and the ground state properties of (NH3)n-cH2 and (NH3)n-pH2, where cH2 is the classical hydrogen molecule and pH2 is para-hydrogen.
The classical minima are systematically searched for (NH3)8-20-cH2 and (NH3)8-20-pH2, whereas the ground state properties are computed for n = 8,12,16,17 , for c-H2 and pH2. The sizes above are
chosen after a dominant pattern of adsorption for the hydrogen molecule on the ammonia clusters in this size range is observed and first emerges with the octamer: Both the classical and para-hydrogen
prefer the site with four ammonia molecules arranged in a rhombic pattern. The sizes 12 and 16 are chosen because of their higher symmetry and increased stability. Both the dodecamer and the
hexadecamer of ammonia have cage - like structures. However, neither of them have a cavity sufficiently large to contain a molecule of hydrogen inside. Therefore, the systematic search for the minima
is performed by placing the hydrogen molecule randomly on the surface of a sphere with sufficiently large radius to contain comfortably the cluster inside. A large number of Brownian trajectories are
generated over the first five minima of the bare ammonia clusters. Comparing the classical minima with the 0K quantum energies allows us to quantify the zero point energy for the systems. We find
this to be greater than 90% of the classical energy in many cases. For the n = 17 case we find that neither the classical, nor para-hydrogen are bound when quantum effects are considered. At the same
time, our exploration of the minima of bare ammonia clusters continues to seek possible large cages inside which hydrogen can be potentially stored. At the present time, we are performing genetic
algorithm minima searches for n = 22 and n = 23. The structure of the global minimum of the former provides insightful information to elucidate the growth pattern of ammonia clusters. With the
structure of the global minimum of n = 22 we predict that the n = 27 will be the first cluster to have an ammonia molecule in the center fully coordinated like the bulk ammonia ice, namely, the
central molecule is the donor of three hydrogen bonds and the acceptor of three hydrogen bonds simultaneously.
We have formulated an extension of the Ring Polymer dynamics approach to curved spaces using stereographic projection coordinates. We test the theory by simulating the particle in a ring, mapped by a
stereographic projection using three potentials. Two of these are quadratic, and one is a nonconfining sinusoidal model. We propose a new class of algorithms for the integration of the Ring Polymer
Hamilton equations in curved spaces. These are designed to improve the energy conservation of symplectic integrators based on the split operator approach. For manifolds, the position - position
autocorrelation function can be formulated in numerous ways. We find that the position - position autocorrelation function computed from configurations in the Euclidean space, that contains, as a
submanifold, the configuration space has the best statistical properties.
We have continued to explore the Smart Darting approach, using the n-dimensional Decoupled Double Wells [(DDW)n] potential energy surfaces for which we can obtain deterministic results. In previously
reported efforts we had found that Smart Darting far outperforms Parallel Tempering for the computations of the classical heat capacity for these systems. The drawback for Smart Darting, as
implemented for classical thermodynamic simulations, is that its implementation involves transformations of coordinates. Therefore, the proper Jacobian has to be computed and included in the
algorithm that generates the random walk. This is nontrivial for atomic clusters and becomes very complicated to implement for systems of rigid molecules. This year we have extended our exploration
of the Decoupled Double Wells [(DDW)n] for the computation of its thermodynamic properties using several Infinite Swapping and Partial Infinite Swapping algorithms [c.f. N. Plattner, J. D. Doll, P.
Dupuis, H. Wang, Y. Liu, and J. E. Gubernatis J. Chem. Phys. 135, 134111 (2011)]. Infinite Swapping (IS) is a technique for rare event sampling that uses, as the sampling distribution, the sum of all
possible permutations of the product of n individual distributions, each term associating a set of n walkers with a set of n temperatures. N. Plattner, et. al. demonstrate that such sums symmetrized
over all the possible permutations are connected in regions of configuration space where rare events manifest themselves, whereas the individual distributions are not connected and hence, much harder
to sample. We have coded a Smart Monte Carlo algorithm for classical simulations in (DDW)n, we have formulated a generic algorithm for producing all n! permutation operators for up to n = 7 and we
have combined this code with the Smart Monte Carlo algorithm code to perform a number of tests. Because the full IS formalism grows factorially over a set of chosen temperatures, and these may not be
sufficiently numerous to cover the important regions, a set of partial "shuffle" strategies have been considered by N. Plattner, et. al. We test a 18 - temperature PIS(1-3|3-1) scheme for (DDW)1-80.
We note that the approach works no better than Parallel Tempering. We are in the process of coding and testing a PIS(5|4) and a PIS(6|5) scheme with 20 and 30 temperatures respectively. | {"url":"https://acswebcontent.acs.org/prfar/2012/Paper11854.html","timestamp":"2014-04-19T01:48:36Z","content_type":null,"content_length":"26048","record_id":"<urn:uuid:95fecbb2-8cfe-4a55-abad-21feac3b093a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
Give parametric equations and bounds for the parameter that describe the unit circle as shown. In each case the unit circle should be traced only once. Check the answers by putting them in a
calculator and seeing if we find the right picture.
• We want to start at (0,1) and go around the circle counterclockwise from t = 0 to t = 4π.
From this example we know that switching the "normal" equations traces out the circle clockwise startingfrom (0,1).
x(t)& = &sin ty(t)& = &cos t
0≤ t ≤ 2πPICTURE param eq 23Start with these equations. We want y to stay the same, moving from 1 down to $-1 and back up to $1:PICTURE param eq 24However, we want x to be the opposite. We want x to
first move to $-1, then to $ + 1, then back to 0:PICTURE param eq 25If we stick a negative sign in front of the x equation and leave the y equation unchanged, we'll draw the circle in the correct
direction. The equations
x(t) = -sin ty(t) = cos t
for 0 ≤ t ≤ 2πproduce the circle starting at the correct point and drawn in the correct direction:PICTURE graph these, with arrows and label t = 0 and t = 2πThere's one thing left to address: the
speed. Right now we're taking from 0 to $2π$ to draw the circle. That's not long enough. We want to take from 0 to 4π. Tofix this, double the period of each equation. The final parameterization is
x(t) = -sin (\frac{t}{2})y(t) = cos (\frac{t}{2})
for 0 ≤ t ≤ 4π. | {"url":"http://www.shmoop.com/points-vectors-functions/parametric-unit-circle-exercises-3.html","timestamp":"2014-04-16T04:13:20Z","content_type":null,"content_length":"28783","record_id":"<urn:uuid:69db6d8f-d21d-4c08-adfe-041ac804846a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: RE: Two-word commands with gettoken
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: RE: Two-word commands with gettoken
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: Two-word commands with gettoken
Date Sun, 8 Mar 2009 18:56:21 -0000
I wouldn't use globals here, or indeed almost anywhere else.
Anyone watching needs to know that local macro 0 is special; it is born as whatever the user types after the command line. It can be redefined, as here with -gettoken-.
A little inelegant, but practical, is to do something like this:
program define mycmd
local typed `0'
gettoken subcmd 0: 0
if "`subcmd'"=="reg" | "`subcmd'"=="areg" |
"`subcmd'"=="xtreg" {
mycmd_ols `typed'
else if "`subcmd'"=="ivreg" | "`subcmd'"=="xtivreg" |
"`subcmd'"=="ivreg2" | "`subcmd'"=="xtivreg2" {
mycmd_iv `typed'
else error 199
and then parse away.
Alternatively, don't use -gettoken- at all. That sounds better.
program define mycmd
local subcmd : word 1 of `0'
if "`subcmd'"=="reg" | "`subcmd'"=="areg" |
"`subcmd'"=="xtreg" {
mycmd_ols `0'
else if "`subcmd'"=="ivreg" | "`subcmd'"=="xtivreg" |
"`subcmd'"=="ivreg2" | "`subcmd'"=="xtivreg2" {
mycmd_iv `0'
else error 199
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Augusto Cadenas
Sent: 07 March 2009 20:09
To: statalist@hsphsun2.harvard.edu
Subject: st: Two-word commands with gettoken
I have a question about -gettoken- and programming in Stata. The stata
help file suggests that -gettoken- can be used to create a two-word
command. This is the example that is given:
*** begin example ***
program define mycmd
gettoken subcmd 0: 0
if "`subcmd'"=="list" {
mycmd_l `0'
else if "`subcmd'"=="generate" {
mycmd_g `0'
else error 199
program define mycmd_l
program define mycmd_g
*** end example ***
I wonder how I could use the `subcmd' that has been determined by the
first program, -mycmd-, within the sub-programs -mycmd_l- and
-mycmd_g- without referring to it explicitly. To make a concrete
example: In my case I want a program to do two similar, but slightly
different things depending on whether I am doing an OLS regression or
an IV regression. So the setup I have in mind is like:
*** begin example ***
program define mycmd
gettoken subcmd 0: 0
if "`subcmd'"=="reg" | "`subcmd'"=="areg" |
"`subcmd'"=="xtreg" {
mycmd_ols `0'
else if "`subcmd'"=="ivreg" | "`subcmd'"=="xtivreg" |
"`subcmd'"=="ivreg2" | "`subcmd'"=="xtivreg2" {
mycmd_iv `0'
else error 199
program define mycmd_ols
`subcmd' `0'
program define mycmd_iv
`subcmd' `0'
*** end example ***
But this does not work, I guess because `subcmd' is not recognized
within the next program. How do I get around that? It's two days I'm
trying and I haven't found a solution. Thanks for any suggestions.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-03/msg00350.html","timestamp":"2014-04-20T14:21:33Z","content_type":null,"content_length":"9876","record_id":"<urn:uuid:67dcb3fc-c2ad-4982-8e50-976fb3653ceb>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00590-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fayetteville, GA Calculus Tutor
Find a Fayetteville, GA Calculus Tutor
...Taught Prealgebra concepts as a GMAT instructor for three years. I love helping students understand Prealgebra! Tutored on Precalculus topics during high school and college.
28 Subjects: including calculus, physics, statistics, GRE
...I earned my BS in mathematics from Bethune-Cookman college, my master's in educational administration and leadership from Florida A&M University, and I'm almost done with my PhD in mathematics
education at Georgia State University. Once I'm done with that, I'll consider graduate degrees in pure or applied mathematics. I'm patient with students at all levels.
10 Subjects: including calculus, geometry, algebra 1, algebra 2
I have been tutoring students of all ages for the past seven years in the areas of reading, language arts, mathematics (algebra, middle-school mathematics, elementary-level mathematics, calculus
), and different sciences. I am currently a certified high-school science teacher at a public school. I ...
19 Subjects: including calculus, reading, English, physics
...If you cancel an appointment less than 8 before a session and do not notify me, you will be charged a cancellation fee. If I need to cancel or reschedule a session, there is never a fee.I have
just graduated from Georgia Tech with a degree in nuclear and radiological engineering, I have been tut...
10 Subjects: including calculus, physics, algebra 1, ASVAB
...She can adapt to many learning styles. Not all students learn alike, and sometimes creativity is the key! She has taught over 150 students in the past four years!
22 Subjects: including calculus, reading, writing, physics
Related Fayetteville, GA Tutors
Fayetteville, GA Accounting Tutors
Fayetteville, GA ACT Tutors
Fayetteville, GA Algebra Tutors
Fayetteville, GA Algebra 2 Tutors
Fayetteville, GA Calculus Tutors
Fayetteville, GA Geometry Tutors
Fayetteville, GA Math Tutors
Fayetteville, GA Prealgebra Tutors
Fayetteville, GA Precalculus Tutors
Fayetteville, GA SAT Tutors
Fayetteville, GA SAT Math Tutors
Fayetteville, GA Science Tutors
Fayetteville, GA Statistics Tutors
Fayetteville, GA Trigonometry Tutors
Nearby Cities With calculus Tutor
Fairburn, GA calculus Tutors
Forest Park, GA calculus Tutors
Griffin, GA calculus Tutors
Hampton, GA calculus Tutors
Jonesboro, GA calculus Tutors
Lake City, GA calculus Tutors
Mcdonough calculus Tutors
Morrow, GA calculus Tutors
Peachtree City calculus Tutors
Riverdale, GA calculus Tutors
Stockbridge, GA calculus Tutors
Tyrone, GA calculus Tutors
Union City, GA calculus Tutors
Villa Rica, PR calculus Tutors
Woolsey, GA calculus Tutors | {"url":"http://www.purplemath.com/Fayetteville_GA_calculus_tutors.php","timestamp":"2014-04-20T11:27:39Z","content_type":null,"content_length":"24114","record_id":"<urn:uuid:843f7ed3-8297-4d26-a953-9c5ff16ba3ef>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00194-ip-10-147-4-33.ec2.internal.warc.gz"} |
Process Algebra Diary
I just learned that
Moshe Vardi
is the recipient of the
Blaise Pascal Medal in Computer Science for 2008
of the
European Academy of Sciences
. This is outstanding news for TCS as a whole and for the logic in computer science community in particular.
The motivation for the prize reads:
In recognition of his outstanding contributions in several areas of computer science connected by their use of logic as an underlying methodology. His work has had fundamental and lasting impact
on automatic verification, logic of knowledge, database theory, and finite-model theory.
Moshe also contributes to the development of activities in TCS in my small corner of the world by sitting on the advisory board of
Congrats to Moshe! Somehow I feel that there will be more and even more prestigious awards in store for him.
I just posted the instalment of the concurrency column for the October issue of the Bulletin of the EATCS. The article, entitled Formalizing operational semantic specifications in logic, has been
contributed by Dale Miller.
I strongly recommend this article not only to the standard readership of the concurrency column, but also to those readers who normally focus on logic in computer science and on programming languages
amongst others. By publishing it, I feel like I am killing at least three birds with a stone.
I am attending the session before lunch of TCS 2008, after which I will go to Milan central station to catch a train back to Pescara, my home town.
Today's invited talks were delivered by two Italian computer scientists, Antonio Restivo and Luca Cardelli.
Over the last 35 years or so, Restivo has been one of the prime movers in the Italian community of researchers working on automata and formal language theory. In his talk, he addressed the following
basic question:
What is the "right" way of extending the notion of recognizability from word languages to picture languages?
He described the approach he has developed with his co-workers, which is based on the notions of locality and projection. This notion turns out to coincide with the class of picture languages that
are definable using existential monadic second-order logic. This leads Restivo to believe that it is the "right" notion of recognizable 2D language.
The notion of recognizable 2D language offers some surprises, at least for a non-expert like me. For instance, the language consisting of the square pictures over alphabet {a,b} that contain an equal
number of a's and b's is recognizable, and so is the language of pictures of a's whose sides have prime length.
Luca Cardelli needs no introduction. In his talk, which was based on this paper, Luca presented a basic process-algebraic language that can be used to describe the ordinary differential equations
that one meets in (bio-)chemistry. He showe several interesting applications of this language and how it can be used to investigate the discrete vs. continuous modelling dichotomy. The talk was
excellent, as usual, and I am sure that the slides will appear very soon on this web page. It was great to see that a simple process calculus based on a stochastic version of Milner's CCS can be used
to specify ODEs in a compositional way. For TCS people like us, the automata described by terms in the language "explain" the ODEs and why they take they form they take. Luca also showed us that his
compilation of terms into ODEs produces exactly the known ODEs for some classic examples, such as the predator-prey one.
Yesterday, Tim Roughgarden gave a very interesting and beautifully delivered invited talk entitled Algorithmic Game Theory: Some Greatest Hits and Future Directions. You can read the paper here.
I am in Milan for TCS 2008, the bi-annual conference organized by IFIP TC1 in conjunction with the World Computer Congress (WCC). TCS 2008 aims at covering both volume A and volume B TCS, and this
year features as many as seven 45-minute invited talks over three days. (IMHO, this is an exceptional number of invited talks for such a short conference.)
Being part of an event like the WCC, which features many events for IT professionals, makes TCS a very expensive TCS conference to attend, and the atmosphere is somewhat different from that of the
typical conference TCS folks tend to attend. The early registration fee for an IFIP member was 525 euros (the late one is 680 euros for IFIP members and 800 euros for non-members), it does not
include lunches and at the coffee breaks we get just coffee! Not surprisingly attendance at the conference is small and, despite the number of CS departments in Milan and surrounding areas, there are
preciously few locals taking part in the event. I guess that there is a definite lesson to be learnt here.
Here I will limit myself to discussing some of the invited talks at the conference. TCS 2008 was kicked off by a presentation delivered by Grzegorz Rozenberg entitled Natural Computing. In his talk,
he discussed models of computation inspired by nature and how they differ from classical computational models we use in computer science. I have not found his presentation on line, but you can read
Rozenberg's views on the subject of natural computing vis-a-vis classical computer science in his acceptance speech for a honorary degree he received from the University of Bologna as well as in the
many (and I really mean "many") technical papers he has written on the topic.
Rozenberg's invited talk was followed by a truly remarkable presentation by Javier Esparza. Javier was one of our invited speakers at ICALP 2008 and I already sang the praises for his presentation in
Reykjavík. I really wish that more students had been there to see how to deliver an excellent talk despite a lack of slides for the first ten minutes or so because of technical problems. This was an
example of how very good story-telling skills can easily overcome technological failures (at least at the beginning of a technical talk). After all, a conference speaker or a teacher are not so
different from ancient story tellers, minstrels or gurus.
Javier's talk reported on results that he has obtained with his students on the solution of monotonic polynomial equations, a subject that one would expect to have been completely settled by
mathematicians a long time ago, and that instead has been the source of interesting recent developments in the computer-science literature. The key point here is that one can obtain very strong
results on methods for solving polynomial fixed-point equations if one restricts oneself to monotonic equations. I let you browse through Javier's slides since nothing I could write here would do
justice to the many exciting results he has achieved with his students. The slide also list some interesting open problems. In particular, consider the problem
MSPE-DECISION: Given an MSPE X = f (X) with rational coefficients and k rational, decide whether the first component of the least solution of that system of monotonic polynomial equations is at most
The above problem is in PSPACE and unlikely to be in P. It might be solvable using a polynomial number of arithmetic operations. According to Javier, a proof of this fact would be a sensational
result. Some of you might like to sharpen their pencils and try to solve this problem.
I was again pleased to be in the room to listen to yet another inspiring talk by Javier, who is rapidly becoming one of the favourite invited speakers at European TCS conferences.
I will try to report on the other invited talks I heard at some point soon.
On the night between August 19 and August 20, Italian TCS prematurely lost Stefano Varricchio (University of Rome "Tor Vergata") while he was on holiday with his family. He was 48, or so I am told.
Stefano's research was in the classic areas combinatorics on words, automata theory and formal languages. I am not qualified to offer an assessment of his research contributions, and I hope that some
of my readers who work on the aforementioned topics will post comments on Varricchio's work. I met him once a few years ago during a visit to the University of L'Aquila, where he was a full professor
at the time. I recall that, after my presentation based on this paper, he kindly pointed out the relevance of a famous theorem by Fine and Wilf for some of the results I presented.
Here is the theorem.
Theorem (Fine and Wilf, 1965) If a word x has periods p and q, and has length at least p + q − gcd(p, q), then x has also a period gcd(p, q).
A relative of mine and his wife, who followed Stefano's compilers course in L'Aquila, told me that he was one of the best teachers they had during their studies.
To quote a poem by Giuseppe Ungaretti,
Si sta come, d'autunno, sugli alberi, le foglie.
G. Ungaretti - Soldati Bosco di Courton luglio 1918 | {"url":"http://processalgebra.blogspot.com/2008_09_01_archive.html","timestamp":"2014-04-18T15:46:25Z","content_type":null,"content_length":"118678","record_id":"<urn:uuid:8630b320-66b2-449f-875e-058f0c4544d5>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about MCMC on Xi'an's Og
“At equilibrium, we thus should not expect gains of several orders of magnitude.”
As was signaled to me several times during the MCqMC conference in Leuven, Rémi Bardenet, Arnaud Doucet and Chris Holmes (all from Oxford) just wrote a short paper for the proceedings of ICML on a
way to speed up Metropolis-Hastings by reducing the number of terms one computes in the likelihood ratio involved in the acceptance probability, i.e.
The observations appearing in this likelihood ratio are a random subsample from the original sample. Even though this leads to an unbiased estimator of the true log-likelihood sum, this approach is
not justified on a pseudo-marginal basis à la Andrieu-Roberts (2009). (Writing this in the train back to Paris, I am not convinced this approach is in fact applicable to this proposal as the
likelihood itself is not estimated in an unbiased manner…)
In the paper, the quality of the approximation is evaluated by Hoeffding’s like inequalities, which serves as the basis for a stopping rule on the number of terms eventually evaluated in the random
subsample. In fine, the method uses a sequential procedure to determine if enough terms are used to take the decision and the probability to take the same decision as with the whole sample is bounded
from below. The sequential nature of the algorithm requires to either recompute the vector of likelihood terms for the previous value of the parameter or to store all of them for deriving the partial
ratios. While the authors adress the issue of self-evaluating whether or not this complication is worth the effort, I wonder (from my train seat) why they focus so much on recovering the same
decision as with the complete likelihood ratio and the same uniform. It would suffice to get the same distribution for the decision (an alternative that is easier to propose than to create of
course). I also (idly) wonder if a Gibbs version would be manageable, i.e. by changing only some terms in the likelihood ratio at each iteration, in which case the method could be exact… (I found the
above quote quite relevant as, in an alternative technique we are constructing with Marco Banterle, the speedup is particularly visible in the warmup stage.) Hence another direction in this recent
flow of papers attempting to speed up MCMC methods against the incoming tsunami of “Big Data” problems. | {"url":"https://xianblog.wordpress.com/tag/mcmc/","timestamp":"2014-04-19T17:03:33Z","content_type":null,"content_length":"87520","record_id":"<urn:uuid:b89ee2c2-3949-4135-a1c3-06ea88f88fc6>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
doritos bag problem
for our instructable we are making the doritos bag smaller so that we use less plastic
Step 2: Measure the bag for the volume
without popping the 1st bag form the doritos bag into a rectangular prism get the ruler and measure the bag sides (height, length and width)
Step 3: Find the volume
to find the volume you have to take the measurements and plug them into this equations (length*width*height)
Step 4: Measure for the surface area
to measure the surface area you have to open the first bag and dump the doritos out now you open the bottom it should now kinda look like a tube and now cut on one of the long sides so when you
flatten it out it is a rectangle measure the sides with the ruler
Step 5: Find the surface area
to find the surface area use the measurements and plug them into this equation (length*height)
Step 6: Making the smaller bag
get the second bag and cut the bag short ways so that there is about a centimeter now you have to get the stapeler and staple the bag a few times so it is closed
Step 8: The end
you can now compare the diffrence of the measurements of the origanal bag and the small bag
Now cook some hamburger (w/ taco seasoning) and add it to the half pouch! Then add diced tomatoes, dices lettuce, shredded cheese, sour cream, and maybe a bit of salsa!
And you'll have . . .
A "Walking Taco!"
Frito-Lay thinks different...
Anyway, good try.
Ah, but this gives an example on how theoretical is not practical. Nacho chips are manufactured differently than Pringles chips which are stacked neatly into a can or smaller space. See if you can
scoop up a measured amount of nacho chips and fill up your small container without breaking any. Then try to do that a hundred times a minute to get up to production speed. Fun to experiment and try.
It's the same with cereal*. There's a reason packs say "contents may settle".
[(There's a similar project for Cheerios - I'm guessing they are tasks set by the same teacher.)] | {"url":"http://www.instructables.com/id/doritos-bag-problem/","timestamp":"2014-04-19T10:26:13Z","content_type":null,"content_length":"142507","record_id":"<urn:uuid:1928668a-f4c1-4f08-a444-fa4ec3455a8e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00195-ip-10-147-4-33.ec2.internal.warc.gz"} |
ChessPub Forum - C00-C19:How to Beat the French Defence: A new book
Welcome, Guest. Please Login or Register
Latest Updates: Discussion forum for ChessPublishing.com, the opening theory site
ChessPub Forum › Chess Publishing Openings › French › C00-C19:How to Beat the French Defence: A new book
The French Fan Re: How to Beat the French Defence: A new book
Reply #121 - 02/27/09 at 07:57:24
Junior Member kalle99 wrote
Offline on 12/30/07 at 00:15:47:
A new book is planned to be released July 1, 2008.
I Love
ChessPublishing! A very interesting book. There is a lot against the french now. Thats great !! And the book has the Tarrasch as its main weapon.
Posts: 50 I searched this forum to see any possible earlier post on this book but I didnt find it. Hopfully I have not missed anything.
Russia http://www.amazon.com/How-Beat-French-Defence-essential/dp/1857445678/ref=pd_bxg...
Editorial Reviews
Book Description
The French Defence is considered to be one of Black s most reliable answers to 1 e4. Indeed, many players have become frustrated in their attempts to prove an advantage and make
headway against Black s ultra-solid formation.
In How to Beat the French Defence, Andreas Tzermiadianos meets this difficult challenge head on. He advocates his favourite weapon against the French the Tarrasch Variation and
reveals an abundance of opening ideas and novelties, providing the reader with a complete repertoire which is aimed at posing Black serious problems. Read this book and fight the
French Defence with renewed confidence and vitality.
These "Beat..." this and that books come up in great numbers but I never believe them. It's only their authors' personal message. As to the French, it is rather unbeatable than the
« Last Edit: 02/27/09 at 07:58:56 by The French Fan »
Back to top
buffos Re: How to Beat the French Defence: A new book
Reply #120 - 02/21/09 at 19:04:27
YaBB Newbies ANDREW BRETT wrote
Offline on 01/19/09 at 13:23:58:
I'd be interested to see what he gives against the Guimard given the run of success for black - maybe he has something against that as well !
I love
ChessPublishing.com! His main line is based on a Kotronias-Halkias Game. 90% of the line is sound, but the last part is too superficial (logical though since if a book is going to analyze every line
is great detail 10 moves after the Novelty, then the book will never end.
Posts: 39
Thessaloniki, Greece Basically this line (starting with Nd4, gives white a slight, but constant edge. Nothing in the vincinity of winning as in the book, but white has a comfortable edge. (reaching
Gender: this result after searching the postion with the IDEA tool (found in Aquarium) for more than a week and after finding a lot of little details that both sides need to know.
Basically for over the board play, white has easier play and if i have to choose sides, i take white gladly)
« Last Edit: 02/21/09 at 19:06:38 by buffos »
Back to top
Paddy Re: How to Beat the French Defence: A new book
Reply #119 - 02/05/09 at 12:10:18
God Member Ametanoitos wrote
Offline on 02/05/09 at 01:24:32:
I just noticed that in the critical 3...Be7 variation Watson reccomends 12...f6! (instead of 12...Ba6 13.Qe3 f6 as played in the game Kristjansson_ Caruana, 2008). The differenve can be
The truth important if 13.exf6 Bxf6 14.Nb3 trying to play like Tzermiadianos reccomends then 14...Bb7! is fine because after something like 15.Nxc5 Nxc5 16.Ne5 d4! is great for Black. I didn't fing
will out! to many games played with this move order but i think that this is an improvement for Black.
Posts: 810 Yes, it appears that both 12...f6 (Watson 3) and 12...Ba6 13 Qe3 f6 (McDonald 2008) are playable.
Gender: It is worth noting that the move order in this important game was different:
[Event "Frank K Berry ch-USA"]
[Site "Tulsa USA"]
[Date "2008.05.21"]
[Round "9"]
[White "Kudrin, S."]
[Black "Perelshteyn, E."]
[Result "1/2-1/2"]
[ECO "C03"]
[WhiteElo "2549"]
[BlackElo "2552"]
[PlyCount "103"]
[EventDate "2008.05.13"]
[Source "ChessPublishing"]
1. e4 e6 2. d4 d5 3. Nd2 Be7 4. Bd3 c5 5. dxc5 Nf6 6. Qe2 O-O 7. Ngf3 a5 8. O-O Na6 9. e5 Nd7 10. c3 Naxc5 11. Bc2 b6 12. Qe3 Ba6 13. Re1 f6 14. exf6 Bxf6 15. Nb3 e5 16. Nxc5 bxc5 17. Ng5
Bxg5 18. Qxg5 Qe8 19. Be3 Qf7 20. Qh4 h6 21. Rad1 Rab8 22. b3 Rb6 23. f3 Rf6 24. c4 d4 25. Qe4 g5 26. Bd2 Re6 27. Bxa5 Bc8 28. Bd2 Nf6 29. Qg6+ Qxg6 30. Bxg6 Kg7 31. Bb1 Bb7 32. Re2 Rfe8
33. Rde1 e4 34.
fxe4 Nxe4 35. a4 Kf7 36. Bxe4 Rxe4 37. Rxe4 Rxe4 38. Rxe4 Bxe4 39. a5 Bb7 40. Kf2 Ke6 41. g4 Ke5 42. Kg3 Ke4 43. h4 Kd3 44. hxg5 hxg5 45. Bxg5 Kc3 46. Kf4 Kxb3 47. Ke5 Kxc4 48. Bd2 Kb5
49. g5 Bf3 50. g6 Bh5 51. g7 Bf7 52. Be1 1/2-1/2
- Kudrin (a great 3 Nd2 expert) played 12 Qe3 instead of 12 Re1. In either case I think Black is OK though. In the game above Black was under some pressure but it is not too hard to find
Back to
Ametanoitos Re: How to Beat the French Defence: A new book
Reply #118 - 02/05/09 at 01:24:32
God Member
I just noticed that in the critical 3...Be7 variation Watson reccomends 12...f6! (instead of 12...Ba6 13.Qe3 f6 as played in the game Kristjansson_ Caruana, 2008). The differenve can be
Offline important if 13.exf6 Bxf6 14.Nb3 trying to play like Tzermiadianos reccomends then 14...Bb7! is fine because after something like 15.Nxc5 Nxc5 16.Ne5 d4! is great for Black. I didn't
fing to many games played with this move order but i think that this is an improvement for Black.
The road to
success is
Posts: 1351
Back to top
Ametanoitos Re: How to Beat the French Defence: A new book
Reply #117 - 01/21/09 at 19:19:48
God Member
I tried to find the book in a local bookshop but it was not there....I don't mind to find out what he gives against 3...Nc6 (Kotronias has played some good games with White here, and i
Offline know that Tzermi and Kotronias know very well the analysis of each other) but i wanted to see what he gives against the 3...dxe4 4.Nxe4 Bd7 when after 5.c4 (i know he recomends this
line) my coach adviced me to play 5...Nf6!.
The road to Also, i'd like to hear if he has any idea against Watson's "equalising" line 3...Be7 4.Bd3 Nc6! and of course against the line Banikas played against Kotronias as i posted above. This
success is was a very crucial game that would decide the National Champion and Banikas who hasn't French as his main weapon against 1.e4 played with confidence this line and won easily! The thing
under is that Banikas knows that Kotronias is an expert from the White side and picked this line that was considered bad for Black untill this game! I suppose that he know the line given in
construction this book and had prepared something. Unfortunately i just ordered the book and i have to wait some days before my questions are answered....Can anybody help me?
Posts: 1351
Back to top
ANDREW BRETT Re: How to Beat the French Defence: A new book
Reply #116 - 01/19/09 at 13:23:58
God Member
I'd be interested to see what he gives against the Guimard given the run of success for black - maybe he has something against that as well !
I Love ChessPublishing!
Posts: 621
« Last Edit: 01/19/09 at 13:24:22 by ANDREW BRETT »
Back to top
Ametanoitos Re: How to Beat the French Defence: A new book
Reply #115 - 01/18/09 at 17:54:28
God Member
But what about the Game Kotronias - Banikas Rhodes 2008?
1.e4 e6 2.d4 d5 3.Nd2 Be7 4.Bd3 c5 5.dxc5 Nf6 6.Qe2 Nc6 7.Ngf3 Bxc5 8.0-0 Qc7 9.a3 0-0 10.e5 Nd7 11.Nb3 (here Kotronias in his Chess Informant notes gives 11.b4! but i suspect that both
players had an improvement in mind here) Bb6 12.Re1 f6 13.exf6 Nxf6 14.Be3 e5 15.Bxb6 Qxb6 16.Bb5 e4 17.Nfd2 Ng4 18.Rf1 Nce5 19.h3 Nh6 20.c4 a6 21.Ba4 Nf3+ 22.Nxf3 exf3 23.Qd3 dxc4
The road to 24.Qxc4+ Kh8 25.Qc5 Qf6 26.Nd2 Bxh3 27.Ne4 Qh4 0-1
success is
under What does Tzermi gives here? This game was played after the book was out!
Posts: 1351
Back to top
Kramnikaze Re: How to Beat the French Defence: A new book
Reply #114 - 01/17/09 at 12:38:49
YaBB Newbies
This book "How to beat the French defence"is a very nice,well analysed book imo.
Offline Since i have Moskalenko's book, i compard them in the line 3.Nd2 Be7.
I Love Tzermiadianos recommends 4.Bd3!? and so after: 4..c5! 5.dc5 Nf6 6.Qe2 we have a first road to take.
Since Kristjansson-Caruana, Reykjavik 2008, 0-1 is such a nice game i followed that road in the 2 books.
Posts: 26
That game went: 6...0-0 7.Ngf3 a5 8.0-0 Na6 9.e5 Nd7.
Tzermiadianos now recommends 10.c3!? Nac5 11.Bc2 b6!? 12.Re1 Ba6 13.Qe3 f6!? and here Tzermiadianos goes on with 14.ef6! Bf6 15. Nb3!? e5 only briefly saying(14. b4!?)
14.b4!? is the move that Moskalenko says,not mentioning 14.ef6.
14.b4!? is the move played in Kristjansson-Caruana,very instructive how to play against b4.
I think 14.ef6 is the way forward for white in this line (10.c3!?) but i want to improve on 15..e5!?
The game Gara-Rudolf from 2008, 1/2(between two 2300 players) followed:
15...Qe8!? 16.Nc5 Nc5 17.Bd2 Qf7 18.Nd4 and only now e5.
But after the move 19.Nb3 black should have played 19...Bh4! to gain advantage( f.e. 20.g3 Be7 21.Qe5 Qf2 22.Kh1 Qf3 23.Kg1 Rae8)
White can improve and sac his queen for the queenside pawns to storm forward: 19.Qh3 g6 20.Nb3 Bg7 21.Be3 Ne6 22.Qh4 a4!? 23.Qa4 Bd3 24.Qa8 Ra8 25.Bd3 Qb7 26.a4 Nc5!? 27.Nc5! bc5
28.Bc5 but i think this looks promising for white.
Black can improve however with 22...Nf4!?(instead of 22..a4!?)
In this game with 15..Qe8!? i think white can improve with 17.Ng5!? after 17...Bg5 18.Qg5 Bd3 19.Bd1 there is alot of play for both sides.
Back to top
TopNotch Re: How to Beat the French Defence: A new book
Reply #113 - 01/10/09 at 03:07:34
God Member
Overall its a very good book, but still it has its grey areas, for instance he fails to prove a clear advantage against Watson's old
I only look 1 move ahead, Line:
but its always the best
1.e4 e6 2.d4 d5 3.Nd2 Nf6 4.e5 Nfd7 5.Bd3 c5 6.c3 Nc6 7.Ne2 cxd4 8.cxd4 f6 9.exf6 Nxf6 10.0-0 Bd6 11.Nf3 Qb6
Posts: 1810
Gender: . Of course there are more dangerous ways to meet this 11...Qb6 line than what Tzermi gives, so Black players dare not rest easy.
« Last Edit: 01/10/09 at 03:09:34 by TopNotch »
The man who tries to do something and fails is infinitely better than he who tries to do nothing and succeeds - Lloyd Jones
Back to top
Stigma Re: How to Beat the French Defence: A new book
Reply #112 - 01/07/09 at 14:12:36
God Member
My first impression of this book is very favorable. There's so much to study I will probably keep my old, simpler Tarrasch repertoire and gradually add lines from this great book.
Offline IMHO the kind of introduction Tzermiadianos gives with typical middlegames and endgames should be mandatory in opening books!
I thought I found a hole in the book: After
I Love
ChessPublishing! 3...c5 4.Ngf3
Posts: 2051 black can play
(as Vaganian, M.Gurevich and many other French experts have done). As far as I can see this move order is not mentioned anywhere, and it worried me because against 3...a6
Tzermiadianos wants to play 4.Bd3, not 4.Ngf3!
But in the Psakhis 3.Nd2 book I found:
3...c5 4.Ngf3 a6 5.dxc5 Bxc5 6.Bd3
and we have reached the main line of 3...a6 4.Bd3, see diagram on p. 60 of Tzermiadianos. I think* the Greek author intended this transposition, but forgot to mention it (in chapter
8 would be natural). This is a very small problem of course, but it's good to be aware of it.
Stating an opinion rather than pretending to know a fact!
« Last Edit: 01/07/09 at 14:22:42 by Stigma »
Improvement begins at the edge of your comfort zone. -Jonathan Rowson
Back to top
buffos Re: How to Beat the French Defence: A new book
Reply #111 - 12/12/08 at 11:56:43
YaBB Newbies
I finally found some time to take a deep look at some parts of the book. Here is what i think
1) There is a tremendous amount of work inside the book. It is not a database dump at all, and this feelling is all over the place.
2) To the simple question , are there improvements , mistakes, etc, the answer is simple, "is it possible for a book to be the final word in an opening?". There are plenty of
I love places where black can choose another root, make improvements etc, and ofcourse, there could be improvement in improvements... and so on.
ChessPublishing.com! 3) This is definatelly a book to stay in your bookself for years, and i would even go as far as comparing it with Watson books for French (for black this time) , which are great
resources even years after they were published. I am saying almost, because Watson is really great in introducing new ideas.
Posts: 39 4) Although i dont think high of Nd2 variation, not because its not good, but because the handling of some of its positions require really great technique from white's part, so
Thessaloniki, Greece i would not recomment it for anyone below master level, this book can help intermediate and masters
I have read many indifferent books on french, and this is NOT on this list. A great book.
My rating 9/10
Back to top
Bibs Re: How to Beat the French Defence: A new book
Reply #110 - 12/05/08 at 14:39:30
God Member
Just a quickie.
Offline Have played the French for a good number of years at a reasonablish level, and have a good knowledge of most Tarrasch lines, having jumped around a bit.
This book impressed me. Went further than I knew in all lines I know well and checked, including some of the lesser known ones.
Posts: 1628 Havent looked for a while, but iirc, didnt notice any obvious omissions.
He has clearly put a lot of work into this. Well worth getting. Credit to the fella - a genuinely good piece of work.
« Last Edit: 12/05/08 at 15:20:26 by Bibs »
Back to top
Lou_Cyber Re: How to Beat the French Defence: A new book
Reply #109 - 12/04/08 at 09:51:08
Full Member
I am interested in this book, so I followed this thread, which I have missed so far:
5 pages of discussions about the general advantages/disadvantages of 3.Nc3, Nd2 and e5.....
"I didnīt 2 pages of where and when the book will come out and can be bought.....
understand that.
It must be true." 1 page of questions regarding reference to the Khalifman book.
Posts: 232 Almost zero about the value of the book, the lines etc. if I leave out the valuable information by tracke.
Gender: Anyway, today I found a generally favorable review here:
What the books recommendation in the 3...Nf6 line, is it the universal system or some Ne2 stuff? Right now I am happy with the universal, but my sources are a bit dated (The Nd2
volume of Psakhis, which doesnīt cover the Rubinstein), so I am interested.
Likewise I would be interested about the lines covered in chapter 10 and 11, so far I donīt see anything really convincing for white.
Which lines are given against the Rubinstein?
How do you like the book in general?
If you try, you may lose. If you donīt try, you have lost.
Back to top
TopNotch Re: How to Beat the French Defence: A new book
Reply #108 - 11/08/08 at 17:45:36
God Member MilenPetrov wrote
Offline on 11/08/08 at 16:02:12:
Hi, I laso checked Khalifman's books and did not find where it is. If it is present in should be in Volume 6 and maybe somewhere between pages 46 and 60. Strange but I did not
I only look 1 move find any simmilar position (even to one mentioned on page 51 in How to Beat The French).
but its always the Regards
dom wrote
Posts: 1810
Gender: on 11/08/08 at 17:04:30:
Not in volume 6 (page 45 to 60 are about Rubinstein with a quick dxe4) and in volume 7...dealing all with Nc3 (and not Tarrasch Nd2).
I have not yet recorded 5.Ne5 in the Tarrasch 3..b6 system, while reading books,except Psakhis "French Defence - 3.Nd2" first game of the book, Tischbierek-Hertneck,Altenkirchen
2001 where games deviates after 5...Bb7.Maybe 4...Be7 is better than 4...Nf6
That was my experience as well. Perhaps editors on the project John Emms or Richard Palliser could clarify matters.
« Last Edit: 11/08/08 at 17:47:08 by TopNotch »
The man who tries to do something and fails is infinitely better than he who tries to do nothing and succeeds - Lloyd Jones
Back to top
dom Re: How to Beat the French Defence: A new book
Reply #107 - 11/08/08 at 17:04:30
Moderator Not in volume 6 (page 45 to 60 are about Rubinstein with a quick dxe4) and in volume 7...dealing all with Nc3 (and not Tarrasch Nd2).
I have not yet recorded 5.Ne5 in the Tarrasch 3..b6 system, while reading books,except Psakhis "French Defence - 3.Nd2" first game of the book, Tischbierek-Hertneck,Altenkirchen 2001
Offline where games deviates after 5...Bb7.
Maybe 4...Be7 is better than 4...Nf6
Posts: 868
« Last Edit: 11/08/08 at 17:33:39 by dom »
Back to top | {"url":"http://www.chesspub.com/cgi-bin/yabb2/YaBB.pl?num=1198973747","timestamp":"2014-04-17T18:24:03Z","content_type":null,"content_length":"88835","record_id":"<urn:uuid:43cbc9c8-1732-46ff-90f3-e018ff1d4fb4>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00596-ip-10-147-4-33.ec2.internal.warc.gz"} |
ACT ONE - An Algebraic Specification Language with two Levels of Semantics”, ADT
Results 1 - 10 of 18
, 1989
"... We present a higher-order calculus ECC which can be seen as an extension of the calculus of constructions [CH88] by adding strong sum types and a fully cumulative type hierarchy. ECC turns out
to be rather expressive so that mathematical theories can be abstractly described and abstract mathematics ..."
Cited by 84 (4 self)
Add to MetaCart
We present a higher-order calculus ECC which can be seen as an extension of the calculus of constructions [CH88] by adding strong sum types and a fully cumulative type hierarchy. ECC turns out to be
rather expressive so that mathematical theories can be abstractly described and abstract mathematics may be adequately formalized. It is shown that ECC is strongly normalizing and has other nice
proof-theoretic properties. An !\GammaSet (realizability) model is described to show how the essential properties of the calculus can be captured set-theoretically.
- PROC. 3RD INT. CONF. DESIGN AND IMPLEMENTATION OF SYMBOLIC COMPUTATION SYSTEMS (DISCO'93) A. MIOLA (ED.), SPRINGER, BERLIN, LNCS 722, PP.17-32 (1993) , 1993
"... The specification language TROLL light is intended to be used for conceptual modeling of information systems. It is designed to describe the Universe of Discourse (UoD) as a system of
concurrently existing and interacting objects, i.e., an object community. The first part of the present paper introd ..."
Cited by 34 (22 self)
Add to MetaCart
The specification language TROLL light is intended to be used for conceptual modeling of information systems. It is designed to describe the Universe of Discourse (UoD) as a system of concurrently
existing and interacting objects, i.e., an object community. The first part of the present paper introduces the various language concepts offered by TROLL light . TROLL light objects have observable
properties modeled by attributes, and the behavior of objects is described by events. Possible object observations may be restricted by constraints, whereas event occurrences may be restricted to
specified life-cycles. TROLL light objects are organized in an object hierarchy established by sub-object relationships. Communication among objects is supported by event calling. The second part of
our paper outlines a simplified computational model for TROLL light . After introducing signatures for collections of object descriptions (or templates as they are called in TROLL light) we explain
how single ...
- In Proc. AMAST 2000 , 2000
"... this paper we present Casl-Chart a formal visual specification language for reactive systems obtained by combining an already existing language for reactive systems, precisely the statecharts as
supported by Statemate ([6, 7]), with an already existing language for the specification of data structur ..."
Cited by 19 (3 self)
Add to MetaCart
this paper we present Casl-Chart a formal visual specification language for reactive systems obtained by combining an already existing language for reactive systems, precisely the statecharts as
supported by Statemate ([6, 7]), with an already existing language for the specification of data structures, precisely the algebraic specification language Casl ([12, 17])
- Algebraic Methodology and Software Technology (AMAST 2000), volume 1816 of Lecture Notes in Computer Science (LNCS , 2000
"... . We are interested in the composition of languages, in particular a data description language and a paradigm-specific language, from a pragmatic point of view. Roughly speaking our goal is the
description of languages in a component-based style, focussing on the data definition component. The p ..."
Cited by 14 (2 self)
Add to MetaCart
. We are interested in the composition of languages, in particular a data description language and a paradigm-specific language, from a pragmatic point of view. Roughly speaking our goal is the
description of languages in a component-based style, focussing on the data definition component. The proposed approach is to substitute the constructs dealing with data from the "data" language for
the constructs describing data that are not specific to the particular paradigm of the "paradigm-specific" language in a way that syntax, semantics as well as methodologies of the two components are
preserved. We illustrate our proposal on a toy example: using the algebraic specification language Casl, as data language, and a "pre-post" condition logic `a la Hoare, as the paradigm specific one.
A more interesting application of our technique is fully worked out in [16] and the first step towards an application to UML, that is an analysis of UML from the data viewpoint, following the guid...
- Proceedings Combinatorics, Computation and Logic , 1999
"... : This paper is an introduction to recent research on hidden algebra and its application to software engineering; it is intended to be informal and friendly, but still precise. We first review
classical algebraic specification for traditional "Platonic" abstract data types like integers, vectors, ma ..."
Cited by 10 (0 self)
Add to MetaCart
: This paper is an introduction to recent research on hidden algebra and its application to software engineering; it is intended to be informal and friendly, but still precise. We first review
classical algebraic specification for traditional "Platonic" abstract data types like integers, vectors, matrices, and lists. Software engineering also needs changeable "abstract machines," recently
called "objects," that can communicate concurrently with other objects through visible "attributes" and state-changing "methods." Hidden algebra is a new development in algebraic semantics designed
to handle such systems. Equational theories are used in both cases, but the notion of satisfaction for hidden algebra is behavioral, in the sense that equations need only appear to be true under all
possible experiments; this extra flexibility is needed to accommodate the clever implementations that software engineers often use to conserve space and/or time. The most important results in hidden
algebra are ...
- In Proceedings of the first B conference, pages 47–62, 3 rue du Maréchal Joffre, BP 34103, 44041 Nantes Cedex 1
"... . This paper is the result of a reflexion coming from the usage and learning of the language B. It tries to better explain and understand the assembly primitives includes and uses of the
language. It presents a high-level notion of components and develops a "component algebra". This algebra is speci ..."
Cited by 7 (2 self)
Add to MetaCart
. This paper is the result of a reflexion coming from the usage and learning of the language B. It tries to better explain and understand the assembly primitives includes and uses of the language. It
presents a high-level notion of components and develops a "component algebra". This algebra is specialized to deal with the B-components. The B assembly primitives are re-expressed in this basic
formalism. Some problems about independence of concepts in the B methodology are pointed out and are discussed. 1 Introduction Specifications, like programs, must be modular because very large formal
texts are not understandable for a human being. So, the study of modules and modularization is one of the issues in software engineering. The three main objectives of modularization [BHK90] are :
information hiding, compositionality of module operations and reusability of modules. If the specification methodology encompasses the need for formal proofs to ensure consistency, as it is the case
in the B ...
"... Summary. A semantics for the Clear specification language is given. The language of set theory is employed to present constructions corresponding to Clear's specification-combining operations,
which are then used as the basis for a denotational semantics. This is in contrast to Burstall and Goguen's ..."
Cited by 7 (1 self)
Add to MetaCart
Summary. A semantics for the Clear specification language is given. The language of set theory is employed to present constructions corresponding to Clear's specification-combining operations, which
are then used as the basis for a denotational semantics. This is in contrast to Burstall and Goguen's 1980 semantics which described the meanings of these operations
- Recent trends in algebraic development techniques: 12th international workshop, WADT’97 , 1998
"... . In the early phases of software development it seems profitable to freely mix semi-formal and formal design techniques. Formal techniques have their strength in their ability to rigorously
define desired software qualities like functionality, whereas semi-formal methods are usually said to be easi ..."
Cited by 7 (3 self)
Add to MetaCart
. In the early phases of software development it seems profitable to freely mix semi-formal and formal design techniques. Formal techniques have their strength in their ability to rigorously define
desired software qualities like functionality, whereas semi-formal methods are usually said to be easier to understand and to be more human-nature oriented. We propose a new approach in order to
combine these two areas by exploiting how constructs of the formal specification language TROLL light are related to the graphical elements of the UML approach. 1 Introduction As a first step and in
order to explain the roles of and the relationships between semi-formal and formal specifications, let us start with a very simple software process model. We assume software development starts
somehow with stating the requirements of the software system. These requirements are then formalized in the specification phase and are made more precise with respect to implementation in the design
phase. In the i...
- PROC. 3RD WORKSHOP ON THEORY AND APPLICATIONS OF ABSTRACT DATA TYPES , 1985
"... ..."
- In People and Ideas in Theoretical Computer Science , 1999
"... Data Types and Algebraic Semantics The history of programming languages, and to a large extent of software engineering as a whole, can be seen as a succession of ever more powerful abstraction
mechanisms. The first stored program computers were programmed in binary, which soon gave way to assembly l ..."
Cited by 3 (0 self)
Add to MetaCart
Data Types and Algebraic Semantics The history of programming languages, and to a large extent of software engineering as a whole, can be seen as a succession of ever more powerful abstraction
mechanisms. The first stored program computers were programmed in binary, which soon gave way to assembly languages that allowed symbolic codes for operations and addresses. fortran began the spread
of "high level" programming languages, though at the time it was strongly opposed by many assembly programmers; important features that developed later include blocks, recursive procedures, flexible
types, classes, inheritance, modules, and genericity. Without going into the philosophical problems raised by abstraction (which in view of the discussion of realism in Section 4 may be
considerable), it seems clear that the mathematics used to describe programming concepts should in general get more abstract as the programming concepts get more abstract. Nevertheless, there has
been great resistance to u... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1431175","timestamp":"2014-04-21T03:31:38Z","content_type":null,"content_length":"38186","record_id":"<urn:uuid:7bf3c4ee-33b1-471d-8411-5dd03f74ed43>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00391-ip-10-147-4-33.ec2.internal.warc.gz"} |
pathway problem
You're on the right track. It looks like question e) is where you fell apart. The total number of possible paths will be: (No. Paths from A to C) * (No. Paths from C to D) * (No. Paths from D to B)
That being said, what happens if Jackson must pass through point P? The number of possible paths is reduced. Which term of the equation above do you think that changes? Then, having calculated the
total possible number of paths, as well as the total number of paths that could pass through P, I'm sure you'll know how to calculate the probability of passing through Point P. | {"url":"http://mathhelpforum.com/statistics/189640-pathway-problem.html","timestamp":"2014-04-16T14:14:02Z","content_type":null,"content_length":"30108","record_id":"<urn:uuid:1fbf2617-7634-4a43-9547-4d9ee30169d4>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00307-ip-10-147-4-33.ec2.internal.warc.gz"} |
Introduction to Solving Nonlinear Equations
There are some close connections between finding a local minimum and solving a set of nonlinear equations. Given a set of equations in unknowns, seeking a solution is equivalent to minimizing the sum
of squares when the residual is zero at the minimum, so there is a particularly close connection to the Gauss-Newton methods. In fact, the Gauss-Newton step for local minimization and the Newton step
for nonlinear equations are exactly the same. Also, for a smooth function, Newton's method for local minimization is the same as Newton's method for the nonlinear equations . Not surprisingly, many
aspects of the algorithms are similar; however, there are also important differences.
Another thing in common with minimization algorithms is the need for some kind of step control. Typically, step control is based on the same methods as minimization except that it is applied to a
merit function, usually the smooth 2-norm squared, .
"Newton" use the exact Jacobian or a finite difference approximation to solve for the step based on a locally linear model
"Secant" work without derivatives by constructing a secant approximation to the Jacobian using past steps; requires two starting conditions in each dimension
"Brent" method in one dimension that maintains bracketing of roots; requires two starting conditions that bracket a root
Basic method choices for FindRoot. | {"url":"http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationIntroductionNonlinearEquations.html","timestamp":"2014-04-19T06:59:16Z","content_type":null,"content_length":"29828","record_id":"<urn:uuid:b5b92140-6254-48ce-94b2-de0e0fd5413a>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: March 1992 [00085]
[Date Index] [Thread Index] [Author Index]
Re: Integrate 2 March 1992
• To: mathgroup at yoda.physics.unc.edu
• Subject: Re: Integrate 2 March 1992
• From: David Withoff <withoff>
• Date: Wed, 4 Mar 1992 11:50:44 -0600
> In[2]:= ans = (1 /(4 Pi^2 k^2)) \
> Integrate[(1/(u v)) \
> Exp[-((so + ss)/2) (u^2 + v^2) - u v phi], \
> {u,-Infinity,Infinity},{v,-Infinity,Infinity}]
> Infinity::indet:
> Indeterminate expression ComplexInfinity + ComplexInfinity encountered.
> Integrate::idiv: Integral does not converge.
> Out[2]= Indeterminate
> The integral does not diverge. It works out via a sequence of
> transformations that I would like to teach to Mathematica
> (change variable, differentiate, change variable, substitute,
> integrate differential equation) (*)
> (1/(2 Pi k^2)) ArcSin(phi/(so + ss)) + c
> where c is a constant of integration that happens to
> be 0.
> I know this is not an easy sort of integral to do.
> One of the reasons I make the effort to use a
> symbolic math package is so it will save me effort
> evaluating hard integrals. At least, Mathematica
> should say it can't do it, not report false information.
> This makes me wonder if it safe to try to each Mathematica
> the steps.
> Any clues/suggestions/workarounds ?
> Thanks,
> Purvis
> purvis at mulab.physiol.upenn.edu
I'm not sure I have any suggestions or workarounds -- maybe just clues.
I can describe what's going on, and it may or may not interfere with
teaching Mathematica the steps you describe. Also, just for fun, I thought
I'd try to summarize what I do when tracking down problems like this.
Definite integrals are normally evaluated (in Mathematica) using one of two
methods -- by evaluating the indefinite integral and taking limits at the
The first method has the well-known deficiency of ignoring branch cuts
and other singularities. Also, since it uses the Limit code, problems
in Limit will also show up in Integrate. The most common problem of this
type arises from the fact that Limit can't always handle essential
singularities. (Both problems are being worked on, and neither affects
the integral in the present example.)
Various rules are used to decide which method to try first. You can
force Mathematica to use the indefinite integration method by evaluating
the indefinite integral and taking limits at the endpoints yourself.
You can (at least in V2.0) force Mathematica to use the generalized
hypergeometric method by using Integrate`IntegrateG instead of Integrate.
(This is not documented, however, and may change in future versions.)
There is currently no special support in Mathematica for multiple integrals.
Multiple integrals are evaluated as iterated integrals.
Anyway, with that background, I took a look at the integral in the example
above, which is an iterated definite integral of the following function:
In[17]:= f = E^(-(phi*u*v) - ((so + ss)*(u^2 + v^2))/2)/(u*v)
-(phi u v) - ((so + ss) (u + v ))/2
Out[17]= -------------------------------------
u v
This integrand has a pole at v==0, so the corresponding integral has
limited convergence properties (it isn't, for example, uniformly
convergent). It probably has a useful principal value, however. One
method that sometimes works to get the principal value of an integral
with a singularity at, say, x==x0 is to use
Limit[eps -> 0, Integrate[f, {x, a, x0-eps}] +
Integrate[f, {x, x0+eps, b}] ]
The integral in the present example is not elementary and this method
doesn't help.
The first of the iterated integrals is sent to the package that implements
the generalized hypergeometric method, which probably doesn't like
the singularity:
In[20]:= Integrate`IntegrateG[f, {v, -Infinity, Infinity}]
Indeterminate expression ComplexInfinity + ComplexInfinity encountered.
Out[20]= Indeterminate
After this, the second of the iterated integrals is degenerate:
In[21]:= Integrate[Indeterminate, {u, -Infinity, Infinity}]
Integrate::idiv: Integral does not converge.
Out[21]= Indeterminate
The message is arguably a bit misleading, but at least we know where
it comes from.
Like I said -- a few clues, but no workarounds. The implied suggestions
(to provide special support for principal value integrals and multiple
integrals) are certainly worth pursuing.
Dave Withoff
withoff at wri.com | {"url":"http://forums.wolfram.com/mathgroup/archive/1992/Mar/msg00085.html","timestamp":"2014-04-18T05:52:00Z","content_type":null,"content_length":"38413","record_id":"<urn:uuid:f2633a0c-77b3-4c3d-b268-ad880b48daac>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00494-ip-10-147-4-33.ec2.internal.warc.gz"} |
ALEX Lesson Plan: Congruent and Similar Figures
Lesson Plan ID: 21195
Title: Congruent and Similar Figures
Overview/Annotation: This lesson provides a variety of hands-on activities for the students to understand the difference between congruent and similar figures. Students will use geoboards, pattern
blocks, and the Internet to explore the difference. The activities can be used to reach all learning types.
Content Standard(s): AED(5)
Visual 6. Describe works of art according to the style of various cultures, times, and places.
TC2(3-5) 8. Collect information from a variety of digital sources.
MA2013(3) 24. Understand that shapes in different categories (e.g., rhombuses, rectangles, and others) may share attributes (e.g., having four sides), and that the shared
attributes can define a larger category (e.g., quadrilaterals). Recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of
quadrilaterals that do not belong to any of these subcategories. [3-G1]
Primary Learning Students will be able to identify figures as being congruent or similar.
Additional Learning Students will explain the difference between congruent and similar figures. Students will construct congruent and similar figures.
Approximate Duration of Greater than 120 Minutes
the Lesson:
Materials and basketball, tennis ball, two oranges, grid paper, geoboards with rubberbands, pattern blocks, rulers, glue, scissors, lineless paper, scissors, crayons, markers, colored
Equipment: pencils, construction paper
Technology Resources Computers with Internet access
Background/Preparation: The teacher must have the classroom set up for three different math centers. Depending upon class size, it may be necessary to have six centers (two of each). Make sure
students know the proper way to use the geoboard.
Procedures/Activities: 1.)1. The teacher will begin by displaying a basketball and a tennis ball. Ask: "How are these two figures alike?" Possible answer: they are the same shape. "How are they
different?" Possible answer: they have different sizes. Explain to the students that the basketball and tennis ball are similar. Write the definition for similar on the board
and have students copy it. Definition: Similar figures have the same shape but different sizes. 2. Display two-same sized oranges. Ask: "How are these oranges the same?"
Possible answer: they are the same shape and size. Explain to the students that the oranges are congruent figures. Write the definition for congruent on the board and have
students copy it. Definition: Congruent figures have the same size and same shape. 3. referring back to the two oranges, have students explain why all congruent figures are
similar but not all similar figures are congruent. Possible answers: all congruent figures have the same shape, so they are similar. Similar figures are not necessarily the
same size, so they may or may not be congruent. 4. Explain the three different centers to the students. *Geoboard Center: Students will make a figure on the geoboard. The
students then switch with someone in the group to see if the figure can be duplicated (congruency). Have the students then make a similar figure. Students should use graph
paper to draw examples of figures which are congruent to each other. Next, they should draw figures which are similar to each other. *Pattern Block: Students will sort pattern
blocks which are congruent to each other and those which are similar to each other. Have one student in the group trace a shape on the lineless paper. Have the other students
try to replicate that shape on construction paper. To make sure the shapes are congruent, have the student cut out one of the shapes, place it over the other student's shape
to see if they are congruent, and glue it to the previous shape. Have them reverse roles until they have created five sets of congruent shapes. *Carol's Congruent
Concentration: Students will use the website http://www.beaconlearningcenter.com (Carol's Congruent Concentration) to further explore congruent and similar figures. After
completion of the website activities, have students design a flag that has congruent figures.
Carol's Congruent Concentration
Carol's Congruent Concentration is website that allows students to actually see congruent figures. They are asked to math congruent figures.
2.)1. The teacher will begin by displaying a basketball and a tennis ball. Ask: "How are these two figures alike?" Possible answer: they are the same shape. "How are they
different?" Possible answer: they have different sizes. Explain to the students that the basketball and tennis ball are similar. Write the definition for similar on the board
and have students copy it. Definition: Similar figures have the same shape but different sizes. 2. Display two-same sized oranges. Ask: "How are these oranges the same?"
Possible answer: they are the same shape and size. Explain to the students that the oranges are congruent figures. Write the definition for congruent on the board and have
students copy it. Definition: Congruent figures have the same size and same shape. 3. referring back to the two oranges, have students explain why all congruent figures are
similar but not all similar figures are congruent. Possible answers: all congruent figures have the same shape, so they are similar. Similar figures are not necessarily the
same size, so they may or may not be congruent. 4. Explain the three different centers to the students. *Geoboard Center: Students will make a figure on the geoboard. The
students then switch with someone in the group to see if the figure can be duplicated (congruency). Have the students then make a similar figure. Students should use graph
paper to draw examples of figures which are congruent to each other. Next, they should draw figures which are similar to each other. *Pattern Block: Students will sort pattern
blocks which are congruent to each other and those which are similar to each other. Have one student in the group trace a shape on the lineless paper. Have the other students
try to replicate that shape on construction paper. To make sure the shapes are congruent, have the student cut out one of the shapes, place it over the other student's shape
to see if they are congruent, and glue it to the previous shape. Have them reverse roles until they have created five sets of congruent shapes. *Carol's Congruent
Concentration: Students will use the website http://www.beaconlearningcenter.com (Carol's Congruent Concentration) to further explore congruent and similar figures. After
completion of the website activities, have students design a flag that has congruent figures.
Carol's Congruent Concentration
files will display in a
new window. Others will
prompt you to download.
Assessment Strategies: Informal assessment would include observing student interaction throughout the lesson. For a written assessment, teacher draws a picture of two congruent figures. Have
students explain whether the figures are similar, congruent, or both.
Extension: Student groups may be ability paired to allow more advanted students to help classmates.
Each area below is a direct link to general teaching strategies/classroom accommodations for students with identified learning and/or behavior problems such as: reading or math performance below
grade level; test or classroom assignments/quizzes at a failing level; failure to complete assignments independently; difficulty with short-term memory, abstract concepts, staying on task, or
following directions; poor peer interaction or temper tantrums, and other learning or behavior problems.
Presentation of Environment
Time Demands Materials
Attention Using Groups and Peers
Assisting the Reluctant Dealing with Inappropriate Behavior
Be sure to check the student's IEP for specific accommodations.
Variations Submitted by
ALEX Users: | {"url":"http://alex.state.al.us/lesson_view.php?id=21195","timestamp":"2014-04-19T12:13:27Z","content_type":null,"content_length":"21022","record_id":"<urn:uuid:781c1d5d-f40b-46c0-aa53-e9ad07d5e966>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00047-ip-10-147-4-33.ec2.internal.warc.gz"} |
Best known in the west as the poet who wrote the Ruba 'iyat, Omar Khayyam was also one of the leading mathematicians of the Islamic world. This manuscript of his "Algebra," written in standard Arabic
scientific characters, was probably copied from an earlier manuscript; the work begins with basic definitions and makes its principal contribution in the field of cubic equations. Although the
"Algebra" was unknown to western mathematicians until the eighteenth century, Omar received wide recognition for it in the Islamic world. He was called to the court of Sultan Malik Shah I
(1054-1092), where he revised astronomical tables and introduced a highly accurate calendar. Among the other fourteen works bound in this volume are two by Sharaf al-Din al Tusi (d. ca. 1213/1214),
one on the height of vertical objects and the other on the height of the North Pole, and treatises by Alhazen (965-1039) on the astrolabe, and by al-Farabi (ca. 870-950) on music. | {"url":"http://www.columbia.edu/cu/lweb/eresources/exhibitions/treasures/html/159.html","timestamp":"2014-04-20T08:28:36Z","content_type":null,"content_length":"7839","record_id":"<urn:uuid:7cb52a46-5778-492c-81d9-cb03639a1a6b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00394-ip-10-147-4-33.ec2.internal.warc.gz"} |
GATE 1998 Exam Civil Engineering Question Paper
Here is the Civil Engineering GATE Exam sample written Multiple Choice Question paper.
GATE – 1998
CE : Civil Engineering
Section A
(100 marks)
1. For each subquestion given below four answers are provided out of which only one is correct. Indicate in the answer book the correct or most appropriate answer by writing the letter A,B,C or D
against the subquestion number. (31×1=31)
1.1 If A is a real square matrix, then AA^T is
(a) unsymmetric (b) always symmetric
(c) skew symmetric (d) sometimes symmetric
1.2 In matrix algebra AS = AT (A,S,T, are matrices of appropriate order) implies S=T only if
(a) A is symmetric (b) A is singular
(c) A is non singular (d) A is skew symmetric
1.3 A discontinuous real function can be expressed as
(a) Taylor’s series and Fourier’s series
(b) Taylor’s series and not by Fourier’s series
(c) neither Taylor’s series nor Fourier’s series
(d) not by Taylor’s series, but by Fourier’s series
1.4 The Laplace Transform of a unit step function u[a](T), defined as
Ua(t)= is
1 fort >a
(a) e^-as/s (b)se^-as
(c) s-u(0) (d) se^-as-1
1.5 The continuous function ƒ(x,y) is said to have saddle point at (a,b) if
(a) ƒ[x] (a,b)= ƒ[y](a,b) = 0; ƒ[sy]^2- ƒ[xx] ƒ[yy]<0 at (a,b)
(b) ƒ[x] (a,b)= 0; ƒ[y](a,b) = 0; ƒ[xy]^2- ƒ[xx] . ƒ[yy]>0 at (a,b)
(c) ƒ[x] (a,b)= 0; ƒ[y](a,b) = 0; ƒ[xx]- and ƒ[yy]<0 at (a,b)
(d) ƒ[x] (a,b)= 0; ƒ[y](a,b) = 0; ƒ[xx]-^2 – ƒ[xx] . ƒ[yy = ]0 at (a,b)
1.6 The Taylor’s series expansion of sin x is
x^2 x^4 x^2 x^4
(a) 1- + (b) 1+
2! 4! 2! 4!
x^3 x^5 x^3 x^5
(a) x+ + (b) x-
3! 5! 3! 5!
1.7 A three hinged arch shown in Figure is quarter of a circle. If the vertical and horizontal components of reaction at A are equal, the value of θ is
(a) 60^0
(b) 45^0
(c) 30^0
(d) None in (0^0,90^0)
1.8 A propped cantilever beam is shown in Figure. The plastic moment capacity of the beam is M[0]. The collapse load P is
(a) 4M[0]/L (b) 6M[0]/L
(c) 8M[0]/L (d) 12M[0]/L
1.9 The maximum permissible deflection for gantry gride, spanning over 6m, on which an EOT (electric overhead travelling) crane of capacity 200 k.N is operating, is
(a) 8mm (b) 10mm
(c) 12mm (d0 18mm
1.10 An isolated T beam is used as a walkway. The beam is simply supported with an effective span of 6m. The effective width of flange, for the cross-section shown in Figure, is
(a) 900 mm
(b) 1000 mm
(c) 1259 mm
(d) 2200 mm
1.11 The plane of stairs supported at each end by landings spanning parallel with risers is shown in Figure. The effective span of staircase slab is
(a) 3000 mm (b) 4600 mm
(c) 4750 mm (d) 6400 I
1.12 Some of the structural strength of a clayey material that is lost by remoulding is slowly recovered with time. This property of soils to undergo an isothermal gel-to sol-to-get transformation
upon agitation and subsequent rest is termed
(a) Isotropy (b) Anisotropy
(c) Thixotropy (d) Allotropy
1.13. If soil is dried beyond its shrinkage limit, it will show
(a) Large volume change
(b) Moderate volume change
(c) Low volume change
(d) No volume change
1.14. The stress-strain behaviour of soils as shown in the Figure correspondence to :
(a) Curve 1 : Loose sand and normaly consolidated clay
Curve 2 : Loose sand and over consolidated clay
(b) Curve 1 : Dense sand and normally consolidated clay
Curve 2 : Loose sand and over consolidated clay
(c) Curve 1 : Dense sand and over consolidated clay
Curve 2 : Loose sand and normally consolidated clay
(d) Curve 1 : Loose sand and over consolidated clay
Curve 2 : Dense sand normally consolidated clay
1.15 In cohesive soils he depth of tension crack (Z[cr]) is likely to be
1.16 The settlement of prototype in granular material may be estimated using plate load test data from the following expression :
1.17 In which one of the following arrangement would the vertical force on the cylinder due to water be the maximum ?
1.18. At the same mean velocity, the ratio of head loss per unit length for a sewer pipe flowing full to that for the same pipe flowing half full would be
(a) 2.0 (b) 1.63
(c) 1.00 (d) 0.61
1.19 Three reservoirs A, B and C are interconnected by pipes as shown in the Figure. Water surface elevations in the reservoirs and the Pirzometric head at the junction J are indicated in the Figure.
Discharge Q[1], Q[2] and Q[3] are related as
(a) Q[1]+Q[2] = Q[3] (b) Q[1]=Q[2]+Q[3]
(c) Q[2]=Q[1]+Q[3] (d) Q1+Q[2]+Q[3] = 0
1.20. The comparison between pumps operating in series and in parallel is
(a) Pumps operating is series boost the discharge, whereas pumps operating in parallel boost the head.
(b) Pumps operating in parallel boost the discharge, whereas pumps operating in series boost the head.
(c) In both cases there would be a boost in discharge only.
(d) In both case there would be a boost in head only.
1.21 The Bowen ratio is defined as
(a) Ratio of heat and vapour diffusivities
(b) Proportionality constant between vapour heat flux and sensible ehat flux.
(c) Ratio of actual evaportranspiration and potential evaportranspiration.
(d) Proportionality constant between heat energy used up in evaporation and the bulk radiation from a water body.
1.23. Excessive fluoride in drinking water causes
(a) Alzheimer’s disease
(b) Mottling of teeth and embrittlement of bones
(c) Methemoglobinemia
(d) Skin cancer
1.24 Coagulation-flocculation with alum is performed
(a) immediately before chlorination
(b) immediately after chlorination
(c) after rapid sand filtration
(d) before rapid sand filtration
1.25. Sewage treatment in an oxidation pond is accomplished primarily by
(a) alga-bacterial symbols
(b) algal photosynthesis only
(c) bacterial oxidation only
(d) chemical oxidation only
1.26 An inverted siphon is a
(a) device for distributing septic tank effluent to a soil absorption system
(b) device for preventing overflow from elevated water storage tank
(c) device for preventing crown corrosion of sewer
(d) section of sewer which is dropped below the hydraulic grade line in order to avoid an obstacle.
1.27 Water distribution systems are sized to meet the
(a) maximum hourly demand
(b) Average hourly demand
(c) maximum daily demand and fire demand
(d) average daily demand and fire demand.
1.28 At highway stretches where the required overtaking sight distance cannot be provided, it is necessary to incorporate in such sections the following
(a) at least twice the stopping sight distance
(b) half to the required overtaking sight distance
(c) one third of the required overtaking sight distance
(d) three times the stopping sight distance
1.29 The modulus of subgrade reaction is obtained from the plate bearing test in the form of load-deformation curve. The pressure corresponding to the following settlement value should be used or
computing modulus of subgrade reaction
(a) 0.375 cm (b) 0.175 cm
(c) 0.125 cm (d) 0.250 cm
1.30 In the plate bearing test, if the load applied is in the form of an inflated type of wheel, then this mechanism corresponds to
(a) rigid plate (b) flexible plate
(c) semi-rigid plate (d) semi-elastic plat e
1.31. Base course is used in rigid pavements for
(a) prevention of subgrade settlement
(b) prevention of slab cracking
(c) preventing of pumping
(d) preventing of thermal expansion
2. For each subquestion given below four answers are provided out of which one is correct. Indicate in the answer book the correct or most appropriate answer by writing the letter A,B, Cor Dagainst
the subquestion number.
1 1 (22×2=44)
2.1 The infinite series 1+ + +…………
(a) converges (b) diverges
(c) oscillates (d) unstable
2.2 The real symmetric matrix C corresponding to the Quadratic form
Q=4x[1]x[2] – 5x[22′] is
(a) 1 2 (b) 2 0
2 -5 0 -5
(c) 1 1 (d) 0 2
1 -2 1 -5
2.3 A cantilever beam is shown in the Figure. The moment to be applied at free end for zero vertical deflection at that point is
(a) 9 kN.m clockwise
(b) 9 kN.m anti-clockwise
(c) 12kN.m clockwise
(d) 12kN.m anti-clockwise
2.4. The strain energy stored in member AB of the pin-joined truss is shown in Fig. 2.4, when E and A are same for all members, is
(a) 2P^2L
(b) P2L
(c) P2L
(d) Zero
2.5 The stiffness matrix of a beam element is given as (2EI/L) 2 -1. Then the flexibility matrix is 1 2
(a) L 2 1 (b) L 1 -2
2EI 1 2 6EI -2 1
(a) L 2 -1 (b) L 2 -1
3EI -1 2 5EI -1 2
2.6 The plastic modulus of a section is 4.8×10^-4m^3. The shape factor is 1.2. The plastic moment capacity of the section is 120 kN.m. The yield stress of the material is
(a) 100MPa (b) 240MPa
(c) 250MPa (d)300MPa
2.7. A reinforced concrete wall carrying vertical loads is generally designed as per recommendations given for columns. The ratio of minimum reinforcements in the vertical and horizontal directions
(a) 2 :1 (b) 5:3
(c) 1:1 (d) 3:5
2.8. The proposed dam shown in the figure is 90 m long and the coefficient of permeability of the soil is 0.0013mm/second. The quantity of water (m^3) that will be lost per day be seepage is (rounded
to the nearest number):
(a) 55 (b) 57
(c) 59 (d) 61
2.9 The time for a clay layer to achieve 90% consolidation is 15 years. The time required to achieve 90% consolidation, if the layer were twice as thick, 3 times more permeable and 4 times more
compressible would be :
(a) 70 years (b) 75 years
(c) 80 years (d) 85 years
2.10. The total active thrust on a vertical wall 3m high retaining a horizontal sand backfill (unit weight γ[t]=20 kN/m^3, angle of shearing resistance φ’=30^0) when the water table is at the bottom
of the wall, will be :
(a) 30 kN/m (b) 35 kN/m
(c) 40 kN/m (d) 45 kN/m
2.11 A 40^0 slope is excavated to a depth of 10(d)depth of 10 m is a deep layer of saturated clay of unit weight 20kN/m^3; the relevant shear strength parameters are c[u]=72 kN/m^2 and φ[u]=0. The
rock ledge is at a great depth. The Taylor’s stability coefficient for φ[u]=0 and 400 slope angle is 0.18. The factor of safety of the load is :
(a) 2.0 (b) 2.1
(c) 2.2 (d) 2.3
2.12 A point load of 700 kN is applied on the surface of thick layer of saturated clay. Using Boussinesq’s elastic analysis, the estimated vertical stress (σ[v]) at a depth of 2 m and a radial
distance of 1.0 m from the point of application of the load is :
(a) 47.5 kPa (b) 47.6kPa
(c) 47.7 kPa (d) 47.8kPa
2.13 A nozzle discharging water under head H has an outlet area “a” and discharge coefficient c[d]=1.0. A vertical plate is acted upon by the fluid force F[j] when held across the free jet and by the
fluid force F[n] when held against the nozzle to stop the flow. The ratio
F[j] is
(a) 1/2 (b) 1
(c) 2 (d) 2
2.14. A body moving through still water at 6m/sec produces a water velocity of 4m/sec at a point 1m ahead. The difference in pressure between the nose and the point 1m ahead would be
(a) 2,000N/m^2 (b) 10,000N/m^2
(c) 19,620N/m^2 (d) 98,100N/m^2
2.15 The return period for the annual maximum flood of a given magnitude is 8 years. The probability that this food magnitude will be exceeded once during the next 5 years is
(a) 0.625 (b) 0.966
(c) 0.487 (d) 0.529
2.16 Two completely penetrating wells are located L (in meters) apart, in a homogeneous confined aquifer. The drawdown measured at the mid point between the two wells (at a distance of 0.5L from both
the wells) is 2.0 m when only he first well is being pumped at the steady rate of Q[1]m^3/sec. When both the wells are being pumped at identical steady rate of Q2m^3/sec, the drawdown measured at the
same location is 8.0m. It may be assumed that the drawdown at the wells always remains at 10.0 m when being pumped and the radius of influence is larger than Q^1 is equal to
2.17. In connection with the design of a barrage, identify the correct matching of the criteria of design with the items of design
│ │Item of design │ │Criteria of design │
│(i) │Width of waterway │(A)│Scour depth and exit gradient │
│(ii) │Level and length of downstream floor │(B)│Lacey’s formula for wetted perimeter and discharge capacity of the barrage as computed by weir equations│
│(iii)│Depth of sheet piles and total of barrage floor│(C)│Uplift pressure variation │
│(iv) │Barrage floor thickness │(D)│Hydraulic jump considerations │
Codes :
(i) (ii) (iii) (iv)
(a) A B C D
(b) D C B A
(c) B A D C
(d) B D A C
2.18. In a BOD test using 5% dilution of the sample (15 ML of sample and 285 mL of dilution water), dissolved oxygen values for the sample and dilution water blank bottles after five days incubation
at 20^0C were 3.80 and 8.80 mg/L, respectively. Dissolved oxygen originally present in the undiluted sample was 0.80 mg/L. The 5-day 20^0C BOD of the sample is
(a) 116mg/L (b) 108 mg/L
(c) 100mg/L (d) 92 mg/L
2.19. For a flow of 5.7 MLD (million litres per day) and a detention time of 2 hours, the surface area of a rectangular sedimentation tank to remove all particles have setting velocity of 0.33 mm/s
(a) 20m^2 (b) 100m^2
(c) 200m^2 (d) 400m^2
2.10. For a highway with design speed of 100 kmph, the safe overtaking sight distance is (assume acceleration as 0.53m/sec^2).
(a) 300m (b) 750m
(c) 320m (d) 470m
2.21 What is the equivalent wheel load of a dual wheel assembly carrying 20,440 N each for pavement thickness of 20 cm? Centre to centre spacing of tyres is 27cm and the distance between the walls of
yres is 11cm.
(a) 27600 N (b) 32300N
(c) 40880N (d) 30190N
2.22 Plate bearing test with 20 cm diameter plate on soil subgrade yielded a pressure of 1.25×10^5N/m^2 at 0.5 cm deflection. What is the elastic modulus of subgrade ?
(a) 56.18×10^5N/m^2 (b) 22.10×10^5N/m^2
(c) 44.25×10^5N/m^2 (d) 50.19×10^5N/m^2
3. Solve the following set of simultaneous equations by Gauss elimination method. (5)
x-2y+z=3 …..(1)
x+3z=11 ……(2)
-2y+z = 1 ……(3)
4. The cross-section of a pretensioned prestressed concrete beam is shown in Figure. The reinforcement is placed concentrically. If the stress in steel at transfer is 1000 MPa, compute the stress in
steel immediately after transfer. The modular ratio is 6. (5)
5. An ISMS 400, with a flange width of 140 mm is subjected to an axial compressive load of 750 kN. Design the slab base resting on concrete of grade M15. The slab base available are 600x350x20 mm,
650x325x28mm, and 700x2300x32 mm. Select one of these. (5)
6. The total unit weight of the glacial outwash soil is 6kN/m^3. The specific gravity of the solid particles of the soil is 2.67. The water content of the soil is 17% Calculate. (5)
(a) dry unit weight (b) porosity
(c) void ratio (d) degree of saturation
Assume that unit weight of water (γ[w]) is 10kN/m^3
7. An overflow spillway is 40 m high. Water flows down the spillway with a head of 2.5 m over the spillway crest. The spillway discharge coefficient
cd = 0.738. Show that the water depth at the toe of the spillway would be 0.3m. Determine the sequent depth required for the formation of the hydraulic jump and the loss of head in the jump. (5)
(50 Marks)
Answer and TEN question from this section. All questions carry equal marks.
8. Solve d^4y – y = 15 cos 2x
dx^4 (5)
9. Obtain the eigen values and eigen vectors of the matrix 8 -4
2 2 (5)
10. Using the Force Method, computer the slope at the support B of the propped cantilever beam shown in Fig. 10. The value of EI is constant. (5)
11. The steel portal frame shown in Figure is subjected to an imposed service load of 15 kN. Compute the required plastic moment capacity of the members. All the members are of the same
cross-section. Draw the collapse mode. (5)
12. Compute the bending moments at the top of the columns in the upper storey of the multi-storey frame shown in Figure, by the cantilever the portal methods of analysis. Indicate tension face of
columns, the area of cross-section of all columns is same.
13. The cross-section of a simply supported plate girder is shown in Figure. The loading on the girder is symmetrical. The bearing stiffeners at supports are the sole means of providing restraint
against torsion. Design the bearing stiffeners at supports, with minimum moment of inertia about the centre line of web plate only as the sole design criterion. The flat section available are :
250×25, 250×32, 200×28, and 200×32 mm. Dray a sketch (5)
14. The diameter of a ring beam in water tank is 7.8 m. It is subjected to an outward raidal force of 15 kN/m. Design the section using M 25 grade concrete and Fe415 reinforcement. Sketch the
cross-section. (5)
15. For general c- φ soil, cohesion c is 50 kPa, the total unit weight γ[t] is 20 kN/m^3 and the bearing capacity formula, calculate the net ultimate bearing capacity for a strip footing of width B =
2m at depth z = 1m. Considering shear failure only, estimate the safe total load on a footing 10 m long by 2 m wide strip footing using a factor of safety of 3. (5)
16. A soft normally consolidated clay layer is 20 m thick with a moisture content of 45%. The clay has a saturated unit weight of 20 kN/m^3, a particle specific gravity of 2.7 and a liquid limit of
60%. A foundation load will subjected the centre of the layer to a vertical stress increase of 10 kpa. Ground water level is at the surface of the clay. Estimate
(a) The initial and final effective stresses at the centre of the layer
(b) The approximate value of the compression index (C[c])
(c) The consolidation settlement of the foundation if the initial effective stress at the centre of the soil is 100kPs.
Assume unit weight of water to be 10kN/m^3.
17. Estimate the safe load carrying capacity of a single bored pile 20m long, 500 mm diameter. The adhesion coefficient (d) is 0.4. Take a factor of safety of 2.5. The soil strata is as follows.
│Depth (m)│Soil deposit │Undrained shear strength (S[u])kPa │
│0-5 │Loose fill │50 │
│5-10 │Weathered over consolidate clay│70 │
│10-15 │Over consolidated clay │100 │
│15-30 │High over-consolidated clay │200 │
Assumne, φn=0 is valid and Nc=9, for deep fomadations. (5)
18. (a) What is the shear strength in terms of effective stress on a plane within a saturated soil mass at a point where the total normal stress is 295 kPa and the pore water pressure 120kPa? the
effective stress shear strength parameters are C’=12 kPa and φ’ = 30^0 (5)
(b) In a falling head permeameter test on a silty clay sample, the following results were obtained; sample length 120 mm; sample diameter 80 mm; initial head 1200 mm, final head 400 mm; time for fall
in head 6 minutes stand pipe diameter 4 mm. Find the coefficient of permeability of the soil in mm/second.
19. Water flows through the Y-joint as shown in figure. Find the horizonal and vertical components of the force acting on the joint because of the flow of water. Neglet energy losses and body force.
20. Water flows in a rectangular channel at depth of 1.20 m and a velocity of 2.4m/sec. What would be the effect of a local rise in the channel bed of 0.60 m on the water surface ? (5)
21. A reservoir is proposed to be constructed to command an area of 1,20,000 hectares. The area has a monsoon rainfall of about 100cm per year. It is anticipated that sugar and rice would each be
equal to 20% of the command area and wheat equal to 50% of the command area, making a total of annual irrigation equal to 90% of the command area.
(i) Work out the storage required for the reservoir, assuming the water requirements given below, canal losses as 25% of the head discharge and reservoir evaporation and dead storage losses as 20% of
the gross capacity of the reservoir.
(ii) Determine also the full supply discharge of the canal at the head of the canal.
│Crop │Transplanted Rise │Sugar Cane│Wheat │
│Sowing time │July │Feb-Mar │October│
│Harvesting Time │November Next year│Dec-March │Mar-Apr│
│Total Water Depth in cm │150 │90 │37.5 │
│“Kor” period in weeks │2.5 │4 │4 │
│“Kot” watering in cm │19.0 │16.5 │13.5 │
Note that in which = Depth of water in cm, B=base period in
days, and D = duty of water in hectares/cumec. (5)
22. The following rainfall hyetograph and the corresponding direct run off are recorded in a watershed. Compute the one-hour unit hydrography ordinates for the first four hours. Assume φ index = 0.50
cm/hr (5)
│Time (hrs)│Rainfall (cm) │Direct Run Off (m^3/sec) │
│ │ │ │
│1 │2.8 │64.2 │
│2 │5.2 │288.4 │
│3 │4.7 │794.5 │
│4 │0.0 │1369.6 │
│5 │0.0 │1593.7 │
│6 │0.0 │1175.1 │
│7 │0.0 │588.1 │
│8 │0.0 │286.9 │
│9 │0.0 │170.5 │
│10 │0.0 │110.0 │
23. A dual-media rapid sand filter plant is to be constructed for treatment of 72 million litres of water per day. A pilot plant study indicated that a filtration rate of 15m/h would be acceptable.
Allowing one unit out of service for backwashing, how many 5mx8m filter units will be required ? Determine the net production in million litres per day of each filter unit if backwashing is done at
36m/h for 20 minutes and the water is wasted for the first 10 minutes of each filter run. (5)
24. The minimum flow of a river is 50m^3/s having a disoolved oxygen (DO) content of 7.0 mg/L (80% saturation) and BOD[5] of 8.0 mg/L. It receives a waste water discharge of 5cm^3/s with BOD[5] of
200 mg/L and no DO. If the rate constants for deoxygenation and reaeration (both base e) and 0.5/d and 1.0/d, respectively and the velocity of river flow is 0.8m/s, calculate the distance in
kilometre downstream from the point of waste water discharge where the minimum DO occurs. (5)
25. An activated sludge aeration tank (length 30.0m; width 14.0m; effective liquid depth 4.3m) has the following parameters :
flow 0.0796m3, soluble BOD[5] after primary settling 130 mg/L; mixed liquor suspended solids (MLSS) 2100 MG/L; mixed liquor volatile suspended solids (MLVSS) 1500 mg/L; 30 minute settled sludge
volume 230 mL/L; and return sludge concentration 9100 mg/L. Determine the aeration period, food to micro-organisms (F.M) ratio, sludge volume index (SVI), and return sludge rate. (5)
26. There is a horizontal curve of radius 360 m and length 180m. Calculate the clearance required from the central line on the inner side of the curve, so as to provide an overtaking sight distance
of 250m. (5)
27. The width of expansion joint gap is 2.5 cm in a cement concrete 20 cm thick pavement. If the laying temperature is 15^0 C and the maximum slab temperature in the summer is 55^0C, calculate
(i) the spacing between expansion joints, and
(ii) the spacing between contraction joints.
Coefficient of thermal expansion for concrete is 10×10^-6 per degree centigrade. Ultimate stress in tension in cement concrete is 1.6×10^5 N/m^2. Ultimate tensile stress in steel is 1200x10N/m^2.
Factor of safety is to be taken as 2. Assume the pavement width to be 3.5 m. Unit weight of steel is 75,000 N/m^3. Total reinforcement of 6kg/m^2 is provided in the slab. (5) | {"url":"http://latestexams.com/2009/10/gate-1998-exam-civil-engineering-question-paper/","timestamp":"2014-04-18T20:48:47Z","content_type":null,"content_length":"152233","record_id":"<urn:uuid:9bad709f-56b0-4df9-b272-5988818e11ee>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00212-ip-10-147-4-33.ec2.internal.warc.gz"} |
Recognizable and rational trace languages of finite and infinite traces
- Theoretical Computer Science , 1993
"... The main results of the present paper are the equivalence of definability by monadic second-order logic and recognizability for real trace languages, and that first-order definable, star-free,
and aperiodic real trace languages form the same class of languages. This generalizes results on infinite w ..."
Cited by 31 (4 self)
Add to MetaCart
The main results of the present paper are the equivalence of definability by monadic second-order logic and recognizability for real trace languages, and that first-order definable, star-free, and
aperiodic real trace languages form the same class of languages. This generalizes results on infinite words and on finite traces to infinite traces. It closes an important gap in the different
characterizations of recognizable languages of infinite traces. 1 Introduction In the late 70's, A. Mazurkiewicz introduced the notion of trace as a suitable mathematical model for concurrent systems
[16] (for surveys on this topic see also [1, 6, 10, 17]). In this framework, a concurrent system is seen as a set \Sigma of atomic actions together with a fixed irreflexive and symmetric independence
relation I ` \Sigma \Theta \Sigma. The relation I specifies pairs of actions which can be carried out in parallel. It generates an equivalence relation on the set of sequential observations of the
system. As ...
- Acta Informatica , 1993
"... This paper shows the equivalence between the family of recognizable languages over infinite traces and the family of languages which are recognized by deterministic asynchronous cellular Muller
automata. We thus give a proper generalization of McNaughton's Theorem from infinite words to infinite tra ..."
Cited by 13 (3 self)
Add to MetaCart
This paper shows the equivalence between the family of recognizable languages over infinite traces and the family of languages which are recognized by deterministic asynchronous cellular Muller
automata. We thus give a proper generalization of McNaughton's Theorem from infinite words to infinite traces. Thereby we solve one of the main open problems in this field. As a special case we
obtain that every closed (w.r.t. the independence relation) word language is accepted by some I-diamond deterministic Muller automaton. 1 Introduction A. Mazurkiewicz introduced the concept of traces
as a suitable semantics for concurrent systems [Maz77]. A concurrent system is given by a set of atomic actions \Sigma = fa; b; c; : : :g together with an independence relation I ` \Sigma \Theta \
Sigma, which specifies pairs of actions which can be performed concurrently. This leads to an equivalence relation on \Sigma generated by the independence relation I. More precisely, if a and b
denote independent...
- Theoretical Computer Science , 1996
"... We present direct subset automata constructions for asynchronous (asynchronous cellular, resp.) automata. This provides a solution to the problem of direct determinization for automata with
distributed control for languages of finite traces. We use the subset automaton construction and apply Kla ..."
Cited by 1 (0 self)
Add to MetaCart
We present direct subset automata constructions for asynchronous (asynchronous cellular, resp.) automata. This provides a solution to the problem of direct determinization for automata with
distributed control for languages of finite traces. We use the subset automaton construction and apply Klarlund's progress measure technique in order to complement non-deterministic asynchronous
cellular Buchi automata for infinite traces. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2056337","timestamp":"2014-04-21T05:19:50Z","content_type":null,"content_length":"20190","record_id":"<urn:uuid:c9f24d31-4a35-45c7-846f-6325ad19bbfd>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00321-ip-10-147-4-33.ec2.internal.warc.gz"} |
Explanation to a logical problem needed!
July 6th 2008, 06:35 AM
Explanation to a logical problem needed!
There are 3 ants at 3 corners of a triangle, they randomly start moving towards another corner.. what is the probability that they don't collide.
All three should move in the same direction - clockwise or anticlockwise. Probability is 1/4.
Can someone please explain the answer?
July 6th 2008, 08:28 AM
Label the triangle $\Delta ABC$. Any bit-triple will denote the direction the ant at the vertices goes. Example: $\left( {0,1,1} \right)$ means that ant at A goes counter-clockwise while ants at
B & C go clockwise. In that case ant A will collides with the ant at B. There are eight such triples. In how many will there be no collisions?
July 6th 2008, 09:17 AM
July 6th 2008, 09:20 AM
July 17th 2008, 01:02 AM
Label the triangle $\Delta ABC$. Any bit-triple will denote the direction the ant at the vertices goes. Example: $\left( {0,1,1} \right)$ means that ant at A goes counter-clockwise while ants at
B & C go clockwise. In that case ant A will collides with the ant at B. There are eight such triples. In how many will there be no collisions?
I really liked this explanation, thanks!
July 17th 2008, 06:11 AM
There are 3 ants at 3 corners of a triangle, they randomly start moving towards another corner.. what is the probability that they don't collide.
All three should move in the same direction - clockwise or anticlockwise. Probability is 1/4.
Can someone please explain the answer?
If they are not all going in the same direction then a pair must be walking towards one another and so will collide, so to avoid collision they must all go in the same direction.
Each ant has two choices of direction so the probability that they all go clockwise is (1/2)(1/2)(1/2)=1/8.
Similarly the probability that they all go anti-clockwise is 1/8
Hence the probability that they all go in the same direction is the probability that they all go clockwise plus the probability that they all go anti-clockwise= .. | {"url":"http://mathhelpforum.com/statistics/43123-explanation-logical-problem-needed-print.html","timestamp":"2014-04-18T01:10:15Z","content_type":null,"content_length":"9056","record_id":"<urn:uuid:1978dedc-0378-4d0f-963a-99d7312da8e9>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00085-ip-10-147-4-33.ec2.internal.warc.gz"} |
Descriptive statistics of some Agile feature characteristics
September 2, 2012
By Derek-Jones
The purpose of software engineering research is to figure out how software development works so that the software industry can improve its quality/timeliness (i.e., lower costs and improved customer
satisfaction). Research is hampered by the fact that companies are not usually willing to make public good quality data about the details of their software development processes.
In mid July a post on the ACCU general mailing list caught my eye and I followed a link to a very interesting report, went to visit 7digital a few weeks later, told them about my empirical software
engineering with R book and how I wanted to make all the data I used available to readers and they agreed to make the data public! The data arrived at the start of August and I spent the rest of the
month analyzing it (the R code I used to analyse it).
Below is a draft of what will eventually appear in the book. As always comments welcome, particularly if you can extract more information from the 7digital data (the mapping of material to WordPress
blog format might still be flaky in places).
Agile feature characteristics
Traditionally software development projects work towards releasing product updates on prespecified dates, often with a release cycle of between once or twice a year and with many updates included in
each release. In contrast to this approach development groups following an Agile method <book ???> make frequent releases with each containing a small incremental update (Agile is an umbrella term
applied to a variety of iterative and incremental software development methodologies).
Rationale for the Agile approach includes getting rapid feedback from customers on the direction of developments and maximizing return on software investment by getting newly implemented features
into customers hand almost immediately.
The large number of releases (compared to other approaches) has the potential to provide enough data for meaningful statistical analysis of questions such as how often new features are released and
the number of features under development at any time.
7digital<book 7Digital_12> is a digital media delivery company that operates an international on-line digital music store (www.7digital.com) and provides business to business digital media services
via an open API platform. At 7digital software development is done using an Agile process and since April 2009 various items of information have been recorded <book Bowley_12>; 7digital are open
about there process and have made this information publicly available and it is analysed here.
The data consists of information on the 3,238 features implemented by the 7digital team between April 2009 and July 2012; this information consists of three dates (Prioritised/Start Development/
Done), a classification of the feature as one of nine possible internal types (i.e., Bug, Build Maintenance, Feature Bug, Infrastructure, MMF, Production Bug, Regression Bug, Reports and Support) and
for features of type MMF a Size estimate (i.e., small, medium or large); the two most frequent feature types are MMF and Production Bug with 1350 and 745 instances respectively out of 3,238 features.
Information on one feature was removed because the dates looked as-if they had been incorrectly recorded.
During the recording period the number of developers grew from 14 to 35.
The start/done dates represent an elapsed time period, a wide variety of factors can cause work on the implementation of a feature to be stalled for a period of time, i.e., the time difference need
not represent total development time.
The Agile process gives a great deal of flexibility to developers about which projects they chose to work on. Information on the number of developers working on the implementation of individual
features was not recorded.
Is the data believable?
As discussed elsewhere [checking data quality] measurements involving people are likely to be subject more external influences than measurements of inanimate objects such as source code, they are
also more difficult to replicate and are open to those being measured influencing the results in their favor.
The following is what is known about the 7digital measurement process.
The data recording was done by whoever ran the Agile stand-up session at the start of the day.
What unit of time measurement is appropriate for analysing an Agile process? While fine grained measurements are the ideal they have the potential to require nontrivial effort from those reporting
the values, are open to individual interpretation (e.g., when exactly did work start/stop on this feature?) and subject to human error (e.g., forgetting to note the event when it happened and having
to recall it later). The day was chosen as the basic unit of time measurement; in light of the time needed to implement most features this may seem too large, but this choice has the advantage of
being the natural unit of measurement in that developers meet together every morning to discuss progress and that days work and being so broad makes it more likely that start/end times will be
consistently applied as well as less prone to inaccurate recall later.
Goodhart’s law (it is really an observation of human behavior rather than a law) says “Any observed statistical regularity will tend to collapse once pressure is placed on it for control purposes.”
If the measurements collected were actively used to control or evaluate the development team then the developers would be motivated to move the measurements in the direction that was favorable to
them. 7digital do not attempt to use the measurements for control or evaluation or developers and developers have no motive change their behavior based on being measured.
I find the data believable in that the measurement process is not so expensive or cumbersome that developers are unwilling to attempt to report accurate data and not being directly effected by the
results means they have no motive for changing their behavior to influence the measurements.
Believable data does not mean the data is error free. The following is a count of the days of the week on which feature implementations were recorded as being Done. Monday is day 0 and the counts for
Saturday/Sunday should be zero; assuming that Friday/Monday had been intended the non-zero values suggest a 2-4% error rate, comparable with human error rates for low stress/non-critical work.
> table(Done.day[(Done.day <= 650)] %% 7)
> table(Done.day[(Done.day > 650)] %% 7)
Predictions made in advance
Your author is not aware of any empirically based theory of Agile feature development capable of making predictions about development time related questions.
The analysis described here is purely descriptive; there is no attempt to build predictive models or compare the data against any existing theory.
The results from this data analysis (and all analysis in this book) are to provide information that will help software developers do a better job. What information can be extracted that would be
useful to 7digital? This has proved to be a something of a chicken-and-egg question because people are interested in seeing the results before deciding whether they are useful. The following issues
are of general interest:
1. characteristics of the time taken to implement new features,
2. variations in the number of different kinds of features (e.g., bug/non-bug) over time,
Applicable techniques
Overview of data
The data consists of start/finish times for the implementations of features and the overview information that springs to mind is average number of features implementation starts per time interval and
average time taken to implement a feature. The figure below is a good enough approximation to this information to get a rough idea of its characteristics (e.g., the effect of weekends and holidays
have not been taken into account and a 30 day rolling mean has been applied to smooth out daily fluctuations).
Figure 1. Average number of feature implementations started (blue) and their average duration (red); a 30 day rolling mean has been applied to both. Data courtesy of 7digital.
The plot appears to have two parts, before and after day 650 (or thereabouts). After day 650 the oscillations in feature implementation time die down substantially and the rate at which new feature
implementations are started steadily increases. Possible reasons for the larger variations in the first 650 days include less expertise in organizing features into smaller work items and larger
features being needed during the earlier stages of product development.
Obviously shorter implementation times make it possible to start work on more new features, however new feature starts continues to increase while implementation time stabilises around a lower value.
Possible causes for the continuing increase in new feature starts include an increase in the number of developers and/or existing developers becoming more skilled in breaking work down into smaller
features (i.e., feature implementation time stays about the same because fewer developers are working on each feature, making developers available to start on new features).
Software product development is a complicated business and a wide variety of different events and processes are likely to have contributed to the patterns of behavior seen in the data. While
developers write the software it is customers who report most of the bugs and one of the goals of following an Agile methodology is rapid response to customer feedback (e.g., deciding which features
need to be implemented and which left out). Customer information is not present in the dataset.
Are the same processes generating the apparent two phase behavior?
Any pattern of behavior is generated by a set of processes and when a pattern of behavior changes it is worthwhile asking how the processes driving the behavior changed.
Fitting a statistical distribution to a dataset is useful in that many distributions are known to be generated by processes having specified behaviors. Being able to fit the same distribution to both
the pre and post 650 day datasets suggests that the phase change seen was not a fundamental change but akin to turning the volume knob of the distribution parameters one way or the other. If the
datasets are best fitted by different distributions then the processes generating the two patterns of behavior are potentially very different.
Of the two characteristics plotted the feature implementation time appears to undergo the largest change of behavior and so the distribution of implementation times for the two phases is analysed
Table 1. Values of the first four moments
for the pre and post 650 day feature
implementation times.
Moment Initial 650 days After 650 days
Median 3 3
Mean 7.6 4.6
Variance 116.4 35.0
Skewness 3.3 4.9
Kurtosis 19.2 30.4
A quick look at the data shows that many features are implemented in a single day and only a few take more than a week, one distribution having this pattern of behavior is the power-law. The table
above shows that the variance is much larger than the mean and the distribution has a large positive skew, properties shared by the [negative binomial distribution]. The figure below is a plot of the
number of features requiring a given number of elapsed working days for their implementation (top first 650 days, all features finished after 650 days), along with two power-law and a negative
binomial distribution fit to the data.
Figure 2. Number of features whose implementation took a given number of elapsed workdays. Top first 650 days, bottom after 650 days. Green line is the fitted negative binomial distribution. Data
courtesy of 7digital.
The power-law fits were obtained by splitting the data into two parts, shorter/longer than 16 days (after noticing that visually the combined dataset seemed to have this form, less noticeable in the
two subsets) and performing nonlinear regression using nls to find good fits for the parameters a and b (whose initial starting values converged without needing manual tuning).
pow_equ=nls(num.features ~ a*days^b, start=list(a=1200, b =-2))
y=predict(pow_equ, days)
lines(days, y)
While the power-law fits are not very good overall one of them does provide an easy to remember seat of the pants method for approximating the probability of a project taking a small number of days
to complete (e.g., for
[sum(1/(1:16)) is 3.38]). The approximation
The R package fitdistrplus contains functions for matching and fitting a dataset against known commonly occurring distribution. The Cullen and Frey graph produced by a call to descdist suggests that
a negative binomial distribution is the best fitting of those tested (agreeing with the ad-hoc conclusion jumped to above).
descdist(p\$Cycle.Time, discrete=TRUE, boot=100)
The function fitdist returns values for the parameters providing the appropriate fit to the specified dataset and distribution.
fd=fitdist(p\$Cycle.Time, "nbinom", method="mle") # Fit to a negative binomial distribution
# Plot distribution using fitted parameters
plot(dnbinom(1:93, size=size.ct, mu=mu.ct)*length(p\$Cycle.Time),
xlim=c(1,90), ylim=c(1,1200), log="xy")
The figure above shows that the negative binomial distribution could be a reasonable fit if the percentage of single day features was not so high. Two possibilities spring to mind:
1. the data does not include any counts for zero days which is one of the possible values supported by the negative binomial distribution (obviously feature implementations cannot take zero days),
2. measurement quantization introduces significant uncertainty for shorter implementations, if the minimum unit of measurement were less than 1 day the fit might be much better because some feature
implementations take half-a-day while others take a whole day.
It is possible to adjust the negative binomial equation to move the lower bound from zero to one. The package gamlss supports what is known as zero-truncation and the figure below shows the
zero-truncated negative binomial distribution fitted to the pre/post 650 day counts.
Figure 3. A zero-truncated negative binomial distribution fitted to the number of features whose implementation took a given number of elapsed workdays; top first 650 days, bottom after 650 days.
Data courtesy of 7digital.
The quality of fit is much better for the pre 650 day data compared to the post 650 data.
> qual.pre650
AIC log.likelihood
6109.225 -3052.612
> qual.post650
AIC log.likelihood
9923.509 -4959.754
Modifying the negative binomial distribution to handle a dataset not containing zeroes improves the fit, can the fit be further improved by adjusting for measurement quantization?
One possibility is to simulate measuring feature implementation in units smaller than a day; the following code multiplies the implementation time by two and randomly decides whether to subtract one,
i.e., maps measurements made in days to a possible set of measurements made in half days.
dither=as.integer(runif(num.features, 0, 1) > 0.33)
Fitting 1,000 randomly modified half-day measurements and averaging over all results shows that the fit is slightly worse than the original data (as measured by various goodness of fit criteria):
> fit.quality(p\$Cycle.Time[Done.day < 650])
loglikelihood AIC BIC
-3438.284 6880.567 6890.575
> rowMeans(replicate(1000, fit.quality(sub.divide(p\$Cycle.Time[Done.day < 650]))))
loglikelihood AIC BIC
-4072.721 8149.442 8159.450
As discussed in the section on [properties of distributions] the negative binomial distribution can be generated by a mixture of [Poisson distribution]s whose means have a [Gamma distribution]. There
are other distributions that can be generated through a mixture of Poisson distributions, are any of them a better fit of the data? The Delaporte distribution <book ???> sometimes fits very slightly
better and sometimes slightly worse (see chapter source code for details); the difference is not large enough to warrant switching from a relatively well known distribution to one that is rarely
covered in text books or supported in software; if data from other projects is best fitted by a Delaporte distribution then a switch may well be worthwhile.
The data subset corresponding to p$Type == "Production Bug" fits significantly better than the complete dataset (i.e., AIC = 3729) while the fit for the subset p$Type == "MMF" is comparable to the
complete dataset (i.e., AIC of 7251).
Both datsets appear to follow the same distribution, the negative binomial distribution (with zero-truncation), with the initial 650 days having a greater mean and variance than post 650 days. The
Poisson distribution is often encountered in processes involving events in time and one can imagine it applying to the various processes involved in the implementation of a feature; why the means of
these Poisson distributions might follow a Gamma distribution is harder to fathom and is left for another day (it implies that both the Poisson means are decreasing and that the variance of the means
is decreasing)
Do any other equations fit the data? Given enough optional parameters it is always possible to find an equation that is a good fit to the data. The following call to nls shows that the equation
exp_mod=nls(num.features ~ a*exp(b*days^c), start=list(a=10000, b=-2.0, c=0.4))
This equation is unappealing because of its lack of similarity with equations seen in many other areas of research, an exponential whose exponent has the form of
How many new feature implementations are started on each day?
The table below give the probability of a given number of new feature implementations starting on any day. There are sufficient multi-day implementations that on almost 20% of days no new feature
implementations are started. An exponential equation is the commonly encountered form that provides an approximate fit to these values (i.e.,
Table 2. Probability of a given number of new feature
implementations starting on any day.
0.18 0.12 0.15 0.1 0.099 0.081 0.076 0.043 0.033 0.029
Time dependent patterns in the data
7digital is a growing company and we would expect that the rate of creation of features would increase over time, also as the size of the code base and the customer base increases the rate at which
bugs are accepted for fixing is likely to increase.
The number of features developments started per day is one way of comparing different types of features. Plotting this information (see top left) shows that there is a great deal of variation over
very short periods of time. This variation can be smoothed using a [rolling mean] to bring out the trends (the rollmean function in package zoo); the other plots show 20, 50 and 120 day rolling means
for bugs (red) and non-bugs (blue) and the non-bug/bug feature ratio (black).
Figure 4. Number of feature developments started on a given work day (red bug fixes, blue non-bug work, black ratio of two values; 20 day rolling mean bottom left, 50 day top right, 120 day bottom
Both the number of bugs and non-bug features has trended upwards, as has the ratio between them. While it is tempting to suggest that this increase has been generated by the significant increase in
number of developers over the time period, it is also possible the group has become better at dividing work into smaller feature work items or that having implemented the basic core of the products
less work is now needed to create new features. The information present in the data is not sufficient to attempt to provide believable explanations for the upward trend.
Time series analysis
A preliminary data analysis technique for time data is to plot the current values against their lagged values for various lags. The output from the R function lag.plot for the number of in-progress
features is shown below; apart from clustering the plots do not show any noticeable relationships in the data.
Figure 5. Scatterplot of number of features currently in-progress against various time lags (in working days).
Over longer timescales do the number of in-progress feature implementations have noticeable seasonal variations (e.g., greater in summer and Christmas/year year when developers are likely to be
[Autocorrelation] is the cross-correlation of a time varying signal with itself, i.e., the correlation between a measurement occurring at time
The number of in-progress features appears to be increasing over time (top left of figure below) and this trend away from zero needs to be adjusted for before an autocorrelation is calculated. The
feature implementation recording process did not happen over night and took a while before it covered all work performed; comparing a linear fit of all data (pink line of top left of figure below)
and all data from January 2010 (red line) shows that this startup period does not significantly bias the growth trend. However, it is possible that patterns of behavior present in the total set of
work items over a period are not reflected in the first 250 days of recording (roughly 180 working days) and so these are excluded from this particular analysis. From feature duration measurements we
know that over 70% of features take longer than a day to implement, so the data contains a lot of serial dependence which may affect the accuracy of the results.
trend=lm(day.totals ~ time(day.totals))
plot(day.totals, xlab="Days since Apr 2009", ylab="Features in-progress")
day.detrend=day.totals - predict(trend) # Subtract out any global trend
The bottom left of the figure below shows the variation of in-progress features about the trend line. The top right shows the autocorrelation function for this plot, the regular spikes are caused by
weekends (when no work took place). Removing weekends from the analysis results in the autocorrelation shown in the bottom right.
Apart from some correlation having a one day lag the autocorrelation drops to zero almost immediately followed by what appear to be small random spikes. These small spikes do not look important
enough to follow up. A very similar pattern is seen in the autocorrelation of the two 650-day phases (the initial 650 days has a larger correlation for lags of 2-5 days). It is possible that a
seasonal oscillation in feature work exists but is not seen because the data is so noisy (i.e., contains significant variation between adjacent days).
Summing daily values to create weekly totals, which of provides some smoothing, and performing the above analysis again produces essentially the same results.
Figure 6. The number of features currently in-production on a given day since April 2009 (top left, pink line is a linear fit of all data, red line a linear fit of the data after day 250), the
variation in this number about a linear trend line, excluding the first 250 days (bottom left), the autocorrelation function (top right) and the autocorrelation function with weekends removed from
the data (bottom right).
Do reported bugs correlate with new feature releases?
When a feature is released the probability of a new bug being reported increases. Whether different bug probabilities should be assigned to bugfix releases and non-bugfix releases is discussed below.
Based on this expectation we would expect to see a [cross correlation] between releases and number of bugs accepted for fixing. The more code a feature contains the more likely it is to contain a
bug; however, no information on feature code size is provided so number of implementation work days is used as a measure of feature size.
The data does not specify which bugs belong to which features. It is to be expected that over time the probability of a bug being reported against a feature will decrease, reasons for this behavior
include bugfixing, customers no longer using a feature and features being superseded by newer ones.
The figure below is the cross correlation between the 'size' of all features recorded as Done on a given day and all bugs recorded as Prioritised on a given date; the top plot is for all non-bugfix
feature releases while the bottom plot is for all feature releases.
Figure 7. Cross correlation of feature release 'size' (top non-bugfix releases, bottom all releases) and date when bugs are prioritised.
The feature/bug cross correlation in the figure above should be zero for negative lags (i.e., no bugs can be reported for features that have not yet been released). One way of interpreting the
pattern of correlation is that there some bugs are reported immediately after the release (perhaps by early adopters) followed by more bugs some 20 to 50 working days after release; other
interpretations include there being a small amount of signal just visible behind lots of noise in the data or that the approximation used to estimate feature size is too crude.
Using weekly totals produces essentially the same result.
Summary of findings
The distribution of feature implementation times appears to follow a negative binomial distribution (with zero-truncation), with the values for the initial 650 days having a greater mean and
variability (i.e., variance) than the following days.
There appears to be too much noise in the data for any time series signal involving mean values or a relationship between releases and bugs to be reliably extracted.
Thanks to 7digital for making the data available and being willing to make it public and to Rob Bowley for helping me to understand 7digital’s development environment.
for the author, please follow the link and comment on his blog:
The Shape of Code » R
daily e-mail updates
news and
on topics such as: visualization (
), programming (
Web Scraping
) statistics (
time series
) and more...
If you got this far, why not
subscribe for updates
from the site? Choose your flavor:
, or | {"url":"http://www.r-bloggers.com/descriptive-statistics-of-some-agile-feature-characteristics/","timestamp":"2014-04-20T08:44:34Z","content_type":null,"content_length":"83890","record_id":"<urn:uuid:607677fb-165f-4c0c-aa14-c747995e7916>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00491-ip-10-147-4-33.ec2.internal.warc.gz"} |
Compound Interest?
I don't really see what i'm doing wrong...can someone help me out? thanks
It's hard to say what you did wrong when you don't show what you did! I suspect that you used the formula, P(1+ r)^n, where n is the number of compounding periods and r is the interest per
compounding period, incorrectly. It looks like you used n= 4, which is correct, compounding every have year for two years means compounding 4 times, but used r= .03 which is wrong. An interest rate
of 3% means 3/2= 1.5% or .015 semi-annually.
Or calculate the semi-annual rate the $332 results at: 6000(1 + i)^4 = 6332 1 + i = (6332 / 6000)^(1/4) i = .01355... That's ~1.36% compared to 1.5% ; so A is better.
i punched in .015 and 1.5 for the rate of interest per compounding period but both are wrong..maybe my other work is wrong? and yeah that's the formula i used.
You're asking us to guess...wrong in what way? Did you change number of periods from 2 to 4?
the way it goes is i have to punch in values to the blank spaces then press submit, if its wrong it says incorrect.
...yeah i see i'm getting no where here, guess i'll just email my teacher and ask her for help.
ah i see where i went wrong... A=P(1+r/n)^nt $P=6000$ $r=0.03$ $n=2$ $t=2$ A = 6000(1.015)^4 = 6368.18 | {"url":"http://mathhelpforum.com/business-math/164433-compound-interest.html","timestamp":"2014-04-23T19:12:32Z","content_type":null,"content_length":"56046","record_id":"<urn:uuid:769ef5ee-d0ad-47b8-b3a1-a76268b2f331>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
Factoring Quadratic Expressions
9.5: Factoring Quadratic Expressions
Created by: CK-12
In this lesson, we will learn how to factor quadratic polynomials for different values of $a,\ b$$c$$c=0$
Factoring Quadratic Expressions in Standard From
Quadratic polynomials are polynomials of degree 2. The standard form of a quadratic polynomial is $ax^2+bx+c$where $a,\ b$and $c$are real numbers.
Example 1: Factor $x^2+5x+6$
Solution: We are looking for an answer that is a product of two binomials in parentheses: $(x+ \underline{\;\;\;\;\;\;\;\;\;\;\;} \ )(x + \underline{\;\;\;\;\;\;\;\;\;\;\;} \ )$
To fill in the blanks, we want two numbers $m$$n$
$6&=1 \times 6 \qquad and \qquad 1+6=7\\6&=2 \times 3 \qquad and \qquad 2+3=5$
So the answer is $(x+2)(x+3)$
We can check to see if this is correct by multiplying $(x+2)(x+3)$
$x$$x$$3 =x^2+3x$
2 is multiplied by $x$$3 =2x+6$
Combine the like terms: $x^2+5x+6$
Example 2: Factor $x^2-6x+8$
Solution: We are looking for an answer that is a product of the two parentheses $(x+ \underline{\;\;\;\;\;\;\;\;\;\;\;} \ )(x + \underline{\;\;\;\;\;\;\;\;\;\;\;} \ )$
The number 8 can be written as the product of the following numbers.
$8=1\cdot 8$$1+8=9$
$8&=(-1)(-8) && and && -1+(-8)=-9\\8&=2 \times 4 && and && 2+4=6$
$8=(-2)\cdot (-4)$$-2+(-4)=-6 \leftarrow$
The answer is $(x-2)(x-4)$
Example 3: Factor $x^2+2x-15$
Solution: We are looking for an answer that is a product of two parentheses $(x \pm \underline{\;\;\;\;\;\;\;\;\;\;\;} \ )(x \pm \underline{\;\;\;\;\;\;\;\;\;\;\;} \ )$
In this case, we must take the negative sign into account. The number –15 can be written as the product of the following numbers.
$-15=-1 \cdot 15$$-1+15=14$
And also,
$-15=1 \cdot (-15)$$1+(-15)=-14$
$-15=(-3) \times 5$and $(-3)+5=2$This is the correct choice.
$-15=3 \times (-5)$and $3+(-5)=-2$
The answer is $(x-3)(x+5)$
Example 4: Factor $-x^2+x+6$
Solution: First factor the common factor of –1 from each term in the trinomial. Factoring –1 changes the signs of each term in the expression.
$-x^2+x+6 = -(x^2-x-6)$
We are looking for an answer that is a product of two parentheses $(x \pm \underline{\;\;\;\;\;\;\;\;\;\;\;} \ )(x \pm \underline{\;\;\;\;\;\;\;\;\;\;\;} \ )$
Now our job is to factor $x^2-x-6$
The number –6 can be written as the product of the following numbers.
$&-6=(-1) \times 6 \qquad and \qquad (-1)+6=5\\&-6=1 \times (-6) \qquad and \qquad 1+(-6)=-5\\&-6=(-2) \times 3 \qquad and \qquad (-2)+3=1\\&-6=2 \times (-3) \qquad and \qquad 2+(-3)=-1 \qquad This \
is \ the \ correct \ choice.$
The answer is $-(x-3)(x+2)$
To Summarize:
A quadratic of the form $x^2+bx+c$$(x+m)(x+n)$
• If $b$$c$$m$$n$
□ Example $x^2+8x+12$$(x+6)(x+2)$
• If $b$$c$$m$$n$
□ Example $x^2-6x+8$$(x-2)(x-4)$
• If $c$$m$$n$
□ Example $x^2+2x-15$$(x+5)(x-3)$
□ Example $x^2+34x-35$$(x+35)(x-1)$
• If $a=-1$$-(x+m)(x+n)$
□ Example $-x^2+x+6$$-(x-3)(x+2)$
Practice Set
Sample explanations for some of the practice exercises below are available by viewing the following video. Note that there is not always a match between the number of the practice exercise in the
video and the number of the practice exercise listed in the following exercise set. However, the practice exercise is the same in both.
CK-12 Basic Algebra: Factoring Quadratic Equations (16:30)
Factor the following quadratic polynomials.
1. $x^2+10x+9$
2. $x^2+15x+50$
3. $x^2+10x+21$
4. $x^2+16x+48$
5. $x^2-11x+24$
6. $x^2-13x+42$
7. $x^2-14x+33$
8. $x^2-9x+20$
9. $x^2+5x-14$
10. $x^2+6x-27$
11. $x^2+7x-78$
12. $x^2+4x-32$
13. $x^2-12x-45$
14. $x^2-5x-50$
15. $x^2-3x-40$
16. $x^2-x-56$
17. $-x^2-2x-1$
18. $-x^2-5x+24$
19. $-x^2+18x-72$
20. $-x^2+25x-150$
21. $x^2+21x+108$
22. $-x^2+11x-30$
23. $x^2+12x-64$
24. $x^2-17x-60$
Mixed Review
25. Evaluate $f(2)$$f(x)=\frac{1}{2} x^2-6x+4$
26. The Nebraska Department of Roads collected the following data regarding mobile phone distractions in traffic crashes by teen drivers.
1. Plot the data as a scatter plot.
2. Fit a line to this data.
3. Predict the number of teenage traffic accidents attributable to cell phones in the year 2012.
Year ($y$ Total ($n$
27. Simplify $\sqrt{405}$
28. Graph the following on a number line: $-\pi, \sqrt{2}, \frac{5}{3}, - \frac{3}{10}, \sqrt{16}$
29. What is the multiplicative inverse of $\frac{9}{4}$
Quick Quiz
1. Name the following polynomial. State its degree and leading coefficient $6x^2 y^4 z+6x^6-2y^5+11xyz^4$
2. Simplify $(a^2 b^2 c+11abc^5 )+(4abc^5-3a^2 b^2 c+9abc)$
3. A rectangular solid has dimensions $(a+2)$$(a+4)$$(3a)$
4. Simplify $-3hjk^3 (h^2 j^4 k+6hk^2)$
5. Find the solutions to $(x-3)(x+4)(2x-1)=0$
6. Multiply $(a-9b)(a+9b)$
Files can only be attached to the latest version of None | {"url":"http://www.ck12.org/book/CK-12-Algebra-Basic/r1/section/9.5/","timestamp":"2014-04-17T15:49:13Z","content_type":null,"content_length":"128524","record_id":"<urn:uuid:355f8e95-6958-422d-a00a-07aafc45f0af>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00350-ip-10-147-4-33.ec2.internal.warc.gz"} |
Monotone sequence - how can this be a monotone sequence if it bounces between numbers
May 15th 2011, 01:42 AM
Monotone sequence - how can this be a monotone sequence if it bounces between numbers
My understanding of showing a sequence is monotone is to show that it either increases or decreases by working out
$(a_n+1) - (a_n)$
and if it's > 0 its increasing, if less than, decreasing, right?
But now i have this problem that I must say if the sequence is monotone:
$(a_n) = (-1)^n + 2n$
which to me works out that it is bounded by 0 and 4, ie that it bounces between 0 and 4 ( if n is odd then the value is 0 if even then its 4)
Surely this goes against monotone sequence??? But my answer says that it IS monotone. How can it be monotone when it is neither increasing nor decreasing??
May 15th 2011, 01:51 AM
A monotonically increasing sequence is one for which each term is greater than or equal to the term before it; if each term is strictly greater than the one preceding it, the sequence is called
strictly monotonically increasing.
You have a monotonically increasing sequence since $a_{n+1}-a_n \geq 0$, but it's not strictly monitonically increasing.
May 15th 2011, 01:55 AM
Thanks, I was just replying as I picked up my error (I was confusing the $a_n + 1 - a_n$ with the original sequence. Ok I see it is increasing.
But what do you mean by "it is not strictly monotonically increasing?"
Thank you
May 15th 2011, 02:13 AM
In order for it to be strictly monitonically increasing, you need $a_{n+1} - a_n > 0$ | {"url":"http://mathhelpforum.com/differential-geometry/180632-monotone-sequence-how-can-monotone-sequence-if-bounces-between-numbers-print.html","timestamp":"2014-04-19T23:38:52Z","content_type":null,"content_length":"6344","record_id":"<urn:uuid:35fca97f-07d6-48e9-9c07-7cb4ea99241e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00467-ip-10-147-4-33.ec2.internal.warc.gz"} |
cheat sheets
Cheat Sheets
Take a look at our free cheat sheets, formula sheets and tables! Use our free math and science resources to help you do your homework or study for your next exam. Please note that most of these are
pdf files so you will need Adobe Viewer to view them.
Coming soon… | {"url":"http://prodigypreptutoring.com/cheat-sheets/","timestamp":"2014-04-20T15:50:27Z","content_type":null,"content_length":"32918","record_id":"<urn:uuid:3e2f48b6-e825-452a-8bdd-9b1e2c035f87>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00457-ip-10-147-4-33.ec2.internal.warc.gz"} |
West Mclean Math Tutor
Find a West Mclean Math Tutor
...I have a bachelor's degree magna cum laude from U.C. Berkeley in Zoology. I have a Ph.D. in Molecular Genetics.
25 Subjects: including ACT Math, SAT math, prealgebra, algebra 1
...Each student has a different way of learning a subject. In tutoring, I always make it a point to figure out the student's style of learning and I plan my tutoring sessions accordingly, spending
extra time to prepare for the session prior to meeting with the student. My broad background in math,...
16 Subjects: including algebra 1, algebra 2, calculus, geometry
My name is Nadeene and I've always enjoyed math and science. I thoroughly enjoy tutoring students and demystifying these sometimes intimidating subjects. I was a volunteer science facilitator for
five years with a non-profit organization based in Philadelphia, teaching students in grades 2-6 vario...
14 Subjects: including algebra 2, prealgebra, precalculus, reading
...I know French, Spanish and Portuguese. I possess a BA in French and lived in Paris for a summer. I speak Spanish fluently and lived in Buenos Aires, Argentina for a summer.
19 Subjects: including prealgebra, logic, probability, Spanish
Hey all! My name is Jenna and I'm an enthusiastic and well-rounded professional who loves to teach and help others. I have a BA in International Studies and spent the last year and a half in a
village in Tanzania (East Africa) teaching in- and out-of-school youth a variety of topics including English, health, life skills, and civics.
35 Subjects: including ACT Math, geometry, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/west_mclean_math_tutors.php","timestamp":"2014-04-20T23:50:04Z","content_type":null,"content_length":"23560","record_id":"<urn:uuid:bb3757d6-86ec-40ce-a721-554aa9863201>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00499-ip-10-147-4-33.ec2.internal.warc.gz"} |
Book Recommendation: Automata and Computability
Michael Roach <mcr@visi.com>
16 May 1999 15:35:00 -0400
From comp.compilers
| List of all articles for this month |
From: Michael Roach <mcr@visi.com>
Newsgroups: comp.compilers
Date: 16 May 1999 15:35:00 -0400
Organization: Compilers Central
Keywords: books
I just had to post a recommendation for a book I have been reading as
of late. It's called Automata and Computability by Dexter C. Kozen, and
its ISBN # 0-387-94907-0. It looks to be a first edition printing.
The book consists of the author's lecture notes from a 1 semester
senior-level course he taught at Cornell Univeristy entiled: Automata
and Computability Theory. I have been teaching myself automata and set
theory for several years now, but have constantly found myself
struggling with even the simplest of problems. This books has answered
my questions, and for once I feel confident reading some of the more
abstract papers that abound on the subject. It's written very plainly,
with enough proofs to show how and why things work, but it doesn't
over do it like so many other texts do.
If your having difficulty understanding DFAs, NFAs, CFGs, etc and their
construction along with the theory that makes it all happen, then this
is a book to read. It does require a bit of mathematical sophistication,
but not much, and where it is needed the book provides alot of help.
Another plus is its really cheap (to me anyways) and costs roughly $39 US
through Barnes and Noble. Maybe now I'll actually be able to contribute
to this wonderful newsgroup rather than constantly asking questions :)
Table of Contents
1 Course Roadmap and Historical Perspective
2 Strings and Sets
Finite Automata and Regular Sets
3 Finite Automata and Regular Sets
4 More on Regular Sets
5 Nondeterministic Finite Automata
6 The Subset Construction
7 Pattern Matching
8 Pattern Matching and Regular Expressions
9 Regular Expressions and Finite Automata
A Kleene Algebra and Regular Expressions
10 Homomorphisms
11 Limitations of Finite Automata
12 Using the Pumping Lemma
13 DFA State Minimization
14 A Minimization Algorithm
15 Myhill–Nerode Relations
16 The Myhill–Nerode Theorem
B Collapsing Nondeterministic Automata
C Automata on Terms
D The Myhill–Nerode Theorem for Term Automata
17 Two‐Way Finite Automata
18 2DFAs and Regular Sets
Pushdown Automata and Context‐Free Languages
19 Context‐Free Grammars and Languages
20 Balanced Parentheses
21 Normal Forms
22 The Pumping Lemma for CFLs
23 Pushdown Automata
E Final State Versus Empty Stack
24 PDAs and CFGs
25 Simulating NPDAs by CFGs
F Deterministic Pushdown Automata
26 Parsing
27 The Cocke–Kasami–Younger Algorithm
G The Chomsky–Sch&uouml;tzenberger Theorem
H Parikh's Theorem
Turing Machines and Effective Computability
28 Turing Machines and Effective Computability
29 More on Turing Machines
30 Equivalent Models
31 Universal Machines and Diagonalization
32 Decidable and Undecidable Problems
33 Reduction
34 Rice's Theorem
35 Undecidable Problems About DFLs
36 Other Formalisms
37 The &lgr;‐Calculus
I While Programs
J Beyond Undecidability
38 Gödel's Incompleteness Theorem
39 Proof of the Incompleteness Theorem
K Gödel's Proof
Homework Sets
Homework 1
Homework 2
Homework 3
Homework 4
Homework 5
Homework 6
Homework 7
Homework 8
Homework 9
Homework 10
Homework 11
Homework 12
Miscellaneous Exercises
Finite Automata and Regular Sets
Pushdown Automata and Context‐Free Languages
Turing Machines and Effective Computability
Hints and Solutions
Hints for Selected Miscellaneous Exercises
Solutions to Selected Miscellaneous Exercises
Michael Charles Roach (_)
Saint Paul, MN USA Flying Enthusiast, PP-ASEL
mcr@(tiny|yuck).net Ham Radio Call Sign KB0LXV
Post a followup to this message
Return to the comp.compilers page.
Search the comp.compilers archives again. | {"url":"http://compilers.iecc.com/comparch/article/99-05-065","timestamp":"2014-04-20T03:26:49Z","content_type":null,"content_length":"7621","record_id":"<urn:uuid:156dd3c0-67b8-4c08-a8dd-6ff792bcaa84>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00054-ip-10-147-4-33.ec2.internal.warc.gz"} |
Parabolic Iteration, again
04/30/2008, 10:03 AM
(This post was last modified: 05/01/2008 03:55 AM by andydude.)
Post: #1
andydude Posts: 467
Long Time Fellow Joined: Aug 2007
Parabolic Iteration, again
a recent post
I gave a forumla for the second diagonal of iterated-dxp (e^x-1) and this line of research has led to some interesting discoveries. I have since generalized the approach to "interpolating" the
diagonals of these series, and I found
generating functions
for the first
diagonals of parabolic iteration. I would like to present my findings and see if there is anything like this already out there...
It all started with noticing that the first diagonal is:
$f^{\circ t}(x) = \sum_{k=0}^{\infty} t^k x^{k+1} f_2^k + \cdots$
where the function being iterated is of the form
$f(x) = x + \sum_{k=2}^{\infty} f_k x^k$
. This naturally lead to investigating the second diagonal, which I found is not too different than that of iterated-dxp:
$- \sum_{k=0}^{\infty} t^k x^{k+2} f_2^k H_k^{(2)} <br /> \left(f_2 - \frac{f_3}{f_2}\right)$
but it involves a little more than just harmonic numbers. So I began looking at the third diagonal, and found some patterns, but I was only able to interpolate the coefficients up to a point, then I
was stuck with a sequence of rational numbers I had no idea what to do with, then I eventually found
which solved the problem I was having. Before I went to OEIS, I had found the coefficients of the third diagonal to be:
$\sum_{k=0}^{\infty} t^k x^{k+3} f_2^k \left({k+1 \atop 2}\right)<br /> \left(A^2 C_k + D\right)$
where A and D are constants (described later), and
was the rational sequence [0, 1, 3/2, 71/36, 29/12, 638/225, 349/108, ...], which according to OEIS is equivalent to
$C_n = \frac{1}{n} \sum_{k=1}^{n} \frac{H^{(2)}_k}{n+1-k}$
where H is Conway and Guy's harmonic numbers (not the usual generalized harmonic numbers). Once I had the generating functions from OEIS, then I could being playing the game of
. So I took this huge expression (D is "large" when written out) for the third diagonal, and played with derivatives and integrals until it was a recognizable function that generated the right
coefficients. Maybe I'll post a more in-depth discussion of the techniques I used later on, but for now, I just want to show the results.
Going back to the first diagonal:
$\sum_{k=0}^{\infty} t^k x^{k+1} f_2^k = \frac{x}{1- f_2 t x}$
and according to OEIS the generating function of the 2nd degree harmonic numbers is
, which means the second diagonal is:
$- \sum_{k=0}^{\infty} t^k x^{k+2} f_2^k H_k^{(2)} <br /> A = x^2 \frac{\log(1 - f_2 t x)}{(1 - f_2 t x)^2} A$
$A = \left(f_2 - \frac{f_3}{f_2}\right)$
and using the new generating functions for
, we find the generating function for the third diagonal is:
$\sum_{k=0}^{\infty} t^k x^{k+3} f_2^k \left({k+1 \atop 2}\right)<br /> \left(A^2 C_k + D\right) = x^3 \frac{A^2(\log(1 - f_2 t x)^2 - \log(1 - f_2 t x)) + (f_2 t x) D}{(1 - f_2 t x)^3}$
and finally, written out in full:
$<br /> \begin{tabular}{rl}<br /> f^{\circ t}(x) &<br /> = \frac{x}{z}<br /> \\ & + \left(\frac{x}{z}\right)^2 \left(f_2 - \frac{f_3}{f_2}\right) \log(z) <br /> \\ & + \left(\frac{x}{z}\right)^3 \
left[<br /> \left(f_2 - \frac{f_3}{f_2}\right)^2 (\log(z)^2 - \log(z)) +<br /> (1-z)\left(\frac{f_2}{2} \left(f_2 - \frac{f_3}{f_2}\right) - \left(\frac{f_3}{f_2}\right)^2 + \frac{f_4}{f_2} \right)
<br /> \right]<br /> \\ & + \cdots<br /> <br /> \end{tabular}<br />$
$z = 1 - f_2 t x$
The most fascinating part, though, is that
only appears in
Andrew Robbins
04/30/2008, 10:19 AM
Post: #2
Ivars Posts: 366
Long Time Fellow Joined: Oct 2007
RE: Parabolic Iteration, again
Hi Andrew,
For basic readers like me, what exactly is ${f_2}$ in $z = 1 - f_2 t x$? and is $z = 1 - f_2 t x = 1- f_2 *t* x$?
Thank You in advance,
04/30/2008, 04:30 PM
(This post was last modified: 05/05/2008 10:41 PM by andydude.)
Post: #3
andydude Posts: 467
Long Time Fellow Joined: Aug 2007
RE: Parabolic Iteration, again
Ivars Wrote:For basic readers like me, what exactly is ${f_2}$ in $z = 1 - f_2 t x$? and is $z = 1 - f_2 t x = 1- f_2 *t* x$?
$f_2 = \frac{f''(0)}{2}$
and in general,
$f_k = \frac{1}{k!}\frac{d^k}{dx^k}f(0) = \frac{1}{k!} \left[ \frac{d^k}{dx^k}f(x)\right]_{x=0}$
. These are the coefficients of a
Taylor series
And yes, that's normal multiplication.
Any other questions?
Andrew Robbins
04/30/2008, 07:02 PM
Post: #4
Ivars Posts: 366
Long Time Fellow Joined: Oct 2007
RE: Parabolic Iteration, again
Is not
$(\log(z)^2 - \log(z))$ just $\log(z)$?
04/30/2008, 07:34 PM
Post: #5
andydude Posts: 467
Long Time Fellow Joined: Aug 2007
RE: Parabolic Iteration, again
Ivars Wrote:Is not $(\log(z)^2 - \log(z))$ just $\log(z)$?
$(\log(z)^2 - \log(z)) = \log(z)(\log(z)-1)$
$(\log(z^2) - \log(z)) = \log(z)$
Do not confuse
, they are two completely different functions.
Andrew Robbins
05/03/2008, 08:10 PM
(This post was last modified: 05/03/2008 08:13 PM by andydude.)
Post: #6
andydude Posts: 467
Long Time Fellow Joined: Aug 2007
RE: Parabolic Iteration, again
From these generating functions is it easy to see that parabolic iteration does not work for $f_2 = 0$, which I believe has already been proven by someone, somewhere. What is interesting is that
there are actually 2 reasons for this. The first reason is that there are many $f_2$ in the denominator, which cannot be zero, and the second reason is that if $f_2 = 0$, then $z=1$ which means t
plays no part in the equations at all.
Also, I wonder if studying the special case $f^{\circ 1/x}(x)$ would yield more insights, as this would imply that $z = 1 - f_2$, so there wouldn't be an x in the denominator. This would make finding
more diagonals easier.
Andrew Robbins
PS. I think it was either Bennet or Jabotinsky that showed $f_2 = 0$ doesn't work.
05/04/2008, 07:14 AM
Post: #7
bo198214 Posts: 1,365
Administrator Joined: Aug 2007
RE: Parabolic Iteration, again
I admit I didnt dive completely into your derivations.
First question what is the parabolic flow matrix?
Is it the Carleman/Bell-Matrix of parabolic iteration
$f^{\circ t}(x)$
Why is it then important to know the diagonals? Because the first entry is the coefficient of the series of
$f^{\circ t}(x)$
andydude Wrote:From these generating functions is it easy to see that parabolic iteration does not work for $f_2 = 0$,
which I believe has already been proven by someone, somewhere. What is interesting is that there are actually 2 reasons for this. The first reason is that there are many $f_2$ in the denominator,
which cannot be zero, and the second reason is that if $f_2 = 0$, then $z=1$ which means t plays no part in the equations at all.
I dont get this, lets look at the
double binomial formula
, which should give the same coefficients if I understood that right. There are no denominators depending on the value of any
. I only know that in
hyperbolic iteration
there occurs
in the denominator.
So lets clarify the basics first
05/05/2008, 05:26 AM
Post: #8
andydude Posts: 467
Long Time Fellow Joined: Aug 2007
RE: Parabolic Iteration, again
What is the flow matrix?
flow matrix
(although it could also be called
iterational matrix
... see
) is the matrix of coefficients
$f^{\circ t}(x) = \sum_{j=0}^{\infty}\sum_{k=0}^{\infty} A_{jk} t^j x^k$
which are obtained from any method (usually a special case of regular iteration), for parabolic iteration there will only be a finite number of t's, but for hyperbolic iteration (yes, the flow matrix
would apply to that as well) this matrix is not triangular (as it is with parabolic iteration). For parabolic iteration the "flow series" is:
$<br /> \begin{tabular}{rl}<br /> f^{\circ t}(x)<br /> & = x \\ <br /> & + x^2 \left( tf_2 \right) \\ <br /> & + x^3 \left( t(f_3-f_2^2) + t^2f_2^2 \right) \\ <br /> & + x^4 \left( t\left(\frac{f_2}
{2}(3f_2^2 - 5f_3) + f_4\right) + t^2\left(\frac{5f_2}{2}(f_3-f_2^2) \right) + t^3f_2^3 \right) \\ <br /> & + \cdots<br /> \end{tabular}<br />$
this corresponds to the "flow matrix":
$<br /> \left[\begin{tabular}{cccccc}<br /> 0 & 0 & 0 & 0 & 0 & \cdots \\<br /> 1 & 0 & 0 & 0 & 0 & \cdots \\<br /> 0 & f_2 & 0 & 0 & 0 & \cdots \\<br /> 0 & (f_3-f_2^2) & f_2^2 & 0 & 0 & \cdots \\
<br /> 0 & \left(\frac{f_2}{2}(3f_2^2 - 5f_3) + f_4\right)<br /> & \left(\frac{5f_2}{2}(f_3-f_2^2) \right)<br /> & f_2^3 & 0 & \cdots \\<br /> \vdots & \vdots & \vdots & \vdots & \vdots & \ddots<br
/> \end{tabular}\right]<br />$
However, the flow matrix is not limited to parabolic iteration, but applies to hyperbolic iteration as well. Since the coefficients of hyperbolic iteration are not polynomials, there is a big
difference between the series and the matrix, which may serve to illustrate the need for an flow matrix. For hyperbolic iteration the "flow series" is:
$<br /> \begin{tabular}{rl}<br /> f^{\circ t}(x)<br /> & = x f_1^t \\ <br /> & + x^2 \frac{f_1^{t-1}(f_1^{t} - 1)f_2}{(f_1 - 1)} \\ <br /> & + x^3 \frac{f_1^{t-2}(f_1^t-1)(f_1((f_1-1)f_3-2f_2^2) +
f_1^t(2f_2^2 + (f_1-1)f_1f_3))}{(f_1-1)^2(f_1+1)} \\ <br /> & + \cdots<br /> \end{tabular}<br />$
which corresponds to the "flow matrix":
$<br /> \left[\begin{tabular}{cccc}<br /> 0 & 0 & 0 & \cdots \\<br /> 1 & \log(f_1) & \log(f_1)^2/2 & \cdots \\<br /> 0 &<br /> \frac{f_2\log(f_1)}{(f_1-1)f_1} & <br /> \frac{3f_2\log(f_1)^2}{2
(f_1-1)f_1} & \cdots \\<br /> \vdots & \vdots & \vdots & \ddots<br /> \end{tabular}\right]<br />$
Is it the Carleman matrix of parabolic iteration?
No, it is the first row of the Carleman matrix^t, or the first column of the Bell matrix^t, meaning both have been raised to the t. The flow matrix is simply a different expression for the flow
Why is it important to know the diagonals?
Well, it seems important to know the asymptotic behavior of the coefficients, and it is very difficult to know the asymptotic behavior of a sequence you only have the first few members of, so my goal
with these diagonals is to provide a formula that we can use for root-tests and other convergence tests, so the world will stop calling them "formal power series" and start calling them functions.
That is my ultimate goal.
But then again, how am I helping when I only have 3 diagonals, and we don't need to find asymptotic behavior of
$t^k x^k$
but of
? Well, it is obvious that the diagonals follow more of a pattern than the columns, so it seems easier to interpolate the diagonals than the columns of the flow matrix. My hope is that after enough
diagonals are found, then a pattern will be found in the columns, as well.
Are these the same as Jabotinsky's double-binomial formula?
Yes. However, that formula is expressed in terms of the coefficients of the
-th iterate of a function. The only way I've seen Daniel Geisler write the flow series has been in terms of the coefficients of the 1-st iterate only, which is a major distinction between
Jabotinsky's, and the flow series.
Andrew Robbins
05/05/2008, 08:30 AM
Post: #9
andydude Posts: 467
Long Time Fellow Joined: Aug 2007
RE: Parabolic Iteration, again
bo198214 Wrote:I admit I didnt dive completely into your derivations.
I'm sorry for the terseness, it was very complicated, and the way I found them was to look at lots of
finite differences
and finite quotients until I saw a pattern. I will describe my derivation techniques as soon as I finish the "HyperSage" library.
Also, for comparison, Gottfried gives the same "parabolic flow matrix" as table (1.6.1.4.) in
Andrew Robbins
05/05/2008, 05:33 PM
Post: #10
bo198214 Posts: 1,365
Administrator Joined: Aug 2007
RE: Parabolic Iteration, again
andydude Wrote:I'm sorry for the terseness,
No, no, its ok. I just lacked the basics, thats all. Thank you for your very nice explanation. I just have digest it and then look at the case
. Because I dont believe that it should not be possible, the question would be what goes wrong in the case
, can only be a convergence thing. Perhaps Walker excluded that case in his construction of an entire superexponential.
User(s) browsing this thread: 1 Guest(s) | {"url":"http://math.eretrandre.org/tetrationforum/showthread.php?tid=156","timestamp":"2014-04-17T12:41:08Z","content_type":null,"content_length":"58370","record_id":"<urn:uuid:649c9700-a0f5-435a-baa2-b90bb08b0d56>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00458-ip-10-147-4-33.ec2.internal.warc.gz"} |
234pages on
this wiki
The map view of Atlantis displays the area surrounding your kingdom. From here, you may view squares that can contain several different locations. By default, your map will open centered on your
city. Throughout Atlantis, you may see Wildernesses, Anthropus Camps, player owned kingdoms and outpostsa
Anthropus Camps contain invaders and resources and can be spied on or attacked.
• Conquering savannas or lakes will increase food production.
• Conquering forests will increase lumber production.
• Conquering hills will increase your stone production.
• Conquering mountains will increase your metals production.
• Conquering plains gains you no resource production boost, but you need them to build outposts.
• Mysterious Clouds are testing sites (just ignore them).
For more info about wilds and ACs, visit Wilderness and Anthropus Camps.
Production Increase by wildernesses
│ Level │Increasing │
│ Level 1 │5% │
│ Level 2 │10% │
│ Level 3 │15% │
│ Level 4 │20% │
│ Level 5 │25% │
│ Level 6 │30% │
│ Level 7 │35% │
│Level 8 │40% │
│ Level 9 │45% │
│Level 10 │50% │
The Dark Secrets of the DoA Coordinate System
Are you always getting lost with those coordinate numbers when scrolling along the map? Well not anymore after you read this article!
This is no surprise, because DoA uses a very strange coordinate system. I don't see any advantage in it, maybe part of the out-of-this-world atmosphere in this game.
the reason is the 3D effect
So throw away everything you learned in school about maps and coordinates, you really gonna have to learn how to read maps like an Atlantean.
Coordinates are written in the form x/y. The x axis runs horizontal, increasing toward the right of the map. This is what you would expect.
Now you would expect the y axis to run vertical, increasing toward the top or bottom. But no, Surprisingly it runs diagonal, increasing toward the lower right corner of the map.
To make things clearer, I made a drawing: centered around square 10/10 (to avoid the map rollover)
DoA Coordinate system
You can see two very weird properties about this kind of coordinates:
The squares 9/9 and 11/11 (in green) are not adjacent to 10/10 in any direction.
The y coordinate changes per two along the vertical axis.
However the map is endless and as soon as it reaches 749 (the highest coordinate for both x and y) it goes back to 0.
Calculating Travel Time
In the example map, the four spaces shown adjacent to 10/10 non-diagionally are 10/9, 11/9, 10/11 and 9/11. But, in travel time, the four adjacent spaces are actually 10/9, 11/10, 10/11 and 9/10.
This makes a lot more sense. Unfortunately, it means that the map is misleading in judging travel time. Looking at the map, you'd think that if you start at 10/10, the travel time to 10/9 and 11/9
would be the same. They are not. Instead, the travel time to 10/9 and 11/10 are the same.
The total travel time for an attacker is its muster time (the time it spends just getting ready to go out on an attack) plus a factor based on distance. With level 5 Rapid Deployment and Level 6
Dragonry, LBMs take 16s to travel one unit of distance and SSDs take 4s. Travel time for other units and research levels can be calculated by setting up a dummy attack that's one unit away, noting
the time, and then setting up a second dummy attack that's two units away and then taking the difference. So, if a city is in 400/300, the first dummy attack could be to 400/301 and the second to 400
The muster time is always 30 seconds.
The distance can be calculated with a little algebra. The first step is to determine the x and y distance. Since the map is 750 units wide and wraps in both directions, this will be either the larger
coordinate minus the smaller coordinate or 750 plus the smaller coordinate minus the larger coordinate, whichever is less. Next, add the square of the x distance to the square of the y distance and
then take the square root of the sum. That's the Pythagorean Theory. Multiply the distance by the travel time per unit and add the muster time and that's the total travel time.
For example, if a city is in 400/300, an attack is sent to 427/281 and the attackers take 16s per unit of travel, then total travel time would be 9m 18s. The calculation is as follows:
x = 27 (the lesser of 427 - 400 or 750 + 400 - 427)
y = 19 (the lesser of 300 - 281 or 750 + 281 - 300)
((27^2 + 19^2)^.5) * 16 + 30
There is an easier way to calculate travel time.
1. Find & Click on target
2. Click "Attack"
3. Choose your desired troops
4. Check the "Estimated Time" for travel at the bottom of window (Follow Step 5 to initiate Attack)
5. Click "Attack"
6. Check the "Marching" Bar to see if Estimated Time is correct.
There is a 97% chance both "times" will be insync. In the Image examples both times were "8m 19s" even though the Marching Bar says "8m 16s". It took me 3 seconds to re-frame & re-position my
Snipping Tool to take a screen shot of the Marching Bar after clicking "Attack" in the Muster Point window.
Smaller distances will always be 99% in sync between MP window & Marching Bar. Medium to Large distances will be about 95-97% in sync.
Page last updated: 2014-04-18 05:04 (UTC) | {"url":"http://dragonsofatlantis.wikia.com/wiki/Map","timestamp":"2014-04-18T10:34:01Z","content_type":null,"content_length":"72956","record_id":"<urn:uuid:3875540c-e326-4479-bfe2-bd4cb4ad029e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00520-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Storing the value of a variable
Replies: 1 Last Post: Feb 10, 2013 3:30 AM
Re: Storing the value of a variable
Posted: Feb 10, 2013 3:30 AM
On 2/9/2013 9:22 PM, Subash Padmanaban wrote:
> a=mmreader('abcd.avi');
> for k= 55 : 65
> frame=read(a,k);
> level= graythresh(frame);
> fig=im2bw(frame,level);
> imshow(fig);
> numberofpoints=1;
> [x y]=ginput(numberofpoints);
> plot(x,y);
> end
> My problem is that, I want to store the values of x and y in an
> array and later plot it. How do I go about with this? Also, every
> time the number of frames will not be the same.
You know the total size of x and y needed? It is the
length of each loop * numberofpoints. Hence you
can preallocate a matrix to store then in.
close all;
N = 3;
numberofpoints = 4;
data = zeros(numberofpoints,2,N); %store all points
for i=1:N
[data(:,1,i) data(:,2,i)] = ginput(numberofpoints);
%get the points out and plot them
X = permute(data,[1 3 2]);
X = reshape(X,[],size(data,2),1) | {"url":"http://mathforum.org/kb/message.jspa?messageID=8293157","timestamp":"2014-04-16T16:32:35Z","content_type":null,"content_length":"15043","record_id":"<urn:uuid:0af0d43d-af74-4b77-ad62-ff683829704e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00405-ip-10-147-4-33.ec2.internal.warc.gz"} |
stability, in mathematics, condition in which a slight disturbance in a system does not produce too disrupting an effect on that system. In terms of the solution of a differential equation, a
function f(x) is said to be stable if any other solution of the equation that starts out sufficiently close to it when x = 0 remains close to it for succeeding values of x. If the difference between
the solutions approaches zero as x increases, the solution is called asymptotically stable. If a solution does not have either of these properties, it is called unstable.
For example, the solution y = ce^-x of the equation y′ = -y is asymptotically stable, because the difference of any two solutions c[1]e^-x and c[2]e^-x is (c[1] - c[2])e^-x, which always approaches
zero as x increases. The solution y = ce^x of the equation y′ = y, on the other hand, is unstable, because the difference of any two solutions is (c[1] - c[2])e^x, which increases without bound as x
increases. A given equation can have both stable and unstable solutions. For example, the equation y′ = -y(1 - y)(2 - y) has the solutions y = 1, y = 0, y = 2, y = 1 + (1 + c^2e^-2x)^-^1/[2], and y =
1 - (1 + c^2e^-2x)^-^1/[2] (see). All these solutions except y = 1 are stable because they all approach the lines y = 0 or y = 2 as x increases for any values of c that allow the solutions to start
out close together. The solution y = 1 is unstable because the difference between this solution and other nearby ones is (1 + c^2e^-2x)^-^1/[2], which increases to 1 as x increases, no matter how
close it is initially to the solution y = 1.
Stability of solutions is important in physical problems because if slight deviations from the mathematical model caused by unavoidable errors in measurement do not have a correspondingly slight
effect on the solution, the mathematical equations describing the problem will not accurately predict the future outcome. Thus, one of the difficulties in predicting population growth is the fact
that it is governed by the equation y = ax^ce, which is an unstable solution of the equation y′ = ay. Relatively slight errors in the initial population count, c, or in the breeding rate, a, will
cause quite large errors in prediction, even if no disturbing influences occur. | {"url":"http://www.britannica.com/print/topic/562235","timestamp":"2014-04-16T13:09:09Z","content_type":null,"content_length":"10324","record_id":"<urn:uuid:83bbc0b4-cede-49d9-9f19-035b4dbc9399>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00368-ip-10-147-4-33.ec2.internal.warc.gz"} |
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 233.05017
Autor: Erdös, Paul
Title: On a problem of Grünbaum. (In English)
Source: Can. Math. Bull. 15, 23-25 (1972).
Review: The following problem is stated by Grünbaum: Determine the sequence of integers m^(n)[1] < m^(n)[2] < ... so that for every i there is a set of n points in the plane which determine exactly m
^(n)[i] lines. The author proves that there is a c[1] so that for every c[1]n^3/2 < m \leq \binom{n}{2}, m \ne \binom{n}{2}-1, m \ne \binom{n}{2}-3 there is a set of n points which determines exactly
m lines. The result is best possible (apart from the value of c[1]). The principal tool is a result of L. M. Kella and W. O. J. Moser [Can. J. Math. 10, 210-219 (1958; Zbl 081.15103)]. Several
unsolved problems are stated.
Classif.: * 11B83 Special sequences of integers and polynomials
00A07 Problem books
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
│Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │
│Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │
│Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │ | {"url":"http://www.emis.de/classics/Erdos/cit/23305017.htm","timestamp":"2014-04-21T14:44:49Z","content_type":null,"content_length":"3643","record_id":"<urn:uuid:af3da89a-f4d1-4667-9ab2-a76489674fe5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00595-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proposition 24
If two triangles have two sides equal to two sides respectively, but have one of the angles contained by the equal straight lines greater than the other, then they also have the base greater than the
Let ABC and DEF be two triangles having the two sides AB and AC equal to the two sides DE and DF respectively, so that AB equals DE, and AC equals DF, and let the angle at A be greater than the angle
at D.
I say that the base BC is greater than the base EF.
Since the angle BAC is greater than the angle EDF, construct the angle EDG equal to the angle BAC at the point D on the straight line DE. Make DG equal to either of the two straight lines AC or DF.
Join EG and FG.
Since AB equals DE, and AC equals DG, the two sides BA and AC equal the two sides ED and DG, respectively, and the angle BAC equals the angle EDG, therefore the base BC equals the base EG.
Again, since DF equals DG, therefore the angle DGF equals the angle DFG. Therefore the angle DFG is greater than the angle EGF.
Therefore the angle EFG is much greater than the angle EGF.
Since EFG is a triangle having the angle EFG greater than the angle EGF, and side opposite the greater angle is greater, therefore the side EG is also greater than EF.
But EG equals BC, therefore BC is also greater than EF.
Therefore if two triangles have two sides equal to two sides respectively, but have one of the angles contained by the equal straight lines greater than the other, then they also have the base
greater than the base. | {"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookI/propI24.html","timestamp":"2014-04-21T12:16:55Z","content_type":null,"content_length":"4389","record_id":"<urn:uuid:55b28e78-0f7a-4fdf-a035-afe4e3df6dcd>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00646-ip-10-147-4-33.ec2.internal.warc.gz"} |
Explained: The Shannon limit
Claude Shannon's clever electromechanical mouse, which he called Theseus, was one of the earliest attempts to teach a machine to learn and one of the first experiments in artificial intelligence.
Photo: Bell Labs
It's the early 1980s, and you’re an equipment manufacturer for the fledgling personal-computer market. For years, modems that send data over the telephone lines have been stuck at a maximum rate of
9.6 kilobits per second: if you try to increase the rate, an intolerable number of errors creeps into the data.
Then a group of engineers demonstrates that newly devised error-correcting codes can boost a modem’s transmission rate by 25 percent. You scent a business opportunity. Are there codes that can drive
the data rate even higher? If so, how much higher? And what are those codes?
In fact, by the early 1980s, the answers to the first two questions were more than 30 years old. They’d been supplied in 1948 by Claude Shannon SM ’37, PhD ’40 in a groundbreaking paper that
essentially created the discipline of information theory. “People who know Shannon’s work throughout science think it’s just one of the most brilliant things they’ve ever seen,” says David Forney, an
adjunct professor in MIT’s Laboratory for Information and Decision Systems.
Shannon, who taught at MIT from 1956 until his retirement in 1978, showed that any communications channel — a telephone line, a radio band, a fiber-optic cable — could be characterized by two
factors: bandwidth and noise. Bandwidth is the range of electronic, optical or electromagnetic frequencies that can be used to transmit a signal; noise is anything that can disturb that signal.
Given a channel with particular bandwidth and noise characteristics, Shannon showed how to calculate the maximum rate at which data can be sent over it with zero error. He called that rate the
channel capacity, but today, it’s just as often called the Shannon limit.
In a noisy channel, the only way to achieve zero error is to add some redundancy to a transmission. For instance, if you were trying to transmit a message with only three bits, like 001, you could
send it three times: 001001001. If an error crept in, and the receiver received 001011001 instead, she could be reasonably sure that the correct string was 001.
Any such method of adding extra information to a message so that errors can be corrected is referred to as an error-correcting code. The noisier the channel, the more information you need to add to
compensate for errors. As codes get longer, however, the transmission rate goes down: you need more bits to send the same fundamental message. So the ideal code would minimize the number of extra
bits while maximizing the chance of correcting error.
By that standard, sending a message three times is actually a terrible code. It cuts the data transmission rate by two-thirds, since it requires three times as many bits per message, but it’s still
very vulnerable to error: two errors in the right places would make the original message unrecoverable.
But Shannon knew that better error-correcting codes were possible. In fact, he was able to prove that for any communications channel, there must be an error-correcting code that enables transmissions
to approach the Shannon limit.
His proof, however, didn’t explain how to construct such a code. Instead, it relied on probabilities. Say you want to send a single four-bit message over a noisy channel. There are 16 possible
four-bit messages. Shannon’s proof would assign each of them its own randomly selected code — basically, its own serial number.
Consider the case in which the channel is noisy enough that a four-bit message requires an eight-bit code. The receiver, like the sender, would have a codebook that correlates the 16 possible
four-bit messages with 16 eight-bit codes. Since there are 256 possible sequences of eight bits, there are at least 240 that don’t appear in the codebook. If the receiver receives one of those 240
sequences, she knows that an error has crept into the data. But of the 16 permitted codes, there’s likely to be only one that best fits the received sequence — that differs, say, by only a digit.
Shannon showed that, statistically, if you consider all possible assignments of random codes to messages, there must be at least one that approaches the Shannon limit. The longer the code, the closer
you can get: eight-bit codes for four-bit messages wouldn’t actually get you very close, but two-thousand-bit codes for thousand-bit messages could.
Of course, the coding scheme Shannon described is totally impractical: a codebook with a separate, randomly assigned code for every possible thousand-bit message wouldn’t begin to fit on all the hard
drives of all the computers in the world. But Shannon’s proof held out the tantalizing possibility that, since capacity-approaching codes must exist, there might be a more efficient way to find them.
The quest for such a code lasted until the 1990s. But that’s only because the best-performing code that we now know of, which was invented at MIT, was ignored for more than 30 years. That, however,
is a story for the next installment of Explained.
Second part: Explained: Gallager codes.
not rated yet Jan 19, 2010
Any details about this "best-performing code"??
There is new error correction method very near Shannon's limit: which is finally linear (not exponential as usual), works for any noise levels (not only low as usual), allows to manipulate the amount
of redundancy added in continuous way (not simple fractions ratios as usual) and allows additionally to simultaneously compress and encrypt the data well.
Here is simulator of it's processing and link to the paper:
not rated yet Jan 19, 2010
Best performing code is a function of your noise characteristic.
Some channels have a probabilistic approach to noise (e.g. by flipping the occasional bit with each bit flip being randomly distributed)
Some channels have noise that occurs in blocks (e.g. when lightning strikes nearby you get a whole block of data that is corrupted)
It's also a function of how probable the individual bit sequences in your message are (i.e. it may be advantageous to encode frequently used signals in shorter bit sequences at the cost of making
very seldom occuring signals very long)
Shannon's papers are extremely interesting. I would recommend anyone who is into information encoding/transmission (or encrypting/decrypting) to read them.
not rated yet Jan 20, 2010
Theoretical limit is one thing, the real question is how to get near it in practice - not requiring exponential correction time as it is supposed to..
Ok,I've understood 30 years from now, but I think they've meant from 1990 when Gallager's LDPC was reinvented by MacKay. This near Shannon's limit approach usually requires square time for placing
the parity bits and exponential time for correction process and so can be used only for very low noises.
Strength of the approach from the link is that the verification for the correction process is made locally - we don't longer need to work simultaneously on the whole message as in sudoku-like LDPC
correction, but we expand our 'correction tree' bit by bit and have immediate probabilistic verification, making that the tree has finite expected width and so we need linear time for correction
process for any noise levels.
About Shannon's ideas for cryptography in times of quantum computers:
not rated yet Jan 20, 2010
The amazing thing that Shannon proved (his noisy channel coding theorem) is that you can get an *arbitrarily* small error probability on a noisy channel, without the data rate becoming zero. A highly
counterinituitive result.
not rated yet Jan 20, 2010
Why counterintuitive?
Sending such 'uncertain' bit - let say '1' with 1-p probability really denotes '1' and with p it was intended to be '0' - so the information which of theses cases it is contains
h(p)= -p lg(p)-(1-p)lg(1-p) bits
so such 'uncertain bit' contains 1-h(p) real bits of information - it's exactly the Shannon's limit. | {"url":"http://phys.org/news183117569.html","timestamp":"2014-04-17T06:50:16Z","content_type":null,"content_length":"79406","record_id":"<urn:uuid:27dc900f-051f-452b-82b3-2c69240b576e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kenilworth, NJ SAT Math Tutor
Find a Kenilworth, NJ SAT Math Tutor
...My goal is to impart the knowledge I have gained through diligent study and daily practice, and to help others achieve their dreams the way that I did mine. If you choose to work with me and
utilize my effective learning methods, I guarantee that you will find extraordinary success in an efficie...
37 Subjects: including SAT math, English, algebra 1, Chinese
...I am experienced in both latin and standard dancing and can teach through the gold level. Standard dances include Waltz, Foxtrot, Viennese Waltz, Tango and Quickstep. Latin dances include
Rumba, Samba, Cha-Cha and Jive/Swing.
15 Subjects: including SAT math, calculus, GRE, geometry
...I first began to tutor at an after school center. There I was able to work with students who had challenging learning needs. I greatly enjoyed my time there because I was able to work with
students who were not typically given personal academic attention.
27 Subjects: including SAT math, English, reading, writing
...What this does is allow students to began thinking in a more abstract fashion before they approach the core laws in physics. Inorganic Chemistry --> Rather than force equations on students, I
use dimensional analysis to help students understand where the equations come from and why they work....
34 Subjects: including SAT math, chemistry, physics, calculus
...Over the past year, I have worked with students in various areas, ranging from algebra, calculus, chemistry, and physics to English and US History. I also specialize in standardized test prep
(SAT/ACT/ISEE/GRE). On the side, I continue to extend my own education by taking classes of interest t...
38 Subjects: including SAT math, Spanish, algebra 1, GRE
Related Kenilworth, NJ Tutors
Kenilworth, NJ Accounting Tutors
Kenilworth, NJ ACT Tutors
Kenilworth, NJ Algebra Tutors
Kenilworth, NJ Algebra 2 Tutors
Kenilworth, NJ Calculus Tutors
Kenilworth, NJ Geometry Tutors
Kenilworth, NJ Math Tutors
Kenilworth, NJ Prealgebra Tutors
Kenilworth, NJ Precalculus Tutors
Kenilworth, NJ SAT Tutors
Kenilworth, NJ SAT Math Tutors
Kenilworth, NJ Science Tutors
Kenilworth, NJ Statistics Tutors
Kenilworth, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/kenilworth_nj_sat_math_tutors.php","timestamp":"2014-04-21T05:17:29Z","content_type":null,"content_length":"24021","record_id":"<urn:uuid:d349700f-c724-451e-b354-557a71fe8b17>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
Explained: The Shannon limit
Claude Shannon's clever electromechanical mouse, which he called Theseus, was one of the earliest attempts to teach a machine to learn and one of the first experiments in artificial intelligence.
Photo: Bell Labs
It's the early 1980s, and you’re an equipment manufacturer for the fledgling personal-computer market. For years, modems that send data over the telephone lines have been stuck at a maximum rate of
9.6 kilobits per second: if you try to increase the rate, an intolerable number of errors creeps into the data.
Then a group of engineers demonstrates that newly devised error-correcting codes can boost a modem’s transmission rate by 25 percent. You scent a business opportunity. Are there codes that can drive
the data rate even higher? If so, how much higher? And what are those codes?
In fact, by the early 1980s, the answers to the first two questions were more than 30 years old. They’d been supplied in 1948 by Claude Shannon SM ’37, PhD ’40 in a groundbreaking paper that
essentially created the discipline of information theory. “People who know Shannon’s work throughout science think it’s just one of the most brilliant things they’ve ever seen,” says David Forney, an
adjunct professor in MIT’s Laboratory for Information and Decision Systems.
Shannon, who taught at MIT from 1956 until his retirement in 1978, showed that any communications channel — a telephone line, a radio band, a fiber-optic cable — could be characterized by two
factors: bandwidth and noise. Bandwidth is the range of electronic, optical or electromagnetic frequencies that can be used to transmit a signal; noise is anything that can disturb that signal.
Given a channel with particular bandwidth and noise characteristics, Shannon showed how to calculate the maximum rate at which data can be sent over it with zero error. He called that rate the
channel capacity, but today, it’s just as often called the Shannon limit.
In a noisy channel, the only way to achieve zero error is to add some redundancy to a transmission. For instance, if you were trying to transmit a message with only three bits, like 001, you could
send it three times: 001001001. If an error crept in, and the receiver received 001011001 instead, she could be reasonably sure that the correct string was 001.
Any such method of adding extra information to a message so that errors can be corrected is referred to as an error-correcting code. The noisier the channel, the more information you need to add to
compensate for errors. As codes get longer, however, the transmission rate goes down: you need more bits to send the same fundamental message. So the ideal code would minimize the number of extra
bits while maximizing the chance of correcting error.
By that standard, sending a message three times is actually a terrible code. It cuts the data transmission rate by two-thirds, since it requires three times as many bits per message, but it’s still
very vulnerable to error: two errors in the right places would make the original message unrecoverable.
But Shannon knew that better error-correcting codes were possible. In fact, he was able to prove that for any communications channel, there must be an error-correcting code that enables transmissions
to approach the Shannon limit.
His proof, however, didn’t explain how to construct such a code. Instead, it relied on probabilities. Say you want to send a single four-bit message over a noisy channel. There are 16 possible
four-bit messages. Shannon’s proof would assign each of them its own randomly selected code — basically, its own serial number.
Consider the case in which the channel is noisy enough that a four-bit message requires an eight-bit code. The receiver, like the sender, would have a codebook that correlates the 16 possible
four-bit messages with 16 eight-bit codes. Since there are 256 possible sequences of eight bits, there are at least 240 that don’t appear in the codebook. If the receiver receives one of those 240
sequences, she knows that an error has crept into the data. But of the 16 permitted codes, there’s likely to be only one that best fits the received sequence — that differs, say, by only a digit.
Shannon showed that, statistically, if you consider all possible assignments of random codes to messages, there must be at least one that approaches the Shannon limit. The longer the code, the closer
you can get: eight-bit codes for four-bit messages wouldn’t actually get you very close, but two-thousand-bit codes for thousand-bit messages could.
Of course, the coding scheme Shannon described is totally impractical: a codebook with a separate, randomly assigned code for every possible thousand-bit message wouldn’t begin to fit on all the hard
drives of all the computers in the world. But Shannon’s proof held out the tantalizing possibility that, since capacity-approaching codes must exist, there might be a more efficient way to find them.
The quest for such a code lasted until the 1990s. But that’s only because the best-performing code that we now know of, which was invented at MIT, was ignored for more than 30 years. That, however,
is a story for the next installment of Explained.
Second part: Explained: Gallager codes.
not rated yet Jan 19, 2010
Any details about this "best-performing code"??
There is new error correction method very near Shannon's limit: which is finally linear (not exponential as usual), works for any noise levels (not only low as usual), allows to manipulate the amount
of redundancy added in continuous way (not simple fractions ratios as usual) and allows additionally to simultaneously compress and encrypt the data well.
Here is simulator of it's processing and link to the paper:
not rated yet Jan 19, 2010
Best performing code is a function of your noise characteristic.
Some channels have a probabilistic approach to noise (e.g. by flipping the occasional bit with each bit flip being randomly distributed)
Some channels have noise that occurs in blocks (e.g. when lightning strikes nearby you get a whole block of data that is corrupted)
It's also a function of how probable the individual bit sequences in your message are (i.e. it may be advantageous to encode frequently used signals in shorter bit sequences at the cost of making
very seldom occuring signals very long)
Shannon's papers are extremely interesting. I would recommend anyone who is into information encoding/transmission (or encrypting/decrypting) to read them.
not rated yet Jan 20, 2010
Theoretical limit is one thing, the real question is how to get near it in practice - not requiring exponential correction time as it is supposed to..
Ok,I've understood 30 years from now, but I think they've meant from 1990 when Gallager's LDPC was reinvented by MacKay. This near Shannon's limit approach usually requires square time for placing
the parity bits and exponential time for correction process and so can be used only for very low noises.
Strength of the approach from the link is that the verification for the correction process is made locally - we don't longer need to work simultaneously on the whole message as in sudoku-like LDPC
correction, but we expand our 'correction tree' bit by bit and have immediate probabilistic verification, making that the tree has finite expected width and so we need linear time for correction
process for any noise levels.
About Shannon's ideas for cryptography in times of quantum computers:
not rated yet Jan 20, 2010
The amazing thing that Shannon proved (his noisy channel coding theorem) is that you can get an *arbitrarily* small error probability on a noisy channel, without the data rate becoming zero. A highly
counterinituitive result.
not rated yet Jan 20, 2010
Why counterintuitive?
Sending such 'uncertain' bit - let say '1' with 1-p probability really denotes '1' and with p it was intended to be '0' - so the information which of theses cases it is contains
h(p)= -p lg(p)-(1-p)lg(1-p) bits
so such 'uncertain bit' contains 1-h(p) real bits of information - it's exactly the Shannon's limit. | {"url":"http://phys.org/news183117569.html","timestamp":"2014-04-17T06:50:16Z","content_type":null,"content_length":"79406","record_id":"<urn:uuid:27dc900f-051f-452b-82b3-2c69240b576e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US5826251 - System of controlling or monitoring processes or industrial plants employing a dual-line fuzzy unit
Publication US5826251 A
Publication Grant
Application US 08/513,987
PCT number PCT/EP1994/000655
Publication Oct 20, 1998
Filing date Mar 5, 1995
Priority Mar 13, 1993
Fee status Paid
Also DE4308083C1, WO1994022073A1
published as
Publication 08513987, 513987, PCT/1994/655, PCT/EP/1994/000655, PCT/EP/1994/00655, PCT/EP/94/000655, PCT/EP/94/00655, PCT/EP1994/000655, PCT/EP1994/00655, PCT/EP1994000655, PCT/EP199400655, PCT/EP94
number /000655, PCT/EP94/00655, PCT/EP94000655, PCT/EP9400655, US 5826251 A, US 5826251A, US-A-5826251, US5826251 A, US5826251A
Inventors Harro Kiendl
Original Kiendl; Harro
Export BiBTeX, EndNote, RefMan
Patent Citations (11), Non-Patent Citations (8), Referenced by (8), Classifications (9), Legal Events (7)
External Links: USPTO, USPTO Assignment, Espacenet
System of controlling or monitoring processes or industrial plants employing a dual-line fuzzy unit
US 5826251 A
A dual-line system of controlling or monitoring processes or industrial plants employing a dual-line fuzzy unit is presented in which due allowance can be made for both positive and negative rules.
These are rules issuing "positive recommendations" and "warnings" or "prohibitions" for the selection of the values of the output variables for the fuzzy unit. The dual-line system enables logical
compromises to be made between these "recommendations" and "warnings" or "prohibitions". Along with the possibility of drawing on positive empirical knowledge, use of this dual-line system also
creates the possibility of utilizing negative empirical knowledge with the same degree of transparency with which positive empirical knowledge has previously been used in conventional fuzzy units.
The warnings serve to protect the plant or resources or to prevent undesirable control behavior. The inclusion of warnings or, more especially, prohibitions is also of interest for warranties of
operational reliability.
I claim:
1. System of controlling or monitoring processes or industrial plants achieved by generating an unambiguous scalar value of an output variable u or values of output variable u.sub.1, u.sub.2, . . . ,
u.sub.s at the output of a fuzzy unit, in which each of the output values comes from a continuum of possible values and the output variables are generated as a function of the value of an input
variable e or of the values of several input variables e.sub.1, e.sub.2, . . . , e.sub.r, from at least one sensor of said controlled or monitored processes or industrial plants, subsequently
combined to form a vector e for each output variable u.sub.i, by means of two parallel processing lines, each containing the sections fuzzification, conclusion in accordance with a rule basis
assisted by fuzzy logic and creation of a membership function on the output side, characterized in that, in the case of a scalar output variable u, the first processing line generates a membership
function μ.sub.c.sup.+ (u) on the basis of conclusions assisted by a first set of rules (rule basis I), the functional values of the said membership function indicating, in respect of each potential
value of the output variable, how far the first set of rules (rule basis I) on the whole recommends the said value of the output variable, further characterized in that the second processing line
generates a membership function μ.sub.c.sup.- (u) on the basis of conclusions assisted by a second set of rules (rule basis II), the functional values of the said membership function indicating, in
respect of each potential value of the output variable, how far the second set of rules (rule basis II) on the whole warns against use of the said value of the output variable, further that a
resulting membership function μ.sub.c (u) is generated from membership functions μ.sub.c.sup.+ (u) and μ.sub.c.sup.- (u) assisted by a hyperinference procedure (12) which makes a compromise between
the recommendations and warnings; in addition, that an unambiguous output value u(e) is generated from the membership function μ.sub.c (u) assisted by a hyper-defuzzification procedure (13); in
addition, that, in case of several output variables u.sub.1, u.sub.2, . . . , u.sub.s, the system described above in respect of one output variable is applied for each individual output variable
u.sub.1, u.sub.2, . . . , u.sub.s still further characterized in that the hyperinference procedure functions in accordance with a rule corresponding to the form μ.sub.c (u)=f(μ.sub.c.sup.+ (u),
μ.sub.c.sup.- (u)), while at the same time the function f possesses the three properties f(μ,0)=μ and f(μ,1)=0 as well as f(μ.sub.1 and μ.sub.2)≦f(μ.sub.1 and μ.sub.3) for μ.sub.2 ≧μ.sub.3, the first
effect of this rule being that the resulting membership function μ.sub.c (u) is identical to the membership function μ.sub.c.sup.+ (u) in respect of the values of u for which the membership function
μ.sub.c.sup.- (u) assumes the functional value of 0, which means that none of the second rules (rule basis II) warns against the use of this value of the output variable, further the second effect of
the said rule being that the resulting membership function μ.sub.c (u) assumes the functional value 0 in respect of such values of u for which the membership function μ.sub.c.sup.- (u) assumes the
functional value of 1 which means that the second rules (rule basis II) warn against the use of this value of the output variable in the maximum possible degree 1, further the third effect of the
said rule being that, proceeding from the given value of u and the same functional value μ.sub.c.sup.+ (u), the functional value of the resulting membership function μ.sub.c (u) tends to decrease as
the functional value of μ.sub.c.sup.- (u) increases, that is, the second rules (rule basis II) warn to an increased degree against the use of the value of the output variable.
2. System in accordance with claim 1, characterised in that the resulting membership functions μ.sub.e (u) or μ.sub.e (u.sub.1), μ.sub.e (u.sub.2), . . . μ.sub.e (u.sub.s) are conducted to further
fuzzy units for use in the same.
3. System in accordance with claims 1 or 2, characterised in that the hyperinference procedure generates the functional values μ.sub.e (u)=μ.sub.e.sup.+ (u) in respect of all values of the output
variable u to which μ.sub.e.sup.- (u)=0 applies and the functional values μ.sub.e.sup.- (u)=0 for all other values of the output variable u.
4. System in accordance with claims 1 or 2, characterised in that the hyperinference procedure generates the functional values μ.sub.e (u)=μ.sub.e.sup.+ (u) in respect of all values of the output
variable u to which μ.sub.e.sup.+ (u)≧μ.sub.e.sup.- (u) applies and the functional values μ.sub.e (u)=0 for all other values of the output variable u.
5. System in accordance with claims 1 or 2, characterised in that the hyperinference procedure functions in accordance with the rule μ.sub.e (u)=μ.sub.e.sup.+ (u) .right brkt-top. μ.sub.e.sup.- (u)
or a logically equivalent rule, so interpreted that in this rule the functional values of functions μ.sub.e.sup.+ (u) and μ.sub.e.sup.- (u) are to be used and evaluated in accordance with the
arithmetic rules defined for fuzzy operators " " and ".right brkt-top.", whereby different variants result in respect of this hyperinference procedure in as far as varying definitions exist for the
fuzzy operators.
6. System in accordance with claim 5, characterised in that the hyperinference procedure in a first procedural stage transforms the membership functions μ.sub.e.sup.+ (u) and μ.sub.e.sup.- (u) into
membership functions k.sup.+ (μ.sub.e.sup.+ (u)) and k.sup.- (μ.sub.e.sup.- (u)) and in that the second stage of the hyperinference procedure corresponds to the claim 5.
7. System in accordance with claim 6, characterised in that for transformation functions k.sup.+ (u) and k.sup.- (u) are used which increase monotonically in respect of μ≧0, which in μ=0 assume the
functional value of 0 and which assume the functional value of 1 in a finite value of μ or only asymptotically.
8. System in accordance with claim 7, characterised in that the hyperdefuzzification procedure breaks up the membership function μ.sub.e (u) into partial functions μ.sub.1 (u), μ.sub.2 (u), - - - ,
μ.sub.r (u) in accordance with rule μ.sub.i (u)=μ.sub.e (u) in respect of u .di-elect cons. c.sub.i, d.sub.i ! and μ.sub.i (u)=0 for u .epsilon slash. c.sub.i, d.sub.i !, in which case the intervals
c.sub.i, d.sub.i ! are formed according to the rule that the function μ.sub.e (u) assumes a preselected value p.sub.i ≧0 at interval limits c.sub.i, and d.sub.i and greater values within the
interval, in which case each partial function is then defuzzified separately and the resulting values μ.sub.j are employed for determining the final output-variable value u(e) of the fuzzy unit.
9. System in accordance with claim 8, characterised in that the hyperdefuzzification procedure allocates a weighting factor g.sub.i to each value u.sub.i and determines the value u.sub.i with the
greatest weighting factor as the final value of the output variable, while--where maximum weighting factors result of equal size--one of the associated values is selected as the value of the output
10. System in accordance with claim 9, characterised in that to each value u.sub.i, the weighting factor g.sub.j is formed as a function of the value F.sub.i of the area located under the functional
graph of partial function μ.sub.i (u) in respect of positive values or as a function of the height h.sub.i of the area centre of gravity S.sub.i of these areas above the coordinate axis μ=0, or as a
function of the maximum functional value μ.sub.i,max. of the partial function μ.sub.i (u) or as a function of the widths b.sub.i,L. and b.sub.i,R of intervals d.sub.i-1, c.sub.i ! and d.sub.i,
c.sub.i+1 ! left or right of the interval c.sub.i, d.sub.i ! in which μ.sub.i (u)=0 applies, or as a function of several of the variables F.sub.i, h.sub.i, μ.sub.i,max. and b.sub.i,L as well as
b.sub.i,R, in which case the resulting weighting factor g.sub.j will increase as the respective values of F.sub.i, h.sub.i, μ.sub.i,max. or b.sub.i,L or b.sub.i,R rise.
11. System in accordance with claims 9 or 10, characterised in that, in a first processing stage the value u.sub.j is determined with the largest weighting factor and that in a second processing
stage the final value of the output variable u(e) is formed by shifting u.sub.j within the interval c.sub.j, d.sub.j !, formed in other words by applying a rule u(e)=u.sub.j +Δ.sub.j with c.sub.j
≦u.sub.j +Δ.sub.j ≦d.sub.j.
12. System in accordance with claim 11, characterised in that the resulting shift Δ.sub.j according to Δ.sub.j =ΣΔ.sub.ji with i≠j is composed of partial amounts Δ.sub.ji whose prefix signs
correspond to the prefix sign of u.sub.i -u.sub.j and whose absolute values increase as g.sub.i, F.sub.i, h.sub.i and u.sub.i,max. rise in value, and whose absolute values decrease as the value of
g.sub.j, F.sub.j, h.sub.j and μ.sub.j,max. and additionally on the values c.sub.j, μ(c.sub.j), d.sub.j and u(d.sub.j).
13. System in accordance with claim 12, characterised in that the hyperdefuzzification procedure transforms the function μ.sub.e (u) into a function k (μ.sub.e (u)) during a first processing stage
and in that the second stage of the hyperdefuzzification procedure corresponds to claim 12.
14. System in accordance with claim 13, characterised in that use is made for the transformation process of a function k(μ) which reveals monotic rise in respect of μ≧0, such function assuming the
functional value of 0 in μ=0 and the functional value of 1 in a finite value μ or only asymptotically.
15. Fuzzy unit for implementing the system in accordance with claim 14, characterised in that it contains standard microprocessors and memory modules on the hardware side and comprises two
subprograms on the software side for sequential or parallel processing of the first set of rules (rule basis I) and second set of rules (rule basis II) as well as consisting of two further
subprograms for implementing the hyperinference procedure and the hyperdefuzzification procedure.
16. Fuzzy unit for implementing the system in accordance with claim 14, characterised in that it contains partial units on the hardware side consisting of standard microprocessors and memory modules,
special fuzzy chips, artificial neural networks, customised circuits or optical components, in which case the first two partial units are connected in parallel for parallel processing of the first
set of rules (rule basis I) and second set of rules (rule basis II) and that the other partial units are connected downstream from this for the purpose of implementing the hyperinference procedure
and hyperdefuzzification procedure.
17. System in accordance with claim 15 or 16, characterised in that it has conventional single-line fuzzy controllers for determining the membership function μ.sub.e.sup.+ (u) by processing the
positive rules and for determining the membership function μ.sub.e.sup.- (μ) by processing the negative rules in addition to one or more sub-units linked to these for determining membership function
μ.sub.e (μ) as well as for implementing the hyperdefuzzification process.
18. Application of the system in accordance with claim 14, for determining the parameters of a non-linear performance-characteristics controller for control functions, characterised in
that--proceeding from a fuzzy unit in accordance with claim 14--the parameters of a non-linear performance-characteristics controller are determined by an off line procedure in such a way that this
has identical or approximately the same input/output behaviour pattern as the aforementioned fuzzy unit.
19. System in accordance with claim 18, characterised in that the behaviour pattern of the fuzzy unit is approximated by a piece-by-piece affine facet function and that this is evaluated on digital
or analog lines.
20. Application of the system in accordance with claim 14 for the introduction of a dead zone for the output variable of a fuzzy controller, characterised in that, with the aid of prohibitive rules,
provision is made to ensure that the fuzzy controller ceases to react to minor deviations in the event of the control system being located almost in the steady position.
21. Application of the system in accordance with claim 14 for generating prohibitive bands for the output variable of a fuzzy controller, e.g. in the interest of protecting a multistage actuator
connected downstream as a means of preventing frequent switchover, characterised in that provision is made by the prohibitive rules to ensure that the value of the output variable on the fuzzy
controller does not assume any values in close proximity to the switchover thresholds.
22. Application of the system in accordance with claim 14 for constructing a fuzzy controller designed to protect the actuator or system, characterised in that provision is made for measurable
variables governing strain imposed on the actuator or system, e.g. the temperature or an algorithm which carries out time-weighted integration of the actuating variables supplied in the past by the
values of the actuator or loaded on to the system, and in which prohibitive rules are employed to warn against such output-variable values u on the controller liable to impose unacceptable strain on
the actuator or plant.
23. Application of the system in accordance with claim 14 for the prevention of undesirable vibrations on the plant when applying pulse-frequency modulation, characterised in that prohibitive rules
provide warning against such values u of the output variable on the fuzzy controller causing undesirable pulse frequencies.
24. Application of the system in accordance with claim 14 for vibration absorption, characterised in that determination first proceeds off line--e.g. by conducting an eigenvalue analysis or on
experimental lines--to find out what undesirable vibrations are apt to occur on the control system and what frequencies these vibrations have and then to establish on line--e.g. by conducting a
correlation analysis in respect of each potential value u of the output variable currently due to be administered--to what extent this value causes increase in vibrational excitation in connection
with the behaviour pattern of the output variable previously administered, and in which a prohibitive rule is then established which increasingly warns against the use of a value u as this degree of
excitation rises.
25. Application of the system in accordance with claim 14 for the purpose of system supervision, characterised in that, with the aid of positive and negative rules, the presence or absence of system
conditions u is indicated which are relevant for taking action at a higher process control level, e.g. by triggering alarms, where positive rules provide criteria indicating the extent to which such
conditions exist, whereas negative rules indicate to what degree the conditions are not given.
26. Application of the system in accordance with claim 14 for evaluating the aggregate quality u of complex processes according to scale u.sub.min ≦u≦u.sub.max, characterised in that positive rules
supply criteria for every possible aggregate quality u for assessing the degree to which this quality represents an adequate evaluation as applied to the relevant situation and negative rules supply
criteria for the extent to which it is inadequate, whilst a final aggregate quality is determined from this by means of hyperinference and hyperdefuzzification.
27. Application of the system in accordance with one of the claims 1-14 for creating a fuzzy measuring system consisting of one or more sensors and a fuzzy unit for processing the measured values,
characterised in that current measured values as well as measured values delivered previously by the sensor or sensors in the past are conducted to the fuzzy unit in the form of input variables, and
that, for each potential value u supplied by processing the measured values, positive and negative rules can be made to establish to what degree this value is deemed to be supported or discarded by
the previously measured values, on the basis of which unambiguous, purposefully processed measuring results are determined by hyperinference and hyperdefuzzification.
The invention is described below on the basis of a standard design example shown in FIG. 1. In the new type of structure illustrated here, the fuzzy unit is designed as a dual-line version. The first
line contains a fuzzification module 6 and a fuzzy logic module 7; the second line contains a fuzzification module 8 and a fuzzy logic module 9. Fuzzy logic module 7 contains a set of rules defined
as rule basis I which provide "recommendations", i.e. define certain values of the output variable u as being "favourable". Fuzzy logic module 9 contains a set of rules, defined as rule basis II,
which issue "warnings", i. e. define certain values of the output variable u as being "unfavourable". In addition to these sets of rules, fuzzy logic modules 7 and 9 contain fuzzy operators I and II
respectively--as well as inference machines I and II respectively. The output of inference machine I is offset by means of a module 10 for membership functions on the output side to form a membership
function μ.sub.c.sup.+ (u). Its functional values indicate for each value of the output variable u the degree to which that value is "favourable" on the basis of the conclusions of fuzzy logic module
7, i.e. is recommended. The output of fuzzy logic module 9 is offset by means of a module 11 for membership functions on the output side to form a membership function μ.sub.c.sup.- (u). The
functional values of this membership function indicate for each value of the output variable u of the fuzzy unit the degree to which that value is "unfavourable" on the basis of the conclusions of
fuzzy logic module 9, i.e. to what extent a warning is issued against its use. In the two logic modules I and II fuzzy operators and inference strategies differing from one another may be used, for
example in logic module II the familiar SUM-PROD-inference or the Einstein sum for enhanced superimposition of mild warnings. The two membership functions μ.sub.c.sup.+ (u) and μ.sub.c.sup.- (u) are
conducted in direct form or transformed state to a hyperinference machine 12. This applies a hyperinference strategy to determine a resulting membership function μ.sub.c (u), the functional values of
which represent a practical compromise between the "recommendations" of fuzzy logic module 7 and the "warnings" of fuzzy logic module 9. Various hyperinference strategies are presented below which
enable recommendations and warnings to be offset against one another in a varying and practical manner.
According to hyperinference strategy 1 (claim 3) it is determined in respect of all values u with μ.sub.c.sup.- (u)=0, that μ.sub.c (u)=μ.sub.c.sup.+ (u); otherwise μ.sub.c (u)=0 (FIG. 4a). This
strategy is based on the heuristics that the output variable u of the fuzzy unit should not be capable of accepting values defined as "unfavourable" by fuzzy logic module 9, even if this is only to
the slightest degree. According to hyperinference strategy 2 (claim 4) it is determined in respect of all values u with μ.sub.c.sup.+ (u)≧μ.sub.c.sup.- (u) that μ.sub.c (u)=μ.sub.c.sup.+ (u);
otherwise μ.sub.c (u)=0 (FIG. 4b). This strategy is based on the heuristics that the output variable u of the fuzzy unit should only be capable of accepting values which are defined by fuzzy logic
module 7 as being "favourable" to a greater degree than they are determined by fuzzy logic module 9 to be "unfavourable". Hyperinference strategy 3 (claim 5) proceeds from the formation of the
difference for the classical sets A and B: The characteristic function μ.sub.A-B (x) of the difference set A-B assumes the functional value of 1 precisely for all elements x to which μ.sub.A (x)=1
and μ.sub.B (x)=0, while μ.sub.A (x) and μ.sub.B (x) are the characteristic functions of sets A and B. Hyperinference strategy 3 is based on the idea of transferring this difference to fuzzy
operators. This results in rule μ.sub.c (u)=μ.sub.c.sup.+ (u) .right brkt-top. μ.sub.c.sup.- (u). For their evaluation, the functional values of membership functions μ.sub.c.sup.+ (u) and
μ.sub.c.sup.- (u) are employed and linked to the rules of calculation defined for fuzzy operators " " and ".right brkt-top.". As the fuzzy operators " " and ".right brkt-top." have varying
definitions, this will result in a corresponding number of variants of hyperinference strategy 3 being obtained. They are distinguished in that the membership functions μ.sub.c.sup.+ (u) and
μ.sub.c.sup.- (u) are offset to form a resulting membership function μ.sub.c (u) based on an optional compromise between the "recommendations" in rule basis 7 and the "warnings" given in rule basis
9. FIG. 4 c contains an example. Hyperinference strategies 1, 2 and 3 are heuristically plausible and ought to satisfy practical requirements to a large extent. In special cases more general
hyperinference strategies may be applied. Each function μ.sub.c (u)=f(μ.sub.c.sup.+ (u), μ.sub.c.sup.- (u)) with the properties f(μ,0)=μ and f(μ,1)=0 as well as f(.sub.1 μ .sub.2 μ)≦f
(μ.sub.1,μ.sub.3) for μ.sub.2 ≧μ.sub.3 may serve as a generalised hyperinference strategy of this kind (claim 6).
The resulting membership function μ.sub.c (u) is finally conducted in direct form or transformed state to the hyperdefuzzification module 13. This module determines an unambiguous value u(e) in
accordance with a hyperdefuzzification strategy formed from μ.sub.c (u). The first step of the preferred hyperdefuzzification strategies 1 and 2 consists of generating partial functions μ.sub.1 (u),
μ.sub.2 (u), - - - , μ.sub.r (u) from the membership function μ.sub.c (u) in accordance with rule μ.sub.i (u)=μ.sub.c (u) in respect of u .di-elect cons. c.sub.i, d.sub.i ! and μ.sub.i (u)=0 for u
.epsilon slash. c.sub.i, d.sub.i !, while the intervals c.sub.i, d.sub.i ! are formed according to the rule that the function μ.sub.c (u) assumes a preselected value p.sub.i ≧0 at interval limits
c.sub.i and di, and greater values within the interval. FIG. 5 shows an example containing two resulting partial functions in respect of p.sub. =p.sub.2 =0. Each partial function μ.sub.i (u) for
instance is defuzzified separately in accordance with the centre of gravity method. In this way r values u.sub.i differing from one another are obtained. Allocated to each value u.sub.i is a
weighting factor g. According to hyperdefuzzification strategy 1 (claim 10), the final unambiguous value of the output variable u(e) selected from among values u.sub.i is the one which has the
greatest weighting factor g. Where maximum weighting factors g.sub.j of equal size result, an unamibiguous value u is determined by a random-event generator or by including additional aspects. With
another version of hyperdefuzzification strategy 1, the weighting factor g.sub.j used is the area F.sub.i below the functional graphs of partial function μ.sub.r (u), namely, in the interval in which
the partial function has positive values. With other versions of hyperdefuzzification strategy 1, the weighting factor g.sub.j is formed as a function of F.sub.i or as a function of the height
h.sub.i of the centre of gravity of area F.sub.i across the axis belonging to value μ=0, or of the maximum functional value μ.sub.i,max. of the function μ.sub.i (u) or as a function of the widths
b.sub.i,L. and b.sub.i R of intervals .sub.i d, .sub.i+1 c! and .sub.i+1 d,! left or right of the interval c.sub.i, d.sub.i ! in which μ.sub.i (u)=0, or as a function of several of the variables
F.sub.i, h.sub.i, μ.sub.i,max. and b.sub.i,L as well as b.sub.i,R (cf. FIG. 5). The resulting value of the weighting factor g.sub.i increases proportionately with the values of F.sub.i, h.sub.i,
μ.sub.i,max. and b.sub.i,L or b.sub.i,R. A practical possibility of forming the weighting factor g.sub.i is provided for example by the rule g.sub.i =F.sub.i ((d.sub.i -c.sub.i +0.5 b.sub.i,L +0.5
b.sub.iR)/(d.sub.i -c.sub.i)). This ensures for example that, where there is a prohibition all values u with .di-elect cons.≦ value .di-elect cons., the output variable of the fuzzy unit is in fact
capable of accepting the permitted value u=0.
According to hyperdefuzzification strategy 2 (claim 12), the value u.sub.j, is first determined to which the largest weighting factor has been allocated. From this, the final value u(e) is formed by
shifting u.sub.j within the interval c.sub.j, d.sub.j !. This shift process satisfies the rule u(e)=μ.sub.j +μ.sub.j with c.sub.j ≦u.sub.j +Δ.sub.j ≦d.sub.i, i.e. the resulting value u(e) is still
located within the interval c.sub.j, d.sub.j !. For example, in accordance with Δ.sub.j =ΣΔ.sub.ji the shift Δ.sub.j is composed of partial amounts Δ.sub.ji with i ≠j whose prefix signs correspond to
the prefix sign of u.sub.i -u.sub.j and whose absolute values increase as g.sub.i, F.sub.i, h.sub.i and u.sub.i,max. rise in value, and whose absolute values decrease as the values of g.sub.j,
F.sub.j, h.sub.j and μ.sub.j,max. and this shift is that it is dependent solely or additionally on the values c.sub.j, μ(c.sub.j), d.sub.j and u(.sub.j d) and that, for example, the final value u(e)
is formed with μ*=(1-μ(c.sub.j))(1-μ(d.sub.j) applying the rule u(e)=(μ(c.sub.j)c.sub.j +μ*u.sub.j +μ(d.sub.j)d.sub.j)/(μ(c.sub.j)+μ*+μ(d.sub.j)). This rule ensures that the value u(e) continues to
be shifted more to one of the marginal points of the interval c.sub.j, d.sub.j ! as the values of the membership function μ(u) increase there. This is meaningful when the prohibition affecting the
values to the left and right of this interval is not to have any "remote effect" in the sense that even those values situated within the interval in close proximity to the interval limits c.sub.j,
and d.sub.j are also to be considered as critical.
Hyperdefuzzification strategy 3 (claim 14) differs from the others previously specified in that, instead of the membership functions μ.sub.c.sup.+ (u) and μ.sub.c (e) or .sub.c μ(u), the functions
obtained from these as a result of transformation, i. e. functions .sub.μ.sbsb.c.sup.+ (u)=k+(μ.sub.c.sup.+ (u)) and .sub.μ.sbsb.c.sup.- (u)=k.sup.- (μ.sub.c.sup.- (u)) or .sub.μ.sbsb.c (u)=k(μ.sub.c
(u)) are conducted to the hyperinference module or the hyperdefuzzification module for further processing, while the functions .sub.μ.sup.+ =k.sup.+ (μ),.sub.μ.sup.- =k.sup.- (μ) and .sub.μ =k(μ) are
preferably monotonically rising functions for μ≧0 which assume functional value 0 in μ=0 and functional value 1 in μ=1 as is, for example, the case involving function .sub.μ (μ)=μ.sup.a for each a&
gt;0 or for function .sub.μ (μ)=(e.sup.λμ- 1)/(e.sup.λ- 1) for each λ≠0. In this way, a gradual compromise can be ensured specifically between the "recommendations" given in rule basis I and the
"warnings" issued in rule basis II. An alternative possibility of attaining this objective is that of selecting functions .sub.μ.sbsb.c.sup.+ =k.sup.+ (μ), .sub.μ.sbsb.c.sup.- =k.sup.- (μ) and
.sub.μ.sbsb.c =k(μ) in such a way that they assume value 1 in a finite value u or only asymptotically. In particular for processing the positive and negative rules this provides the possibility of
utilising inference strategies, such as the SUM-PROD-inference which supply non-standardised membership functions as results and which could not be suitable processed by the hyperinference strategy
without restriction to the maximum functional value 1. The use of the SUM-PROD-inference is partiuclarly interesting for processing negative rules as it is thus possible to superimpose several mild
warnings so as to form a severe warning or a strict prohibition. The same objective can be attained without any subsequent standardisation being necessary by superimposing the membership functions
provided by the individual rules R.sub.i and R.sub.j, i.e. membership functions μ.sub.i (u) and μ.sub.j (u) according to the rule μ.sub.c.sup.- (u)=(μ.sub.i (u)+μ.sub.j.sup.- (u))/(1+μ.sub.i.sup.-
(u)μ.sub.j.sup.- (u)) known as the Einstein sum.
If hyperinference strategy 1 is selected for the new structure of the fuzzy unit presented here, together with one of the hyperdefuzzification strategies described, it is guaranteed that the value of
the output variable u of the fuzzy unit will definitely not assume any value defined by fuzzy logic module 9 as being "unfavourable", even with the slightest positive degree. In addition, the
different hyperdefuzzification strategies provide varying possibilities for quantifying the effect of the fuzzy unit. Applying one of the other hyperinference strategies enables a variable and less
rigid compromise to be made between the "recommendations" of fuzzy logic module 7 and the "warnings" of fuzzy logic module 9.
The main applications of the described fuzzy unit are controlling and monitoring processes or industrial plants. When the new dual-line fuzzy unit is used as a fuzzy controller the output variable u
influences a downstream system or a downstream actuator. The dual-line fuzzy unit can also be employed as a member or component of a more complex control or monitoring facility, e.g. for processing
measured values. Its output variables can be used to influence downstream structural components. An extension to the new dual-phase fuzzy signal processing system makes it possible additionally or
exclusively to utilise the membership function μ.sub.c (u) to influence downstream structural components (claim 2).
As tests have shown, the invention-based system can be implemented by means of fuzzy units containing standard microprocessors and memory modules, and in which the system is processed by software. A
subprogram is realised for each of the fuzzification modules I (6) and II (8), fuzzy logic modules I (7) and II (9), modules I (10) and II (11) for membership functions on the output side, the
hyperinference machine (12) and the hyperdefuzzification module (13). Instead of the specified software component implementation, these components can also be constructed as hardware units in the
form of electrical, electronic or optical components as special fuzzy chips designed as artificial neural networks, customised circuits, discrete circuitry or analog circuitry. In all these cases,
dual-line designing principles makes it possible to effect system implementation in both lines on parallel principles, thus bringing about a saving of time (claims 16 and 17).
A most simple way of constructing a fuzzy unit for system implementation is to utilise one conventional type single-line fuzzy unit for determining the membership functions μ.sup.+.sub.c (u) by
processing the positive rules and another for determining the membership functions μ.sup.-.sub.c (u) by processing the negative rules, and to employ special modules only for determining the
membership function μ.sub.c (u) and for hyperdefuzzification, these modules being constructed of electrical, electronic or optical components, preferably in the form of standard microprocessors and
memory modules or special fuzzy chips, artificial neural networks, customised circuits, discrete circuitry or analog circuitry (claim 18).
With a view towards real-time realisation of an invention-based dual-line fuzzy control system, the following possibility is of particular interest: First of all a dual-line fuzzy unit is designed
which solves the set problem. A table of values is then compiled which, for example with a fuzzy controller with the two input variables x.sub.1 and x.sub.2 and the output variable u for a number of
grid points (x.sub.1, x.sub.2) of the x.sub.1 -x.sub.2 space, indicates the size of the resulting values u(x.sub.1, x.sub.2) of the fuzzy controller's output variable in respect of those points.
Following this, a non-linear function, e.g. in the form of a polynominal or piece-for-piece affine facet functions, is applied with parameters still available; these free parameters are set in such a
way that the non-linear function provides approximately the same values u for the grid points of the x.sub.1 -x.sub.2 space as those contained in the table of values. This process is begun by
effecting a random setting of the parameter values for the facet functions and determining in respect of these to what extent deviations occur between the functional values supplied by the functional
statement and contained in the table of values. By varying the parameter values of the functional statement progressively and applying a gradient process, these deviations are then gradually reduced.
In this way, parameter values are finally achieved which impart approximately the same pattern of behaviour to the non-linear function which is possessed by the previously designed dual-line fuzzy
controller. The outcome is the parameter values in respect of a non-linear key field controller which can be realised on the hardware side with considerably less outlay than the original fuzzy
controller and which speeds up processing of the system, thus making it especially suitable for controlling fast processes (claim 19). If the pattern of behaviour of the dual-line fuzzy unit is
specifically approximated by a piece-for-piece affine facet function, realisation of this is possible by means of an analog circuit consisting only of operation amplifiers, diodes and resistors
(claim 20). FIG. 6 illustrates an example. (Cf. H. Kiendl: Suboptimaler Regler mit abschrittsweise linearer Struktur. Lecture Notes in Economics and Mathematical Systems. Springer Verlag Berlin
Heidelberg New York 1972, p. 133 ff.)
Standard potential applications of the dual-line fuzzy signal processing technique are set forth below.
Introduction of a dead zone for the actuating variable (claim 21): In traditional automatic control engineering it is known that preventing the controller from reacting to minor deviations can prove
to be practical for calming or steadying system behaviour. For this purpose, a dead zone is created by means of a unidimensional characteristic element. With a dual-line fuzzy controller this concept
can also be transferred to fuzzy controllers with several input variables. A prohibitive rule is introduced for this purpose. When the steady position is approached, this forbids all actuating
variable values u situated in the vicinity of the value u.sub.R of the actuating variable which can just hold the system in the steady position. A dead zone of this kind is also purposeful for
controlling mechatronic systems with static friction as this prevents the system from being acted upon by values of actuating variables that are too small to overcome such static friction and which
thus act as a strain on the system without moving it.
Prohibited band for actuating variable (claim 22): Frequently a multistage actuator is connected downstream from a controller, whereby in cases in which actuating-variable requirements are negligible
only the first stage is switched on, and the second stage and further stages are switched on in addition to correspond to increased requirements etc. In the interests of protecting the actuator,
switching the stages on and off frequently is to be avoided. This can be accomplished by applying prohibitive rules which ban all values of output variables of the controller in the proximity of the
switchover thresholds.
Protection of the actuator or system (claim 23): In general, strain is imposed on an actuator or system by actuating-variable output; for example a motor heats up in operation. For that reason, when
designing a control circuit it is advisable to pay attention not only to the control performance itself, but also to strain on the actuators or the system. This can be achieved by applying
prohibitive rules. For this purpose, an indicator is first obtained in respect of such strain, for example by measuring the temperature of a motor or by time-weighted integration of the actuating
variable values previously generated by the actuator. A prohibitive rule is then introduced into the controller which warns against output-variable values u for the controller or places a strict ban
on these if they impose considerable or unacceptable further strain on the actuator or system.
Positional control: In the case of positional control applications, e.g. with the aid of a robot, it may for safety reasons prove advisable to prohibit high approach speeds after the desired target
position has almost been reached. This can be done by applying a corresponding prohibitive rule.
Pulse-frequency modulation (claim 24): There are certain applications in which the value u supplied by the controller is converted to a pulse train. In the event of such a pulse train affecting a
downstream system, this may cause undesirable vibrations to occur on certain pulse frequencies. This can be avoided by applying prohibitive rules which prohibit all values u giving rise to critical
pulse frequencies.
Vibration absorption (claim 25): Off line determination is carried out, e.g. by conducting an eigenvalue analysis or by means of experiments, to ascertain the undesirable vibrations which may occur
in the control system and the frequencies these vibrations have. A correlation analysis of the input and output variable patterns of the system, for example, is then used to determine for each
potential value u of the actuating variable currently due to be administered the extent to which this value causes an increase in vibrational excitation in connection with the history of the
actuating variable hitherto administered. A prohibitive rule is then established which increases its warnings against the use of a value u as the degree of excitation rises. In this way, such
actuating-variable values u which cause undesirable vibrational excitation are avoided.
Fuzzy supervision (claim 26): One of the functions of fuzzy supervision consists of applying fuzzy rules that result from observing the dynamic behaviour of processes to indicate the presence of
process conditions u, knowledge of which is of relevance for monitoring or supervising processes or systems or for taking action at higher process control level, e.g. by unleashing alarms. If, in
addition to positive rules, negative rules are also used, this can enhance selectivity in detecting such process conditions. In that case, it is not only possible to introduce criteria into the rule
basis with regard to the prevailing degree of such process conditions, but also with regard to the extent to which they are not present. The invention-based fuzzy process in this practical example is
thus primarily intended for monitoring or supervision purposes.
Fuzzy quality measure (claim 27): Another function of fuzzy supervision is that of quality rating as applied to the behaviour of complex processes. For this purpose, it is necessary to combine the
ratings of partial aspects so as to form an aggregate quality while also making due allowance for positive and negative partial ratings. In certain circumstances therefore an overall evaluation may
be negative, even though certain partial evaluations are highly positive. This type of overall evaluation is possible with a dual-line fuzzy unit: the positive and negative rules can provide the
degree of suitablility or unsuitability for any potential aggregate quality on the basis of an evaluation scale. These recommendations and warnings are offset by hyperinference and
hyperdefuzzification to form a final aggregate quality result. Here too, this primarily involves application for (quality) supervision of processes or industrial plants.
Fuzzy measuring system (claim 28): Measuring signals delivered by a sensor frequently call for intelligent modification for further processing. For this purpose, the current measured value needs to
be compared with previous measured values obtained more recently and modified on a logical context-related basis. One example is that relating to temperature-controlled egg incubators of which it is
known that briefly opening the door for the purpose of removing eggs causes a temperature reduction to occur which, while not being detrimental in principle, nevertheless leads to an undesirable
thermostat reaction: after the door is closed the temperature often increases to such an extent that the eggs are exposed to risk. Such problems can be solved by conducting current and previously
measured values to a dual-line fuzzy unit and applying positive and negative rules to determine, in respect of each potential value u of the measurable variable, the extent to which it is to be
considered as supported or discarded by the previous measured values. By means of hyperinference and hyperdefuzzification, the value is then generated that is deemed to be the most advisable in the
context of the application. In the example relating to the incubator, the abrupt drop in temperature recorded by the measuring device will, for instance, cease to appear at the output of the fuzzy
unit: the unit only recognises this as being a fall in temperature to which the conventional downstream thermostat is not supposed to react.
FIG. 1 is a block diagram of the claimed dual-line fuzzy system;
FIG. 2 illustrates a situation where a conventional fuzzy system fails to process a prohibition properly;
FIG. 3 is a block diagram of a conventional fuzzy system;
FIG. 4 illustrates different hyperinference strategies 1, 2 and 3 (a, b and c respectively);
FIG. 5 illustrates different strategy elements of hyperdefuzzification procedures; and
FIG. 6 is a block diagramm of an electronic circuit which can be used to realize the key-field of a fuzzy system.
German Patent 43 08 083 Kiendi 1994 (Verfahren zur Erzeugung von Stellgroβen am Ausgang eines Fuzzy-Reglers und Fuzzy-Regler hierfur)
1. Field of the Invention
The invention relates to a system of regulating or monitoring processes or industrial plants by generating an unambiguous scalar value of an output variable u or an unambiguous vector u of output
variables at the output of a fuzzy unit as a function of the value of an input variable e or the values of several input variables combined to form a vector e, a fuzzy unit for implementing the said
system and beneficial applications of the said system. The description that follows is based on fuzzy units revealing only one input variable e and one output variable u. The subject of the invention
can, however, be applied analogously to fuzzy units revealing several input and several output variables.
2. Description of the Related Art
Signal-processing fuzzy units are familiar for instance in the for of fuzzy controllers. Their method of functioning is established, for exaple, in the article "Fuzzy Control" by H. Kiendl and M.
Fritsch, in at Autoatisierungstechnik 41 (1993) 2, pp. A 5-A8 and can be described with the aid of FIG. 3 as follows: The controller input variable e is conducted to the fuzzification module 1. By
means of the membership functions 2 on the input side, this module establishes for each of the linguistic values a.sub.i, e.g. "vanishing", "positively small" and "positively large" as applied to
input variable e, the degree to which it is allocated to a currently applied value of input variable e. These values, also referred to as truth values w(e=a.sub.i) of the linguistic statements e=
a.sub.i are conducted to a fuzzy logic module 3 containing linguistic rules, combined to form a rule basis fuzzy operators and an inference machine. Proceeding from these truth values, the logic
operators assist in establishing to what degree the premises of the rules are fulfilled. The outcome of this determines the conclusions of the inference machine and--by means of the membership
functions 4 on the output side--a resulting membership function μ.sub.c (u). This function indicates for each value of the output variable u to what degree this is "favourable", i.e. recommended to
serve as an output variable value as based on the conclusions of all rules. This membership function μ.sub.c (u) is conducted to a defuzzification module 5. This then determines--for example
employing the familiar center of gravity method--a resulting unambiguous value of the output variable u.
The method of functioning of conventional fuzzy controllers is also described in H. -P. Preuβ's article "Fuzzy Control--heuristische Regelung mittels unscharfer Logik", atp Automatisierungstechnische
Praxis 1992, 4 pp. 176-184 and 5, pp. 239-246. The internal makeup of a fuzzy controller described in this article on the basis of FIG. 12 and 13 (q.v.) corresponds fully to the structure depicted in
FIG. 3 of this patent application. Differences occur only as a result of the terminology and graphic layout. Thus, in FIG. 3, the output variables of the fuzzification unit, which are truth values in
conformity with Preuβ's article (q.v., p. 240, left-hand column, sentence 1) are designated w(e=a.sub.i) for the purposes of illustration. Furthermore, in FIG. 3, the function of processing the
various rules (Preuβ=inference) as well as that of combining the conclusion of all rules (Preuβ=composition) are incorporated in the fuzzy logic function module. Likewise in FIG. 3, the membership
function to be defuzzified, which results on the output side from the combined interaction of all rules, is indicated by use of the symbol μ.sub.c (u). This designation is absent in FIGS. 12 and 13
of Preuβ's article. Nevertheless, this function is highlighted in the right-hand illustration section of FIG. 12 of the article by being underscored in black. Finally, in FIG. 3, the "module for
membership functions on the output side" likewise required by Preuβ, although not drawn separately there, is nevertheless identified as a separate block. This goes to show that the state of the art
illustrated in FIG. 3, from which this invention proceeds, is covered by Preuβ's article.
Conventional signal-processing fuzzy modules of this type reveal the following drawback: No guarantee can be given that the value of the output variable u lies outside certain "prohibited
ranges"--generally or under certain conditions. Avoidance of such prohibited ranges can be desirable for example if the output variable u of a fuzzy controller acts upon an actuator consisting of
several units that are switched on or off depending on the absolute value of u. Interest is then focused on reducing the frequency of the on/off switching action. In that case, all values of u will
be "unfavourable" or "prohibited" that are situated in the vicinity of the switchover thresholds. Another example frequently encountered is that systems engineering often requires a guarantee that in
certain situations a valve will be closed to the full extent, i.e. completely and not "almost". This means that in these situations all valve positions except the "closed" position are prohibited.
In the explanations that follow, reasons are given as to why such prohibitions cannot be maintained with certainty employing conventional fuzzy controllers. Here, the linguistic values "negatively
large", "negatively mean", "negatively small", "vanishing", "positively small", "positively mean" and "positively large" are planned for the output variables u of the fuzzy controller. From now on,
these will be referred to in their abbreviated form as NL, NM, NS, V, PS, PM and PL. The links between these linguistic values and the real numerical values of the variable u are established by means
of the membership functions in accordance with FIG. 2a. The two cases are now to be considered according to which the value range 1.8.ltoreq.u.ltoreq.2.2 applicable to the output variable u are to be
prohibited in general (stipulation 1) or under certain conditions (stipulation 2). To meet these requirements it is possible, of course, to remove all rules from the rule basis pertaining to
stipulation 1 in whose conclusions the term "u=positively mean" occurs and, as regards stipulation 2, all rules whose premises are fulfilled where the said prior conditions are met. In this way, it
may be possible for the fuzzy controller to generate a membership function μ.sub.c (u) which assumes only negligible functional values in the prohibited values range (FIG. 2b). Even so,
defuzzification of this membership function may result in a value u(e) lying within the prohibited values range. Thus, the outcome of defuzzification of the membership function μ.sub.c (u) depicted
in FIG. 2b according to the familiar centre of gravity method just happens to be value u(e)=2 which is situated in the middle of the prohibited range.
The main underlying cause of the drawback set out above encountered with conventional single-line signal-processing fuzzy units is as follows: in conventional type fuzzy units all values of the
output variable that more or less meet the conclusions of the rules are treated as "favourable", i.e. as positive "recommendations", while the degree of recommendation depends on the degree to which
the conclusion is met as well as the degree to which the premises of the rules are fulfilled. Values of output variables failing to meet the conclusion of a rule, either completely or to a slightly
lesser degree, are not treated as "prohibitions" or more or less severe "warnings", but rather as "non-recommendations". This explains why values of output variables not recommended by a certain rule
may nevertheless still emerge on the output side. The cause may be attributed to the action mechanism of known defuzzification procedures (FIG. 2b) or to the action mechanism of known inference
strategies not providing any means for preventing the "non-recommendation" of a rule from being "superimposed" by recommendations of other rules. As established above, strict adherence to
prohibitions may be desirable in practical applications.
The same disadvantage is revealed in the dual-line fuzzy controller familiar from Komori Kumiharo's publication "Fuzzy Control Method", Patent Abstracts of Japan, Publication Number JP4023004, 1992.
The two lines act upon a plant (e.g. a system engineering process) via a pulse-width modulation (PWM), while the first line exercises influence on the output variable of the plant in the positive
direction (e.g. temperature increase) and the second line affects the output variable of the plant in the negative direction (e.g. temperature decrease). Both lines are designed as conventional
single-line fuzzy controllers, i.e. they each have a fuzzification unit, a fuzzy logic unit and a defuzzification unit. The output variable f.sub.1 of the first line is formed by defuzzification of
the membership function μ.sub.1 (f) generated by the first line and acts in the positive direction. The output variable f.sub.2 of the second line is formed by defuzzification of the membership
function μ.sub.2 (f) generated by that line and acts in the negative direction. Thus, a typical feature in the structural makeup of this type of controller is that the membership function μ.sub.1 (f)
produced by the first line provides recommendations for the positive action directions and that the membership function generated by the second line μ.sub.2 (f) supplies recommendations for the
negative action directions. No allowance is made in particular for warnings or prohibitions as applied to positive and negative action directions. Accordingly, no offsetting of recommendations and
warnings takes place in respect of positive and negative action directions either.
Considering the invention from the technical angle, attention is drawn to the article by W. Zhang and S. Chen: "A Logical Architecture for Cognitive Maps", IEEE Int. Conf. of Neural Networks, ICNN
1988, pp. I-231-I-238. This article deals with cognitive networks formed from discrete nodes and connections. Pairs of values are allocated to the connections making it possible to express the extent
to which a node exercises a positive (supportive) influence on another node and on to what extent it has a negative (inhibitory) effect on the node. The article describes a system in which it is
possible to pool information contained in such a network: thus the resulting positive and negative effects a node i exercises on a node j can be determined, taking into account all paths along which
it is possible to access from node i to node j. If this system is applied to a network in which certain nodes are defined as input nodes and others as output nodes, it is possible to determine the
extent to which each input node supports or inhibits each of the output nodes. However, this does not yet produce any unambiguous output variable values in relation to given input variable values as
called for in the above-mentioned task. The process of generating output variable values necessitates appropriate offsetting of supporting and inhibitory factors. Even when applying this additional
measure, which is not provided for in the system, it would--as with the familiar expert systems--only be possible to generate output variable values for discrete input variable values from a spectrum
of discrete alternatives and not, as in the present task, from a continuum of possible values. This system is in so far unsuitable for solving the task in hand. Apart from that, the system processes
expert knowledge which is decentrally deposited in a network and not in the form of rules as is called for in the present task.
The task is to propose a system of controlling or monitoring processes or industrial plants by generating an unambiguous output variable value obtained from a continuum of possible values at the
output of a fuzzy unit in which at least one rule offers positive action proposals for selecting a value of an output variable u ("recommendations") and at least one other rule prohibits certain
values of output variables or issues a more or less severe warning ("warnings" or "prohibitions") against using these values, allowance also being made for such recommendations or warnings/
prohibitions to resort to a compromise when generating the value of the output variable.
With a dual-line system of controlling or monitoring processes or industrial plants employing a dual-line fuzzy unit, due allowance can be made for both positive and negative rules. These are rules
issuing "positive recommendations" and "warnings" or "prohibitions" for the selection of the values of the output variables for the fuzzy unit. The dual-line system enables logical compromises to be
made between these "recommendations" and "warnings" or "prohibitions". Along with the possibility of drawing on positive empirical knowledge, use of this dual-line system now also creates the
possibility of utilising negative empirical knowledge with the same degree of transparency with which positive empirical knowledge has previously been used in conventional fuzzy units. As far as
practical applications are concerned, this caters for the possibility of including warnings which serve to protect the plant or resources or to prevent undesirable control behaviour. The inclusion of
warnings or, more especially, prohibitions is also of interest for warranties of operational reliability.
The invention-based system in accordance with claim 1 solves the above described task. Further developments of the invention-based system, an invention-based fuzzy unit for implementing the said
system and beneficial applications of the said system form the subject of claims 2 to 28.
Cited Patent Filing date Publication date Applicant Title
US5131071 * Sep 21, 1989 Jul 14, 1992 Omron Tateisi Electronics Co. Fuzzy inference apparatus
US5179625 * May 5, 1992 Jan 12, 1993 Omron Tateisi Electronics Co. Fuzzy inference system having a dominant rule detection unit
US5255344 * Jun 26, 1992 Oct 19, 1993 Matsushita Electric Industrial Co., Ltd. Inference rule determining method and inference device
US5285376 * Oct 24, 1991 Feb 8, 1994 Allen-Bradley Company, Inc. Fuzzy logic ladder diagram program for a machine or process controller
US5295226 * Sep 2, 1992 Mar 15, 1994 Research Development Corporation Of Japan Fuzzy computer
US5303331 * Mar 19, 1991 Apr 12, 1994 Ricoh Company, Ltd. Compound type expert system
US5376611 * May 6, 1993 Dec 27, 1994 Phillips Petroleum Company Chromium ribbon-like silicate clay A-olefin catalysts
US5408584 * Oct 28, 1993 Apr 18, 1995 Rohm Co., Ltd. Fuzzy inference system
US5425131 * Jun 11, 1993 Jun 13, 1995 American Neuralogix Inc. Minimum comparator for fuzzy microcontroller
US5495574 * Jul 20, 1994 Feb 27, 1996 Olympus Optical Co., Ltd. Digital fuzzy inference system
US5600757 * Apr 13, 1995 Feb 4, 1997 Kabushiki Kaisha Toshiba Fuzzy rule-based system formed on a single semiconductor chip
1 * Kacprzyk et al; 3rd Int. Conf. on Information Processing and Management of Uncertainty in Knowledge Based Systems, IPMU 1990, pp. 424 430.
2 Kacprzyk et al; 3rd Int. Conf. on Information Processing and Management of Uncertainty in Knowledge-Based Systems, IPMU 1990, pp. 424-430.
3 * Patent Abstracts of Japan, vol. 16, No. 186 (p 1347) dated May 17, 1992.
4 Patent Abstracts of Japan, vol. 16, No. 186 (p-1347) dated May 17, 1992.
5 * Pfluger et al; IEEE Int. Conf. on Fuzzy Systems, ICFS 1992, pp. 717 723.
6 Pfluger et al; IEEE Int. Conf. on Fuzzy Systems, ICFS 1992, pp. 717-723.
7 * Zhang et al; IEEE Int. Conf. on Neural Networks, ICNN 1988, pp. I 231 I 238.
8 Zhang et al; IEEE Int. Conf. on Neural Networks, ICNN 1988, pp. I-231-I-238.
Citing Patent Filing date Publication date Applicant Title
US6421571 Feb 29, 2000 Jul 16, 2002 Bently Nevada Corporation Industrial plant asset management system: apparatus and method
US6430544 * Aug 11, 1998 Aug 6, 2002 Ronald Childress Single variable priority constraint fuzzy control system
US6775576 Jul 8, 2002 Aug 10, 2004 Bently Nevada, Llc Industrial plant asset management system: apparatus and method
US6814743 Dec 26, 2001 Nov 9, 2004 Origin Medsystems, Inc. Temporary seal and method for facilitating anastomosis
US6889096 * Jul 15, 2002 May 3, 2005 Bently Nevada, Llc Industrial plant asset management system: apparatus and method
US6966887 Feb 27, 2002 Nov 22, 2005 Origin Medsystems, Inc. Temporary arterial shunt and method
US7544203 Sep 27, 2004 Jun 9, 2009 Maquet Cardiovascular Llc Temporary seal and method for facilitating anastomosis
US7947062 Apr 15, 2002 May 24, 2011 Maquet Cardiovascular Llc Temporary anastomotic seal and method
Date Code Event Description
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:IBM DEUTSCHLAND GMBH;REEL/FRAME:029981/0917
Mar 13, 2013 AS Assignment Effective date: 20110728
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NUTECH SOLUTIONS GMBH;REEL/FRAME:027699/0194
Feb 14, 2012 AS Assignment Effective date: 20110718
Owner name: IBM DEUTSCHLAND GMBH, GERMANY
Apr 20, 2010 FPAY Fee payment Year of fee payment: 12
Apr 7, 2006 FPAY Fee payment Year of fee payment: 8
Owner name: NUTECH SOLUTIONS GMBH, GERMANY
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KIENDL, HARRO;REEL/FRAME:014250/0758
Jul 11, 2003 AS Assignment
Effective date: 20030607
Owner name: NUTECH SOLUTIONS GMBH MARTIN-SCHMEISSER-WEG 15DORT
Mar 28, 2002 FPAY Fee payment Year of fee payment: 4
Jul 20, 1999 CC Certificate of correction
Original Image | {"url":"http://www.google.com/patents/US5826251?dq=5,598,374","timestamp":"2014-04-19T01:06:17Z","content_type":null,"content_length":"114063","record_id":"<urn:uuid:a764018a-048b-4013-9edd-9afeda501b15>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00446-ip-10-147-4-33.ec2.internal.warc.gz"} |
Capacitor Contactor Addresses Switching Powers of 80 kvar
> Capacitor Contactor Addresses Switching Powers of 80 kvar
Capacitor Contactor Addresses Switching Powers of 80 kvar
TDK-EPC has developed a new capacitor contactor for switching powers of 80 kvar. It supplements the previous series for powers of 12.5, 20, 25, 33, 50, 75 and 100 kvar, enabling cost-effective PFC
solutions to be implemented in specific configurations. Use of a 100 kvar contactor or parallel connection of lower powers is no longer required. The new 80 kvar contactor offers the same features
and technical properties as the other types of this series such as desirable attenuation of inrush currents; optimized switching behaviour that extends the operating life of PFC capacitors and of the
entire system; transient avoidance; reduced resistive losses; simple installation; isolated and protected resistor function; and improved energy quality.
[1] http://www.epcos.com/pfc | {"url":"http://www.ecnmag.com/print/product-releases/2011/01/capacitor-contactor-addresses-switching-powers-80-kvar","timestamp":"2014-04-18T20:12:53Z","content_type":null,"content_length":"9046","record_id":"<urn:uuid:93f8d011-6b49-43e9-b658-d02b0b2d1b67>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
Predstavitve grafov z enotsko razdaljo
Boris Horvat
(2009) PhD thesis, Univesity of Ljubljana.
The doctoral thesis describes problems concerning graphs that can be represented in the Euclidean plane (or k-space) in such a way, that vertices are represented as points in the plane (k-space) and
ed- ges as line segments of unit lengths. Problems are observed from a computational and a mathematical point of view. In the first part of the thesis the (already known, mainly mathematical) theory
of unit- distance graph representations is presented; at the same time the terminology of the results is unified and several propositions are proved. First computer aided attempts to generate small
graphs with a unit-distance representation are discussed. In the following chapter the well-known graph products of k-dimensional unit-distance graphs are studied; the chapter summarizes the results
from [59]. The third chapter disproves the wrong assumption that Heawood graph is not a unit-distance graph, by providing the unit-distance coordinatization of it. In the fourth chapter all
degenerate unit-distance representati- ons of the Petersen graph in the Euclidean plane are presented and some relationships among them are observed; see [58]. In the following chapter generalized
Petersen graphs and I-graphs are observed. Necessary and sufficient conditions for two I-graphs to be isomorphic are given. As a corollary it is shown that a large subclass of I-graphs can be drawn
with unit-distances in the Euclidean plane by using the representation with a rotational symmetry. Conjectures concerning unit-distance coordinati- zations and highly-degenerate unit-distance
representations of I-graphs are stated and verified for all I-graphs up to 2000 vertices. In the sixth chapter the decision problems that ask about the existence of a degenerate k-dimensional
unit-distance representation or coordinatization of a given graph are shown to be NP-complete. In the last chapter of the thesis a heuristics that draws a given graph in the Euclidean plane by
minimizing the quotient of the longest and the shortest edge length is presented; see SPE algorithm in [1]. The dilation coefficient of a graph is introduced and theoretically obtained bounds for the
dilation coefficient of a complete graph are given. The calculated upper bounds for the dilation coefficients of complete graphs are compared to the values obtained by three graph-drawing algorithms,
see [63].
EPrint Type: Thesis (PhD)
Project Keyword: Project Keyword UNSPECIFIED
Subjects: Theory & Algorithms
ID Code: 8175
Deposited By: Boris Horvat
Deposited On: 21 February 2012 | {"url":"http://eprints.pascal-network.org/archive/00008175/","timestamp":"2014-04-16T13:28:38Z","content_type":null,"content_length":"9357","record_id":"<urn:uuid:22c28dcf-5226-4b63-aa49-adc335bb0d58>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
April 16th 2007, 11:24 AM
Hi guys, i need some help on this question.
Graph f(x) and sketch the specified reflection image.
g(x), the reflection of f(x) = x + 1 in the x-axis
Can somebody teach me how I would graph f(x) and its reflection?
thanks in advance.
April 16th 2007, 12:04 PM
Then a reflection in x-axis is: -f(x) | {"url":"http://mathhelpforum.com/pre-calculus/13792-graphing-print.html","timestamp":"2014-04-20T03:56:19Z","content_type":null,"content_length":"4234","record_id":"<urn:uuid:1cdd08f3-986a-47dd-9453-92bd401491c2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using R for Introductory Statistics, Chapter 3.4
August 21, 2010
By Christopher Bare
...a continuing journey through Using R for Introductory Statistics, by John Verzani.
Simple linear regression
Linear regression is a kooky term for fitting a line to some data. This odd bit of terminology can be blamed on Sir Francis Galton, a prolific victorian scientist and traveler who saw it as related
to his concept of regression toward the mean. Calling it a linear model is a little more straight-forward, and linear modeling through the lm function is bread-and-butter to R.
For example, let's look at the data set diamonds to see if there's a linear relationship between weight and cost of diamonds.
f = price ~ carat
plot(f, data=diamond, pch=5,
main="Price of diamonds predicted by weight")
res = lm(f, data=diamond)
abline(res, col='blue')
We start by creating the formula f using the strange looking tilde operator. That tells the R interpreter that we're defining a symbolic formula, rather than an expression to be evaluated
immediately. So, our definition of formula f says, "price is a function of carat". In the plot statement, the formula is evaluated in the context given by data=diamond, so that the variables in our
formula have values. That gives us the scatter plot. Now let's fit a line using lm, context again given by data=diamond, and render the resulting object as a line using abline. Looks spiffy, but what
just happened?
The equation of a line that we learned in high school is:
Minimizing squared error over our sample gives us estimates of the slope and intercept. The book presents this without derivation, which is a shame.
Maybe later, I'll get brave an try to insert a derivation here.
There's a popular linear model that applies to dating, which goes like this: It's OK for a man to date a younger woman if her age is at least half the man's age plus seven. In other words, this:
Apparently, I should be dating a 27 year old. Let me go ask my wife if that's OK. In the meantime, let's see how our rule compares to results of a survey asking the proper cutoff for dating for
various ages.
plot(jitter(too.young$Male), jitter(too.young$Female),
main="Appropriate ages for dating",
xlab="Male age", ylab="Female age")
abline(7,1/2, col='red')
res <- lm(Female ~ Male, data=too.young)
abline(res, col='blue', lty=2)
legend(15,45, legend=c("half plus 7 rule",
"Estimated from survey data"),
col=c('red', 'blue'), lty=c(1,2))
That's a nice correspondence. On second thought, this is statistical proof that my daughter is not allowed to leave the house 'til she's 30.
Somehow related to that is the data set Animals, comparing weights of body and brain for several animals. The basic scatterplot not revealing much, we put the data on a log scale and find that it
looks much better. As near as I can tell, the I or AsIs function does something like the opposite of the tilde operator. It tells the interpreter to go ahead and evaluate the enclosed expression. The
general gist is to transform our data to log scale then apply linear modeling.
f = I(log(brain)) ~ I(log(body))
plot(f, data=Animals,
main="Animals: brains vs. bodies",
xlab="log body weight", ylab="log brain weight")
res = lm(f, data=Animals)
abline(res, col='brown')
Now the problem is, the line doesn't seem to fit very well. Those three outliers on the right edge have high body weights but less than expected going on upstairs. That seems to unduly influence the
linear model away from the main trend. R contains some alternative algorithms for fitting a line to data. The function lqs is more resistant to outliers, like the large but pea-brained creatures in
this example.
res.lqs = lqs(f, data=Animals)
abline(res.lqs, col='green', lty=2)
That's better. Finally, you might use identify to solve the mystery of the knuckleheaded beasts.
with(Animals, identify(log(body), log(brain), n=3, labels=rownames(Animals)))
Problem 3.31 is about replicate measurements, which might be a good idea where measurement error, noisy data, or other random variation is present. We follow the by now familiar procedure of defining
our formula, doing a scatterplot, building our linear model, and finally plotting it over the scatterplot.
We are then asked to look at the variance of measurements at each particular voltage. To do that, we'll first split our data.frame up by voltage. The result is a list of vectors, one per voltage
breakdown.by.voltage = split(breakdown$time, breakdown$voltage)
List of 7
$ 26: num [1:3] 5.8 1580 2323
$ 28: num [1:5] 69 108 110 426 1067
$ 30: num [1:11] 7.7 17 20 21 22 43 47 139 144 175 ...
$ 32: num [1:15] 0.27 0.4 0.69 0.79 2.75 3.9 9.8 14 16 27 ...
$ 34: num [1:19] 0.19 0.78 0.96 1.31 2.78 3.16 4.15 4.67 4.85 6.5 ...
$ 36: num [1:15] 0.35 0.59 0.96 0.99 1.69 1.97 2.07 2.58 2.71 2.9 ...
$ 38: num [1:7] 0.09 0.39 0.47 0.73 1.13 1.4 2.38
Next, let's compute the variance for each component of the above list and build a data.frame out of it.
var.by.voltage = data.frame(voltage=names(breakdown.by.voltage),
This split-apply-combine pattern looks familiar. It's basically a SQL group by in R. It's also the basis for Hadley Wickham's plyr library. Plyr's ddply function takes breakdown, a data.frame, and
splits it on values of the voltage column. For each part, it computes the variance in the time column, then assembles the results back into a data.frame.
ddply(breakdown, .(voltage), .fun=function(df) {var(df$time)})
While that's not directly related to linear modeling, this kind of exploratory data manipulation is what R is made for.
More fun
Previous episode of Using R for Introductory Statistics
daily e-mail updates
news and
on topics such as: visualization (
), programming (
Web Scraping
) statistics (
time series
) and more...
If you got this far, why not
subscribe for updates
from the site? Choose your flavor:
, or | {"url":"http://www.r-bloggers.com/using-r-for-introductory-statistics-chapter-3-4-2/","timestamp":"2014-04-17T19:01:00Z","content_type":null,"content_length":"48506","record_id":"<urn:uuid:2a6f3e2f-ccdf-48c5-b935-b2b2647077a0>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00037-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hurewicz fibration
Hurewicz fibration
Basic concepts
Serre fibration $\Leftarrow$Hurewicz fibration $\Rightarrow$Dold fibration $\Leftarrow$shrinkable map
A continuous map $p \;\colon\; E\longrightarrow B$ of topological space is called a Hurewicz fibration it it satisfies the right lifting property with respect to maps of the form $\sigma_0 \;\colon\;
X\cong X\times\{0\}\hookrightarrow X\times I$ for all topological spaces $X$.
Instead of checking the homotopy lifting property, one can instead solve a universal problem:
A map is a Hurewicz fibration precisely if it admits a Hurewicz connection. (See there for details.)
Appearance in a model structure
There is a Quillen model category structure on Top where fibrations are Hurewicz fibrations, cofibrations are closed Hurewicz cofibrations and weak equivalences are homotopy equivalences; see model
structure on topological spaces and Strøm's model category. There is a version of Hurewicz fibrations for pointed spaces, as well as in the slice category $Top/B_0$ where $B_0$ is a fixed base.
The historical paper of Hurewicz is
• Witold Hurewicz, On the concept of fiber space, Proc. Nat. Acad. Sci. USA 41 (1955) 956–961; MR0073987 (17,519e) PNAS,pdf.
A decent review of Hurewicz fibrations, Hurewicz connections and related issues isin
• James Eells, Jr., Fibring spaces of maps, in Richard Anderson (ed.) Symposium on infinite-dimensional topology
A textbook account of the homotopy lifting property is for instance in
See also the textbooks on algebraic topology by Whitehead and Spanier.
Revised on December 8, 2013 02:10:07 by
Urs Schreiber | {"url":"http://www.ncatlab.org/nlab/show/Hurewicz+fibration","timestamp":"2014-04-20T18:25:46Z","content_type":null,"content_length":"26335","record_id":"<urn:uuid:784e0592-967d-4637-a656-acefde709507>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
Johnsburg, IL Calculus Tutor
Find a Johnsburg, IL Calculus Tutor
...I taught trigonometry and algebra 2 to high school juniors in the far north suburbs of Chicago for the past two years. I am currently attending DePaul University to pursue my master's degree
in applied statistics. I have tutored students of varying levels and ages for more than six years.
19 Subjects: including calculus, geometry, statistics, algebra 1
...My rates are low because I believe every student should have access to a tutor regardless of their family's income. I have tutored algebra 2 to students at a major tutoring firm. I believe
that math is fun.
12 Subjects: including calculus, geometry, algebra 1, algebra 2
...Integer and Rational Exponents 35. Solving Special Types of Equations and Inequalities 36. Working with Functions 37.
17 Subjects: including calculus, reading, geometry, statistics
...I also have TA-ed Econometrics and used econometrics extensively in assignments and in original research (e.g. my honors thesis). Further, I have attended top-tier PhD-level conferences in
which leading academics, some with Nobel prizes and eminent textbook authors (e.g. Wooldridge and Hamilton...
57 Subjects: including calculus, chemistry, English, French
...My tutoring methods would be the following: I would first ask the child if he or she believes he or she learns better by seeing, hearing or touching. I would then have the child fill out a
questionnaire to see if what he or she believes is true. I would then use that information to decide how I would help the child.
19 Subjects: including calculus, reading, geometry, algebra 1
Related Johnsburg, IL Tutors
Johnsburg, IL Accounting Tutors
Johnsburg, IL ACT Tutors
Johnsburg, IL Algebra Tutors
Johnsburg, IL Algebra 2 Tutors
Johnsburg, IL Calculus Tutors
Johnsburg, IL Geometry Tutors
Johnsburg, IL Math Tutors
Johnsburg, IL Prealgebra Tutors
Johnsburg, IL Precalculus Tutors
Johnsburg, IL SAT Tutors
Johnsburg, IL SAT Math Tutors
Johnsburg, IL Science Tutors
Johnsburg, IL Statistics Tutors
Johnsburg, IL Trigonometry Tutors
Nearby Cities With calculus Tutor
Antioch, IL calculus Tutors
Bull Valley, IL calculus Tutors
Cary, IL calculus Tutors
Fox Lake, IL calculus Tutors
Grayslake calculus Tutors
Hawthorn Woods, IL calculus Tutors
Lake Barrington, IL calculus Tutors
Lake Villa calculus Tutors
Lakemoor, IL calculus Tutors
Mchenry, IL calculus Tutors
Round Lake Beach, IL calculus Tutors
Round Lake Park, IL calculus Tutors
Round Lake, IL calculus Tutors
Spring Grove, IL calculus Tutors
Volo, IL calculus Tutors | {"url":"http://www.purplemath.com/Johnsburg_IL_Calculus_tutors.php","timestamp":"2014-04-19T19:52:57Z","content_type":null,"content_length":"23957","record_id":"<urn:uuid:9c66f7a9-f83a-41e8-a71f-53c145588867>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00071-ip-10-147-4-33.ec2.internal.warc.gz"} |
Show That The Radius R Of The Orbit Of A Moon Of... | Chegg.com
Show that the radius r of the orbit of a moon of a given planet can be determined from the radius R of the planet, the acceleration of gravity at the surface of the planet, and the time τ required by
the moon to complete one full revolution about the planet. Determine the acceleration of the gravity at the surface of the planet Jupiter knowing that R = 44.400mi, τ =3.551 days, and r = 417.001 mi
for its moon Europa. | {"url":"http://www.chegg.com/homework-help/show-radius-r-orbit-moon-given-planet-determined-radius-r-pl-chapter-12-problem-82p-solution-9780072976939-exc","timestamp":"2014-04-21T16:01:47Z","content_type":null,"content_length":"42151","record_id":"<urn:uuid:0ea0a3e4-21a9-49f4-abbf-4a1166f224e0>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00540-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
Simple, accurate, and efficient revisions to MacCormack and Saulyev schemes: high Péclet numbers.
(English) Zbl 1114.65103
Summary: Stream water quality modeling often involves numerical methods to solve the dynamic one-dimensional advection-dispersion-reaction equations (ADRE). There are numerous explicit and implicit
finite difference schemes for solving these problems, and two commonly used schemes are the MacCormack and Saulyev schemes.
This paper presents simple revisions to these schemes that make them more accurate without significant loss of computation efficiency. Using advection dominated (high Péclet number) problems as test
cases, performances of the revised schemes are compared to performances of five classic schemes: forward-time/centered-space (FTCS); backward-time/centered-space (BTCS); Crank-Nicolson; and the
traditional MacCormack and Saulyev schemes. All seven of the above numerical schemes are tested against analytical solutions for pulse and step inputs of mass to a steady flow in a channel, and
performances are considered with respect to stability, accuracy, and computational efficiency.
Results indicate that both the modified Saulyev and the MacCormack schemes, which are named the Saulyev${}_{c}$ and MacCormack${}_{c}$ schemes, respectively, greatly improve the prediction accuracy
over the original ones. The computation efficiency in terms of CPU time was not impacted for the Saulyev${}_{c}$ scheme. The MacCormack${}_{c}$ scheme demonstrates increased time consumption but is
still much faster than implicit schemes.
65M06 Finite difference methods (IVP of PDE)
35K15 Second order parabolic equations, initial value problems | {"url":"http://zbmath.org/?q=an:1114.65103","timestamp":"2014-04-18T00:26:52Z","content_type":null,"content_length":"22682","record_id":"<urn:uuid:6618c2fb-63b8-4a94-ac45-8441fe2063f5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00292-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: [asa] Thermodynamics & Eternal Unive
Re: [asa] Thermodynamics & Eternal Universe - A Question
From: gordon brown <Gordon.Brown@Colorado.EDU> Date: Wed Oct 01 2008 - 21:32:19 EDT
> You will note that I specified integers. You can keep going with division
> and get an infinity of rational numbers between each pair of integers and
> have a greater infinity than that of the integers. You can also have a
> larger number yet of irrational numbers. That's without counting
> imaginary numbers and the infinite number of modular arithmetics. If you
> want to go with Planck values, assign an integer to each, although an
> integral ordinal would probably be better. It will apply to what you
> assumed for your claim of a proof.
> Dave (ASA)
I agree with the point that the above is making, and so the technical
correction that I offer is not intended to detract from it. The number of
rational numbers between each pair of integers is the same order of
infinity as that of the integers. However the number of irrational numbers
is indeed larger.
Gordon Brown (ASA member)
To unsubscribe, send a message to majordomo@calvin.edu with
"unsubscribe asa" (no quotes) as the body of the message.
Received on Wed Oct 1 21:33:08 2008
This archive was generated by hypermail 2.1.8 : Wed Oct 01 2008 - 21:33:09 EDT | {"url":"http://www2.asa3.org/archive/asa/200810/0017.html","timestamp":"2014-04-20T08:34:46Z","content_type":null,"content_length":"6721","record_id":"<urn:uuid:8b69ad9d-9954-4440-b459-fc9f12cc45f7>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
Defining tangent line using new parameter to a curve
April 1st 2011, 05:10 AM
Defining tangent line using new parameter to a curve
I have been asked a series of questions on the following curve
r(t) = (1+cost,2sint,0) where t varies from 0 to 2pi
part a) required me to find the orientation of the tangent vector point (pi/3)
I found the tangent vector r'(t)= (-sint,2cost,0) and substituted in (pi/3)
r'(pi/3)= -sqrt(3)/2 i + j
but part b) requires me to use a second parametric variable to define the tangent line to the point (pi/3).
I knwo this isnt to hard , but i dont quite understand what the question is asking. What do they mean define? have i not already defined it at this point? btw the answer given is q(s)=(1.5-sqrt
(3)s/2, sqrt(3) + s ,0)
Any help understanding this would be appreciated
April 1st 2011, 06:40 AM
You have parametric equations for the given figure (it happens to be an ellipse) but now they want you to write parametric equations for the tangent line to the figure. Yes, the tangent vector to
the ellipse at $t= \pi/3$ is $-\sqrt{3}{2}\vec{i}+ \vec{j}$ and one point on the tangent line is, of course, the point on the ellipse, $(3/2, \sqrt{3}, 0)$.
In three dimensions, parametric equations for a line through $(x_0, y_0, z_0)$ with "direction vector" $A\vec{i}+ B\vec{j}+ C\vec{k}$ are $x= As+ x_0$, $y= Bs+ y_0$, $z= Cs+ z_0$ where "s" is the | {"url":"http://mathhelpforum.com/calculus/176517-defining-tangent-line-using-new-parameter-curve-print.html","timestamp":"2014-04-20T21:27:38Z","content_type":null,"content_length":"6113","record_id":"<urn:uuid:4416166e-30b9-40b0-b608-9671114cabf0>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
weak equivalence of simplicial sets
up vote 2 down vote favorite
Given a morphism f:X --> Y in sSet, and assume that it induces isomorphisms for \pi_0,\pi_1,\pi_2, and all integral homology groups. Does it imply that f is a weak equivalence?
In Hatcher's Algebraic Topology book, he requires both X and Y simply-connected.
Here is a possible idea of `proof': we may assume f is a fibration between fibrant objects, and let Z be the fiber of f. The long exact sequence for homotopy groups shows that Z is simply-connected.
Then need to use Leray-Serre spectral sequence to see all integral homology of Z vanishes. But since Y may not be simply-connected, it is hard to check the condition of the spectral sequence to hold,
and I am not good at the twisted coefficients. Just wondering if there is any counterexample for this question, and hopefully some references as well. Thank you.
homotopy-theory at.algebraic-topology
2 It seems that this question (and questions of this kind) can be answered generally using usual topological spaces. Using words like simplicial sets, morphisms and fibrant objects may be overkill!
– Somnath Basu Apr 28 '11 at 4:04
1 In my answer to this question: mathoverflow.net/questions/53399/…; I gave an example of a map which is a homology equivalence and an isomorphism on $\pi_i$ for $i < n$ (for a fixed $n$ that can be
arbitrary large). Moreover, all homotopy groups of these spaces are abstractly isomorphic. You can see what goes wrong in the Leray-Serre spectral sequence. Talking about fibrant simplicial sets
in this situation is a distraction. – Johannes Ebert Apr 28 '11 at 8:21
A counterexample is given in Example 4.35 in the textbook that's mentioned in the question. It's a pretty simple construction: Start with $S^1\vee S^n$, $n >1$, and attach an $(n+1)$-cell by a map
2 $S^n \to S^1 \vee S^n$ representing the element $2t-1$ in $\pi_n(S^1 \vee S^n) = {\Bbb Z}[t,t^{-1}]$. Then the inclusion of $S^1$ into the resulting space is an isomorphism on all homology groups
and on $\pi_i$ for $i < n$ but not on $\pi_n$. – Allen Hatcher Apr 29 '11 at 16:14
add comment
1 Answer
active oldest votes
The answer is no, and there are plenty of counterexamples. Note that simplicial sets are not relevent here; one can cook up examples with spaces and take their total singular complexes.
For example, there are high dimensional knots $K: S^n \to S^{n+2}$ (i.e., smooth embeddings with $n > 1$) such that the complement $X = S^{n+2} - K(S^n)$ has $\pi_1(X) = \Bbb Z$. A
generator is represented by a map $X \to S^1$ which is a both a $\pi_1$- and a homology isomorphism. This will give examples with the exception of your condition on $\pi_2$.
up vote 6 To get the $\pi_2$ condition on the above consider the subclass of those knots such that $n = 2k+1$ is odd and $\pi_j(X) \cong \pi_j(S^1)$ for $j\le k$ and $\pi_{k+1}(X) \ne 0$. These are
down vote called "simple knots." There is a complete classification of these in terms of a certain bilinear form (the Blanchfield pairing). The classification was announced by Kearton in the paper
Classification of simple knots by Blanchfield duality. Bull. Amer. Math. Soc. 79 (1973), 952–955
add comment
Not the answer you're looking for? Browse other questions tagged homotopy-theory at.algebraic-topology or ask your own question. | {"url":"http://mathoverflow.net/questions/63214/weak-equivalence-of-simplicial-sets/63219","timestamp":"2014-04-20T18:30:13Z","content_type":null,"content_length":"55679","record_id":"<urn:uuid:811b9556-764f-4387-b844-6767e8ad7b83>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00543-ip-10-147-4-33.ec2.internal.warc.gz"} |
Random errors in MOS capacitors
Results 1 - 10 of 12
- IEEE J. Solid-State Circuits , 1989
"... Abstract-The matching properties of the threshold voltage, substrate factor, and current factor of MOS transistors have been analyzed and measured. Improvements to the existing theory are given,
as well as extensions for long-distance matching and rotation of devices. Matching parameters of several ..."
Cited by 208 (1 self)
Add to MetaCart
Abstract-The matching properties of the threshold voltage, substrate factor, and current factor of MOS transistors have been analyzed and measured. Improvements to the existing theory are given, as
well as extensions for long-distance matching and rotation of devices. Matching parameters of several processes are compared. The matching results have been verified by measurements and calculations
on several basic circuits. 1.
, 1996
"... A dense and fast threshold-logic gate with a very high fan-in capacity is described. The gate performs sum-ofproduct and thresholding operations in an architecture comprising a poly-to-poly
capacitor array and an inverter chain. The Boolean function performed by the gate is soft programmable. This i ..."
Cited by 26 (2 self)
Add to MetaCart
A dense and fast threshold-logic gate with a very high fan-in capacity is described. The gate performs sum-ofproduct and thresholding operations in an architecture comprising a poly-to-poly capacitor
array and an inverter chain. The Boolean function performed by the gate is soft programmable. This is accomplished by adjusting the threshold with a dc voltage. Essentially, the operation is dynamic
and thus, requires periodic reset. However, the gate can evaluate multiple input vectors in between two successive reset phases because evaluation is nondestructive. Asynchronous operation is,
therefore, possible. The paper presents an electrical analysis of the gate, identifies its limitations, and describes a test chip containing four different gates of fan-in 30, 62, 127, and 255.
Experimental results confirming proper functionality in all these gates are given, and applications in arithmetic and logic function blocks are described. I. INTRODUCTION T HRESHOLD logic (TL)
originally emerged ...
- IEEE Journal of Solid-State Circuits , 1999
"... Abstract — This paper describes a three-axis accelerometer implemented in a surface-micromachining technology with integrated CMOS. The accelerometer measures changes in a capacitive half-bridge
to detect deflections of a proof mass, which result from acceleration input. The half-bridge is connected ..."
Cited by 19 (3 self)
Add to MetaCart
Abstract — This paper describes a three-axis accelerometer implemented in a surface-micromachining technology with integrated CMOS. The accelerometer measures changes in a capacitive half-bridge to
detect deflections of a proof mass, which result from acceleration input. The half-bridge is connected to a fully differential position-sense interface, the output of which is used for one-bit force
feedback. By enclosing the proof mass in a one-bit feedback loop, simultaneous force balancing and analog-to-digital conversion are achieved. On-chip digital offset-trim electronics enable
compensation of random offset in the electronic interface. Analytical performance calculations are shown to accurately model device behavior. The fabricated singlechip accelerometer measures 4 2 4mm
P, draws 27 mA from a 5-V supply, and has a dynamic range of 84, 81, and 70 dB along the �-, �-, and �-axes, respectively. Index Terms—Accelerometer, calibration, force balance,
microelectromechanical systems (MEMS), sensor, sigma–delta.
- IEEE International Electron Devices Meeting , 1998
"... This paper gives an overview of MOSFET mismatch effects that form a performance/yield limitation for many designs. After a general description of (mis)matching, a comparison over past and future
process generations is presented. The application of the matching model in CAD and analog circuit de-of t ..."
Cited by 11 (0 self)
Add to MetaCart
This paper gives an overview of MOSFET mismatch effects that form a performance/yield limitation for many designs. After a general description of (mis)matching, a comparison over past and future
process generations is presented. The application of the matching model in CAD and analog circuit de-of these parallel paths (e.g. in multiplexers, comparators, input stages etc.) is important.
Figure 8 will give an example of how transistor matching influences clock delay differences in clock trees. Hence, unequal paths lead to performance or yield loss in analog circuits or reduce
robustness in digital circuits. Threshold matching sign is discussed. Mismatch effects gain importance The difference AVT between the threshold voltages as critical dimensions and CMOS power supply
volt- of a pair of MOS transistors (mismatch) is wuaily ages decrease. described[2], [3], [4],[5], [6], [7] by its standard deviation:
, 2000
"... Parametric faults are a significant cause of incorrect operation in analog circuits. Many design for test techniques for analog circuits are ineffective at detecting multiple parametric faults
because either their accuracy is poor, or the circuit is not tested in the configuration in which it is use ..."
Cited by 4 (0 self)
Add to MetaCart
Parametric faults are a significant cause of incorrect operation in analog circuits. Many design for test techniques for analog circuits are ineffective at detecting multiple parametric faults
because either their accuracy is poor, or the circuit is not tested in the configuration in which it is used. We present a design for test (DFT) scheme that offers the accuracy needed to test
high-quality circuits. The DFT scheme is based on a circuit that digitally measures the ratio of a pair of capacitors. The circuit is used to characterize the transfer function of a switched
capacitor circuit, which is usually determined by capacitor ratios. In our DFT scheme, capacitor ratios can be measured to within 0.01% accuracy and filter parameters can be shown to be satisfied to
within 0.1% accuracy. With this characterization process, a filter can be directly shown to satisfy all specifications that depend on capacitor ratios. We believe the accuracy of our approach is at
least an order of magnitude...
- IEEE Trans. Circuits Syst , 1991
"... A&r-u &-A novel procedure that determines the capacitor values for a given integrator-based SC network with given capacitor ratios is presented. The procedure optimally distributes a limited
capacitance area among the individual circuit capacitors by minimizing the overall capacitor spread while sim ..."
Cited by 1 (1 self)
Add to MetaCart
A&r-u &-A novel procedure that determines the capacitor values for a given integrator-based SC network with given capacitor ratios is presented. The procedure optimally distributes a limited
capacitance area among the individual circuit capacitors by minimizing the overall capacitor spread while simultaneously minimizing either sensitivity or noise. Noise in SC circuits is a function of
ideal SC design parameters such as capacitor ratios and capacitance levels and of the technology-dependent parameters describing the switches and amplifiers. In our description of the noise
performance, we have found a characteristic point which is only a function of SC design parameters and can thus serve as a measure for the noise performance. For its description a closed-form
expression is used, which has the same form as the corresponding sensitivity measure. With these expressions an efficient capacitance assignment optimization procedure is derived, which is
implemented in the computeraided design and optimization program package SCSYN. I.
, 2001
"... The implementation of active pixel based image sensors in CMOS technology is becoming increasingly important forproducing imaging systems that can be manufactured with low cost, low power,
simple interface, and with good image quality. The major obstacle in the design of CMOS imagers is Fixed Patter ..."
Add to MetaCart
The implementation of active pixel based image sensors in CMOS technology is becoming increasingly important forproducing imaging systems that can be manufactured with low cost, low power, simple
interface, and with good image quality. The major obstacle in the design of CMOS imagers is Fixed Pattern Noise (FPN) and Signal-to-Noise-Ratio (SNR) of the video output. This research focuses on
minimizing FPN and improving SNR in linear CMOS image sensors which are needed in scanning and swiping applications such as nger print sensing, spectroscopy, and medical imaging systems. FPN is
reduced in this research through the use of closed loop operational ampli ers in active pixels and through performing Correlated Double Sampling (CDS). SNR is improved by increasing the pixel
saturation voltage. This thesis concludes that FPN can be reduced using the closed loop opamp bu ers. The major FPN noise sources are the shot noise from the photodiode, kTC noise from the sampling
capacitors, and o set mismatches in the sample and hold ampli ers all of which are not compensated by CDS. Sample and hold ampli er o set mismatch is identi ed as
, 2007
"... A carbon nanotube is considered as a candidate for a next-generation chemical sensor. CNT sensors are attractive as they allow room-temperature sensing of chemicals. From the system perspective,
this signifies that the sensor system does not require any micro hotplates, which are one of the major so ..."
Add to MetaCart
A carbon nanotube is considered as a candidate for a next-generation chemical sensor. CNT sensors are attractive as they allow room-temperature sensing of chemicals. From the system perspective, this
signifies that the sensor system does not require any micro hotplates, which are one of the major sources of power dissipation in other types of sensor systems. Nevertheless, a poor control of the
CNT resistance poses a constraint on the attainable energy efficiency of the sensor platform. An investigation on the CNT sensors shows that the dynamic range of the interface should be 17 bits,
while the resolution at each base resistance should be 7 bits. The proposed CMOS interface extends upon the previously published work to optimize the energy performance through both the architecture
and circuit level innovations. The 17-bit dynamic range is attained by distributing the requirement into a 10-bit Analog-to-Digital Converter (ADC) and a 8-bit Digital-to-Analog Converter (DAC). An
extra 1-bit leaves room for any unaccounted subblock performance error.
, 2002
"... Serietitel och serienummer Title of series, numbering ISSN 1400-3902 URL för elektronisk version ..."
Add to MetaCart
Serietitel och serienummer Title of series, numbering ISSN 1400-3902 URL för elektronisk version
"... The paper is an overview of MOS transistor mismatch modeling and simulation over the existent literature. The fluctuations of physical parameters and line width are the main causes of mismatch.
There are two types of mismatch. Systematic mismatch can be reduced to great extent with proper layout. Di ..."
Add to MetaCart
The paper is an overview of MOS transistor mismatch modeling and simulation over the existent literature. The fluctuations of physical parameters and line width are the main causes of mismatch. There
are two types of mismatch. Systematic mismatch can be reduced to great extent with proper layout. Different patterns are available, that are able to reduce from linear to n-th order polynomial
systematic mismatch. Stochastic mismatch can only be reduced with better process control and larger transistor areas. There are different approaches for calculating the standard deviation
representing stochastic mismatch. Simple formulas (e.g. square root of area rule) are most commonly used. With the reducing of the transistor area some new effects should be considerate and more
complex formulas are needed. On the other hand correlation functions and frequency domain analysis with spatial spectra give more accurate results. These two approaches are more general but they do
not give physical insight and the final layout should be known. Mismatch can be simulated in several ways. Brute force simulation based on Monte-Carlo analysis is appropriate for any kind of
distribution but it is the most time expensive. Simulations based on small signal analysis are faster because less circuit simulations are needed to calculate the sensitivity. Two different
approaches to calculate the sensitivity are presented in this paper. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2556639","timestamp":"2014-04-19T20:02:33Z","content_type":null,"content_length":"39138","record_id":"<urn:uuid:305b4a91-bb77-4777-9ced-16c0469d97b9>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00521-ip-10-147-4-33.ec2.internal.warc.gz"} |
how can I do this? what is the equation of the graph? Please help
Okay, so seeing two graphs on one set of axis suggests a piece wise function. The sharp turn in the lower graph indicates that it is an absolute value function. Now, I can't see the markings on the
graph so clearly, so there may be some errors. I assume the lowest point on the graph, that is, where the sharp turn is is (-1,1), and that the y-intercept is 2 and that the horizontal graph is in
line with y=5 and that the dots are above x=-3. So here goes: An absolute valued function always gives positive answers, so whenever the answer is going to become negative, it it flips the graph
upward, which is why we see a sharp turn. We see the graph flips when x is at -1, which means if x gets any smaller, the value will be negative, so then the absolute value part of the function is |
x+1|, when x=-1, it becomes zero, anyless than -1, it is negative and therefore the graph flips up making a sharp turn. but we're not done yet. The y-intercept is 2. The y-intercept occurs when x is
zero. In this case, if x=0, we have y=|1|=1. So to correct this, we just shift the graph upward by 1 by adding a constant 1, so the absolute value part of the function is |x+1| + 1. The other part of
the graph is easy (provided I can see it correctly), the graph is a constant 5, so it's y=5. Now we put these 2 functions together and obtain:
Whoops, there should be a pic there, but the forum tells me it's file size is too large. I'll try again
okay, here it is. We see the circle attached to the end of the y=5 curve is unshaded, and therefore it means x is not equal to -3 at the point. The shaded circle at the end of the absolute value part
says it can be equal.
thanks for your explanation.:) I understood this much very well. let me try to do the questions. can you do the last one (continuity) please
No, the function is not continuous at x= -3. Explanation: A function f is continuous at a number a if lim{x-->a}f(x)=f(a). Here, we actually can find a value for f(a), that is f(-3)=2, but it is not
equal to the limit. Since this lim at x=-3 does not exist. For a limit to exist, the left hand limit must be equal to the right hand limit. here, the left hand limit is 5, the right hand limit is 2
(looks like i ended up doing parts (a) and (b) for you by explaining this), so the limit does not exist and therefore, the function is not continuous. Note: left hand limit is lim{x-->a-} and right
hand limit is lim{x-->a+} So you se, all the parts leading up to (e) was to give you clues. for the function to be continuous, we had to have the answers (a)=(b)=(c)=(d), but that was not the case.
(a) was not equal to (b) and (c) didn't even exist | {"url":"http://mathhelpforum.com/calculus/11793-graph-print.html","timestamp":"2014-04-20T19:22:30Z","content_type":null,"content_length":"7548","record_id":"<urn:uuid:12304d15-98d7-47fc-9683-25a709dc8157>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00303-ip-10-147-4-33.ec2.internal.warc.gz"} |
The analysis of failure times in the presence of competing risks
Results 1 - 10 of 25
- Journal of the American Statistical Association , 1996
"... this paper we propose state space or dynamic models as a flexible technique, which makes simultaneous modelling and smooth estimation of hazard functions and covariate effects possible. The
development is related to a dynamic version of the piecewise exponential model and extensions to point process ..."
Cited by 8 (4 self)
Add to MetaCart
this paper we propose state space or dynamic models as a flexible technique, which makes simultaneous modelling and smooth estimation of hazard functions and covariate effects possible. The
development is related to a dynamic version of the piecewise exponential model and extensions to point processes studied by Gamerman (1991, 1992) and, more closely, to dynamic grouped survival models
with only one terminating event (Fahrmeir 1994), where a generalized Kalman filter and smoother (GKFS) is proposed for estimating hazard functions and time-varying effects. Here we extend this
approach to models with multiple terminating events (Section 2) and develop a numerically efficient Fisher scoring smoothing algorithm (Section 3). It is obtained by extending iterative Kalman-type
techniques for multicategorical time series (Fahrmeir and Tutz, 1994 Ch.8; Fahrmeir and Wagenpfeil, 1995) to the present situation. The smoothing algorithms can be derived as posterior mode
estimators or, from a nonparametric point of view, as penalized likelihood estimators. For only one terminating event (m=1), they generally improve (GKFS) with regard to numerical accuracy and
approximation quality. Data-driven choice of smoothing- or hyperparameters can be achieved by an EM-type algorithm or by cross-validation. 5
, 1999
"... SUMMARY. Over the last decade, J. M. Robins has developed a set of tools for assessing, from observa-tional data, the causal effects of a time-dependent treatment or exposure in the presence of
time-dependent covariates that may be simultaneously confounders and intermediate variables. This report c ..."
Cited by 6 (0 self)
Add to MetaCart
SUMMARY. Over the last decade, J. M. Robins has developed a set of tools for assessing, from observa-tional data, the causal effects of a time-dependent treatment or exposure in the presence of
time-dependent covariates that may be simultaneously confounders and intermediate variables. This report concerns a case study of the application of one these techniques, G-estimation using
structural nested failure time models, to the problem of assessing the effect of graft versus host disease on leukemia relapse after bone marrow transplantation.
- Statistics in Medicine 16 , 1997
"... In the competing risks problem, a useful quantity is the cumulative incidence function, which is the probability of occurrence by time t for a particular type of failure in the presence of other
risks. The estimator of this function as given by Kalbfleisch and Prentice is consistent, and, properly n ..."
Cited by 5 (0 self)
Add to MetaCart
In the competing risks problem, a useful quantity is the cumulative incidence function, which is the probability of occurrence by time t for a particular type of failure in the presence of other
risks. The estimator of this function as given by Kalbfleisch and Prentice is consistent, and, properly normalized, converges weakly to a zero-mean Gaussian process with a covariance function for
which a consistent estimator is provided. A resampling technique is developed to approximate the distribution of this process, which enables one to construct confidence bands for the cumulative
incidence curve over the entire time span of interest and to perform Kolmogorov—Smirnov type tests for comparing two such curves. An AIDS
- Diskussionspapier Nr. 14 des SFB 386, LMU Munchen , 1997
"... Discrete--time grouped duration data, with one or multiple types of terminating events, are often observed in social sciences or economics. In this paper we suggest and discuss dynamic models
for flexible Bayesian nonparametric analysis of such data. These models allow simultaneous incorporation and ..."
Cited by 2 (0 self)
Add to MetaCart
Discrete--time grouped duration data, with one or multiple types of terminating events, are often observed in social sciences or economics. In this paper we suggest and discuss dynamic models for
flexible Bayesian nonparametric analysis of such data. These models allow simultaneous incorporation and estimation of baseline hazards and time--varying covariate effects, without imposing
particular parametric forms. Methods for exploring the possibility of time--varying effects, as for example the impact of nationality or unemployment insurance benefits on the probability of
re--employment, have recently gained increasing interest. Our modeling and estimation approach is fully Bayesian and makes use of Markov Chain Monte Carlo (MCMC) simulation techniques. A detailed
analysis of unemployment duration data, with full--time job, part--time job and other causes as terminating events, illustrates our methods and shows how they can be used to obtain refined results
and interpretations. Key words...
, 2008
"... Simultaneous discrimination among various parametric lifetime models is an important step in the parametric analysis of survival data. We consider a plot of the skewness versus the coefficient
of variation for the purpose of discriminating among parametric survival models. We extend the method of C ..."
Cited by 1 (0 self)
Add to MetaCart
Simultaneous discrimination among various parametric lifetime models is an important step in the parametric analysis of survival data. We consider a plot of the skewness versus the coefficient of
variation for the purpose of discriminating among parametric survival models. We extend the method of Cox & Oakes from complete to censored data by developing an algorithm based on a competing risks
model and kernel function estimation. A by-product of this algorithm is a nonparametric survival function estimate.
, 2006
"... For time-to-event data with finitely many competing risks, the proportional hazards model has been a popular tool for relating the cause-specific outcomes to covariates [Prentice et al.
Biometrics 34 (1978) 541–554]. This article studies an extension of this approach to allow a continuum of competin ..."
Cited by 1 (0 self)
Add to MetaCart
For time-to-event data with finitely many competing risks, the proportional hazards model has been a popular tool for relating the cause-specific outcomes to covariates [Prentice et al. Biometrics 34
(1978) 541–554]. This article studies an extension of this approach to allow a continuum of competing risks, in which the cause of failure is replaced by a continuous mark only observed at the
failure time. We develop inference for the proportional hazards model in which the regression parameters depend nonparametrically on the mark and the baseline hazard depends nonparametrically on both
time and mark. This work is motivated by the need to assess HIV vaccine efficacy, while taking into account the genetic divergence of infecting HIV viruses in trial participants from the HIV strain
that is contained in the vaccine, and adjusting for covariate effects. Mark-specific vaccine efficacy is expressed in terms of one of the regression functions in the mark-specific proportional
hazards model. The new approach is
"... This paper develops omnibus tests for comparing cause-specific hazard rates and cumulative incidence functions at specified covariate levels. Confidence bands for the difference and the ratio of
two conditional cumulative incidence functions are also constructed. The omnibus test is formulated in te ..."
Cited by 1 (0 self)
Add to MetaCart
This paper develops omnibus tests for comparing cause-specific hazard rates and cumulative incidence functions at specified covariate levels. Confidence bands for the difference and the ratio of two
conditional cumulative incidence functions are also constructed. The omnibus test is formulated in terms of a test process given by a weighted difference of estimates of cumulative cause-specific
hazard rates under Cox proportional hazards models. A simulation procedure is devised for sampling from the null distribution of the test process, leading to graphical and numerical techniques for
detecting significant differences in the risks. The approach is applied to a cohort study of type-specific HIV infection rates. Key words: Cause-specific hazard rates; Cox proportional hazards model;
Cumulative incidence function; Dependent competing risks; Human immunodeficiency virus. 1 1. Introduction In longitudinal studies, where individuals are subject to failure f
"... In the competing risks literature, one usually compares whether two risks are equal or whether one is "more serious." In this paper, we propose tests for the equality of two competing risks
against an ordered alternative specified by their sub-survival functions. These tests are naturally developed ..."
Add to MetaCart
In the competing risks literature, one usually compares whether two risks are equal or whether one is "more serious." In this paper, we propose tests for the equality of two competing risks against
an ordered alternative specified by their sub-survival functions. These tests are naturally developed as extensions of those based on hazard rates and cumulative incidence functions. We note that the
interpretation of the new test results is more direct compared to the situation when the hypotheses are framed in terms of their cumulative incidence functions. The proposed tests are of the
Kolmogrov-Smirnov type, based on maximum differences between sub-survival functions. Our simulation studies indicate that they are excellent competitors of the existing tests, that are based mainly
on differences between cumulative incidence functions. A numerical example will demonstrate the advantages of the proposed tests. 1 Introduction The competing risks problem involves subjects or
experimental unit...
, 1989
"... PAIGE L. WilLIAMS. Analytic Expressions for Maximum Likelihood Estimators in a Nonparametric Model ofTumor Incidence and Death (under the direction ofDr. Christopher 1. Portier) ABSTRACf: The
primary objective ofa long-tenn animal carcinogenicity experiment is the comparison of tumor incidence rates ..."
Add to MetaCart
PAIGE L. WilLIAMS. Analytic Expressions for Maximum Likelihood Estimators in a Nonparametric Model ofTumor Incidence and Death (under the direction ofDr. Christopher 1. Portier) ABSTRACf: The primary
objective ofa long-tenn animal carcinogenicity experiment is the comparison of tumor incidence rates among treatment groups. Complications arise in the statistical analysis of tumor incidence data
when the tumor type ofinterest is not observable. Since reliance on assumptions regarding tumor and treatment lethality is likely to introduce bias, this research focuses attention on the estimation
of tumor incidence rates from long-term animal studies which incorporate interim sacrifices. A nonparametric stochastic model is described with transition rates between states corresponding to the
tumor incidence rate, overall death rate, and death rate for tumor-free animals. Exact analytic solutions for the maximum likelihood estimators (MLE's) ofthe discrete hazard rates are presented, and
constrained MLE's are derived for a study design with up to three intervals under the imposition of boundary constraints. For a study design with more than three intervals, alternative estimators
ofthe discrete death rates and tumor incidence rate are | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2048135","timestamp":"2014-04-19T23:15:04Z","content_type":null,"content_length":"38404","record_id":"<urn:uuid:67398bf8-e1bc-4e87-a533-501bd8693d1a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gravitational entropy and thermodynamics away from the horizon
Brustein, R and Medved, A.J.M (2012) Gravitational entropy and thermodynamics away from the horizon. Physics Letters B, 715 (1-3). pp. 267-270.
We define, by an integral of geometric quantities over a spherical shell of arbitrary radius, an invariant gravitational entropy. This definition relies on defining a gravitational energy and
pressure, and it reduces at the horizon of both black branes and black holes to Wald's Noether charge entropy. We support the thermodynamic interpretation of the proposed entropy by showing that, for
some cases, the field theory duals of the entropy, energy and pressure are the same as the corresponding quantities in the field theory. In this context, the Einstein equations are equivalent to the
field theory thermodynamic relation TdS=dE+PdV supplemented by an equation of state.
Item Type: Article
Uncontrolled Keywords: Black-hole entropy; Field-theories; String theory; Gravity; tensor; High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)
Subjects: Q Science > QC Physics
Divisions: Faculty > Faculty of Science > Physics & Electronics
ID Code: 3777
Deposited By: Mrs Eileen Shepherd
Deposited On: 23 Oct 2012 12:58
Last Modified: 23 Oct 2012 12:58
full-text download(s) since 23 Oct 2012 12:58
full-text download(s) in the past 12 months
More statistics...
Repository Staff Only: item control page | {"url":"http://eprints.ru.ac.za/3777/","timestamp":"2014-04-16T16:49:36Z","content_type":null,"content_length":"21856","record_id":"<urn:uuid:fac17c31-bf1e-4061-a5c0-d85d94caabbd>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00288-ip-10-147-4-33.ec2.internal.warc.gz"} |
Higher-Dimensional Algebra: A Language for Quantum Spacetime
Category theory is a general language for describing things and processes - called "objects" and "morphisms". In this language, many counterintuitive features of quantum theory turn out to be
properties shared by the category of Hilbert spaces and the category of cobordisms, in which objects are choices of "space" and emorphisms are choices of "spacetime". This striking fact suggests that
"n-categories with duals" are a promising language for a quantum theory of spacetime. We sketch the historical development of these ideas from Feynman diagrams to string theory, topological quantum
field theory, spin networks and spin foams, and especially recent work on open-closed string theory, 3d quantum gravity coupled to point particles, and 4d BF theory coupled to strings. | {"url":"http://www.perimeterinstitute.ca/videos/higher-dimensional-algebra-language-quantum-spacetime","timestamp":"2014-04-21T01:20:37Z","content_type":null,"content_length":"27297","record_id":"<urn:uuid:5858bd0c-7487-4526-a3a9-49ff20fffbfa>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00255-ip-10-147-4-33.ec2.internal.warc.gz"} |
Triplex Algebra
Welcome, Guest. Please login or register. April 18, 2014, 08:14:16 PM
Author Topic: Triplex Algebra (Read 26177 times)
Description: Triplex Algebra
0 Members and 1 Guest are viewing this topic.
Re: Triplex Algebra
« Reply #105 on: September 19, 2010, 04:15:35 AM »
Bugman has defined a triplex polar form using a particular matrix product. I have extended this idea to give 48 different polar forms. If you are interested then please have a look at: http://
Re: Triplex Algebra
« Reply #106 on: September 20, 2010, 05:42:37 AM »
Something a hell of a lot easier is.. the complex triplex (from another post of mine elsewhere in the forums). It really makes the mathematical relationship to the original 2d set apparent:
Quote from: M Benesi
You don't need anything too complex** to do triplex "algebra". You simply require:
A) a square root function to calculate a magnitude:
r1= sqrt(y^2+z^2)
B) a complex power function:
complex_1= (x + i r1)^n
complex_2= (y + i z)^n
C) a real power function:
r3=r1^-n (you are applying the magnitude of y and z two times (once in each complex number), so need to divide it out once)
D) the ability to directly access the real and imaginary components of the complex numbers:
new x = real part of complex 1 + x pixel value OR x Julia seed (for Julias use the seed, Mandys use the pixel value)
new y = imaginary part of complex_1 * real part of complex 2 * r3 + y pixel value OR y Julia seed
new z = imaginary part of complex_1 * imaginary part of complex 2 * r3 + z pixel value OR z Julia seed
It's easily extended to higher dimensions... and is faster than the trig version in certain compilers (although I haven't tried them all).
** pun was and is still intended.... 2 complex.. although I didn't mention that it was intentional in the original post, as I felt it was a bit heavy handed to point out the pun. This, however, has
Re: Triplex Algebra
« Reply #107 on: September 22, 2010, 02:33:45 AM »
Here is an image from one of the 48 variations:
Many more are at
All were made using Visins of Chaos.
Re: Triplex Algebra
« Reply #108 on: September 22, 2010, 03:11:26 AM »
"I have extended this idea to give 48 different polar forms"
I think you could say there are an infinity of polar forms, and I think it is right to say there is one for each rotation, which is equivalent to rotating the point by this rotation each iteration.
(75.09 KB, 70x70 - viewed 776 times.)
Re: Triplex Algebra
« Reply #109 on: September 26, 2010, 07:25:52 AM »
Have a look at Soler's PDF. It does show variations that do cover 48 possible variations. Some of them do match the orginal +SIN -SIN COS etc variations, but the rest are new and do give unique bulb
types. I have included Phase, Theta and Phi scaling/shifting in all the existing bulb formulas and these 48 do give new/unique variations.
Anyway, I hope the next 3D fractal that gets as much publicity as the Mandelbulb comes from these forums. Keep going guys. The Mandelbulb and Kaleidoscopic IFS were great examples of someone thinking
"what if" outside the scientific community.
« Last Edit: September 26, 2010, 07:36:20 AM by Softology »
Re: Triplex Algebra
« Reply #110 on: September 27, 2010, 02:23:11 PM »
Bugman has defined a triplex polar form using a particular matrix product.
I have extended this idea to give 48 different polar forms. If you are
interested then please have a look at:
Here is an image from one of the 48 variations:
Many more are at: http://soler7.com/Fractals/3D0.html
All were made using Visins of Chaos.
Greetings, and Welcome to this particular Forum !!!
Some nice work you have there. Looking forward to your future contributions.
_Sincerely, Paul N. Lee_ _ _
Re: Triplex Algebra
« Reply #111 on: November 23, 2010, 10:45:23 AM »
Here is an image from one of the 48 variations:
<Quoted Image Removed>
Many more are at
All were made using Visins of Chaos.
Wow, nice image! Very interesting!
Fractalis Surrealis: http://mandelwerk.deviantart.com/gallery/28152444
And my 3.3 GIGA PIXEL zoomable mandelbulb: http://mandelwerk.com/
Re: Triplex Algebra
« Reply #112 on: July 23, 2011, 07:05:41 AM »
Here are another 20 new variations.
A few sample images...
Re: Triplex Algebra
« Reply #113 on: July 27, 2011, 12:00:11 AM »
27 more new variations
Re: Triplex Algebra
« Reply #114 on: July 27, 2011, 01:57:16 PM »
Very nice! Paticularly like the plant analogue.
May a trochoid of ¥h¶h iteratively entrain your Logos Response transforming into iridescent fractals of orgasmic delight and joy, with kindness, peace and gratitude at all scales within your
experience. I beg of you to enrich others as you have been enriched, in vorticose pulsations of extravagance!
Re: Triplex Algebra
« Reply #115 on: August 01, 2011, 02:59:45 AM »
And yet another 16 new varieties.
Re: Triplex Algebra
« Reply #105 on: September 19, 2010, 04:15:35 AM »
Bugman has defined a triplex polar form using a particular matrix product. I have extended this idea to give 48 different polar forms. If you are interested then please have a look at: http://
Re: Triplex Algebra
« Reply #105 on: September 19, 2010, 04:15:35 AM »
Re: Triplex Algebra
« Reply #106 on: September 20, 2010, 05:42:37 AM »
Something a hell of a lot easier is.. the complex triplex (from another post of mine elsewhere in the forums). It really makes the mathematical relationship to the original 2d set apparent:
Quote from: M Benesi
You don't need anything too complex** to do triplex "algebra". You simply require:
A) a square root function to calculate a magnitude:
r1= sqrt(y^2+z^2)
B) a complex power function:
complex_1= (x + i r1)^n
complex_2= (y + i z)^n
C) a real power function:
r3=r1^-n (you are applying the magnitude of y and z two times (once in each complex number), so need to divide it out once)
D) the ability to directly access the real and imaginary components of the complex numbers:
new x = real part of complex 1 + x pixel value OR x Julia seed (for Julias use the seed, Mandys use the pixel value)
new y = imaginary part of complex_1 * real part of complex 2 * r3 + y pixel value OR y Julia seed
new z = imaginary part of complex_1 * imaginary part of complex 2 * r3 + z pixel value OR z Julia seed
It's easily extended to higher dimensions... and is faster than the trig version in certain compilers (although I haven't tried them all).
** pun was and is still intended.... 2 complex.. although I didn't mention that it was intentional in the original post, as I felt it was a bit heavy handed to point out the pun. This, however, has
Re: Triplex Algebra
« Reply #106 on: September 20, 2010, 05:42:37 AM »
Re: Triplex Algebra
« Reply #107 on: September 22, 2010, 02:33:45 AM »
Here is an image from one of the 48 variations:
Many more are at
All were made using Visins of Chaos.
Re: Triplex Algebra
« Reply #107 on: September 22, 2010, 02:33:45 AM »
Re: Triplex Algebra
« Reply #108 on: September 22, 2010, 03:11:26 AM »
"I have extended this idea to give 48 different polar forms"
I think you could say there are an infinity of polar forms, and I think it is right to say there is one for each rotation, which is equivalent to rotating the point by this rotation each iteration.
(75.09 KB, 70x70 - viewed 776 times.)
Re: Triplex Algebra
« Reply #108 on: September 22, 2010, 03:11:26 AM »
Re: Triplex Algebra
« Reply #109 on: September 26, 2010, 07:25:52 AM »
Have a look at Soler's PDF. It does show variations that do cover 48 possible variations. Some of them do match the orginal +SIN -SIN COS etc variations, but the rest are new and do give unique bulb
types. I have included Phase, Theta and Phi scaling/shifting in all the existing bulb formulas and these 48 do give new/unique variations.
Anyway, I hope the next 3D fractal that gets as much publicity as the Mandelbulb comes from these forums. Keep going guys. The Mandelbulb and Kaleidoscopic IFS were great examples of someone thinking
"what if" outside the scientific community.
« Last Edit: September 26, 2010, 07:36:20 AM by Softology »
Re: Triplex Algebra
« Reply #109 on: September 26, 2010, 07:25:52 AM »
« Last Edit: September 26, 2010, 07:36:20 AM by Softology »
Re: Triplex Algebra
« Reply #110 on: September 27, 2010, 02:23:11 PM »
Bugman has defined a triplex polar form using a particular matrix product.
I have extended this idea to give 48 different polar forms. If you are
interested then please have a look at:
Here is an image from one of the 48 variations:
Many more are at: http://soler7.com/Fractals/3D0.html
All were made using Visins of Chaos.
Greetings, and Welcome to this particular Forum !!!
Some nice work you have there. Looking forward to your future contributions.
_Sincerely, Paul N. Lee_ _ _
Re: Triplex Algebra
« Reply #110 on: September 27, 2010, 02:23:11 PM »
Re: Triplex Algebra
« Reply #111 on: November 23, 2010, 10:45:23 AM »
Here is an image from one of the 48 variations:
<Quoted Image Removed>
Many more are at
All were made using Visins of Chaos.
Wow, nice image! Very interesting!
Fractalis Surrealis: http://mandelwerk.deviantart.com/gallery/28152444
And my 3.3 GIGA PIXEL zoomable mandelbulb: http://mandelwerk.com/
Re: Triplex Algebra
« Reply #111 on: November 23, 2010, 10:45:23 AM »
Re: Triplex Algebra
« Reply #112 on: July 23, 2011, 07:05:41 AM »
Here are another 20 new variations.
A few sample images...
Re: Triplex Algebra
« Reply #112 on: July 23, 2011, 07:05:41 AM »
Re: Triplex Algebra
« Reply #113 on: July 27, 2011, 12:00:11 AM »
27 more new variations
Re: Triplex Algebra
« Reply #113 on: July 27, 2011, 12:00:11 AM »
Re: Triplex Algebra
« Reply #114 on: July 27, 2011, 01:57:16 PM »
Very nice! Paticularly like the plant analogue.
May a trochoid of ¥h¶h iteratively entrain your Logos Response transforming into iridescent fractals of orgasmic delight and joy, with kindness, peace and gratitude at all scales within your
experience. I beg of you to enrich others as you have been enriched, in vorticose pulsations of extravagance!
Re: Triplex Algebra
« Reply #114 on: July 27, 2011, 01:57:16 PM »
May a trochoid of ¥h¶h iteratively entrain your Logos Response transforming into iridescent fractals of orgasmic delight and joy, with kindness, peace and gratitude at all scales within your
experience. I beg of you to enrich others as you have been enriched, in vorticose pulsations of extravagance!
Re: Triplex Algebra
« Reply #115 on: August 01, 2011, 02:59:45 AM »
And yet another 16 new varieties.
Re: Triplex Algebra
« Reply #115 on: August 01, 2011, 02:59:45 AM »
Related Topics
Subject Started by Replies Views Last post
Question about triplex algebra and one experiment Theory Jesse 10 1769 January 28, 2010, 10:14:06 AM
by Paolo Bonzini
Planar triplex algebra New Theories & Research M Benesi 7 2086 February 12, 2010, 05:50:09 AM
by M Benesi
Triplex Algebra Fractal Fun Power 8 0 732 April 04, 2010, 09:24:01 PM
by Power 8
Geometric Algebra, Geometric Calculus New Theories & Research « 1 2 3 4 » kram1032 59 2213 April 13, 2014, 11:52:07 PM
by kram1032
Triplex Algebra with Triplex Multiplication Theory n4t3 10 567 August 14, 2013, 07:17:50 AM
by n4t3 | {"url":"http://www.fractalforums.com/theory/triplex-algebra/105/","timestamp":"2014-04-18T18:14:16Z","content_type":null,"content_length":"87762","record_id":"<urn:uuid:eb529f8c-7098-4384-8d7e-e2767707e80b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00156-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] glibc error
Gideon Simpson simpson@math.toronto....
Sun Jan 25 09:44:15 CST 2009
Rebuilding the library against ATLAS 3.8.2 with lapack 3.1.1 seems to
have done the trick. I do get one failure:
FAIL: test_umath.TestComplexFunctions.test_against_cmath
Traceback (most recent call last):
File "/usr/local/nonsystem/simpson/lib/python2.5/site-packages/nose/
case.py", line 182, in runTest
File "/usr/local/nonsystem/simpson/lib/python2.5/site-packages/
numpy/core/tests/test_umath.py", line 268, in test_against_cmath
assert abs(a - b) < atol, "%s %s: %s; cmath: %s"%(fname,p,a,b)
AssertionError: arcsinh -2j: (-1.31695789692-1.57079632679j); cmath:
On Jan 25, 2009, at 5:46 AM, Michael Abshoff wrote:
> David Cournapeau wrote:
>> Hoyt Koepke wrote:
> <SNIP>
>> Actually, I would advise using only 3.8.2. Previous versions had bugs
>> for some core routines used by numpy (at least 3.8.0 did). I am a bit
>> surprised that a 64 bits-built atlas would be runnable at all in a 32
>> bits binary - I would expect the link phase to fail if two different
>> object formats are linked together.
> Linking 32 and 64 bit ELF objects together in an extension will fail
> on
> any system but OSX where the ld will happily link together anything.
> Since that linker also does missing symbol lookup at runtime you will
> see some surprising distutils bugs when you thought that the build
> went
> perfectly, i.e. scipy 0.6 would not use the fortran compiler I would
> tell it to use, but one extension would use gfortran instead of
> sage_fortran when it was available in $PATH. sage_fortran would would
> just inject an "-m64" into the options and call gfortran. But with a
> few
> fortran objects being 32 bit some extensions in scipy would fail to
> import and it took me quite a while to track this one down. I haven't
> had time to test 0.7rc2 yet, but hopefully will do so in the next
> day or
> two.
>> cheers,
>> David
> Cheers,
> Michael
>> _______________________________________________
>> Numpy-discussion mailing list
>> Numpy-discussion@scipy.org
>> http://projects.scipy.org/mailman/listinfo/numpy-discussion
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-January/039863.html","timestamp":"2014-04-18T08:17:42Z","content_type":null,"content_length":"5757","record_id":"<urn:uuid:57b8f974-b1eb-4d64-9b02-7cc14b204eae>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
Review problems that confuse me
June 9th 2009, 03:48 PM
Review problems that confuse me
First of all, I have a calculus final coming up and I couldn't even do some problems that were apparently basic. I would really appreciate it if someone could break down each question and try to
explain to me how they got to the answer. I really don't want to fail this final. (Angry)
1) The farmer plans to fence a regular pasture adjacent to a river. The pasture must contain 2,000,000 square meters in order to provide enough grass for the herd. What dimensions would require
the least amount of fencing if no fencing is needed along the river?
2) A population of 500 bacteria is introduced into a culture and grows in number according to the equation p(t)=500 (1+4t/50+t^2) where t is measured in hours. Find the rate at which the
population is growing at t=2.
3) find the horizontal and vertical asymptote of y=2x/x-3
Find the limit.
lim 2x/x-3
lim 2x/x-3
lim 2x/x-3
lim 2x/x-3
lim 3x^3-2x^2+4
lim 2x^2-x-3/x+1
lim 2x-3/x+5
lim sin x/5x
Prove that the lim f(x) = 2 and lim f(x)=infinity are approaching infinity.
x>5 x>0
Thank you in advance.
June 9th 2009, 05:03 PM
Here is some help with the first three problems. Mine in red.
1) The farmer plans to fence a regular pasture adjacent to a river. The pasture must contain 2,000,000 square meters in order to provide enough grass for the herd. What dimensions would require
the least amount of fencing if no fencing is needed along the river?
Draw a diagram. Called the sides x, x, and y. The length is then 2x+y (excluding the river side). Restriction xy = 2,000,000. Solve for x from the restriction, then plug into the perimeter
$x = \frac{2000000}{y}$
$P(y) = \frac{4000000}{y} + y$
Use calculus to optimize this function. Find the first derivative, then critical numbers, then show that you have minimized the perimeter.
2) A population of 500 bacteria is introduced into a culture and grows in number according to the equation p(t)=500 (1+4t/50+t^2) where t is measured in hours. Find the rate at which the
population is growing at t=2.
Find the first derivative using the quotient rule, then plug in t=2. The value you get is a rate measured by the units bacteria/hour.
3) find the horizontal and vertical asymptote of y=2x/x-3
Horizontal: The degrees of the numerator and denominator are equal, therefore the horizontal asymptote is the ratio of the leading coefficient $y= \frac{2}{1} = 2$
Vertical: x values that make ONLY the denominator 0, so it's $x=3$
Good luck on the final!
June 9th 2009, 06:01 PM
Thank you very much for you explanations! They were very helpful! (Rofl)
On #2, I got 31.58. Could anyone verify my answer?
June 9th 2009, 06:11 PM
No problem. You're welcome.
For number 2, I get 30.864, verified with a graphing calculator and Mathematica's Derivative Calculator: Step-by-Step Derivatives
Good luck!
June 9th 2009, 06:21 PM
First of all, I have a calculus final coming up and I couldn't even do some problems that were apparently basic. I would really appreciate it if someone could break down each question and try to
explain to me how they got to the answer. I really don't want to fail this final. (Angry)
1) The farmer plans to fence a regular pasture adjacent to a river. The pasture must contain 2,000,000 square meters in order to provide enough grass for the herd. What dimensions would require
the least amount of fencing if no fencing is needed along the river?
Let x is length of the rectangular field, parallel to river flow.
Let y be the width of the rectangular field.
Area = xy = 2 000 000
$y = \frac{2000000}{x}$ ..............(1)
Length of fencing = Perimeter (P) around three sides of rectangular garden
P = x + 2y
$P = x + 2\left(\frac{2000000}{x}\right)$
$P = x + \frac{4000000}{x}$
Now, differentiate P
$P' = 1 - \frac{4000000}{x^2}$
for Max or Min, P' = 0
$<br /> 1 - \frac{4000000}{x^2}=0<br />$
$x = 2000$
$y = \frac{2000000}{x}$
$y = \frac{2000000}{2000}=1000$
Length of fencing = 2000 m and width of fencing = 1000 m
$P'' = 0 + \frac{8000000}{x^3}$
$P'' = + \frac{8000000}{2000^3}= +$
so, it shows fencing is minimum. | {"url":"http://mathhelpforum.com/calculus/92346-review-problems-confuse-me-print.html","timestamp":"2014-04-21T03:44:59Z","content_type":null,"content_length":"12516","record_id":"<urn:uuid:85e0af2d-da49-49a0-aee4-8cf176a3b88f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00642-ip-10-147-4-33.ec2.internal.warc.gz"} |
v-t graph
Could someone check this because I have different answers to those given for parts ii and iii.
i) You want to use $a=\frac{\Delta v}{\Delta t}$. Your result will be in $\frac{\text{km}}{\text{min}\cdot\text{hr}}$. To make the required conversion to the units given use: $x\frac{\text{km}}{\text
{min}\cdot\text{hr}}\cdot \frac{1000\text{ m}}{1\text{ km}}\cdot\frac{1\text{ hr}}{3600\text{s}}\cdot\frac{1\text{ min}}{60\text{ s}}=\frac{x}{216}\,\frac{\text{m}}{\text{s}^2}$ ii) The area under
the velocity function is two right triangles, a trapezoid and a rectangle. Then divide the area by 60 to convert the minutes to hours to get the result in km. iii) Use $\bar{v}=\frac{d}{t}$
The unit conversion has been tricky for me. Its not x/216 its 1000x/60^3. Your left hand side seems to be ok. I had not the trapezoid (trapezium to non americans). | {"url":"http://mathhelpforum.com/math-topics/209331-v-t-graph-print.html","timestamp":"2014-04-16T20:21:58Z","content_type":null,"content_length":"5640","record_id":"<urn:uuid:96d2542b-604e-4488-9fa7-40005a4e2164>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gas Density
An important property of any gas is its density. Density is defined as the mass of an object divided by its volume, and most of our experiences with density involve solids. We know that some objects
are heavier than other objects, even though they are the same size. A brick and a loaf of bread are about the same size, but a brick is heavier--it is more dense. Among metals, aluminum is less dense
than iron. That's why airplanes and rockets and some automobile parts are made from aluminum. For the same volume of material, one metal weighs less than another if it has a lower density.
For solids, the density of a single element or compound remains fairly constant because the molecules are bound to one another. For example, if you found a pure gold nugget on the earth or you found
a pure gold nugget on the moon, the measured density would be nearly the same. But for gases, the density can vary over a wide range because the molecules are free to move. Air at the surface of the
earth has a very different density than air 50 kilometers above the earth. An interactive simulator allows you to study how air density varies with altitude. Understanding density and how it works is
fundamental to the understanding of rocket aerodynamics and propulsion.
There are two ways to look at density: (1) the small scale action of individual air molecules or (2) the large scale action of a large number of molecules. Starting with the small scale action, from
the kinetic theory of gases, a gas is composed of a large number of molecules that are very small relative to the distance between molecules. The molecules are in constant, random motion and
frequently collide with each other and with the walls of a container. Because the molecules are in motion, a gas will expand to fill the container. Since density is defined to be the mass divided by
the volume, density depends directly on the size of the container in which a fixed mass of gas is confined. As a simple example, consider Case #1 on our figure. We have 26 molecules of a mythical
gas. Each molecule has a mass of 20 grams (.02 kilograms), so the mass of this gas is .52 kg. We have confined this gas in a rectangular tube that is 1 meter on each side and 2 meters high. We are
viewing the tube from the front, so the dimension into the slide is 1 meter for all the cases considered. The volume of the tube is 2 cubic meters, so the density is .26 kg/cubic meter. This
corresponds to air density at about 13 kilometers altitude. If the size of our container were decreased to 1 meter on all sides, as in Case #3, and we kept the same number of molecules, that density
would increase to .52 kg/cubic meter. Notice that we have the same amount of material; it is just contained in a smaller volume. How we decrease the volume is very important for the final value of
pressure and temperature. You can explore the variations in pressure and temperature at the animated gas lab.
Turning to the larger scale, the density is a state variable of a gas and the change in density during a process is governed by the laws of thermodynamics. Actual molecules of a gas are incredibly
small. In one cubic meter the number of molecules is about ten to the 23rd power. (That's 1 followed by 23 zero's !!!) For a static gas, the molecules are in a completely random motion. Because there
are so many molecules, and the motion of each molecule is random, the value of the density is the same throughout the container. Density is a scalar quantity; it has a magnitude but no direction
associated with it. As an example, consider Case #1, in which the mass is .52 kg, the volume is 2 cu m, and the density is .26 kg/cu m. If we sample a smaller volume of 1 meter on a side as in Case #
2, we will obtain the same density. The volume of the blue box in Case #2 is only 1 cu m, but the number of molecules in the box is 13 at .2 kg per molecule; and the density is .26 kg/cu m. (This
example REALLY works only for a very large number of molecules moving at random. Case #2 is just an illustration.) Another way to obtain the same density for a smaller volume is to remove molecules
from the container. In Case #4, the container is the same size as in Case #3, but the number of molecules (the mass) has been decreased to only 13 molecules. The density is .26 kg/cubic meter, which
is the same density seen in the blue box of Case #2 and throughout Case #1. A careful study of these four cases will help you understand the meaning of gas density.
These rather simple examples help explain a fundamental effect that we see in nature. Between Cases #3 and #4, the number of molecules in a given volume decreased, and the corresponding density
decreased. In the atmosphere, air molecules near the surface of the earth are held together more tightly than the molecules in the higher atmosphere because of the gravitational pull of the earth on
all the molecules above the surface molecules. The higher up you go in the atmosphere, the fewer the molecules there are above you, and the lower the confining force. So in the atmosphere, density
decreases as you increase altitude; there are fewer molecules.
Gas density is defined to be the mass of gas divided by the volume confining the gas. There is a related state variable called the specific volume which is the reciprocal of the density r. The
specific volume v is given by:
v = 1 /r
Specific volume is often used when solving static gas problems for which the volume is known, while density is used for moving gas problems. They are equivalent state variables.
Guided Tours
• Gas Statics:
• Standard Atmosphere Model:
Gas Density Activity: Grade 10-12
Related Sites:
Rocket Index
Rocket Home
Exploration Systems Mission Directorate Home | {"url":"http://microgravity.grc.nasa.gov/education/rocket/fluden.html","timestamp":"2014-04-16T10:09:25Z","content_type":null,"content_length":"14688","record_id":"<urn:uuid:65c46c61-a7c2-474f-8772-c2bb9371e8b6>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
Figure 2-17.Rafter length.
Figure 2-17. Rafter length. Look at the first line of the rafter table on a framing square to find LENGTH COMMON RAFTERS PER FOOT RUN (also known as the bridge measure). Since
the roof in this example has a 7-inch unit of rise, locate the number 7 at the top of the square. Directly beneath the number 7 is the number 13.89. This means that a
common rafter with a 7-inch unit of rise will be 13.89 inches long for every unit of run. To find the length of the rafter, multiply 13.89 inches by the
number of feet in the total run. (The total run is always one-half the span.) The total run for a roof with a 16-foot span is 8 feet; therefore, multiply 13.89 inches by 8 to
find the rafter length. Figure 2-17 is a schematic of this procedure. If a framing square is not available, the bridge measure can be found by using the Pythgorean theorum root of 193 is 13.89.
Two steps remain to complete the procedure. Step 1. Multiply the number of feet in the total run (8) by the length of the common rafter per foot of run (13.89 inches):
Step 2. To change .12 of an inch to a fraction of an inch, multiply by 16: The number 1 to the left of the decimal point represents 1/16 inch. The number .92 to the right of the
decimal represents ninety-two hundredths of 1/16 inch. For practical purposes, 1.92 is calculated as being equal to 2 x 1/16 inch, or 1/8 inch. As a general rule in this
kind of calculation, if the number to the right of the decimal is 5 or more, add 1/16 inch to the figure on the left side of the decimal. The result of steps 1 and 2 is a
total common rafter length of 111 1/8 inches, or 9 feet 3 1/8 inches. Example 2. A roof has a 6-inch unit of rise and a Step 1. Step 2. Step 3. 25-foot span. The total run of the roof
is 12 feet 6 inches. You can find the rafter length in four steps. Change 6 inches to a fraction of a foot by placing the number 6 over the number 12: (1/2 foot = 6 inches).
Change the fraction to a decimal by dividing the bottom number (denomi- nator) into the top number (numerator): (.5 foot = 6 inches). Multiply the total run (12.5) by the length
of the common rafter per foot of run (13.42 inches) (fig. 2-16): Step 4. To change .75 inch to a fraction of an inch, multiply by 16 (for an answer expressed in sixteenths of an inch). .75 x 16 = 12
The result of these steps is a total common rafter length of 167 3/4 inches, or 13 feet 11 3/4 inches. 2-12 | {"url":"http://constructionmanuals.tpub.com/14044/css/14044_64.htm","timestamp":"2014-04-18T19:22:18Z","content_type":null,"content_length":"24957","record_id":"<urn:uuid:092678d8-54dc-4cdc-9904-608b0abf7529>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
PDE - rotation
February 12th 2008, 03:18 PM #1
Junior Member
Oct 2007
PDE - rotation
How would you rotate an equation around the origin in the y plane?
This is the question:
u_xx+2u_xy+2u_yy+u_x_u_y = 0
Find if this equation for u(x,y) has the same coefficients in front of the derivitves under all rotations about the origin in the xy-plane.
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/calculus/28124-pde-rotation.html","timestamp":"2014-04-23T23:55:25Z","content_type":null,"content_length":"28692","record_id":"<urn:uuid:4963802d-89cc-4bb3-a682-9bbb88d4cd36>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00447-ip-10-147-4-33.ec2.internal.warc.gz"} |
vanishing theorems
up vote 2 down vote favorite
I would be glad to know about possible generalizations of the following results:
1) (Grothendieck) Let $X$ be a noetherian topological space of dimension $n$. Then for all $i>n$ and all sheaves of abelian groups $\cal{F}$ on $X$, we have $H^i(X; \cal{F})=$ 0. [See Hartshorne,
Algebraic Geometry, III.2.7.]
2) Let $X$ be an $n$-dimensional $C^0$-manifold. Then for all $i>n$ and all sheaves of abelian groups $\cal{F}$ on $X$, we have $H^i(X; \cal{F})=$ 0 . [See Kashiwara-Schapira, Sheaves on manifolds,
More precisely, I'm interested in dropping the "abelian groups" hypothesis: could I take sheaves in any, say, AB5 abelian category?
Apparently, in Grothendieck's theorem, the "abelian groups" hypothesis is necessary -at least in Hartshorne's proof-, because at the end you see a big constant sheaf $\mathbf{Z}$. But what happens if
we talk about sheaves of $R$-modules, with $R$ any commutative ring with unit, for instance?
Are those generalizations trivial ones? False for trivial reasons?
Any hints or references will be welcome.
sheaf-theory cohomology
For any sheaf of rings $O$, sheaf cohomology on the category of $O$-modules coincides with such cohomology on underlying abelian sheaves (due to acyclicity of flasques). So the generalizations are
2 obvious. In (2) it isn't necessary to restrict to manifolds; any separable metric space (or disjoint union thereof) with dimension $n$ in the sense of topological dimension theory satisfies (2)
(see Engelking's book "General Topology", especially the notion of "covering dimension"; recall that Cech = derived functor cohomology on paracompact Hausdorff spaces, and metric spaces are such
spaces). – Boyarsky Jun 29 '10 at 10:44
@Boyarsky. Thanks. So the generalization to sheaves of O-modules is trivial. Do you know anything about possible generalizations to sheaves with values in an (AB5?) abelian category? – a.r. Jun 29
'10 at 16:14
@Agusti: goodness, I can't even remember which one AB5 is...but is there some real reason for asking that kind of question? Like an example to motivate it? – Boyarsky Jun 29 '10 at 16:20
@Boyarsky. Thanks for your help again, Boyarsky. Well, I have a nasty spectral sequence with this kind of guys which I would like to converge strongly. For this, I need some zeros in it. As for
1 AB5, is just a conjecture: exactness of filtered colimits seems to me, at first sight, the less you should ask -or the less I need- to work with sheaves with values in an abelian category – a.r.
Jun 29 '10 at 17:28
add comment
1 Answer
active oldest votes
Well, I think I can answer my question, thanks to Boyarsky's remark.
The point is that, since the theorem is also true for sheaves of $R$-modules, given a sheaf $\cal{F}$ with values in an abelian category $\cal{A}$, with the help of Mitchell's embedding
theorem, http://en.wikipedia.org/wiki/Mitchell%27s_embedding_theorem, we can consider it as a sheaf of $R$-modules, for some ring $R$. Moreover, the embedding $V: {\cal A} \longrightarrow \
up vote 2 mathbf{Mod}_R$ is full, faithful, and exact. That is to say, $V$ sends exact sequences to exact sequences. So $H^n(X;\cal{F})$ = $H^n(X;V(\cal{F}))$.
down vote
Hence, both vanishing theorems are also (trivially) true for sheaves with values in any abelian category.
1 Unfortunately, the answer is wrong: see mathoverflow.net/questions/32173/mitchells-embedding-theorem . – a.r. Jul 16 '10 at 16:41
Does some Leray spectral sequence help here? – Martin Brandenburg Sep 21 '13 at 8:31
Don't know: do you have something in mind, Martin? – a.r. Sep 21 '13 at 10:01
add comment
Not the answer you're looking for? Browse other questions tagged sheaf-theory cohomology or ask your own question. | {"url":"http://mathoverflow.net/questions/29883/vanishing-theorems/30756","timestamp":"2014-04-18T01:14:08Z","content_type":null,"content_length":"60336","record_id":"<urn:uuid:1fd007ae-4c3e-4ff0-9a8d-3562b65be316>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
Area of a circle, analyzing with trigonometry and geometry
December 25th 2009, 02:11 PM #1
Dec 2009
I was trying to find the area of a circle decomposing it into polygons. So looking at the parametric equations of a circumference and the formula to define the area of a polygon having its
coordinates i did the following calculus:
(I) Parametric equations of circle:
x= r * cos(t) I assume r=1 so I must get: x= cos(t) and area=pi
y= r * sin(t) y= sin(t)
(II) Area of a polygon having its coordinates:
A(poly)=1/2 * | Xa Xb Xc Xd ... Xn Xa|
| Ya Yb Yc Yd ... Yn Ya|
Giving any value of 't' i will get a point of the circle, so i imagined a polygon of 360 sides (I used t as degrees) that would have an area very near the area of the circle.
A(circle)=1/2*| cos(1) cos(2) cos(3) ... cos(360) cos(1) |
| sin(1) sin(2) sin(3) ... sin(360) sin(1) |
Developing the determinant I get:
A(circle)= 1/2* | [cos(1)*sin(2)+cos(2)*sin(3)+...+cos(360)*sin(1)]-
[sin(1)*cos(2)+sin(2)cos(3)+...+sin(360)*cos(1)] |
And finally syntesis of the formula to get circle's area:
{summation of t=1 to t=359} cos(t)* sin(t+1)- sin(t)* cos (t+ 1)
Using the calculator On-Line Calculator to find the result of the sum a got a very big number and after some adjusts i got still a number very far away from pi. Could someone tell me what did i
make wrong?
Sorry by mistakes in grammar, i am not native.
You could have done it a lot simpler.
A circle can be roughly represented as lots of triangles (with the same area) put together. Like a pizza, yes.
Let's look at one of these triangles (they are all equal). The angle that touches the center of the circle can be obtained easily : for example, if you use $90$ triangles, then each triangle will
have that angle equal to $4$, because $90 \times 4 = 360°$.
Now, you know two sides of the triangle easily, too, because they are equal to the radius of the circle (let's take $1$).
So, if we take a triangle ABC, with A pointing towards the center of the circle, we have : angle $BAC = \frac{360}{t}$, where $t$ is the number of triangles wanted, and $AB = AC = 1$ (radius of
the circle).
Now, what was the goal of this, already ? Ah, yes, approximating the area of the circle. Thus, we need to know the area of the triangles. Note that since triangle ABC is isosceles, we can divide
it into two equal rectangle triangles. Let $H$ be the middle of $BC$.
Therefore, angle $BAH = \frac{180}{t}$, and using trigonometry, $\cos{\frac{180}{t}} = \frac{AH}{AB}$, thus $AH = \cos{\frac{180}{t}}$.
You are still missing something ! You need the side $BH$ to calculate the area. Since you know two sides in the rectangle triangle, you can use trig :
$\sin{\frac{180}{t}} = \frac{BH}{AB}$, thus $BH = \sin{\frac{180}{t}}$.
From there you can calculate the area of the rectangle triangle. Let $A$ be the area of the rectangle triangle.
$A = \frac{AH \times BH}{2}$.
Now remember we divided our isosceles triangle into two equal rectangle triangles, therefore the area of the isosceles triangle must be twice the area of the rectangle triangle. Let $A'$ be the
area of the isosceles triangle.
$A' = AH \times BH$.
Now that you know that, all you have to do is multiply this area by the quantity $t$ of triangles used. Let $Q$ be the area of the circle.
$Q = AH \times BH \times t$.
That is :
$Q = \cos{\frac{180}{t}} \times \sin{\frac{180}{t}} \times t$.
Note that this only works if $t > 2$ (can you put two triangles next to each other inside a circle ? $t = 1$ and $t = 2$ fails).
(Remember that the area of a circle of radius $1$ is equal to $\pi$). Let's see how our formula goes :
$t = 8$, $Q = \cos{\frac{180}{8}} \times \sin{\frac{180}{8}} \times 8 = \cos{22.5} \times \sin{22.5} \times 8 \approx 2,83$. That is not awesome yet ...
$t = 10000$, $Q = \cos{\frac{180}{10000}} \times \sin{\frac{180}{10000}} \times 10000 = \cos{0.018} \times \sin{0.018} \times 10000 \approx 3,1415924$. Now this is great !
Provided you input enough triangles (a greater $t$), and that your computing device has good floating point precision, you can approximate your areas easily !
PS : this formula only works with a circle of radius $1$. I think you can change it easily enough yourself to make it fit to any circle, by introducing a new parameter $r$ (radius). Okay, I'll
give you a hint : it only involves multiplying the area of a $1$-radius circle by $r^2$
December 25th 2009, 04:24 PM #2 | {"url":"http://mathhelpforum.com/pre-calculus/121603-area-circle-analyzing-trigonometry-geometry.html","timestamp":"2014-04-18T04:00:06Z","content_type":null,"content_length":"43978","record_id":"<urn:uuid:9afb864d-0be9-49b2-a946-bc9d2fd6d76f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00040-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - landau pole
Has the energy scale (or cutoff) at which the electric charge goes to infinity (the landau pole) been calculated? I sometimes hear that QED has been probed to tiny distance scales, and I was curious
how far away we are until we get to distances corresponding to the Landau pole, after which we shouldn't be able to do anymore calculations.
Also I just realized that SU(2) isospin interaction is asymptotically free. I had thought only SU(3) color interaction was asymptotically free, but from the looks of the SU(2) isospin beta function
(which is negative), so is SU(2) isospin. It seems strange however that whenever I look up asymptotic freedom, there is only mention of SU(3) color asymptotic freedom and no mention of SU(2) isospin. | {"url":"http://www.physicsforums.com/showpost.php?p=2261975&postcount=1","timestamp":"2014-04-17T12:45:47Z","content_type":null,"content_length":"9014","record_id":"<urn:uuid:1f471dee-fabc-4b43-b4a1-791d392fa98d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00135-ip-10-147-4-33.ec2.internal.warc.gz"} |
Morton, PA Math Tutor
Find a Morton, PA Math Tutor
...As a mother of three, I know what it is like to place your trust in someone to care for your child. Therefore, I treat every student as I would want my children to be treated. I am stern but
caring, serious but fun, and nurturing but have high expectations of all of my students.
12 Subjects: including prealgebra, trigonometry, algebra 1, algebra 2
...The material I have covered ranges from general chemistry (for majors and non-majors) to analytical chemistry courses such as quantitative analysis and instrumental analysis as well as organic
chemistry. During my doctoral studies I participated in a program where I worked in the high school cla...
9 Subjects: including algebra 1, algebra 2, chemistry, geometry
...Geometry was one of my personal strengths in high school. And as part of a summer camp, I have helped students with geometry to get them ready for their SAT math. While I am certified to teach
biology and chemistry, I have a strong math foundation and have helped a summer camp put together curriculum material for math, including algebra.
12 Subjects: including algebra 1, algebra 2, biology, chemistry
...I have tutored privately in both these subjects for many years. I have had the opportunity to work with a wide variety of students from all backgrounds and age groups. I have prepared high
school students for the AP Calculus exams (both AB and BC), undergraduate students for the math portion of...
22 Subjects: including statistics, discrete math, differential equations, C++
...I primarily focus increasing ability to communicate and confidence in one's abilities to do so. I have taken Physics in both high school and college, primarily focused around Mechanics and
Acoustics. Additionally, I have learned essential Optics.
33 Subjects: including precalculus, philosophy, Adobe InDesign, art history
Related Morton, PA Tutors
Morton, PA Accounting Tutors
Morton, PA ACT Tutors
Morton, PA Algebra Tutors
Morton, PA Algebra 2 Tutors
Morton, PA Calculus Tutors
Morton, PA Geometry Tutors
Morton, PA Math Tutors
Morton, PA Prealgebra Tutors
Morton, PA Precalculus Tutors
Morton, PA SAT Tutors
Morton, PA SAT Math Tutors
Morton, PA Science Tutors
Morton, PA Statistics Tutors
Morton, PA Trigonometry Tutors | {"url":"http://www.purplemath.com/Morton_PA_Math_tutors.php","timestamp":"2014-04-19T09:38:00Z","content_type":null,"content_length":"23834","record_id":"<urn:uuid:1c236d79-4eeb-47c3-9fa1-cab0519ede67>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00268-ip-10-147-4-33.ec2.internal.warc.gz"} |
About integer polynomials which are sums of squares of rational polynomials...
up vote 10 down vote favorite
I have the following question for which I haven't been able to find any reference or proof.
Suppose we know that a univariate polynomial $P(X)$ with integer coefficients is the sum of squares of two polynomials with rational coefficients.
Is it true that $P(X)$ must also be the sum of squares of two polynomials with integer coefficients?
For example, take $P(X)=50X^2+14X+1$, then we see that $P(X)=(5X+3/5)^2+(5X+4/5)^2$, but it is also $X^2+(7X+1)^2$.
I would greatly appreciate any help pointing me into the right direction.
Thanks in advance, and regards, Guillermo
ac.commutative-algebra nt.number-theory
1 I wonder if it helps to interpret "sum of two squares of rational polynomials" as the equivalent "norm of an element of ${\mathbb Q}[i][x]$". – Greg Martin Nov 28 '11 at 5:40
I think this is false. Consider (5x^2+3x/5+4)^2+(5x^2+4x/5-3)^2. – Peter McNamara Nov 28 '11 at 7:20
1 @Hsueh-Yung Lin: 3 is not a sum of two squares of rational numbers. – Marc van Leeuwen Nov 28 '11 at 7:49
4 @Peter: (7x^2+x)^2 + (x^2+5)^2 – Andres Caicedo Nov 28 '11 at 8:01
add comment
2 Answers
active oldest votes
Yes. Suppose $n\in \mathbb N$ is minimal so that $P(x)=f_1^2+f_2^2$, where $nf_1$ and $nf_2$ are in $\mathbb Z[x]$.
Let $p$ be a prime with $p^\alpha||n$. Since $P\in \mathbb Z[x]$ we have $p^{2\alpha}| (p^\alpha f_1)^2+(p^\alpha f_2)^2$. Denoting $p^\alpha f_i$ by $g_i$, and letting $\beta$ be
square root of $-1\pmod{p^{2\alpha}}$ (it is not hard to show that this must exist by looking at the coefficients of $g_i$ with lowest $p$-valuation).
up vote 19 down We have $g_1^2+g_2^2\equiv 0\pmod{p^{2\alpha}}$ so $g_2^2\equiv (\beta g_1)^2\pmod{p^{2\alpha}}$ so that $p^{2\alpha}| ag_1+bg_2$ for some integers $a,b$ with $a^2+b^2=p^{2\alpha}$
vote accepted and $(ab,p)=1$.
Now we can take $P(x)^2=\left(\frac{af_1+bf_2}{p^{\alpha}}\right)^2+\left(\frac{af_2-bf_1}{p^\alpha}\right)^2$ and both polynomials have coefficients with $\nu_p\geq 0$. Now repeat
the procedure with other prime divisors of $n$ until you have polynomials with integer coefficients.
add comment
In fact, if $P(x)$ is a polynomial with integer coefficients and if every arithmetic progression contains an integer $n$ for which $P(n)$ is a sum of two rational squares, then $P(x) = u_1
(x)^2 + u_2(x)^2$ identically, where $u_1(x)$ and $u_2(x)$ are polynomials with integral coefficients. This follows from a theorem of Davenport, Lewis, and Schinzel; see the Corollary to
Theorem 2 in Polynomials of certain special types (Acta Arith. IX, 1964, 107--116).
up vote 7 (In my restatement of their result, I use that being a sum of two rational squares is equivalent to being a sum of two integer squares. This is easy to prove directly from the
down vote characterization; alternatively, it follows from a lemma in Serre's book, attributed to Davenport--Cassels, used to prove the three squares theorem. Also, Davenport, Lewis, and Schinzel
seem to have an argument similar to Gjergji's implicitly in mind in their proof of the Corollary above. So Gjergji's answer is the "real" one; but maybe this paper will interest others.)
add comment
Not the answer you're looking for? Browse other questions tagged ac.commutative-algebra nt.number-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/82046/about-integer-polynomials-which-are-sums-of-squares-of-rational-polynomials?sort=oldest","timestamp":"2014-04-19T02:32:48Z","content_type":null,"content_length":"60894","record_id":"<urn:uuid:6fda4a54-17aa-4dbc-b375-773e33f0020c>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00265-ip-10-147-4-33.ec2.internal.warc.gz"} |
Edmonston, MD Calculus Tutor
Find an Edmonston, MD Calculus Tutor
There is always more than one way to solve a math problem! As a former math teacher with a Bachelor's degree in Mathematics from Stanford and a Master's degree in Teaching from American, I know
that there are infinite ways to approach math problems. I also know that exploring many methods is the best way to build conceptual understanding of math.
16 Subjects: including calculus, English, writing, geometry
...Hence, I make sure a student understands a given material very well before moving on to the next. If I find that a student has missed a fundamental concept in the past, I go back and address
it. I also try my best so that I am very flexible in scheduling and meet student’s demands.
14 Subjects: including calculus, chemistry, physics, geometry
...I have tutored Chinese to more than 5 students in the past 2 years. I graduated with a Bachelor of Science in Computer Science from the George Washington University in May 2012. I had more
than 3 years' intense training in programming, especially in C and Java, both of which have been widely used in my daily job.
27 Subjects: including calculus, chemistry, physics, geometry
...Matlab can handle vast amounts of input data and manipulate the data in accordance with the instructions that the user provides. It has amazing plotting capabilities with both 2-D and 3-D
plots. It also provides a vast array of statistical functions including means, variances, medians, and modes of data sets.
17 Subjects: including calculus, English, geometry, ASVAB
...I've taught college classes, mentored students in research, and given research presentations at the college, high school and middle school levels. In addition, I have taught English to
non-native speakers in South Korea. I began tutoring as an undergraduate student, and have tutored many students at middle school, high school and college levels.
46 Subjects: including calculus, English, reading, French
Related Edmonston, MD Tutors
Edmonston, MD Accounting Tutors
Edmonston, MD ACT Tutors
Edmonston, MD Algebra Tutors
Edmonston, MD Algebra 2 Tutors
Edmonston, MD Calculus Tutors
Edmonston, MD Geometry Tutors
Edmonston, MD Math Tutors
Edmonston, MD Prealgebra Tutors
Edmonston, MD Precalculus Tutors
Edmonston, MD SAT Tutors
Edmonston, MD SAT Math Tutors
Edmonston, MD Science Tutors
Edmonston, MD Statistics Tutors
Edmonston, MD Trigonometry Tutors
Nearby Cities With calculus Tutor
Berwyn Heights, MD calculus Tutors
Bladensburg, MD calculus Tutors
Brentwood, MD calculus Tutors
Cheverly, MD calculus Tutors
Colmar Manor, MD calculus Tutors
Cottage City, MD calculus Tutors
Hyattsville calculus Tutors
Landover Hills, MD calculus Tutors
Mount Rainier calculus Tutors
North Brentwood, MD calculus Tutors
Riverdale Park, MD calculus Tutors
Riverdale Pk, MD calculus Tutors
Riverdale, MD calculus Tutors
Rogers Heights, MD calculus Tutors
University Park, MD calculus Tutors | {"url":"http://www.purplemath.com/Edmonston_MD_Calculus_tutors.php","timestamp":"2014-04-20T16:21:58Z","content_type":null,"content_length":"24421","record_id":"<urn:uuid:9a0761e9-935c-4413-88ef-116728ff8e47>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00406-ip-10-147-4-33.ec2.internal.warc.gz"} |
Verga, NJ Calculus Tutor
Find a Verga, NJ Calculus Tutor
...I have obtained a bachelor's degree in mathematics from Rutgers University. One of the classes I took there was an upper level geometry class, which dealt with the subject on a level much more
advanced than one finds in high school (I had to write a paper for that class, that I think was about 1...
16 Subjects: including calculus, English, physics, geometry
...This includes two semesters of elementary calculus, vector and multi-variable calculus, courses in linear algebra, differential equations, analysis, complex variables, number theory, and
non-euclidean geometry. I taught Prealgebra with a national tutoring chain for five years. I have taught Prealgebra as a private tutor since 2001.
12 Subjects: including calculus, writing, geometry, algebra 1
I am graduate student working in engineering and I want to tutor students in SAT Math and Algebra and Calculus. I think I could do a good job. I studied Chemical Engineering for undergrad, and I
received a good score on the SAT Math, SAT II Math IIC, GRE Math, and general math classes in school.
8 Subjects: including calculus, geometry, algebra 1, algebra 2
...I have experience in both derivatives and integration. I have taken several courses in geometry and have experience with shapes and angles. I have tutored many students in pre algebra and have
experience dealing with different types of equations and variables I have spent the past two years at Jacksonville University tutoring math.
13 Subjects: including calculus, geometry, GRE, algebra 1
...When I teach, I find it most important to -Give perspective about the field of study we're covering. Where does this fit in to other subjects you've taken before? What will this course not
include? -Provide a thorough understanding of basic concepts.
25 Subjects: including calculus, chemistry, physics, writing
Related Verga, NJ Tutors
Verga, NJ Accounting Tutors
Verga, NJ ACT Tutors
Verga, NJ Algebra Tutors
Verga, NJ Algebra 2 Tutors
Verga, NJ Calculus Tutors
Verga, NJ Geometry Tutors
Verga, NJ Math Tutors
Verga, NJ Prealgebra Tutors
Verga, NJ Precalculus Tutors
Verga, NJ SAT Tutors
Verga, NJ SAT Math Tutors
Verga, NJ Science Tutors
Verga, NJ Statistics Tutors
Verga, NJ Trigonometry Tutors
Nearby Cities With calculus Tutor
Almonesson calculus Tutors
Blackwood Terrace, NJ calculus Tutors
Blenheim, NJ calculus Tutors
Center City, PA calculus Tutors
Chews Landing, NJ calculus Tutors
Grenloch calculus Tutors
Hilltop, NJ calculus Tutors
Jericho, NJ calculus Tutors
Lakeland, NJ calculus Tutors
Lester, PA calculus Tutors
Passyunk, PA calculus Tutors
Penn Ctr, PA calculus Tutors
West Collingswood Heights, NJ calculus Tutors
West Collingswood, NJ calculus Tutors
Westville Grove, NJ calculus Tutors | {"url":"http://www.purplemath.com/Verga_NJ_Calculus_tutors.php","timestamp":"2014-04-20T13:20:31Z","content_type":null,"content_length":"24147","record_id":"<urn:uuid:f8deee14-a6ff-4948-a3ea-6fb9a5aa419e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00619-ip-10-147-4-33.ec2.internal.warc.gz"} |
Project Euler probelm 81 on Python
up vote -1 down vote favorite
In the 5 by 5 matrix below, the minimal path sum from the top left to the bottom right, by only moving to the right and down, is indicated in bold red and is equal to 2427.
Find the minimal path sum, in matrix.txt (right click and 'Save Link/Target As...'), a 31K text file containing a 80 by 80 matrix, from the top left to the bottom right by only moving right and down.
Remark: I think they did mistake, when they mark the way http://projecteuler.net/problem=81
import numpy as np
matrix0 = [ map(int, row.split()) for row in open('matrix.txt')]
for i in range(80):
for j in range(80):
matrix[i, j]=0
for i in range(80):
for j in range(80):
matrix[i, j]=matrix0[i][j]
while (k+n)<158:
for i in range(k, k+1):
for j in range(n, n+1):
if i!=79 and j!=79:
if matrix[i+1, j]<=matrix[i, j+1]:
sum=sum+matrix[i+1, j]
sum+=matrix[i, j+1]
elif i==79:
sum+=matrix[i, j+1]
elif j==79:
sum+=matrix[i+1, j]
print sum
When I use this code for the matrix 5x5 like in problem it gives me correct answer. I can't understand why it doesn't work on bigger matrix?
python project-euler
add comment
1 Answer
active oldest votes
Because you're not performing the search properly. The problem is asking for the overall least cost path, so you want an A* search or Dijkstra's algorithm. A simple one pass check for
up vote 4 lowest branch at each node won't cut it.
down vote
For an 80x80 matrix you could use the Floyd as well, just wait a little longer :) – unkulunkulu Jun 27 '12 at 8:28
so why it works on example matrix? – Alibek Galiyev Jun 27 '12 at 8:30
Because it just so happens that the naive algorithm that you are using happens to work on the example matrix. Consider instead the matrix: [[001, 002, 001, 001], [001, 002, 001, 001],
[001, 002, 001, 001], [001, 999, 999, 001]]. The algorithm you have would take you down, (001 vs 002), down (001 vs 002), down (001 vs 002). Then right 3 times (999, 999, 001). Whereas a
more intelligent algorithm would find the lower cost routes that have some locally suboptimal choices. – OmnipotentEntity Jun 27 '12 at 8:38
ok, i got it! I think i didn't understand the problem properly. In problem condition, they didn't have mistake!!! I'm going to learn this algorithms – Alibek Galiyev Jun 27 '12 at 8:47
4 i don't think the best way to solve the problem is any shortest path algorithm, there's quite a obvious DP model. – Marcus Jun 27 '12 at 8:56
show 2 more comments
Not the answer you're looking for? Browse other questions tagged python project-euler or ask your own question. | {"url":"http://stackoverflow.com/questions/11222019/project-euler-probelm-81-on-python","timestamp":"2014-04-24T08:30:55Z","content_type":null,"content_length":"69723","record_id":"<urn:uuid:4154b0af-312c-4471-b2ff-b8ef5b9c7716>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00446-ip-10-147-4-33.ec2.internal.warc.gz"} |
abline {graphics}
Add Straight Lines to a Plot
This function adds one or more straight lines through the current plot.
abline(a = NULL, b = NULL, h = NULL, v = NULL, reg = NULL,
coef = NULL, untf = FALSE, ...)
a, b
the intercept and slope, single values.
logical asking whether to untransform. See ‘Details’.
the y-value(s) for horizontal line(s).
the x-value(s) for vertical line(s).
a vector of length two giving the intercept and slope.
an object with a coef method. See ‘Details’.
graphical parameters such as col, lty and lwd (possibly as vectors: see ‘Details’) and xpd and the line characteristics lend, ljoin and lmitre.
Typical usages are
abline(a, b, untf = FALSE, ...) abline(h =, untf = FALSE, ...) abline(v =, untf = FALSE, ...) abline(coef =, untf = FALSE, ...) abline(reg =, untf = FALSE, ...)
The first form specifies the line in intercept/slope form (alternatively a can be specified on its own and is taken to contain the slope and intercept in vector form).
The h= and v= forms draw horizontal and vertical lines at the specified coordinates.
The coef form specifies the line by a vector containing the slope and intercept.
reg is a regression object with a coef method. If this returns a vector of length 1 then the value is taken to be the slope of a line through the origin, otherwise, the first 2 values are taken to be
the intercept and slope.
If untf is true, and one or both axes are log-transformed, then a curve is drawn corresponding to a line in original coordinates, otherwise a line is drawn in the transformed coordinate system. The h
and v parameters always refer to original coordinates.
The graphical parameters col, lty and lwd can be specified; see par for details. For the h= and v= usages they can be vectors of length greater than one, recycled as necessary.
Specifying an xpd argument for clipping overrides the global par("xpd") setting used otherwise.
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
Murrell, P. (2005) R Graphics. Chapman & Hall/CRC Press.
See Also
lines and segments for connected and arbitrary lines given by their endpoints. par.
## Setup up coordinate system (with x == y aspect ratio):
plot(c(-2,3), c(-1,5), type = "n", xlab = "x", ylab = "y", asp = 1)
## the x- and y-axis, and an integer grid
abline(h = 0, v = 0, col = "gray60")
text(1,0, "abline( h = 0 )", col = "gray60", adj = c(0, -.1))
abline(h = -1:5, v = -2:3, col = "lightgray", lty = 3)
abline(a = 1, b = 2, col = 2)
text(1,3, "abline( 1, 2 )", col = 2, adj = c(-.1, -.1))
## Simple Regression Lines:
sale5 <- c(6, 4, 9, 7, 6, 12, 8, 10, 9, 13)
abline(lsfit(1:10, sale5))
abline(lsfit(1:10, sale5, intercept = FALSE), col = 4) # less fitting
z <- lm(dist ~ speed, data = cars)
abline(z) # equivalent to abline(reg = z) or
abline(coef = coef(z))
## trivial intercept model
abline(mC <- lm(dist ~ 1, data = cars)) ## the same as
abline(a = coef(mC), b = 0, col = "blue")
Documentation reproduced from R 3.0.2. License: GPL-2. | {"url":"http://www.inside-r.org/r-doc/graphics/abline","timestamp":"2014-04-18T06:46:13Z","content_type":null,"content_length":"30342","record_id":"<urn:uuid:0b3cc587-a973-48fa-8298-5e365bcde1f0>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00503-ip-10-147-4-33.ec2.internal.warc.gz"} |
A newbie in Haskell land
I am a newbie in the Haskell land. I was lost but found some good maps and discovered there is a tradition in Haskell land : writing a monad tutorial.
There are so many monad tutorials that writing a new one is getting difficult. And writing a good one if even more difficult. So, I am just going to explain my own understanding.
The first thing to note is that monads are EASY !!
What's difficult is trying to understand what they have in common because they can look so different. I have identified three kinds of monads (not exclusive - a monad can belong to more than one
• Monad as control of the sequencing ;
• Monad as control of side effects ;
• Monad as container
1. Monad as control of the sequencing
In a lazy functional programming language like Haskell, the order of evaluation does not matter. It does not mean you cannot control the order of evaluation. It means you can abstract it and build
your own sequencing, your own control.
In imperative languages (like C), you need to extend the language to support new control statements.
In less elegant functional languages (like LISP) you need to have special forms which do not follow the normal rules for evaluation.
In Haskell, you "just" build your own control operators. Let's see some examples:
1.1. Control in IO monad
repeatN 0 a = return ()
repeatN n a = a >> repeatN (n-1) a
test = repeatN 3 $ do
putStrLn "TEST"
And, if you want to pass the loop index to the loop body, you may write:
repeatN 0 a = return ()
repeatN n a = (a n) >> repeatN (n-1) a
test = repeatN 3 $ \i -> do
putStrLn $ "TEST : " ++ (show i)
1.2. Indeterminism monad also known as List monad
Another example of control of the sequencing is the indeterminism monad:
import Control.Monad.List
-- f is a function returning several possible results
f :: Int -> [Int]
f x = [1+x,2*x]
test :: IO ()
test = putStrLn . show $ do
a <- return 5
b <- f a
return b
Here we apply a function f to the value 5. The function f is returning several possible results.
It is possible to chain indeterminate functions like f:
test2 :: IO ()
test2 = putStrLn . show $ do
a <- return 5
b <- f a
c <- f b
return c
but we do not need to give a name to the intermediate results, so let's write it like:
test2 :: IO ()
test2 = putStrLn . show $ return 5 >>= f >>= f
The Maybe and Either monads are special cases
2. Monad as control of side effects
2.1. IO Monad
It is the standard example so I won't write about it
2.2. Reader monad
A reader monad is used to maintain an environment.
import Control.Monad.Reader
-- The data type for my environment
data MyState = MyState { vara :: Int
, varb :: Int
-- The initial environment
initState = MyState { vara = 10
, varb = 20
-- Computation in the initial environment
test = do theVarA <- asks vara
lift . putStrLn $ show theVarA
`runReaderT` initState
We create a Reader monad to have access to the environment defined by initState. Then in the monad, we can access the fields of initState.
This state is available whenever we need it in the monad and we do not need to pass it as argument.
runReaderT and lift are explained later. They are not important to understand this example. You just have to know that the line with lift is used to display a value and the runReaderT is used to
initialize the environment.
Now, we can temporarily change the value of one variable and work in this modified environment.
-- Increment vara from the environment
incrementVarA :: Int -> MyState -> MyState
incrementVarA x p = p {vara = (vara p) + x}
test = do theVarA <- asks vara
lift . putStrLn $ show theVarA
-- computation in the new modified environment
local (incrementVarA 5) $ do
theVarA <- asks vara
lift . putStrLn $ show theVarA
theVarA <- asks vara
lift . putStrLn $ show theVarA
`runReaderT` initState
We have a side effect since the environment is modified and this change is visible in a non local way. But this change is nevertheless restricted by the local function.
The previous examples are in fact using the Reader monad and the IO monad hence the use of the monad transformer ReaderT and runReaderT.
You may use runReader. With runReader the type of test is no more IO () but Int:
test = do theVarA <- asks vara
return theVarA
`runReader` initState
So, an equivalent code (with IO) is:
test = putStrLn . show $ do theVarA <- asks vara
return theVarA
`runReader` initState
runReader has type : Reader r a -> r -> a
It is applying a Reader monad to an initial environment (r).
runReaderT is just a bit more complex. It has type: ReaderT r m a -> r -> m a
So, when you're working in ReaderT r IO a, you need to specify if you are working with values of type ReaderT r IO a or IO a. The lift function is used for this. Its type is m a -> t m a. So it will
transform IO a values to ReaderT r IO a.
A different way to look at this (probably a wrong way) is:
If you have a value v of type a, you use return v to inject it in the ReaderT r IO a monad.
return v would not work if v was of type IO a since you would get a value of type ReaderT r IO (IO a).
So, lift is used to inject the value in the monad.
3. Monad as container
In each monad, you have the return function which is injecting an element into the monad. So, any monad can be seen as a kind of container. For the List monad it is obvious. Seeing a monad as a
container can be very useful.
Assume you want to add an integer to the result of a computation which could return no result. You may have to do something like that
result = Just 20
test = case result of
Just a -> Just (a + 10)
_ -> Nothing
So, you need to extract the value from the container (if there is something to extract), apply your function and package the result in the same container.
Or you can just write:
test = (+10) `fmap` result
fmap is a kind of generalization of map. map is lifting a function a -> b to the container [a] -> [b]
fmap is doing the same for a container m (a monad). So, fmap is transforming the type a -> b to m a -> m b
4. Deriving monad (you have to use -fglasgow-exts)
In a same code you may have to use different Reader monads even if they have the same type since they may be for different uses.
You may create a type synonym :
type MyEnvironment a = Reader Int a -- (here the environment is just an Int)
But it would not prevent from mixing two different Reader monads if they have the same type.
So, you need to create a new type:
newtype MyEnvironment a = MyEnvironment {runMyEnvironment :: Reader Int a}
Then, you want the same behavior. This is just a reader monad (from a behavior point of view) like newtype Meter = Meter Int is just a number (from a behavior point of view).
So, instead of having to write several instance declarations, you just write:
import Control.Monad.Reader
import Control.Monad.Identity
newtype MyEnvironment a = MyEnvironment {runMyEnvironment :: Reader Int a}
deriving (Monad, MonadReader Int)
Then you create an environment . It is just a Reader monad contained in your new type
r :: MyEnvironment Int
r = do
r <- MyEnvironment $ ask -- This is packaging the result of ask
-- in MyEnvironment. Hence the work is done
return r -- in the MyEnvironment monad and not in a
-- simple Reader monad r is an Int but
-- return r is a MyEnvironment Int and not
-- a Reader Int Int
Then, you extract the reader monad and apply it to the initial state
test = putStrLn . show $ (runMyEnvironment r) `runReader` 4
5. What's common ?
What do the previous monads have in common ? Nothing ! Or not a lot. Indeed, being a monad is a very general concept and focusing on the part they have in common (return, >>=) is not the interesting
part nor the difficult one. What is interesting is how different they are : a Reader monad is providing ask and local functions ; an IO monad is providing putStrLn etc...
Each monad has its own personality. Of course, >>= will not be the same in each monad but from a user point of view, it will respect the same monadic laws :
return a >>= k == k a -- return is a "neutral element" on left
m >>= return == m -- return is a "neutral element" on right
m >>= (\x -> k x >>= h) == (m >>= k) >>= h -- a kind of associativity of >>=
The only things shared by all monads : the monadic laws.
(This post was imported from my old blog. The date is the date of the import. The comments where not imported.) | {"url":"http://www.alpheccar.org/content/60.html","timestamp":"2014-04-18T16:11:31Z","content_type":null,"content_length":"36263","record_id":"<urn:uuid:19dfde0a-bdad-4a12-92b5-7b1d35c7906a>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00179-ip-10-147-4-33.ec2.internal.warc.gz"} |
MAT 305 Semester Exam Spring 2000
1. a. Show how to simplify the following expression to generate a positive integer: C(5,2)
b. Determine the number of ways to arrange 10 distinct dogs in a straight line.
c. There are 8 coffee choices and 12 tea choices on the menu at Farms and Babble Bookstore. These are the only beverage choices.
(i) If a customer orders either tea or coffee, how many selections does the customer have to choose from?
(ii) If a customer orders one tea choice and one coffee choice, how many choices are possible? Disregard whether tea or coffee is ordered or served first.
d. Determine the number of non-negative integer solutions to the equation
A + B + C + D + E + F = 40.
e. Consider Pascal's Triangle, where 1 is the 0th row, 1 1 is the 1st row, and 1 2 1 is the 2nd row.
(i) Show the elements in row 6 of Pascal's Triangle.
(ii) State the sum of the elements that appear in row 20 of Pascal's Triangle.
2. a. Determine the number of collected terms in the expansion of
b. Determine the value of the coefficient K in the collected term
c. Determine the number of uncollected terms in the expansion of
d. Determine the number of collected terms in the expansion of
3. The following conjecture is to be proven true by induction or shown to be false using a counterexample:
a. State and carry out the first step in the induction process.
b. State and carry out the second step in the induction process.
c. State but do not carry out the third step in the induction process.
4. A passenger jet may fly three routes from New York to Chicago and four routes from Chicago to Los Angeles. For a round trip from New York to Los Angeles and back, determine the number of
ways a passenger can travel without repeating the same route on any leg of the round trip.
5. Five unique dice are thrown simultaneously. Determine the portion of all possible throws that results in at least two 5s appearing.
6. A nurse walks from home at 10th and H to her clinic at 16th and M, always walking to higher numbers or to letters further along in the alphabet. On a certain day, police block off 13th
street between K and L streets. What portion of all possible paths from the nurse's home to the clinic contain the blocked-off street?
7. A company named GAMES has an advertising display with the letters of its name, "GAMES." Colors are used for each letter, but the colors may be repeated. On one particular day, for example,
the colors might be red, green, green, blue, red. The company wishes to use a different color scheme for each of the 365 days in the year 2001. Determine the minimum number of colors that
are required for this task.
8. In 7,843 families, all of which have a TV set, a dishwasher, a microwave, and a car, there are six different types of TV sets, five different types of dishwashers, four different types of
microwaves, and eight different types of cars. What is the least number of families that have the same type of TV, dishwasher, microwave, and car?
9. Cannon balls are stacked in a compact equilateral triangular pattern. When there are n layers in the stack, there are n balls per side of the triangle on the lowest layer, n-1 per side on
the next layer, and so on, up to 1 ball on the top. Determine a recursion relationship B(n), including any initial conditions, for the total number of balls in a pile with n layers.
10. Exactly 10 chocolate chips are to be distributed at random into 6 chocolate-chip cookies. What is the probability that some cookie has at least 3 chips in it?
BONUS! A survey was conducted of 983 families to determine whether they possessed (1) a cell phone, (2) a microwave, (3) a satellite dish, or (4) a CD player. No family was completely without such
items, and 481 families had at least two of these items. At least three items were possessed by 345 families and 264 families possessed all 4 items.
a. Determine the total number of pieces of equipment held by all 983 families.
b. Determine the number of families that held the number of pieces of equipment specified below. Assume that none of these families had more than one of any particular item.
(i) exactly one piece of equipment
(ii) exactly two different pieces of equipment
(iii) exactly three different pieces of equipment | {"url":"http://math.illinoisstate.edu/day/courses/old/305/old/00/semexam2000.html","timestamp":"2014-04-20T16:32:56Z","content_type":null,"content_length":"13920","record_id":"<urn:uuid:7fba9c7f-994a-4f76-b18b-87170233be39>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
Westford, MA Algebra 2 Tutor
Find a Westford, MA Algebra 2 Tutor
...You list the information you know and use variables for unknown information. Then you find the connection between them to form one or more equations. Then you solve those equations.
5 Subjects: including algebra 2, physics, Chinese, precalculus
...I recently (03/2013) passed the Massachusetts Test for Educator Licensing (MTEL) subject 09 test (which covers the standard math curriculum from grades 8 - 12, including Algebra II) with the
maximum scores in each category. Although my primary educational passion has been the physical sciences, ...
12 Subjects: including algebra 2, chemistry, calculus, physics
...Majored in English in college, have a degree in Architecture and worked in the field for many years, now semi-retired. Wide travel and periods of living overseas in different cultures. Have
tutored students with a wide range of backgrounds, physical and emotional issues, as well as students needing a short boost in skills and confidence.
19 Subjects: including algebra 2, English, writing, geometry
...It includes transformations, trig and lots of modeling of real world behavior. When teaching Pre-calculus I focus on helping students to make the connections between the graphical, table and
equation forms of the functions we work with. The SAT is a test that with some simple techniques and pra...
23 Subjects: including algebra 2, physics, calculus, statistics
...I have been able to achieve success by setting a pace that is appropriate for each individual student. During our sessions and the attentiveness of the student, I also believe in engaging in a
certain amount of conversation with the student that can make our sessions feel more like getting help ...
13 Subjects: including algebra 2, calculus, geometry, GRE
Related Westford, MA Tutors
Westford, MA Accounting Tutors
Westford, MA ACT Tutors
Westford, MA Algebra Tutors
Westford, MA Algebra 2 Tutors
Westford, MA Calculus Tutors
Westford, MA Geometry Tutors
Westford, MA Math Tutors
Westford, MA Prealgebra Tutors
Westford, MA Precalculus Tutors
Westford, MA SAT Tutors
Westford, MA SAT Math Tutors
Westford, MA Science Tutors
Westford, MA Statistics Tutors
Westford, MA Trigonometry Tutors | {"url":"http://www.purplemath.com/Westford_MA_Algebra_2_tutors.php","timestamp":"2014-04-16T16:36:16Z","content_type":null,"content_length":"24010","record_id":"<urn:uuid:2865a751-c487-4af4-8f3c-a81d1367d805>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00473-ip-10-147-4-33.ec2.internal.warc.gz"} |
problem with passing matrix to glsl [Archive] - OpenGL Discussion and Help Forums
02-17-2011, 07:05 PM
hello guys,
i'm trying to debug a problem i met while implementing the shadow map method.
i first render the scene from the light source. and save the depth map onto a texture.
and then, i write a second pass, use the modelview matrix and the projection matrix from the first pass to find the texture coordinate of a vertex on the shadow map.
the problem is that i cannot pass the matrices from the first pass to the glsl shader of the second pass properly.
i striped down my program and did some simple tests:
here is the vertex shader:
void main()
now if i set the modelview and projection matrix with gluPerspective and gluLookAt, everything renders fine.
i can also change the ftransform to gl_ProjectionMatrix*gl_ModelViewMatrix*gl_Vertex; , everything is still working.
however, if i pass the modelview matrix and projection matrix to two uniform mat4 variables i defined in the shader, then the rendering becomes wrong, i can't see anything.
uniform mat4 modelview;
uniform mat4 projection;
void main()
i'm sure that the uniform location ids are all valid. they are 1 and 0 in my program.
and this is how i get the two matrices inside my c++ program and pass them to the shader:
glUniformMatrix4fv(modelviewUniform,1,GL_FALSE,tmo delview);
glUniformMatrix4fv(projectionUniform,1,GL_FALSE,tp rojection);
my self-defined matrices should have the same values as those in the gl_ProjectionMatrix and the gl_ModelViewMatrix, because i just acquired them from the opengl context before passing them to the
I am working under ubuntu 10.10 with nvidia 8800GTX. I have stuck on this for two days, can't figure out why.
thank you for your help. | {"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-173617.html","timestamp":"2014-04-21T07:41:10Z","content_type":null,"content_length":"7961","record_id":"<urn:uuid:5d09ad06-f0c0-4314-bc75-d002708dea72>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
The WTN function returns a multi-dimensional discrete wavelet transform of the input array A. The transform is based on a Daubechies wavelet filter.
WTN is based on the routine wtn described in section 13.10 of Numerical Recipes in C: The Art of Scientific Computing (Second Edition), published by Cambridge University Press, and is used by
Result = WTN( A, Coef [, /COLUMN] [, /DOUBLE] [, /INVERSE] [, /OVERWRITE] )
Return Value
Returns an output array of the same dimensions as A, containing the discrete wavelet transform over each dimension.
The input vector or array. The dimensions of A must all be powers of 2.
Note: If WTN is complex then only the real part is used for the computation.
An integer that specifies the number of wavelet filter coefficients. The allowed values are 4, 12, or 20. When Coef is 4, the daub4() function (see Numerical Recipes, section 13.10) is used. When
Coef is 12 or 20, pwt() is called, preceded by pwtset() (see Numerical Recipes, section 13.10).
Set this keyword if the input array A is in column-major format (composed of column vectors) rather than in row-major format (composed of row vectors).
Set this keyword to force the computation to be done in double-precision arithmetic.
If the INVERSE keyword is set, the inverse transform is computed. By default, WTN performs the forward wavelet transform.
Set the OVERWRITE keyword to perform the transform “in place.” The result overwrites the original contents of the array.
This example demonstrates the use of IDL’s discrete wavelet transform and sparse array storage format to compress and store an 8-bit gray-scale digital image. First, an image selected from the
people.dat data file is transformed into its wavelet representation and written to a separate data file using the WRITEU procedure.
Note: If you are viewing this topic from within the IDL Workbench, you can click on each code block in turn to execute the example.
; Begin by choosing the number of wavelet coefficients to use and a
; threshold value:
coeffs = 12 & thres = 10.0
; Open the people.dat data file, read an image using associated
; variables, and close the file:
OPENR, 1, FILEPATH('people.dat', SUBDIR = ['examples','data'])
images = ASSOC(1, BYTARR(192, 192, /NOZERO))
image_1 = images[0]
CLOSE, 1
; Expand the image to the nearest power of two using cubic
; convolution, and transform the image into its wavelet
; representation using the WTN function:
pwr = 256
image_1 = CONGRID(image_1, pwr, pwr, /CUBIC)
wtn_image = WTN(image_1, coeffs)
; Write the image to a file using the WRITEU procedure and check
; the size of the file (in bytes) using the FSTAT function:
OPENW, 1, 'original.dat'
WRITEU, 1, wtn_image
status = FSTAT(1)
CLOSE, 1
PRINT, 'Size of the file is ', status.size, ' bytes.'
Next, the transformed image is converted, using the SPRSIN function, to row-indexed sparse storage format retaining only elements with an absolute magnitude greater than or equal to a specified
threshold. The sparse image is written to a data file using the WRITE_SPR procedure.
; Now, we convert the wavelet representation of the image to a
; row-indexed sparse storage format using the SPRSIN function,
; write the data to a file using the WRITE_SPR procedure, and check
; the size of the "compressed" file:
sprs_image = SPRSIN(wtn_image, THRES = thres)
WRITE_SPR, sprs_image, 'sparse.dat'
OPENR, 1, 'sparse.dat'
status = FSTAT(1)
CLOSE, 1
PRINT, 'Size of the compressed file is ', status.size, ' bytes.'
; Determine the number of elements (as a percentage of total
; elements) whose absolute magnitude is less than the specified
; threshold. These elements are not retained in the row-indexed
; sparse storage format:
PRINT, 'Percentage of elements under threshold: ',$
100.*N_ELEMENTS(WHERE(ABS(wtn_image) LT thres, $
count)) / N_ELEMENTS(image_1)
Finally, the transformed image is reconstructed from the storage file and displayed alongside the original.
; Next, read the row-indexed sparse data back from the file
; sparse.dat using the READ_SPR function and reconstruct the
; image from the non-zero data using the FULSTR function:
sprs_image = READ_SPR('sparse.dat')
wtn_image = FULSTR(sprs_image)
; Apply the inverse wavelet transform to the image:
image_2 = WTN(wtn_image, COEFFS, /INVERSE)
; Finally, display the original and reconstructed
; images side by side:
WINDOW, 1, XSIZE = pwr*2, YSIZE = pwr, $
TITLE = 'Wavelet Image Compression and File I/O'
TV, image_1, 0, 0
TV, image_2, pwr - 1, 0
; Calculate and print the amount of data used in
; reconstruction of the image:
PRINT, 'The image on the right is reconstructed from:', $
100.0 - (100.* count/N_ELEMENTS(image_1)),$
'% of original image data.'
IDL Output
Size of the file is 262144 bytes.
Size of the compressed file is 69600 bytes.
Percentage of elements under threshold: 87.0331
The image on the right is reconstructed from: 12.9669% of original image data.
The sparse array contains only 13% of the elements contained in the original array. The following figure is created from this example. The image on the left is the original 256 by 256 image. The
image on the right was compressed by the above process and was reconstructed from 13% of the original data. The size of the compressed image’s data file is 26.6% of the size of the original image’s
data file. Note that due to limitations in the printing process, differences between the images may not be as evident as they would be on a high-resolution printer or monitor.
Original image (left) and image reconstructed
from 13% of the data (right).
Version History
See Also
This page has no user notes yet. Be the first one! | {"url":"http://exelisvis.com/docs/WTN.html","timestamp":"2014-04-20T08:14:39Z","content_type":null,"content_length":"69719","record_id":"<urn:uuid:e567bcb3-dcbb-4ee5-b9c2-38b15a855ce6>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00659-ip-10-147-4-33.ec2.internal.warc.gz"} |
Magnetotransport in an aluminum thin film on a GaAs substrate grown by molecular beam epitaxy
Magnetotransport measurements are performed on an aluminum thin film grown on a GaAs substrate. A crossover from electron- to hole-dominant transport can be inferred from both longitudinal
resistivity and Hall resistivity with increasing the perpendicular magnetic field B. Also, phenomena of localization effects can be seen at low B. By analyzing the zero-field resistivity as a
function of temperature T, we show the importance of surface scattering in such a nanoscale film.
Aluminum has found a wide variety of applications in heat sinks for electronic appliances such as transistors and central processing units, electrical transmission lines for power distribution, and
so forth. As a result, it is highly desirable to prepare high-quality aluminum materials for practical device applications. In particular, the epitaxial growth of Al thin films on GaAs substrates has
attracted much interest because of its relevance to the field of electronic interconnects [1,2]. Fundamental limitations on the speed of interconnects are the various scattering processes [3,4]
occurring in low-dimensional systems. In order to fully utilize it in the integrated circuits consisting of GaAs-based high electron mobility transistors, investigations of the scattering mechanism
on an Al thin film grown on a GaAs substrate are necessary.
One of the most important issues regarding the power dissipation and the speed of the device is the inelastic process such as electron-phonon scattering and electron-electron scattering. It is also
important for the illustrations of quantum interference phenomena [5-12], one of which is weak localization [WL]. In the WL regime, phase-coherent loops formed by the paths of electrons undergoing
multiple scattering events and the time-reversed ones lead to constructive interference at the original position of electrons at zero magnetic field under the assumption that the inelastic scattering
time is much larger than the elastic one. However, phase coherence would be destroyed under a perpendicular B and lead to the negative magnetoresistance [NMR]. Positive magnetoresistance [PMR] can
also be observed in the WL regime if the spin-orbit scattering [6,8,12] is strong enough.
Here, we review the temperature dependences of resistivity for various scattering mechanisms [13,14] that are generally observed in bulk materials. At low temperatures, T (lower than the Debye
temperature), electron-phonon scattering is usually the dominant one, which is expected to give a Bloch-Gruneisen T^5 contribution to the resistivity. However, for the materials with complex Fermi
surfaces or are suffering from interband scattering, Umklapp process [13-15] should be taken into account, leading to the T^3 dependence instead. Umklapp process means that the crystal momentum is
not conserved after an electron-phonon scattering event. A reciprocal lattice vector is added after this process, possibly leading to a large-angle scattering [15-17]. That is, the resistivity would
not decrease as rapidly as T^5, which introduces an additional factor of T^2 for the low-angle phonon scattering at low T. Also, the T^2 term expected for electron-electron scattering may possibly
appear at low T [13,15], while at extremely high T (much larger than the Debye temperature), the resistivity follows AT [15], where A is a constant depending on the properties of the system.
It is well known that electronic transport is significantly affected by surface scattering [18-20], in addition to electron-electron scattering and electron-phonon scattering, as the thickness of a
system is reduced to become comparable to the electron mean free path. There are several theories dealing with surface scattering.
As proposed by Olsen [21], neglecting the Umklapp process, low-angle scattering of electrons by phonons is important in a thin film where electrons are deflected by low-energy phonons to the surface
[22,23] more easily than that in the bulk sample. That is, surface scattering occurs frequently in a thin film. A more careful treatment for the size effects considering the surface conditions is
proposed by Soffer [24]. Here, we use Soffer's theory as the beginning of our analyses for the zero-field resistivity.
An Al thin film is investigated in our experiments especially for its special properties. With increasing B, a crossover from electron- to hole-dominant transport occurs as a result of its non-simple
Fermi surface [25-28]. Also, it is a good material for the investigations of quantum phenomena in low-dimensional systems ascribed to its long inelastic scattering time [7].
Experimental details
The sample used in this study was grown by molecular beam epitaxy [MBE]. The following layer sequence is grown on a semi-insulating GaAs (100) substrate: 200-nm undoped GaAs and 60-nm Al film. All
the processes were performed in the ultra-high-vacuum MBE chamber to prevent unnecessary defects. The Al thin film investigated here is a single crystalline, which can be checked by the X-ray shown
in Figure 1a. Figure 1b shows an atomic force microscopy [AFM] image of the Al thin film. Four-terminal magnetotransport measurements were performed in a top-loading He^3 system equipped with a
superconducting magnet over the temperature range from T = 4 K to T = 78 K using standard ac phase-sensitive lock-in techniques. The magnetic field is applied perpendicular to the plane of the Al
thin film. It is necessary to mention that all the resistivity results have been divided by the thickness (60 nm).
Figure 1. X-ray and AFM of the Al thin film. (a) The φ scanning of Al(111) peak of the sample. (b) An AFM 5 × 5-μm^2 image of a 60-nm-thick Al thin film.
Result and discussion
Longitudinal resistivity and Hall resistivity (ρ[xx ]and ρ[xy]) as a function of magnetic field B at various temperatures T are shown in Figure 2a,b, respectively. PMR [7,9] can be observed at all T.
It is generally believed that PMR is proportional to the quadratic B in the low-field region followed by a linear dependence on B with increasing B for non-compensated (the numbers of electrons and
holes are different) metals [14,26], such as aluminum investigated here. A classical PMR based on the two-band model [14,15,29] results in this B^2 dependence in the low-field regime where the Fermi
surface is spherical. With increasing B, the number of electrons undergoing Bragg reflection at the cusps in the second Brillouin zone increases, leading to the linear dependence on B for ρ[xx ][26,
27]. Another phenomenon regarding the crossover from electron- to hole-dominant transport is the reverse of the sign of the Hall resistivity [28] with increasing B, as presented in Figure 2b. Such a
bipolar phenomenon with increasing B can also be understood by the Bragg reflection occurring at the cusps, leading to the hole-like orbit.
Figure 2. Resistivity at various temperatures T. (a) Longitudinal resistivity, ρ[xx]. (b) Hall resistivity, ρ[xy], as a function of magnetic field B at various temperatures T.
While deviations from the B^2 dependence in the low-field regime at various T can be observed in Figure 3a, it is beyond the classical mechanism. Thus, we know that quantum interference-induced
corrections are needed to be taken into account for the exact illustration of our results. The contribution of weak localization [6,10] is usually dominant for T ≧ 20 K. At high B, ρ[xx ]shows a
trend toward a linear dependence on B, shown in Figure 3b, representing that the hole-like transport becomes dominant indeed. It is worth mentioning that the PMR can still be observed at T ≧ 20 K,
without turning into the NMR [6]. Most of the measurements on Al [6-10] show that the PMR is almost diminished at T > 10 K due to its weak spin-orbit scattering. As suggested by Bergmann et al. [7],
PMR almost diminishes at T ≧ 9.4 K for Al in the low-field regime. In order to study the scattering mechanisms in different T ranges, we analyzed the zero-field ρ[xx ]as a function of T in the next
Figure 3. Deviations from the B^2 dependence in the low-field regime at various T. ρ[xx ]as function of B^2 (a) and B (b). The dotted lines in blue represent linear parts of the data.
As shown in Figure 4a, for 4.8 K ≦ T ≦ 78 K, the metallic behavior can be observed without a transition to the insulator, as is the case for a pure metal [11]. The mean free path for the bulk Al is
approximately equal to 17.5 μm [23], substantially larger than the thickness of the thin film studied here (60 nm). It prevails that surface scattering is important instead of the grain boundary
scattering in such a thin film. For a polycrystalline material, grain boundary scattering needs to be considered, while for the single crystal, it is a minor effect. In accordance with Soffer's model
[24] of surface scattering and the extensive work of Sambles et al. [19,20], the resistivity takes the form
Figure 4. Resistivity and metallic behavior. (a) Zero-field resistivity as a function of T ranging from T = 4.8 K to T = 78 K. The red solid line corresponds to a fit to Eq. (1). The best fit is
limited at T > 30 K, as shown in the inset. (b), (c) ρ[xx ](B = 0) as functions of T^2 and T^3, respectively. The red dashed lines are a guide to the eye.
where A and B are system-dependent constants. The first term represents the residual resistivity. The second and the third terms are due to electron-electron scattering and Bloch-Gruneisen
electron-phonon scattering, respectively. The fittings of Eq. (1) to the resistivity over the whole temperature range and above T = 30 K are shown in Figure 4a and its inset, respectively. It can be
seen that the good fitting is limited to the temperature above 30 K. The obtained coefficient of T^2 dependence is approximately equal to 600 fΩmK^-2. However, Soffer's theory cannot produce such a
large T^2 term over such a wide temperature range 30 K < T < 78 K. Also, electron-electron scattering would not exist at such high T. It is believed that the violation of Soffer's theory in aluminum
is due to its complex Fermi surface. As suggested by Sambles et al. [30], T^2 dependence can exist alone without a T^5 term, which is derived by considering the Umklapp scattering process occurring
at the surface for materials with a disconnected Fermi surface [31]. Figure 4b shows that ρ[xx ]follows the T^2 dependence as T > 30 K, indeed consistent with the model of surface Umklapp scattering.
On the other hand, it shows a trend toward a T^3 dependence with decreasing T below 30 K, as shown in Figure 4c, which can be ascribed to the electron-phonon scattering introducing the Umklapp
process, usually observed in the bulk material [13]. Even though we know that the Umklapp process is likely to be important in our system, the crossover from T^2 to T^3 dependence with decreasing T
can still be explained by Olsen's argument for low-angle scattering qualitatively. At relatively low T, the magnitude of the momentum of phonons is too small to induce the size effect such that the
Umklapp scattering process occurring in the interior may possibly be dominant over that occurring at the interface. Thus, the crossover from the T^2 dependence to T^3 dependence of resistivity with
decreasing T below 30 K can be predicted. A similar T^2 term can be observed for 46 K < T < 90 K performed in a subsequent cooldown in a closed cycle system, as shown in Figure 5. A deviation from
this dependence at T > 90 K is ascribed to the mean free path shortening with decreasing T. Thus, the size effect becomes less important, also consistent with Olsen's argument. At T > 105 K, ρ[xx ]
shows a tendency toward a linear dependence on T, as shown in the inset of Figure 5. A classical model has predicted such a linear term at high T (much larger than the Debye temperature, about 394 K
for aluminum). However, our result is not in this case. The onset of this linear dependence with increasing T and how the size effects modulate the magnetoresistance requires further investigations.
Figure 5. ρ[xx ]as a function of T^2 performed in a subsequent cooldown in a closed cycle system ranging from T = 46 K to T = 298 K. Inset: ρ[xx ]as a function of T, where the red dashed line
represents the linear fit at T > 105 K.
Here, it is worth mentioning that the electron-phonon impurity interference also leads to the T^2 contribution to the resistivity [32-34], which should be smaller than the residual resistivity.
However, in our results, the difference between ρ(T = 78 K) and ρ(T = 30 K) is approximately equal to 0.059 Ω, which is larger than ρ(T = 4.8 K) = 0.025 Ω, taken as the residual resistivity,
inconsistent with the requirement for the correction term. Also, there are several experimental results indicating that such a mechanism is not the dominant one for a relatively pure metal.
Therefore, we can safely neglect the influence of the electron-phonon impurity interference in our Al thin film.
In conclusion, we have performed magnetotransport measurements on an aluminum thin film grown on a GaAs substrate. A crossover from electron- to hole-dominant transport can be inferred from both
longitudinal resistivity and Hall resistivity with increasing B, characteristic of the complex Fermi surface of aluminum. The existence of positive magnetoresistance at T ≧ 20 K indicates that the
spin-orbit scattering should be taken into account for the exact treatment of localization effects. The observed surface caused T^2 term for ρ[xx ]demonstrates that surface Umklapp scattering is
important. With decreasing T, a tendency toward a T^3 dependence suggests that an Umklapp process occurring in the interior is more important than that occurring at the surface. Such a crossover is
consistent with Olsen's argument for low-angle electron-phonon scattering qualitatively. All these experimental results show that the nature of the interface between the Al thin film and the GaAs
substrate would significantly affect the electrical properties of such a nanoscale film.
The authors declare that they have no competing interests. This work was funded by the NSC, Taiwan.
STL and CC performed the low-temperature experiments on the Al film and drafted the manuscript. KYC and MRY performed the low-temperature experiments on the Al film. SDL and CTL conceived of the
study. JYW fabricated the Al samples. SWL prepared the Al samples and performed the AFM and X-Ray measurements. All authors read and approved the final manuscript.
Sign up to receive new article alerts from Nanoscale Research Letters | {"url":"http://www.nanoscalereslett.com/content/6/1/102/?fmt_view=mobile","timestamp":"2014-04-18T17:24:09Z","content_type":null,"content_length":"103042","record_id":"<urn:uuid:3868bac8-e6d9-48bf-ab5c-8728bd2f3285>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00491-ip-10-147-4-33.ec2.internal.warc.gz"} |
Time Travel
from SkyBooksUSA Website
Space - Time
by Kalen J. Craig
Einstein said that three dimensional space may be curved and could be closed into a sphere or a torus. It would likely have a radius of curvature of approximately: R[E] = GM[u]c^2 = 6.4 x 10^26 cm,
where G is the constant of gravity and Mu is the mass of the universe.
In 1926 two scientists Theodore Kaluza and Oskar Klein suggested that electromagnetic theory could be explained if space had a fourth dimension composed of a multitude of compacted space bubbles
whose radii of curvature approximate the Planck length: d = √(hG/c^3) = 1.61 x 10^-33 cm. A Kaluza-Klein compacted space bubble is represented by Planck’s constant of action h. Which is the unit of
angular momentum, h = mcy, where mc is the electron momentum and y is the Compton wavelength of an electron.
Planck=s action represents one rotation cycle of an electron. Each such cycle of action is thought to produce a compact space bubble that is emitted from an electron to translate through three-space
at less than c. Such action bubbles have a slight mass so that an acceleration or deceleration of the bubble flow, represents an electrostatic force. Motion of the source electrons and hence the flow
produces an orthogonal magnetic field. An acceleration or oscillation of the charge source generates an electromagnetic field that moves at c.
Eugene and I agree with this and go a bit further. We assume that the four fundamental forces: the electromagnetic and gravitational forces plus the strong and weak nuclear forces can each be
represented by a compacted space dimension. This makes physical space seven dimensional.
We further assume that these compacted spaces (sometimes called Calabi-Yau space) make physical space into a super fluid ether.
We also assume that the ether fluid produces two independent flows. One which we call charge space is a manifestation of the flow properties of the electrostatic and strong force compacted
dimensions. We call these compacted bubbles geoids. They are probably two dimensional toruses (doughnut shaped surfaces).
Charge space geoids flow out of positive charge spinning one way and out of negative charge spinning the other. The flows start out at near velocity c inside the particle expand out through the
particle and decelerate generating an all prevailing electrostatic force field. This field is the charge space (ether). Whenever the flows get together they cancel creating an attraction between
opposite charges.
We call the other ether flow gravity space. It may exists as very tiny three dimensional blobs which are the compacted space bubbles from gravity and the weak nuclear force. The size of these blobs
could approximate the gravitational radius of an electron: s = Gm/c^2 = 6.75 x 10^-56 cm, where the m, in this case, is the mass of an electron. We suspect these tiny geons are Higgs particles, with
a mass something like 10^-191 grams.
The assumption that space is composed of compacted space bubbles with a slight mass accords with quantum mechanics; because, empty space is commonly thought to generate quantum fluctuations that give
it a small energy or mass.
The mass of space could generate a positive cosmological constant (repulsion) that, like Einstein suggested in his 1916 general theory, could balance the attraction of gravity, and keep the radius of
3-space curvature constant. (See Steven Weinbergs, "Dreams of a Final Theory", page 224, Random House, 1992).
It is not usually recognized that the observed redshift of light from distant sources could be due to the collapse of the time dimensions, of space-time, as well as it can be from the generally
assumed expansion of 3-space. (see Figures 7&8 from our book the Kalen Universe, on our web site the, "kalenuniverse.com".
If the reader wishes to follow our concept of space flows in more detail, he can check out the summary link or the link to chapter 6 of our book in the above web site.
He would see how and why gravity space geons appear instantly out of wormholes between matter and antimatter galaxies; this causes an outer space repulsion (Einstein’s cosmological constant) between
opposite types of matter. This repulsion separates the universe into equal parts of matter and antimatter, and helps explain the missing mass dark matter problem for cosmologists.
In brief: gravity space geons in outer space converge and accelerate. The acceleration produces gravity and the convergence produces mass particles. At the center of each particle the flow at
velocity c produces a black wormhole, through which we postulate that the geons instantly transfer to a mirror image of the particle. This image occurs at another place in space-time which we call
the shadow world. However, space and time does not exist in the wormhole between the particle and its mirror image. Hence these images are simply a continuance of the real world particles in an
unseen shadow world.
The fast moving geons at the center of the shadow world image particles expand outward and decelerate producing a weak nuclear force in the particle. The weak force is similar to the electromagnetic
force and helps produce particle decays. The deceleration reduces the G flow to zero near the particle surface. We postulate the zero motion generates a wormhole that allows the stoped geons to
transfer instantly out through macro space to an interface between matter and antimatter galaxies.
Again, the geon flows start in outer space at zero velocity then converge and accelerate down toward fermion particles. The acceleration creates gravity and the convergence gives inertial mass to the
shadow world particles. Geons spinning one way converge toward matter while those spinning the other converge toward antimatter.
What is time? Time is mysterious and hard to define. In this paper we will limit our discussion to physical time, because psychological time seems even more mysterious.
Dirk Brower of MIT (who consulted with Kalen when he worked at the Naval Research Laboratory) characterized time as the great undefined variable of physics.
We have also heard time defined as that which is measured by a clock. A clock measures some steady motion, or change, such as the evolution or decay of a physical quantity; like mass, energy,
pressure, entropy and etc. The change of the quantity could be in space as position, size or shape.
Einstein suggested that a light beam bouncing back and forth between two mirrors would be a perfect clock. A light beam is a perfectly steady motion.
Motion usually implies the translation of mass particles through space, but if the motion is a light beam no mass is involved. Light is just an oscillatory motion of space. So a unit of time for this
motion would be a unit of space. Likewise, if as we say, mass can be defined as the convergence of space toward a wormhole in space, then again a unit of motion or time is a unit of space. This may
seem a bit vague so we will give one more example.
We propose that: all motion is wave motion.
In order to explain this concept, we first refer to the basic postulate #2 of the KALEN UNIVERSE: That a condition of zero time opens a wormhole, which is an instantaneous path to another location in
space-time (see the link to chapter 3 (Postulates) of our book in the kalenuniverse.com web site).
In relativity theory, zero time occurs at the velocity c of light. We assume in our #2 postulate that zero time also occurs at zero velocity (no motion no time).
Electromagnetic waves move at c and have zero time along the line of motion. However, orthogonal to the line of motion, the electric and magnetic fields move (oscillate) at less than c. When,
however, a magnetic or electric field goes through a maximum there is a moment of zero motion. This occurs for any sine wave motion. Wormholes can occur at these wave peaks.
Electromagnetic waves expand spherically as retarded waves from a charge source. Under Maxwell=s equations normal (retarded) waves are received after they are emitted. Whereas, his advanced waves
have negative time and, we predict, they converge through wormholes and are received at the same time as they start. You see that, when a wave front reaches a target charge (electron) it triggers a
wormhole all along the wave front. The retarded wave collapses instantly through the wormhole (as an advanced wave) onto the target charge. One can often plot this expansion and collapse as a
straight line from the source to the target. You see a photon does not move as a particle along a line but rather moves as a wave function from source to target.
The two hole experiment of quantum mechanics shows that not only do bosons (photons) travel as waves but so do fermion particles such as electrons. See our article Quantum Weirdness. This paper along
with Questionable Cosmological Assumptions, are good background articles to read along with the present paper.
Inertial particles (fermions) contain both charge and mass. They are both electromagnetic and gravitational, so are composed of both electromagnetic and gravity waves.
Eugene and I postulate that these tiny gravity waves are a sub harmonic of electromagnetic waves, but are much, much weaker, smaller and more complex. A mathematical theory of such tiny gravity waves
has not been written.
Our suggestion is a new action constant k which we call the kalen. The constant k = mcd where mc is the electron momentum and d is the Planck length: d = √(hG/c^3) = 1.61 x 10^-33 cm, where h is the
Planck unit of angular momentum and G is the constant of gravity. This k unit should give a sub harmonic of quantum theory for gravity waves. This would reduce the indeterminacy of quantum theory and
explain Einstein=s hidden variables.
However, a mathematical beginning for a theory of gravity waves (through the M theory of super strings) is on the horizon. Incidentally, in string theory all the particles are generated by (composed
of) vibrations of tiny strings of space or of membranes or blobs such as our geoids or geons.
Any mass such as the earth is composed of quantum particles (fermions) which are just complex wave packet particles, that move as waves much like photons move. They just make more starts and stops
and so travel slower than photons. Fermion particles do not need a target to move to. They just reproduce themselves in time as they move along.
Our all motion is wave motion idea, with its instantly collapsing advanced waves and multiple micro starts and stops, may seem far out, but is actually quite simple. When it is compared to the
concept of the various boson messenger particles of quantum mechanics.
Boson messenger particles can be better visualized as flow properties of space. That is, a force field between two objects is easily visualized as due to the appearance or disappearance of space
between the objects.
If all motion is wave motion and time is motion then again, the unit of time should be a unit of waves (space).
In the first section of this article (SPACE) we proposed two compacted units of space. One which we call geoids for charge (electromagnetic) space. The other we call geons for gravity space. The
geoids are unit electromagnetic cycles (from one electron) given by Planck=s constant of action (angular momentum): h = mcy where y is the Compton wavelength; of an electron. A geon is a unit gravity
cycle from one electron given by kalen=s constant of action k = mcd and, as we said, d is the Planck length.
If the basic increments of space and time are the same, then geoids and geons are also basic units of time.
However, time dimensions are not quite the same as space dimensions.
In general relativity the time dimension or dimensions are orthogonal to the space dimensions. This is indicated mathematically by multiplying the time dimensions by √(-1). Multiplying by -1 gives a
180º rotation and multiplying by √(-1) gives a 90º rotation.
Both; space and time are compounded from the basic units of action: the geoids h and the geons k. Gene and I assume that h and k are the ultimate units of existence and are more basic than length,
time or mass, even though, h and k appear to have the math dimensions of ML^2/T. Consequently, trying to measure the length, time or mass of quantum particles in terms of h and k leads to Heisenberg=
s uncertainty principle. Even though the discovery of a sub gravity quantum realm, say through use of the Kalen constant k, could largely remove indeterminacy from quantum mechanics, a certain amount
of uncertainty would remain. We like to think that it allows intelligent beings a certain amount of leeway in choosing their lives.
One ordinarily thinks of the evolution of space as due to the time dimension. In spite of this, I have tried to show that space and time are on an equal basis as far as change and evolution are
The big bang theory assumes that space-time is expanding spherically from a point singularity. This gives a beginning to time some 10 to 20 billion years ago. An amazing amount of work has been done
on this theory. It gives a creditable evolution of matter from a very hot start to the present very cold 3 degree background temperature. But it has run into serious problems with observation, due,
we believe, to certain long ingrained questionable assumptions. See our link to Questionable Cosmological Assumptions.
For one thing, we assume that space-time is not spherical but rather is an oscillating torus, that expands and contracts between two fixed limits set by a fixed radius of curvature of 3-space. In
order for this doughnut metric to evolve as we suggest, time must also be three dimensional. This makes space-time six dimensional; or rather ten dimensional when one considers the four compacted
force dimensions.
This geometry is complicated but easier to picture than ten dimensional string theory. See Figures 7 and 8 in our figures link, a page of the kalenuniverse.com web site.
If the time and space dimensions are much the same, why is three-space so obvious while the time dimensions are hidden?
One reason is that most of the space flows (motions) along the time dimensions are instantaneous through wormholes
In order to see how this comes about, the reader should understand our concept of the Shadow World.
The idea of a shadow world has been around for a long time. String theorists predict that all the particles have a mirror image partner that is too heavy to detect. Also their E8 x E8 super symmetry
seems to predict an invisible duplicate Shadow World.
Actually Einstein=s idea of particles being wormhole bridges between two 3D slices of space-time is closer to our idea of a shadow world (See Einstein=s quote in the link to Quantum Weirdness in the
kalen web site). We think of gravity space as being an ether like super fluid that converges upon matter producing a black wormhole at the center of any mass particle. These wormholes are
instantaneous connections between any real world particle and its shadow world counterpart.
Now, because distance and time do not exist inside of a wormhole; a real world particle and its shadow world antimatter counterpart can be thought of as one particle.
Space flows generate the real world, then flow through mass and charge wormholes to create the next slice of space-time which is the shadow world. Thus, time is essentially the instantaneous flow of
space through mass to the next observable slice of space-time which we call the shadow world.
In order to visualize this better, we will omit one dimension and think of space-time as three dimensional. Consider a three dimensional object such as a human body. A slice through the body would be
a two dimensional picture. One can think of the whole body as a series of these pictures. Visualize each picture slice as a moving picture frame. Imagine a two dimensional observer who could see
these pictures projected sequentially in time. He could combine and see them as a 3D object: The human body. The third dimension would be time to this 2D observer.
We see that time can be the sequential observation of our real world of three-space along the next higher dimension which we call time. We call the next slice of space-time the shadow world.
We also see how the time dimensions can be hidden in wormholes, and the shadow world hidden behind wormholes
Go Back
Is time Travel Possible?
by John and Mary Gribbin
In one of the wildest developments in serious science for decades, researchers from California to Moscow have recently been investigating the possibility of time travel. They are not, as yet,
building TARDIS lookalikes in their laboratories; but they have realized that according to the equations of Albert Einstein’s general theory of relativity (the best theory of time and space we have),
there is nothing in the laws of physics to prevent time travel. It may be extremely difficult to put into practice; but it is not impossible.
It sounds like science fiction, but it is taken so seriously by relativists that some of them have proposed that there must be a law of nature to prevent time travel and thereby prevent paradoxes
arising, even though nobody has any idea how such a law would operate. The classic paradox, of course, occurs when a person travels back in time and does something to prevent their own birth --
killing their granny as a baby, in the more gruesome example, or simply making sure their parents never get together, as in Back to the Future. It goes against commonsense, say the skeptics, so there
must be a law against it. This is more or less the same argument that was used to prove that space travel is impossible.
So what do Einstein’s equations tell us, if pushed to the limit? As you might expect, the possibility of time travel involves those most extreme objects, black holes. And since Einstein’s theory is a
theory of space and time, it should be no surprise that black holes offer, in principle, a way to travel through space, as well as through time.
A simple black hole won’t do, though. If such a black hole formed out of a lump of non-rotating material, it would simply sit in space, swallowing up anything that came near it. At the heart of such
a black hole there is a point known as a singularity, where space and time cease to exist, and matter is crushed to infinite density. Thirty years ago, Roger Penrose (now of Oxford University) proved
that anything which falls into such a black hole must be drawn into the singularity by its gravitational pull, and also crushed out of existence.
But, also in the 1960s, the New Zealand mathematician Roy Kerr found that things are different if the black hole is rotating. A singularity still forms, but in the form of a ring, like the mint with
a hole. In principle, it would be possible to dive into such a black hole and through the ring, to emerge in another place and another time. This "Kerr solution" was the first mathematical example of
a time machine, but at the time nobody took it seriously. At the time, hardly anybody took the idea of black holes seriously, and interest in the Kerr solution only really developed in the 1970s,
after astronomers discovered what seem to be real black holes, both in our own Milky Way Galaxy and in the hearts of other galaxies.
This led to a rash of popular publications claiming, to the annoyance of many relativists, that time travel might be possible. In the 1980s, though, Kip Thorne, of CalTech (one of the world’s leading
experts in the general theory of relativity), and his colleagues set out to prove once and for all that such nonsense wasn’t really allowed by Einstein’s equations.
They studied the situation from all sides, but were forced to the unwelcome conclusion that there really was nothing in the equations to prevent time travel, provided (and it is a big proviso) you
have the technology to manipulate black holes. As well as the Kerr solution, there are other kinds of black hole time machine allowed, including setups graphically described as "wormholes", in which
a black hole at one place and time is connected to a black hole in another place and time (or the same place at a different time) through a "throat".
Thorne has described some of these possibilities in a recent book, Black Holes and Time Warps (Picador), which is packed with information but far from being an easy read.
Now, Michio Kaku, a professor of physics in New York, has come up with a more accessible variation on the theme with his book Hyperspace (Oxford UP), which (unlike Thorne’s book) at least includes
some discussion of the contribution of researchers such as Robert Heinlein to the study of time travel. The Big Bang, string theory, black holes and baby universes all get a mention here; but it is
the chapter on how to build a time machine that makes the most fascinating reading.
"Most scientists, who have not seriously studied Einstein’s equations," says Kaku, "dismiss time travel as poppycock". And he then goes on to spell out why the few scientists who have seriously
studied Einstein’s equations are less dismissive. Our favourite page is the one filled by a diagram which shows the strange family tree of an individual who manages to be both his/her own father and
his/her own mother, based on the Heinlein story "All you zombies --".
And Kaku’s description of a time machine is something fans of Dr Who and H.G. Wells would be happy with:
[It] consists of two chambers, each containing two parallel metal plates. The intense electric fields created between each pair of plates (larger than anything possible with today’s technology)
rips the fabric of space-time, creating a hole in space that links the two chambers.
Taking advantage of Einstein’s special theory of relativity, which says that time runs slow for a moving object, one of the chambers is then taken on a long, fast journey and brought back: Time
would pass at different rates at the two ends of the wormhole, [and] anyone falling into one end of the wormhole would be instantly hurled into the past or the future [as they emerge from the
other end].
And all this, it is worth spelling out, has been published by serious scientists in respectable journals such as Physical Review Letters (you don’t believe us? check out volume 61, page 1446).
Although, as you may have noticed, the technology required is awesome, involving taking what amounts to a black hole on a trip through space at a sizeable fraction of the speed of light. We never
said it was going to be easy! So how do you get around the paradoxes? The scientists have an answer to that, too. It’s obvious, when you think about it; all you have to do is add in a judicious
contribution from quantum theory to the time travelling allowed by relativity theory. As long as you are an expert in both theories, you can find a way to avoid the paradoxes.
It works like this. According to one interpretation of quantum physics (there are several interpretations, and nobody knows which one, if any, is "right"), every time a quantum object, such as an
electron, is faced with a choice, the world divides to allow it to take every possibility on offer. In the simplest example, the electron may be faced with a wall containing two holes, so that it
must go through one hole or the other. The Universe splits so that in one version of reality -- one set of relative dimensions -- it goes through the hole on the left, while in the other it goes
through the hole on the right. Pushed to its limits, this interpretation says that the Universe is split into infinitely many copies of itself, variations on a basic theme, in which all possible
outcomes of all possible "experiments" must happen somewhere in the "multiverse". So there is, for example, a Universe in which the Labour Party has been in power for 15 years, and is now under
threat from a resurgent Tory Party led by vibrant young John Major.
How does this resolve the paradoxes? Like this. Suppose someone did go back in time to murder their granny when she was a little girl. On this multiverse picture, they have slid back to a
bifurcation point in history. After killing granny, they move forward in time, but up a different branch of the multiverse. In this branch of reality, they were never born; but there is no
paradox, because in he universe next door granny is alive and well, so the murderer is born, and goes back in time to commit the foul deed!
Once again, it sounds like science fiction, and once again science fiction writers have indeed been here before. But this idea of parallel universes and alternative histories as a solution to the
time travel paradoxes is also now being taken seriously by some (admittedly, not many) researchers, including David Deutsch, in Oxford.
Their research deals with both time, and relative dimensions in space. You could make a nice acronym for that -- TARDIS, perhaps?
Go Back
Time travel on Agenda
by John Gribbin
CLAIMS that time travel is impossible in principle have been shown to be in error by an Israeli researcher. Amos Ori, of the Technion-Israel Institute of Technology, in Haifa, has found a flaw in the
argument put forward recently by Stephen Hawking, of Cambridge University, claiming to rule out any possibility of time travel.
This is the latest twist in a story that began in the late 1980s, when Kip Thorne and colleagues at the California Institute of Technology suggested that although there might be considerable
practical difficulties in constructing a time machine, there is nothing in the laws of physics as understood at present to forbid this. Other researchers tried to find flaws in the arguments of the
CalTech team, and pointed in particular to problems in satisfying a requirement known as the "weak energy condition", which says that any real observer should always measure energy distributions that
are positive. This rules out some kinds of theoretical time machines, which involve travelling through black holes held open by negative energy stuff.
There are also problems with time machines that involve so-called singularities, points where space and time are crushed out of existence and the laws of physics break down. But Ori has found
mathematical descriptions, within the framework of the general theory of relativity, of spacetimes which loop back upon themselves in time, but in which no singularity appears early enough to
interfere with the time travel, and the weak energy condition is satisfied (Physical Review Letters, vol 71 p 2517).
"At present," he says, "one should not completely rule out the possibility of constructing a time machine from materials with positive energy densities."
Go Back
Why Time Travel is Possible
by John Gribbin
Physicists have found the law of nature which prevents time travel paradoxes, and thereby permits time travel. It turns out to be the same law that makes sure light travels in straight lines, and
which underpins the most straightforward version of quantum theory, developed half a century ago by Richard Feynman.
Relativists have been trying to come to terms with time travel for the past seven years, since Kip Thorne and his colleagues at Caltech discovered -- much to their surprise -- that there is nothing
in the laws of physics (specifically, the general theory of relativity) to forbid it. Among several different ways in which the laws allow a time machine to exist, the one that has been most
intensively studied mathematically is the "wormhole".
This is like a tunnel through space and time, connecting different regions of the Universe -- different spaces and different times. The two "mouths" of the wormhole could be next to each other in
space, but separated in time, so that it could literally be used as a time tunnel.
Building such a device would be very difficult -- it would involve manipulating black holes, each with many times the mass of our Sun. But they could conceivably occur naturally, either on this scale
or on a microscopic scale.
The worry for physicists is that this raises the possibility of paradoxes, familiar to science fiction fans. For example, a time traveller could go back in time and accidentally (or even
deliberately) cause the death of her granny, so that neither the time traveller’s mother nor herself was ever born.
People are hard to describe mathematically, but the equivalent paradox in the relativists’ calculations involves a billiard ball that goes in to one mouth of a wormhole, emerges in the past from the
other mouth, and collides with its other self on the way in to the first mouth, so that it is knocked out of the way and never enters the time tunnel at all. But, of course, there are many possible
"self consistent" journeys through the tunnel, in which the two versions of the billiard ball never disturb one another.
If time travel really is possible -- and after seven years’ intensive study all the evidence says that it is -- there must, it seems, be a law of nature to prevent such paradoxes arising, while
permitting the self- consistent journeys through time. Igor Novikov, who holds joint posts at the P. N. Lebedev Institute, in Moscow, and at NORDITA (the Nordic Institute for Theoretical Physics), in
Copenhagen, first pointed out the need for a "Principle of Self-consistency" of this kind in 1989 (Soviet Physics JETP, vol 68 p 439). Now, working with a large group of colleagues in Denmark,
Canada, Russia and Switzerland, he has found the physical basis for this principle.
It involves something known as the Principle of least action (or Principle of minimal action), and has been known, in one form or another, since the early seventeenth century. It describes the
trajectories of things, such as the path of a light ray from A to B, or the flight of a ball tossed through an upper story window. And, it now seems, the trajectory of a billiard ball through a time
tunnel. Action, in this sense, is a measure both of the energy involved in traversing the path and the time taken. For light (which is always a special case), this boils down to time alone, so that
the principle of least action becomes the principle of least time, which is why light travels in straight lines.
You can see how the principle works when light from a source in air enters a block of glass, where it travels at a slower speed than in air. In order to get from the source A outside the glass to a
point B inside the glass in the shortest possible time, the light has to travel in one straight line up to the edge of the glass, then turn through a certain angle and travel in another straight line
(at the slower speed) on to point B. Travelling by any other route would take longer.
The action is a property of the whole path, and somehow the light (or "nature") always knows how to choose the cheapest or simplest path to its goal. In a similar fashion, the principle of least
action can be used to describe the entire curved path of the ball thrown through a window, once the time taken for the journey is specified.
Although the ball can be thrown at different speeds on different trajectories (higher and slower, or flatter and faster) and still go through the window, only trajectories which satisfy the Principle
of least action are possible.
Novikov and his colleagues have applied the same principle to the "trajectories" of billiard balls around time loops, both with and without the kind of "self collision" that leads to paradoxes. In a
mathematical tour de force, they have shown that in both cases only self-consistent solutions to the equations satisfy the principle of least action -- or in their own words,
"the whole set of classical trajectories which are globally self-consistent can be directly and simply recovered by imposing the principle of minimal action"
(NORDITA Preprint, number 95/49A).
The word "classical" in this connection means that they have not yet tried to include the rules of quantum theory in their calculations. But there is no reason to think that this would alter their
conclusions. Feynman, who was entranced by the principle of least action, formulated quantum physics entirely on the basis of it, using what is known as the "sum over histories" or "path integral"
formulation, because, like a light ray seemingly sniffing out the best path from A to B, it takes account of all possible trajectories in selecting the most efficient.
So self-consistency is a consequence of the Principle of least action, and nature can be seen to abhor a time travel paradox. Which removes the last objection of physicists to time travel in
principle -- and leaves it up to the engineers to get on with the job of building a time machine.
Go Back
Time Travel for beginners
by John Gribbin
Exactly one hundred years ago, in 1895, H. G. Wells classic story The Time Machine was first published in book form. As befits the subject matter, that was the minus tenth anniversary of the first
publication, in 1905, of Albert Einstein’s special theory of relativity. It was Einstein, as every schoolchild knows, who first described time as "the fourth dimension" -- and every schoolchild is
wrong. It was actually Wells who wrote, in The Time Machine, that,
"there is no difference between Time and any of the three dimensions of Space, except that our consciousness moves along it"
Since the time of Wells and Einstein, there has been a continuing literary fascination with time travel, and especially with the paradoxes that seem to confront any genuine time traveller (something
that Wells neglected to investigate). The classic example is the so- called "granny paradox", where a time traveller inadvertently causes the death of his granny when she was a small girl, so that
the traveller’s mother, and therefore the traveller himself, were never born. In which case, he did not go back in time to kill granny . . . and so on.
A less gruesome example was entertainingly provided by the science fiction writer Robert Heinlein in his story By his bootstraps (available in several Heinlein anthologies). The protagonist in the
story stumbles on a time travel device brought back to the present by a visitor from the far future.
He steals it and sets up home in a deserted stretch of time, constantly worrying about being found by the old man he stole the time machine from -- until one day, many years later, he realises that
he is now the old man, and carefully arranges for his younger self to "find" and "steal" the time machine. Such a narcissistic view of time travel is taken to its logical extreme in David Gerrold’s
The Man Who Folded Himself (Random House, 1973).
Few of the writers of Dr Who have had the imagination actually to use his time machine in this kind of way. It would, after all, make for rather dull viewing if every time the Doctor had been
confronted by a disaster he popped into the TARDIS, went back in time and warned his earlier self to steer clear of the looming trouble. But the implications were thoroughly explored for a wide
audience in the Back to the Future trilogy, ramming home the point that time travel runs completely counter to common sense.
Obviously, time travel must be impossible. Only, common sense is about as reliable a guide to science as the well known "fact" that Einstein came up with the idea of time as the fourth dimension is
to history. Sticking with Einstein’s own theories, it is hardly common sense that objects get both heavier and shorter the faster they move, or that moving clocks run slow. Yet all of these
predictions of relativity theory have been born out many times in experiments, to an impressive number of decimal places. And when you look closely at the general theory of relativity, the best
theory of time and space we have, it turns out that there is nothing in it to forbid time travel.
The theory implies that time travel may be very difficult, to be sure; but not impossible.
Perhaps inevitably, it was through science fiction that serious scientists finally convinced themselves that time travel could be made to work, by a sufficiently advanced civilization. It happened
like this. Carl Sagan, a well known astronomer, had written a novel in which he used the device of travel through a black hole to allow his characters to travel from a point near the Earth to a point
near the star Vega. Although he was aware that he was bending the accepted rules of physics, this was, after all, a novel.
Nevertheless, as a scientist himself Sagan wanted the science in his story to be as accurate as possible, so he asked Kip Thorne, an established expert in gravitational theory, to check it out and
advise on how it might be tweaked up. After looking closely at the non-commonsensical equations, Thorne realized that such a wormhole through space-time actually could exist as a stable entity within
the framework of Einstein’s theory.
Sagan gratefully accepted Thorne’s modification to his fictional "star gate", and the wormhole duly featured in the novel, Contact, published in 1985. But this was still only presented as a shortcut
through space. Neither Sagan nor Thorne realized at first that what they had described would also work as a shortcut through time. Thorne seems never to have given any thought to the time travel
possibilities opened up by wormholes until, in December 1986, he went with his student, Mike Morris, to a symposium in Chicago, where one of the other participants casually pointed out to Morris that
a wormhole could also be used to travel backwards in time.
Thorne tells the story of what happened then in his own book Black Holes and Time Warps (Picador). The key point is that space and time are treated on an essentially equal footing by Einstein’s
equations -- just as Wells anticipated. So a wormhole that takes a shortcut through spacetime can just as well link two different times as two different places. Indeed, any naturally occurring
wormhole would most probably link two different times. As word spread, other physicists who were interested in the exotic implications of pushing Einstein’s equations to extremes were encouraged to
go public with their own ideas once Thorne was seen to endorse the investigation of time travel, and the work led to the growth of a cottage industry of time travel investigations at the end of the
1980s and in to the 1990s.
The bottom line of all this work is that while it is hard to see how any civilization could build a wormhole time machine from scratch, it is much easier to envisage that a naturally occurring
wormhole might be adapted to suit the time traveling needs of a sufficiently advanced civilization. "Sufficiently advanced", that is, to be able to travel through space by conventional means, locate
black holes, and manipulate them with as much ease as we manipulate the fabric of the Earth itself in projects like the Channel Tunnel.
Even then, there’s one snag. It seems you can’t use a time machine to go back in time to before the time machine was built. You can go anywhere in the future, and come back to where you started, but
no further. Which rather neatly explains why no time travelers from our future have yet visited us -- because the time machine still hasn’t been invented!
So where does that leave the paradoxes, and common sense? There is a way out of all the difficulties, but you may not like it. It involves the other great theory of physics in the twentieth century,
quantum mechanics, and another favorite idea from science fiction, parallel worlds. These are the "alternative histories", in which, for example, the South won the American Civil War (as in Ward
Moore’s classic novel Bring the Jubilee), which are envisaged as in some sense lying "alongside" our version of reality.
According to one interpretation of quantum theory (and it has to be said that there are other interpretations), each of these parallel worlds is just as real as our own, and there is an alternative
history for every possible outcome of every decision ever made. Alternative histories branch out from decision points, bifurcating endlessly like the branches and twigs of an infinite tree. Bizarre
though it sounds, this idea is taken seriously by a handful of scientists (including David Deutsch, of the University of Oxford). And it certainly fixes all the time travel paradoxes.
On this picture, if you go back in time and prevent your own birth it doesn’t matter, because by that decision you create a new branch of reality, in which you were never born. When you go forward in
time, you move up the new branch and find that you never did exist, in that reality; but since you were still born and built your time machine in the reality next door, there is no paradox.
Hard to believe? Certainly. Counter to common sense? Of course. But the bottom line is that all of this bizarre behavior is at the very least permitted by the laws of physics, and in some cases is
required by those laws.
I wonder what Wells would have made of it all.
Go Back | {"url":"http://www.bibliotecapleyades.net/ciencia/time_travel/esp_ciencia_timetravel06.htm","timestamp":"2014-04-17T01:32:41Z","content_type":null,"content_length":"61774","record_id":"<urn:uuid:9a9d36ec-2174-4080-a247-af5b303ead9c>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stoichiometry Worksheet Mole Mole Answer Key
Stoichiometry Worksheet Mole Mole Answer Key PDF
Sponsored High Speed Downloads
Stoichiometry Worksheet #1 Answers 1. Given the following equation: 2 C 4H 10 + 13 O 2---> 8 CO 2 + 10 H 2O, show what the following molar ratios should be. a. C 4H ... When 1.20 mole of ammonia
reacts, the total number of moles of products formed is: a. 1.20 b. 1.50 c. 1.80 d. 3.00 e. 12.0 d. 3 ...
Stoichiometry Problems Worksheet – Answer Key 1. Assume a sample of 100 g, then there are ... 28.87 g F or 1.5196 mole F The element with the smallest number of moles is hydrogen so we calculate the
mole ratio of other elements to hydrogen. ... Homework I Answer Key
Grams A x 1 mole A x y mole B x g B = Gram B g A x mole A 1 ... Stoichiometry Practice Worksheet ... Answer the following stoichiometry-related questions: 12) ...
... then determine the answer to the question. Show your work, and observe all significant figures. ... Chemistry Worksheet NAME: _____ Stoichiometry/Mass/Mole Relationships Block: _____ Copyright ©
2007, Alan D. Crosby, Newton South High School ...
Worksheet: Mixed Problems—Mole/Mole Name_____ and Mole/Mass CHEMISTRY: A Study of Matter © 2004, GPB 8.14a KEY Answer ... Microsoft Word - 8-14a,b MIxed Problems--Mole-Mole and Mole-Mass wkst-Key.doc
Author: Brent White Created Date:
Honors Chemistry: Unit 6 Test – Stoichiometry – PRACTICE TEST ANSWER KEY Page 1 Question Answer More information 1. What is a symbolic representation of a ... so far from the unit handout and your
mole day project: (C-4.4) a) 1 mole of any elemental particle (atoms, molecules, formula ...
Molar Mass Worksheet – Answer Key Calculate the molar masses of the following chemicals: 1) Cl 2 ... Avogadro’s Number and the Mole 1) ... Mass and the Mole– Answer Key 1) How many moles are in 15
grams of lithium?
Mole Ratio Worksheet 1) Given this equation: N2 + 3 H2---> 2 NH3, write the following molar ratios: a) N2 ... H2S / S8 3) Answer the following questions for this equation: 2 H2 + O2---> 2 H2O a) What
is the H2 / H2O molar ratio? b) Suppose you had 20 moles of H2 on hand and plenty of O2, how ...
Worksheet: Mole/Mass Problems Name_____K CHEMISTRY: A Study of Matter © 2004, GPB 8.10a EY Answer each of the following questions using the equation provided.
Worksheet: Mixed Problems—Mole/Mole Name_____ and Mole/Mass CHEMISTRY: A Study of Matter © 2004, GPB 8.13 Answer each of the following questions using the equation provided. BE SURE TO BALANCE EACH
EQUATION BEFORE SOLVING ANY PROBLEMS. SHOW ALL ...
[ANSWER KEY] Chemistry I: Worksheet on All Kinds of Mole Problems SHOW ALL WORK!!! 1) What is the mass of 134 L of O2 gas at STP? 134 O2 x 1 mol O2 x 32.0g O2 = 191g O2
Mole to Grams, Grams to Moles ... Mole Calculation Worksheet – Answer Key What are the molecular weights of the following compounds? 1) NaOH 22.99 + 16.00 + 1.01 = 40.00 grams/mol 2) H 3 PO 4 3(1.01)
+ 30.97 + 4(16.00) = 98.00 grams ... Mole Calculation Worksheet
Gases and Stoichiometry: Ideal Gas Law: Worksheet 1: Answer Key Copyright 2005 ... Gases and Stoichiometry: Ideal Gas Law: Worksheet 2: Answer Key ... (A) the moles of each gas moles CO 2 = 11.0 g/44
g/mol = 0.250 mol moles O 2 = 48.0 g /32 g/mol = 1.50 mol (B) the mole fraction of each gas ...
Gases and Stoichiometry Answer Key Page 1 of 1 WORKSHEET 1 1) A 2) E 3) A 4) E 5) E 6) B 7) A 8) B 9) B 10) C 11 ... .00328 mole I 2, b) 40.4 L WORKSHEET 6 1a) .024 mol, b) 656 ml, c) 1.87 atm 2)
14.2 gm/L 3) 247.6 gm/mol 4a) 11.27gm, b).46 L 5a) 761.6 mm Hg, b) 2.22 gm 6 ) 49.9 L ...
Mole Calculation Practice Worksheet Answer the following questions: 1) How many moles are in 25 grams of water? 2) ... Mole Calculation Practice Worksheet Author: Ian Guch Subject: http://
www.chemfiesta.com Created Date: 6/17/2000 12:11:47 AM ...
Moles,’Molecules,’and’Grams’Worksheet’–’Answer’Key ...
Balancing Equations and Simple Stoichiometry-KEY Balance the following equations: 1) 1 ... the correct answer is circled. 14) What is the ... 15 How much of the excess reagent will be left over after
the reaction is complete? Title: KEY- Solutions for the Stoichiometry Practice Worksheet:
http://www.chemfiesta.com Solutions for the Stoichiometry Practice Worksheet: For both of the problems on this worksheet, the method for solving them can be
Mass to Mass Stoichiometry Problems In the following problems, calculate how much of the indicated product is made. ... Mass to Mass Stoichiometry Problems – Answer Key In the following problems,
calculate how much of the indicated product is made. Show all your work. 1) ...
per mole. What is the molecular formula of this compound? 8) ... Empirical and Molecular Formula Worksheet ANSWER KEY Write the empirical formula for the following compounds. 1) C6H6 C 3 H 3 6) C8H18
C 4 H 9 7) WO2 WO 2 8) C2H6O2 CH 3 O 9) X39Y13 X 3 Y
Stoichiometry: Mole-Mole Problems www.dorettaagostine.com/Mole-mole%20WS.pdf · PDF file Stoichiometry: Mole-Mole Problems 1. N 2 + 3H 2 → 2NH 3 ... Chemistry If8766 Worksheet Answer Key Instructional
Fair Chemistry If8766 Chemistry If8766 Instructional Fair PDF Chemistry Mole and Avogadro's ...
... is a key step in photochemical smog formation. 2NO + O 2 Æ 2NO 2. ... When using this worksheet, the “Mole Machine” worksheet, “The Double Y ... Students will learn how to use their knowledge of
stoichiometry and mole ratios in order to
Worksheet 1: Mole Relationships 2. Lab: ... answer and survive their stoichiometry experience. Unfortunately, these methods tend to allow ... This lab is begun following the introduction of
stoichiometry in Worksheet 1. In this lab the
... Stoichiometry Vocabulary: Avogadro’s number, balanced equation, cancel, ... The correct answer, of course, is E. In chemistry, the mole (mol) ... A mole of a substance has a mass in grams that is
equal to the molecular mass. For example, ...
Chapter 12 Stoichiometry 291 ... Key Equations • mole-mole relationship used in every stoichiometric calculation: ... Answer the following questions in the space provided. 20. How many liters of
carbon monoxide (at STP) are needed to react with 4.8 g of
1 Unit 7 STOICHIOMETRY 1. Introduction to Stoichiometry 2. MoleMole Stoichiometry 3. MassMole Stoichiometry 4. MassMass Stoichiometry
Topic Investigating stoichiometry ... Key concepts include a) Avogadro’s principle and molar volume; b) stoichiometric relationships; ... The mole is the basic counting unit used in chemistry and is
used to keep track of the amount of
Practice Stoichiometry Problems ... (Reminder: At STP, 1 mole of gas occupies 22.4 L) 12. A solution of tin (II) sulfate is mixed with 31.8 g silver nitrate in solution. ... Practice Stoichiometry
Problems – Chapter 9 Name _____KEY_____ Per _____ For each of the ...
Concept of mole/molar ratio Honors Chemistry: Unit 6 Test Stoichiometry ... www.triciajoy.com/subject/stoichiometry+chapter+12+test+answer+key Stoichiometry Practice Problems. Stoichiometry ...
Stoichiometry Practice Worksheet Answer Key AP Chemistry Stoichiometry Practice Test
Place your final answer in the FORMULA MASS COLUMN. ... CHEMISTRY Stoichiometry Practice [Mole-Mass] Answers: (1) 460 (2) 0.4 (3) 0.27 (4) 4 ... CHEMISTRY Stoichiometry Worksheet [Mass Mass] 9. In
the chemical reaction below, ...
Chem -ANSWER KEY WORKSHEET- STOICHIOMETRY SET A: (Time required, 1 hour) A compound with the formula, BxH2003, contains 36.14 % by mass oxygen. What is the value ot the ... 28.65 g/mole St Dr-to-QA
-2 St rru9Cb St' X — 33 270673 X 2 ergs' INQ X .
reported in kilojoules per mole of reactant. ... rxn. 1 S Answer the following questions. Show all work and report answers with units. 1. How much heat will be released when 6.44 g of sulfur reacts
with excess O 2 ... Enthalpy Stoichiometry Name _____ Chem Worksheet 16-3 Example ...
coefficient when doing stoichiometry calculations. For ... many students will reply “2 moles of hydrogen” and cite the coefficient as their reason. 2. The “mole ratio” is the ratio of coefficients in
a balanced chemical ... The answer is “e” (each would produce the same amount ...
Stoichiometry 359 Print ... Key Concepts • How are mole ratios used in chemical calculations? • What is the general procedure for solving a stoichiometric problem? Vocabulary ... is more difficult to
estimate an answer. However, because the molar
Which has a greater mass, a mole of silver atoms or a mole of gold atoms? Explain your answer. A mole of gold atoms has more mass since Au has a higher molar mass than silver. 78. Explain the
difference between atomic mass ... Chapter 11 Review Key pg. 3 114.
Worksheet: Mole/Mole Problems Name_____ CHEMISTRY: A Study of Matter © 2004, GPB 8.6 Answer each of the following questions using the equation provided.
Knowledge in mole concept is the key to relating mass, mole and number of particles ... prepared in order for you to understand the mole concept and stoichiometry. So, what are you waiting for ...
who gave the incorrect answer. 6.02 x 1023 Cu atoms 1 mole Cu Molar mass = 63.5 g/mol 6.02 ...
Stoichiometry Worksheet ... Include a key to indicate what the symbols in your representation mean. ... OK, let’s keep going…if you had 1 MOLE of C2H4, you would need 3 MOLES of oxygen gas. This
reaction would produce 2 MOLES of carbon dioxide and 2
1 mole Answer (in grams): 115 grams O 2 are needed to burn 2.4 moles H 2S. Modeling Chemistry 9 U7 obj v2.0 Chemistry Unit 7 Lab Copper-Silver Nitrate Reaction ... Stoichiometry Worksheet 2: Percent
Yield For each of the problems below: a.
Answer Key 1. 2! 80.0g x 1mole I2O5 333.861g x 1mole I2 1mole I2O5 x 253.8g 1mole I2 =60.8g IO5! 28.0g x 1mole CO 28.0104g x 1mole I 2 5 mole CO x 253.8g 1mole I
These activities could be used in conjunction with concentration calculation and stoichiometry. To close the yellow note, ... Answer questions in Analysis and Interpretations. 3. ... Introduction to
the Mole, and possibly Mole worksheet # 1 could be done in one
Answer the following questions for this equation: 2 H 2 + O 2 2 H 2 O ... 90% of a worksheet must be completed to earn credit for that worksheet! ... Which of the following would not be studied
within the topic of stoichiometry? a. the mole ratio of Al to Cl in the compound aluminum chloride
Chapter 12 Stoichiometry 289 ... mole ratios relating reactants to products. _____ 8. The coefficients in a balanced chemical equation tell the relative volumes of reactants and products, ... Answer
the following in the space provided.
moles to stoichiometry is the subject of another SourceBook module.) ... (The worksheet on “dozens” uses ... Molar mass over one mole. Then you calculate your answer Canceling like terms, you’ll
reach your goal. 3.
SHORT ANSWER Answer the following questions in the space provided. 1. ... Which of the following would not be studied within the topic of stoichiometry? (a) the mole ratio of Al to Cl in the compound
aluminum chloride (b) ...
... Gas Laws Worksheet Key 1. Determine the pressure in torr and kPa on a day when the pressure is .96atm. ... Determine the volume of 1 mole of a gas at STP. V = nRT/P = (1)(.0821)(273.15) / (1) ...
From the Stoichiometry of the balanced equation, since CO 2 and H 2O are both gases, ...
What You Need to Know for Moles & Stoichiometry ... • Define a Mole • Determine mole from formula mass ... Answer the questions below by circling the number of the correct response 1. If 46 g of X
combines with 16 g of Yto form Z, how much Zis
determine the answer. If you didn’t use this method, you would just divide by Avogadro’s number and then turn around and ... Stoichiometry Worksheet #2continued. Title: Microsoft Word -
StoichiometryWorksheet22009KEY.doc Author: smeer
CH302 Worksheet 1b Answers and Solution Key: ... What is the change of enthalpy associated with the combustion of one mole of ethylene? C2H4 + 3O2 Æ 2CO2 + 2H2O 1. 0 kJ 2. -1323 kJ correct ... So
accounting for the stoichiometry, ∆Hreaction = [2(-393) + 2(-241)] - [52 + 3(0)] ...
MiniLab Worksheet, p. 58 OL ChemLab Worksheet, p. 60 OL Study Guide, p. 74 OL ... Mole-to-Mole Stoichiometry One disadvantage of burning propane ( C 3H 8) is that carbon dioxide ... • stoichiometry
(p. 368) Key Concepts | {"url":"http://ebookilys.org/pdf/stoichiometry-worksheet-mole-mole-answer-key","timestamp":"2014-04-19T02:32:37Z","content_type":null,"content_length":"43443","record_id":"<urn:uuid:7a8eb4cc-3ad3-4f3c-b2b5-45b93aca1609>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00003-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chapter 18. Resonance
Soon after the mile-long Tacoma Narrows Bridge opened in July 1940, motorists began to notice its tendency to vibrate frighteningly in even a moderate wind. Nicknamed “Galloping Gertie,” the bridge
collapsed in a steady 42-mile-per-hour wind on November 7 of the same year. The following is an eyewitness report from a newspaper editor who found himself on the bridge as the vibrations approached
the breaking point.
“Just as I drove past the towers, the bridge began to sway violently from side to side. Before I realized it, the tilt became so violent that I lost control of the car... I jammed on the brakes and
got out, only to be thrown onto my face against the curb.
“Around me I could hear concrete cracking. I started to get my dog Tubby, but was thrown again before I could reach the car. The car itself began to slide from side to side of the roadway.
“On hands and knees most of the time, I crawled 500 yards or more to the towers... My breath was coming in gasps; my knees were raw and bleeding, my hands bruised and swollen from gripping the
concrete curb... Toward the last, I risked rising to my feet and running a few yards at a time... Safely back at the toll plaza, I saw the bridge in its final collapse and saw my car plunge into the
The ruins of the bridge formed an artificial reef, one of the world's largest. It was not replaced for ten years. The reason for its collapse was not substandard materials or construction, nor was
the bridge under-designed: the piers were hundred-foot blocks of concrete, the girders massive and made of carbon steel. The bridge was destroyed because of the physical phenomenon of resonance, the
same effect that allows an opera singer to break a wine glass with her voice and that lets you tune in the radio station you want. The replacement bridge, which has lasted half a century so far, was
built smarter, not stronger. The engineers learned their lesson and simply included some slight modifications to avoid the resonance phenomenon that spelled the doom of the first one.
18.1 Energy in vibrations
One way of describing the collapse of the bridge is that the bridge kept taking energy from the steadily blowing wind and building up more and more energetic vibrations. In this section, we discuss
the energy contained in a vibration, and in the subsequent sections we will move on to the loss of energy and the adding of energy to a vibrating system, all with the goal of understanding the
important phenomenon of resonance.
Going back to our standard example of a mass on a spring, we find that there are two forms of energy involved: the potential energy stored in the spring and the kinetic energy of the moving mass. We
may start the system in motion either by hitting the mass to put in kinetic energy or by pulling it to one side to put in potential energy. Either way, the subsequent behavior of the system is
identical. It trades energy back and forth between kinetic and potential energy. (We are still assuming there is no friction, so that no energy is converted to heat, and the system never runs down.)
The most important thing to understand about the energy content of vibrations is that the total energy is proportional to the square of the amplitude. Although the total energy is constant, it is
instructive to consider two specific moments in the motion of the mass on a spring as examples. When the mass is all the way to one side, at rest and ready to reverse directions, all its energy is
potential. We have already seen that the potential energy stored in a spring equals \((1/2)kx^2\), so the energy is proportional to the square of the amplitude. Now consider the moment when the mass
is passing through the equilibrium point at \(x=0\). At this point it has no potential energy, but it does have kinetic energy. The velocity is proportional to the amplitude of the motion, and the
kinetic energy, \((1/2)mv^2\), is proportional to the square of the velocity, so again we find that the energy is proportional to the square of the amplitude. The reason for singling out these two
points is merely instructive; proving that energy is proportional to \(A^2\) at any point would suffice to prove that energy is proportional to \(A^2\) in general, since the energy is constant.
Are these conclusions restricted to the mass-on-a-spring example? No. We have already seen that \(F=-kx\) is a valid approximation for any vibrating object, as long as the amplitude is small. We are
thus left with a very general conclusion: the energy of any vibration is approximately proportional to the square of the amplitude, provided that the amplitude is small.
Example 1: Water in a U-tube
If water is poured into a U-shaped tube as shown in the figure, it can undergo vibrations about equilibrium. The energy of such a vibration is most easily calculated by considering the “turnaround
point” when the water has stopped and is about to reverse directions. At this point, it has only potential energy and no kinetic energy, so by calculating its potential energy we can find the energy
of the vibration. This potential energy is the same as the work that would have to be done to take the water out of the right-hand side down to a depth \(A\) below the equilibrium level, raise it
through a height \(A\), and place it in the left-hand side. The weight of this chunk of water is proportional to \(A\), and so is the height through which it must be lifted, so the energy is
proportional to \(A^2\).
Example 2: The range of energies of sound waves
\(\triangleright\) The amplitude of vibration of your eardrum at the threshold of pain is about \(10^6\) times greater than the amplitude with which it vibrates in response to the softest sound you
can hear. How many times greater is the energy with which your ear has to cope for the painfully loud sound, compared to the soft sound?
\(\triangleright\) The amplitude is \(10^6\) times greater, and energy is proportional to the square of the amplitude, so the energy is greater by a factor of \(10^{12}\) . This is a phenomenally
large factor!
We are only studying vibrations right now, not waves, so we are not yet concerned with how a sound wave works, or how the energy gets to us through the air. Note that because of the huge range of
energies that our ear can sense, it would not be reasonable to have a sense of loudness that was additive. Consider, for instance, the following three levels of sound:
barely audible wind
quiet conversation ....... 10^5 times more energy than the wind
heavy metal concert ..... 10^12 times more energy than the wind
In terms of addition and subtraction, the difference between the wind and the quiet conversation is nothing compared to the difference between the quiet conversation and the heavy metal concert.
Evolution wanted our sense of hearing to be able to encompass all these sounds without collapsing the bottom of the scale so that anything softer than the crack of doom would sound the same. So
rather than making our sense of loudness additive, mother nature made it multiplicative. We sense the difference between the wind and the quiet conversation as spanning a range of about 5/12 as much
as the whole range from the wind to the heavy metal concert. Although a detailed discussion of the decibel scale is not relevant here, the basic point to note about the decibel scale is that it is
logarithmic. The zero of the decibel scale is close to the lower limit of human hearing, and adding 1 unit to the decibel measurement corresponds to multiplying the energy level (or actually the
power per unit area) by a certain factor.
18.2 Energy lost from vibrations
Until now, we have been making the relatively unrealistic assumption that a vibration would never die out. For a realistic mass on a spring, there will be friction, and the kinetic and potential
energy of the vibrations will therefore be gradually converted into heat. Similarly, a guitar string will slowly convert its kinetic and potential energy into sound. In all cases, the effect is to
“pinch” the sinusoidal \(x-t\) graph more and more with passing time. Friction is not necessarily bad in this context --- a musical instrument that never got rid of any of its energy would be
completely silent! The dissipation of the energy in a vibration is known as damping.
Most people who try to draw graphs like those shown on the left will tend to shrink their wiggles horizontally as well as vertically. Why is this wrong?
(answer in the back of the PDF version of the book)
In the graphs in figure b, I have not shown any point at which the damped vibration finally stops completely. Is this realistic? Yes and no. If energy is being lost due to friction between two solid
surfaces, then we expect the force of friction to be nearly independent of velocity. This constant friction force puts an upper limit on the total distance that the vibrating object can ever travel
without replenishing its energy, since work equals force times distance, and the object must stop doing work when its energy is all converted into heat. (The friction force does reverse directions
when the object turns around, but reversing the direction of the motion at the same time that we reverse the direction of the force makes it certain that the object is always doing positive work, not
negative work.)
Damping due to a constant friction force is not the only possibility however, or even the most common one. A pendulum may be damped mainly by air friction, which is approximately proportional to \(v^
2\), while other systems may exhibit friction forces that are proportional to \(v\). It turns out that friction proportional to \(v\) is the simplest case to analyze mathematically, and anyhow all
the important physical insights can be gained by studying this case.
If the friction force is proportional to \(v\), then as the vibrations die down, the frictional forces get weaker due to the lower speeds. The less energy is left in the system, the more miserly the
system becomes with giving away any more energy. Under these conditions, the vibrations theoretically never die out completely, and mathematically, the loss of energy from the system is exponential:
the system loses a fixed percentage of its energy per cycle. This is referred to as exponential decay.
A non-rigorous proof is as follows. The force of friction is proportional to \(v\), and \(v\) is proportional to how far the object travels in one cycle, so the frictional force is proportional to
amplitude. The amount of work done by friction is proportional to the force and to the distance traveled, so the work done in one cycle is proportional to the square of the amplitude. Since both the
work and the energy are proportional to \(A^2\), the amount of energy taken away by friction in one cycle is a fixed percentage of the amount of energy the system has.
Figure c shows an x-t graph for a strongly damped vibration, which loses half of its amplitude with every cycle. What fraction of the energy is lost in each cycle?
(answer in the back of the PDF version of the book)
It is customary to describe the amount of damping with a quantity called the quality factor, \(Q\), defined as the number of cycles required for the energy to fall off by a factor of 535. (The origin
of this obscure numerical factor is \(e^{2\pi}\), where \(e=2.71828...\) is the base of natural logarithms. Choosing this particular number causes some of our later equations to come out nice and
simple.) The terminology arises from the fact that friction is often considered a bad thing, so a mechanical device that can vibrate for many oscillations before it loses a significant fraction of
its energy would be considered a high-quality device.
Example 3: Exponential decay in a trumpet
\(\triangleright\) The vibrations of the air column inside a trumpet have a \(Q\) of about 10. This means that even after the trumpet player stops blowing, the note will keep sounding for a short
time. If the player suddenly stops blowing, how will the sound intensity 20 cycles later compare with the sound intensity while she was still blowing?
\(\triangleright\) The trumpet's \(Q\) is 10, so after 10 cycles the energy will have fallen off by a factor of 535. After another 10 cycles we lose another factor of 535, so the sound intensity is
reduced by a factor of \(535 \times 535=2.9\times10^5\).
The decay of a musical sound is part of what gives it its character, and a good musical instrument should have the right \(Q\), but the \(Q\) that is considered desirable is different for different
instruments. A guitar is meant to keep on sounding for a long time after a string has been plucked, and might have a \(Q\) of 1000 or 10000. One of the reasons why a cheap synthesizer sounds so bad
is that the sound suddenly cuts off after a key is released.
Example 4: \(Q\) of a stereo speaker
Stereo speakers are not supposed to reverberate or “ring” after an electrical signal that stops suddenly. After all, the recorded music was made by musicians who knew how to shape the decays of their
notes correctly. Adding a longer “tail” on every note would make it sound wrong. We therefore expect that stereo speaker will have a very low \(Q\), and indeed, most speakers are designed with a \(Q
\) of about 1. (Low-quality speakers with larger \(Q\) values are referred to as “boomy.”)
We will see later in the chapter that there are other reasons why a speaker should not have a high \(Q\).
18.3 Putting energy into vibrations
When pushing a child on a swing, you cannot just apply a constant force. A constant force will move the swing out to a certain angle, but will not allow the swing to start swinging. Nor can you give
short pushes at randomly chosen times. That type of random pushing would increase the child's kinetic energy whenever you happened to be pushing in the same direction as her motion, but it would
reduce her energy when your pushing happened to be in the opposite direction compared to her motion. To make her build up her energy, you need to make your pushes rhythmic, pushing at the same point
in each cycle. In other words, your force needs to form a repeating pattern with the same frequency as the normal frequency of vibration of the swing. Graph d/1 shows what the child's \(x-t\) graph
would look like as you gradually put more and more energy into her vibrations. A graph of your force versus time would probably look something like graph 2. It turns out, however, that it is much
simpler mathematically to consider a vibration with energy being pumped into it by a driving force that is itself a sine-wave, 3. A good example of this is your eardrum being driven by the force of a
sound wave.
Now we know realistically that the child on the swing will not keep increasing her energy forever, nor does your eardrum end up exploding because a continuing sound wave keeps pumping more and more
energy into it. In any realistic system, there is energy going out as well as in. As the vibrations increase in amplitude, there is an increase in the amount of energy taken away by damping with each
cycle. This occurs for two reasons. Work equals force times distance (or, more accurately, the area under the force-distance curve). As the amplitude of the vibrations increases, the damping force is
being applied over a longer distance. Furthermore, the damping force usually increases with velocity (we usually assume for simplicity that it is proportional to velocity), and this also serves to
increase the rate at which damping forces remove energy as the amplitude increases. Eventually (and small children and our eardrums are thankful for this!), the amplitude approaches a maximum value,
e, at which energy is removed by the damping force just as quickly as it is being put in by the driving force.
This process of approaching a maximum amplitude happens extremely quickly in many cases, e.g., the ear or a radio receiver, and we don't even notice that it took a millisecond or a microsecond for
the vibrations to “build up steam.” We are therefore mainly interested in predicting the behavior of the system once it has had enough time to reach essentially its maximum amplitude. This is known
as the steady-state behavior of a vibrating system.
Now comes the interesting part: what happens if the frequency of the driving force is mismatched to the frequency at which the system would naturally vibrate on its own? We all know that a radio
station doesn't have to be tuned in exactly, although there is only a small range over which a given station can be received. The designers of the radio had to make the range fairly small to make it
possible to eliminate unwanted stations that happened to be nearby in frequency, but it couldn't be too small or you wouldn't be able to adjust the knob accurately enough. (Even a digital radio can
be tuned to 88.0 MHz and still bring in a station at 88.1 MHz.) The ear also has some natural frequency of vibration, but in this case the range of frequencies to which it can respond is quite broad.
Evolution has made the ear's frequency response as broad as possible because it was to our ancestors' advantage to be able to hear everything from a low roars to a high-pitched shriek.
The remainder of this section develops four important facts about the response of a system to a driving force whose frequency is not necessarily the same as the system's natural frequency of
vibration. The style is approximate and intuitive, but proofs are given in section 18.4.
First, although we know the ear has a frequency --- about 4000 Hz --- at which it would vibrate naturally, it does not vibrate at 4000 Hz in response to a low-pitched 200 Hz tone. It always responds
at the frequency at which it is driven. Otherwise all pitches would sound like 4000 Hz to us. This is a general fact about driven vibrations:
(1) The steady-state response to a sinusoidal driving force occurs at the frequency of the force, not at the system's own natural frequency of vibration.
Now let's think about the amplitude of the steady-state response. Imagine that a child on a swing has a natural frequency of vibration of 1 Hz, but we are going to try to make her swing back and
forth at 3 Hz. We intuitively realize that quite a large force would be needed to achieve an amplitude of even 30 cm, i.e., the amplitude is less in proportion to the force. When we push at the
natural frequency of 1 Hz, we are essentially just pumping energy back into the system to compensate for the loss of energy due to the damping (friction) force. At 3 Hz, however, we are not just
counteracting friction. We are also providing an extra force to make the child's momentum reverse itself more rapidly than it would if gravity and the tension in the chain were the only forces
acting. It is as if we are artificially increasing the \(k\) of the swing, but this is wasted effort because we spend just as much time decelerating the child (taking energy out of the system) as
accelerating her (putting energy in).
Now imagine the case in which we drive the child at a very low frequency, say 0.02 Hz or about one vibration per minute. We are essentially just holding the child in position while very slowly
walking back and forth. Again we intuitively recognize that the amplitude will be very small in proportion to our driving force. Imagine how hard it would be to hold the child at our own head-level
when she is at the end of her swing! As in the too-fast 3 Hz case, we are spending most of our effort in artificially changing the \(k\) of the swing, but now rather than reinforcing the gravity and
tension forces we are working against them, effectively reducing \(k\). Only a very small part of our force goes into counteracting friction, and the rest is used in repetitively putting potential
energy in on the upswing and taking it back out on the downswing, without any long-term gain.
We can now generalize to make the following statement, which is true for all driven vibrations:
(2) A vibrating system resonates at its own natural frequency.^1 That is, the amplitude of the steady-state response is greatest in proportion to the amount of driving force when the driving force
matches the natural frequency of vibration.
Example 5: An opera singer breaking a wine glass
In order to break a wineglass by singing, an opera singer must first tap the glass to find its natural frequency of vibration, and then sing the same note back.
Example 6: Collapse of the Nimitz Freeway in an earthquake
I led off the chapter with the dramatic collapse of the Tacoma Narrows Bridge, mainly because it was well documented by a local physics professor, and an unknown person made a movie of the collapse.
The collapse of a section of the Nimitz Freeway in Oakland, CA, during a 1989 earthquake is however a simpler example to analyze.
An earthquake consists of many low-frequency vibrations that occur simultaneously, which is why it sounds like a rumble of indeterminate pitch rather than a low hum. The frequencies that we can hear
are not even the strongest ones; most of the energy is in the form of vibrations in the range of frequencies from about 1 Hz to 10 Hz.
Now all the structures we build are resting on geological layers of dirt, mud, sand, or rock. When an earthquake wave comes along, the topmost layer acts like a system with a certain natural
frequency of vibration, sort of like a cube of jello on a plate being shaken from side to side. The resonant frequency of the layer depends on how stiff it is and also on how deep it is. The
ill-fated section of the Nimitz freeway was built on a layer of mud, and analysis by geologist Susan E. Hough of the U.S. Geological Survey shows that the mud layer's resonance was centered on about
2.5 Hz, and had a width covering a range from about 1 Hz to 4 Hz.
When the earthquake wave came along with its mixture of frequencies, the mud responded strongly to those that were close to its own natural 2.5 Hz frequency. Unfortunately, an engineering analysis
after the quake showed that the overpass itself had a resonant frequency of 2.5 Hz as well! The mud responded strongly to the earthquake waves with frequencies close to 2.5 Hz, and the bridge
responded strongly to the 2.5 Hz vibrations of the mud, causing sections of it to collapse.
Example 7: Collapse of the Tacoma Narrows Bridge
Let's now examine the more conceptually difficult case of the Tacoma Narrows Bridge. The surprise here is that the wind was steady. If the wind was blowing at constant velocity, why did it shake the
bridge back and forth? The answer is a little complicated. Based on film footage and after-the-fact wind tunnel experiments, it appears that two different mechanisms were involved.
The first mechanism was the one responsible for the initial, relatively weak vibrations, and it involved resonance. As the wind moved over the bridge, it began acting like a kite or an airplane wing.
As shown in the figure, it established swirling patterns of air flow around itself, of the kind that you can see in a moving cloud of smoke. As one of these swirls moved off of the bridge, there was
an abrupt change in air pressure, which resulted in an up or down force on the bridge. We see something similar when a flag flaps in the wind, except that the flag's surface is usually vertical. This
back-and-forth sequence of forces is exactly the kind of periodic driving force that would excite a resonance. The faster the wind, the more quickly the swirls would get across the bridge, and the
higher the frequency of the driving force would be. At just the right velocity, the frequency would be the right one to excite the resonance. The wind-tunnel models, however, show that the pattern of
vibration of the bridge excited by this mechanism would have been a different one than the one that finally destroyed the bridge.
The bridge was probably destroyed by a different mechanism, in which its vibrations at its own natural frequency of 0.2 Hz set up an alternating pattern of wind gusts in the air immediately around
it, which then increased the amplitude of the bridge's vibrations. This vicious cycle fed upon itself, increasing the amplitude of the vibrations until the bridge finally collapsed.
As long as we're on the subject of collapsing bridges, it is worth bringing up the reports of bridges falling down when soldiers marching over them happened to step in rhythm with the bridge's
natural frequency of oscillation. This is supposed to have happened in 1831 in Manchester, England, and again in 1849 in Anjou, France. Many modern engineers and scientists, however, are suspicious
of the analysis of these reports. It is possible that the collapses had more to do with poor construction and overloading than with resonance. The Nimitz Freeway and Tacoma Narrows Bridge are far
better documented, and occurred in an era when engineers' abilities to analyze the vibrations of a complex structure were much more advanced.
Example 8: Emission and absorption of light waves by atoms
In a very thin gas, the atoms are sufficiently far apart that they can act as individual vibrating systems. Although the vibrations are of a very strange and abstract type described by the theory of
quantum mechanics, they nevertheless obey the same basic rules as ordinary mechanical vibrations. When a thin gas made of a certain element is heated, it emits light waves with certain specific
frequencies, which are like a fingerprint of that element. As with all other vibrations, these atomic vibrations respond most strongly to a driving force that matches their own natural frequency.
Thus if we have a relatively cold gas with light waves of various frequencies passing through it, the gas will absorb light at precisely those frequencies at which it would emit light if heated.
(3) When a system is driven at resonance, the steady-state vibrations have an amplitude that is proportional to \(Q\).
This is fairly intuitive. The steady-state behavior is an equilibrium between energy input from the driving force and energy loss due to damping. A low-\(Q\) oscillator, i.e., one with strong
damping, dumps its energy faster, resulting in lower-amplitude steady-state motion.
If an opera singer is shopping for a wine glass that she can impress her friends by breaking, what should she look for?
(answer in the back of the PDF version of the book)
Example 9: Piano strings ringing in sympathy with a sung note
\(\triangleright\) A sufficiently loud musical note sung near a piano with the lid raised can cause the corresponding strings in the piano to vibrate. (A piano has a set of three strings for each
note, all struck by the same hammer.) Why would this trick be unlikely to work with a violin?
\(\triangleright\) If you have heard the sound of a violin being plucked (the pizzicato effect), you know that the note dies away very quickly. In other words, a violin's \(Q\) is much lower than a
piano's. This means that its resonances are much weaker in amplitude.
Our fourth and final fact about resonance is perhaps the most surprising. It gives us a way to determine numerically how wide a range of driving frequencies will produce a strong response. As shown
in the graph, resonances do not suddenly fall off to zero outside a certain frequency range. It is usual to describe the width of a resonance by its full width at half-maximum (FWHM) as illustrated
in figure g.
(4) The FWHM of a resonance is related to its \(Q\) and its resonant frequency \(f_{res}\) by the equation
\[\begin{equation*} \text{FWHM} = \frac{f_{res}}{Q} . \end{equation*}\]
(This equation is only a good approximation when \(Q\) is large.)
Why? It is not immediately obvious that there should be any logical relationship between \(Q\) and the FWHM. Here's the idea. As we have seen already, the reason why the response of an oscillator is
smaller away from resonance is that much of the driving force is being used to make the system act as if it had a different \(k\). Roughly speaking, the half-maximum points on the graph correspond to
the places where the amount of the driving force being wasted in this way is the same as the amount of driving force being used productively to replace the energy being dumped out by the damping
force. If the damping force is strong, then a large amount of force is needed to counteract it, and we can waste quite a bit of driving force on changing \(k\) before it becomes comparable to the
damping force. If, on the other hand, the damping force is weak, then even a small amount of force being wasted on changing \(k\) will become significant in proportion, and we cannot get very far
from the resonant frequency before the two are comparable.
Example 10: Changing the pitch of a wind instrument
\(\triangleright\) A saxophone player normally selects which note to play by choosing a certain fingering, which gives the saxophone a certain resonant frequency. The musician can also, however,
change the pitch significantly by altering the tightness of her lips. This corresponds to driving the horn slightly off of resonance. If the pitch can be altered by about 5% up or down (about one
musical half-step) without too much effort, roughly what is the \(Q\) of a saxophone?
\(\triangleright\) Five percent is the width on one side of the resonance, so the full width is about 10%, FWHM / \(f_{res}=0.1\). This implies a \(Q\) of about 10, i.e., once the musician stops
blowing, the horn will continue sounding for about 10 cycles before its energy falls off by a factor of 535. (Blues and jazz saxophone players will typically choose a mouthpiece that has a low \(Q\),
so that they can produce the bluesy pitch-slides typical of their style. “Legit,” i.e., classically oriented players, use a higher-\(Q\) setup because their style only calls for enough pitch
variation to produce a vibrato.)
Example 11: Decay of a saxophone tone
\(\triangleright\) If a typical saxophone setup has a \(Q\) of about 10, how long will it take for a 100-Hz tone played on a baritone saxophone to die down by a factor of 535 in energy, after the
player suddenly stops blowing?
\(\triangleright\) A \(Q\) of 10 means that it takes 10 cycles for the vibrations to die down in energy by a factor of 535. Ten cycles at a frequency of 100 Hz would correspond to a time of 0.1
seconds, which is not very long. This is why a saxophone note doesn't “ring” like a note played on a piano or an electric guitar.
Example 12: \(Q\) of a radio receiver
\(\triangleright\) A radio receiver used in the FM band needs to be tuned in to within about 0.1 MHz for signals at about 100 MHz. What is its \(Q\)?
\(\triangleright\) \(Q=f_{res}/\text{FWHM}=1000\). This is an extremely high \(Q\) compared to most mechanical systems.
Example 13: \(Q\) of a stereo speaker
We have already given one reason why a stereo speaker should have a low \(Q\): otherwise it would continue ringing after the end of the musical note on the recording. The second reason is that we
want it to be able to respond to a large range of frequencies.
Example 14: Nuclear magnetic resonance
If you have ever played with a magnetic compass, you have undoubtedly noticed that if you shake it, it takes some time to settle down,
/1. As it settles down, it acts like a damped oscillator of the type we have been discussing. The compass needle is simply a small magnet, and the planet earth is a big magnet. The magnetic forces
between them tend to bring the needle to an equilibrium position in which it lines up with the planet-earth-magnet.
Essentially the same physics lies behind the technique called Nuclear Magnetic Resonance (NMR). NMR is a technique used to deduce the molecular structure of unknown chemical substances, and it is
also used for making medical images of the inside of people's bodies. If you ever have an NMR scan, they will actually tell you you are undergoing “magnetic resonance imaging” or “MRI,” because
people are scared of the word “nuclear.” In fact, the nuclei being referred to are simply the non-radioactive nuclei of atoms found naturally in your body.
Here's how NMR works. Your body contains large numbers of hydrogen atoms, each consisting of a small, lightweight electron orbiting around a large, heavy proton. That is, the nucleus of a hydrogen
atom is just one proton. A proton is always spinning on its own axis, and the combination of its spin and its electrical charge cause it to behave like a tiny magnet. The principle is identical to
that of an electromagnet, which consists of a coil of wire through which electrical charges pass; the circling motion of the charges in the coil of wire makes it magnetic, and in the same way, the
circling motion of the proton's charge makes it magnetic.
Now a proton in one of your body's hydrogen atoms finds itself surrounded by many other whirling, electrically charged particles: its own electron, plus the electrons and nuclei of the other nearby
atoms. These neighbors act like magnets, and exert magnetic forces on the proton, h/2. The \(k\) of the vibrating proton is simply a measure of the total strength of these magnetic forces. Depending
on the structure of the molecule in which the hydrogen atom finds itself, there will be a particular set of magnetic forces acting on the proton and a particular value of \(k\). The NMR apparatus
bombards the sample with radio waves, and if the frequency of the radio waves matches the resonant frequency of the proton, the proton will absorb radio-wave energy strongly and oscillate wildly. Its
vibrations are damped not by friction, because there is no friction inside an atom, but by the reemission of radio waves.
By working backward through this chain of reasoning, one can determine the geometric arrangement of the hydrogen atom's neighboring atoms. It is also possible to locate atoms in space, allowing
medical images to be made.
Finally, it should be noted that the behavior of the proton cannot be described entirely correctly by Newtonian physics. Its vibrations are of the strange and spooky kind described by the laws of
quantum mechanics. It is impressive, however, that the few simple ideas we have learned about resonance can still be applied successfully to describe many aspects of this exotic system.
Discussion Question
Nikola Tesla, one of the inventors of radio and an archetypical mad scientist, told a credulous reporter in 1912 the following story about an application of resonance. He built an electric vibrator
that fit in his pocket, and attached it to one of the steel beams of a building that was under construction in New York. Although the article in which he was quoted didn't say so, he presumably
claimed to have tuned it to the resonant frequency of the building. “In a few minutes, I could feel the beam trembling. Gradually the trembling increased in intensity and extended throughout the
whole great mass of steel. Finally, the structure began to creak and weave, and the steelworkers came to the ground panic-stricken, believing that there had been an earthquake. ... [If] I had kept on
ten minutes more, I could have laid that building flat in the street.” Is this physically plausible?
18.4 Proofs (optional)
Our first goal is to predict the amplitude of the steady-state vibrations as a function of the frequency of the driving force and the amplitude of the driving force. With that equation in hand, we
will then prove statements 2, 3, and 4 from section 18.3. We assume without proof statement 1, that the steady-state motion occurs at the same frequency as the driving force.
As with the proof in chapter 17, we make use of the fact that a sinusoidal vibration is the same as the projection of circular motion onto a line. We visualize the system shown in figures k-m, in
which the mass swings in a circle on the end of a spring. The spring does not actually change its length at all, but it appears to from the flattened perspective of a person viewing the system
edge-on. The radius of the circle is the amplitude, \(A\), of the vibrations as seen edge-on. The damping force can be imagined as a backward drag force supplied by some fluid through which the mass
is moving. As usual, we assume that the damping is proportional to velocity, and we use the symbol \(b\) for the proportionality constant, \(|F_d|=bv\). The driving force, represented by a hand
towing the mass with a string, has a tangential component \(|F_t|\) which counteracts the damping force, \(|F_t|=|F_d|\), and a radial component \(F_r\) which works either with or against the
spring's force, depending on whether we are driving the system above or below its resonant frequency.
The speed of the rotating mass is the circumference of the circle divided by the period, \(v=2\pi A/T\), its acceleration (which is directly inward) is \(a=v^2/r\), and Newton's second law gives \(a=
F/m=(kA+F_r)/m\). We write \(f_\text{o}\) for \(\frac{1}{2\pi}\sqrt{k/m}\). Straightforward algebra yields
\[\begin{equation*} \frac{F_r}{F_t} = \frac{2\pi m}{bf}\left(f^2-f_\text{o}^2\right) . \end{equation*}\]
This is the ratio of the wasted force to the useful force, and we see that it becomes zero when the system is driven at resonance.
The amplitude of the vibrations can be found by attacking the equation \(|F_t|=bv=2\pi bAf\), which gives
\[\begin{equation*} A = \frac{|F_t|}{2\pi bf} . (2) \end{equation*}\]
However, we wish to know the amplitude in terms of \(|\mathbf{F}|\), not \(|F_t|\). From now on, let's drop the cumbersome magnitude symbols. With the Pythagorean theorem, it is easily proved that
\[\begin{equation*} F_t = \frac{F}{\sqrt{1+\left(\frac{F_r}{F_t}\right)^2}} , (3) \end{equation*}\]
and equations 1-3 can then be combined to give the final result
A = \frac{F}{2\pi\sqrt{4\pi^2m^2\left(f^2-f_\text{o}^2\right)^2+b^2f^2}} . \end{equation*}\]
Statement 2: maximum amplitude at resonance
Equation [] makes it plausible that the amplitude is maximized when the system is driven at close to its resonant frequency. At \(f=f_\text{o}\), the first term inside the square root vanishes, and
this makes the denominator as small as possible, causing the amplitude to be as big as possible. (Actually this is only approximately true, because it is possible to make \(A\) a little bigger by
decreasing \(f\) a little below \(f_\text{o}\), which makes the second term smaller. This technical issue is addressed in homework problem 3 on page 473.)
Statement 3: amplitude at resonance proportional to \(Q\)
Equation [] shows that the amplitude at resonance is proportional to \(1/b\), and the \(Q\) of the system is inversely proportional to \(b\), so the amplitude at resonance is proportional to \(Q\).
Statement 4: FWHM related to \(Q\)
We will satisfy ourselves by proving only the proportionality \(FWHM\propto f_\text{o}/Q\), not the actual equation \(FWHM=f_\text{o}/Q\). The energy is proportional to \(A^2\), i.e., to the inverse
of the quantity inside the square root in equation []. At resonance, the first term inside the square root vanishes, and the half-maximum points occur at frequencies for which the whole quantity
inside the square root is double its value at resonance, i.e., when the two terms are equal. At the half-maximum points, we have
\[\begin{align*} f^2-f_\text{o}^2 &= \left(f_\text{o} \pm \frac{\text{FWHM}}{2}\right)^2 - f_\text{o}^2\\ &= \pm f_\text{o}\cdot \text{FWHM} + \frac{1}{4}\text{FWHM}^2 \end{align*}\]
If we assume that the width of the resonance is small compared to the resonant frequency, then the \(\text{FWHM}^2\) term is negligible compared to the \(f_\text{o}\cdot \text{FWHM}\) term, and
setting the terms in equation 4 equal to each other gives
\[\begin{equation*} 4\pi^2m^2\left(f_\text{o}\text{FWHM}\right)^2 = b^2f^2 . \end{equation*}\]
We are assuming that the width of the resonance is small compared to the resonant frequency, so \(f\) and \(f_\text{o}\) can be taken as synonyms. Thus,
\[\begin{equation*} \text{FWHM} = \frac{b}{2\pi m} . \end{equation*}\]
We wish to connect this to \(Q\), which can be interpreted as the energy of the free (undriven) vibrations divided by the work done by damping in one cycle. The former equals \(kA^2/2\), and the
latter is proportional to the force, \(bv\propto bAf_\text{o}\), multiplied by the distance traveled, \(A\). (This is only a proportionality, not an equation, since the force is not constant.) We
therefore find that \(Q\) is proportional to \(k/bf_\text{o}\). The equation for the FWHM can then be restated as a proportionality \(\text{FWHM}\propto k/Qf_\text{o}m\propto f_\text{o}/Q\).
damping — the dissipation of a vibration's energy into heat energy, or the frictional force that causes the loss of energy
quality factor — the number of oscillations required for a system's energy to fall off by a factor of 535 due to damping
driving force — an external force that pumps energy into a vibrating system
resonance — the tendency of a vibrating system to respond most strongly to a driving force whose frequency is close to its own natural frequency of vibration
steady state — the behavior of a vibrating system after it has had plenty of time to settle into a steady response to a driving force
\(Q\) — the quality factor
\notationitem{\(f_\text{o}\)}{the natural (resonant) frequency of a vibrating system, i.e., the frequency at which it would vibrate if it was simply kicked and left alone}
\(f\) — the frequency at which the system actually vibrates, which in the case of a driven system is equal to the frequency of the driving force, not the natural frequency
The energy of a vibration is always proportional to the square of the amplitude, assuming the amplitude is small. Energy is lost from a vibrating system for various reasons such as the conversion to
heat via friction or the emission of sound. This effect, called damping, will cause the vibrations to decay exponentially unless energy is pumped into the system to replace the loss. A driving force
that pumps energy into the system may drive the system at its own natural frequency or at some other frequency. When a vibrating system is driven by an external force, we are usually interested in
its steady-state behavior, i.e., its behavior after it has had time to settle into a steady response to a driving force. In the steady state, the same amount of energy is pumped into the system
during each cycle as is lost to damping during the same period.
The following are four important facts about a vibrating system being driven by an external force:
(1) The steady-state response to a sinusoidal driving force occurs at the frequency of the force, not at the system's own natural frequency of vibration.
(2) A vibrating system resonates at its own natural frequency. That is, the amplitude of the steady-state response is greatest in proportion to the amount of driving force when the driving force
matches the natural frequency of vibration.
(3) When a system is driven at resonance, the steady-state vibrations have an amplitude that is proportional to \(Q\).
(4) The FWHM of a resonance is related to its \(Q\) and its resonant frequency \(f_\text{o}\) by the equation
\[\begin{equation*} \text{FWHM} = \frac{f_\text{o}}{Q}. \end{equation*}\]
(This equation is only a good approximation when \(Q\) is large.)
Homework Problems
1. If one stereo system is capable of producing 20 watts of sound power and another can put out 50 watts, how many times greater is the amplitude of the sound wave that can be created by the more
powerful system? (Assume they are playing the same music.)
2. Many fish have an organ known as a swim bladder, an air-filled cavity whose main purpose is to control the fish's buoyancy an allow it to keep from rising or sinking without having to use its
muscles. In some fish, however, the swim bladder (or a small extension of it) is linked to the ear and serves the additional purpose of amplifying sound waves. For a typical fish having such an
anatomy, the bladder has a resonant frequency of 300 Hz, the bladder's \(Q\) is 3, and the maximum amplification is about a factor of 100 in energy. Over what range of frequencies would the
amplification be at least a factor of 50?
3. As noted in section 18.4, it is only approximately true that the amplitude has its maximum at \(f=(1/2\pi)\sqrt{k/m}\). Being more careful, we should actually define two different symbols, \(f_0=
(1/2\pi)\sqrt{k/m}\) and \(f_\text{o}\) for the slightly different frequency at which the amplitude is a maximum, i.e., the actual resonant frequency. In this notation, the amplitude as a function of
frequency is
\[\begin{equation*} A = \frac{F}{2\pi\sqrt{4\pi^2m^2\left(f^2-f_0^2\right)^2+b^2f^2}} . \end{equation*}\]
Show that the maximum occurs not at \(f_o\) but rather at the frequency
\[\begin{equation*} f = \sqrt{f_0^2-\frac{b^2}{8\pi^2m^2}} = \sqrt{f_0^2-\frac{1}{2}\text{FWHM}^2} \end{equation*}\]
Hint: Finding the frequency that minimizes the quantity inside the square root is equivalent to, but much easier than, finding the frequency that maximizes the amplitude. ∫
4. (a) Let \(W\) be the amount of work done by friction in the first cycle of oscillation, i.e., the amount of energy lost to heat. Find the fraction of the original energy \(E\) that remains in the
oscillations after \(n\) cycles of motion.
(b) From this, prove the equation
\[\begin{equation*} \left(1-\frac{W}{E}\right)^Q = e^{-2\pi} \end{equation*}\]
(recalling that the number 535 in the definition of \(Q\) is \(e^{2\pi}\)).
(c) Use this to prove the approximation \(1/Q\approx(1/2\pi )W/E\). (Hint: Use the approximation \(\ln (1+x)\approx x\), which is valid for small values of \(x\).)
5. The goal of this problem is to refine the proportionality \(\text{FWHM} \propto f_{res}/Q\) into the equation \(\text{FWHM}=f_{res}/Q\), i.e., to prove that the constant of proportionality equals
(a) Show that the work done by a damping force \(F=-bv\) over one cycle of steady-state motion equals \(W_{damp}=-2\pi ^2bfA^2\). Hint: It is less confusing to calculate the work done over half a
cycle, from \(x=-A\) to \(x=+A\), and then double it.
(b) Show that the fraction of the undriven oscillator's energy lost to damping over one cycle is \(|W_{damp}|/ E=4\pi ^2bf/k\).
(c) Use the previous result, combined with the result of problem 4, to prove that \(Q\) equals \(k/2\pi bf\) .
(d) Combine the preceding result for \(Q\) with the equation \(\text{FWHM}=b/2\pi m\) from section 18.4 to prove the equation \(\text{FWHM}=f_{res}/Q\).
6. (a) We observe that the amplitude of a certain free oscillation decreases from \(A_\text{o}\) to \(A_\text{o}/Z\) after \(n\) oscillations. Find its \(Q\).(answer check available at
(b) The figure is from Shape memory in Spider draglines, Emile, Le Floch, and Vollrath, Nature 440:621 (2006). Panel 1 shows an electron microscope's image of a thread of spider silk. In 2, a spider
is hanging from such a thread. From an evolutionary point of view, it's probably a bad thing for the spider if it twists back and forth while hanging like this. (We're referring to a back-and-forth
rotation about the axis of the thread, not a swinging motion like a pendulum.) The authors speculate that such a vibration could make the spider easier for predators to see, and it also seems to me
that it would be a bad thing just because the spider wouldn't be able to control its orientation and do what it was trying to do. Panel 3 shows a graph of such an oscillation, which the authors
measured using a video camera and a computer, with a 0.1 g mass hung from it in place of a spider. Compared to human-made fibers such as kevlar or copper wire, the spider thread has an unusual set of
1. It has a low \(Q\), so the vibrations damp out quickly.
2. It doesn't become brittle with repeated twisting as a copper wire would.
3. When twisted, it tends to settle in to a new equilibrium angle, rather than insisting on returning to its original angle. You can see this in panel 2, because although the experimenters initially
twisted the wire by 35 degrees, the thread only performed oscillations with an amplitude much smaller than \(\pm35\) degrees, settling down to a new equilibrium at 27 degrees.
4. Over much longer time scales (hours), the thread eventually resets itself to its original equilbrium angle (shown as zero degrees on the graph). (The graph reproduced here only shows the motion
over a much shorter time scale.) Some human-made materials have this “memory” property as well, but they typically need to be heated in order to make them go back to their original shapes.
Focusing on property number 1, estimate the \(Q\) of spider silk from the graph.(answer check available at lightandmatter.com)
\begin{handson}{}{Resonance}{\onecolumn} 1. Compare the oscillator's energies at A, B, C, and D.
2. Compare the Q values of the two oscillators.
3. Match the x-t graphs in #2 with the amplitude-frequency graphs below. | {"url":"http://www.lightandmatter.com/html_books/lm/ch18/ch18.html","timestamp":"2014-04-16T16:23:51Z","content_type":null,"content_length":"63797","record_id":"<urn:uuid:1f7d865f-5678-42e8-82eb-d808ec894ff9>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00417-ip-10-147-4-33.ec2.internal.warc.gz"} |
Consider An Electron In A Linear Triatomic Molecule ... | Chegg.com
Consider an electron in a linear triatomic molecule formed by three equidistant atoms. We use|fA>, |fB>, |fC> to denote three orthonormal states of this electron, corresponding respectively to
localizing the electron at the position of the three atoms A,B and C. The action of the Hamiltonian in this basis is described by
ˆH|fA> = E0|fA> - a|fB>
ˆH|fB> = E0|fB> - a|fA> - a|fC>
ˆH|fC> = E0|fC> - a|fB>
where a is a real, positive constant.
(a) Calculate the energies and stationary states of the Hamiltonian ˆH .
(b) If the electron is localized on atom A at time t = 0 when is the first time that the electron is localized on atom C?
(c) For this initial state, is there ever a time when the electron is localized on atom B?
(d) Let ˆD be an observable whose eigenstates are {|fA>, |fB>, |fC>} with respective eigenvalues -d, 0,+d. If the electron is localized on atom A at t = 0 and ˆD is measured at time t; what values
can be found and with what probabilities?
(e) When the initial state of the system is arbitrary, what are the frequencies that appear in <ˆD>(t)? Give a physical interpretation of ˆD. What are the frequencies of electromagnetic waves that
can be absorbed or emitted by the molecule? | {"url":"http://www.chegg.com/homework-help/questions-and-answers/consider-electron-linear-triatomic-molecule-formed-three-equidistant-atoms-use-fa-fb-fc-de-q945697","timestamp":"2014-04-19T17:50:11Z","content_type":null,"content_length":"20918","record_id":"<urn:uuid:cb232077-0564-436a-9003-ad4c7d2bf4c1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00270-ip-10-147-4-33.ec2.internal.warc.gz"} |
Travel Advice
Travel Advice - How much does this cost in dollars?
Converting the value of a foreign currency into US dollars is easy if you are carrying a calculator, but most of us travel without one.
• If you do not have a calculator handy, set a counting number that approximates the conversion rate to perform your currency calculation.
□ For example, let’s assume that the conversion rate for one Euro is $1.197 (i.e. how much you would have to pay in U.S. dollars to buy one Euro).
□ To create a counting number you would round $1.197 to $1.20 to calculate costs.
□ Conversion rates can be found at currency exchanges, banks or in local and international newspapers.
A vase in a country shop in England is priced at ₤15 (English Pound Sterling). If the exchange rate was $1.475 dollars to buy £1 Pound Sterling, use of a calculator would help you to compute the cost
of the vase in dollars as $22.12 [15 x $1.475].
• If you established a counting number to approximate the cost, you would estimate the cost to be 15 x 1.50 or approximately $22.50.
• Remember, you are merely using the method to understand approximate cost.
• The actual cost will be different since it will use the actual exchange rate and added taxes.
• In addition, if you purchase items using a credit card, your credit card issuer will assign a conversion rate based on the date and time the transactions was credited to the vendor’s account.
□ It has been our experience that the conversion rates set by Visa and MasterCard are quite fair, but these companies have also added a new currency conversion fee that you will see only when
you receive your monthly bill, if you have used your card abroad.
Another way of calculating cost conversion is to consider (using the same details provided in the example above) that one dollar is the equivalent of a fraction of a Pound Sterling (1/1.475 = 0.678
of a Pound Sterling). If the vase was priced at £15, then the cost in dollars would be calculated as 15 divided by 0.678 or $22.12.
If you are traveling to Europe in countries participating in the Economic Union, the changeover to the Euro made currency conversion much easier, since you only need to know the relationship between
the Dollar and the Euro (see our article on the Euro).
If you need to find information about Destinations or other Things Travelers Need To Know, try Googling ThereArePlaces.
Custom Search
Top of page Money Home | {"url":"http://www.thereareplaces.com/infgdes/money/currconv.htm","timestamp":"2014-04-19T11:57:43Z","content_type":null,"content_length":"14806","record_id":"<urn:uuid:7c3edbd8-b6c4-4925-9cb6-c0c317640f7c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00323-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
finding critical points with polynomial functions (attached)
• 8 months ago
• 8 months ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/52006dd4e4b0cc46c14b79a1","timestamp":"2014-04-17T03:55:27Z","content_type":null,"content_length":"75790","record_id":"<urn:uuid:beddfee4-39c6-4b0c-b287-bd9008c922e2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
The hydrodynamics of dolphin drafting
Drafting in cetaceans is defined as the transfer of forces between individuals without actual physical contact between them. This behavior has long been surmised to explain how young dolphin calves
keep up with their rapidly moving mothers. It has recently been observed that a significant number of calves become permanently separated from their mothers during chases by tuna vessels. A study of
the hydrodynamics of drafting, initiated in the hope of understanding the mechanisms causing the separation of mothers and calves during fishing-related activities, is reported here.
Quantitative results are shown for the forces and moments around a pair of unequally sized dolphin-like slender bodies. These include two major effects. First, the so-called Bernoulli suction, which
stems from the fact that the local pressure drops in areas of high speed, results in an attractive force between mother and calf. Second is the displacement effect, in which the motion of the mother
causes the water in front to move forwards and radially outwards, and water behind the body to move forwards to replace the animal's mass. Thus, the calf can gain a 'free ride' in the forward-moving
areas. Utilizing these effects, the neonate can gain up to 90% of the thrust needed to move alongside the mother at speeds of up to 2.4 m/sec. A comparison with observations of eastern spinner
dolphins (Stenella longirostris) is presented, showing savings of up to 60% in the thrust that calves require if they are to keep up with their mothers.
A theoretical analysis, backed by observations of free-swimming dolphin schools, indicates that hydrodynamic interactions with mothers play an important role in enabling dolphin calves to keep up
with rapidly moving adult school members.
The problem of separation of mother-calf pairs in chase situations has become a serious concern in fishing-related cetacean mortality, in particular in the eastern tropical Pacific Ocean where tuna
are fished with a purse-seine method, in which schools of dolphins are encircled with a fishing net to capture the tuna concentrated below [1]. The phenomenon of separation has been linked to the
escape response of the mother, and has been described in detail in a pair of recent reports [2,3]. The present study examines the hydrodynamics of dolphin mother-calf interactions, with the purpose
of identifying possible reasons for loss of contact between mother and calf during chases.
The hydrodynamics of the drafting situation is extremely complex, as it deals with unsteady motions of two flexible bodies of different size, moving, while changing shape, at varying speeds and
distances from the water surface and from each other, and periodically piercing the surface. In addition, there are several different preferred drafting positions for the calf [2], which appear at
different ages and different modes of motion. These include the newborn being supported high on the mother's flank, within a few centimeters of her body, immediately following birth; this is
sometimes called the 'neonate position'. It has been observed that neonates cannot control buoyancy well [2], and they tend to 'pop like corks' to the surface. Within a few hours the calf is moved
down to a more lateral position (the 'echelon position'). This involves positioning of the infant within 10 cm of the mother's flank, with the neonate's dorsal fin a little anterior to, level with,
or slightly behind, the mother's dorsal fin, and the neonate's body stationed vertically somewhere between the mother's upper body and mid-body. The echelon position is characterized by the infant
making relatively few tail fluke movements as it 'drafts' alongside its mother, indicating a hydrodynamic advantage.
Older calves are seen more often in the 'infant position', which involves swimming under the mother's tail section with the neonate's head (or melon) lightly touching the mother's abdomen. Once they
are several months old, calves swim in the echelon position about 40% of the time and swim in the infant position about 30% of the time. Gubbins et al. [4] report that at the age of 12 months, calves
still spend about 50% of their time in close proximity to their mothers, with probabilities of 30% and 35% of finding a calf in the echelon or the infant position, respectively. Calves have
apparently outgrown their dependence upon the mother by the end of the third year, and spend relatively little time in the echelon position, mainly swimming side by side with their mothers as adults
There is very little quantitative information on drafting in dolphins, and much of the extant data is qualitative (for example, 'close proximity' is not reported in actual distances in the different
positions). Such data as do exist will be briefly reviewed here, and mentioned again when specifically used in the calculations later in this article.
The experimental comparisons used here are based on data for eastern spinner dolphins, Stenella longinostris, for which drafting situations (such as those discussed below and shown in Figure 1) were
documented by aerial photography [5]; see Materials and methods section for further details. Physical data for the size and mass of dolphin calves and adults of several species are reviewed by
Edwards [2], showing that the shape does not change significantly during growth, and thus the body shape from beak to caudal peduncle can be well approximated by a body of rotation of ellipsoidal
shape and aspect ratio 6:1. (Data from [2] show that the actual aspect ratio decreases from about 6.3:1 for neonates, to 6:1 for adults.) Calves at birth are 85-90 cm long, while adults are up to
about 190 cm [2], so that the mass ratio changes from 10:1 to 1:1 during the first three years of life. Drafting is observed at swimming speeds of up to 2.4 m/sec. For this aerobic speed range the
ratio of swimming drag to gliding drag for dolphins is in the range 1 <β < 5, with the value of 3 applicable for average estimates [6].
Figure 1. Aerial photographs of swimming dolphins. (a) An actual leaping sequence; (b) several mother-calf pairs swimming at high speed. In (a), the calf performs a bad leap, resulting in a large
splash (frame 4), slowing it down and losing the close connection required for drafting. The data in (b) are the basis for several of the entries in Table 4.
In some aerial records of mother-calf pairs moving at high speed, one can observe the calf moving from one side to the other obliquely behind the mother. This motion may be due to the bias in yaw
that the calf experiences when moving on one side, and an attempt to 'even' this out, by periodically changing sides.
The first step in the analysis is to try to extract the dominant effects of the postulated hydrodynamic interaction and to build a simplified model, which will be able to give quantitative
predictions for the major parameters, without losing relevance. This model can then serve as a building block for further, more complicated descriptions. This procedure is complicated, however, by
the fact that empirical data are scarce and partial with large inherent errors. Such a model should be simple enough to be solvable by semi-analytic methods before delving into full numerical
analysis, the accuracy of which will be compromised by the large scatter in experimental input data.
Obviously the model needs to be accurate and detailed enough to give useful results. Lang [7] made a list of possible interactions, based in part on the analysis by Kelly [8]. In the latter paper, it
is assumed that the flow is described by classical solutions of two equal-sized spheres moving at the same speed, either in the direction of the line of centers, or perpendicular to this line [9,10].
These solutions show the great advantage of the inviscid flow assumption, which allows linearization, and thus superposition of solutions, such that the two existing solutions can be combined to form
the flow-field around two spheres at any orientation to the oncoming flow. While Kelly's work [8] showed that there is a possibility of one sphere producing a pressure field that can produce thrust
on an object in certain neighboring areas, the spherical model is too crude to offer accurate enough insight into the forces on nearby elongate bodies, especially when there is a big size difference
between the interacting bodies, as in neonate drafting. Kelly's results [8] are based on Lamb's [9] approximate method of reflections, which, as he mentions, results in errors of up to 12% for
touching spheres, dropping to 0.3% at one radius separation. Since that work was performed, exact solutions for the two-sphere interaction have been calculated [11], but the differences do not
qualitatively change the conclusions reached by Kelly [8], and therefore the two-sphere model is still not accurate enough to be a predictive, or even an explanatory, tool for drafting.
Results and discussion
The modeling process is started by looking at drafting in water far enough from the surface to neglect surface wave (Froude number) effects. Viscosity (boundary layer) effects are left out at this
stage, allowing the use of the linear, potential flow model. This will allow superposition of solutions, as mentioned above. Effects of viscosity will be included where required at a later stage (see
below). This is the equivalent of using the Kutta-Joukowski condition in airfoil theory [10], which simulates the results of viscous effects into an inviscid computation.
Next, assume that both mother and calf are moving without changing body shape - that is, with a fixed (rigid) body shape (no tail oscillations). On the basis of observations on several dolphin
species, this shape is taken to be an oblate ellipsoidal shape of aspect ratio of about 6 (see Figure 2). Lang [7] used a similar approach, defining the body shape as a 6:1 ellipsoid with an added
tail region, which adds 20% to the length and 20% to the total surface area. The effects of swimming motions are partly accounted for by adjusting the drag coefficient to include the effects of
swimming, as mentioned in the Background section. Further effects of the swimming motions are discussed later.
Figure 2. Schematic description of (a) a mother-calf pair of dolphins, and (b) two ellipsoids modeling them.
Thus the drag on a swimming dolphin will be estimated as that of a 6:1 ellipsoid moving in the direction of its long axis, multiplied by 3, while a coasting dolphin will have the drag of a 6:1
ellipsoid. The drag on streamlined bodies at zero angle of attack (measured between the direction of motion and the animal's longitudinal axis) is well known [12]. The drag coefficient decreases only
very slightly as the Reynolds number grows (that is, speed increases, or calf size changes with age), and also as the body aspect ratio changes slightly with age, as mentioned previously. These
changes are small enough for us to take equal drag coefficients for both mother and calf.
The basic premise here is that drafting is advantageous as a result of the mother producing a flow-field that has areas of forward-moving water, resulting from a non-uniform pressure field. When the
calf positions itself in these areas, it needs to produce less thrust, as the relative velocity it experiences is lower than the absolute speed of motion, and the energy required is roughly
proportional to the relative velocity cubed. This is the same principle I and others used in developing models for fish schooling [13,14], wave riding by dolphins [7,15] and other drafting
situations, such as duckling motions behind their mothers [16].
A series of cases simplified sufficiently to allow semi-analytical solutions (that is, solutions that do not require numerical analysis of the differential equations, but use computations to obtain
numerical values of the solution functions) are now analyzed. The flow around ellipsoidal shapes is calculated. As mentioned above, these closely approximate dolphin shapes, when excluding fins.
First a single ellipsoid is analyzed, and then two ellipsoidal shapes of equal or different sizes in close proximity.
Motion of a single ellipsoid
The first model developed here is of a single ellipsoid moving in still waters. The flow field obtained is an accurate representation of the force on each point in the flow field and can be seen as
the flow field experienced by a body much smaller than the ellipsoid itself. Thus, such a calculation is a good approximation for the positioning of pilot fish in the vicinity of sharks, but less so
for dolphin calves, which at birth are already about one half the length, and over one tenth of the mass, of an adult [2].
This model is only an approximation to the flow pressure distribution on a large body, showing the generally advantageous areas for calf positioning. Figure 3 shows the flow field around a moving 6:1
ellipsoid. The significant point to notice is the area in front of the body's equator, within the forward 20% of the length (to the left of x = -5) based at the body center of mass, in which there is
a forward component of velocity forced upon the surrounding water. This is essentially the water being pushed out of the way by the approaching body. This is maximal directly in front of the body's
'nose', as the lateral component of velocity vanishes there (not shown). The lateral component grows, and the forward component is reduced, until at about x = -0.5 the forward component vanishes, and
further downstream the x component of velocity becomes negative. This means that the area beyond the cone mentioned above is bad for the calf, as the relative velocity is larger. The negative
longitudinal induced velocity is maximal at the ellipsoid equator, where the lateral component vanishes. Moving backwards, a symmetric situation is observed, with gradually growing lateral velocity
and a smaller longitudinal component until, at about x = +4.5, the longitudinal velocity becomes positive, such that a second advantageous area for the calf is obtained. This flow is a result of
water moving in after the body to fill the volume vacated as the body moves forward. The 'best' position again is directly behind the body, where the forward velocity is maximal, reaching the forward
velocity of the body itself.
Figure 3. A snapshot description of the flow around a single ellipsoid moving from right to left at speed U. The length units are normalized by the maximum diameter of the 6:1 ellipsoid. Lines
indicate the paths of individual fluid 'particles' as they are pushed out of the way and then 'sucked' back in after the body has moved forward. Arrows indicate the direction of motion.
But the positions identified here need to be carefully scrutinized to make sure that they are relevant when the simplifications in the model are taken into account. Thus, some of the 'best' positions
predicted by this model are irrelevant. These are as follows: first, a position just ahead of the mother's nose, being 'pushed' forward; this position will not be adopted for several reasons,
including that the pushing motions are unstable [17] and thus considerable energy and skill (not available to the calf) would be required to keep this position; also, this position would disturb the
mother and interfere with her navigation. Second, a position just behind the mother's rear end. Again, in the real situation, the tail motions result in a different flow pattern, including a jet
moving backwards relative to the mother's body, which cancels out the effect identified by the simplified model. Third, the zone adjacent to the body, behind the equator, is dominated by body motions
when the animal is swimming, so that the horizontal line (actually a cylinder) tangent to the equator in Figure 3 delineates another zone in which this simple model is not relevant.
Thus, the actual best positions will be obliquely in front, and obliquely behind the mother's center-line. The forward position is less practical, for the reasons explained above, and so is used only
in the first few hours after birth. The remaining preferred zone is obliquely behind the mother's equator, which is more reminiscent of the 'infant' position, to which we will return later. All
positions, except directly in front of or behind the mother's center-line, experience lateral velocity components, which need to be compensated for by a lateral force if the calf is to swim in a
straight line. Thus, an optimal trade-off between forward velocity contribution and loss due to sideways compensation can be found.
Two slender ellipsoidal shapes moving in proximity
Here, the analysis is based on the studies by Tuck and Newman [18], and Wang [19] of the interactions of forces and moments produced by two slender ships moving in close proximity, at low enough
Froude numbers that the free surface can be assumed to be flat. Interestingly, they modeled the ships as being equivalent to axisymmetric bodies moving deep under the surface, which is very different
than their original application, as ships are far from being submerged ellipsoids of revolution. Fortuitously, however, the model is directly applicable to the present case of two submerged dolphins,
which are much closer to axisymmetric shapes. The actual motion of the mother-calf pair is now translated into the motion of two slender, ellipsoidal shapes moving at different velocities (Figure 4).
The coordinate system as presented by Tuck and Newman [18], and Wang [19] is here adjusted to present requirements so that the calf is body 1 and the mother is designated body 2. A review of the
basics of slender body theory is presented in the additional data file, with only the final equations required for actual calculations appearing below. The model thus describes the effect of the
mother moving next to a calf, showing the forces on the calf.
Figure 4. Planar coordinate systems for a mother-calf pair of ellipsoidal shapes. L[1], L[2 ]and U[1], U[2 ]are calf and mother lengths and speeds, respectively. The instantaneous longitudinal and
lateral distances between the centers of mass are ξ and η, respectively.
As mentioned in the additional data file, each of the bodies can be defined by a distribution of doublets along the longitudinal axis
where we take
where d is the doublet strength, S[i](x[i]) is the cross sectional area, i = 1 describes the calf, I = 2 the mother, and D[i ]is the maximum area of each at the equator. Without loss of generality,
the calf is assumed to be non-moving and the mother moving at U[2 ]relative to the calf.
After some rather complicated algebraic development (see the additional data file), one can finally obtain expressions for the forces by substituting equation (1) into equations (A-8) and (A-9) in
the additional data file. The force in the longitudinal direction on the calf (that is, the force pushing the calf forward) is:
and the lateral (side) force (including the Bernoulli effect mentioned by Kelly [8]) is
where ρ is water density, ξ is the horizontal distance between center-lines, η is the lateral distance between center-lines, and the remainder of the terms also appear in Figure 4. The yawing moment
on the calf (definitions and further discussion of the forces and moments on moving dolphins can be found in [20]) is
Some results of these calculations are presented in non-dimensional form, in Figure 5.
Figure 5. The forces on the calf calculated from equations (2) and (3). Definitions of the parameters appear in Figure 4. (a) The non-dimensional peak longitudinal force X[max ](thrust) on the calf
as a function of the normalized lateral distance η/L[1 ]from the mother, for different mother/calf size ratios (as indicated by the numbered arrows). The dashed red line indicates closest probable
proximity. The ratios relevant here are from 1 (fully grown calf - the solid blue line) to 2 (neonate). (b) The non-dimensional peak lateral force on the calf, for different mother/calf size ratios,
as a function of the normalized lateral distance η/L[1 ]from the mother. The peak lateral force is obtained when the centers of mass of mother and calf are on a line perpendicular to the long axis (ξ
= 0 in Figure 4). The curve marked 'wall effect' describes the lateral force on the calf when moving close to a wall, as in a tank. (c) The variation of forces and moments as one animal is placed at
different normalized longitudinal positions relative to the other in the fore-aft direction. Positive values on the horizontal axis indicate that the mother's center is ahead of the calf. The lateral
distance is one quarter of the calf's length. The curves marked X are the non-dimensional longitudinal force and Y is the normalized lateral (Bernoulli) force. Positive values indicate forward force
and attraction, respectively, while negative values represent backward forces and repulsion, respectively. Two sets of curves are shown, for neonate calves (where the mother is twice as long; in
solid blue), and for equal-sized animals (fully grown calf; dashed red line).
These results are now applied to the dolphin-drafting situation. The mother and calf are in the same horizontal plane, but the results are applicable also for depth differences, as the assumption is
that both are approximated by bodies of revolution, so that all that is required is that the plane including the two center-lines is defined as the horizontal.
Figure 5a shows the non-dimensional peak longitudinal force on the calf, as a function of the normalized lateral distance from the mother. This force will appear at the position described before,
when the calf centroid is slightly behind the mother's equator. Recalling that the newborn calf is roughly one half the length of the mother, L[2]/L[1 ]is 2 at birth, so that the top line is
applicable. Both mother and calf are approximately 6:1 ellipsoids, so that the minimal distance in terms of calf length is η/L[1 ]= 0.16, with a value of 0.2-0.3 probably best to avoid collisions,
beyond the first few hours after birth. The non-dimensional force can be seen to be about 3.3 at η/L[1 ]= 0.3 for newborn calves, going down to about 1.3 for almost fully grown calves (when L[2]/L[1
]≈ 1). The dotted red line in Figure 5a shows the probable minimal separation so that collision is avoided.
Figure 5b shows the peak lateral force that occurs when the equators of both are side by side, when ξ = 0. Comparing values from Figure 5a and Figure 5b, we see that the lateral force is three to
five times times the longitudinal force. This is the so-called Bernoulli effect. The presence of the neighboring calf body causes the flow on the mother's body on the side closer to the calf to move
faster, and thus produces a pressure drop - leading to a suction force 'pulling' the calf to the mother (and vice versa, but the effect on the mother is less important). This is significant mainly in
the newborn and echelon positions. It is also used in 'bolting with infant' baby-snatching occurrences during the first weeks after birth, in which an adult female stranger swims by the mother-calf
pair at high speed, attracting the calf [21]. Again, the dotted red line in Figure 5b shows the minimal distance to avoid collision. Here, higher ratios of length are calculated, to show the approach
to the limit of moving next to a wall, which results in strong attraction.
Figure 5c is somewhat different, as it looks at how the effects change with relative longitudinal (fore-aft) positions of the two animals. In this case, two limiting cases are presented, the first
where the calf is half the length of the mother (as for a neonate), and the second where both animals are the same size (adult). This figure is thus not to be used directly for calculating forces and
moments, but is presented in order to show the preferential areas. Thus, looking first at the longitudinal force X and starting with the calf being far ahead of the mother (at the left side of the
curve) the calf experiences a growing forward force, reaching a first maximum when its center-line is at the mother's head. This is one of the impractical maxima predicted by the single-ellipsoid
model mentioned earlier. The forward force drops, and becomes negative, rising again to a zero value when the two animals are abreast of one another. When the mother's equator is somewhat ahead of
the calf's, the maximal thrust is provided when the calf's center of mass is approximately at two-thirds of the mother's length. This probably corresponds to the infant and echelon positions. The
position of the thrust maximum does not vary with the mother/calf size ratio. The fact that the relative position does not change as the calf grows is probably very useful, as the calf has to learn
to find only one such position. The lateral force Y has full fore-aft symmetry. The maximal force is obtained when the animals are side by side, and is about three to four times as large as the
maximal forward force, thus being the most dominant effect (the Bernoulli effect). This is especially important for very young calves, where it acts as a suction force keeping them by their mothers.
Here again, the optimal position does not change with calf size.
Force calculations
To find the actual forces in specific cases, we need to define a normalizing coefficient based on specific data, to make the remaining terms in the integral dimension-free. This coefficient is
obtained by taking all constants in equations (2) to (4) out of the integration. The coefficient is thus
Recalling that we assumed 6:1 ellipsoidal bodies of revolution for both mother and calf with no allometric changes during growth, the ratios S[i]/L[i]^2 can be calculated once and for all as
From [2] we see that drafting is observed at swimming speeds that are up to U = 2.4 m/sec. We take the mother's length to be about 1.9 m and the neonate's length as 0.95 m. Substituting these values
into the equation for the non-dimensionalization factor K, we obtain for the neonate:
K = 1030*2.4^2*0.95^2*0.0218^2 = 2.54 (7)
A reasonable minimal distance between mother and calf center-lines is the sum of half the mother's thickest section plus half the calf's thickest section. This is 1.9/12 + 0.95/12 = 0.24 m for the
neonate, and 2*1.9/12 = 0.32 m for a fully grown calf. The spacing parameter is therefore at best η/L[1 ]= 0.25 for neonates and η/L[1 ]= 0.17 for fully-grown calves.
The maximal forward force on the calf can now be obtained from Figure 5a. The non-dimensional value is about 4.16, so applying equation (7) it is found that the force is about 4.16*2.54 = 10.6 N. The
maximal Bernoulli attraction is close to three times as large (the non-dimensional value for η/L[1 ]= 0.25 and length ratio L[2]/L[1 ]= 2, in Figure 5b, is about 12.1), at about 12.1*2.54 = 27.9 N.
These values can now be compared to viscous drag on the calf, recalling that the drag force is defined as D = 0.5ρU^2AC[D' ]where A is the surface area and C[D ]is the drag coefficient. At speeds of
2.4 m/sec the drag coefficient based on wetted area for a 6:1 ellipsoid is approximately 0.003 [12] in the longitudinal direction. The surface area of the 0.95 m calf is about 1.5 m^2, so that the
drag is about 12 N for the stretched body and about 36 N for the swimming calf. Comparing these drag estimates to the forward and lateral forces found previously, it is seen that the drafting forward
force is close to 90% of the total drag force (that is, 10.6/12 for a coasting, stretched-straight calf) and the Bernoulli suction is much larger, but in a different direction. Thus, even when
considering the enhanced drag when performing swimming motions, we see that the mother can provide a large proportion of the force required for a neonate. These numbers are reduced for larger calves,
but this is again reasonable, as the larger calves are both more powerful and more adept at swimming. The cost to the mother is increased by the presence of the calf, obviously, as the curve for
thrust (X) is antisymmetric.
Next, the effect of increased lateral distance between the center-lines of mother and calf is assessed. This is obtained from Figure 5a by moving along lines of constant L[2]/L[1]. As mentioned
above, a reasonable minimal distance between mother and calf center-lines is the sum of half the mother's thickest section plus half the calf's thickest section. Table 1 shows the loss in forward
force as the distance grows beyond 0.24 m by a quantity ε.
Table 1. The forward force (in N) on a neonate, as a function of lateral distance from the mother
As shown above, the mother can provide close to 90% of the thrust needed for the calf to move at 2.4 m/sec when the mother and coasting calf move side by side, almost touching. Table 1 shows that
even when they are laterally separated by 30 cm (two calf diameters) the mother can still provide 3.3/12 = 27% of the required thrust.
A similar calculation for full-grown calves, where L[1 ]= 1.9 m, and L[2]/L[1 ]= 1, appears in Table 2. The minimal distance is again taken to be the sum of the body half-thicknesses. This sum is now
1.9/12 + 1.9/12 = 0.317 m. The nominal non-dimensional distance is then η/L[1 ]= 0.317/1.9 = 0.167. The factor K, from equation (5) is now K = 10.16, and this is used to generate the figures in Table
2, applying the values from Figure 5.
Table 2. The forward force (in N) on a fully grown calf, as a function of lateral distance from the mother
Tables 1 and 2 show that the gains from drafting are concentrated in an area close to the mother, with the slope of the decrease not changing as the calf grows. This is true also of the side forces,
as shown in Table 3. As the thrust required for a fully grown calf is much (about four-fold) larger than for a neonate, however, the percentage gain is smaller, being about 62% (=29.8/(12*4) when ε =
0, and only 25% when ε = 30 cm. These percentages are for coasting, fully grown calves. We expect fully grown calves to swim most of the time, so that the drag is about three times as large, and the
real savings will be only about 20%, at best. This probably helps explain why less drafting is observed as the calf grows: the relative gain decreases, while the calf has more stamina. On the other
hand, this may lead to loss of contact in strenuous chase situations where the calf needs more help.
Table 3. The peak side force (in N) on a neonate, as a function of lateral distance from the mother
Similar results can be obtained for the side force and yawing moments. We only show the side force on the neonate, for which the Bernoulli effect is most important. This appears in Table 3, which is
based on Figure 5b. Tables 1, 2, 3 show the effects of changes in lateral (in the horizontal plane) distance. In some of the aerial records of mother-calf pairs moving at high speed, however, one can
observe the calf moving from one side to the other obliquely behind the mother. As mentioned previously, this motion may be due to the bias in yaw that the calf experiences when moving on one side,
and an attempt to even this out by periodically changing sides.
The rapid decrease of the transmitted forces with lateral distance is a clear indication that forced 'running', as in chases by fishing vessels, can easily cause loss of the mother-calf connection.
Moving at high speeds will require strenuous, large-amplitude motions by both mother and calf, so that in order not to interfere with each other they would have to enlarge the lateral distance, from
almost touching (ε = 0) to a safe distance. Thus, a significant conclusion here is that long high-speed chases, where the drafting gain for the calf is much smaller as a result of the increased
lateral distance, are much more dangerous. This is in addition to the fact that there is less time for catching up after an error in judgment by mother or calf. It is interesting to mention, in this
context, that adult schooling dolphins, which are usually more widely dispersed, tend to move closer to each other when chased, perhaps attempting to use some of these hydrodynamic advantages. This
is also observed in fish schools [13,14].
Further hydrodynamic effects
In order to obtain an exact mathematical solution to the drafting problem, I had to make some simplifying assumptions: first, the propulsive motions were not accounted for; second, no free surface
effects were considered (water of infinite depth); third, inviscid flow was assumed; and fourth, uniform velocity (no jumps) was assumed. The effects of relaxing these assumptions are now examined,
to see what effect such relaxation has on the results presented above. Obviously, only rough estimates of these additional, complicated effects can be made.
Effects of propulsive motions
The propulsive motions of the mother and calf are now considered. These can be described as a vertical oscillation of the body and caudal flukes, with amplitude minimal at the shoulders, and growing
as one moves rearwards. For example, Romanenko [22] presents a case in which the distribution of maximal vertical excursion along the body of Tursiops truncatus is approximated by
where h[T ]is the maximal vertical excursion of the fluke and x[n ]= x/L is the longitudinal coordinate measured from the beak, divided by the animal's length, L. The actual periodic excursions of
the body center-line are h = h[max]sin(2πt/ν) where the amplitude h[max ]is obtained from (8), ν is the frequency and t is time. As mentioned previously, the body swimming oscillations increase drag
by a factor of about three [6,20]. Thus, the hydrodynamic benefits of interaction, which do not increase due to the propulsive motions, are reduced by this factor. Presumably the calf will have a
large swimming drag penalty factor, at least during the first few days and weeks of its life, until it masters the 'secrets' of efficient swimming and overcoming the effects of buoyancy. As a result,
there is a clear advantage for the calf to swim in the 'burst-and-coast' mode [20,23,24], in which it swims by body oscillations for a short burst, then coasts for a while, repeating this behavior
In the burst-and-coast mode, the animal accelerates during the burst and decelerates during the coast. Thus, a calf using this mode of energy saving would appear to move relative to the mother. This
would appear as forward motion during the burst, and slipping backwards during the coast. It might be difficult to observe this behavior, as the effectiveness of the burst-and-coast rises (more
energy is saved) when the bursts are short and the velocity does not change appreciably [20]. For example, at an average swimming speed of about 3.2 m/sec, the energy required for burst-and-coast
swimming is only 37% of that required for constant-speed swimming. This means that the swimming drag penalty for the calf is essentially eliminated at mother and calf cruising speeds, as the swimming
drag would be only 3*0.37 = 1.11 times the coasting drag. At higher speeds, say about 6.4 m/sec, the best achievable saving is approximately 0.56, which means that the swimming drag on the calf, even
when using burst-and-coast, would be 3*0.56 = 1.68 times the coasting drag. This is probably one of the reasons for calves becoming detached from mothers at higher swimming speeds: the cost increases
by 1.68/1.11 = 1.51. This 51% increase in cost, when combined with the fact that the energy required goes up roughly as the speed cubed, means a 12-fold total increase in energy required (1.51 * 6.4^
3/3.2^3 = 12.1) by the calf to keep up with the mother when the swimming speed doubles from 3.2 m/sec (fast cruising) to 6.4 m/sec (escape speeds).
It may seem that using burst-and-coast could be counter-productive, as the calf moves away from the rather narrow range of beneficial positions. This loss can be minimized if the calf starts the
burst and accelerates when it is at the optimum position for maximum forward force, at about 65% of the mother's length where the acceleration is easiest (Figure 5c), and coasts when the surge force
contribution drops, as it approaches the mother's center-line, thus dropping back to the more advantageous positions.
The propulsive motions cause a periodically varying pressure field, which affects the results shown previously in two additional ways. First, the fact that the body, and especially the caudal flukes,
produce a backward-moving wake makes the zone directly behind the mother highly undesirable, as moving in that area means moving against a backward-flowing current (as in schooling [13,14]). Thus,
the calf should be located outside of two cones with apices at the mother's head and tail. The cone around the tail is larger, due to the larger vertical excursions, which puts the apex here further
forward (Figure 6).
Figure 6. Exclusion zones for the calf due to increased energy expenditure. Exclusion zones are bounded by dashed lines. The (a) side and (b) top views show the zone of exclusion around the tail; (c)
the front view shows the preferred angular sector for calf placement.
In the analysis for non-oscillating ellipsoids presented in the previous section, the effects are axially symmetric, such that any angular displacement between the line connecting the center-lines of
mother and calf and the horizon gives the same result. It is clear, however, that moving closely above or directly below the mother is more difficult, because of the vertical body oscillations (and,
in addition, the surface effects if the calf were to be above the mother). As a result, we see that the calf is limited to zones between approximately 45° above and below the mother's center-line.
The next question addressed here is what are the preferred positions for the calf, in the vertical plane, relative to the mother.
Logvinovich showed (as cited in [22], page 135) that for slender bodies the pressure field around a circular cross section performing transverse oscillations is:
where p is the local pressure, p[8 ]is the undisturbed pressure, ρ is the water density, and r and θ are polar (cylindrical) coordinates. Here the angle θ is measured from the vertical (see the front
view in Figure 6c), t is time and V[n ]is the vertical excursion velocity. V[n ]can be related to the vertical excursion of the dolphin's body as
where U is the dolphin's forward speed [22] and h is defined in the paragraph following equation (8). Here, only the time-averaged angular dependence of the pressure is needed. This is defined by the
factor 1 - 4sin^2θ in equation (9). The added pressure is positive (repulsion) for 0° <θ < 30°, negative (attraction) for 30° <θ < 150° and positive again for 150° <θ < 180°. So, the calf should be
within 60° upwards and downwards from the horizontal, relative to the mother, for the Bernoulli suction shown above to be most effective. Within this sector, the vertical motions of the mother
increase the suction force. This increase grows as the calf's center of mass is located further backwards relative to the mother, but is canceled out by the larger excursions of the mother's body,
which means that the deviations from the mean attractive force are larger and thus more difficult to adjust to. The conclusion is that the swimming motions actually increase the Bernoulli attraction
somewhat, with the preferred position for this attractive force still roughly laterally to the mother's center of mass. More exact calculations of this contribution are not realistic, as the
assumptions of slender body theory are relatively inaccurate in this situation, and are mainly used to show trends.
Looking at cases where the mother helps the calf, and not vice versa, the mother's propulsive motions are defined as a periodic vertical motion of the rear part of the body and caudal flukes. This
motion produces forces, which are evident in the backwards-moving thrust component of the wake, that are, at least near the animal, distinct from the drag part of the wake. The thrust component
appears as a reverse, thrust-type Kármán vortex street [14,25]. The drag part is the shed boundary layer, an annular mass of water moving forward (relative to the earth). Far behind the dolphin these
two components cancel out in the case of constant-speed swimming, as from Newton's law there is no net motion far from a body moving at constant speed.
In the study of fish schooling mentioned previously [13,14], I showed that a following fish can save energy by choosing the right position relative to a leader. That analysis is fully applicable here
after rotating the plane of motion by 90° to accommodate the vertical motions of the dolphin. The analysis is based on the fact that while the reverse Kármán vortex street produces an undulating
backwards-moving jet, relative to the vortices, between the vortices it produces a forward-moving component of velocity outside that area. Translating this observation to the drafting situation, the
area directly within the rectangle roughly defined by the extreme positions of the mother's caudal flukes during propulsive motions is an area of higher backwards velocity than elsewhere. This means
that a calf that finds itself in the box defined by this rectangle and the longitudinal axis (see side-view in Figure 6a) will have to work against a higher current than if it were elsewhere. On the
other hand, being either slightly above or below this box puts the calf in a position where it will need less energy to keep up with the mother. This effect is probably very important for suckling
situations, especially for neonates. The real drag wake is obviously three-dimensional, such that it will appear as a series of tilted vortex rings, but the two-dimensional vortex street model, while
probably quantitatively inaccurate, gives a good description of the preferable zones for the calf to occupy.
Recently some studies have appeared [26,27] on fish utilizing regular Kármán vortex streets from fixed bodies. This is not directly relevant to the present situation, as the flow directions are
reversed, but is useful as experimental evidence for using periodic vortices as a drafting accessory; thus, this serves as proof that such vortices are detectable and can be utilized. Actually, there
is an additional small gain possible by using forward momentum in the boundary layer shed by the mother, when the calf is in the suckling position. The propulsive oscillations will result in boundary
layer separation so that the suckling calf will be able to benefit from this layer. This gain is small, however, as will be shown by the following argument. The total boundary layer momentum shed per
unit time is, at most, equal to the drag force on the mother. This boundary layer has an annular shape with elliptical cross section. Thus, the part that the calf will pass through is relatively
small, and the gain is similarly small.
A rough estimate for a neonate with 16 cm diameter is presented next. The detached wake of the swimming mother has a total circumference of roughly πD[m ]+ 2*0.2L[m ]where the subscript m stands for
the mother; D[m ]= 32 cm and L[m ]= 1.9 m. The second term is due to the oscillations. So, the calf will pass through 16/(π*32 + 2*0.2*190) = 0.0625; the gain in thrust due to the shed boundary layer
can therefore be, at best, 6.2% of the drag on the mother's fore-body (or about 3% of the total drag on the mother, which is approximately 12% of the neonate's drag at the same speed). It is
important to mention here that this gain can be obtained even when the mother is coasting, so one can predict that suckling calves will preferentially draft during coasting.
Free surface effects
The main influence of the free surface of the water is to increase the energy required to move at a given speed as a result of the energy wasted on lifting the free surface. This can be roughly
modeled by an increase in drag coefficient, by a factor of up to 5, depending on the ratio of depth to body hydraulic diameter, and on the swimming speed (Froude number); further details may be found
in [28-30]. This means that the gains due to the different hydrodynamic effects discussed in this and the previous reports are reduced. When dolphins are relatively close to the surface the practical
conclusion from the previous statement is that dolphin calves are expected to be deeper than the mother, in the lower quadrants shown in the front view in Figure 6c. Thus, the calf is expected to be
at a depth equal to, or greater than, that of the mother, except when very young. As mentioned previously, neonates cannot control buoyancy well, and tend to 'pop like corks' to the surface [2]. In
this case, being slightly above the mother's depth may help, in that the hydrodynamic suction towards the mother's body will help reduce the upwards force due to the positive buoyancy. Unfortunately,
in chase situations the rate of breathing is increased, so that swimming has to occur closer to the surface. Thus, just when the calf needs the most assistance, the drag is increased beyond the
nominal value because of wave drag.
Another free-surface effect stems from the fact that the interaction is negligible when in air, so that breaching effectively breaks up the drafting interaction. This effect is not too harmful for
juveniles, as the ballistic motion the mother and calf perform means that if they leave the water together, and return together, they will be able to re-establish drafting. But infants, and
especially neonates, who are less adept at porpoising, may either breach or return at non-optimal penetration angles (see analysis below, and Figure 7), increasing their drag and causing a speed
differential. In addition, if the calf does not emerge from the water at the right angle, its aerial trajectory will be shorter. The calf will end up further behind the mother, and would then have to
catch up.
Figure 7. The effects of non-optimal porpoising leaps by a mother-calf pair. (a) Optimal for distance (45° water exit and entrance) and minimal splash, with longitudinal penetration; and (b)
Viscous flow
Viscous flow theory is required to estimate the original drag force on the animal, before calculating the interactive corrections. However, as we are interested in the mother-calf interactions here,
we do not need this type of calculation. Furthermore, as the Reynolds numbers are large (R = O(10^6)), the boundary layer approximation is sufficiently accurate. This means that only thin layers of
fluid are affected by viscosity. The thickness of these layers is not more than 1-3% of the body radius, so that the body may be assumed to be that much thicker and to move in inviscid fluid
(displacement thickness model). At distances of 25-50% of body radius, where the calf may be found, the effects are therefore negligible, to the level of accuracy of the present discussion.
Synchronization of jumping
At high swimming speeds, dolphins usually resort to porpoising [20,30]. To examine whether this might be a factor in separation, the surface-piercing event is broken up into five steps: step 1,
horizontal swimming before the event; step 2, water exit; step 3, aerial motion; step 4, water return; and step 5, horizontal swimming after the event. Steps 1 and 5 are regular drafting situations
and thus do not require specific consideration here. Step 3 is a ballistic trajectory for which the distance crossed in a jump is a function of speed and angle only, but not of animal mass (see
equation 11)
Thus, if the mother-calf pair exits the water at the same speed and angle as each other, they will land in the same relative positions as when leaving. On exit, leaving at the wrong angle can reduce
the distance crossed in the air; this effect, however, is very small. From equation (11), the maximum distance is achieved at 45°. Thus, the difference in distance is
For a calf jumping at 40° (a 5° difference), the decrease in distance jumped is only 0.015 (1.5%), and even if the calf jumped at 30° the difference in distance crossed by the center of mass, while
out of the water, is 0.134 (13.4%). For 50°, from equation (13), the difference in distance crossed is again only 0.015 (1.5%) as the decrease is symmetrical with respect to the angular difference
from the maximal 45°. The distance the mother can cross, when moving at 2.4 m/sec is, from equation (11), above, about 59 cm, and at 4 m/sec it is 1.63 m. Thus, even at the higher speed, a 15° error
by the calf will result in only a 22 cm longitudinal displacement in landing.
Next, steps 2 and 4 are examined. If re-entry is at same penetration angle (the angle between the animal's long axis and the horizon) for both mother and calf, the only differences that may occur are
a result of the spray energy being proportional to animal mass (equation 2 from [20])
Here, β is the ratio of swimming to gliding drag (usually about 3 for strenuous swimming, as mentioned previously); m is the added mass coefficient, which is a function of angle of incidence between
the animal's long axis and the direction of motion. The added mass coefficient m ranges from about 0.2 for swimming in the longitudinal direction [31] to about 1.0 for broadside motion of smooth
bodies of revolution, such as the ellipsoid. M is the animal mass and U the speed. The energy E[j ]is proportional to animal mass so, all other parameters being equal, the energy lost by the smaller
calf is less, but can be a higher proportion of the calf's energy store. For example, neonates can have less than 40% of the oxygen-storage capacity of adult dolphins [32]. This is probably a minor
consideration, however, when compared to the effects of possible differences in attitude when leaving and re-entering the water.
The stretched straight dolphin was modeled as a 6:1 ellipsoidal body. The splash produced by such a shape, when penetrating water, is highly dependent on the angle between the body longitudinal axis
and the angle of penetration, with the lowest value being, naturally, when these angles are equal (see Figure 7). This is roughly mirrored in the change in value of m as described above: the splash
energy lost if the ellipsoid hits the water surface broadside, which is roughly proportional to the penetrating body's area parallel to the surface, will be at least five times that of the same
ellipsoid moving in the direction of its long axis. This increase is even before accounting for separation of flow, and other drag-enhancing factors. Thus, leaving or re-entering the water at the
wrong attitude (the angle between body axis and penetration angle not being zero) can result in massive slowing of the body.
Optimal exit and entry require that the animal change orientation in mid-air from roughly 45° above the horizon when exiting, to roughly 45° below the horizon for re-entry. In practice, dolphins are
observed to exit at 30-45° [33], but this does not change the present argument. This pitching rotation of the body is easily achieved by adults and experienced juveniles, but may be more difficult
for neonates, who can 'lose' twice, both on exit, as a result of carrying more water as spray on exit, thus slowing down, and on re-entry when they encounter much higher drag. Copying the mother's
motions will probably enable the calf to exit smoothly, but water entry is probably more difficult. An additional point to note is that the infant position is harder to keep in acceleration towards
leaping, as it requires synchronization of tail beats, so it makes sense for the calf to move to the echelon position when leap-burst-and-coast motion starts.
Comparison with existing observations
Observations of drafting in the literature have been sparse and mainly anecdotal, and almost no data of the accuracy and detail required were found. Data collected from flights over spinner dolphin
groups [1] are the only reliable data found. Ten instances of assumed mother-calf pairs were found to be sufficiently clear to extract measurements. These are shown in Table 4; five of the
mother-calf pairs appear in Figure 1b. These pairs were analyzed using the following procedure.
Table 4. Geometric parameters of drafting mother-calf pairs from Figure 1b, for drafting calculations
First, I assumed the larger animal in each pair to be the mother, and took its size to be 1.90 m [2] from nose to caudal peduncle. This defined the scale of the photograph. Using the same scale, and
assuming that any possible depth differences between mother and calf were negligibly less than the altitude of the airborne camera so that parallax effects on the size could be neglected, I obtained
the calf length L[1 ]in Table 4 and the ratio of L[2]/L[1]. Using the same scaling, the lateral distance between center-lines of the mother and calf η and the non-dimensional value η/L[1 ]were
obtained. Next, the longitudinal displacement of centers of mass ξ was measured, and its value normalized by the mother's length ξ/L[2]. This displacement ξ/L[2 ]was used to find the value of thrust
(X) force from calculations such as those shown in Figure 5c. Figure 5c is calculated for two cases: that of equal-sized mother and calf, and for a 2:1 length ratio. As we see, for different-sized
calves only the numerical values change, not the shape of the curve. Thus, we can find the value of the X force, relative to the maximum forward force, depending on the coordinate ξ/L[2]. From Figure
5c we see that the maximum is obtained approximately at ξ/L[2 ]= 0.35, so that, for example, for pair A2, ξ/L[2 ]= 0.22, so that X/X[max ]= 0.90.
Next, we find the maximal thrust (X[max]) for this case from Figure 5a, given the values for L[2]/L[1 ]and η/L[1]. Taking again pair A2 as the example, we have L[2]/L[1 ]= 1.48 and η/L[1 ]= 0.36.
From Figure 5a the ordinate is then Ordinate = 1.76. We now use equation (5) to obtain the actual force in newtons, assuming a swimming speed of 2.4 m/sec and the calf length of 1.28 m from Table 4.
The force is then:
X = X[max]*0.9 = K* Ordinate* 0.9 (15)
Where, in this case,
So that X = 1.76*4.62*0.9 = 7.3 N.
Recalling that the drag on a newborn calf coasting at 2.4 m/sec was estimated at 12 N, and that the drag coefficient does not change because of geometric similarity, the drag increases simply with
surface area (length squared). We can thus estimate the drag on a calf of length L[1 ]coasting at 2.4 m/sec by equation (16)
which for the calf of pair A2 is 21.8 N.
So, we finally determine that, in this case, the drafting thrust is 7.3/21.8 = 0.33 of the force required for the calf to coast. This value appears as X/X[req ]in Table 5. From this column we see
that energy savings of up to 61% were available to the calves pictured, with only one case (A3) in which no thrust interaction was obtained. Table 6 summarizes the side-force interactions for these
10 pairs, showing large forces in some cases. There is no clear correlation between Bernoulli forces produced and calf size; one might be led to think that larger (or even relatively larger)
Bernoulli forces would be produced by mothers with small calves, but this trend is not in evidence here. This Bernoulli force does not cause rapid attractive motions bringing the mother and calf to
collision, as the drag coefficient in the broadside direction is at least five times the value for motion along the longitudinal axis [12].
Table 5. The thrust force increment on the calf due to drafting
Table 6. The lateral suction force (Bernoulli attraction) on a calf
Figure 1a shows a sequence of snapshots of a mother-calf pair leaping, in which the calf, probably a neonate, misjudged the attitude angle and so produced a large splash and ended up far behind the
mother, in a zone where essentially no drafting gains are possible. This calf is therefore at risk of detachment and loss. Unfortunately, the recorded sequence ended as this point, so the later
development of this situation is unknown.
Drafting has been shown to enable adult dolphins to help their young by reducing the forces required of the young for swimming. Several separate hydrodynamic effects join to produce this interaction.
Under ideal conditions, the drafting force can counteract a large part of the drag experienced by a neonate calf. Examination of aerial photographs of eastern spinner dolphin mother-calf pairs shows
that the predicted preferred positions for the calf to maximally benefit from these hydrodynamic effects are found in most cases. There is a need for more controlled experimental data to be able to
improve the current model, especially where the effects of viscosity and free surface penetration are concerned, and to ascertain whether burst-and-coast motions are found when dolphins flee tuna
fishermen. The clear implication for dolphin chases is that long chases at high speeds will result in an increased probability of separation of mother-calf pairs, as a result of a combination of
fatigue on the calf's side, decreased help from the mother due to the larger body oscillations by both mother and calf, and the increased probability of erroneous leaping.
Materials and methods
Aerial photography
Images were taken in the eastern tropical Pacific Ocean from a Hughes 500D helicopter flying at approximately 60 knots (around 110 km/h) at about 250 m altitude (Figure 1a) and 220 m (Figure 1b). The
camera was a 126 mm Chicago Aerial Industries KA-76 with a 152 mm lens, f = 5.6, and shutter speed 1/1200 sec, based on ambient light conditions, using Kodak Plus-X type 3404 black-and-white film.
The images were converted to digital format by magnifying under the microscope by 1.25× (in each image, 1 mm is approximately 170 pixels).
Additional data file
The following is provided as an additional file: a brief overview of slender body theory used for the calculation of flow around a pair of slender bodies (Additional data file 1).
Additional data file 1. A brief overview of slender body theory used for the calculation of flow around a pair of slender bodies
Format: PDF Size: 38KB Download file
This file can be viewed with: Adobe Acrobat Reader
I thank Elizabeth Edwards for asking the questions that resulted in this paper and her careful and helpful monitoring of the project, F. Archer, W.F. Perrin, and S. Reilly for useful discussions, W.
Perryman and K. Cramer for permitting the use of their unpublished photographs (Figure 1) and O. Kadri and T. Haimowitz (deceased) for helping with the calculations. This study was supported by NOAA/
NMFS Southwest Fisheries Science Center.
Sign up to receive new article alerts from Journal of Biology | {"url":"http://jbiol.com/content/3/2/8","timestamp":"2014-04-18T23:26:02Z","content_type":null,"content_length":"157875","record_id":"<urn:uuid:78f879fc-7eb1-43d9-86b3-f5d2181c8f83>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00346-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: Integrability by Quadratures of Pricing Equations
Claudio Albanese, Giuseppe Campolieti
January 29, 2001
Department of Mathematics, University of Toronto
Math Point Ltd., Toronto
Find us at www.math-point.com
We introduce a canonical transformation method for finding solutions to pricing prob-
lems by quadratures. The method is systematic and allows one to derive in a unified
framework the exact solutions in the pricing literature. As an application, we construct
a new families of pricing models based on the squared Bessel process which extends the
constant-variance-of-volatility (CEV) model and is integrable by quadratures.
The main difficulty in integrating a given differential
equation lies in introducing convenient variables, which
there is no rule for finding. Therefore we must travel the
reverse path and after finding some notable substitution,
look for problems to which it can be successfully applied.
Jacobi, "Lectures on Dynamics", 1847.
1 Introduction | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/1023/2116163.html","timestamp":"2014-04-20T03:55:43Z","content_type":null,"content_length":"8078","record_id":"<urn:uuid:3b675566-b30f-48ef-b664-299e56f726a1>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gosper's Verstion of Stirling's Formula
Date: 05/30/2002 at 07:01:20
From: Amy Choi
Subject: Stirling's formula
I am doing a project on Stirling's formula, and I found that
there is a better approximation to n! which was noted by Gosper.
I would like to know a proof of the approximation which is
n! ˜ sqrt((2n + 1/3)Pi) * n^n * exp(-n)
Thank you.
Date: 05/30/2002 at 09:38:29
From: Doctor Mitteldorf
Subject: Re: Stirling's formula
The derivation comes from expressing the log of the factorial as
the sum of n logs, then noticing that, for large n, this looks
like the Riemann sum for the corresponding integral of log(n).
There's a fuller explanation at
- Doctor Mitteldorf, The Math Forum
Date: 06/01/2002 at 02:38:25
From: Amy Choi
Subject: Thank you (Stirling's formula)
Thank you for answering my question. That was very helpful. | {"url":"http://mathforum.org/library/drmath/view/60692.html","timestamp":"2014-04-21T07:36:31Z","content_type":null,"content_length":"5991","record_id":"<urn:uuid:43c4bb37-4e6f-45da-83fd-d8f169774f28>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00178-ip-10-147-4-33.ec2.internal.warc.gz"} |
January 12th 2008, 02:43 PM #1
Dec 2007
I hope and pray that this thread does not get neg feedback....
Ok I am trying to find the area of three points;
A=(-2,5) B=(1,3) C=(-1,0)
I am following the text books examples on how to do this and so far it's working out. BUT in the text/example it talks about the "right angle is at vertex B."
MY question is (to understand this); what does the word vertex mean?
Thank you in advance for any reply/help
I hope and pray that this thread does not get neg feedback....
Ok I am trying to find the area of three points;
A=(-2,5) B=(1,3) C=(-1,0)
I am following the text books examples on how to do this and so far it's working out. BUT in the text/example it talks about the "right angle is at vertex B."
MY question is (to understand this); what does the word vertex mean?
Thank you in advance for any reply/help
here, each of the points is a vertex of a triangle that is obtained by connecting all the points. you can think of a vertex here as a sharp or pointed edge of a figure. see here for more info
Since the area of a single point is zero, the area of three three points will be (3)(0) = 0
I hope and pray that this thread does not get neg feedback....
Ok I am trying to find the area of three points;
A=(-2,5) B=(1,3) C=(-1,0)
I am following the text books examples on how to do this and so far it's working out. BUT in the text/example it talks about the "right angle is at vertex B."
MY question is (to understand this); what does the word vertex mean?
Thank you in advance for any reply/help
Do you understand why the angle at B is a right angle?
What will you do when the triangle you get by connecting three points does NOT contain a right angle?
..no, sorry I don't understand, and the book doesn't explain either. uhmmmm I have no idea, this is the first time I ever see anything like this. Sorry for such inconvenience, and much thanks!!!
Are you familiar with the fact that if the product of the gradients of two line segments is equal to -1, then the line segments are perpendicular to each other? If so, calculate the gradient of
line segments AB and CB. Now take the product of these two gradients ......
I used the given formula to calculate AB and CB
and after I was asked to find the area. I got $\frac{13}{2}$...don't know if this is right or not...
This is telling you how to show that the angle at B is 90 degrees .....
January 12th 2008, 02:53 PM #2
January 12th 2008, 07:12 PM #3
January 13th 2008, 04:54 AM #4
Dec 2007
January 13th 2008, 05:10 AM #5
January 13th 2008, 05:29 AM #6
Dec 2007
January 13th 2008, 11:43 AM #7
January 14th 2008, 05:49 PM #8
Dec 2007 | {"url":"http://mathhelpforum.com/pre-calculus/25966-question.html","timestamp":"2014-04-19T07:24:03Z","content_type":null,"content_length":"54971","record_id":"<urn:uuid:876f1db4-dac5-4a32-a0e7-fefcb99788cc>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
│Requirements │Four-Year Honours Plans │Double Degree Plans │Three-Year General Programs │
│ │Co-op │Regular │Co-op │Regular** │Co-op │Regular │
│Minimum total units │22.5* │20 │28 │26 │17 │15 │
│Minimum work-term units │2.5* │0 │2 │0 │2 │0 │
│Minimum non-work-term units │20 │26 │15 │
│Minimum math units │9 - 14 │12 │8 │
│Minimum non-math units │5 │12 │5 │
│Minimum Cumulative Average (CAV) │60% │60% │60% │
│Minimum Major Average (MAV) All ACTSC, AMATH, PMATH, and Mathematical Physics plans │65% │not applicable │not applicable │
│Minimum Major Average (MAV) All other plans │60% │60% │not applicable │
│Maximum excluded units │three units │three units │four units │
│Maximum course attempts allowed │25 units │31 units │20 units │
│Minimum number of full-time terms │8 │7 │10 │none │
│English Writing Skills │All BMath and BCS degree candidates must satisfy an English Writing Skills Requirement. See below.│
*The minimum total units and work-term units are 22.0 and 2.0, respectively, for the Chartered Accountancy and Mathematics/Teaching Option plans.
**WLU students
The terms used in Table I are explained below.
Math courses – Courses with one of these prefixes: ACTSC (Actuarial Science), AMATH (Applied Mathematics), CO (Combinatorics and Optimization), CM (Computational Mathematics), CS (Computer Science),
MATH (non-departmental Faculty courses), PMATH (Pure Mathematics), and STAT (Statistics). Any course that is cross-listed with a course having one of these prefixes is also considered a math course,
regardless of the label under which it is taken. The following courses, with content very similar to courses offered in the Mathematics Faculty, are also considered to be math courses: ECE 222, 354,
428; SE 112, 240, 382, 463, 464, 465.
Non-math courses – Courses with the prefix MTHEL and those courses offered by other faculties (excluding courses cross-listed with math courses and courses, listed above as math courses). Work-term
courses (COOP 1 to COOP 6) and professional development (PD) courses do not count as math or non-math courses.
Major Average – See sections 2 and 4 in "Faculty Policies" on pages 10:43-10:45.
Cumulative Average – See sections 1 and 3 in "Faculty Policies" on pages 10:43-10:45.
Course Attempt – Any course enrolment for which a student is assigned a final grade (including a grade of WD). Transfer credits from other institutions are also considered to be course attempts.
Excluded Course – A course which has been excluded is not included in any unit counts toward degree completion or in averages but is included in course attempts. Any failed course must be excluded,
but a student may also choose to exclude a course with a passing grade below 60 (such a request must be made no more than six months after the grade appears in Quest). An excluded passed course
normally cannot be used to meet any degree requirement or to meet the prerequisite requirements for another course.
Full-time Term – A term in which a student is enrolled in at least 1.5 course-attempt units.
Unit – The credit value associated with any course. All courses offered in the Faculty of Mathematics have a value of 0.5 units.
Co-op Requirements
In addition to the requirements specified in Table I, Co-op students are required to complete a minimum of five Professional Development courses. PD 1 must be taken in the term prior to the first
work term and PD 2 must be taken during the first work term. At least one other Professional Development course must cover non-technical skills. With the exception of PD 1, these courses are normally
taken during Co-op work terms. Students are encouraged to take a professional-development course each work term until the requirement is met; the required schedule for completing the courses is as
│By the start of term│Minimum number of credited PD courses │
│2B │2 │
│3A │3 │
│3B │4 │
│4A │5 │
First-Year English Writing Skills Requirement
All students in the Faculty of Mathematics must satisfy the following Writing Skills Requirement before enrolling in their 2B term:
• A grade of 60 or better on the UW English Language Proficiency Exam (ELPE), or
• Successfully complete the study program offered by the UW Writing Centre, or
• Complete one of the following courses with a grade of at least 60%:
ENGL 109 Introduction to Academic Writing
ENGL 129R Introduction to Written English
ENGL 210E Genres of Technical Communication
ENGL 210F Genres of Business Communication
ESL 102R Introduction to Error Correction in Writing
1. Students who have written and failed ELPE should enrol in the Writing Centre or enrol in one of the above courses rather than attempt ELPE again.
2. Students who arrange a special sitting of the ELPE outside the scheduled dates will be assessed an administrative charge.
3. A completed English Proficiency milestone on a student's academic record will indicate successful completion of this requirement.
4. Students in the Software Engineering program must satisfy this requirement as set down by the Faculty of Engineering (see page 8:3).
5. Students in the Computing and Financial Management program must satisfy this requirement as set down by the Faculty of Arts (see page 7:4).
No-Credit/Overlap Courses
There are some restrictions on course selection for obtaining credit toward a BMath, BCS, or BCFM degree. Before enrolling in a course, students should check the Faculty of Mathematics "No-Credit
List" and "Course Overlap List" (available on the Math Undergraduate website at www.math.uwaterloo.ca/navigation/Current/nocredit_overlap.shtml), to determine whether or not the course will count
towards their BMath, BCS, or BCFM degree. See section 13.4 in "Faculty Policies" on page 10:48 for further details.
Table II – Required Faculty Core Courses – BMath Honours Plans except Mathematics/Chartered Accountancy
MATH 135 (or MATH 145) Algebra
MATH 136 (or MATH 146) Linear Algebra 1
MATH 235 (or MATH 245) Linear Algebra 2
MATH 137 (or MATH 147) Calculus 1
MATH 138 (or MATH 148) Calculus 2
STAT 230 (or STAT 240) Probability
STAT 231 (or STAT 241) Statistics
One of
CS 134 Principles of Computer Science
CS 136 Elementary Algorithm Design and Data Abstraction
One of
CS 125 Introduction to Programming Principles
CS 133 Developing Programming Principles
CS 135 Designing Functional Programs
CS 230 Introduction to Computers and Computer Systems
CS 234 Data Types and Structures
CS 241 Foundations of Sequential Programs
One of
MATH 237 (or MATH 247) Calculus 3
MATH 239 (or MATH 249) Introduction to Combinatorics
1. Refer to individual plan requirements to determine which of MATH 237 (or MATH 247) or MATH 239 (or MATH 249) is required for your plan. Some plans require both courses.
2. The MATH and STAT core courses are offered at two levels: Advanced and Honours. The Advanced courses are more challenging than the Honours courses. The Advanced course numbers are listed in
parentheses in Table II above.
3. Students entering with no prior CS experience may either take the Programming-Basics sequence (CS 125 in their 1A term, followed by CS 134 in their 1B term), or the Functional-First sequence (CS
135 in their 1A term, followed by CS 136 in their 1B term).
Students with ICS3M or equivalent may choose to take the Objects-First sequence (CS 133 in their 1A term, followed by CS 134 in their 1B term), or the Functional-First sequence.
Students with extensive programming experience (ICS4M or equivalent) may take CS 134 in their 1A term followed by CS 241, 230, or 234 in their 1B term.
4. The three algebra and three calculus courses are normally taken in sequence in the 1A, 1B, and 2A terms. The two STAT courses are normally taken in the 2A and 2B terms.
5. Table II applies only to students enrolled in plans leading to the BMath degree, not any other degrees offered through the Faculty of Mathematics. Most requirements in Table II apply to
Mathematics/Chartered Accountancy (MATH 235/245 is an exception). A full set of course requirements is given with the Chartered Accountancy plan.
Responsibility For Meeting Degree Requirements
Students are responsible for being aware of all regulations pertaining to their academic plans. This responsibility includes, submitting a completed "Intention to Graduate - Undergraduate Studies"
form to the Registrar's Office (by the designated date for submission of such forms) during their last academic study term (i.e., the term in which they anticipate completing the requirements for
their degree).
Incompatibility of Full-time Study with Full-time Employment
Students who by choice or necessity work on non-academic activities more than 10 hours per week should, where possible, structure their course/work load so that they can attend fully to their
academic obligations. The Standings and Promotions Committee will not normally grant petitions based on time pressure resulting from employment.
Honours Fallback Provision
Students who satisfy all the conditions below, but do not satisfy the cumulative major average requirement and/or the excluded-units requirement for an Honours degree, will be eligible for a
three-year BMath General degree:
1. all course requirements for a specific BMath Honours or BCS plan;
2. cumulative average (CAV) at least 60%;
3. at most four units excluded;
4. at most 25 units attempted. | {"url":"http://www.ucalendar.uwaterloo.ca/0708/MATH/mathdeg.html","timestamp":"2014-04-19T06:58:54Z","content_type":null,"content_length":"27178","record_id":"<urn:uuid:0cfb1e77-3749-4cb1-aab2-a59f31460fa2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00620-ip-10-147-4-33.ec2.internal.warc.gz"} |
Small note on Quaternion distance metrics
There’s multiple ways to measure distances between unit quaternions (a popular rotation representation in 3D). What’s interesting is that the popular choices are essentially all equivalent.
Polar form
A standard way to build quaternions is using the polar (axis-angle) form
$q = \exp(\frac{1}{2} \theta \mathbf{n}) = \cos(\theta/2) + \sin(\theta/2) (i n_x + j n_y + k n_z)$, where n is the (unit length) axis of rotation, θ is the angle, and i, j and k are the imaginary
basis vectors.
For a rotation in this form, we know how “far” it goes: it’s just the angle θ. Since the real component of q is just cos(θ/2), we can read off the angle as
$\theta(q) = \theta(q, 1) := 2 \arccos(\textrm{real}(q)) = 2 \arccos(q \cdot 1)$
where the dot denotes the quaternion dot product.
This measures, in a sense, how far away the quaternion is from the identity element 1. To get a distance between two unit quaternions q and r, we rotate both of them such that one of them becomes the
identity element. To do this for our pair q, r, we simply multiply both by r’s inverse from the left, and since r is a unit quaternion its inverse and conjugate are the same:
$\theta(q,r) := \theta(r^*q, r^*r) = \theta(r^*q, 1) = 2 \arccos((r^*q) \cdot 1)$
Note that cosine is a monotonic function over the interval we care about, so in any numerical work, there’s basically never the need to actually calculate that arc cosine: instead of checking, say,
whether the angle is less than some maximum error threshold T, we can simple check that the dot product is larger than cos(T/2). If you’re actually taking the arc cosine for anything other than
display purposes, you’re likely doing something wrong.
Dot product
Another way is to use the dot product directly as a distance measure between two quaternions. How does this relate to the angle from the polar form? It’s the same, as we quickly find out when we use
the fact that the dot product is invariant under rotations:
$(q \cdot r) = (r^*q \cdot r^*r) = (r^*q \cdot 1)$
and hence also
$\theta(q,r) = 2 \arccos(q \cdot r)$
So again, whether we minimize the angle between q and r (as measured in the polar form) or maximize the dot product between q and r boils down to the same thing. But there’s one final choice left.
L[2] distance
The third convenient metric is just using the norm of the difference between the two quaternions: $||q-r||$. The question is, can we relate this somehow to the other two? We can, and as is often the
case, it’s easier to work with the square of the norm:
$||q-r||^2 = ||q||^2 - 2 (q \cdot r) + ||r||^2 = 1 - 2 (q \cdot r) + 1 = 2 (1 - q \cdot r)$.
In other words, the distance between two unit quaternions again just boils down to the dot product between them – albeit with a scale and bias this time.
The popular choices of distance metrics between quaternions all boil down to the same thing. The relationships between them are simple enough that it’s easy to convert, say, an exact error bound on
the norm between two quaternions into an exact error bound on the angle of the corresponding rotation. Each of these three representations is the most convenient to use in some context; feel free to
convert back and forth between them for different solvers; they’re all compatible in the sense that their minima will always agree.
UPDATE: As Sam points out in the comments, you need to be careful about the distinction between quaternions and rotations here (I cleared up the language in the article slightly). Each rotation in
3-dimensional real Euclidean space has two representations as a quaternion: the quaternion group double-covers the rotation group. If you want to measure the distances between rotations not
quaternions, you need to use slightly modified metrics (see his comment for details).
1. I’m afraid there’s a serious problem here. Cosine is not monotonic over the interval we care about, because that interval has size 2π, not π.
The « distance » between two rotations cannot be reduced to θ(q,r) = 2acos((q/r)·1) because there would be a discontinuity at 2π: when θ goes beyond π the rotations are actually becoming closer
to each other. Luckily this can be fixed using a simple absolute value: θ(q,r) = 2acos(|(q/r)·1|)
Similarly, the L2 metric is not a proper measure of how far apart two rotations are. You can just use 2(1-|q·r|) instead. (By the way, if you only care about monotonicity, you can just drop the
One last note: some people believe (and some write in books and articles) that enforcing rules such as q.w ≥ 0 when storing or creating quaternions will magically fix the problem. It will not.
□ Note that, in the polar form, we are taking the cosine of θ/2, not θ directly, and similarly all other forms measure half-angles not angles. I’m a bit careless in using the term “rotation”
here though; technically, the metrics defined here are over the quaternions (i.e. spinors), not the rotations they represent (the problem here is that the quaternion group provides a
double-cover of SO(3), so each rotation has two representations as quaternions; if q represents a given rotation, so does -q).
Cosine is monotonic over [0,π], which is enough to cover quaternions up to 2π apart (as measured by the angle in the polar form). There is no discontinuity here! The distance is nonzero
because two rotations 2π apart actually have distinct quaternions; due to the double-cover, the period is indeed 4π. Because of this period, two quaternions can’t be apart by further than 2π
– you can, figuratively speaking, just go the other way round (and indeed our distance metric will slowly decrease, until at 4π we again report a distance of 0). This is not a bug and does
not need “fixing”, because sloppy terminology on my part aside, this post is really (as the title states) about distance metrics on quaternions, not rotations. While you can enforce
2π-periodicity like you mention, this is usually not the right thing to do; slathering quaternion-handling code in neighborhooding operations (whether it be enforcing a nonnegative real part
or something else) usually causes more problems than it solves; when properly used, quaternions double-covering the rotation group is a feature, not a bug.
☆ Sure, if you consider the class of rotations spanning 4π you’re perfectly right in all respects here. I think I just wish it was made clearer to the reader that they must be careful to
use the shortcuts if their set of quaternions conver the more naive 3D rotations rather than the doubly covered ones, then.
2. aside: I’ve always had a pet peeve about the double-coverage thing. p’=qp(1/q), which can be simplified to p’=qpq* with assumption that ||q||=1. The first version all lines thorough the origin
(excluded) represent the same orientation. The second version becomes a scaled (by ||q||^2) orientation if you ignore the restriction. | {"url":"http://fgiesen.wordpress.com/2013/01/07/small-note-on-quaternion-distance-metrics/","timestamp":"2014-04-18T00:12:34Z","content_type":null,"content_length":"63193","record_id":"<urn:uuid:b4fe7db0-d684-4c46-a3eb-a2e3bee46370>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00524-ip-10-147-4-33.ec2.internal.warc.gz"} |
Non-Isomorphic Graphs with Cospectral Symmetric Powers
The symmetric $m$-th power of a graph is the graph whose vertices are $m$-subsets of vertices and in which two $m$-subsets are adjacent if and only if their symmetric difference is an edge of the
original graph. It was conjectured that there exists a fixed $m$ such that any two graphs are isomorphic if and only if their $m$-th symmetric powers are cospectral. In this paper we show that given
a positive integer $m$ there exist infinitely many pairs of non-isomorphic graphs with cospectral $m$-th symmetric powers. Our construction is based on theory of multidimensional extensions of
coherent configurations.
Full Text: | {"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v16i1r120","timestamp":"2014-04-19T06:59:20Z","content_type":null,"content_length":"14757","record_id":"<urn:uuid:d44b9d6d-6a50-40b1-9341-f87c18b82225>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches
Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches
ISBN: 978-0-470-04533-6
558 pages
June 2006
A bottom-up approach that enables readers to master and apply the latest techniques in state estimation
This book offers the best mathematical approaches to estimating the state of a general system. The author presents state estimation theory clearly and rigorously, providing the right amount of
advanced material, recent research results, and references to enable the reader to apply state estimation techniques confidently across a variety of fields in science and engineering.
While there are other textbooks that treat state estimation, this one offers special features and a unique perspective and pedagogical approach that speed learning:
* Straightforward, bottom-up approach begins with basic concepts and then builds step by step to more advanced topics for a clear understanding of state estimation
* Simple examples and problems that require only paper and pen to solve lead to an intuitive understanding of how theory works in practice
* MATLAB(r)-based source code that corresponds to examples in the book, available on the author's Web site, enables readers to recreate results and experiment with other simulation setups and
Armed with a solid foundation in the basics, readers are presented with a careful treatment of advanced topics, including unscented filtering, high order nonlinear filtering, particle filtering,
constrained state estimation, reduced order filtering, robust Kalman filtering, and mixed Kalman/H? filtering.
Problems at the end of each chapter include both written exercises and computer exercises. Written exercises focus on improving the reader's understanding of theory and key concepts, whereas computer
exercises help readers apply theory to problems similar to ones they are likely to encounter in industry. With its expert blend of theory and practice, coupled with its presentation of recent
research results, Optimal State Estimation is strongly recommended for undergraduate and graduate-level courses in optimal control and state estimation theory. It also serves as a reference for
engineers and science professionals across a wide array of industries.
See More
List of algorithms.
1 Linear systems theory.
1.1 Matrix algebra and matrix calculus.
1.1.1 Matrix algebra.
1.1.2 The matrix inversion lemma.
1.1.3 Matrix calculus.
1.1.4 The history of matrices.
1.2 Linear systems.
1.3 Nonlinear systems.
1.4 Discretization.
1.5 Simulation.
1.5.1 Rectangular integration.
1.5.2 Trapezoidal integration.
1.5.3 RungeKutta integration.
1.6 Stability.
1.6.1 Continuous-time systems.
1.6.2 Discretetime systems.
1.7 Controllability and observability.
1.7.1 Controllability.
1.7.2 Observability.
1.7.3 Stabilizability and detectability.
1.8 Summary.
Probability theory.
2.1 Probability.
2.2 Random variables.
2.3 Transformations of random variables.
2.4 Multiple random variables.
2.4.1 Statistical independence.
2.4.2 Multivariate statistics.
2.5 Stochastic Processes.
2.6 White noise and colored noise.
2.7 Simulating correlated noise.
2.8 Summary.
3 Least squares estimation.
3.1 Estimation of a constant.
3.2 Weighted least squares estimation.
3.3 Recursive least squares estimation.
3.3.1 Alternate estimator forms.
3.3.2 Curve fitting.
3.4 Wiener filtering.
3.4.1 Parametric filter optimization.
3.4.2 General filter optimization.
3.4.3 Noncausal filter optimization.
3.4.4 Causal filter optimization.
3.4.5 Comparison.
3.5 Summary.
4 Propagation of states and covariances.
4.1 Discretetime systems.
4.2 Sampled-data systems.
4.3 Continuous-time systems.
4.4 Summary.
5 The discrete-time Kalman filter.
5.1 Derivation of the discrete-time Kalman filter.
5.2 Kalman filter properties.
5.3 One-step Kalman filter equations.
5.4 Alternate propagation of covariance.
5.4.1 Multiple state systems.
5.4.2 Scalar systems.
5.5 Divergence issues.
5.6 Summary.
6 Alternate Kalman filter formulations.
6.1 Sequential Kalman filtering.
6.2 Information filtering.
6.3 Square root filtering.
6.3.1 Condition number.
6.3.2 The square root time-update equation.
6.3.3 Potter's square root measurement-update equation.
6.3.4 Square root measurement update via triangularization.
6.3.5 Algorithms for orthogonal transformations.
6.4 U-D filtering.
6.4.1 U-D filtering: The measurement-update equation.
6.4.2 U-D filtering: The time-update equation.
6.5 Summary.
7 Kalman filter generalizations.
7.1 Correlated process and measurement noise.
7.2 Colored process and measurement noise.
7.2.1 Colored process noise.
7.2.2 Colored measurement noise: State augmentation.
7.2.3 Colored measurement noise: Measurement differencing.
7.3 Steady-state filtering.
7.3.1 a-P filtering.
7.3.2 a-P-y filtering.
7.3.3 A Hamiltonian approach to steady-state filtering.
7.4 Kalman filtering with fading memory.
7.5 Constrained Kalman filtering.
7.5.1 Model reduction.
7.5.2 Perfect measurements.
7.5.3 Projection approaches.
7.5.4 A pdf truncation approach.
7.6 Summary.
8 The continuous-time Kalman filter.
8.1 Discrete-time and continuous-time white noise.
8.1.1 Process noise.
8.1.2 Measurement noise.
8.1.3 Discretized simulation of noisy continuous-time systems.
8.2 Derivation of the continuous-time Kalman filter.
8.3 Alternate solutions to the Riccati equation.
8.3.1 The transition matrix approach.
8.3.2 The Chandrasekhar algorithm.
8.3.3 The square root filter.
8.4 Generalizations of the continuous-time filter.
8.4.1 Correlated process and measurement noise.
8.4.2 Colored measurement noise
8.5 The steady-state continuous-time Kalman filter
8.5.1 The algebraic Riccati equation.
8.5.2 The Wiener filter is a Kalman filter.
8.5.3 Duality.
8.6 Summary.
9 Optimal smoothing.
9.1 An alternate form for the Kalman filter.
9.2 Fixed-point smoothing.
9.2.1 Estimation improvement due to smoothing.
9.2.2 Smoothing constant states.
9.3 Fixed-lag smoothing.
9.4 Fixed-interval smoothing.
9.4.1 Forward-backward smoothing.
9.4.2 RTS smoothing.
9.5 Summary.
10 Additional topics in Kalman filtering.
10.1 Verifying Kalman filter performance.
10.2 Multiple-model estimation.
10.3 Reduced-order Kalman filtering.
10.3.1 Anderson's approach to reduced-order filtering.
10.3.2 The reduced-order Schmidt-Kalman filter.
10.4 Robust Kalman filtering.
10.5 Delayed measurements and synchronization errors.
10.5.1 A statistical derivation of the Kalman filter.
10.5.2 Kalman filtering with delayed measurements.
10.6 Summary.
PART III THE H, FILTER.
11 The H, filter.
11.1 Introduction.
11.1.1 An alternate form for the Kalman filter.
11.1.2 Kalman filter limitations.
11.2 Constrained optimization.
11.2.1 Static constrained optimization.
11.2.2 Inequality constraints.
11.2.3 Dynamic constrained optimization.
11.3 A game theory approach to H, filtering.
11.3.1 Stationarity with respect to xo and wk.
11.3.2 Stationarity with respect to 2 and y.
11.3.3 A comparison of the Kalman and H, filters.
11.3.4 Steady-state H, filtering.
11.3.5 The transfer function bound of the H, filter.
11.4 The continuous-time H, filter.
11.5 Transfer function approaches.
11.6 Summary.
12 Additional topics in H, filtering.
12.1 Mixed KalmanIH, filtering.
12.2 Robust Kalman/H, filtering.
12.3 Constrained H, filtering.
12.4 Summary.
13 Nonlinear Kalman filtering.
13.1 The linearized Kalman filter.
13.2 The extended Kalman filter.
13.2.1 The continuous-time extended Kalman filter.
13.2.2 The hybrid extended Kalman filter.
13.2.3 The discrete-time extended Kalman filter.
13.3 Higher-order approaches.
13.3.1 The iterated extended Kalman filter.
13.3.2 The second-order extended Kalman filter.
13.3.3 Other approaches.
13.4 Parameter estimation.
13.5 Summary.
14 The unscented Kalman filter.
14.1 Means and covariances of nonlinear transformations.
14.1.1 The mean of a nonlinear transformation.
14.1.2 The covariance of a nonlinear transformation.
14.2 Unscented transformations.
14.2.1 Mean approximation.
14.2.2 Covariance approximation.
14.3 Unscented Kalman filtering.
14.4 Other unscented transformations.
14.4.1 General unscented transformations.
14.4.2 The simplex unscented transformation.
14.4.3 The spherical unscented transformation.
14.5 Summary.
15 The particle filter.
15.1 Bayesian state estimation.
15.2 Particle filtering.
15.3 Implementation issues.
15.3.1 Sample impoverishment.
15.3.2 Particle filtering combined with other filters.
15.4 Summary.
Appendix A: Historical perspectives.
Appendix B: Other books on Kalman filtering.
Appendix C: State estimation and the meaning of life.
See More
DAN SIMON, PhD, is an Associate Professor at Cleveland State University. Prior to this appointment, Dr. Simon spent fourteen years working for such firms as Boeing, TRW, and several smaller
See More
"This book is obviously written with care and reads very easily. A very valuable resource for students, teachers, and practitioners…highly recommended." (
, February 2007)
"The dozens of helpful step-by-step examples, visual illustrations, and lists of exercises proposed at the end of each chapter significantly facilitate a reader's understanding of the book's
content." (Computing Reviews.com, December 4, 2006)
See More | {"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470045337.html","timestamp":"2014-04-17T10:17:29Z","content_type":null,"content_length":"51914","record_id":"<urn:uuid:3ca38383-f928-4822-a703-4942535a5d63>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00379-ip-10-147-4-33.ec2.internal.warc.gz"} |
Relationship of Bousfield Classes of Morava K-theories
up vote 1 down vote favorite
Suppose we have $\langle K(n)\rangle$ and $\langle K(n-1) \rangle$ for some fixed prime $p$. do we know whether or not $\langle K(n) \rangle \geq \langle K(n-1) \rangle$ or $\langle K(n-1) \rangle \
geq \langle K(n) \rangle$?
It seems to me that the first relation is accurate if we somehow restricted ourselves to finite spectra (from Ravenel's "Localization at Certain Periodic Homology Theories"). However, we obviously
aren't doing that (right?).
It also seems to me that such a relationship would be incredibly problematic, mainly because we'd have the following. Assume the first relationship held and we restrict ourselves to operating within
the distributive sub-lattice of the Bousfield lattice. Then $\langle K(n)\rangle \wedge\langle K(n-1)\rangle=\langle K(n-1)\rangle = \langle 0\rangle$ and this would seem to me to be incredibly
problematic. Is this an accurate assessment of the situation, or have I missed something?
add comment
2 Answers
active oldest votes
It is standard that $K(n)\wedge K(m)=0$ for $n\neq m$. One way to think about this is as follows: if $E$ and $F$ are complex oriented ring spectra then the corresponding formal
group laws become isomorphic over $\pi_*(E\wedge F)$, but it is easy to see that formal group laws of different heights can only become isomorphic over the zero ring.
On the other hand, as $K(n)$ is a ring spectrum we have maps $K(n)\xrightarrow{\eta}K(n)\wedge K(n) \xrightarrow{\mu} K(n)$ whose composite is the identity, so $K(n)\wedge K(n)$ is
up vote 12 down This means that we cannot have $\langle K(n)\rangle\leq\langle K(m)\rangle$ unless $m=n$. Indeed, if $m\neq n$ then we saw that $K(n)$ is $K(m)$-acyclic. If we had $\langle K(n)\
vote accepted rangle\leq\langle K(m)\rangle$ we could conclude that $K(n)$ was also $K(n)$-acyclic, or in other words $K(n)\wedge K(n)=0$, which is false.
It is true that when $n\leq m$ we have $$\{K(n)-\text{acyclic finite spectra}\}\supseteq\{K(m)-\text{acyclic finite spectra}\}$$ which might suggest that $\langle K(n)\rangle\leq\
langle K(m)\rangle$, but that is only a suggestion and it does not work out to be true.
Thanks so much! – Jon Beardsley Feb 23 '12 at 21:22
add comment
Neil's answer is great. I just wanted to add that in fact the Bousfield classes of the Morava $K$-theories are minimal non-zero classes in the Bousfield lattice. In particular, $\langle K
up vote 3 (n) \rangle$ and $\langle K(n-1) \rangle$ are not comparable.
down vote
Thanks John. So we might say something like $\langle K(n)\rangle$ are the atoms of the Bousfield lattice? – Jon Beardsley Feb 23 '12 at 21:59
Sorry that probably seems like a completely random question. I'm trying to look at either BA or DL from the point of view of order theory and that kind of stuff and figure out things
like what are the prime ideals and so forth. In a Boolean algebra an element generates a prime ideal if and only if its complement is an atom. So, it appears that there are prime ideals
of the form ↓a⟨K(n)⟩. – Jon Beardsley Feb 23 '12 at 22:04
Look at my paper with Hovey, "The structure of the Bousfield lattice." We don't address prime ideals, but we discuss a conjectured description of the atoms for BA. (The atoms for the
whole Bousfield lattice could be more complicated. The Brown-Comenetz dual of the sphere might be one.) – John Palmieri Feb 24 '12 at 15:49
Thankyou! I'm looking at it now. – Jon Beardsley Feb 24 '12 at 21:18
add comment
Not the answer you're looking for? Browse other questions tagged stable-homotopy stable-homotopy-category lattices or ask your own question. | {"url":"http://mathoverflow.net/questions/89330/relationship-of-bousfield-classes-of-morava-k-theories","timestamp":"2014-04-17T09:39:47Z","content_type":null,"content_length":"60143","record_id":"<urn:uuid:f6a1671a-1cc3-4545-a736-1d297f66ead2>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00405-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sphere Circumference, Surface Area and Volume
This simple program computes the basic parameters
of a sphere given its radius (
) in any linear units.
Sphere Radius = 1 unit
Circumference = 6.2831853071796 units
Surface Area = 12.566370614359 units^2
Volume = 4.1887902047864 units^3
The circumference of a sphere is:
The surface area of a sphere is:
The volume of a sphere is:
A Few Basic Definitions Applied
Regardless of the size (scale) of the circle, the ratio (
1 radian = That angle at which the length of the corresponding arc of a circle is equal to its radius.
The relationship between degrees and radians is:
Radians / Degrees =
So, 1 radian = 180/
© 2014 - Jay Tanner - PHP Science Labs - v1.5 | {"url":"http://www.neoprogrammics.com/spheres/circumference_area_volume.php","timestamp":"2014-04-17T00:49:17Z","content_type":null,"content_length":"3490","record_id":"<urn:uuid:436f8e9d-205b-467d-b6f0-a366aa554f77>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
Propagation of transient waves in a random medium.
ASA 126th Meeting Denver 1993 October 4-8
5aPA4. Propagation of transient waves in a random medium.
Alan R. Wenzel
NASA Langley Res. Ctr., MS 460, Hampton, VA 23681-0001
A theoretical analysis of the propagation of transient scalar waves in a one-dimensional random medium is presented. The index of refraction of the medium is assumed to deviate only slightly from
unity, which allows the analysis to be carried out with the aid of a perturbation method. The specific approach adopted here combines a renormalization technique with a travel-time-corrected
averaging procedure called asynchronous ensemble averaging. A general expression, valid for an arbitrary initial disturbance, is obtained for the variance of the wave. From that result, an expression
for the variance is derived for the special case in which the initial disturbance has the form of a ramp function with arbitrary slope. These results show that the variance of the wave is directly
proportional to the variance of the refractive index of the medium, but is only weakly dependent on the propagation path length. It is also found that, as the slope of the ramp function decreases,
the wave variance decreases as well. The presentation concludes with some observations on the relevance of these results to the problem of sonic-boom propagation in the atmosphere. [Research
supported by NASA.] | {"url":"http://www.auditory.org/asamtgs/asa93dnv/5aPA/5aPA4.html","timestamp":"2014-04-19T14:30:57Z","content_type":null,"content_length":"1796","record_id":"<urn:uuid:00743c63-9af4-4877-a3d5-34246996e7eb>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00283-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cylinders and Elephants
Date: 05/30/97 at 00:47:44
From: philip brand
Subject: Volume and shapes of containers
Why are cans always manufactured as cylinders? I think it has
something to do with the amount of volume it can hold.
Are the volume of an animal and the surface area of its feet related?
Date: 05/30/97 at 05:40:55
From: Doctor Mitteldorf
Subject: Re: volume and shapes of containers
Dear Phillip,
It's true that when the shape of a can is being engineered, the ratio
of the surface area to the volume is one of the things considered.
However, there are other things to consider as well: attracting
attention on the store shelf, for one. And ease of manufacture is
On the subject of ease of manufacture: even though a cylindrical
surface looks "curved" and we call it "curved", it can made from a
flat piece of metal. Since rolled steel (or aluminum) comes in
sheets, this is important. Imagine how much more complicated it would
be to manufacture a sphere, which is curved in two dimensions, and for
which you can't just start with a flat sheet. In fact, you have to
start with a spherical piece of exactly the right radius.
The foot size that an animal needs is proportional to its weight which
is proportional to its volume, since almost all animals are roughly
the density of water. If you just made a copy of a 1-foot cat the
size of a ten-foot elephant, you'd find that the elephant weighed 1000
times as much but its feet were only 100 times the area. The feet
wouldn't be able to hold the weight.
-Doctor Mitteldorf, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/59006.html","timestamp":"2014-04-17T18:35:46Z","content_type":null,"content_length":"6601","record_id":"<urn:uuid:0acc3313-3032-4a30-8623-8e3e10b9ce87>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00493-ip-10-147-4-33.ec2.internal.warc.gz"} |
Miscellaneous Keywords
Next: File I/O Keywords Up: Technology File Attributes Previous: Font Assignments Contents Index
Below are a few keywords that don't seem to fit in the other categories.
RoundFlashSides sides
This keyword specifies the number of sides to use in the round objects created. The sides must be between 8 and 150.
Default: 20
BoxLineStyle style
This sets the linestyle of the boxes used in electrical mode, and in physical mode for some highlighting purposes such as zooming with button 3. The style is an integer whose binary value is
replicated to form the lines used in the box.
Default: e38 (hex)
Constrain45 [y|n]
When `y' is given, vertices entered to new polygons and wires will be constrained to form angles at multiples of 45 degrees with existing vertices. The rotations in the spin command are
restricted to multiples of 45 degrees.
Default: n
Stephen R. Whiteley 2012-04-01 | {"url":"http://wrcad.com/manual/xicmanual/node636.html","timestamp":"2014-04-21T09:36:05Z","content_type":null,"content_length":"3517","record_id":"<urn:uuid:76588c9f-0b15-4502-a47c-1af5a9742a92>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00496-ip-10-147-4-33.ec2.internal.warc.gz"} |