Frequency analysis of gradient estimators in volume rendering
Results 1 - 10 of 41
- In IEEE Symposium on Volume Visualization , 1998
"... Although direct volume rendering is a powerful tool for visualizing complex structures within volume data, the size and complexity of the parameter space controlling the rendering process makes
generating an informative rendering challenging. In particular, the specification of the transfer function ..."
Cited by 244 (7 self)
Add to MetaCart
Although direct volume rendering is a powerful tool for visualizing complex structures within volume data, the size and complexity of the parameter space controlling the rendering process makes
generating an informative rendering challenging. In particular, the specification of the transfer function --- the mapping from data values to renderable optical properties --- is frequently a
time-consuming and unintuitive task. Ideally, the data being visualized should itself suggest an appropriate transfer function that brings out the features of interest without obscuring them with
elements of little importance. We demonstrate that this is possible for a large class of scalar volume data, namely that where the regions of interest are the boundaries between different materials.
A transfer function which makes boundaries readily visible can be generated from the relationship between three quantities: the data value and its first and second directional derivatives along the
gradient direction. ...
- IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS , 2001
"... Accurately and automatically conveying the structure of a volume model is a problem not fully solved by existing volume rendering approaches. Physics-based volume rendering approaches create
images which may match the appearance of translucent materials in nature, but may not embody important struct ..."
Cited by 158 (14 self)
Add to MetaCart
Accurately and automatically conveying the structure of a volume model is a problem not fully solved by existing volume rendering approaches. Physics-based volume rendering approaches create images
which may match the appearance of translucent materials in nature, but may not embody important structural details. Transfer function approaches allow flexible design of the volume appearance, but
generally require substantial hand tuning for each new data set in order to be effective. We introduce the volume illustration approach, combining the familiarity of a physics-based illumination
model with the ability to enhance important features using non-photorealistic rendering techniques. Since features to be enhanced are defined on the basis of local volume characteristics rather than
volume sample value, the application of volume illustration techniques requires less manual tuning than the design of a good transfer function. Volume illustration provides a flexible unified
framework for enhancing structural perception of volume models through the amplification of features and the addition of illumination effects.
- Proceedings of the IEEE , 2002
"... This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained
in different ages, thereby putting the techniques currently used in signal and image processing into histo ..."
Cited by 61 (0 self)
Add to MetaCart
This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in
different ages, thereby putting the techniques currently used in signal and image processing into historical perspective. A summary of the insights and recommendations that follow from relatively
recent theoretical as well as experimental studies concludes the presentation. Keywords—Approximation, convolution-based interpolation, history, image processing, polynomial interpolation, signal
processing, splines. “It is an extremely useful thing to have knowledge of the true origins of memorable discoveries, especially those that have been found not by accident but by dint of meditation.
It is not so much that thereby history may attribute to each man his own discoveries and others should be encouraged to earn like commendation, as that the art of making discoveries should be
extended by considering noteworthy examples of it. ” 1 I.
- IEEE Transactions on Visualization and Computer Graphics , 1997
"... We describe a new method for analyzing, classifying, and evaluating filters that can be applied to interpolation filters as well as to arbitrary derivative filters of any order. Our analysis is
based on the Taylor series expansion of the convolution sum. Our analysis shows the need and derives the m ..."
Cited by 60 (6 self)
Add to MetaCart
We describe a new method for analyzing, classifying, and evaluating filters that can be applied to interpolation filters as well as to arbitrary derivative filters of any order. Our analysis is based
on the Taylor series expansion of the convolution sum. Our analysis shows the need and derives the method for the normalization of derivative filter weights. Under certain minimal restrictions of the
underlying function, we are able to compute tight absolute error bounds of the reconstruction process. We demonstrate the utilization of our methods to the analysis of the class of cubic BC-spline
filters. As our technique is not restricted to interpolation filters, we are able to show that the Catmull-Rom spline filter and its derivative are the most accurate reconstruction and derivative
filters, respectively, among the class of BC-spline filters. We also present a new derivative filter which features better spatial accuracy than any derivative BC-spline filter, and is optimal within
our fra...
- In IEEE Vol. Vis , 1998
"... Figure 1: Shaded, volume rendered spheres stored with two values per voxel: a value indicating the distance to the closest surface point; and a binary intensity value. The sphere in a) has
radius 30 voxels and is stored in an array of size. The spheres in b), c), and d) have radii 3 voxels, 2 voxels ..."
Cited by 59 (3 self)
Add to MetaCart
Figure 1: Shaded, volume rendered spheres stored with two values per voxel: a value indicating the distance to the closest surface point; and a binary intensity value. The sphere in a) has radius 30
voxels and is stored in an array of size. The spheres in b), c), and d) have radii 3 voxels, 2 voxels and 1.5 voxels respectively and are stored in arrays of size. The surface normal used in surface
shading was calculated using a 6-point central difference operator on the distance values. Remarkably smooth shading can be achieved for these low resolution data volumes because the function of the
distance-to-closest surface varies smoothly across surfaces. (See color plate.) High quality rendering and physics-based modeling in volume graphics have been limited because intensity-based
volumetric data do not represent surfaces well. High spatial frequencies due to abrupt intensity changes at object surfaces result in jagged or terraced surfaces in rendered images. The use of a
distance-to-closestsurface function to encode object surfaces is proposed. This function varies smoothly across surfaces and hence can be accurately reconstructed from sampled data. The zero-value
iso-surface of the distance map yields the object surface and the derivative of the distance map yields the surface normal. Examples of rendered images are presented along with a new method for
calculating distance maps from sampled binary data.
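The normal-estimation step described here (central differences on a sampled distance map) is easy to sketch; the snippet below is only an illustration of the idea, not code from the cited paper, and the grid size and sphere radius are made up:

```python
import numpy as np

def normals_from_distance_field(d, spacing=1.0):
    # Central differences along each of the three axes (two neighbors per axis,
    # six samples in total) give the gradient of the distance map; normalized,
    # it serves as the surface normal for shading.
    gx, gy, gz = np.gradient(d, spacing)
    g = np.stack([gx, gy, gz], axis=-1)
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.clip(norm, 1e-12, None)

# Example: signed distance to a sphere of radius 10 on a 32^3 grid.
ax = np.arange(32) - 15.5
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
d = np.sqrt(X**2 + Y**2 + Z**2) - 10.0
n = normals_from_distance_field(d)
print(n.shape)   # (32, 32, 32, 3); the normals vary smoothly across the surface
```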
- IN PROCEEDINGS OF THE 2000 IEEE SYMPOSIUM ON VOLUME VISUALIZATION , 2000
"... This paper evaluates and compares four volume rendering algorithms that have become rather popular for rendering datasets described on uniform rectilinear grids: raycasting, splatting,
shear-warp, and hardware-assisted 3D texture-mapping. In order to assess both the strengths and the weaknesses of t ..."
Cited by 25 (2 self)
Add to MetaCart
This paper evaluates and compares four volume rendering algorithms that have become rather popular for rendering datasets described on uniform rectilinear grids: raycasting, splatting, shear-warp,
and hardware-assisted 3D texture-mapping. In order to assess both the strengths and the weaknesses of these algorithms in a wide variety of scenarios, a set of real-life benchmark datasets with
different characteristics was carefully selected. In the rendering, all algorithm-independent image synthesis parameters, such as viewing matrix, transfer functions, and optical model, were kept
constant to enable a fair comparison of the rendering results. Both image quality and computational complexity were evaluated and compared, with the aim of providing both researchers and
practitioners with guidelines on which algorithm is most suited in which scenario. Our analysis also indicates the current weaknesses in each algorithm’s pipeline, and possible solutions to these as
well as pointers for future research are offered.
, 1999
"... Real-time visualization of large volume datasets demands high performance computation, pushing the storage, processing, and data communication requirements to the limits of current technology.
General purpose parallel processors have been used to visualize moderate size datasets at interactive frame ..."
Cited by 21 (2 self)
Add to MetaCart
Real-time visualization of large volume datasets demands high performance computation, pushing the storage, processing, and data communication requirements to the limits of current technology.
General purpose parallel processors have been used to visualize moderate size datasets at interactive frame rates; however, the cost and size of these supercomputers inhibits the widespread use for
real-time visualization. This paper surveys several special purpose architectures that seek to render volumes at interactive rates. These specialized visualization accelerators have cost,
performance, and size advantages over parallel processors. All architectures implement ray casting using parallel and pipelined hardware. We introduce a new metric that normalizes performance to
compare these architectures. The architectures included in this survey are VOGUE, VIRIM, Array Based Ray Casting, EM-Cube, and VIZARD II. We also discuss future applications of special purpose
- IEEE Transactions on Visualization and Computer Graphics , 1996
"... Reconstruction is prerequisite whenever a discrete signal needs to be resampled as a result of transformation such as texture mapping, image manipulation, volume slicing, and rendering. We
present a new method for the characterization and measurement of reconstruction error in spatial domain. Our ..."
Cited by 20 (3 self)
Add to MetaCart
Reconstruction is prerequisite whenever a discrete signal needs to be resampled as a result of transformation such as texture mapping, image manipulation, volume slicing, and rendering. We present a
new method for the characterization and measurement of reconstruction error in spatial domain. Our method uses the Classical Shannon's Sampling Theorem as a basis to develop error bounds. We use this
formulation to provide, for the first time, an efficient way to guarantee an error bound at every point by varying the size of the reconstruction filter. We go further to support position-adaptive
reconstruction and data-adaptive reconstruction which adjust filter size to the location of reconstruction point and to the data values in its vicinity. We demonstrate the effectiveness of our
methods with 1D signals, 2D signals (images), and 3D signals (volumes). 1 INTRODUCTION. Reconstruction is the process of recovering a
- Proc. of IEEE/ACM SIGGRAPH Volume visualization and graphics symposium 2000 , 2000
"... Ideal reconstruction filters, for function or arbitrary derivative reconstruction, have to be bounded in order to be practicable since they are infinite in their spatial extent. This can be
accomplished by multiplying them with windowing functions. In this paper we discuss and assess the quality of ..."
Cited by 17 (7 self)
Add to MetaCart
Ideal reconstruction filters, for function or arbitrary derivative reconstruction, have to be bounded in order to be practicable since they are infinite in their spatial extent. This can be
accomplished by multiplying them with windowing functions. In this paper we discuss and assess the quality of commonly used windows and show that most of them are unsatisfactory in terms of numerical
accuracy. The best performing windows are Blackman, Kaiser and Gaussian windows. The latter two are particularly useful since both have a parameter to
control their shape, which, on the other hand, requires finding appropriate values for these parameters. We show how to derive optimal parameter values for Kaiser and Gaussian windows using a Taylor
series expansion of the convolution sum. Optimal values for function and first derivative reconstruction for window widths of two, three, four and five are presented explicitly. Keywords: ideal
reconstruction, wind...
, 1997
"... Splatting is a popular direct volume rendering algorithm. However, the algorithm does not correctly render cases where the volume sampling rate is higher than the image sampling rate (e.g. more
than one voxel maps into a pixel). This situation arises with orthographic projections of high-resolution ..."
Cited by 14 (1 self)
Add to MetaCart
Splatting is a popular direct volume rendering algorithm. However, the algorithm does not correctly render cases where the volume sampling rate is higher than the image sampling rate (e.g. more than
one voxel maps into a pixel). This situation arises with orthographic projections of high-resolution volumes, as well as with perspective projections of volumes of any resolution. The result is
potentially severe spatial and temporal aliasing artifacts. Some volume ray casting algorithms avoid these artifacts by employing reconstruction kernels which vary in width as rays diverge. Unlike
ray casting algorithms, existing splatting algorithms do not have an equivalent mechanism for avoiding these artifacts. In this paper we propose such a mechanism, which delivers high-quality splatted
images and has the potential for a very efficient hardware implementation.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=143127","timestamp":"2014-04-19T20:08:27Z","content_type":null,"content_length":"42283","record_id":"<urn:uuid:e78f4546-36b7-426a-9ba2-9c71c54541dd>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus Max Profit
Posted by Ronny on Thursday, August 2, 2012 at 2:34pm.
To make pom poms in our school colors, we will have expenses of $100 to rent the Acme PomPom Plant, and then $0.25 per pom pom in materials. We believe that we can sell 500 pom poms if we charge
$1.50. Assume the number sold is a linear function of the price. How much should we charge to maximize profit?
I have done this problem many times but I cannot find max profit. I keep finding a max income. I am lost and don't know what to do, please help!
• Calculus Max Profit - MathMate, Friday, August 3, 2012 at 10:51am
"Assume the number sold is a linear function of the price"
but no information is given on the price elasticity, so we will assume, in general, an increase in price of $1 will increase sales by m units.
Under a normal supply-demand curve, m is necessarily negative, of the order of -200 or so.
With that in mind, and knowing that (500, 1.50) is a point on the line of sales versus price, we construct the sales(y)-price(x) relation as
y = m(x - 1.5) + 500
therefore, at a price of x, we expect sales of m(x - 1.5) + 500.
Total revenue, R = xy = x(m(x - 1.5) + 500)
Total profit
P=xy - cost
=xy - (0.25x+100)
=x(m(x-1.5)+500) - (0.25x+100)
To get the maximum profit, we differentiate profit with respect to price, and equate to zero to find the optimum price:
dp/dx = m*x + m*(x-1.5) + 1999/4 = 0
Solve for x:
x = (1.5m - 1999/4)/(2m) = 0.75 - 499.75/(2m)
If the price elasticity m = -100, x ≈ $3.25
if m = -200, x ≈ $2.00
if m = -300, x ≈ $1.58
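For anyone who wants to double-check this algebra, here is a rough sympy sketch (not part of the original reply). It uses the same cost expression as above, 0.25x + 100; if the $0.25 materials cost is instead applied per pom pom sold, the cost term would be 0.25*y + 100 and the optimum shifts a little.

```python
from sympy import symbols, diff, solve, Rational

x, m = symbols('x m')
sales = m*(x - Rational(3, 2)) + 500            # sales-vs-price line through (1.50, 500)
profit = x*sales - (Rational(1, 4)*x + 100)     # same cost term as in the reply above

x_opt = solve(diff(profit, x), x)[0]            # optimum price as a function of m
for slope in (-100, -200, -300):
    print(slope, float(x_opt.subs(m, slope)))   # about 3.25, 2.00, 1.58
```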
|
{"url":"http://www.jiskha.com/display.cgi?id=1343932464","timestamp":"2014-04-20T06:30:02Z","content_type":null,"content_length":"9638","record_id":"<urn:uuid:7320d289-4cde-455a-ad1d-95e7a2d841d7>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding minimum distance between two lines
December 5th 2011, 04:51 PM
Finding minimum distance between two lines
I need to find the points on two lines in 4 dimensions that are closest to each other. How would I go about doing that?
December 5th 2011, 05:19 PM
Re: Finding minimum distance between two lines
What are the equations of the lines?
December 9th 2011, 11:03 AM
Re: Finding minimum distance between two lines
The first line is all scalar multiples of (1,2,0,0).
The second line is (1,0,-2,3) + all scalar multiples of (-1,1,2,-1).
The answer I got was the points (3/7,6/7,0,0) and (-1/7,8/7,2/7,5/7). The distance I got was one.
Is this correct?
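One way to check an answer like this is to minimize the squared distance directly. The sketch below (not part of the original thread; it assumes numpy is available) sets up the two lines from the equations given above and solves the resulting 2x2 linear system for the closest points:

```python
import numpy as np

d1 = np.array([1.0, 2.0, 0.0, 0.0])      # direction of the first line (through the origin)
p2 = np.array([1.0, 0.0, -2.0, 3.0])     # point on the second line
d2 = np.array([-1.0, 1.0, 2.0, -1.0])    # direction of the second line

# Closest points: p(t) = t*d1 and q(s) = p2 + s*d2.  Setting the derivatives of
# |p(t) - q(s)|^2 with respect to t and s to zero gives a 2x2 linear system.
A = np.array([[d1 @ d1, -d1 @ d2],
              [-d1 @ d2, d2 @ d2]])
b = np.array([d1 @ p2, -d2 @ p2])
t, s = np.linalg.solve(A, b)

closest1 = t * d1
closest2 = p2 + s * d2
print(closest1, closest2, np.linalg.norm(closest1 - closest2))
```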
|
{"url":"http://mathhelpforum.com/advanced-algebra/193524-finding-minimum-distance-between-two-lines-print.html","timestamp":"2014-04-17T10:28:42Z","content_type":null,"content_length":"4149","record_id":"<urn:uuid:dc2972e6-04bf-4286-9048-ec41a15f1620>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1 Introduction
Presumably you have heard of the first law of motion, which says that a free particle moves in a straight line at a uniform velocity. That’s true, but in order to make it useful, we need to be able
to recognize straight paths and distinguish them from non-straight paths.
Tangential remark: If you think about things in spacetime, both parts of the first law of motion – the “straight line” part and the “uniform velocity” part – turn out to be exactly the same
thing. A uniform velocity is a straight line in spacetime, nothing more, nothing less. A discussion of this, along with an interactive diagram, can be found in reference 1.
Also, you have probably heard something about general relativity, including the idea that gravitation is explained by the “curvature” of spacetime. The purpose of this document is to explain some of
the important details such as the direction in which the space is curved, how much it is curved, and how this produces the effects we call gravitation.
It would be helpful to have some prior understanding of what we mean by spacetime, as discussed in reference 1 and elsewhere.
* Contents
1 Introduction
2 Masking Tape is As Straight As Can Be
3 Two Models of Gravitation
4 Spacetime Curvature
5 Clocks and Timing
6 A Real-World Application: Plumbing
7 Parallel Transport of Vectors
8 How To Fabricate Darts
9 References
2 Masking Tape is As Straight As Can Be
Masking tape has the wonderful property that it is very non-stretchy, as you can verify by trying to stretch a piece. (If you have some stretchy tape, set it aside and get some non-stretchy tape.)
Next, note that it has definitely nonzero width. It is non-stretchy across its width, and also across innumerable diagonals, so it can hold itself straight, the way a triangular truss holds itself
rigid, as in the bridge in figure 1.
Figure 2 shows the kind of cross-bracing we are talking about. Lines drawn on the tape cannot stretch, and these define triangles that cannot change their shape. The length of line DB and line AC
(shown in gray) cannot change, because the tape is non-stretchy. Similarly the length of line AD and line BC (shown in red) cannot change. In this way you can prove that all the triangles keep their
shape. We know from high-school geometry that if the lengths stay the same, the angles must stay the same also. See reference 2 page 249 for a much more detailed and rigorous discussion.
Figure 2: Cross Bracing (Schild’s Ladder)
You can draw triangles like this on your tape if you want, but the tape keeps its shape whether you draw the triangles or not.
This notion of straightness, defined by cross-bracing, has numerous good properties. For one thing, if you stick an initial piece of tape to a surface, it defines a unique way of laying down the next
piece, and then the next. And it is reversible: You can retrace such a tape-path in the reverse direction and get the same result.
It is intriguing that in mathematics, lines are defined to be straight and have zero width, but in physics, if you want to make sure it is straight, it needs to have nonzero width (so the
cross-braces have some leverage). You can pass to the limit of infinitesimal width, but not zero width.
In the first paragraph of the introduction to the Principia, Isaac Newton said: “The description of right lines and circles, upon which geometry is founded, belongs to mechanics. Geometry does not
teach us to draw these lines, but requires them to be drawn”.
A more modern way of framing the issue is this:
In your imagination, you can postulate straight lines and circles that exist in some formal, mathematical space. Such lines and circles are abstractions, devoid of physical meaning. If you want to construct a “line” or anything else in real life, making it straight requires physics, not just mathematics.
Sometimes people try to “define” straightness by saying that a straight “line” is the shortest path between two points. That is, however, not the ideal definition. It would be better to say that any
extremal path (either shortest or longest) is necessarily straight. You can show that the tape satisfies this definition, by the following argument: The two edges of the tape have equal length. The
tape was made to have that property, and it retains that property because it is non-stretchy. If you choose a hypothetical path that is not a geodesic – not straight – then nearby paths to one side
will be slightly longer, and nearby paths to the other side will be slightly shorter. You cannot make the tape follow such a path without buckling.
Once we have a good way to make straight paths, it is a simple matter to create curved paths, by forcing one edge of the tape to follow a longer path than the other.
Tangential remark: Similar notions of curvature play a role in the expansion of the universe, as discussed in reference 3.
3 Two Models of Gravitation
3.1 A Correct Model : Paths Controlled by Curvature
Here is a convenient way to correctly demonstrate the motion of a free particle in curved space.
We use a two-dimensional surface as our model universe. To model the path of a particle, apply masking tape to this surface.
For reasons explained in section 2, the tape will follow a geodesic (straight line) in the two-dimensional universe. Meanwhile, masking tape is so thin that it can bend as necessary in the third
dimension (the embedding dimension). Be sure you use masking tape, which is designed to be non-stretchy (as opposed to something stretchy like electrical tape).
1) Let’s start with an uncurved universe. Use a flat sheet of construction paper or (if you’re using an overhead projector) a blank acetate foil. Lay out some tape and see what happens. Put down
a couple of inches of tape to get things started, and then lay down the rest bit by bit. Allow the most-recently attached bit to guide the placement of the next bit. Hold the supply of tape
slightly slack; do not try to “force” the result to go in any particular direction. Observe that the result is automatically straight, to an excellent approximation.
As long as you don’t allow “crumpling” or “air pockets” under the tape, it should guide itself quite well.
2) Lay out another line of tape that is initially parallel to the first. Extend it, letting the tape itself do the guiding, and observe that the two lines remain parallel for a long ways.
3) Bend the sheet into a cylinder, in such a way that it doesn’t stretch the sheet. (Wrap it around an oatmeal carton if you need help maintaining a cylindrical shape.) This creates extrinsic
curvature without creating any intrinsic curvature.
A cone is another shape that has extrinsic curvature but no intrinsic curvature (except at the single point at the tip of the cone).
Observe that rolling the sheet into a cylinder or cone has no consequences for the geodesics; things that start out parallel remain parallel, et cetera. This is important because it shows that
extrinsic curvature has no effect on the path of particles in our two-dimensional model world. That contrasts with intrinsic curvature, which will be demonstrated next.
4) On top of a flat sheet, put a large bowl, upside down. I have a large salad bowl that works beautifully.
Lay out a geodesic that starts on the flat sheet and heads toward the bowl. When it reaches the bowl, it will refract.
5) Lay out a geodesic that starts out parallel to the previous one. It will hit the bowl with a different impact parameter, and refract differently.
*) Et cetera. You get the idea.
Note: If you are not using an overhead projector, choose the color of the paper and the color of the bowls to contrast with the color of the tape. If you can get multiple colors of non-stretchy tape,
so much the better.
Another suggestion: You can pile smaller bowls onto the back of the larger bowl, to change the shape of the potential.
Remark: The embedding world’s gravity has no effect on the tape. This model would work perfectly in the weightless environment in a spaceship.
Related remark: The tape does not care whether the curvature is a bump or a pit. Consider what happens if you have a flat countertop with a bowl-shaped sink set into it. The tape runs straight along
the countertop, but then drops down and follows the curvature of the inside of the sink. In all cases the geodesic will be bent toward the region of high intrinsic curvature.
A bump and a bowl both have positive intrinsic curvature, as discussed in section 7.5. By way of contrast, a saddle has negative intrinsic curvature.
3.2 The Classical Model is Half-True
Let us now consider a different model, namely a marble rolling in a bowl. This can be contrasted with the curvature-based model introduced in section 3.1.
Rolling in a bowl is a decent model of classical physics, i.e. Newtonian gravitation. Rolling in a bowl is a false and deceptive model of modern physics, i.e. general relativity.
Rolling in a bowl depends on the fact that the bowl sits in the earth’s gravitational field. The correct model, as introduced in section 3.1, works just fine in zero-gravity conditions.
If the shape of the bowl is just right – a paraboloid – the height of the bowl faithfully represents the classical gravitational potential. At each point, the slope of the bowl represents the
gravitational field. (The fact that the marble rolls – rather than sliding freely – introduces some nonidealities, but let’s ignore that.)
If you use a glass bowl, you can demonstrate this to the whole room using an overhead projector.
Although this models the classical physics to a fair approximation, it does not correctly model general relativity. In particular, the curvature of the bowl is not a good model of the spacetime
curvature that general relativity uses to explain gravitation. Not at all.
There are several ways of seeing that it would be wrong to consider the bowl a model of general relativity.
• Let the marble roll in a flat dish – no curvature at all. If the dish is tilted, the marble will be deflected to one side. The marble follows a non-straight trajectory in the absence of curvature
– which is totally incompatible with general relativity.
• Even more dramatically, turn the bowl upside down, and let the marble roll on the backside of the bowl. The marble is not attracted to the center of the bowl, but is repelled instead. This is
peculiar, because (assuming the back of the bowl is congruent to the front) the intrinsic curvature is the same in both cases. Two-dimensional creatures living in the two-dimensional world of the
model can, by careful surveying, detect that the bowl is curved, but surveying cannot tell them whether it curves “up” or “down” into the third dimension.
To repeat: the alleged connection between “rolling in a bowl” and general relativity is essentially 100% wrong.
4 Spacetime Curvature
The model introduced in section 3.1 demonstrated the qualitative effects of curvature, but we have to refine it a bit if we want a really accurate model of, say, planetary orbits. It turns out that
bowl-shaped potentials are not what we need. Not even close.
The world-line of a particle in orbit is best described as a helix. It goes around and around in two spacelike dimensions, while moving steadily forward in the timelike direction.
As pointed out by David Bowman, if you try to project that helix onto a two-dimensional model, there will be problems. Spacetime is curved (by gravity) in the timelike direction as well as the
spacelike directions. So if you build a model that represents the two spacelike directions, suppressing the timelike direction, you can’t properly represent the sort of curvature that leads to
planetary orbits.
We can do much better if we use our model to represent one timelike dimension and only one spacelike dimension. We will illustrate the gravitational field of a planet that exists for a long time (a
long ways along the time axis). The gravitational field decreases as we move away from the planet in either direction along the X axis.
This can be modeled using darts, as shown in figure 3. The time axis runs vertically up the page, and the spacelike X axis runs horizontally across the page. The five ribs are made of two darts each,
for a total of 10 darts. See section 8 for hints on how to fabricate darts.
Figure 3: Orbits due to Curvature due to Darts
I emphasize that the tape follows geodesics determined by the local curvature of spacetime at each point. Given an arrangement of darts, each trajectory is completely determined by its starting point
and initial direction; there is no other choice involved.
Whenever the tape crosses a dart, the curvature of spacetime will deflect the trajectory toward the center. As it continues along, it will “orbit” the center, as you can see in the figure. In D=1+1,
each trajectory looks like a sine-wave when viewed from above the page. One trajectory starts with an X value slightly left of center and completes a half-cycle before exiting at the top of the
diagram. The next trajectory starts with an X value slightly right of center, and also completes a half-cycle. The third trajectory starts farther right of center, and completes only a small fraction
of a cycle.
It is interesting to think about “what is the shortest path from point A[1] to point A[2]”. The path in the presence of the darts is very different from what it would be in the absence of the darts.
Actually in the presence of the darts (spacetime curvature) there are a couple of different geodesics that connect A[1] to A[2].
Hint: The stores around here sell masking tape that is 3/4” or 1” wide. Narrower tape is better for this demo. So you may want to divide the tape in half lengthwise. Do this while it is still on the
roll, using a very sharp knife to cut through many layers.
If you attach the darts to a piece of acetate foil, you can set the whole thing on an overhead projector so everybody can see it. But letting people play with it hands-on is the best.
5 Clocks and Timing
By way of background, suppose you travel from point A to point B along some chosen path. You can use an odometer to measure the length of the path. This length will not usually be the same as the
length of some other path from A to B. In particular, an odometer is not like a rigid ruler, which measures the straight-line distance from A to B.
It turns out that ordinary clocks are more like odometers than like rulers. Suppose you travel from point A to point B along some chosen path, perhaps one of the paths shown in figure 3. The elapsed
time along this path will not usually be the same as the elapsed time along some other path.
One way that the elapsed times can be different is if one of the paths has a lot of curvature in the time direction. You can see in figure 3 that a path that is deeper in the gravitational potential
will have more curvature. Our simple model is quite faithful to the real gravitational physics in this regard: A clock deep in a gravitational potential will rack up more time than a clock not so
deep in the gravitational potential.
You might be tempted to say that the deeper clock “runs fast” ... but you should resist this temptation. There is nothing wrong with either clock; both clocks run at the standard rate of 60 minutes
per hour. (In our model, that is represented by saying the tape does not stretch.) The difference in elapsed time has nothing to do with the clocks. It has everything to do with the difference in
path-length. Clocks are like odometers. They measure the length (in the time direction) of the path. This very much depends on which path you choose. The deeper clock is following a path that is more
crumpled in the time direction.
6 A Real-World Application: Plumbing
Suppose you are helping a buddy with a plumbing project that involves cutting PVC pipes and gluing them into fittings. Your buddy sees no reason to buy a pipe cutter, since he has a perfectly good
hacksaw. The problem is, if you want the joint to be strong, the pipe needs to be cut square, and this isn’t always easy to do. It’s not too hard under favorable conditions in the shop, but not so
easy out in the field when the pipe is at a funny angle and there are other constraints.
The desired result can be obtained more easily and more reliably if the pipe is first properly marked ... then all you need to do is cut along the mark.
So now we come to the heart of the matter: What is the quick and easy way to create a “cut here” guide on the pipe?
Answer: Masking tape.
As discussed in previous sections, masking tape follows a geodesic. We assume the pipe is accurately cylindrical (otherwise the glue joint would fail anyway, so the whole exercise would be
pointless). The geodesics of a cylinder are helices, including the zero-pitch helix which is a circle.
If you start the tape perpendicular to the axis of the pipe, the tape will come around and meet itself, making a zero-pitch helix, and you know you have succeeded. Cut parallel to the edge of the tape.
If you start the tape at an angle, it will come around and make a non-zero-pitch helix, and you know you need to re-do the taping.
This trick with the tape is super-accurate, super-quick, and super-easy.
7 Parallel Transport of Vectors
The previous discussion has focused on what happens to a single point as it moves from place to place along a geodesic. In this section, we consider what happens to a vector as it moves from place to
place along a geodesic. This is important, because it provides us with a particularly easy way to measure the curvature of the space.
Note the contrast:
When we were just talking about position-space, at each position there was a point, and that’s all there was. Now we are talking about a vector field. That means at each point in position-space there is a vector space. We have a space of spaces.
7.1 Parallel Transport in D=3
Consider the scenario shown in figure 4. We start with the yellow arrow that is located just north of the eastern tip of Brazil, on the equator at 45 degrees west longitude. The vector points due
north. We then construct another vector 18 degrees farther west, at a point near the mouth of the Amazon. We take care that the new vector is parallel to the old vector. It also points due north. We
keep constructing new vectors, each parallel to the previous one, until we come to 90 degrees west longitude, in the ocean south of Guatemala.
The next vector is constructed at a position due north of the previous one. We are now moving northward along a line of constant longitude, namely 90 degrees west longitude. All the arrows point
toward the celestial north pole, i.e. a point high above the earth’s geographic north pole. As we move north, each of the arrows is parallel to the previous arrow.
In this context moving north means moving closer to the geographic north pole, following the surface of the earth (as opposed to moving directly closer to the celestial north pole, which would take
us away from the surface). Note the contrast: We are using geographic north for the positioning, but we are comparing the orientation against celestial north.
We continue around the triangular path until we get back to the starting point. The final arrow appears to be parallel to the original arrow. There is nothing surprising about this.
Actually, due to general relativity, the final arrow is not quite exactly parallel to the original arrow, but the difference is far too small to be perceptible in figure 4. The rest of this section
is devoted to explaining how it is possible for these two arrows to wind up not exactly parallel.
7.2 Parallel Transport in a Curved Space, D=2
Consider the scenario shown in figure 5. It starts out exactly the same as the previous scenario. However, in this case we suppose that the arrows exist in a two-dimensional space, namely a sphere,
i.e. the surface of the earth.
Figure 5: Parallel Transport on the Earth, D=2
The yellow arrows along the equator are exactly the same here as in the previous scenario. Even though the arrows are the same, we have to describe them differently. We say they all point north along
the earth’s surface. They point toward the earth’s geographic pole, not toward the celestial north pole, because the latter does not exist in the two-dimensional space we are using.
As we move northward along the leg of the triangle that goes through North America, the arrows in figure 5 continue to point north toward the geographic north pole. Relative to the arrows in figure 4
, these arrows must pitch down so that they remain within the two-dimensional space. They are confined to be everywhere tangent to the surface of the earth. As we move north, each of the arrows is
parallel to the previous arrow, as parallel as it possibly could be.
Let’s be clear: Each new arrow is constructed to be parallel to the previous one, as parallel as it possibly could be. What we mean by “parallel” is discussed in more detail in section 7.3.
After we get to the north pole, we start moving south along the prime meridian. We move south through Greenwich and keep going until we reach the equator at a point in the Gulf of Guinea. As
always, each newly constructed vector is parallel to the previous arrow. All the arrows on this leg point due east.
Finally, we move west along the equator until we reach the starting point. Again each arrow is parallel to the previous one. All the red arrows on this leg point due east.
At this point we see something remarkable: The final arrow is not parallel to the arrow we started with.
From this we learn that in a curved space, there cannot be any global notion of A parallel to B. We must instead settle for a notion of parallel transport along a specified path. That is: the notion
of parallelism is path-dependent. It also depends on whether you go around the path clockwise or counter-clockwise.
If you start with a northward-pointing vector in Brazil and parallel-transport it to the Gulf of Guinea, you get a northward-pointing vector. If you start with the same vector and transport it clockwise around two legs of the triangle as shown in figure 5, you get an eastward-pointing vector.
Creatures who live in the curved space can perceive this in a number of ways. Careful surveying is one way. Gyroscopes provide another way. That is, a gyroscope that is carried all the way around a
loop will precess relative to a gyroscope that remains at the starting point.
It must be emphasized that this precession has got nothing to do with the spin of the earth or the peculiarities of the latitude/longitude coordinate system. In figure 6 the sphere is completely
abstract, with no spin, no latitude, no longitude, and no poles. As we shall see in section 7.4, the area of the loop is what matters, not the shape or orientation.
Figure 6: Parallel Transport on an Abstract Sphere
You can also see in figure 6 that the initial arrow does not need to be parallel or perpendicular to the path. Any initial orientation is allowed. The orientation of each successive arrow is dictated
by the requirement that it be parallel to the previous arrow.
7.3 Definition of Parallel
In all these diagrams, each new arrow is constructed to be parallel to the previous one, as parallel as it possibly could be. If you don’t see each successive arrow in this part of the diagram as
being exactly parallel to the previous one, that’s because you live in three dimensions. Creatures who actually live in the curved space – the two-dimensional curved surface in this model system –
cannot detect the third dimension.
This is related to the idea that the tape used in figure 3 is non-stretchy in two dimensions but is free to bend in the third dimension. We are modeling the physics as seen by creatures who live in
the two-dimensional space. The third dimension – the embedding dimension – is not part of their world.
In particular, in the discussion associated with figure 2 we said that all the triangles keep their shape, whatever that shape might be. Now, if we impose the further requirement that the midpoint of
line DB coincides with the midpoint of line AC, then we have similar triangles (indeed congruent triangles) and this guarantees that line BC is parallel to line AD. We use this to define what we mean
by parallel transport. We use this construction – called Schild’s Ladder – to transport vector AD along the path AB and thereby prove, by construction, that AD is parallel to BC.
Here is another argument that leads to the same conclusion: The situation in figure 7 is very similar to the situation in figure 5. The main difference is that rather than having a single vector at
each point, we have two vectors. This is one way of representing a bivector. (If the term “bivector” doesn’t mean anything to you, don’t worry about it.)
Figure 7: Parallel Transport of a Bivector
In particular, let’s look at the six bivectors on the leg of the triangle that goes through North America. Each of the gray vectors here points due west, along a line of constant latitude. Such lines
are called parallels of latitude – as in the 49th parallel – and they are called that for a reason. The gray arrows are undoubtedly parallel. If you treat them as existing in the embedding space they
are parallel, as surely as the arrows in figure 4 are parallel. Also if you treat them as existing in the D=2 surface they are parallel.
Once you are convinced that these six gray arrows are parallel, the rest of the argument is easy. The yellowish arrows are perpendicular to the gray arrows. In two dimensions, this implies that the
yellowish arrows must be parallel to one another. In two dimensions, there is no other possibility.
7.4 Calculating the Curvature
The amount of precession is proportional to the area enclosed by the loop, times the amount of curvature. In two dimensions we can write:
precession = ∫ K dA     (1)
where dA is an element of area, and K denotes something we call the Gaussian curvature.
We can always define the average curvature (averaged over some area) as follows:
⟨K⟩ := (∫ K dA) / (∫ dA)   (by definition of “average”) (2)
Then as an immediate corollary to equation 1 we obtain:
⟨K⟩ = precession / area   (3)
This gives us a convenient way of measuring the average curvature. Let’s see how it works for the spherical triangle shown in figure 5. The loop comprises one octant, i.e. one eighth of the area of
the sphere. A sphere of radius R has curvature K = 1/R^2 everywhere.
area = (1/8) × 4πR^2 = πR^2/2
curvature = 1/R^2 (4)
precession = (πR^2/2) × (1/R^2) = π/2, i.e. 90 degrees
which is nicely consistent with equation 1 and equation 3.
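If you want to check this numerically, here is a rough sketch (not part of the original construction; it assumes a unit sphere and uses numpy). It transports a tangent vector around the octant by repeatedly projecting it onto the local tangent plane, which converges to true parallel transport as the step size shrinks:

```python
import numpy as np

def transport(path, v):
    # Approximate parallel transport of tangent vector v along a discretized
    # path on the unit sphere: at each step, project onto the new tangent
    # plane and restore the length.
    for p in path[1:]:
        v = v - np.dot(v, p) * p
        v = v / np.linalg.norm(v)
    return v

n = 2000
t = np.linspace(0.0, np.pi / 2, n)
leg1 = np.stack([np.cos(t), np.sin(t), 0 * t], axis=1)   # along the equator
leg2 = np.stack([0 * t, np.cos(t), np.sin(t)], axis=1)   # up one meridian to the pole
leg3 = np.stack([np.sin(t), 0 * t, np.cos(t)], axis=1)   # down another meridian
loop = np.concatenate([leg1, leg2, leg3])

v0 = np.array([0.0, 0.0, 1.0])   # starts pointing "north" at the point (1, 0, 0)
v1 = transport(loop, v0)
angle = np.degrees(np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0)))
print(angle)   # about 90 degrees: (octant area pi/2) x (curvature 1/R^2 = 1)
```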
Figure 8 is similar to Figure 5 except that the path encompasses a smaller area. You can see that there is correspondingly less precession.
Figure 8: Parallel Transport, Smaller Area
7.5 Intrinsic versus Extrinsic Curvature
Occasionally, we choose to imagine that our curved space is embedded in some higher-dimensional space. For example, the spherical surface shown in figure 5 is intrinsically a two-dimensional space.
We only need two numbers (e.g. latitude and longitude) to span the space. In this subsection, we imagine that it sits in a three-dimensional embedding space.
This allows us to write the Gaussian curvature in the form
K = k[1] k[2] (5)
where k[1] is the principal curvature in one direction and k[2] is the principal curvature in the other direction. For example, a sphere of radius R has k[1] = k[2] = 1/R everywhere, which is
consistent with our previous assertion that K = 1/R^2.
By way of contrast, a cylinder of radius R has zero Gaussian curvature, because k[1] = 1/R but k[2] = 0. The cylinder is curved in one direction but not the other. We can apply similar reasoning to
a cone. We conclude that a cone has zero intrinsic curvature (except at the tip).
Each principal curvature is related to the corresponding radius of curvature:
k[1] = 1/r[1] and k[2] = 1/r[2] (6)
It must be emphasized that r[1], r[2], k[1], and k[2] are extrinsic, and cannot be measured by the creatures who live in the two-dimensional space. In contrast, they can measure the intrinsic
curvature, i.e. the Gaussian curvature K, by carrying out parallel-transport experiments and applying equation 3.
7.6 Tabletop Model of Parallel Transport
To demonstrate parallel transport using the same tabletop model shown in figure 3, proceed as follows:
From a strip of masking tape at least one inch wide, cut out a parallelogram. The exact shape doesn’t matter.
Draw the long diagonal on the parallelogram, as shown in figure 9. Use a pen with a moderate width, not too fine, for reasons that will become obvious in a moment. Label the corners {A, B, C, D}. Add
another label, C′, so that there are labels on both sides of the diagonal at corner C. For strain relief (aka crimp relief), make a keyhole-shaped cut, as shown in figure 9. That is, make a round
hole in the middle, and then make a cut from the middle to corner C, running right down the middle of the line you marked. Try to cut the line in half. I found it easy to make the crimp-relief cuts
by putting the tape on a firm board and using a scalpel (aka “hobbyist razor knife”). Using a scissors is possible but probably less convenient.
Figure 9: Parallelogram for Parallel Transport
Lay the parallelogram on the model, so that the tip of a dart falls into the crimp-relief hole. I did it starting from point B and proceeding counterclockwise to point C′, then re-starting at point B
and proceeding clockwise to point C. There are probably other satisfactory tactics. The result is shown in figure 10.
The parallel transport story goes like this: Start at point C. The initial vector that we wish to transport is the line that is drawn on the tape, the line that runs from the center to point C. We
transport that via B, A, and D all the way to C′. The transported vector is now parallel to the line from the center to C′. Since the space between point C′ and point C is flat, you can easily use
your imagination to transport the vector the rest of the way, all the way back to the starting point C. As you can see in figure 10, the vector has precessed by about 7 degrees.
If you look closely, you find that the direction of precession is the opposite of what we saw in figure 5. There if you go around the loop clockwise the precession is also clockwise, but here if you
go around the loop clockwise the precession is counterclockwise. This is the hallmark of negative Gaussian curvature. A sphere has positive intrinsic curvature, while a saddle has negative intrinsic curvature.
Also: If you think about it for a while, and/or do some experiments with the model, you discover that for transport around any path that encompasses the tip of a dart, the amount of precession is the
same, namely −7 degrees. The only way to describe this is to say that there is a Dirac delta-function of curvature, located at the tip of the dart.
If you don’t know what a delta-function is, don’t worry about it too much. In general, it’s just a fancy way of saying something is hugely concentrated, with a high density in one place and zero
density elsewhere. Imagine a very tall, very narrow spike. In the present context, when we talk about curvature, we are not talking about the curvature of the spike; the spike is not what’s
curved. The space is curved, and the spike is telling us where the curvature is.
We can formalize this concentration of curvature as follows:
K(x, y) = θ δ(x−x[0])δ(y−y[0]) (7)
where θ is −7 degrees in this model, and the tip of the dart is located at the point (x[0], y[0]). When we integrate this curvature in accordance with equation 1, we get the correct total precession.
Note that a δ-function has units of inverse length, so the dimensions in equation 7 are correct.
Meanwhile for any path that encompasses the base of a dart, the precession is +7 degrees. In the model, there is no way to encompass the base of one dart without encompassing the base of its partner,
so we see a total precession of +14 degrees. For any path that does not encompass the tip or base of any dart, there is no precession.
Tangential remark: If you carefully measure the angle in figure 10, you find that the angle is very close to −8 degrees, rather than the −7 that we were expecting based on the specifications of the
darts. I don’t know whether this has to do with imperfections in the darts, or imperfections in the way I laid down the tape.
To repeat: To us, living in the embedding space, the darts may “look” like they have curvature all along their length, but really the darts are like cones: zero intrinsic curvature except at the tips
and bases. To say the same thing another way: the dart has one extrinsic curvature (k[1]) that is nonzero all along its length, but the other extrinsic curvature (k[2]) is zero, so the intrinsic
Gaussian curvature is zero (except at the tip and the base).
As a consequence, the gravitational field that we are modeling in figure 3 is a piecewise-constant field. On the left side of the midline there is a field of constant strength pointing to the right,
and on the right side of the midline there is a field of constant strength pointing to the left. This is similar to the field you would find in a three-plate parallel-plate capacitor, with charges of
−Q, +2Q, and −Q (respectively) on the plates.
In particular, in the main region of the diagram, between the tips and the bases of the darts, there is no geodesic deviation. As seen by creatures who live in the two-dimensional space, there is no
spacetime curvature in this region. We who live in the embedding space see geodesics that seem to curve, but in this region they all curve together. A pair of geodesics that starts out parallel will
remain parallel. That is to say, they do not deviate from each other, so there is no geodesic deviation.
The parallel geodesics remain parallel unless and until the pair straddles the tip or the base of a dart, whereupon there will be geodesic deviation. This is the fundamental mechanism of general
relativity: Curvature causes geodesic deviation.
The analogy to real-world gravitation goes like this: Any region where the gravitational acceleration g is essentially uniform does not have any appreciable spacetime curvature. In particular, over
the typical laboratory lengthscale, spacetime curvature is negligible. This is related to Einstein’s principle of equivalence, which says that a uniform gravitational field is indistinguishable from
an acceleration of the reference frame. To say the same thing the other way, you can make a uniform gravitational field disappear by choosing a different reference frame. Therefore it is obvious that
a uniform gravitational field has got nothing to do with spacetime curvature, because you can’t make curvature go away by choosing a different reference frame.
8 How To Fabricate Darts
Many of the following fabrication ideas are due to Paul Fuoss.
Maple is an excellent material for making darts. (Any hard, fine-grained wood will do.) The grain should run down the long axis of the dart; this makes fabrication harder but makes the final result
nicer. We found that Paul’s compound miter saw was the smart way to make them. We tried making them with a table saw but the compound miter saw was a much better choice.
Attach the hardwood piece (from which you are cutting the darts) to a much larger carrier piece, using wood screws, so you can hold it very securely without endangering your fingers. (Holding the
piece securely is vastly more important for miter cuts than for regular cuts, where slight vertical motions are harmless.)
Each dart is a thin pyramid. The base of the pyramid is an equilateral triangle, 1/2 inch on a side. The pyramid is about 4 inches long in the other direction.
Set the saw so that the blade is inclined 30 degrees to the vertical. Then set it so that the arm is angled 3.6 degrees away from perpendicular to the fence. That gives a slope of 1 part in 16, i.e.
1/4 inch (half the base of the pyramid) per 4 inch.
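As a quick check of that angle setting (just an illustrative calculation using the numbers quoted above):

```python
import math
# a taper of 1/4 inch over 4 inches of length
print(math.degrees(math.atan(0.25 / 4.0)))   # about 3.58 degrees, i.e. the 3.6-degree setting
```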
Make the first cut. Cut just enough off one end of the stock so that the shape of the edge is determined by the cut. Then flip the stock over. If the stock was to the left of the blade to begin with,
it will be to the right of the blade now. This may require unbolting the stock from the carrier and rebolting it (although you might avoid this step by using an extra-fancy carrier). Mark the
starting point for the second cut. The base of the pyramid should be against the fence. Make the second cut. The first dart should fall free.
Jessica MacNaughton pointed out that you can make darts that are just as functional (but perhaps not as beautiful) out of clay. If you don’t have access to a compound miter saw, this may be your best bet.
9 References
1. John Denker, “Welcome to Spacetime” http://www.av8n.com/physics/spacetime-welcome.htm
2. Misner, Thorne, Wheeler, Gravitation
3. John S. Denker, “Expansion of the Universe” ./expansion-of-the-universe.htm
|
{"url":"http://www.av8n.com/physics/geodesics.htm","timestamp":"2014-04-17T18:49:28Z","content_type":null,"content_length":"64076","record_id":"<urn:uuid:3dc7f230-6327-4d7e-aa61-488ddacaa31a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
|
*Precise Math*
In many parts of a case interview, precise math is required. This commonly occurs when calculating optimal price points, changes in volume shipments, gross margin %, shifts in revenue mix from
one segment to another, break-even price points, and others.
It is VERY important to achieve 100% accuracy in these precise math type questions that are typically embedded within a case interview (and sometimes tested separately in an actual math exam).
No calculator or paper/pen is permitted. You must do all calculations in your head.
*A passing grade is 100% accuracy.*
A failing grade is any score less than 100% accurate.
In addition, it is important that you are able to be this accurate while under the pressure of a real-life interview. To simulate this stress, these practice tests are TIMED.
On your scoreboard, you will see your scores vs. other benchmarks. These benchmarks include other people who have taken the same test, the top 20% of those who have taken the test, and my (Victor Cheng's) personal typical score.
With the exception of an explicit math test given by an employer, in real life it is okay to slow down the pace (compared to the benchmarks established here) to double check your math.
We use an aggressive timeframe in practice to simulate STRESS (which tends to cause people to make mistakes). So if you can get used to the stress of the clock, it is easier to do math with the
stress of an interviewer looking at you.
*Estimation Math*
There are also times during a case where estimation math is the preferred level of accuracy. An acceptable estimate is one where the answer you give is within +/- 20% of the actual answer.
In many cases, this level of precision is "good enough".
This typically occurs when you are explicitly given an "estimation" problem during a case. Typically this is phrased along these lines,
"Estimate the # of tires sold in this country last year".
Other times you will find a math computation that is just too difficult to solve mentally. In that case, to keep the flow of the conversation going, you just estimate the answer and see whether it changes the business decision being considered.
For example, if we know that a profit margin more than 20% is considered attractive, and under 20% is unattractive, then for a given project it doesn't make much difference if the estimated
profit margin is 30% - 40%... because both ends of the range are considered attractive.
For practice purposes, the goal in tackling estimation math questions is to achieve a rating of 100% of responses being "acceptable". An acceptable response is one where your estimated answer is within +/- 20% of the precise answer.
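A small illustration of that acceptance rule (a sketch in Python; the function name and numbers are mine, not part of the actual tests):

```python
def is_acceptable(estimate, actual, tolerance=0.20):
    """Return True if the estimate is within +/- 20% of the actual answer."""
    return abs(estimate - actual) <= tolerance * abs(actual)

# Example: estimating 230 million tires when the precise answer is 200 million.
print(is_acceptable(230e6, 200e6))   # True  (15% off)
print(is_acceptable(250e6, 200e6))   # False (25% off)
```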
Note the questions provided in the estimation test are significantly more difficult than for precise math.
No calculator or paper/pen is permitted. You must do all calculations in your head.
Good luck!
Polynomials from a Recurrence Relation
Hi guys,
I have recently started looking at polynomials $q_n$ generated by initial choices $q_0=1$, $q_1=x$ with, for $n\geq 0$, some recurrence formula
$$q_{n+2}=xq_{n+1}+c_n q_n$$
where $c_n$ is some function in $n$. The first few of these are
$$q_2=x^2+c_0$$ $$q_3=x^3+(c_0+c_1)x$$ $$q_4=x^4+(c_0+c_1+c_2)x^2+c_2c_0$$ $$q_5=x^5+(c_0+c_1+c_2+c_3)x^3+(c_0c_2+c_0c_3+c_1c_3)x$$ $$q_6=x^6+(c_0+c_1+c_2+c_3+c_4)x^4+(c_0c_2+c_0c_3+c_1c_3+c_0c_4+c_1c_4+c_2c_4)x^2+c_0c_2c_4$$
My question is whether there is a name for the coefficients of the powers of $x$. I realise that they can be written as certain formulations of elementary symmetric polynomials, but I am ideally looking for a reference where the specific expressions are studied.
Any help would be great :)
polynomials orthogonal-polynomials
I've added the tag orthogonal-polynomials, since they are related: orthogonal polynomials satisfy a three-term linear recurrence of a slightly more general form, and are well characterized. Are you interested in the results for $any$ sequence $(c_n)$, or is your $(c_n)$ a particular sequence? (In which case it would be healthy to check whether it corresponds to an orthogonal sequence.) – Pietro Majer Dec 30 '10 at 17:23
@ Pietro : Yes sorry, I should originally have tagged as relating to orthogonal polynomials. I am interested in general $c_n$, although I came to the recurrence relations from a number of specific
examples which are well studied :) – backstoreality Dec 30 '10 at 17:25
2 Answers
These polynomials are closely related to continuants, which arise in studying continued fractions. The $n$th continuant of a sequence $a_0$, $a_1$, $\ldots$ is defined by $K(0)=1$, $K(1)=a_1$, $K(n)=a_n K(n-1) + K(n-2)$. They are sums of products of $a_1,\dots, a_n$ in which consecutive pairs are deleted. (See, for example, http://en.wikipedia.org/wiki/Continuant_ .)
Neither continuants nor the coefficients of the $c_n$ are symmetric polynomials; they cannot be expressed in terms of (or as "combinations of") elementary symmetric functions.
+1. There is no known closed-form evaluation for the matrix product $M_n=\prod_{k=1}^n[x,c_k;1,0]$, nor even much of an "understanding" of its structure. If it were for $x=1$, then several hard conjectures in number theory about the continuants would be solved... – Wadim Zudilin Dec 31 '10 at 4:49
Let's treat the $c_i$ as formal indeterminates.
Let $S(n,m)$ be the set of increasing functions $i:\{1,\ldots, m\}\to \{0,\ldots, n-2\}$, written $j \mapsto i_j$, such that $i_{j+1} > i_j + 1$ for all $j=1,\ldots, m$. So $S(n,m)$ is
in bijection with the set of subsets of $\{0, \ldots, n-2\}$ of size $m$ which contain no adjacent pair $(k, k+1)$.
Then the coefficient of $x^{n - 2m}$ in $q_n$ is
$\sum_{i \in S(n,m)} c_{i_1} c_{i_2} \cdots c_{i_m}$
and all other coefficients are zero:
$q_n = \sum_{m = 0}^{[n/2]} \left( \sum_{i \in S(n,m)} c_{i_1} c_{i_2} \cdots c_{i_m} \right) x^{n - 2m}$.
Proof: induction on $n$.
Possibly considering the generating function $F = \sum_{n=0}^\infty q_nt^n$ may be helpful?
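A quick symbolic check of this expansion for small $n$ (a sketch using sympy; the helper names are mine and not from any standard reference):

```python
from itertools import combinations
import sympy as sp

x = sp.symbols('x')
c = sp.symbols('c0:10')   # c0, c1, ..., c9

def q(n):
    """Build q_n directly from the recurrence q_{n+2} = x*q_{n+1} + c_n*q_n."""
    q0, q1 = sp.Integer(1), x
    for k in range(n):
        q0, q1 = q1, sp.expand(x * q1 + c[k] * q0)
    return q0

def q_formula(n):
    """Claimed closed form: sum over size-m subsets of {0,...,n-2} with no adjacent indices."""
    total = sp.Integer(0)
    for m in range(n // 2 + 1):
        for idx in combinations(range(n - 1), m):
            if all(b - a >= 2 for a, b in zip(idx, idx[1:])):
                total += sp.Mul(*[c[i] for i in idx]) * x**(n - 2 * m)
    return sp.expand(total)

for n in range(8):
    assert sp.expand(q(n) - q_formula(n)) == 0
print("formula verified for n = 0..7")
```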
Note that these coefficients are not elementary symmetric polynomials in the $c_i$, since for example already $c_2c_0$ isn't invariant under all permutations of $\{0,1,2\}$. I just
thought I'd spell out the symmetry that is involved here, and perhaps someone else knows a name for these coefficients.
@ Konstantin : Thanks for your reply and thanks for formally stating the weird symmetry that I hadn't yet put to paper :) Yes, I should have been clearer in stating that I know the coefficients are not elementary symmetric polynomials, but they could probably be written as combinations of them :) – backstoreality Dec 30 '10 at 18:57
digitalmars.D - Faster uniform() in [0.0 - 1.0(
bearophile <bearophileHUGS lycos.com>
Some kind of little D programs I write need a lot of random values, and tests
have shown me that std.random.uniform is slow.
So I have suggested to add a faster special case to generate a random double in
[0.0, 1.0), see:
Nov 19 2010
bearophile Wrote:
Some kind of little D programs I write need a lot of random values, and tests
have shown me that std.random.uniform is slow.
So I have suggested to add a faster special case to generate a random double
in [0.0, 1.0), see:
I did some testing with different combinations of types and boundary types. The problem noticed is a bit different to the one bearophile mentioned. Here is my test code:
--------------------
import std.conv;
import std.date;
import std.random;
import std.stdio;

// Time 10 million calls of uniform for a given type and boundary specification.
void test(T, string boundaries)() {
    void fun() { uniform!(boundaries, T, T)(cast(T)0, cast(T)1000); }
    writefln("%-8s %s %6d", to!string(typeid(T)), boundaries, benchmark!fun(10_000_000)[0]);
}

void testBoundaries(T)() {
    test!(T, "[]")();
    test!(T, "[)")();
    test!(T, "(]")();
    test!(T, "()")();
    writeln();
}

void main() {
    testBoundaries!(int)();
    testBoundaries!(long)();
    testBoundaries!(float)();
    testBoundaries!(double)();
    testBoundaries!(real)();
}
--------------------
And here are the results for 10 million calls of uniform (columns are: type, boundaries, elapsed time):
--------------------
int      []     271
int      [)     271
int      (]     283
int      ()     285

long     []     372
long     [)     399
long     (]     401
long     ()     397

float    []     286
float    [)     374
float    (]    5252
float    ()    5691

double   []     348
double   [)     573
double   (]    5319
double   ()    5875

real     []     434
real     [)     702
real     (]    2832
real     ()    3056
--------------------
In my opinion floating point uniforms with (] or () as boundary types are unacceptably slow. I had to use 1 - uniform!"[)"(0.0, 1.0) instead of uniform!"(]"(0.0, 1.0) because of this issue. I would also expect versions using float and double to be faster than the version using real.

-- tn
Nov 22 2010
On 22-nov-10, at 16:11, tn wrote:
bearophile Wrote:
Some kind of little D programs I write need a lot of random values,
and tests have shown me that std.random.uniform is slow.
So I have suggested to add a faster special case to generate a
random double in [0.0, 1.0), see:
[...]
The uniform generation I use (in blip & tango) is faster than the phobos one. I did not try to support all possibilities (with floats just "()", which holds with high probability, though boundary values are possible due to rounding when using a non 0-1 range), but I took a lot of care to initialize *all* bits uniformly. The problem you describe looks like a bug though: if done correctly one should just add an if or two to the [] implementation to get () with very high probability. Fawzi
Nov 22 2010
tn Wrote:
[...]
After further investigation it seems that the slowdown happens because of subnormal numbers in calculations. If there is an open boundary at zero then the call of nextafter in uniform returns a
subnormal number. Perhaps next normal number could be used instead?
Nov 22 2010
Don <nospam nospam.com>
tn wrote:
[...]
After further investigation it seems that the slowdown happens because of subnormal numbers in calculations. If there is an open boundary at zero then the call of nextafter in uniform returns a
subnormal number. Perhaps next normal number could be used instead?
No, it just shouldn't convert (] into []. It should do [], and then check for an end point. Since the probability of actually generating a zero is 1e-4000, it shouldn't affect the speed at all <g>.
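For what it's worth, the idea Don describes (sample on the closed interval and only re-draw in the astronomically unlikely case that an excluded endpoint is actually produced) looks roughly like this. This is a language-neutral sketch written in Python, not phobos code, and the names are illustrative:

```python
import random

def uniform_open_at_zero(lo=0.0, hi=1.0):
    """Sample with the lower endpoint excluded: draw as usual and redraw only if
    the excluded endpoint is actually hit.  That event is so rare that the extra
    branch costs essentially nothing, unlike shifting the bounds via nextafter."""
    while True:
        x = lo + (hi - lo) * random.random()   # [lo, hi) with Python's generator
        if x != lo:                            # reject the excluded lower boundary
            return x

print(uniform_open_at_zero())
```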
Nov 22 2010
bearophile <bearophileHUGS lycos.com>
Since the probability of actually generating a
zero is 1e-4000, it shouldn't affect the speed at all <g>.
If bits in double have the same probability then I think there is a much higher probability to hit a zero, about 1 in 2^^63, and I'm not counting NaNs (but it's low enough to not change the substance
of what you have said). Bye, bearophile
Nov 22 2010
bearophile wrote:
Since the probability of actually generating a
zero is 1e-4000, it shouldn't affect the speed at all <g>.
If bits in double have the same probability then I think there is a much higher probability to hit a zero, about 1 in 2^^63, and I'm not counting NaNs (but it's low enough to not change the substance
of what you have said).
Yes, but randomly generated bits don't give a uniform distribution. With a uniform distribution, there should be as much chance of getting [1-real.epsilon .. 1] as [0 .. real.epsilon]. But there are only two representable numbers in the first range, and approx 2^^70 in the second. Further, there are 2^^63 numbers in the range [0..real.min] which are all equally likely. So, if you want a straightforward uniform distribution, you're better off using [1..2) or [0.5..1) than [0..1), because every possible result is equally likely.
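One standard way to realise the [1..2) idea Don describes is to fill the 52 mantissa bits of a double directly and fix the exponent so the result lies in [1, 2), then subtract 1 if a [0, 1) sample is wanted. A sketch in Python (illustration only, not D/phobos code):

```python
import random
import struct

def uniform_01_via_bits():
    """Build a double in [1, 2) by combining the exponent bits of 1.0 with 52
    random mantissa bits, then shift to [0, 1).  Every representable value in
    [1, 2) is equally likely, which is exactly the property discussed above."""
    bits = (1023 << 52) | random.getrandbits(52)          # sign 0, exponent 1023, random mantissa
    x = struct.unpack('<d', struct.pack('<Q', bits))[0]   # reinterpret the 64 bits as a double
    return x - 1.0

print(uniform_01_via_bits())
```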
Nov 23 2010
bearophile Wrote:
Since the probability of actually generating a
zero is 1e-4000, it shouldn't affect the speed at all <g>.
If bits in double have the same probability then I think there is a much higher probability to hit a zero, about 1 in 2^^63, and I'm not counting NaNs (but it's low enough to not change the substance
of what you have said).
For a uniform distribution, different bit combinations should have different probabilities, because floating point numbers have more representable values close to zero. So for doubles the probability should be about 1e-300 and for reals about 1e-4900. But because uniform by default seems to use a 32 bit integer random number generator, the probability is actually 2^^-32. And that is actually verified: I generated 10 * 2^^32 samples of uniform!"[]"(0.0, 1.0) and got 16 zeros, which is close enough to the expected 10. Of course 2^^-32 is still small enough to have no performance penalty in practice. -- tn
Nov 23 2010
Fawzi Mohamed Wrote:
[...]
that is the reason I used a better generation algorithm in blip (and tango) that guarantees the correct distribution, at the cost of being slightly more costly, but then the basic generator is
cheaper, and if one needs maximum speed one can even use a cheaper source (from the CMWC family) that still seems to pass all statistical tests.
Similar method would probably be nice also in phobos if the speed is almost the same.
The way I use to generate uniform numbers was shown to be better (and
detectably so) in the case of floats, when looking at the tails of
normal and other distributions generated from uniform numbers.
This is very relevant in some cases (for example is you are interested
in the probability of catastrophic events).
Just using 64 bit integers as source would be enough for almost(?) all cases. At the current speed it would take thousands of years for one modern computer to generate so many random numbers that better resolution would be justifiable. (And if one wants to measure the probability of rare enough events, one should use more advanced methods like importance sampling.) -- tn
Nov 23 2010
On 23-nov-10, at 10:20, tn wrote:
[...]
that is the reason I used a better generation algorithm in blip (and tango) that guarantees the correct distribution, at the cost of being slightly more costly, but then the basic generator is
cheaper, and if one needs maximum speed one can even use a cheaper source (from the CMWC family) that still seems to pass all statistical tests. The way I use to generate uniform numbers was shown to
be better (and detectably so) in the case of floats, when looking at the tails of normal and other distributions generated from uniform numbers. This is very relevant in some cases (for example if you are interested in the probability of catastrophic events). Fawzi
Nov 23 2010
On 23-nov-10, at 13:12, tn wrote:
Fawzi Mohamed Wrote:
[...]
that is the reason I used a better generation algorithm in blip (and tango) that guarantees the correct distribution, at the cost of being slightly more costly, but then the basic generator is
cheaper, and if one needs maximum speed one can even use a cheaper source (from the CMWC family) that still seems to pass all statistical tests.
Similar method would probably be nice also in phobos if the speed is almost the same.
Yes, I was thinking of porting my code to D2, but if someone else wants to do it... please note that for double the speed will *not* be the same, because it always tries to guarantee that all bits of
the mantissa are random, and with 52 or 63 bits this cannot be done with a single 32 bit random number.
The way I use to generate uniform numbers was shown to be better (and
detectably so) in the case of floats, when looking at the tails of
normal and other distributions generated from uniform numbers.
This is very relevant in some cases (for example is you are
in the probability of catastrophic events).
Just using 64 bit integers as source would be enough for almost(?) all cases. At the current speed it would take thousands of years for one modern computer to generate so much random numbers that
better resolution was justifiable. (And if one wants to measure probability of rare enough events, one should use more advanced methods like importance sampling.)
I thought about directly having 64 bit as source, but the generators I know were written to generate 32 bit at a time. Probably one could modify CMWC to work natively with 64 bit, but it should be
done carefully. So I simply decided to stick to 32 bit and generate two of them when needed. Note that my default sources are faster than Twister (the one that is used in phobos), I especially like
CMWC (but the default combines it with Kiss for extra safety).
Nov 23 2010
Definitions for idealizer
This page provides all possible meanings and translations of the word idealizer
Random House Webster's College Dictionary
i•de•al•ize [aɪˈdi əˌlaɪz] (v.) -ized, -iz•ing.
1. (v.t.)to consider or represent as having qualities of ideal perfection or excellence.
2. to represent in an ideal form or character.
3. (v.i.)to represent something in an ideal form.
1. idealizer(Noun)
A person who idealizes
Webster Dictionary
1. Idealizer(noun)
an idealist
1. Idealizer
In abstract algebra, the idealizer of a subsemigroup T of a semigroup S is the largest subsemigroup of S in which T is an ideal. Such an idealizer is given by {x ∈ S : xT ⊆ T and Tx ⊆ T}. In ring theory, if A is an additive subgroup of a ring R, then the idealizer of A, {r ∈ R : rA ⊆ A and Ar ⊆ A}, is the largest subring of R in which A is a two-sided ideal. In Lie algebra, if L is a Lie ring with Lie product [x,y], and S is an additive subgroup of L, then the set {r ∈ L : [r,S] ⊆ S} is classically called the normalizer of S; however, it is apparent that this set is actually the Lie ring equivalent of the idealizer. It is not necessary to mention that [S,r] ⊆ S, because anticommutativity of the Lie product gives [s,r] = −[r,s] ∈ S. The Lie "normalizer" of S is thus the largest subring of L in which S is a Lie ideal.
Flue Gas Steel Stack Design Calculations – Learn Chimney Design of Diesel Genset in India
• A flue gas stack or chimney is typically a vertical tubular structure used for ejecting exhaust flue gas to the atmosphere. You can see the chimney or flue gas stack in thermal power plants,
diesel gensets, kilns, and many other plants, where gases evolving from the combustion process need to be exhausted. The design calculation for flue gas stack varies from application to
application. Here in this article we will discuss the basic design criterion of diesel engine-driven genset flue gas stacks. See below how to calculate the diameter and height of the flue gas
stack of a diesel genset:
• Calculate Flue Gas Stack Height
- Calculate the specific fuel consumption of your diesel genset. Say it is X kg. Per hour.
- Find out the percentage of sulphur content in the diesel you are using. Say it is P%.
- Now, you have to calculate the sulphur dioxide (SO2) produced. Since the molecular weight of SO2 is double the atomic weight of sulphur, the SO2 produced is 2P% of the fuel burned.
- The height of the flue gas stack (in meters) according to SO2 emission can be calculated as:
Height (H) = (X*2P)/100…………….Eqn. 1.1
- Now, you have to check out the recommended minimum chimney height by the Central Pollution Control Board (CPCB). In case the height calculated from the Eqn. 1.1 is higher than the recommended
height by CPCB then you go ahead with the calculated height or else you have to stick to the CPCB recommended height.
• Calculate Flue Gas Stack Diameter
- Calculate the exhaust gas quantity. Say it is Y kg per hour.
- Select the flue gas velocity you want to keep inside the stack. Say Z meters per second (Recommended flue gas velocity inside the stack is 16 to 20 m/sec as per IS: 6533).
- Diameter (in mm) of the flue gas stack can be calculated as (see the worked example after this list):
Diameter (D) = [(4*Y)/(3.142*Z)]^0.5 .......Eqn.1.2
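A worked example putting Eqn. 1.1 and Eqn. 1.2 together (a sketch in Python; the input numbers are made up for illustration, and the formulas are used exactly as given above, with units following the article's conventions):

```python
import math

# Illustrative inputs only
X = 220.0   # specific fuel consumption of the genset, kg of diesel per hour
P = 0.5     # sulphur content of the fuel, percent
Y = 3500.0  # exhaust gas quantity, kg per hour
Z = 18.0    # chosen flue gas velocity in the stack, m/s (16-20 m/s per IS: 6533)

# Eqn. 1.1: stack height in metres based on SO2 emission.
# Use the larger of this value and the minimum height recommended by the CPCB.
H = (X * 2 * P) / 100.0

# Eqn. 1.2: stack diameter in mm, with the formula taken as given in the article.
D = math.sqrt((4.0 * Y) / (3.142 * Z))

print(f"stack height   H = {H:.1f} m")
print(f"stack diameter D = {D:.1f} mm")
```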
• Conclusion
The design of the steel stack or chimney is important both for diesel genset performance and from an air pollution point of view. While doing the flue gas steel stack design calculations you should consider the design formulas as well as the local pollution control norms.
Relative Standard Deviation - Mathematica
November 29th 2012, 01:54 PM #1
Relative Standard Deviation - Mathematica
I am using Mathematica and I was wondering if there is a function that can calculate the relative standard deviation in Mathematica. I researched the Mathematica site and I can't seem to find it. Would I have to individually calculate the standard deviation (using Mathematica) and xbar, divide one by the other, and multiply by 100?
Thanks for any help you may be able to give me!
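For reference, the relative standard deviation is just 100 times the standard deviation divided by the mean, so combining the built-in Mean and StandardDeviation functions as described in the question (e.g. 100 StandardDeviation[data]/Mean[data]) works. The same computation sketched in Python for comparison (the data are illustrative):

```python
from statistics import mean, stdev

data = [4.1, 3.9, 4.3, 4.0, 4.2]

rsd = 100 * stdev(data) / mean(data)   # relative standard deviation, in percent
print(f"RSD = {rsd:.2f}%")
```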
Phase-Amplitude Descriptions of Neural Oscillator Models
Phase oscillators are a common starting point for the reduced description of many single neuron models that exhibit a strongly attracting limit cycle. The framework for analysing such models in
response to weak perturbations is now particularly well advanced, and has allowed for the development of a theory of weakly connected neural networks. However, the strong-attraction assumption may
well not be the natural one for many neural oscillator models. For example, the popular conductance based Morris–Lecar model is known to respond to periodic pulsatile stimulation in a chaotic fashion
that cannot be adequately described with a phase reduction. In this paper, we generalise the phase description to one that allows the evolution of the distance from cycle to be tracked, as well as the phase on cycle. We use a classical technique from the theory of ordinary differential equations that makes use of a moving coordinate system to analyse periodic orbits. The subsequent phase-amplitude
description is shown to be very well suited to understanding the response of the oscillator to external stimuli (which are not necessarily weak). We consider a number of examples of neural oscillator
models, ranging from planar through to high dimensional models, to illustrate the effectiveness of this approach in providing an improvement over the standard phase-reduction technique. As an
explicit application of this phase-amplitude framework, we consider in some detail the response of a generic planar model where the strong-attraction assumption does not hold, and examine the
response of the system to periodic pulsatile forcing. In addition, we explore how the presence of dynamical shear can lead to a chaotic response.
Phase-amplitude; Oscillator; Chaos; Non-weak coupling
1 Introduction
One only has to look at the plethora of papers and books on the topic of phase oscillators in mathematical neuroscience to see the enormous impact that this tool from dynamical systems theory has had
on the way we think about describing neurons and neural networks. Much of this work has its roots in the theory of ordinary differential equations (ODEs) and has been promoted for many years in the
work of Winfree [1], Guckenheimer [2], Holmes [3], Kopell [4], Ermentrout [5] and Izhikevich [6] to name but a few. For a recent survey, we refer the reader to the book by Ermentrout and Terman [7].
At heart, the classic phase reduction approach recognises that if a high dimensional non-linear dynamical system (as a model of a neuron) exhibits a stable limit cycle attractor then trajectories
near the cycle can be projected onto the cycle.
A natural phase variable is simply the time along the cycle (from some arbitrary origin) relative to the period of oscillation. The notion of phase can even be extended off the cycle using the
concept of isochrons [1]. They provide global information about the ‘latent phase’, namely the phase that will be asymptotically returned to for a trajectory with initial data within the basin of
attraction of an exponentially stable periodic orbit. More technically, isochrons can be viewed as the leaves of the invariant foliation of the stable manifold of a periodic orbit [8]. In rotating
frame coordinates given by phase and the leaf of the isochron foliation, the system has a skew-product structure, i.e. the equation of the phase decouples. However, it is a major challenge to find
the isochron foliation, and since it relies on the knowledge of the limit cycle it can only be found in special cases or numerically. There are now a number of complementary techniques that tackle
this latter challenge, and in particular we refer the reader to work of Guillamon and Huguet [9] (using Lie symmetries) and Osinga and Moehlis [10] (exploiting numerical continuation). More recent
work by Mauroy and Mezic [11] is especially appealing as it uses a simple forward integration algorithm, as illustrated in Fig. 1 for a Stuart–Landau oscillator. However, it is more common to
side-step the need for constructing global isochrons by restricting attention to a small neighbourhood of the limit cycle, where dynamics can simply be recast in the reduced form , where θ is the
phase around a cycle. This reduction to a phase description gives a nice simple dynamical system, albeit one that cannot describe evolution of trajectories in phase-space that are far away from the
limit cycle. However, the phase reduction formalism is useful in quantifying how a system (on or close to a cycle) responds to weak forcing, via the construction of the infinitesimal phase response
curve (iPRC). For a given high dimensional conductance based model this can be solved for numerically, though for some normal form descriptions closed form solutions are also known [12]. The iPRC at
a point on cycle is equal to the gradient of the (isochronal) phase at that point. This approach forms the basis for constructing models of weakly interacting oscillators, where the external forcing
is pictured as a function of the phase of a firing neuron. This has led to a great deal of work on phase-locking and central pattern generation in neural circuitry (see, for example [13]). Note that
the work in [9] goes beyond the notion of iPRC and introduces infinitesimal phase response surfaces (allowing evaluation of phase advancement even when the stimulus is off cycle), and see also the
work in [14] on non-infinitesimal PRCs.
Fig. 1. Isochrons of a Stuart–Landau oscillator model: , . The black curve represents the periodic orbit of the system, which is simply the unit circle for this model. The blue curves are the
isochrons obtained using the (forward) approach of Mauroy and Mezic [11]. The green dots are analytically obtained isochronal points [15]. Parameter values are , , and
The assumption that phase alone is enough to capture the essentials of neural response is one made more for mathematical convenience than out of physiological motivation. Indeed, for the popular type
I Morris–Lecar (ML) firing model with standard parameters, direct numerical simulations with pulsatile forcing show responses that cannot be explained solely with a phase model [16]. The failure of a
phase description is in itself no surprise and underlies why the community emphasises the use of the word weakly in the phrase “weakly connected neural networks”. Indeed, there are a number of
potential pitfalls when applying phase reduction techniques to a system that is not in a weakly forced regime. The typical construction of the phase response curve uses only linear information about
the isochrons and non-linear effects will come into play the further we move away from the limit cycle. This problem can be diminished by taking higher order approximations to the isochrons and using
this information in the construction of a higher order PRC [17]. Even using perfect information about isochrons, the phase reduction still assumes persistence of the limit-cycle and instantaneous
relaxation back to cycle. However, the presence of nearby invariant phase-space structures such as (unstable) fixed points and invariant manifolds may result in trajectories spending long periods of
time away from the limit cycle. Moreover, strong forcing will necessarily take one away from the neighbourhood of a cycle where a phase description is expected to hold. Thus, developing a reduced
description, which captures some notion of distance from cycle is a key component of any theory of forced limit cycle oscillators. The development of phase-amplitude models that better characterise
the response of popular high dimensional single neuron models is precisely the topic of this paper. Given that it is a major challenge to construct an isochronal foliation we use non-isochronal
phase-amplitude coordinates as a practical method for obtaining a more accurate description of neural systems. Recently, Medvedev [18] has used this approach to understand in more detail the
synchronisation of linearly coupled stochastic limit cycle oscillators.
In Sect. 2, we consider a general coordinate transformation, which recasts the dynamics of a system in terms of phase-amplitude coordinates. This approach is directly taken from the classical theory
for analysing periodic orbits of ODEs, originally considered for planar systems in [19], and for general systems in [20]. We advocate it here as one way to move beyond a purely phase-centric
perspective. We illustrate the transformation by applying it to a range of popular neuron models. In Sect. 3, we consider how inputs to the neuron are transformed under these coordinate
transformations and derive the evolution equations for the forced phase-amplitude system. This reduces to the standard phase description in the appropriate limit. Importantly, we show that the
behaviour of the phase-amplitude system is much more able to capture that of the original single neuron model from which it is derived. Focusing on pulsatile forcing, we explore the conditions for
neural oscillator models to exhibit shear induced chaos [16]. Finally in Sect. 4, we discuss the relevance of this work to developing a theory of network dynamics that can improve upon the standard
weak coupling approach.
2 Phase-Amplitude Coordinates
Throughout this paper, we study the dynamics prescribed by the system , , with solutions that satisfy . We will assume that the system admits an attracting hyperbolic periodic orbit (namely one
zero Floquet exponent and the others having negative real part), with period Δ, such that . A phase is naturally defined from . It has long been known in the dynamical systems community how to
construct a coordinate system based on this notion of phase as well as a distance from cycle; see [20] for a discussion. In fact, Ermentrout and Kopell [21] made good use of this approach to derive
the phase-interaction function for networks of weakly connected limit-cycle oscillators in the limit of infinitely fast attraction to cycle. However, this assumption is particularly extreme and
unlikely to hold for a broad class of single neuron models. Thus, it is interesting to return to the full phase-amplitude description. In essence, the transformation to these coordinates involves
setting up a moving orthonormal system around the limit cycle. One axis of this system is chosen to be in the direction of the tangent vector along the orbit, and the remaining are chosen to be
orthogonal. We introduce the normalised tangent vector ξ as
$$\xi(\theta) = \frac{f(u(\theta))}{|f(u(\theta))|}.$$
The remaining coordinate axes are conveniently grouped together as the columns of an $n \times (n-1)$ matrix ζ. In this case we can write an arbitrary point x as
$$x = u(\theta) + \zeta(\theta)\rho.$$
Here, $|\rho|$ represents the Euclidean distance from the limit cycle. A caricature of the coordinate system along an orbit segment is shown in Fig. 2. Through the use of the variable ρ, we can
consider points away from the periodic orbit. Rather than being isochronal, lines of constant θ are simply straight lines that emanate from a point on the orbit in the direction of the normal. The
technical details of specifying the orthonormal coordinates forming ζ are discussed in Appendix A.
Fig. 2. Demonstration of the moving orthonormal coordinate system along an orbit segment. As t evolves from to , the coordinates vary smoothly. In this planar example, ζ always points to the
outside of the orbit
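As a concrete illustration of this construction, the sketch below sets up the moving frame numerically for a planar limit cycle, assuming the orbit is available as an array of samples. It is illustrative only (the names are mine, and it is not the code used for the figures in this paper):

```python
import numpy as np

def moving_frame(orbit_points):
    """Given samples u(theta_i) of a planar periodic orbit (array of shape (N, 2)),
    return the unit tangent xi and unit normal zeta at each sample point.
    xi is the normalised tangent along the orbit; zeta is xi rotated by 90 degrees,
    so that (xi, zeta) forms an orthonormal frame at every point."""
    du = np.gradient(orbit_points, axis=0)            # finite-difference tangent
    xi = du / np.linalg.norm(du, axis=1, keepdims=True)
    zeta = np.column_stack((-xi[:, 1], xi[:, 0]))     # rotate tangent by +90 degrees
    return xi, zeta

# A point off the cycle is then written as x = u(theta) + rho * zeta(theta),
# with rho the (signed, in the planar case) distance from the cycle.
```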
Upon projecting the dynamics onto the moving orthonormal system, we obtain the dynamics of the transformed system:
and Df is the Jacobian of the vector field f evaluated along the periodic orbit u. The derivation of this system may be found in Appendix A. It is straightforward to show that as , and that . In
the above, captures the shear present in the system, that is, whether the speed of θ increases or decreases dependent on the distance from cycle. A precise definition for shear is given in [22].
Additionally, describes the θ-dependent rate of attraction or repulsion from cycle.
It is pertinent to consider where this coordinate transformation breaks down, that is, where the determinant of the Jacobian of the transformation vanishes. This never vanishes on-cycle (where ),
but may do so for some . This sets an upper bound on how far away from the limit cycle we can describe the system using these phase-amplitude coordinates. In Fig. 3, we plot the curve along which
the transformation breaks down for the ML model. We observe that, for some values of θ, k is relatively smaller. The breakdown occurs where lines of constant θ cross, and thus the transformation
ceases to be invertible, and these values of θ correspond to points along which the orbit has high curvature. We note that this issue is less problematical in higher dimensional models.
Fig. 3. This figure shows the determinant K of the phase-amplitude transformation for the ML model. Colours indicate the value of K. The red contour indicates where , and thus where the coordinate
transformation breaks down. Note how all of the values for which this occurs have . Parameter values as in Appendix B.1
If we now consider the driven system,
where ε is not necessarily small, we may apply the same transformation as above to obtain the dynamics in coordinates, where and , as
and is the identity matrix. Here, h and B describe the effect in terms of θ and ρ that the perturbations have. Details of the derivation are given in Appendix A. For planar models, . To
demonstrate the application of the above coordinate transformation, we now consider some popular single neuron models.
2.1 A 2D Conductance Based Model
The ML model was originally developed to describe the voltage dynamics of barnacle giant muscle fibre [23], and is now a popular modelling choice in computational neuroscience [7]. It is written as a
pair of coupled non-linear ODEs of the form
Here, v is the membrane voltage, whilst w is a gating variable, describing the fraction of membrane ion channels that are open at any time. The first equation expresses Kirchoff’s current law across
the cell membrane, with representing a stimulus in the form of an injected current. The detailed form of the model is completed in Appendix B.1. The ML model has a very rich bifurcation structure.
Roughly speaking, by varying a constant current , one observes, in different parameter regions, dynamical regimes corresponding to sinks, limit cycles, and Hopf, saddle-node and homoclinic
bifurcations, as well as combinations of the above. These scenarios are discussed in detail in [7] and [24].
As the ML model is planar, ρ is a scalar, as are the functions A and . This allows us to use the moving coordinate system to clearly visualise parts of phase space where trajectories are attracted
towards the limit cycle, and parts in which they move away from it, as illustrated in Fig. 4. The functions and A, evaluated at are shown in Fig. 5. The evolution of θ is mostly constant, however
we clearly observe portions of the trajectories where this is slowed, along which . In fact, this corresponds to where trajectories pass near to the saddle node, and the dynamics stall. This occurs
around , and in Fig. 5 we see that both and are indeed close to 0. The reduced velocities of trajectories here highlight the importance of considering other phase space structures in forced
systems, the details of which are missed in standard phase-only models. Forcing in the presence of such structures may give rise to complex and even chaotic behaviours, as we shall see in Sect. 3.
Fig. 4. Typical trajectory of the ML model of the transformed system. Left: Time evolution of θ and ρ. Right: Trajectory plotted in the phase plane. We see that when ρ has a local maximum, the
evolution of θ slows down, corresponding to where trajectories pass near to the saddle node. Parameter values as in Appendix B.1
Fig. 5. , , and A for the ML model, evaluated at . We clearly see the difference in the order of magnitude between and for small ρ. Note that, although the average of A over one period is
negative, it is positive for a non-trivial interval of θ. This corresponds to movement close to the stable manifold of the saddle node. Parameter values as in Appendix B.1
In the next example, we show how the same ideas go across to higher dimensional models.
2.2 A 4D Conductance Based Model
The Connor–Stevens (CS) model [25] is built upon the Hodgkin–Huxley formalism and comprises a fast Na^+ current, a delayed K^+ current, a leak current and a transient K^+ current, termed the
A-current. The full CS model consists of 6 equations: the membrane potential, the original Hodgkin–Huxley gating variables, and an activating and inactivating gating variable for the A-current. Using
the method of equivalent potentials [26], we may reduce the dimensionality of the system, to include only 4 variables. The reduced system is
The details of the reduced CS model are completed in Appendix B.2. The solutions to the reduced CS model under the coordinate transformation may be seen in Fig. 6, whilst, in Fig. 7, we show how this
solution looks in the original coordinates. As for the ML model, θ evolves approximately constantly throughout, though this evolution is sped up close to . The trajectories of the vector ρ are more
complicated, but note that there is regularity in the pattern exhibited, and that this occurs with approximately the same period as the underlying limit cycle. The damping of the amplitude of
oscillations in ρ over successive periods represents the overall attraction to the limit cycle, whilst the regular behaviour of ρ represents the specific relaxation to cycle as shown in Fig. 7.
Additional file 1 shows a movie of the trajectory in coordinates with the moving orthonormal system superimposed, as well as the solution in ρ for comparison.
Fig. 6. Solution of the transformed CS system. Top: Time evolution of θ. Bottom: Time evolution of ρ coordinates. Upon transforming these coordinates back to the original ones, we arrive at Fig. 7.
Parameter values given in Appendix B.2. In this parameter regime, the model exhibits type I firing dynamics
Fig. 7. Transformed trajectory in space of the phase-amplitude description of the reduced CS model. The dotted black line is the phase amplitude solution transformed in the original coordinates,
whilst the coloured orbit is the underlying periodic orbit, where the colour corresponds to the phase along the orbit. The solution of the phase-amplitude description of the model in coordinates is
shown in Fig. 6
3 Pulsatile Forcing of Phase-Amplitude Oscillators
We now consider a system with time-dependent forcing, given by (7) with
where δ is the Dirac δ-function. This describes T-periodic kicks to the voltage variable. Even such a simple forcing paradigm can give rise to rich dynamics [16]. For the periodically kicked ML
model, shear forces can lead to chaotic dynamics as folds and horseshoes accumulate under the forcing. This means that the response of the neuron is extremely sensitive to the initial phase when the
kicks occur. In terms of neural response, this means that the neuron is unreliable [27].
The behaviour of oscillators under such periodic pulsatile forcing is the subject of a number of studies; see, e.g. [27-30]. Of particular relevance here is [27], in which a qualitative reasoning of
the mechanisms that bring about shear in such models is supplemented by direct numerical simulations to detect the presence of chaotic solutions. For the ML model in a parameter region close to the
homoclinic regime, kicks can cause trajectories to pass near the saddle-node, and folds may occur as a result [16].
We here would like to compare full planar neural models to the simple model, studied in [27]:
This system exhibits dynamical shear, which under certain conditions, can lead to chaotic dynamics. The shear parameter σ dictates how much trajectories are ‘sped up’ or ‘slowed down’ dependent on
their distance from the limit cycle, whilst λ is the rate of attraction back to the limit cycle, which is independent of θ. Supposing that the function P is smooth but non-constant, trajectories will
be taken a variable distance from the cycle upon the application of the kick. When kicks are repeated, this geometric mechanism can lead to repeated stretching and folding of phase space. It is clear
that the larger σ is in (15), the more shear is present, and the more likely we are to observe the folding effect. In a similar way, smaller values of λ mean that the shear has longer to act upon
trajectories and again result in a greater likelihood of chaos. Finally, to observe chaotic response, we must ensure that the shear forces have sufficient time to act, meaning that T, the time
between kicks must not be too small.
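To make this concrete, here is a minimal simulation sketch of a kicked shear system of this type. It assumes a specific form for the model, namely θ̇ = 1 + σρ, ρ̇ = −λρ between kicks, with ρ → ρ + εP(θ) applied at times nT; this particular form and the kick profile are my assumptions for illustration, so Eq. (15) and [27] should be consulted for the exact model used in the text:

```python
import numpy as np

def kicked_shear_map(theta, rho, sigma=3.0, lam=0.5, eps=0.1, T=10.0):
    """One iterate of the time-T map: apply the kick rho -> rho + eps*P(theta),
    then flow the linear shear system exactly for time T.
    Between kicks:  d(theta)/dt = 1 + sigma*rho,  d(rho)/dt = -lam*rho."""
    rho = rho + eps * np.sin(2 * np.pi * theta)                     # illustrative kick profile P(theta)
    decay = np.exp(-lam * T)
    theta = (theta + T + sigma * rho * (1.0 - decay) / lam) % 1.0   # phase taken mod 1 here
    rho = rho * decay
    return theta, rho

# Iterate an ensemble of points initially on the cycle (rho = 0); plotting
# (theta, rho) after a few kicks shows the stretch-and-fold effect of Fig. 8.
theta = np.linspace(0.0, 1.0, 2000, endpoint=False)
rho = np.zeros_like(theta)
for _ in range(5):
    theta, rho = kicked_shear_map(theta, rho)
```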
This stretching and folding action can clearly lead to the formation of Smale horseshoes, which are well known to lead to a type of chaotic behaviour. However, horseshoes may co-exist with sinks,
meaning the resulting chaotic dynamics would be transient. Wang and Young proved that under appropriate conditions, there is a set of T of positive Lebesgue measure for which the system experiences a
stronger form of sustained, chaotic behaviour, characterised by the existence of a positive Lyapunov exponent for almost all initial conditions and the existence of a ‘strange attractor’; see, e.g. [
By comparing with the phase-amplitude dynamics described by Eqs. (8)–(9), we see that the model of shear considered in (15) is a proxy for a more general system, with , and , and .
To gain a deeper insight into the phenomenon of shear induced chaos, it is pertinent to study the isochrons of the limit cycle for the linear model (15), where the isochrons are simply lines with
slope . In Fig. 8, we depict the isochrons and stretch and fold action of shear. The bold thin grey line at is kicked, at , to the bold solid curve, where , as studied in [16] and this curve is
allowed to evolve under the dynamics with no further kicks through the dashed curve at and ultimately to the dotted curve at , which may be considered as evolutions of the solid curve by integer
multiples of the natural period of the oscillator. Every point of the dotted curve traverses the isochron it is on at until . The green marker shows an example of one such point evolving along its
associated isochron. The folding effect of this is clear in the figure, and further illustrated in the video in Additional file 2.
Fig. 8. Stretch-and-fold action of a kick followed by relaxation in the presence of shear. The thin black lines are the isochrons of the system, which in the case of the linear model (15), are simply
straight lines with slope . The thin grey line at represents the limit cycle, which is kicked, at by with strength to the solid curve. After this, the orbits are allowed to evolve under the flow
generated by the continuous part of the system. The dashed and dotted curves represent the image of the kicked solid curve under this flow, at times and , respectively. The green marker shows how
one point, evolves under the flow, first to and then to , following the isochron as it relaxes back to the limit cycle. The effect of the shear forces and the subsequent folding, caricatured by
the blue arrows can clearly be seen
This simple model with a harmonic form for provides insight into how strange attractors can be formed. Kicks along the isochrons or ones that map isochrons to one another will not produce strange
attractors, but merely phase-shifts. What causes the stretching and folding is the variation in how far points are moved as measured in the direction transverse to the isochrons. For the linear
system (15) variation in this sense is generated by any non-constant ; the larger the ratio , the larger the variation (see [16] for a recent discussion).
The formation of chaos in the ML model is shown in Fig. 9. Here, we plot the response to periodic pulsatile forcing, given by (14), in the coordinate system. This clearly illustrates a folding of
phase space around the limit-cycle, and is further portrayed in the video in Additional file 3. We now show how this can be understood using phase-amplitude coordinates.
Fig. 9. Shear induced folding in the ML model, with parameters as in Fig. 5. The red curve in all panels represents the limit cycle of the unperturbed system, whilst the green dotted line represents
the stable manifold of the saddle node, indicated by the orange marker. We begin distributing points along the limit cycle and then apply an instantaneous kick taking , where , leaving w unchanged.
This essentially moves all phase points to the left, to the blue curve. The successive panels show the image of this set of points after letting points evolve freely under the system defined by the
ML equations, and then apply the kick again. The curves shown are the images of the initial phase points just after each kick, as indicated in the figure. We can clearly observe the shear induced
folding. Parameter values as in Appendix B.1
Compared to the phenomenological system (15), models written in phase-amplitude coordinates as (8)–(9) have two main differences. The intrinsic dynamics (without kicks) are non-linear, and the kick
terms appear in both equations for and (not just ). Simulations of (8)–(9) for both the FHN and ML models, with , show that the replacement of by σρ, dropping (which is quadratic in ρ), and
setting does not lead to any significant qualitative change in behaviour (for a wide range of ). We therefore conclude that, at least when the kick amplitude ε is not too large, it is more
important to focus on the form of the forcing in the phase-amplitude coordinates. In what follows, we are interested in discovering the effects of different functional forms of the forcing term
multiplying the δ-function, keeping other factors fixed. As examples, we choose those forcing terms given by transforming the FitzHugh–Nagumo (FHN) and the ML models into phase-amplitude coordinates. To find these
functions, we first find the attracting limit cycle solution in the ML model (11) and FHN model (52) using a periodic boundary value problem solver and set up the orthonormal coordinate system around
this limit cycle. Once the coordinate system is established, we evaluate the functions and (that appear in Eqs. (8) and (9)). For planar systems, we have simply that . Using the forcing term (14),
we are only considering perturbations to the voltage component of our system and thus only the first component of h, and the first column of B will make a non-trivial contribution to the dynamics. We
define as the first component of h and as the first component of ζ. We wish to force each system at the same ratio of the natural frequency of the underlying periodic orbit. To ease comparison
between the system with the ML forcing terms and the FHN forcing terms, we rescale so that in what follows. Implementing the above choices leads to
It is important to emphasise that are determined by the underlying single neuron model (unlike in the toy model (15)). As emphasised in [31], one must take care in the treatment of the state
dependent ‘jumps’ caused by the δ-functions in (16) to accommodate the discontinuous nature of θ and ρ at the time of the kick. To solve (16), we approximate with a normalised square pulse of the form
where . This means that for , , the dynamics are governed by the linear system . This can be integrated to obtain the state of the system just before the arrival of the next kick, , in the form
In the interval and using (17) we now need to solve the system of non-linear ODEs
Rescaling time as , with , and writing the solution as a regular perturbation expansion in powers of τ as , we find after collecting terms of leading order in τ that the pair is governed by
with initial conditions . The solution (obtained numerically) can then be taken as initial data in (18)–(19) to form the stroboscopic map
Note that this has been constructed using a matched asymptotic expansion, using (17), and is valid in the limit . For weak forcing, where , vary slowly through the kick and can be approximated by
their values at so that to
Although this explicit map is convenient for numerical simulations, we prefer to work with the full stroboscopic map (22)–(23), which is particularly useful for comparing and contrasting the
behaviour of different planar single neuron models with arbitrary kick strength. As an indication of the presence of chaos in the dynamics resulting from this system, we evaluate the largest Lyapunov
exponent of the map (22)–(23) by numerically evolving a tangent vector and computing its rate of growth (or contraction); see e.g. [32] for details.
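A generic sketch of that procedure for a two-dimensional map is given below; it is illustrative only, with F standing in for the stroboscopic map (22)–(23) and the Jacobian-vector product approximated by finite differences:

```python
import numpy as np

def largest_lyapunov(F, x0, n_iter=10000, n_transient=1000, h=1e-7):
    """Estimate the largest Lyapunov exponent of a map x -> F(x), where F maps a
    length-2 array to a length-2 array.  A tangent vector is pushed forward with a
    finite-difference Jacobian-vector product and renormalised at every step; the
    exponent is the average logarithmic growth rate after discarding a transient."""
    x = np.asarray(x0, dtype=float)
    v = np.array([1.0, 0.0])
    total = 0.0
    for i in range(n_transient + n_iter):
        Jv = (F(x + h * v) - F(x)) / h      # approximate DF(x) v, since |v| = 1
        x = F(x)
        growth = np.linalg.norm(Jv)
        v = Jv / growth
        if i >= n_transient:
            total += np.log(growth)
    return total / n_iter

# Usage: largest_lyapunov(F, x0=np.array([0.1, 0.0])) for a suitable map F;
# a positive value indicates chaotic behaviour, a negative one a stable attractor.
```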
In Fig. 10, we compare the functions for both the FHN and the ML models. We note that for the FHN model is near 0 for a large set of θ, whilst the same is true for for the ML model. This means
that kicks in the FHN model will tend to primarily cause phase shifts, whilst the same kicks in the ML model will primarily cause shifts in amplitude.
Fig. 10. The blue curves show the change in θ under the action of a pulsatile kick in v, whilst the red dashed curves show the change in ρ under the same kick. The top plot is for the FHN model,
whilst the bottom plot is for the ML model. We evaluate the effect of the kicks at , where we observe the largest changes in θ under the action of kicks
We plot in the top row of Fig. 11 the pair , for (24)–(25) for the FHN and ML models. For the FHN model, we find that the system has a largest Lyapunov exponent of −0.0515. For the ML model the largest Lyapunov exponent is 0.6738. This implies that differences in the functional forms of can help to explain the generation of chaos.
Fig. 11. Panel (a) shows successive iterates of θ in system (22)–(23) with functions taken from the FHN model, whilst panel (b) presents the same iterates but for functions from the ML model. Panel
(c) shows for the FHN model, where the bold blue line is and the red dashed line is . Superimposed on this panel is a histogram displaying how kicks are distributed in terms of θ alone. Panel (d)
shows the same information, except this time for forcing functions from the ML model. Parameter values are , , , and
Now that we know the relative contribution of kicks in v to kicks in , it is also useful to know where kicks actually occur in terms of θ as this will determine the contribution of a train of kicks
to the dynamics. In Figs. 11c and d, we plot the distribution of kicks as a function of θ. For the ML model, we observe that the kicks are distributed over all phases, while for the FHN model there is a
grouping of kicks around the region where is roughly zero. This means that kicks will not be felt as much in the ρ variable, and so trajectories here do not get kicked far from cycle. This helps
explain why it is more difficult to generate chaotic responses in the FHN model.
After transients, we observe a phase-locked state for the FHN model. For a phase-locked state, small perturbations will ultimately decay as the perturbed trajectories also end up at the phase-locked
state after some transient behaviour. This results in a negative largest Lyapunov exponent of −0.0515. We note the sharply peaked distribution of kick phases, which is to be expected for
discrete-time systems possessing a negative largest Lyapunov exponent, since such systems tend to have sinks in this case. The phase-locked state here occurs where is small, suggesting that
trajectories stay close to the limit cycle. Since kicks do not move trajectories away from cycle, there is no possibility of folding, and hence no chaotic behaviour. For the ML model, we observe
chaotic dynamics around a strange attractor, where small perturbations can grow, leading to a positive largest Lyapunov exponent of 0.6738. This time, the kicks are distributed fairly uniformly
across θ, and so, some kicks will take trajectories away from the limit cycle, thus leading to shear-induced folding and chaotic behaviour.
4 Discussion
In this paper, we have used the notion of a moving orthonormal coordinate system around a limit cycle to study dynamics in a neighbourhood around it. This phase-amplitude coordinate system can be
constructed for any given ODE system supporting a limit cycle. A clear advantage of the transformed description over the original one is that it allows us to gain insight into the effect of time
dependent perturbations, using the notion of shear, as we have illustrated by performing case studies of popular neural models, in two and higher dimensions. Whilst this coordinate transformation
does not result in any reduction in dimensionality in the system, as is the case with classical phase reduction techniques, it opens up avenues for moving away from the weak coupling limit, where .
Importantly, it emphasises the role of the two functions and that provide more information about inputs to the system than the iPRC alone. It has been demonstrated that moderately small
perturbations can exert remarkable influence on dynamics in the presence of other invariant structures [16], which cannot be captured by a phase only description. In addition, small perturbations can
accumulate if the timescale of the perturbation is shorter than the timescale of attraction back to the limit cycle. This should be given particular consideration in the analysis of neural systems,
where oscillators may be connected to thousands of other units, so that small inputs can quickly accumulate.
One natural extension of this work is to move beyond the theory of weakly coupled oscillators to develop a framework for describing neural systems as networks of phase-amplitude units. This has
previously been considered for the case of weakly coupled weakly dissipative networks of non-linear planar oscillators (modelled by small dissipative perturbations of a Hamiltonian oscillator) [33-35]. It would be interesting to develop these ideas and obtain network descriptions of the following type:
with an appropriate identification of the interaction functions in terms of the biological interaction between neurons and the single neuron functions . Such phase-amplitude network models are
ideally suited to describing the behaviour of the mean-field signal in networks of strongly gap junction coupled ML neurons [36,37], which is known to vary because individual neurons make transitions
between cycles of different amplitudes. Moreover, in the same network weakly coupled oscillator theory fails to explain how the synchronous state can stabilise with increasing coupling strength
(predicting that it is always unstable), as observed numerically. All of the above are topics of ongoing research and will be reported upon elsewhere.
Appendix A: Derivation of the Transformed Dynamical System
Starting from
we make the transformation , giving
We proceed by projecting (29) onto , using (1). The left-hand side of (29) now reads:
where denotes the transpose of ξ and the right-hand side of (29) becomes
Upon projecting both sides of (29) onto , the left-hand side reads
whilst the right-hand side becomes
since and where Df denotes the Jacobian of f. Putting together the previous two equations yields
It may be easily seen that as and that and . Overall, combining (32) and (37) we arrive at the transformed system:
In order to evaluate the functions , , and A for models with dimension larger than two, we need to calculate . Defining by , the direction angles of , we have that
where the index i denotes the column entry of ζ and denotes the dot product between vectors x and y. Defining
where j denotes the row index, we have
By the quotient rule for vectors we find that
and that
Overall, we have that
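The explicit expressions for ζ and its derivatives are not reproduced in this extract. As a purely numerical alternative, the sketch below builds, at a single point on the orbit, the unit tangent ξ = f/|f| together with an orthonormal basis of its orthogonal complement by completing ξ to a full basis with a QR decomposition; the example vector field and the chosen point are placeholders. For a frame that varies smoothly with θ, such a basis would be propagated along the cycle (for instance by continuous Gram-Schmidt orthonormalization) rather than recomputed independently at each point.

import numpy as np

def orthonormal_frame(f_x):
    """Return (xi, Z): unit tangent xi = f/|f| and an n x (n-1) matrix Z whose
    columns form an orthonormal basis of the subspace orthogonal to xi."""
    f_x = np.asarray(f_x, dtype=float)
    n = f_x.size
    xi = f_x / np.linalg.norm(f_x)
    # Complete xi to an orthonormal basis of R^n via QR of [xi | I].
    Q, _ = np.linalg.qr(np.column_stack([xi, np.eye(n)]))
    # The first column of Q is +/- xi; the remaining n-1 columns span its orthogonal complement.
    if np.dot(Q[:, 0], xi) < 0:
        Q = -Q
    return xi, Q[:, 1:n]

# Example with a hypothetical 3-D vector field evaluated at one point on a cycle.
def f_example(x):
    return np.array([-x[1], x[0], -0.5 * x[2]])

x_on_cycle = np.array([1.0, 0.0, 0.0])
xi, Z = orthonormal_frame(f_example(x_on_cycle))
print("tangent:", xi)
print("orthogonal complement basis:\n", Z)
print("check Z^T xi ~ 0:", Z.T @ xi)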
Appendix B: Gallery of Models
B.1 Morris–Lecar
The ML equations describe the interaction of membrane voltage with just two ionic currents: and . Membrane ion channels are selective for specific types of ions; their dynamics are modelled here by
the gating variable w and the auxiliary functions , , and . The latter have the form
The function models the action of fast voltage-gated calcium ion channels; is the reversal (bias) potential for the calcium current and the corresponding conductance. The functions and similarly
describe the dynamics of slower-acting potassium channels, with its own reversal potential and conductance . The constants and characterise the leakage current that is present even when the
neuron is in a quiescent state. Parameter values are , , , , , , , , , , , , and .
B.2 Reduced Connor–Stevens Model
For the reduced CS model, we start with the full Hodgkin–Huxley model, with m, n, h as gating variables and use the method of equivalent potentials as treated in [26], giving rise to the following
form for the function g:
where and are evaluated at and . For the gating variables , we have
Parameter values are , , , , , , , , , and .
B.3 FitzHugh–Nagumo Model
The FHN model is a phenomenological model of spike generation, comprising two variables. The first represents the membrane potential and includes a cubic non-linearity, whilst the second variable is
a gating variable, similar to w in the ML model, which may be thought of as a recovery variable. The system is
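The system itself is not reproduced in this extract; one common form of the FHN equations, with illustrative placeholder parameter values, is sketched below.

def fhn_rhs(t, x, I=0.5, a=0.7, b=0.8, eps=0.08):
    """A common form of the FitzHugh-Nagumo model (parameters are placeholders)."""
    v, w = x
    dv = v - v**3 / 3.0 - w + I    # voltage-like variable with cubic non-linearity
    dw = eps * (v + a - b * w)     # slow recovery (gating-like) variable
    return [dv, dw]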
Electronic Supplementary Material
Additional file 1. A movie showing the moving orthonormal system for the Connor–Stevens model. The top panel shows a projection of the moving orthonormal system from the full space onto . Around
the point , where θ is the phase, we establish an orthonormal basis in a subspace of . As θ evolves, so does this coordinate system, as shown by the moving black lines, which represent the moving
orthonormal basis. In this movie, we choose some initial conditions off cycle, shown by the blue orbit. The ρ coordinates along the moving coordinate system are shown in the bottom panel (MOV 649 kB)
Additional file 2. A movie showing the stretch-and-fold action brought about by shear forces. In this movie, both the shear forces and the rate of attraction back to cycle are linear. The
limit cycle is first unravelled so that it may be represented by a straight line. We choose as our forcing function and apply it instantaneously at . We then allow the resulting image of the kicked
orbit to evolve under the flow generated by system (15) between kicks until (in arbitrary units). As the curve relaxes back to the cycle, we see that the shear forcing causes a fold in the curve to
develop. The accumulation of such folds over successive forcing periods can ultimately give rise to chaotic dynamics, which would not be observed in the corresponding phase-only model. The thinner
black lines represent the isochrons of the system which, in this simple example, are straight lines with slope . Since the isochrons of the system are unchanged between kicks, we observe that phase
points simply traverse the isochron they are kicked to as they relax back to cycle (MOV 769 kB)
Additional file 3. A movie showing the accumulation of folds in the kicked ML model. The thin black line represents the underlying periodic orbit of the system , with f taken for the ML model.
Every T units of time, we apply a kick taking , where , whilst leaving w unchanged, to all phase-points. The movie then shows the evolution of all of these phase-points. Please note that this movie
does not show trajectories of the system, but the image of points starting on the limit cycle, under the action of the kick composed with the flow generated by . This movie shows the action of 4 such
kicks. We observe that trajectories spend a long time near the saddle node to the bottom left of the figure, so that these trajectories travel more slowly than those close to the limit cycle. As we apply
more kicks, we see the folds developing and accumulating (MOV 2118 kB)
Competing Interests
The authors confirm that they have no competing interests of which they are aware.
Authors’ Contributions
KCAW, KKL, RT and SC contributed equally. All authors read and approved the final manuscript.
References
1. Guckenheimer J: Isochrons and phaseless sets. J Math Biol 1975, 1:259-273.
2. Cohen AH, Rand RH, Holmes PJ: Systems of coupled oscillators as models of central pattern generators. In Neural Control of Rhythmic Movements in Vertebrates. Wiley, New York; 1988.
3. Kopell N, Ermentrout GB: Symmetry and phaselocking in chains of weakly coupled oscillators. Commun Pure Appl Math 1986, 39:623-660.
4. Ermentrout GB: n:m phase-locking of weakly coupled oscillators. J Math Biol 1981, 12:327-342.
5. Izhikevich EM: Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. MIT Press, Cambridge; 2007.
6. Guillamon A, Huguet G: A computational and geometric approach to phase resetting curves and surfaces. SIAM J Appl Dyn Syst 2009, 8(3):1005-1042.
7. Osinga HM, Moehlis J: A continuation method for computing global isochrons. SIAM J Appl Dyn Syst 2010, 9(4):1201-1228.
8. Mauroy A, Mezic I: On the use of Fourier averages to compute the global isochrons of (quasi)periodic dynamics. Chaos 2012, 22(3): Article ID 033112.
9. Brown E, Moehlis J, Holmes P: On the phase reduction and response dynamics of neural oscillator populations. Neural Comput 2004, 16:673-715.
10. Achuthan S, Canavier CC: Phase-resetting curves determine synchronization, phase locking, and clustering in networks of neural oscillators. J Neurosci 2009, 29(16):5218-5233.
11. Yoshimura K: Phase reduction of stochastic limit-cycle oscillators. In Reviews of Nonlinear Dynamics and Complexity. Volume 3. Wiley, New York; 2010:59-90.
12. Lin KK, Wedgwood KCA, Coombes S, Young LS: Limitations of perturbative techniques in the analysis of rhythms and oscillations. J Math Biol 2013, 66:139-161.
13. Demir A, Suvak O: Quadratic approximations for the isochrons of oscillators: a general theory, advanced numerical methods and accurate phase computations. IEEE Trans Comput-Aided Des Integr Circuits Syst 2010, 29:1215-1228.
14. Medvedev GS: Synchronization of coupled stochastic limit cycle oscillators. Phys Lett A 2010, 374:1712-1720.
15. Diliberto SP: On systems of ordinary differential equations. In Contributions to the Theory of Nonlinear Oscillations. Princeton University Press, Princeton; 1950:1-38. [Annals of Mathematical Studies 20]
16. Ermentrout GB, Kopell N: Oscillator death in systems of coupled neural oscillators. SIAM J Appl Math 1990, 50:125-146.
17. Ott W, Stenlund M: From limit cycles to strange attractors. Commun Math Phys 2010, 296:215-249.
18. Morris C, Lecar H: Voltage oscillations in the barnacle giant muscle fiber. Biophys J 1981, 35:193-213.
19. Rinzel J, Ermentrout GB: Analysis of neural excitability and oscillations. In Methods in Neuronal Modeling. 1st edition. MIT Press, Cambridge; 1989:135-169.
20. Connor JA, Stevens CF: Prediction of repetitive firing behaviour from voltage clamp data on an isolated neurone soma. J Physiol 1971, 213:31-53.
21. Kepler TB, Abbott LF, Marder E: Reduction of conductance-based neuron models. Biol Cybern 1992, 66:381-387.
22. Lin KK, Young LS: Shear-induced chaos. Nonlinearity 2008, 21(5):899-922.
23. Wang Q, Young LS: Strange attractors with one direction of instability. Commun Math Phys 2001, 218:1-97.
24. Wang Q, Young LS: From invariant curves to strange attractors. Commun Math Phys 2002, 225:275-304.
25. Wang Q: Strange attractors in periodically-kicked limit cycles and Hopf bifurcations.
26. Catllá AJ, Schaeffer DG, Witelski TP, Monson EE, Lin AL: On spiking models for synaptic activity and impulsive differential equations. SIAM Rev 2008, 50:553-569.
27. Christiansen F, Rugh F: Computing Lyapunov spectra with continuous Gram–Schmidt orthonormalization. Nonlinearity 1997, 10:1063-1072.
28. Ashwin P: Weak coupling of strongly nonlinear, weakly dissipative identical oscillators.
29. Ashwin P, Dangelmayr G: Isochronicity-induced bifurcations in systems of weakly dissipative coupled oscillators. Dyn Stab Syst 2000, 15(3):263-286.
30. Ashwin P, Dangelmayr G: Reduced dynamics and symmetric solutions for globally coupled weakly dissipative oscillators. Dyn Syst 2005, 20(3):333-367.
31. Han SK, Kurrer C, Kuramoto Y: Dephasing and bursting in coupled neural oscillators. Phys Rev Lett 1995, 75:3190-3193.
32. Coombes S: Neuronal networks with gap junctions: a study of piecewise linear planar neuron models. SIAM J Appl Dyn Syst 2008, 7(3):1101-1129.
question about topological spaces
May 19th 2008, 05:23 PM #1
I need to show that if S is a subset of a topological space X, then int(X\S) = X\(closure of S). I'm sure this comes from some set-theoretical properties, but I can't figure it out. Help!
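One standard argument, sketched here in case it helps (writing $\operatorname{cl}(S)$ for the closure of $S$): for every $x \in X$,
$$x \in \operatorname{int}(X\setminus S) \iff \exists\, U \text{ open with } x \in U \subseteq X\setminus S \iff \exists\, U \text{ open with } x \in U,\ U \cap S = \emptyset \iff x \notin \operatorname{cl}(S) \iff x \in X\setminus\operatorname{cl}(S),$$
where the middle step uses the fact that $x \in \operatorname{cl}(S)$ exactly when every open set containing $x$ meets $S$. Since the chain of equivalences holds for every $x$, the two sets are equal: $\operatorname{int}(X\setminus S) = X\setminus\operatorname{cl}(S)$.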
Halexandria Foundation
I thought I'd post something that's a little less mundane than the marketing gimmicks spread across the internet today. Let's call it some real MAAT !
This is an N-dimensional star tetrahedron(known in ancient times as a Merkaba).
It is similar to magic squares etc. but in more than 2 dimensions.
The numbers of each point on a line all add to 27 (3 * 9 or 3 cubed or 9 + 9 + 9).
Other connections can also be made by adding 3 other imaginary points.
The points on these connections also add up to 27.
The center imaginary point has a value of 9.
Re: 09/09/09
Blue Jay... this is friggin' cool!
It does, however, raise the question: What is the significance of the three "imaginary" points? Clearly, they connect the two tetrahedrons... but how to visualize this... how to incorporate these
three numbers (3, 9, and 15) into the geometry!
Meanwhile, it's time for some links to here!
Re: 09/09/09
<< The Strong Grip of the Lions Paw>>
Re: 09/09/09
DocPtah wrote:Blue Jay... this is friggin' cool!
It does, however, raise the question: What is the significance of the three "imaginary" points? Clearly, they connect the two tetrahedrons... but how to visualize this... how to incorporate these
three numbers (3, 9, and 15) into the geometry!
Meanwhile, it's time for some links to here!
I've been thinking about those points for a few years now. Intuitively I think that this is simply a 3D representation of a geometry that has higher dimensions and that those 3 points have something
to do with that step up. I wonder if in 4 dimensions there would be 4 such 'imaginary' points ?
Maybe someone else will have some enlightenment
Re: 09/09/09
It must be a Sunday; else why would I be using Geomags to duplicate this drawing in 3D?
Still, the process may have had its rewards. I did notice, for example, something -- which may or may not have been known to BlueJayWay (but who did NOT draw lines to connect the appropriate dots) --
is that the 3 and 15 connect to mid-points in the blue and red tetrahedrons in the following manner:
3 + 7 + 17 = 27
3 + 11 + 13 = 27
15 + 11 + 1 = 27
15 + 7 + 5 = 27
(Each mid point is actually combined twice, with 1 + 17, 5 + 13 being combined with 9 to, of course, yield 27.)
As already pointed out: 3 + 8 + 16 = 27; 15 + 2 + 10 = 27; and 3 + 15 + 9 = 27...
Therefore... not counting the links between the green and blue/red tetrahedrons, both 3 and 15 connect in three "directions" via the common midpoints (just like the vertices in the blue and red
tetrahedrons) to yield 27, and can therefore also be viewed as a part of their respective ("extra-dimensional") green tetrahedrons. Meanwhile, both the "green tetrahedrons" are intermingled with the
red and blue tetrahedrons.
Accordingly, it might appear that this yields 4 tetrahedrons with possibly 9 as a focal point.
Admittedly, one could also add 8 (BV) + 6 (RV) + 13 (MP) to yield 27... as well as 12 (BV) + 16 (RV) + 5 (MP) to yield 27... but then again there are roughly eight combinations for any given number
(nine if zero is included).
BTW, there are four (and only four) "doubles": 10 + 17, 11 + 16, 12 + 15, and 13 + 14... all of which add to 27. But with only four "sides", they are insufficient for a fifth tetrahedron (even
including "0" in the mixes). On the other hand, BlueJayWay's suggested additional imaginary number might be 18, that would combine with 9 for the fifth side of a fifth tetrahedron.
Obviously, my college minor in addition is paying off. Now if I just figure out what it means!
Re: 09/09/09
DocPtah wrote:It must be a Sunday; else why would I be using Geomags to duplicate this drawing in 3D?
It's also Sunday 09/27/2009 !
DocPtah wrote:Still, the process may have had its rewards. I did notice, for example, something -- which may or may not have been known to BlueJayWay (but who did NOT draw lines to connect the
appropriate dots) -- is that the 3 and 15 connect to mid-points in the blue and red tetrahedrons in the following manner:
3 + 7 + 17 = 27
3 + 11 + 13 = 27
15 + 11 + 1 = 27
15 + 7 + 5 = 27
(Each mid point is actually combined twice, with 1 + 17, 5 + 13 being combined with 9 to, of course, yield 27.)
I did not notice this either !
DocPtah wrote:Accordingly, it might appear that this yields 4 tetrahedrons with possibly 9 as a focal point.
Admittedly, one could also add 8 (BV) + 6 (RV) + 13 (MP) to yield 27... as well as 12 (BV) + 16 (RV) + 5 (MP) to yield 27... but then again there are roughly eight combinations for any given
number (nine if zero is included).
BTW, there are four (and only four) "doubles": 10 + 17, 11 + 15, 12 + 15, and 13 + 14... all of which add to 27. But with only four "sides", they are insufficient for a fifth tetrahedron (even
including "0" in the mixes). On the other hand, BlueJayWay's suggested additional imaginary number might be 18, that would combine with 9 for the fifth side of a fifth tetrahedron.
Obviously, my college minor in addition is paying off. Now if I just figure out what it means!
Very cool. I'd also love to figure out the meaning !
Re: 09/09/09
The 9 system is the key to the universe. We live in a 3 dimensional universe, therefore all math is based on 3. I have a vast knowledge of the 9 system, of which I will share a little to get your interest. Angles and degrees: to find the value of a cube is to measure all of its angles; it has 8 corners at 3 angles of 90 degrees, so 8*3*90=2160. Now take a circle and make it 3 dimensional by adding 360+360+360 or 3*360, which = 1080. So the value of a sphere is 1080. Notice the value of a sphere is exactly half of a cube. To prove this let's place the value of 4 into the equation. A 4 inch cube is 4*4*4=64. A 4 inch sphere is radius times diameter times diameter, or 2*4*4=32. Notice 32 is half of 64. If you place a 4 inch sphere inside a 4 inch cube, the sphere will superimpose the cube on all the half center marks. Think about it.
Re: 09/09/09
notice the 4 double digits added together=90 . All the double digits add to 27. You said there are only 4 double digits that = 27. Notice 4*27=108 and add a 0 now is 1080. Notice 4*90=360.
Re: 09/09/09
Thanks Forseen for your application of nines in geometry. Not sure just how this relates to a date, but perhaps it's time for a new thread on the fascinating topic of nines.
I might mention in passing, however, that Nines are discussed in some detail in the main website, including the role of nines in the solar system. For example, the diameter of the Moon is 2,160
miles. AND the diameter of the Earth is roughly 11/3 of that of the Moon.
Re: 09/09/09
1080 is the whole value as 540 is the half. Notice these numbers add to 9. 1*9=9 2*9=18 3*9=27 4*9=36 5*9=45 6*9=54 7*9=63 8*9=72 9*9=81 and one more 9 makes it whole. Notice all the answers add to 9 and all the answers added together equals 504, which also adds to 9. The 9 system is ancient math all the way to Nostradamus, and all knowledge is in this system. The Bible code, Mayans, I Ching, physics, astrology, planets: everything is all about the 9. How do we get the world to notice this? Also notice Nineveh is in the Bible; coincidence?
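For what it is worth, the digit-sum pattern quoted above can be checked mechanically. The Python sketch below computes the "digital root" (repeated digit sum) and confirms that every positive multiple of 9 up to 90000 reduces to 9; the range is an arbitrary choice, not anything special.

def digital_root(n):
    # repeatedly sum the decimal digits until a single digit remains
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

print(all(digital_root(9 * k) == 9 for k in range(1, 10001)))   # prints True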
Re: 09/09/09
the value of a cube is 2160, the value of a sphere is 1080, the value of a tetrahedron is 720, the value of the Star of David is 1440 and the value of the merkaba is 2160. The Nineveh?
Re: 09/09/09
DocPtah wrote:Thanks Forseen for your application of nines in geometry. Not sure just how this relates to a date, but perhaps it's time for a new thread on the fascinating topic of nines.
I might mention in passing, however, that Nines are discussed in some detail in the main website, including the role of nines in the solar system. For example, the diameter of the Moon is 2,160
miles. AND the diameter of the Earth is roughly 11/3 of that of the Moon.
I think I have read your article "Nines" more than once. I think I reached a point where it displayed Chaldean Numerology. I think this article is where I read about the latter. I thought it was
cool. I used it to do my name. I'm supposed to learn discipline according to my ultimate destiny, i think. It's already been awhile. i better skim over it to be sure. Nine is considered good luck,
but in certain circumstances considered an evil omen. I have been interested ever since my discovery of this piece of information. Nine is the furthest distance away from the Source and these Rays
are known as Horizon. 9 only combines with 1 to make 10, with departure away and arrival upon another Octave. So perhaps 9 reaches Horizon and the motives for remaining there over-long, decide to
some extent why the good or the evil connotations exist. I'm only blathering away to see if I can make some foam for soap. In fact, I don't even think the last sentence made any sense.
Those were some interesting pieces of information given in foreseen's last post.
Re: 09/09/09
I recently looked at the pyramid and noticed that one side is 756 feet, so 7+5+6=18, then 1+8=9. The pyramid has 4 sides, so 4 times 756 = 3024, and 3+2+4=9. Then I looked into finding the value of a pyramid and it came to 1080. I found it interesting that the value of the sphere, 1080, is the same as the pyramid, 1080, but then if we made another pyramid and put the two together it would make a cube, which is 2160. Then I saw the dimensions of the tomb inside, which are 30 by 18 by 15. Multiplied, this is 8100. Added together, it is 63, which is in the 9 system.
Re: 09/09/09
Yes, nines are so fascinating... Over the previous few years 9s appeared everywhere in my life, particularly involving another person who was in my life. So thanks Dorman for your rave, as now I will be looking for the 10 to appear more often... makes sense.
Re: 09/09/09
Swabhava. I use the word character beyond the generally understood meaning of it. Although I hear interest in the subject concerning the ONE, I think a Monad goes through manifestations with its
primal essence consisting of everything within the ONE or ALL. It is latent. What is more relevant to our limitations might be our own essence, and being more complex, our Monad's essence. If the latter utilized ethereal bodies in order to reach this physical form through graduations which remain connected, in earlier manvantaras, it develops an essence which is the only thing it can
self regenerate. Self directed evolution. Speaking of the ALL can only be done with generalities, and if science finds it helpful toward ever increasing ability to identify new forces, and using them
to manipulate substances, this is cool, but math only defines clearly whatever one is trying to communicate. So if a man can't understand, what is math going to do? it will communicate precisely, a
conclusion irremediably requiring metaphysical assumptions.
The way I mentioned character, in my seeing it as more than a generally understood term, approaches Kundalini, because within the life of the physical body, character in one interpretation, I
conceive as centers of force, while post-mortem they are a life stream.
Nine gets me thinking along these lines. Keter is 1 on the Qabbalah Tree of Life, and Malchuth is nine down from Keter. Malchuth is 10 because it is an arrival point from a lower octave we no longer
exist within, or at least we are certainly not aspiring to. Malchuth to this lower octave is some abstraction just as whatever it is within Qabbalah that might be 10 upon a higher octave. This is
where I have to leave my usage of the Qabbalah. I have a problem with their lay-out, which uses a Tree of Life upon each heaven, commonly known are four. Forty. If all seven heavens, then 70. I
conceive Loka's and tala's, the latter elemental, former forces. Although never strictly elemental, and never strictly force. This globe is a tala loka combined. The other's separate until at the
seventh or eighth, depending how you start your count. Each Loka and tala divide into seven rays, octaves, densities, which also divide in same, and so on. Ten and even twelve apply here. This is
where I arrive upon the statement concerning the seven spheres, unable to pierce the Three. I leave my understanding.
Sorry for being lengthy and barely on topic. Perhaps I'll save this post somewhere else.
New Rochelle Algebra Tutor
Find a New Rochelle Algebra Tutor
...I try to put myself in the shoes of my students. This way, I know exactly how to show a student how to do a problem, because I can know exactly what parts of a problem a student is struggling
with. I never assume any knowledge that a student may have and I never say "don't worry, this is easy" ...
6 Subjects: including algebra 1, algebra 2, calculus, prealgebra
...I volunteer as a chess teacher at a school in Brooklyn. I have a large breadth of teaching experience, which I believe distinguishes me from other chess instructors. As a debate coach for the
Berkeley Carroll School for five years, I worked with high school students one-on-one during national-level tournaments, as well as in the classroom.
13 Subjects: including algebra 1, algebra 2, Spanish, English
...Additionally, I have taught Algebra as part of GED preparation during summer school at Green Correctional Center for two summers. For this subject, my teaching strategy is to relate the problems
to real world problems. I earned my license to teach English Language Arts at Western Oregon University.
15 Subjects: including algebra 1, English, grammar, GED
...After high school, I spent a gap year in an Israeli school and spoke Hebrew the entire year. In addition to completing two semesters of organic chemistry in college, I have spent the past two
years participating in academic research in organic chemistry. This research requires me to have a working knowledge of both theory and practical methods in organic chemistry.
8 Subjects: including algebra 1, algebra 2, chemistry, geometry
I am a Master's graduate in Elementary and Special Education, currently pursuing my doctorate in Education and Curriculum. I have been teaching for 10 years as well as tutoring underprivileged
students in low income areas. I believe each students has their own way of learning and I do not mind catering to their way of learning.
12 Subjects: including algebra 1, reading, writing, literature
Charge conjugation and time reversal in the SM
Hi everyone. I have a doubt on charge conjugation symmetry. Consider the Standard Model Lagrangian with just the gauge and the fermionic part (no Higgs and no Yukawa). This is invariant under $SU(3)_C\times SU(2)_L\times SU(2)_R\times U(1)_Y$. Moreover, as any other field theory, it is $CPT$ invariant.
Since, however, $P$ is clearly violated by the asymmetry in the left-right fields, we know that $CT$ must be violated too.
What can we say about $C$ and $T$ independently?
7 D in the W
7 D in the W
7 D in the W = "7 Days in the Week"
Can you guess these?
3 B M (S H T R)
3 P C
3 W on a T
4 C in the H H
4 H of the A
4 S and C in a S
4 S in a D of C
4 S of the Y
4 W of a C
5 A in a S P
5 R on the O F
5 S of G
5 T on a F
6 B to an O in C
6 is H a D
6 is the S P N
6 S of a C
6 S W M (S F)
7 D S
7 W of the W
8 B in a B
8 H in a R W D
8 L on a S
8 N in an O
8 T on an O
9 L of a C
9 M of P
9 P in S A
9 P in the S S
11 P in a F (S) T
12 D of J
12 M in a Y
12 N on a C
12 S of the Z
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: 7 D in the W
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: 7 D in the W
hi MathsIsFun,
Great puzzle, but tough for me. Only got 9 so far! Not worth hiding yet.
later: 17, half way.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: 7 D in the W
Hi MIF
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: 7 D in the W
Love your answers, anonimnystefy!
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: 7 D in the W
Thanks MIF!
Re: 7 D in the W
hi MathsisFun
Re: 7 D in the W
Sorry for peeking, but I think 4 H of the A is only partially correct, Bob.
But you did all of them!!! Congrats!
Last edited by anonimnystefy (2012-04-30 07:03:44)
Re: 7 D in the W
hi Stefy,
I had a little help from my friends.
What is wrong with 4 H of the A ?
Apart from the obvious that I wouldn't like to meet them.
Re: 7 D in the W
Last edited by anonimnystefy (2012-04-30 03:32:29)
Re: 7 D in the W
Arrhh, the penny has dropped. I see what you mean.
But both should be acceptable
Re: 7 D in the W
I almost forgot what that phrase means.
OK, then. I think the others are OK, too. This is interesting. Wonder if MIF has more of these he would like to post.
Re: 7 D in the W
Here's some from me.
3 W on M W
5 G R
2 is the O E P
5 M is (R) 8 K
5 P S
2 P in a Q
22 Y in a C
4 P in a R of B
Re: 7 D in the W
Hi Bob
Re: 7 D in the W
Well done! Both correct.
Re: 7 D in the W
Hi Bob
Last edited by anonimnystefy (2012-04-30 06:50:51)
Re: 7 D in the W
Hi Stefy,
Not quite what I had but I'm happy with that answer.
Re: 7 D in the W
I would not know this one without Google.
Re: 7 D in the W
It comes from a song. Google is one of my friends. Well done.
I hope you get them all, because I've forgotten what one of the answers is. Oh dear!
Re: 7 D in the W
For which one?
Re: 7 D in the W
Not saying. It'll come back to me in a moment. (I hope!)
Re: 7 D in the W
Tell me which one it is.
Re: 7 D in the W
Oh OK. Don't cry.
5 P S
It had a meaning when I typed it but, by the time I had done the others it had gone.
Must be my age. Help!
Re: 7 D in the W
Re: 7 D in the W
Good idea but it wasn't that.
Traffic measurements in packet communications networks
Patent number: 5274625
Application no: 07942873
Filed date: 1992-09-10
Issue date:
ABSTRACT
A packet communications network relies on a few simple parameters to characterize the wide variety of traffic offered to that network, such as peak bit rate, mean bit rate and average
packet burst length. A better representation of many types of traffic relies on an equivalent burst length which produces the same loss probability distribution, but assumes that the
distribution is uncorrelated and exponential. Access control and bandwidth management based on such an equivalent burst length produces improved decisions due to the more accurate
representation of the actual traffic distribution.
US Classes:
What is claimed is:
1. A network access control system for packet communications networks comprising a plurality of sources of digital traffic for transmission on one of said networks, means for
characterizing said digital traffic incoming to said one of said communications networks by the peak bit rate of said digital traffic, the mean bit rate of said digital traffic, and
the burst length of said digital traffic, means for determining the loss probability distribution of said digital traffic, means responsive to said distribution determining means for
determining an equivalent burst length for said digital traffic having the same peak and mean bit rates and producing said loss probability distribution under the assumption that the
burst and idle period distribution of said digital traffic is exponential and uncorrelated, and means utilizing said equivalent burst length for controlling the access of said digital
traffic to said one network.
2. The network access control system according to claim 1 further comprising an admissions buffer which is essentially infinite in length in comparison to said digital traffic for
connecting each of said sources to said one of said networks.
3. The network access control system according to claim 2 wherein said means for determining an equivalent burst length determines the equivalent burst length beq according to the
formula ##EQU5## where ##EQU6## and where z0 is the length of said admissions buffer, F(z0) is the value of said loss probability at said length z0, R is said peak bit rate, m is said
mean bit rate, and C is the transmission rate into said network.
4. The network access control system according to claim 1 further comprising an admissions buffer which is finite in comparison to said digital traffic for connecting each of said
sources to said one of said networks.
5. The network access control system according to claim 4 wherein said means for determining an equivalent burst length determines the equivalent burst length beq according to the
formula ##EQU7## where Î is given by ##EQU8## and where z0 is the length of said admissions buffer, F(z0) is the value of said loss probability at said length z0, R is said peak bit
rate, m is said mean bit rate, and C is the transmission rate into said network.
6. A packet communications network including a network access control system comprising a plurality of sources of digital traffic for transmission on said network, means for
characterizing said digital traffic incoming to said communications network by the peak bit rate of said digital traffic, the mean bit rate of said digital traffic, and the burst
length of said digital traffic, means for determining the loss probability distribution of said digital traffic, means responsive to said distribution determining means for determining
an equivalent burst length for said digital traffic having the same peak and mean bit rates and producing said loss probability distribution under the assumption that the burst and
idle period distribution of said digital traffic is exponential and uncorrelated, and means utilizing said equivalent burst length for controlling the access of said digital traffic to
said network.
7. The packet communications network according to claim 6 further comprising an admissions buffer which is essentially infinite in length in comparison to said digital traffic for
connecting each of said sources to said network.
8. The packet communications network according to claim 7 wherein said means for determining an equivalent burst length determines the equivalent burst length beq according to the
formula ##EQU9## where ##EQU10## and where z0 is the length of said admissions buffer, F(z0) is the value of said loss probability at said length z0, R is said peak bit rate, m is
said mean bit rate, and C is the transmission rate into said network.
9. The packet communications network according to claim 6 further comprising an admissions buffer which is finite in comparison to said digital traffic for connecting each of said
sources to said network.
10. The packet communications network according to claim 9 wherein said means for determining an equivalent burst length determines the equivalent burst length beq according to the
formula ##EQU11## where Î is given by ##EQU12## and where z0 is the length of said admissions buffer, F(z0) is the value of said loss probability at said length z0, R is said peak bit
rate, m is said mean bit rate, and C is the transmission rate into said network.
11. A method for controlling network access to a packet communications network comprising characterizing digital traffic incoming into said communications network by the peak bit rate
for said digital traffic, the mean bit rate for said digital traffic, and the burst length for said digital traffic, determining the loss probability distribution of said digital
traffic, in response to said step of distribution determination, determining an equivalent burst length for said digital traffic having the same peak and mean bit rates and producing
said loss probability distribution under the assumption that the burst and idle period distribution of said digital traffic is exponential and uncorrelated, and utilizing said
equivalent burst length for controlling the access of said digital traffic to said communications network.
12. The method of controlling network access according to claim 11 further comprising the step of storing said digital traffic in an admissions buffer which is essentially infinite in
length in comparison to said digital traffic.
13. The method of controlling access according to claim 12 further comprising the step of determining the equivalent burst length beq according to the formula ##EQU13## where ##EQU14##
and where z0 is the length of said admissions buffer, F(z0) is the value of said loss probability at said length z0, R is said peak bit rate, m is said mean bit rate, and C is the
transmission rate into said network.
14. The method of controlling access according to claim 11 further comprising the step of storing said digital traffic in an admissions buffer which is finite in comparison to said
digital traffic.
15. The method of controlling access according to claim 14 further comprising the step of determining the equivalent burst length beq according to the formula ##EQU15## where Î is
given by ##EQU16## and where z0 is the length of said admissions buffer, F(z0) is the value of said loss probability at said length z0, R is said peak bit rate, m is said mean bit
rate, and C is the transmission rate into said network.
TECHNICAL FIELD
This invention relates to packet communication networks and, more particularly, to characterizing the traffic offered to such networks so as to accurately capture all of the distributional and correlational effects of complex traffic with a very few, easily acquired traffic parameters which can be readily used to drive bandwidth management procedures.
BACKGROUND OF THE INVENTION
Modern high speed networking protocols provide both quality of service and bandwidth guarantees to every transport connection established across the network. Such guarantees are
achieved by means of an integrated set of procedures. One of the major inputs to this set of integrated procedures is an accurate but simple characterization of the connection traffic
offered to the network.
In such high speed packet switching networks, many different classes of traffic share the common transmission resources. The network must therefore be capable of carrying traffic
generated by a wide range of multimedia services such as text, image, voice and video. The traffic characteristics of such different sources vary dramatically from one another and yet
the network must provide a bandwidth and a quality of service guaranteed for each and every connection that is established across the network. It is therefore essential to provide a
technique for characterizing the traffic on a high speed switching network which is, on the one hand, simple and easy to measure or calculate and, on the other hand, which captures all
of the significant parameters of each of the widely diverse traffic sources.
Several standards bodies have heretofore proposed to characterize the traffic on each connection in a packet communications network utilize the following descriptors:
R: The peak pulse rate of the connection, in bits per second (bps).
m: The mean pulse rate of the connection, in bits per second (bps).
b: The duration of a burst period, in seconds.
These parameters are, for example, defined in CCITT Study Group XVIII, "Traffic Control and Resource Management in B-ISDN," CCITT Recommendation 1.371, February 1992, and CCITT Study
Group XVIII, "Addendum to T1.606--"Frame Relaying Bearer Service . . . ", Recommendation T1S1/90-175R4, 1990. Bandwidth management procedures based on this set of
traffic-characterizing parameters for operating a packet communications network are disclosed in "A Unified Approach to Bandwidth Allocation and Access Control in Fast Packet-Switched
Networks," by R. Guerin and L. Gun Proceedings of the IEEE INFOCOM '92, Florence Italy, pages 1-12, May 1992, and the copending application Ser. No. 07/932,440, filed Aug. 19, 1992,
and assigned to applicants' assignee.
These prior art bandwidth management techniques utilize these three descriptors to model user traffic by means of a two-state on/off fluid-flow model, by interpreting the b parameter
as the average burst duration. In this model, the traffic source is either idle and generating no data, or active and transmitting data at its peak rate. It is assumed that the idle
periods and the burst lengths are exponentially distributed and are independent from each other. Under these assumptions, the three descriptors R, m, and b have been used to
characterize the source statistics and have been used to derive bandwidth management algorithms which are relatively easy to implement. When the idle periods and burst lengths are in
fact exponentially distributed and independent from each other, these three descriptors do indeed fully characterize the source statistics and permit accurate bandwidth management.
Unfortunately, the actual user traffic offered to such packet communications systems is typically very complex and its impact on the performance of the network cannot be accurately
predicted by the use of these three descriptors alone. Even when using the on/off fluid characterization of the traffic, the real traffic may have far more complex distributional
characteristics than the simple exponential on/off model assumed in the prior art systems. In general, the burst lengths and the duration of the idle periods may have arbitrary
distributions and may also have distributions which are correlated with each other. If these arbitrary, possibly correlated, distributions are not captured in the characterization of
the traffic, the value and success of the bandwidth management procedures based upon the simplified exponential on/off model will be heavily impacted and may result in entirely
inappropriate bandwidth management decisions. Furthermore, even if the actual traffic generation process does have exponential on and off time distributions which are independent from
each other, the fluid-flow approximation ignores the microscopic stochastic representation of the underlying point process and focuses on the macroscopic correlations. That is, the
fluid-flow approximation ignores such things as packet length distributions, inter-arrival time distributions within a burst, and so forth, and relies on such parameters as the length
of the bursts and successive idle periods. With such a fluid-flow model, the same queuing behavior is obtained regardless of the packet length distributions. A more accurate (albeit
more complex) characterization of the underlying point process will indeed show the effects of second order stochastic behavior on the queuing behavior of the packets at the switches
in the network. A serious problem in the management of packet networks, therefore, is to better characterize the actual traffic process on the connections so as to permit more accurate
and more useful management procedures.
In addition to requiring more accurate characterizations of the traffic entering a packet network, it is also necessary to identify the parameters of this characterization through
simple procedures applied at the access point to the network. More specifically, it is necessary to provide a simple measurement technique for identifying parameters that accurately
represent the essential characteristics of the actual traffic on a connection. Such characteristics must be available sufficiently rapidly that they can be used to drive the bandwidth
management procedures which will produce useful results in time to operate the network.
SUMMARY OF THE INVENTION
In accordance with the illustrative embodiment of the present invention, the key distributional and correlational characteristics of a complex traffic process are approximated by a few simple traffic descriptors which have already been rigorously defined by various standards bodies. More particularly, the average burst duration parameter b is improved substantially by replacing it with an equivalent burst length b[eq]. Unlike the prior art approach, where the average duration b is observed directly, the equivalent burst length b[eq] is calculated from the actual queue length distribution F(z0), which can be observed directly at the packet network access point, sometimes more readily than the average burst duration b itself.
The equivalent burst length is utilized in the present invention, along with the peak and mean bit rates of the packet source, for bandwidth management in the packet communications
network. The improved value of the equivalent burst length results directly in improvements in the operation of the previously known bandwidth management algorithms based on these
parameters. That is, the new characterization of the source traffic provided by the present invention results in the ability to use many of the previously designed traffic management
algorithms without modification, and, at the same time, produce better results than was possible with the use of the simple average burst length parameter. The improvement flows
directly from utilizing the effect of the traffic distribution, that is, the buffer loss distribution, rather than relying on a burst length measurement which assumes a particular
distribution of such burst lengths. The actual burst length distributions can be distinctly different than the assumed on/off uncorrelated exponential distribution, and, furthermore,
the distribution can also vary over time, even when the average burst length duration remains constant.
The equivalent burst length b[eq] in accordance with the present invention is, in general, different from the average burst length b and, moreover, different in such a way as to capture more of the distributional and correlational effects of complex traffic. The use of the equivalent burst length b[eq] therefore improves the bandwidth management algorithms
utilizing this burst length as one of the basic characterizations of the traffic. Previously available management algorithms can therefore be used to produce improved management
results. Moreover, the improved characterization of the incoming traffic relies on a simple measurement taken at the network access point, thereby simplifying the implementation of
such congestion control algorithms and rendering periodic updating of the characterization parameters more readily implementable.
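The patent's own expressions (the ##EQU## placeholders in the claims and above) are not legible in this text. The Python sketch below is therefore only an illustration of the general idea under the standard single-source exponential on/off fluid approximation, in which the buffer-occupancy tail behaves like F(z) ~ exp(-theta*z) with theta = R(C - m)/(b(R - m)(R - C)C) and the prefactor is taken to be 1; solving for b from a measured pair (z0, F(z0)) then yields an equivalent burst length. The function name and the numerical example are assumptions, not the patented formulas.

import math

def equivalent_burst_length(z0, F_z0, R, m, C):
    """Equivalent burst length (seconds) from an observed buffer-occupancy tail value.

    Assumes the standard exponential on/off fluid approximation
        P(Q > z) ~ exp(-theta * z),  theta = R*(C - m) / (b*(R - m)*(R - C)*C),
    with unit prefactor, and inverts it for b.  Illustrative stand-in only,
    not a reproduction of the patent's formulas.  Requires m < C < R and 0 < F_z0 < 1.
    """
    if not (m < C < R and 0.0 < F_z0 < 1.0):
        raise ValueError("need m < C < R and 0 < F(z0) < 1")
    return z0 * R * (C - m) / ((R - m) * (R - C) * C * math.log(1.0 / F_z0))

# Example with made-up numbers: peak 10 Mb/s, mean 2 Mb/s, link 4 Mb/s,
# buffer threshold of 1e6 bits with observed overflow probability 1e-3.
print(equivalent_burst_length(z0=1e6, F_z0=1e-3, R=10e6, m=2e6, C=4e6))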
BRIEF DESCRIPTION OF THE DRAWINGS
A complete understanding of the present invention may be gained by considering the following detailed description in conjunction with the accompanying drawings, in which:
FIG. 1 shows a general block diagram of a packet communications system in which the improved access control mechanisms of the present invention might find use;
FIG. 2 shows a more detailed block diagram of a typical decision point in the network of FIG. 1 at which packets may enter the network and at which the traffic characterization
parameters of the packet source would be determined to control the access to the network;
FIG. 3 shows a graphical representation of the buffer occupancy probabilities at typical transmission adapter such as adapters 34-36 of FIG. 2 under varying assumptions of burst length
distributions but constant average durations, illustrating the inability of the prior art average burst length duration to accurately characterize complex incoming traffic;
FIG. 4 shows a graphical representation of the equivalent burst lengths versus access buffer size under the same assumptions of burst length distributions utilized in FIG. 3;
FIG. 5 shows a graphical representation of the buffer occupancy probabilities under varying assumptions of burst length distributions as shown in FIG. 3 with the buffer occupancy
probabilities using the equivalent burst lengths of the present invention superimposed thereon, showing the high level of accuracy of the approximation when using the equivalent burst
length of the present invention; and
FIG. 6 shows a detailed flow chart for determining the equivalent burst length parameter in accordance with the present invention.
To facilitate reader understanding, identical reference numerals are used to designate elements common to the figures.
DETAILED DESCRIPTION
Referring more particularly to FIG. 1, there is shown a general block diagram of a packet transmission system 10 comprising eight network nodes 11 numbered 1 through 8. Each of network
nodes 11 is linked to others of the network nodes 11 by one or more communication links A through L. Each such communication link may be either a permanent connection or a selectively
enabled (dial-up) connection. Any or all of network nodes 11 may be attached to end nodes, network node 2 being shown as attached to end nodes 1, 2 and 3, network node 7 being shown as
attached to end nodes 4, 5 and 6, and network node 8 being shown as attached to end nodes 7, 8 and 9. Network nodes 11 each comprise a data processing system which provides data
communications services to all connected nodes, network nodes and end nodes, as well as decision points within the node. The network nodes 11 each comprise one or more decision points
within the node, at which incoming data packets are selectively routed on one or more of the outgoing communication links terminated within that node or at another node. Such routing
decisions are made in response to information in the header of the data packet. The network node also provides ancillary services such as the calculation of routes or paths between
terminal nodes, providing access control to packets entering the network at that node, and providing directory services and maintenance of network topology data bases used to support
route calculations and packet buffering.
Each of end nodes 12 comprises either a source of digital data to be transmitted to another end node, a utilization device for consuming digital data received from another end node, or
both. Users of the packet communications network 10 of FIG. 1 utilize an end node device 12 connected to the local network node 11 for access to the packet network 10. The local
network node 11 translates the user's data into packets formatted appropriately for transmission on the packet network of FIG. 1 and generates the header which is used to route the
packets through the network 10.
In order to transmit packets on the network of FIG. 1, it is necessary to calculate a feasible path or route through the network from the source node to the destination node for the
transmission of such packets. To avoid overload on any of the links on this route, the route is calculated in accordance with an algorithm that insures that adequate bandwidth is
available for the new connection, using statistical multiplexing techniques. That is, given the statistical properties of each data source, a plurality of signals from such sources are
multiplexed on the transmission links A-L, reserving sufficient bandwidth to carry each signal if that signal stays within its statistically described properties. One such algorithm is
disclosed in the copending application, Ser. No. 07/874,917, filed Apr. 28, 1992, and assigned to applicants' assignee. Once such a route is calculated, a connection request message is
launched on the network, following the computed route and updating the bandwidth occupancy of each link along the route to reflect the new connection.
In FIG. 2 there is shown a general block diagram of a typical packet network decision point such as is found in the network nodes 11 of FIG. 1. The decision point of FIG. 2 comprises a
high speed packet switching fabric 33 onto which packets arriving at the decision point are entered. Such packets arrive over transmission links via transmission adapters 34, 35, . . .
, 36, or originate in user applications in end nodes via application adapters 30, 31, . . . , 32. It should be noted that one or more of the transmission adapters 34-36 can be
connected to intranode transmission links connected to yet other packet switching fabrics similar to fabric 33, thereby expanding the switching capacity of the node. The decision point
of FIG. 2 thus serves to connect the packets arriving at the decision point to a local user (for end nodes) or to a transmission link leaving the decision point (for network nodes and
end nodes). The adapters 30-32 and 34-36 include queuing circuits for queuing packets prior to or subsequent to switching on fabric 33. A route controller 37 is used to calculate
optimum routes through the network for packets originating at one of the user application adapters 30-32 in the decision point of FIG. 2. Network access controllers 39, one for each
connection originating at the decision point of FIG. 2, are used to regulate the launching of packets onto the network so as to prevent congestion. That is, if the transient rate of
any connection exceeds the statistical values assumed in making the original connection, the controllers 39 slow down the input to the network so as to prevent congestion. Both route
controller 37 and access controllers 39 utilize the statistical description of the new connection in calculating routes or controlling access. These descriptions are stored in topology
data base 38. Indeed, network topology data base 38 contains information about all of the nodes and transmission links of the network of FIG. 1 which information is necessary for
controller 37 to operate properly.
The controllers 37 and 39 of FIG. 2 may comprise discrete digital circuitry or may preferably comprise properly programmed digital computer circuits. Such a programmed computer can be
used to generate headers for packets originating at user applications in the decision point of FIG. 2 or connected directly thereto. Similarly, the computer can also be used to
calculate feasible routes for new connections and to calculate the necessary controls to regulate access to the network in order to prevent congestion. The information in data base 38
is updated when each new link is activated, new nodes are added to the network, when links or nodes are dropped from the network or when link loads change due to the addition of new
connections. Such information originates at the network node to which the resources are attached and is exchanged with all other nodes to assure up-to-date topological information
needed for route and access control calculations. Such data can be carried throughout the network on packets very similar to the information packets exchanged between end users of the network.
The incoming transmission links to the packet decision point of FIG. 2 may comprise links from local end nodes such as end nodes 12 of FIG. 1, or links from adjacent network nodes 11
of FIG. 1. In any case, the decision point of FIG. 2 operates in the same fashion to receive each data packet and forward it on to another local or remote decision point as dictated by
the information in the packet header. The packet network of FIG. 1 thus operates to enable communication between any two end nodes of FIG. 1 without dedicating any transmission or node
facilities to that communication path except for the duration of a single packet. In this way, the utilization of the communication facilities of the packet network is optimized to
carry significantly more traffic than would be possible with dedicated transmission links for each communication path.
The access controllers 39 of FIG. 2 operate to control the access of packets to the network in such a fashion as to eliminate or vastly reduce the possibility of congestion in the
network due to temporary changes in the behavior of the corresponding packet sources. In order to accomplish this purpose, controller 39 must determine a set of statistical
characteristics that capture the main elements of the behavior of the packet source. It has been found that peak bit rate, mean bit rate and average burst duration (R, m and b,
respectively) are one such set of characteristics and many access control schemes have been designed utilizing these characteristics. Such control schemes are shown, for example, in "A
Unified Approach to Bandwidth Allocation and Access Control in Fast Packet-Switched Networks," by R. Guerin and L. Gun Proceedings of the IEEE INFOCOM '92, Florence, Italy, pages 1-12,
May, 1992 and copending patent application, Ser. No. 07/932,440, filed Aug. 19, 1992, and assigned to applicants' assignee.
Prior art characterizations of the burst length have presumed that the distribution of the bursts over time was on/off and exponential and the access control schemes have been based on
traffic behavior with this assumed burst distribution. Typically, the actual distribution of bursts in packet sources for modern packet communications networks varies widely and the
presumed exponential distribution is often an inaccurate representation of the actual distribution of packet bursts. Such inaccuracies in the presumed burst distribution, in turn,
result in inappropriate or inadequate design of the network access buffers, or in traffic management decisions producing unintended results, particularly in terms of the probability of
loss of packets due to buffer overflow. This effect can be better seen in FIG. 3.
In FIG. 3 there is shown a graphical representation of the buffer occupancy distribution at a typical transmission adapter such as adapters 34-36 of FIG. 2 for packet sources with
different burst length distributions but with identical peak rates, mean rates and average burst lengths. In FIG. 3, curve 41 shows the relationship between packet loss probability and
buffer length for the prior art assumption of an on/off, independent exponential distribution of burst and idle periods. Curve 40 shows the same relationship between loss probability
and buffer size for a hyperexponential distribution of on time and an exponential distribution of off times, and having the same peak rate, mean rate and average burst length.
Similarly, curve 42 shows this relationship for a two-stage Erlang distribution of both on and off times, and also having the same peak rate, mean rate and average burst length. It is
apparent that the loss probability is heavily dependent upon the actual distribution of the burst and idle periods in the incoming packet train. Moreover, it is likewise apparent that
the use of the prior art average burst length fails totally to capture very significant effects of burst distribution departing substantially from the assumed on/off exponential
distribution. More particularly, relying on the assumed exponential distribution in managing the access of a packet source to the network can result in unpredictable loss probabilities
and hence in performance that is better or worse than expected.
More particularly, if it is assumed that the capacity of a transmission facility is given by C, the complementary distribution of the steady-state queue length process Z in the network
access buffer can be represented by the distribution function F(z)=P(Z>z), assuming that the mean bit rate is less than the capacity of the transmission channel (i.e., m<C), where Z is
the queue length process taking place in this buffer. FIG. 3 plots log[10] F(z) for the three different distributions having the same traffic descriptors (R=10 Mbps, m=2 Mbps and b=2
milliseconds). The transmission capacity C is selected as 2.5 Mbps and thus the transmission facility is 80% utilized. It is assumed in FIG. 3 that distribution 40 has burst durations
exponentially distributed with μ[1] =100 sec^-1 with a probability of 0.1, and burst durations exponentially distributed with μ[2] =900 sec^-1 with a probability of 0.9,
thus providing an average burst length b of 2 milliseconds (0.1×10 ms + 0.9×1.11 ms ≈ 2 ms). The average off duration is then equal to 8 milliseconds (λ=125 sec^-1), so that the mean arrival rate m is
2 Mbps. For the distribution of curve 42, μ=1000 sec^-1 and λ=250 sec^-1 which also yields the same m and b values. It is evident from FIG. 3 that, even though these three processes
have identical traffic descriptors (R, m, b), their impact on the queue length process is quite different.
For the prior art assumption of exponential on/off distribution of packet bursts, the distribution function F(z) can be obtained from the following simple closed form expression,
assuming an infinite length buffer: ##EQU1## If, as was done in the prior art, the parameters (R, m, b) are observed and the exponential on/off model is assumed for the arrival
process, the assumed distribution can be very misleading. The system of the present invention introduces the concept of "equivalent burst length" (b[eq]) which attempts to approximate
the non-exponential distributions of FIG. 3 by two different exponential on/off processes with the same mean and peak rates, but with different equivalent burst lengths b^1[eq] and b^2[eq].
In accordance with the illustrative embodiment of the present invention, the equivalent burst length b[eq] is calculated from a knowledge of the actual value of the queue length
distribution F(z[0]) at some queue length z[0]. In many applications, measuring F(z[0]) is easier than directly measuring the average burst length b. For example, in an access control
system for a high speed packet network, F(z[0]) may represent the probability of running out of tokens at the leaky bucket when the token generation rate is C and the token buffer size
is z[0]. The design and arrangement of such admissions buffers are shown in the copending application, Ser. No. 07/943,097, filed Sep. 10, 1992, assigned to applicants' assignee. In
general, the leaky bucket operates to permit packets to be introduced into the packet network without significant delay so long as the packet arrival process remains within the
statistical parameters initially describing that process. If the packet arrival process falls outside of these parameters, the leaky bucket operates to alter the accessibility of the
network, usually by tagging the packets not within the parameters. The leaky bucket utilizes a token pool into which tokens are entered at a fixed rate. A packet cannot be transmitted
until sufficient tokens are available to accommodate the packet. Packets delayed due to lack of sufficient tokens can be marked for special treatment. Such marking is known as
violation tagging.
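As a rough sketch of this mechanism (hypothetical names and a deliberately simplified replenishment rule; the patent itself gives no code), a token pool of size z[0] filled at rate C can be emulated as follows:

    # Minimal leaky-bucket sketch: tokens accumulate at rate C (bits/sec) up to
    # a pool of size z0 (bits); a packet is admitted untagged only when enough
    # tokens are available, otherwise it is violation-tagged.
    def leaky_bucket(arrivals, C, z0):
        # arrivals: list of (arrival_time_in_seconds, packet_size_in_bits)
        tokens, last_time, tagged = z0, 0.0, []
        for t, size in arrivals:
            tokens = min(z0, tokens + C * (t - last_time))  # replenish since last arrival
            last_time = t
            if tokens >= size:
                tokens -= size
                tagged.append(False)   # admitted within the traffic contract
            else:
                tagged.append(True)    # marked for special treatment (violation tagging)
        return tagged

The long-run fraction of arrivals that find the pool short of tokens is one practical estimate of the quantity referred to above as F(z[0]).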
In accordance with the present invention, the process Z of equation (1) is approximated by a process Z[0], which is the steady state fluid level if the input process were an
exponential on/off process with parameters (R, m, b[eq]), i.e., an exponential process having an average burst length of b[eq] and having the same mean and peak rates. The equivalent
mean burst length b[eq] is selected such that F(z[0]) = P(Z[0] > z[0]) ≡ F[0](z[0]). Solving equation (1) for the equivalent burst length b[eq] when using the actual queue length
distribution F(z[0]), the equivalent mean burst length (the modified mean on period b[eq]) is easily obtained as: ##EQU2## This value of b[eq] clearly distinguishes between the three
distributions shown in FIG. 3.
More particularly, FIG. 4 shows a graphical representation of the equivalent burst length, calculated according to equation (2), versus the buffer size in bits (z[0]). Curve 45
corresponds to the hyperexponential distribution 40 of FIG. 3, curve 46 corresponds to the two-stage Erlang distribution 42 of FIG. 3 and dashed curve 47 corresponds to the average
burst length corresponding to the exponential distribution 41 of FIG. 3. First note that the value of b[eq] remains relatively constant when F(z[0])<10^-2. Also note that b^1[eq] >b>b^
2[eq] for all values of z[0]. This is attributable to the longer queue lengths required to handle the high variability of the on and off times in the hyperexponential distribution 40
of FIG. 3, and the shorter queue lengths required to handle the low variability of the on and off time in the two-stage Erlang distribution 42 of FIG. 3. In selecting the value of b
[eq], the loss probability F(z[0]) is approximately equal to 0.01 for all processes, producing a value b^1[eq] of 6.26 milliseconds (distribution 40) and a value b^2[eq] of 1.01
milliseconds (distribution 42). With these values, the distributions of FIG. 3 are precisely tracked and accurate management of the traffic access can be accomplished.
If it is assumed that the buffer has a finite capacity X, equation (1) must be expanded to: ##EQU3## where 0≤z≤X. Note that equation (3) reduces to equation (1) when X=∞. Using
the same reasoning as was used in connection with equation (1), the equivalent burst length b[eq] is ##EQU4##
It should be noted that it is unreasonable to expect the single number b[eq] to accurately capture all of the complex distributional and correlational characteristics of a given
source. It is nevertheless true that the equivalent burst length b[eq] is a simple and very practical approach to capturing sufficient information about the impact of the arrival
process on the queuing process to be able to distinguish between the radically different arrival process distributions of FIG. 3. This is accomplished by estimating the tail (steady
state) behavior of F(z), and not necessarily using the low order moments such as the mean or variance of Z. If F(X) is the overflow probability of the traffic from a buffer of size X, it
is highly desirable to select z[0] close to X, since the resulting value of b[eq] carries more information about the tail of F(z) and hence results in better estimates of the required
bandwidth allocation. On the other hand, as the measured values of F(z[0]) get smaller, the error in the measurement gets larger. As illustrated in FIG. 4, when the input process is more
regular (curve 42 in FIG. 3), the approximation to b[eq] is much less sensitive to the choice of z[0] (curve 46 in FIG. 4) and the approximation provides a much better match to the
true distribution (curve 42 in FIG. 5), although the approximation is quite accurate for other distributions (such as curve 40 in FIG. 3 and the corresponding curve 40 in FIG. 5).
In FIG. 5 there is shown the buffer occupancy distributions of FIG. 3 with the approximate distributions, calculated using the equivalent burst lengths suggested in FIG. 4,
superimposed thereon. That is, the hyperexponential distribution 40 and the two-stage Erlang distribution 42 (in solid lines) have superimposed thereon (in dashed lines) the
approximate distributions generated by using the equivalent burst lengths of FIG. 4. It can be seen that the correspondence between the approximate distributions and the actual
distributions is very close. More significantly, these equivalent burst length approximations are much more accurate than the exponential distribution 41 used in the prior
art to approximate all of the possible input process distributions.
As an example of a specific embodiment of the present invention, in FIG. 6 there is shown a flow chart of the process for determining the equivalent burst length in a network access
controller such as controller 39 in FIG. 2. It is assumed that the access controller is a leaky bucket mechanism. Starting in start box 50, box 51 is entered where the values of R, m,
C, and z[0] are read into the controller. These values represent the peak pulse rate of the incoming traffic (R), the mean pulse rate of the incoming traffic (m), the rate at which
tokens are generated and entered into the token pool (C), and the size of the token pool (z[0]). These values are provided by the traffic source and can be measured readily by simply
observing the incoming traffic over a period of time.
In box 52, the constant defined in equation (1) is computed. In box 53, the value of the probability F(z[0]), representing the probability that the token buffer is empty, is
measured. Decision box 54 is then entered to determine whether or not the admissions buffer is finite, i.e., of a length which is not adequate to store all of the offered traffic
before transmission. In the case of a leaky bucket mechanism, box 54 determines whether or not violation tagging is being used at the leaky bucket. That is, it is determined whether or
not the leaky bucket mechanism uses a tagging approach to deal with excess incoming traffic. If yes, the operation of the token buffer emulates a buffer queue with a finite length and
box 57 is entered to calculate the value of the equivalent burst length b[eq] according to equation (4). If, on the other hand, violation tagging is not being used and packets are
merely queued at the admissions buffer when insufficient tokens are available, the operation of the token buffer emulates an infinite buffer queue length and box 55 is entered to
calculate b[eq] according to equation (2). In either case, the procedure is terminated in terminal box 56.
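As a concrete sketch of this procedure (hypothetical function names; the closed forms of equations (1)-(4) appear above only as ##EQU## placeholders, so the infinite-buffer branch below uses the standard single-source exponential on/off fluid result, F(z) = (m/C) exp(-z R (C-m) / (b C (R-C)(R-m))), as a stand-in for equation (2), and the finite-buffer branch of equation (4) is left as a stub):

    import math

    def b_eq_infinite(R, m, C, z0, F_z0):
        # Invert F(z0) = (m/C) * exp(-z0 * R*(C-m) / (b*C*(R-C)*(R-m))) for b.
        # Rates in bits/sec, z0 in bits; the result is in seconds.
        return z0 * R * (C - m) / (C * (R - C) * (R - m) * math.log(m / (C * F_z0)))

    def equivalent_burst_length(R, m, C, z0, F_z0, violation_tagging, b_eq_finite=None):
        # Boxes 51-53: traffic descriptors R and m, token rate C, pool size z0,
        # and the measured probability F(z0) of running out of tokens.
        # Box 54: violation tagging makes the token pool emulate a finite queue.
        if violation_tagging:
            return b_eq_finite(R, m, C, z0, F_z0)   # finite-buffer form, the role of equation (4)
        return b_eq_infinite(R, m, C, z0, F_z0)     # infinite-buffer form, the role of equation (2)

With the FIG. 3 descriptors (R = 10 Mbps, m = 2 Mbps, C = 2.5 Mbps) the inversion returns an equivalent burst length in seconds from any measured point (z[0], F(z[0])) on the occupancy curve.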
It can be seen that using the procedure of FIG. 6 in the access controller 39 of FIG. 2 allows well known prior art network access and bandwidth management techniques to continue to be
used, while still obtaining the benefit of traffic characterizations which much more accurately reflect traffic burst length and idle period distributions which are not exponential and
which may be correlated. Prior art burst length representations assume on/off, uncorrelated distributions of such burst lengths and idle periods, a condition which often does not
occur. As a result, much more accurate access control and bandwidth management decisions are made using the equivalent burst length characteristic of the present invention.
It should also be clear that further embodiments of the present invention may be made by those skilled in the art without departing from the teachings of
the present invention.
|
{"url":"http://www.patentsmania.com/patent/traffic_measurements_in_packet_communications_networks-109786.html","timestamp":"2014-04-21T15:18:57Z","content_type":null,"content_length":"56937","record_id":"<urn:uuid:3d98932a-50f2-4656-b72e-b37f95e73df5>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Median Worksheets
Print median worksheet 1 with answers in PDF format,
note that the answers are on the 2nd page of the PDF.
The Mean, the Median and the Mode are all measures of Central Tendency. The Median is the middle value in your list. When the list contains an odd number of entries (for instance, 9,
13, 27, or 101 numbers), the median is the middle entry in the list after you have sorted the list into ascending order. However, when the list contains an even number of entries, a slightly
different calculation is needed: the median is equal to the sum of the two numbers in the middle (after you have sorted the list into ascending order) divided by two. Therefore, remember to line up
your numbers from smallest to largest; the middle number is the median! Be sure to remember the odd and even rule. A quick rule of thumb: the median is the number in the middle of an
increasing set of numbers. Examples:
To Calculate the Median of: 9, 3, 44, 17, 15 (There is an odd amount of numbers: 5)
Line up the numbers: 3, 9, 15, 17, 44 (smallest to largest)
The Median for this group of numbers is: 15 (the number in the middle)
To Calculate the Median of: 8, 3, 44, 17, 12, 6 (There is an even amount of numbers: 6)
Line up the numbers: 3, 6, 8, 12, 17, 44
Add the 2 middle numbers, then divide the sum by 2: 8 + 12 = 20, and 20 ÷ 2 = 10
The Median for this group of numbers is 10.
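The same procedure, written out as a small sketch in Python (plain code, no special libraries), makes the odd/even rule explicit:

    def median(numbers):
        ordered = sorted(numbers)        # line up smallest to largest
        n = len(ordered)
        mid = n // 2
        if n % 2 == 1:                   # odd count: take the middle entry
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2.0   # even count: average the two middle entries

    print(median([9, 3, 44, 17, 15]))    # 15
    print(median([8, 3, 44, 17, 12, 6])) # 10.0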
If you prefer to watch a video tutorial for the median, mode and mean, follow What are the Mean, Median and Mode.
|
{"url":"http://math.about.com/od/worksheets/ss/Median-Worksheets.htm","timestamp":"2014-04-20T05:42:57Z","content_type":null,"content_length":"42373","record_id":"<urn:uuid:b2fd9b0b-277a-4fa5-be14-d56d3308ab0a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Decomposition of Dynamical Systems
I’m interested in all kinds of decomposition of all kinds of systems.
I’m planning to write a paper on the decomposition of dynamical systems into fundamental states. This project is a bit vague. I want to make some ideas/results more precise and easier to apply before
writing things up.
By a “dynamical system” I mean any dynamical model, which may be quite general: differential or difference equation, evolutionary system, sequential dynamical system, etc. In a dynamical system,
there is a time variable (discrete or continuous) and a configuration or phase space which may be finite- or infinite-dimensional (e.g. spaces of fields). It can be stochastic, have hidden variables, and have many
possible initial values (when there are many possible initial values, there is the possibility to compare solutions, make approximations, etc.).
What kinds of decomposition do we have ?
* Spectral: “Pure” states are eigenvectors of an operator. “Mixed” states are linear combinations of pure states. Works well in linear approximations ?
* Phase space decomposition: an example is Conley’s decomposition. Conley’s theory can be extended to include non-compact and incomplete systems
* Time decomposition ?
* Wavelet decomposition ? Localized operators ? Is this technical or conceptual ?
* Stochastic decomposition ?
* Etc ?
* What are the main “fundamental” states ? What is a “state” ?
* How do things go from one state to another one ?
* How do things in different states influence each other ?
* Relations with ergodic theory ? Local normal form theory ?
* Relations among different decompositions ?
* Measurements ?
* Multi-agents ?
* Xiaopeng Chen, Jinqiao Duan, State space decomposition for nonautonomous dynamical systems, Proceedings of the Royal Society of Edinburgh: Section A Mathematics October 2011 141 : pp 957-974.
Decomposition of state spaces into dynamically different components is helpful for the understanding of dynamical behaviors of complex systems. A Conley type decomposition theorem is proved for
nonautonomous dynamical systems defined on a non-compact but separable state space. Namely, the state space can be decomposed into a chain recurrent part and a gradient-like part. This result applies
to both nonautonomous ordinary differential equations on Euclidean space (which is only locally compact), and nonautonomous partial differential equations on infinite dimensional function space
(which is not even locally compact). This decomposition result is demonstrated by discussing a few concrete examples, such as the Lorenz system and the Navier-Stokes system, under time-dependent
* (Semimartingale decomposition: how to include this stuff ?)
Dynamic Markov bridges motivated by models of insider trading
Authors: Luciano Campi, Umut Çetin, Albina Danilova
Abstract: Given a Markovian Brownian martingale $Z$, we build a process $X$ which is a martingale in its own filtration and satisfies $X_1 = Z_1$. We call $X$ a dynamic bridge, because its terminal
value $Z_1$ is not known in advance. We compute explicitly its semimartingale decomposition under both its own filtration $\mathcal{F}^X$ and the filtration $\mathcal{F}^{X,Z}$ jointly generated by $X$ and $Z$. Our
construction is heavily based on parabolic PDE’s and filtering techniques. As an application, we explicitly solve an equilibrium model with insider trading, that can be viewed as a non-Gaussian
generalization of Back and Pedersen’s \cite{BP}, where insider’s additional information evolves over time.
Journal reference: Stochastic processes and their applications, 2011, 121 (3). pp. 534-567
Cite as: arXiv:1202.2980v1 [math.PR]
* Rasmussen: Morse decomposition …
* Decomposition and simulation of sequential dyn systems
* Blokh: decomposition of systems on interval
|
{"url":"http://zung.zetamu.net/2012/04/decomposition-of-dynamical-systems/","timestamp":"2014-04-17T09:34:28Z","content_type":null,"content_length":"112036","record_id":"<urn:uuid:6d39bb9e-1a8b-43fd-b79f-41a1d5910ca5>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
|
mvc: Multi-View Clustering
An implementation of Multi-View Clustering (Bickel and Scheffer, 2004). Documents are generated by drawing word values from a categorical distribution for each word, given the cluster. This means
words are not counted (multinomial, as in the paper), but words take on different values from a finite set of values (categorical). Thus, it implements Mixture of Categoricals EM (as opposed to
Mixture of Multinomials developed in the paper), and Spherical k-Means. The latter represents documents as vectors in the categorical space.
Version: 1.3
Depends: R (≥ 2.14.1), rattle (≥ 2.6.18)
Published: 2014-02-24
Author: Andreas Maunz
Maintainer: Andreas Maunz <andreas at maunz.de>
License: BSD_3_clause + file LICENSE
URL: http://cs.maunz.de
NeedsCompilation: no
Materials: README
CRAN checks: mvc results
Reference manual: mvc.pdf
Package source: mvc_1.3.tar.gz
OS X binary: mvc_1.3.tgz
Windows binary: mvc_1.3.zip
Old sources: mvc archive
|
{"url":"http://cran.r-project.org/web/packages/mvc/index.html","timestamp":"2014-04-20T03:11:46Z","content_type":null,"content_length":"2985","record_id":"<urn:uuid:74512f35-f443-408f-90f1-19fe7229012a>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Irvington On Hudson, NY Math Tutor
Find a Irvington On Hudson, NY Math Tutor
...I received a perfect score on the GRE (800/800/6.0) in November 2010 and am happy to provide the official score report on request. I can't guarantee you similar results, but I CAN guarantee
that I will provide you with the tools you need to succeed on your upcoming tests!I have tutored high school algebra both privately and for the Princeton Review. I have a bachelor's degree in
20 Subjects: including ACT Math, algebra 1, algebra 2, SAT math
...I have a Bachelor's Degree in Math and a Master's Degree in Math Education. I am a certified teacher with three years of experience teaching high school math. I have also been tutoring all
levels of math from elementary through college for the past two years.
9 Subjects: including algebra 1, algebra 2, calculus, geometry
...And my college students have gotten into U Michigan, Case Western, UCSD, NYU, and many others. In short, I am just who you are looking for! As for myself, I have a BA in Theatre, a BA in
Psychology, and an MA in Shakespeare.I worked for the Princeton Review for 12 years doing SAT Prep and College Counseling.
42 Subjects: including algebra 1, algebra 2, LSAT, biology
...Andrews that frequently used linear algebra as well. Both during the courses and after they were completed, I worked in my college's math help center to assist other students in these
subjects. I was a double major in Mathematics and Philosophy--the two subjects that deal in logic the most.
22 Subjects: including calculus, composition (music), ear training, precalculus
...Able to work with large datasets and analyze and organize efficiently. Engineering graduate with years of experience working with MS word. Numerous reports and publications using MS Word in
addition to using Endnote citation software.
12 Subjects: including calculus, precalculus, statistics, probability
Related Irvington On Hudson, NY Tutors
Irvington On Hudson, NY Accounting Tutors
Irvington On Hudson, NY ACT Tutors
Irvington On Hudson, NY Algebra Tutors
Irvington On Hudson, NY Algebra 2 Tutors
Irvington On Hudson, NY Calculus Tutors
Irvington On Hudson, NY Geometry Tutors
Irvington On Hudson, NY Math Tutors
Irvington On Hudson, NY Prealgebra Tutors
Irvington On Hudson, NY Precalculus Tutors
Irvington On Hudson, NY SAT Tutors
Irvington On Hudson, NY SAT Math Tutors
Irvington On Hudson, NY Science Tutors
Irvington On Hudson, NY Statistics Tutors
Irvington On Hudson, NY Trigonometry Tutors
|
{"url":"http://www.purplemath.com/irvington_on_hudson_ny_math_tutors.php","timestamp":"2014-04-20T06:27:25Z","content_type":null,"content_length":"24503","record_id":"<urn:uuid:20d5683e-eedd-473b-bab2-158658b8d443>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exchange Interaction and Exchange Anisotropy
The origin of the interaction which lines up the spins in a magnetic system is the exchange interaction. When the spin magnetic moments S_i and S_j of adjacent atoms make an angle φ, the exchange energy E_ex
between the two moments can be expressed as [10]
E_ex = -2 J_ex S^2 cos φ,
where J_ex is the exchange integral and S is the total spin quantum number of each atom. For positive values of J_ex this gives a minimum when φ = 0, i.e. the spins are aligned parallel to each other.
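A quick numerical illustration of this expression (using the reconstructed form above, in arbitrary units with J_ex = 1 and S = 1):

    import math

    def exchange_energy(J_ex, S, phi_degrees):
        # E_ex = -2 * J_ex * S^2 * cos(phi)
        return -2.0 * J_ex * S**2 * math.cos(math.radians(phi_degrees))

    for phi in (0, 90, 180):
        print(phi, exchange_energy(1.0, 1.0, phi))
    # phi = 0   -> -2.0 (minimum: parallel spins favoured when J_ex > 0)
    # phi = 90  -> ~0.0
    # phi = 180 -> +2.0 (maximum: antiparallel spins)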
Although the exchange interaction produces strong interactions between neighbouring magnetic atoms it can also be mediated by various mechanisms, producing long range effects. As the exchange energy
for neighbouring atoms is dependent only upon the angle between them it does not give rise to anisotropy.
In multilayers where magnetic layers are separated by a non-magnetic layer, there can be an exchange coupling between the two magnetic layers mediated by, for example, the RKKY interaction (see
Section 5.4). The exchange coupling is composed of two terms: an isotropic exchange coupling and an anisotropic Dzialoshinski-Moriya exchange coupling.
The total exchange interaction energy is given in [21] in terms of the interlayer exchange coupling constant J, the magnetisations M_1 and M_2 of the adjacent magnetic layers, and the non-magnetic layer thickness t; the isotropic part is proportional to the scalar product M_1 · M_2, which is a maximum for ferromagnetically aligned layers.
The anisotropic exchange energy [21] involves a term of the form D · (M_1 × M_2), where D is the Dzialoshinski-Moriya exchange constant. The cross product gives a resultant that is perpendicular to the direction of the layer magnetisation, and it is at a maximum when the two layer
magnetisations are at right angles to each other. This can favour in-plane moment alignment for positive D and out-of-plane alignment for negative D.
Dr John Bland, 15/03/2003
|
{"url":"http://www.cmp.liv.ac.uk/frink/thesis/thesis/node68.html","timestamp":"2014-04-20T06:25:29Z","content_type":null,"content_length":"9101","record_id":"<urn:uuid:b9f6843d-65cd-495d-8485-eb7c000b846b>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Riverdale, MD Geometry Tutor
Find a Riverdale, MD Geometry Tutor
...In tutoring, I always make it a point to figure out the student's style of learning and I plan my tutoring sessions accordingly, spending extra time to prepare for the session prior to meeting
with the student. My broad background in math, science, and engineering combined with my extensive rese...
16 Subjects: including geometry, calculus, physics, statistics
...We start with getting acclimated to the software by setting up a basic page or template. Then I introduce them to the basic features one at a time, such as inserting an image, links, and
formatting text. I have used Outlook for my email and calendar needs for over 15 years.
29 Subjects: including geometry, reading, writing, algebra 1
...Professionally I work as a Systems Engineer for a defense contractor in Washington, DC, and am thus available to meet most evenings and weekends. In addition to my day job, I have five years
of experience coaching tackle football in Northern Virginia, and the players I have coached have ranged a...
30 Subjects: including geometry, reading, chemistry, physics
...Please feel free to contact with questions. I scored very high on my math portions of the SAT and ACT (750 out of 800 SAT, 35 out of 36 ACT). I have helped several other students with study
tips and practice problems for these exams as well. I completed AP Calculus BC through my Junior year of high school as well.
16 Subjects: including geometry, statistics, algebra 1, algebra 2
I have worked as an electrical engineer for many years and am now retired. However, my skills are as sharp as ever, and I am still teaching a graduate course in electrical engineering at Johns
Hopkins University. I enjoy teaching very much, and particularly enjoy tutoring young people to improve their skills in mathematics.
17 Subjects: including geometry, English, calculus, ASVAB
Related Riverdale, MD Tutors
Riverdale, MD Accounting Tutors
Riverdale, MD ACT Tutors
Riverdale, MD Algebra Tutors
Riverdale, MD Algebra 2 Tutors
Riverdale, MD Calculus Tutors
Riverdale, MD Geometry Tutors
Riverdale, MD Math Tutors
Riverdale, MD Prealgebra Tutors
Riverdale, MD Precalculus Tutors
Riverdale, MD SAT Tutors
Riverdale, MD SAT Math Tutors
Riverdale, MD Science Tutors
Riverdale, MD Statistics Tutors
Riverdale, MD Trigonometry Tutors
Nearby Cities With geometry Tutor
Bladensburg, MD geometry Tutors
Brentwood, MD geometry Tutors
Cheverly, MD geometry Tutors
College Park geometry Tutors
Edmonston, MD geometry Tutors
Greenbelt geometry Tutors
Hyattsville geometry Tutors
Landover Hills, MD geometry Tutors
Lanham Seabrook, MD geometry Tutors
Mount Rainier geometry Tutors
New Carrollton, MD geometry Tutors
North Brentwood, MD geometry Tutors
Riverdale Park, MD geometry Tutors
Riverdale Pk, MD geometry Tutors
University Park, MD geometry Tutors
|
{"url":"http://www.purplemath.com/Riverdale_MD_Geometry_tutors.php","timestamp":"2014-04-16T10:34:42Z","content_type":null,"content_length":"24292","record_id":"<urn:uuid:34b51c3e-e3d5-4cc0-a55d-ffd0ea77109f>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
|
For An Unsteady, Compressible Flow Field That Is... | Chegg.com
For an unsteady, compressible flow field that is two-dimensional in the x-y plane and in which temperature and density variations are significant, how many unknowns are there? List the equations
required to solve for these unknowns. (Note: Assume other flow properties like viscosity, thermal conductivity, etc. can be treated as constants.)
|
{"url":"http://www.chegg.com/homework-help/unsteady-compressible-flow-field-two-dimensional-x-y-plane-chapter-9-problem-4p-solution-9780077295462-exc","timestamp":"2014-04-17T01:50:26Z","content_type":null,"content_length":"40070","record_id":"<urn:uuid:12ba11af-5913-4b08-89aa-a6ca2a7a7ece>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Creating cumulative distribution function of Random variable.
October 8th 2010, 11:17 PM
Creating cumulative distribution function of Random variable.
Electrons hit a circular plate with unit radius. Let X be the random variable representing
the distance of a particle strike from the centre of the plate. Assuming that a particle is
equally likely to strike anywhere on the plate,
(a) for 0 < r < 1 find P(X < r), and hence write down the full the cumulative distribution
function of X, FX;
(b) find P(r < X < s), where r < s;
(c) find the probability density function for X, fX.
(d) calculate the mean distance of a particle strike from the origin.
I have a problem creating the CDF and finding the mean distance.
please help me.
October 8th 2010, 11:55 PM
Electrons hit a circular plate with unit radius. Let X be the random variable representing
the distance of a particle strike from the centre of the plate. Assuming that a particle is
equally likely to strike anywhere on the plate,
(a) for 0 < r < 1 find P(X < r), and hence write down the full the cumulative distribution
function of X, FX;
(b) find P(r < X < s), where r < s;
(c) find the probability density function for X, fX.
(d) calculate the mean distance of a particle strike from the origin.
I have problem in crating CDF and finding mean distance.
please help me.
The probability P(X<r), for 0<=r<=1, is the ratio of the area of a circle of radius r to that of one of radius 1. This is close to just the definition of a uniform distribution on the unit circle.
October 9th 2010, 12:53 AM
ok so,
3) p.d.f. = dF(x)/dx = 2x
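A quick simulation check of these expressions (a throwaway sketch; part (a)'s area ratio gives F(r) = r^2, and part (d)'s mean should come out to 2/3):

    import random

    def strike_distance():
        # rejection-sample a uniform point on the unit disc, return its radius
        while True:
            x, y = random.uniform(-1, 1), random.uniform(-1, 1)
            if x*x + y*y <= 1:
                return (x*x + y*y) ** 0.5

    samples = [strike_distance() for _ in range(100000)]
    r = 0.5
    print(sum(d < r for d in samples) / len(samples))  # ~0.25 = r**2, matching F(r) = r^2
    print(sum(samples) / len(samples))                 # ~0.667 = 2/3, the mean distance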
October 9th 2010, 02:39 AM
|
{"url":"http://mathhelpforum.com/statistics/158896-creating-cumulative-distribution-function-random-variable-print.html","timestamp":"2014-04-21T07:36:14Z","content_type":null,"content_length":"6858","record_id":"<urn:uuid:7f103d59-65d8-4efc-b850-c43a3d181c5d>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jason Garrett wants the Cowboys to know the Pythagorean theorem
Posted by Michael David Smith on July 24, 2013, 4:21 PM EDT
Cowboys coach Jason Garrett graduated from Princeton, and he wants to have smart players on his team.
Specifically, Garrett wants players with a firm grasp of geometry, which includes understanding the Pythagorean theorem, which states that in a right triangle, the square of the length of the
hypotenuse is equal to the sum of the squares of the lengths of the two legs. Garrett thinks it’s important to know that when thinking about how long it will take a player to run between two points
on the field.
“If you’re running straight from the line of scrimmage, six yards deep, that’s a certain depth, right? It takes you a certain amount of time,” Garrett said. “But if you’re doing it from 10 yards
inside and running to that same six yards, that’s the hypotenuse of that right triangle. It’s longer, right? So they have to understand that, that it takes longer to do that. That’s an important
thing. Quarterbacks need to understand that, too. If you’re running a route from here to get to that spot, it’s going to be a little longer, you might need to be a little fuller in your drop.”
Garrett said he told players to look up the Pythagorean theorem if they didn’t know it.
“We talked about Pythagorus and it’s been going for the last few days,” Garrett said.
Players on the Cowboys may not be thrilled about getting a math lecture, but Garrett just did a big favor to geometry teachers across the country. This fall, when schoolkids across the country
inevitably ask when they’re going to need to know this in real life, their teachers can say that if you want to play in the NFL, you’d better know the Pythagorean theorem.
808raiderinparadise says: Jul 24, 2013 4:24 PM
Dez Bryant …. Iz dat sum Korean dish ?
drunkenpackblogger says: Jul 24, 2013 4:24 PM
Morris Claiborne is in BIG trouble.
sidk1121 says: Jul 24, 2013 4:24 PM
This is why the Cowboys are a joke
whatnojets says: Jul 24, 2013 4:24 PM
The what???????
8to80texansblog says: Jul 24, 2013 4:24 PM
8to80texansblog says: Jul 24, 2013 4:26 PM
If the Sum of the wins doesn’t equal 11 or better, Garrett can hypotenuse himself another job….
bucrightoff says: Jul 24, 2013 4:26 PM
Morris Claiborne don’t need to know about it anyhow.
belichickdominatedjoemontana says: Jul 24, 2013 4:27 PM
Get Mr. Feeny down to Dallas!
purplekoolaid1 says: Jul 24, 2013 4:27 PM
First focus on winning games on the football field, Jason….not in classrooms.
doomsdaydefensetx says: Jul 24, 2013 4:28 PM
By you people saying Morris Claiborne will have problems because of the infamous “wonderlic” scores. Just know he didn’t take the test at all resulting in the lowest score possible. FYI
cocheese000 says: Jul 24, 2013 4:29 PM
Garrett: a²+b²=c²
Most of the players:????zzzzzzzzzzzz??zzzzzzz
mt99808 says: Jul 24, 2013 4:30 PM
Pretty sure we learn this around grade 8 in Canada. How would a college grad be expected to know this?
pnut87 says: Jul 24, 2013 4:30 PM
Cowboy player sets up on the line
Cowboy player starts thinking about what the hell a hypotenuse is.
Cowboy player then begins thinking, “When did I end up on my back?”
iamthefootballjerk says: Jul 24, 2013 4:30 PM
a^2 + b^2 = c (ul8r)^2 Garrett
Even Jerrah knows that equation.
rc33 says: Jul 24, 2013 4:30 PM
Triceratops, Pterodactyl and Pythagorean. Got it, coach.
nyyjetsknicks says: Jul 24, 2013 4:31 PM
Cowboys players: I was told there would be no math.
SparkyGump says: Jul 24, 2013 4:31 PM
I want the RHG to quit icing his kickers and understand the concept of a balanced offense. There’s this thing called a slant pass. Look it up, Jason.
irishgary says: Jul 24, 2013 4:32 PM
Romo flunks ……..again
flexx91 says: Jul 24, 2013 4:32 PM
doomsdaydefensetx says:
Jul 24, 2013 4:28 PM
By you people saying Morris Claiborne will have problems because of the infamous “wonderlic” scores. Just know he didn’t take the test at all resulting in the lowest score possible. FYI
No one really cares. FYI
steelerben says: Jul 24, 2013 4:33 PM
It makes perfect sense for players to be thinking about how shifting from one spot on the line to another is going to effect the time that they arrive at the same destination. It keeps receivers from
arriving to the ball late and there being an interception, and it keeps Romo from overthrowing guys. It helps defenders figure out the angle that they are going to take to cut off a runner or how to
adjust yourself on the line of scrimmage to create an efficient path to the QB.
This is 9th grade math guys, not rocket surgery.
flexx91 says: Jul 24, 2013 4:34 PM
Did he ask Jerry Jones? Probably not…..
fanofevilempire says: Jul 24, 2013 4:35 PM
Apple pie = 3.14
seahawks4alltime says: Jul 24, 2013 4:36 PM
So, wait, seriously? Jason Garrett just had to tell players that it takes more time to run a longer distance?
flexx91 says: Jul 24, 2013 4:39 PM
He then tried to explain the Law of sines. They all thought he meant “signs” and started looking at the walls in the classroom…..
hanifmiller says: Jul 24, 2013 4:40 PM
Dear Santa,
Please let Jerry Jones be the owner of the Cowboys and Jason Garrett be the coach of the Cowboys forever. Signed– A loyal Eagles fan.
thegreatgabbert says: Jul 24, 2013 4:40 PM
At least Pythagoras got to call his own plays.
condor75 says: Jul 24, 2013 4:42 PM
I am sure everyone here knew exactly what it was , lol
marvsleezy says: Jul 24, 2013 4:42 PM
You have to find a way to relate to these players and teach them in a way they will understand.
This ain’t it. And his 35 minute boring speech the other day was not it either.
thegreatgabbert says: Jul 24, 2013 4:42 PM
Garrett may understand how Pythagoras relates to football, but it’s Greek to everyone else on the team.
channer81 says: Jul 24, 2013 4:43 PM
They’d be better off taking muscle relaxers before practice..
edukator4 says: Jul 24, 2013 4:44 PM
“We talked about Pythagorus and it’s been going for the last few days,”
thegreatgabbert says: Jul 24, 2013 4:44 PM
Now he has to teach Romo that the shortest distance between two points is not a wobbly throw that gets intercepted and run back to the point where Tony has to try to tackle the defensive player.
grandpoopah says: Jul 24, 2013 4:46 PM
You don’t need to understand the equation to know that it takes longer to run the same distance diagonally any more than you need to understand the physics of jet propulsion in order to fly in a
4512dawg4512 says: Jul 24, 2013 4:48 PM
I’m sure the players are just LOVING that
youarejealousof6rings says: Jul 24, 2013 4:48 PM
Tom Landry just rolled over.
chiadam says: Jul 24, 2013 4:52 PM
I’m confused.
dowhatifeellike says: Jul 24, 2013 4:53 PM
If you can’t intuitively figure that out from a combination of experience and simple logic, how did you make the team in the first place?
Rick Spielman is a Magician says: Jul 24, 2013 4:53 PM
That wasn’t a very good explanation of the Pythagorean theorem. The sides of a triangle do not have areas, they have lengths. If you create a square off each side of the triangle, then the sum of the
areas of the squares equals the area of the square created off the hypotenuse. But nobody thinks of it that way. Just say that the length of the hypotenuse (the side opposite the right angle) is the
square root of the sum of the square of the length of the other two sides.
joshuas82 says: Jul 24, 2013 4:54 PM
If he’s that smart why did he turn down two well run teams in the ravens and falcons to become jerry jones puppet? The Garrett fallacy.
bunjy96 says: Jul 24, 2013 4:55 PM
Make fun all you want, but Garrett is 100% correct.
Because you don’t understand and/or don’t want to, doesn’t make him wrong. It makes you uneducated.
808raiderinparadise says: Jul 24, 2013 4:56 PM
Is Garrett catching up to Rex Ryan?
jayniner says: Jul 24, 2013 4:57 PM
In other words:
49ers² + Seahawks² = No chance for Dallas representing the NFC anytime soon²
themike31 says: Jul 24, 2013 4:57 PM
Just talk about pursuit angles, man. They teach that in middle school football and onward.
Don’t outsmart yourself.
larrydavid7000 says: Jul 24, 2013 4:58 PM
@cowgirls and their fans . LOL LOL LOL AND LMAO LMAO LMAO. 4-12 at best.!!!!!!!
minnesoulja says: Jul 24, 2013 5:02 PM
This is fairly simple math…
It sounds to me as if he is using it to help players find the fastest point of contact with the ball carrier or reciever they are covering…
Again things my coach’s got us started on in like pee wee or rec. center city leagues by the time we were 13….
Just saying…
Also you would have to know the top speed of the D-Back/Reciever and how quick they get moving that fast…
This is a joke… Isnt it?
drbrown7451 says: Jul 24, 2013 5:05 PM
Yes smart players are a major plus. Especially players with “football smarts”. As far as higher math skills go, how about teaching Romo about angles, trajectory, velocity, and in general how not to
throw interceptions? Bryant, Claiborne meh. Just teach them how to hold on to the ball, and where the first down marker is.
10and46httr says: Jul 24, 2013 5:27 PM
They better learn the offense and defense first.
coolzog says: Jul 24, 2013 5:28 PM
Pretty sad if players don’t know what this is. Don’t you learn this in like grade 8-9 math?
bubbybrister/shovelpass says: Jul 24, 2013 5:29 PM
If Garrett doesn’t start winning more games, Dallas fans will be putting a hypotenuse around his neck!
tinbender2000 says: Jul 24, 2013 5:29 PM
My daughter is going into the 4th grade, she learned this last year.
dcbassguy says: Jul 24, 2013 5:46 PM
That theorem has nothing to do with calculating the time it would take to get from point A to point B in comparison with the time it would take to get to point C from point B. (different starting
points A/B, to reach the same destination C)
Garrett, in trying to make himself sound smart, did the opposite to the guys that actually have any understanding of geometry.
leftyman says: Jul 24, 2013 5:49 PM
One of the better collections of fan witticisms I’ve read in a long while, I tip my hat to you all, and also to the Jerry and Jason show for providing the content.
ambitoos says: Jul 24, 2013 6:27 PM
Garrett is in the same situation as Norv Turner was last year. Win or your gone Garrett, Romo and most of the older players. Jerry will replace the whole damned bunch of you.
hatesycophants says: Jul 24, 2013 6:39 PM
tvjules says: Jul 24, 2013 6:40 PM
In turn, Cowboys fans would like you Jason, to learn how to manage a game clock.
bennyb82 says: Jul 24, 2013 6:40 PM
If I was the coach I would say “Just take the angle. Now hit the weights.”
conormacleod says: Jul 24, 2013 7:03 PM
So, the Cowboys are just starting to learn to run to where the ball will be, and not where it was. Good lord. So happy I’m not a Cowboys fan.
cwk22 says: Jul 24, 2013 7:34 PM
Next year when they suck he can say it’s because they didn’t do their math homework!
cwk22 says: Jul 24, 2013 7:36 PM
Makes the Jets coach look smart in comparison…
corvusmaximus says: Jul 24, 2013 8:13 PM
I’ll take a field trip over a math test any day.
youarejealousof6rings says: Jul 24, 2013 9:09 PM
bunjy96 says:
Jul 24, 2013 4:55 PM
Make fun all you want, but Garrett is 100% correct.
Because you don’t understand and/or don’t want to, doesn’t make him wrong. It makes you uneducated.
They teach you how to take angles in pee-wee; that’s enough. What Garrett is doing is going to cause his players to over-think on the field, and you don’t want that. The guy is not fit to be a HC.
ytownjoe says: Jul 24, 2013 10:00 PM
Now Romo wishes he woulda studied in skool.
dallascowboysdishingthereal says: Jul 24, 2013 10:30 PM
Garrett if you’re going to teach geometry, then you need to stop drafting players that scored about a 5 on the Wonderlic.
Here is a little math for you Jason.
8-8 season + 8-8 season = no job.
It’s known as the Jerry Jones theorem.
katoelf75 says: Jul 24, 2013 10:51 PM
After reading this article I could think was that this man is done. First coach fired this season.
katoelf75 says: Jul 24, 2013 10:51 PM
After reading this article all I could think was that this man is done. First coach fired this season.
peoplesrepublic0fdabayarea says: Jul 29, 2013 2:54 PM
Dear Dallas Cowboys,
Please give me my Triangle Offense back.
Phil Jackson.
|
{"url":"http://profootballtalk.nbcsports.com/2013/07/24/jason-garrett-wants-the-cowboys-to-know-the-pythagorean-theorem/","timestamp":"2014-04-19T10:29:12Z","content_type":null,"content_length":"128119","record_id":"<urn:uuid:3d6272e8-61e8-4cc6-bd93-e7666af58c6e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00338-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Speaking of rounding errors...
Christopher T King squirrel at WPI.EDU
Sat Jul 3 07:16:54 CEST 2004
Speaking of rounding errors, why is round() defined to round away from
zero? I've always been quite fond of the mathematical definition of
rounding, floor(n+.5). As a contrived example of why the latter is IMHO
better, I present the following:
for x in [x+.5 for x in xrange(-10,10)]:
    print int(round(x))
This prints -10, -9, ..., -2, -1, 1, 2, ..., 9, 10 (it skips zero),
probably not what you'd expect. I'm not sure how often a case like this
comes up in real usage, but I imagine it's more often than a case relying
on the current behaviour would.
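For comparison, the floor-based definition mentioned above does produce a zero in that range (a quick sketch in the same Python 2 style as the snippet above):

    from math import floor
    for x in [x+.5 for x in xrange(-10,10)]:
        print int(floor(x+.5))
    # prints -9, -8, ..., -1, 0, 1, ..., 9, 10 -- zero is included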
More information about the Python-list mailing list
|
{"url":"https://mail.python.org/pipermail/python-list/2004-July/265617.html","timestamp":"2014-04-19T10:21:05Z","content_type":null,"content_length":"3048","record_id":"<urn:uuid:397de218-0b69-4608-886a-3701d364e076>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebra: writing equation describing max, min temperature
I need help solving this problem. Some say that to brew an excellent cup of coffee, you must have a brewing temperature of 200 degrees F, plus or minus five degrees. Write and solve an equation
describing the maximum and minimum brewing temperature for an excellent cup of coffee.
Re: Algebra
x = 200 + 5
x = 205 (degrees Fahrenheit)
x = 200 - 5
x = 195 (degrees Fahrenheit)
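The two cases above can also be written as the single equation the exercise is asking for, |x - 200| = 5, whose solutions are x = 205 and x = 195 degrees Fahrenheit.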
|
{"url":"http://www.purplemath.com/learning/viewtopic.php?f=7&t=2748","timestamp":"2014-04-18T11:12:42Z","content_type":null,"content_length":"18212","record_id":"<urn:uuid:c1cd5bb2-89d4-4946-9e91-9ae2d2f41cf2>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Updated (up to 2002, 153 references) theoretical and numerical studies of the inverse problem for the continuum model of EIT which seeks the admittivity $\left(\gamma \left(x\right)=\sigma \left(x\
right)+i\omega \epsilon \left(x\right)\right)$ inside a body ${\Omega }$ from the knowledge of the Dirichlet to Neumann (DtN) (or the Neumann to Dirichlet (NtD)) map at the boundary $\partial {\Omega
}$, are reviewed. A similar review has been performed in 1999 by M. Cheney, D. Isaacson and J. C. Newell [SIAM Rev. 41, 85-101 (1999; Zbl 0927.35130)]. It is very useful because it discusses the
performances and limitations of some of the previously proposed methods for solving the inverse problem.
The DtN and the NtD maps are characterized through the Dirichlet and Thompson variational principles, respectively. For Lipschitz bounded domains ${\Omega }$ in ${ℝ}^{d}$, the DtN determines uniquely
a positive isotropic conductivity in ${W}^{2,p}\left({\Omega }\right)$, for $p>1$, if $d=2$, see A. I. Nachman [Ann. Math. (2) 143, 71-96 (1996; Zbl 0857.35135)], and a positive isotropic Lipschitz
conductivity, if $d\ge 3$, see L. Päivärinta, A. Panchenko and G. Uhlmann [Rev. Mat. Iberoam. 19, 57-72 (2003; Zbl 1055.35144)]. For a positive anisotropic conductivity $\sigma$ the DtN map determines
uniquely $\sigma$ up to a diffeomorphism, if $\sigma \in {C}^{2,\alpha }\left(\overline{{\Omega }}\right)$, $\alpha \in \left(0,1\right)$ and $\partial {\Omega }\in {C}^{3,\alpha }$, see J. Sylvester
[Commun. Pure Appl. Math. 43, 201-232 (1990; Zbl 0709.35102)] if $d=2$, and if $\sigma$ is analytic, if $d=3$, see J. M. Lee and G. Uhlmann [Commun. Pure Appl. Math. 42, 1087-1112 (1989; Zbl
0702.35036)]. An interesting analogy between EIT and electrical networks and magnetotellurics is made, giving new areas for research in electrical engineering geophysics. The stability of the inverse
problem relies on the logarithmic estimates for $\sigma \in {H}^{2+s}\left({\Omega }\right)$, $s>d/2$ given by G. Alessandrini [Appl. Anal. 27, 153-172 (1988; Zbl 0616.35082)].
The review continues with how to stabilize the inverse problem by a regularization approach which ensures convergence of reconstruction algorithms, by restricting the admittivity $\gamma$ to a compact subset of $L^{\infty}(\Omega)$. It is noted that basically all the known regularization methods make use of some a priori information about the unknown $\sigma$ or $\gamma$ and, as a result, they may produce artifacts in the images. The need for criteria for comparing regularization methods is also stressed. Various imaging methods are described. First, the linearized EIT problem $\sigma=1+\delta\sigma$ is reviewed and it is concluded that there is no known exact (or fully satisfactory) reconstruction of $\delta\sigma$ inside $\Omega$. For the nonlinear EIT problem the layer stripping algorithm is not stable, but for the inverse conductivity problem there are other methods such as the signal processing method, see M. Brühl and M. Hanke [Inverse Probl. 16, 1029-1042 (2000; Zbl 0955.35076)], and the level set method, see F. Santosa [ESAIM Control Optim. Calc. Var. 1, 17-33 (1996; Zbl 0870.49016)]. Iterative algorithms are also reviewed; unfortunately the exposition is rather restrictive since $\gamma$ is assumed known at the boundary.
The output least-squares method with regularization is described and two important questions that affect the quality of the final image are addressed: (1) how to discretize the unknown $\gamma$, and (2) what current flux excitations to apply. For the first question one can use multigrid methods or optimal finite-difference grids, but one may as well employ the finite element method. For the second question, if one is interested in distinguishing $\sigma$ from a given $\sigma^0$, then one should apply the current flux given by the leading eigenvectors (or right singular vectors) of the difference between the inverse DtN operator for $\sigma$ and the inverse DtN operator for $\sigma^0$.
Alternatively, one may use variational algorithms. It is highlighted that there exists no successful imaging algorithm which uses the Kohn and Vogelius relaxed variational formulation of EIT. More powerful seem to be variational feasibility constraints. Some interesting remarks are made on the regularity of the forward map, see D. C. Dobson [SIAM J. Appl. Math. 52, 442-458 (1992; Zbl 0747.35051)] and E. Bonnetier and M. Vogelius [SIAM J. Math. Anal. 93, 651-677 (2000; Zbl 0947.35044)].
Finally, several open questions are addressed, such as: (i) the injectivity of the DtN map at the boundary for the complete electrode model; (ii) better parametrization of the unknown $\sigma$ or $\gamma$; (iii) anisotropy.
Overall, this work is an excellent topical review paper on EIT for the continuum model.
An Addendum is given in ibid. 19, 997-998 (2003).
35R30 Inverse problems for PDE
35-02 Research monographs (partial differential equations)
35Q60 PDEs in connection with optics and electromagnetic theory
92C55 Biomedical imaging and signal processing, tomography
78A55 Technical applications of optics and electromagnetic theory
|
{"url":"http://zbmath.org/?q=an:1031.35147","timestamp":"2014-04-20T23:31:26Z","content_type":null,"content_length":"30715","record_id":"<urn:uuid:b1860a6f-227e-414a-bac8-ed66717562ab>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
|
inverse of these functions
November 13th 2013, 03:30 AM #1
Could someone please just confirm if my answers are correct? I'm looking for the inverses of the following functions. (This is for a programming assignment; x is a variable I will be taking from the user and then subbing into formulas.)
f(x) = 3x + 2
g(x) = 8x^3
inverse of f(x) = (x-2)/3
inverse of g(x) = cubed root of 8 * x
I think that second one is wrong; I've been looking at vids online but I can't figure out how to invert 8x^3. Could anyone help me?
Re: inverse of these functions
The first one is right. For the second, solve $y = 8x^3$ for $x$: the inverse is $\sqrt[3]{x/8}$, which can be written as $\frac{\sqrt[3]{x}}{2}$ because $\sqrt[3]{8}=2.$
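Since this is for a programming assignment, a minimal sketch (hypothetical function names) of both inverses might look like:

def f_inverse(x):
    # inverse of f(x) = 3x + 2
    return (x - 2) / 3.0

def g_inverse(x):
    # inverse of g(x) = 8x^3: solve y = 8x^3 for x, giving x = (y/8)^(1/3)
    # (valid for x >= 0; negative inputs need a sign-aware cube root)
    return (x / 8.0) ** (1.0 / 3.0)

# quick checks: applying the inverse to the original function returns the input
assert abs(f_inverse(3 * 4 + 2) - 4) < 1e-9
assert abs(g_inverse(8 * 2 ** 3) - 2) < 1e-9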
|
{"url":"http://mathhelpforum.com/pre-calculus/224221-inverse-these-functions.html","timestamp":"2014-04-18T12:03:51Z","content_type":null,"content_length":"33895","record_id":"<urn:uuid:22d5991e-aa81-45c2-bc77-4fd82d6016ba>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the cost of energy consumption and cooling servers?
I'm trying to develop a spreadsheet that tracks the energy consumption
in kwh and heat dissipation in BTU/hr. I also want to assign a cost to
these metrics. Computing energy consumption and cost seems easy enough:
Watts x 8760(hours in a year)/1000 gives you kWH. kWH x .10(cost per
kWH) gives you the cost per year for energy consumption.
Computing the cost for heat dissipation proves to be a lot harder. I
have only been able to find one formula to do this but I don't
understand it.
The formula comes from http://findarticles.com/p/articles/mi_m0BRZ/is_10_23/ai_111062988.
A synopsis of the article is:
We're not done with the calculators just yet. There is one other factor
that we failed to take into consideration; all electrical products
produce heat. Heat is measured in BTUs (British Thermal Units), and for every watt of power consumed, about 3.41 BTUs of heat are produced per hour. As a result, the 100TBs
of RAID noted above generate approximately 136,400 BTUs per hour. The
more power used, the more heat is produced, which must be compensated
with cooling to prevent the products from overheating. Air conditioning
systems installed on top of buildings are used to introduce cool air
into the computer room to keep the temperature constant. Unfortunately,
air conditioning systems use power, too, and the age and efficiency of
the air conditioning system will determine how much electricity and cost
necessary to keep the above storage system cool. The efficiency of an
air conditioner is based on the K-Factor. High efficiency units may
consume as little as .33 BTU to cool 1 BTU of Heat. Older units may have
a 1:1 power to cooling ratio. Therefore the next calculation looks like this:
#BTU / 3.4 / 1000 x .33 x .10 = cost per hour, or 136,400 / 3.4 / 1000 x .33 x .10 = $1.32 per hour,
or $31.68 per day, or $11,563 per year.
I don't understand why BTU's are divided by 3.4.
Does anyone know or have a formula to use to calculate the cost of heat dissipation?
Merlin5x5 replied Jul 26, 2007
You are converting BTU back to watts by dividing: Total BTU / 3.4 (the source relation is Watts x 3.4 = BTU, so Watts = BTU / 3.4).
Then you convert total watts to kilowatts: Total watts / 1000.
Now you multiply by your cost: Total kW x cost per kWh (.10).
That gives you your hourly cost of cooling.
Multiply your hourly rate by 24 to get the daily rate: 1.32 x 24 = 31.68. Multiply the daily rate by 365 to get the yearly rate: 31.68 x 365 = 11,563.
Check with your local power company for rates that vary by hour,
just to make it more complicated.
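A minimal spreadsheet-style sketch of the same arithmetic (hypothetical function names; the 3.41 BTU/hr-per-watt conversion and the .33 efficiency factor are taken from the quoted article):

HOURS_PER_YEAR = 8760
WATTS_PER_BTU_HR = 1.0 / 3.412  # roughly 0.29 W of heat per BTU/hr

def it_energy_cost_per_year(watts, dollars_per_kwh=0.10):
    # direct electricity cost of running the equipment for a year
    return watts * HOURS_PER_YEAR / 1000.0 * dollars_per_kwh

def cooling_cost_per_year(btu_per_hour, efficiency_factor=0.33, dollars_per_kwh=0.10):
    # convert the heat load back to electrical kW, scale by the A/C
    # efficiency factor, then price it per hour and per year
    cooling_kw = btu_per_hour * WATTS_PER_BTU_HR / 1000.0
    hourly_cost = cooling_kw * efficiency_factor * dollars_per_kwh
    return hourly_cost * HOURS_PER_YEAR

print(cooling_cost_per_year(136400))  # roughly $11,500 per year, as in the article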
Robert Gregory replied May 13, 2011
Merlin, I just used this today to do a TCO on power and cooling consumption, and this formula is a life saver. Should be easy to put into a spreadsheet.
Rod Johnson replied Jun 5, 2013
Dividing the BTUs by 3.4 basically converts the BTUs back to watts. Dividing the watts by 1,000 converts the unit of measurement to kilowatts (kilowatt-hours when taken per hour). Multiplying by .10 applies the cost per kWh, and the .33 is the efficiency factor of the air conditioner.
|
{"url":"http://unix.ittoolbox.com/groups/technical-functional/unixadmin-l/what-is-the-cost-of-energy-consumption-and-cooling-servers-1545320","timestamp":"2014-04-20T23:35:02Z","content_type":null,"content_length":"119758","record_id":"<urn:uuid:47f11178-37d4-4d74-9772-ded0c1ba2003>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
|
HELP SIMPLIFY (sinx/(1-cosx)) + ((1-cosx)/sinx)
I believe you have attempted to do it. Where were you stuck? Though the answer isn't that pretty..
okay so i was thinking of multiplying cosx to the numerator and denominator of the (sinx/(1-cosx)) so then the denominator will be sin^2x and then multiply sinx to ((1-cosx)/sinx) so that both denominators will be sin^2x is that even a good first step to take?
sinx/(1-cosx) * (1+cosx)/(1+cosx) + (1-cosx)/sinx
= sinx(1+cosx)/sin^2 x + (1-cosx)/sinx
= (1+cosx)/sinx + (1-cosx)/sinx
= (1+cosx+1-cosx)/sinx
= 2/sinx
why did you do this sinx(1+cosx)/sin^2 x or how'd that happen
oh wait nevermind i see what you did there
when you multiply (1-cosx) by its conjugate (1+cosx), you get (1-cos^2 x) which is just sin^2 x --> sin^2 x + cos^2 x = 1 so 1-cos^2 x = sin^2 x
okay well how did that turn into (1+cosx)/sinx?
there is a sinx in the numerator and a sin^2 x in the denominator. so 1 of them cancels out.
and you got 2/sinx because the +cosx and the -cosx cancel out right?
because that is the remaining denominator for both terms.
you can simplify it even further because 1/sinx = cscx so it can also be written as 2cscx
oh okay thank you this was really helpful
|
{"url":"http://openstudy.com/updates/50a06388e4b0e22d17eebfea","timestamp":"2014-04-16T16:52:33Z","content_type":null,"content_length":"56615","record_id":"<urn:uuid:4cafcc27-5a8f-4dc0-8f4a-9fb78a8a1717>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Electric dipole's maximum angular velocity
1. The problem statement, all variables and given/known data
Consider an electric dipole located in a region with an electric field of magnitude [tex]E[/tex] pointing in the positive y direction. The positive and negative ends of the dipole have charges
+q and -q, respectively, and the two charges are a distance D apart. The dipole has moment of inertia I about its center of mass. The dipole is released from angle [tex]\theta[/tex], and it is
allowed to rotate freely.
What is [tex]\omega_{max}[/tex], the magnitude of the dipole's angular velocity when it is pointing along the y axis?
2. Relevant equations
dipole moment: p = qD
U = -[tex]\vec{p}\cdot\vec{E}[/tex]
3. The attempt at a solution
I attempted to use energy, but I am not sure how to do it correctly - does potential energy equal kinetic? is the potential energy the one described in the above equation?
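A sketch of the energy route, assuming the dipole is released from rest so that all of the lost potential energy becomes rotational kinetic energy:
[tex]U(\theta) = -\vec{p}\cdot\vec{E} = -qDE\cos\theta[/tex]
[tex]\tfrac{1}{2}I\omega_{max}^2 = U(\theta) - U(0) = qDE(1-\cos\theta)[/tex]
[tex]\omega_{max} = \sqrt{\frac{2qDE(1-\cos\theta)}{I}}[/tex]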
|
{"url":"http://www.physicsforums.com/showthread.php?t=216085","timestamp":"2014-04-18T18:24:28Z","content_type":null,"content_length":"25803","record_id":"<urn:uuid:0785c463-2e91-4ae6-a05d-2adaa100acd2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear Independence/Dependence
I clearly defined the combination of elements and you clearly understood this.
Why are you now asking for a zero set?
The book you quoted clearly talked about addition of two elements. So it makes sense that you would want to define A+B for A and B sets and that you want to have a zero set. If you cannot define
these things, then your definition of "linear dependence of sets" is not compatible with the definition of Borowski.
A vector is a set of points that satisfy certain conditions, specific to the problem in hand.
A vector is not a set of points. At least: nobody really thinks of a vector as a set of points. Depending on the set theory you choose, everything is a set. But I doubt many people in linear algebra
see (a,b) actually as [itex]\{\{a\},\{a,b\}\}[/itex].
Since this is getting further and further from the OP and personal to boot I withdraw from this thread.
That is perhaps the best decision.
|
{"url":"http://www.physicsforums.com/showthread.php?p=4182989","timestamp":"2014-04-20T11:23:40Z","content_type":null,"content_length":"28416","record_id":"<urn:uuid:9857414a-f41e-4519-b69d-69054a59e504>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Review of Sand Production Prediction Models
Journal of Petroleum Engineering
Volume 2013 (2013), Article ID 864981, 16 pages
Review Article
Review of Sand Production Prediction Models
^1Department of Civil and Environmental Engineering, University of Alberta, Edmonton, AB, Canada T6G 2W2
^2BP America Inc., Houston, TX 77079, USA
Received 31 August 2012; Revised 7 November 2012; Accepted 12 November 2012
Academic Editor: Jorge Ancheyta
Copyright © 2013 Hossein Rahmati et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
Sand production in oil and gas wells can occur if fluid flow exceeds a certain threshold governed by factors such as consistency of the reservoir rock, stress state and the type of completion used
around the well. The amount of solids can be less than a few grams per cubic meter of reservoir fluid, posing only minor problems, or a substantial amount over a short period of time, resulting in
erosion and in some cases filling and blocking of the wellbore. This paper provides a review of selected approaches and models that have been developed for sanding prediction. Most of these models
are based on the continuum assumption, while a few have recently been developed based on discrete element model. Some models are only capable of assessing the conditions that lead to the onset of
sanding, while others are capable of making volumetric predictions. Some models use analytical formulae, particularly those for estimating the onset of sanding while others use numerical models,
particularly in calculating sanding rate. Although major improvements have been achieved in the past decade, sanding tools are still unable to predict the sand mass and the rate of sanding for all
field problems in a reliable form.
1. Introduction
A significant proportion of the world oil and gas reserves is contained in weakly consolidated sandstone reservoirs and hence is prone to sand production. Material degradation is a key process
leading to sanding. Drilling operations, cyclic effects of shut-in and start-up, operational conditions, reservoir pressure depletion, and strength-weakening effect of water may gradually lead to
sandstone degradation around the perforations and boreholes. High pressure gradient due to fluid flow also facilitates the detachment of sand particles. In addition, fluid flow is responsible for the
transport and production of cohesionless sand particles or detached sand clumps to the wellbore.
Sand production is the cause of many problems in the oil industry and it affects the completion adversely. These problems include, but are not limited to, plugging the perforations or production
liner, wellbore instability, failure of sand control completions [1], collapse of some sections of a horizontal well in unconsolidated formations, environmental effects, additional cost of remedial
and clean-up operations, and pipelines and surface facilities erosion, in case the sand gets out of the well. The mechanical prevention of sanding is costly and leads to low productivity/injectivity.
Therefore, there is always a cost benefit if sand management and modeling is implemented early before well completions.
Sand production takes place if the material around the cavity is disaggregated and additionally, the operation of the well generates sufficient seepage force to remove the sand grains. It is a
complex phenomenon which depends on various parameters such as the stress distribution around the wellbore, the properties of the rock and fluids in the reservoir, and the completion type. Therefore,
capturing all the factors and mechanisms in the numerical models is difficult and the models have many limitations.
Due to the importance of the sand production prediction in the oil industry, considerable efforts have been made in developing robust numerical methods for sand production prediction. In this paper,
the techniques, advances, problems, and the likely future development in numerical models for the prediction of sand production are presented.
2. Common Techniques Used in Sand Management Decisions
A number of approaches have been developed to predict or help to understand the sand production problem using physical model testing, analytical and empirical relationships, and numerical models.
Routine laboratory tests can only predict the onset of sand production [2]. More sophisticated physical models can predict volumetric sand production [3], but they are time-consuming and expensive. In addition, because of the small size of laboratory setups, the results are usually influenced by boundary effects. Analytical models are fast and easy to use but they are only suitable to
predict the onset of sand production and they have limitations. Most of them are only valid for capturing a single mechanism of sanding and under simplified geometrical and boundary conditions which
are not usually the case in complicated field-scale problems. Numerical models are by far the most powerful tools for predicting sand production. They can be combined with analytical correlations to
obtain the results more efficiently. Experimental results are also utilized to calibrate or validate a numerical model. Yet, numerical models have their own limitations and extensive efforts have
been made to improve them.
Modeling of sand production requires coupling of two mechanisms. The first mechanism is mechanical instability and degradation around the wellbore and the second one is hydromechanical instability
due to flow-induced pressure gradient on degraded material surrounding the cavity (e.g., perforation and openhole). In general, numerical methods in the mechanical modeling are categorized under
continuum and discontinuum approaches.
In the continuum approach, matters are treated as continuous in deriving the governing differential equations. The assumption of continuity implies that the material cannot be separated or broken
into smaller pieces. In the case when there is a discontinuity, the magnitudes of deformation along or across the discontinuity are about the same as the rest of the continuum [4].
Discrete element method (DEM) is a useful tool to simulate sand production especially to understand the mechanism of sanding. However, it cannot be used for large-scale problems because of large
computational time required. The calibration of the model is also difficult and involves several uncertainties as it is not possible to create a model with the exact particle arrangement as the real
material. Further, methodologies to directly measure sandstone micro properties have not been developed yet. Presently, the microproperties are obtained in calibrating against the actual sand
behavior [5, 6]. Therefore, continuum-based models are more popular especially for field-scale problems. However, there can be advanced models which couple continuum and discontinuum models together
to take advantage of both methods. These are known as hybrid models and are discussed later.
A comprehensive sand management may require the use of some or all of the above mentioned techniques.
2.1. Numerical Models Based on Continuum Approach
Developments of continuum models are based on various assumptions, constitutive laws, sanding criteria, and numerical procedures with different levels of complexity to capture the physical behavior
of the material.
Table 1 summarizes the majority of continuum-based sanding models. Initially, many researchers calculated only the onset of sand production or the initiation of mechanical failure, until Vardoulakis et al. [7] proposed the basic theory for hydrodynamic erosion of sandstone, which is based on filtration theory without solving the equilibrium equation. Later, Papamichos and Stavropoulou [8] combined the evolution of localized deformation with hydrodynamic erosion. This was the starting point for many studies that adopt full strength hardening/softening behavior of sandstone in their models [9–17].
The results are highly mesh-dependent for strain softening material and hence a regularization method is necessary. Regularization methods include an internal length, which has been related to the
grain size, in the formulation. Papanastasiou and Vardoulakis [18] applied Cosserat micro-structure method [18, 19] for cavity failure around boreholes. Zervos et al. [20] considered Gradient
elastoplasticity [21] for thick-walled cylinders. The fracture energy regularization technique [22] was also applied by Nouri et al. [16], Wang et al. [23], and Rahmati et al. [24] in sand production modeling.
2.1.1. Constitutive Models Used in Continuum Approaches
Rock failure and/or degradation is commonly accepted as a prerequisite for sanding. Failure of geomaterials is usually associated with the formation of shear bands, which are narrow zones of concentrated
plastic deformation. This phenomenon, which is known as “deformation localization” or simply “localization,” is one of the key parameters in sanding prediction models. Further details about this
concept can be found in Sulem et al. [40], Nouri et al. [16], and Jafarpour et al. [41].
In the simplest form, an elastic brittle failure model has been implemented in sand production models by Nordgren [42], Coates and Denoo [43], Risnes et al. [44], and Edwards et al. [45]. Most of
these models predict the onset of sand production by considering failure of the sand matrix. Elastic brittle failure rock behavior leads to excessive stress concentrations at the borehole wall and
therefore results in overestimation of initial sand production conditions. This model may be used as a quick estimate of sanding onset in relation to production parameters and in situ stresses. The
predictions of the elastic models are cumbersome unless they are combined with apparent strength models.
An elastoplastic material model can simulate the material behavior more realistically. However, it requires more computational effort and more input data. Many researchers have implemented
elastoplastic models in sand production analysis (e.g., Morita et al. [25]; Antheunis et al. [46]; Peden et al. [47]; Papamichos and Vardoulakis [9]; Detournay et al. [11]; Wan and Wang [48]; Servant
et al. [36]; Wang et al. [35]; Wan et al. [49]; Detournay [15]; Wan and Wang [50]; Vaziri et al. [14]; Nouri et al. [13]; Nouri et al. [51]; Nouri et al. [34]; Rahmati et al. [24]; Azadbakht et al. [
39]). These works will be discussed in more detail in the next section.
Classical elastoplastic continuum models and the abovementioned failure criteria are inefficient in modeling localization phenomena, which are of a discontinuum nature. Therefore, employing such models in the simulation of material degradation leads to an inability to recover size effects and to a dependency of the results on the numerical mesh design. This problem can be addressed by pursuing two distinct solutions: using discontinuum models, which we discuss in the next sections, or enriching the material model with an appropriate regularization strategy. Papanastasiou and Zervos [
52] used Cosserat continuum and gradient elastoplasticity theories to predict the localization phenomenon. These models also proved efficient in capturing the size effect often observed in laboratory
tests performed on thick-walled cylinder samples [53].
Vardoulakis et al. [54] used bifurcation theory to predict the failure mode and the critical value of the stress at infinity. They showed that failure depends on stress path and boundary conditions.
Tronvoll et al. [55] performed finite element modeling using bifurcation theory to solve the axisymmetric problem for the trivial solution and checked the condition for non-axisymmetric bifurcation
modes. Van Den Hoek et al. [56] and Papanastasiou and Vardoulakis [57] used bifurcation theory in a Cosserat continuum which takes into account the material microstructure. In a Cosserat continuum
individual points possess, in addition to their translational degrees of freedom, independent rotational degrees of freedom, which result in an internal characteristic length in the classical
elastoplastic constitutive laws.
The models based on bifurcation theory require special numerical techniques to capture size effects, localization mechanisms, and so forth, which make it computationally demanding and hard to apply
in solving field problems.
As shown in Table 1, the most basic improvement in sand models is the yield function. Mohr-Coulomb (MC) model is the most common model being used. Vaziri et al. [10] modified the MC model using a
bilinear yield function to differentiate sand behavior under low and high confining stresses. The model was later used by Nouri et al. [12], Nouri et al. [38], and Vaziri et al. [14]. The theory is
based on Sulem et al. [40] and is described more thoroughly by Nouri et al. [16] and Jafarpour et al. [41]. Haimson and Lee [58] showed that the slit mode of cavity development is related to the
formation of compaction bands, which are thin bands of localized compressive deformation in high porosity rocks. Therefore, Detournay [15] extended Detournay et al. [11] work by accounting for the
compaction mode of failure. The model used Double Yield cap constitutive law to capture compaction bands. The simulation results show that the slit mechanism develops as a combination of volumetric
collapse and transport of failed material by seepage forces. It was found out that pore collapse is the responsible mechanism for slit mode of cavity failure associated with sand production. Although
it was considered as one of the main sanding mechanisms, many researchers have avoided the incorporation of compression yielding to simplify their models.
It appears that the optimum constitutive models are those that are based on the critical state theory and use a combined isotropic and kinematic hardening model which allows capturing all kinds of
failure (shear, tensile, and compressional). In addition, it would be ideal to capture the effect of hysteresis to simulate fatigue in cyclic start-up and shut-in conditions in the wellbore.
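As an illustration only (not any particular author's formulation), a Mohr-Coulomb yield check in terms of the major and minor principal stresses might look like:

import math

def mohr_coulomb_yield(sigma1, sigma3, cohesion, friction_angle_deg):
    # compression positive; f < 0 means the stress state is inside the
    # elastic domain, f >= 0 means shear yielding
    phi = math.radians(friction_angle_deg)
    return (0.5 * (sigma1 - sigma3)
            - 0.5 * (sigma1 + sigma3) * math.sin(phi)
            - cohesion * math.cos(phi))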
2.1.2. Sanding Criteria Used in Continuum Approaches
Several mechanisms are recognized as responsible for sand production. They are mainly based on shear and tensile failure, critical pressure gradient, critical drawdown pressure, critical plastic
strain, and erosion criteria. Table 2 summarizes the main sanding criteria used in sand models.
When the effective minimum principal stress is equal to the tensile strength of the formation rock, tensile failure may occur. This mode of failure is responsible for rock degradation. It can occur
as a standalone degradation mechanism or in combination with shear failure [22]. Tensile mode is also believed to be the responsible mechanism for particle removal after the degradation during
production. In this case, tensile failure relates to seepage forces on the degraded sand particles.
Shear failure may occur when some planes in the vicinity of the wellbore are subjected to higher stress than they can sustain. This is the dominant mechanism in cemented sands and when it is combined
with tensile cracks and high compressive stress, it can lead to buckling at the wellbore wall [18].
When effective hydrostatic stresses are increased due to the reservoir pressure depletion, pore collapse may occur which can lead to sand production. Plastic volumetric compression can be captured
using a compression yield cap in the failure envelope. This mainly occurs in high porosity sandstones.
Risnes and Bratli [59] proposed a tensile failure criterion for perforation tunnel inner shell collapse. Bratli and Risnes [60] and Weingarten and Perkins [61] proposed sanding criteria in terms of
pressure gradient. Morita et al. [25] proposed a sand production model that can be triggered by either shear failure or tensile failure.
Dynamic seepage drag forces lead to internal and surface erosion that result in releasing and transporting sand particles. Internal erosion may be related to micromechanical impacts imposed on solid
skeleton by gas bubbles, water droplets, and so forth. Surface erosion may be related to parallel flow scouring the surface and normal flow over the surface [28]. Numerous authors studied surface
erosion criterion in sand production modeling. Vardoulakis et al. [7] proposed a model that was based on mass balance of the produced solids and radial flowing flow conditions, the constitutive law
for particle erosion, and Darcy’s law but neglecting skeleton deformation. Based on sand production experiments, Tronvoll et al. [62] showed that in addition to the radial flow, axial flow parallel
to the perforation channels is important in sand production and it may result in surface erosion of the perforation channels. Consequently, Vardoulakis et al. [63] extended Vardoulakis et al. [7]
work to account for axial flow conditions. In the governing equations, they included Brinkman’s extension of Darcy’s law, which accounts for a smooth transition between channel flow and Darcian flow.
The results show that erosion progresses in time at the front of high transport concentration.
First Stavropoulou et al. [64] and later Papamichos et al. [9] advanced a purely hydromechanical model proposed by Vardoulakis et al. [7] by coupling the poromechanical behavior of the solid fluid
system with the erosion behavior of the solids due to fluid flow. Papamichos et al. [65] extended their own work by using a porosity diffusion type law that results in a sand rate that decreases over
time when the process of erosion zone enlargement takes place. The model is based on nonlinear elastoplasticity, nonlinear stress dependent elasticity, friction hardening and cohesion softening, and
single-phase flow fully coupled with geomechanics.
Wang et al. [35] also performed a fully coupled single-phase flow analysis based on erosion mechanics using the FEM. They applied the model in 2D plain strain geometry for both open hole and
perforated casing completion.
Based on laboratory experiments, Haimson [66] and Papamichos [67] observed the slit cavity evolution pattern during sand production. Subsequently Detournay et al. [11] proposed a model to predict the
formation of slit channels based on a critical flow rate. They modified the erosion law used by Vardoulakis et al. [7] with the addition of a critical flow rate which is a function of the grain size.
They also assumed that sand continuously is produced until a critical porosity is reached beyond which the material suddenly collapses. This process can be responsible for the periodic sand bursts
observed in experiments. The model was applied to 2D plain strain geometry for a long wellbore or perforation using a finite difference software. The model can predict different erosion features such
as surface spalling and small burst events.
Kim et al. [17] calculated sanding conditions and utilized a sanding criterion based on the force balance on each element and achieved a good match with experimental data reported by Nouri et al. [51
]. In their criterion, the forces leading to sand production are hydrodynamic forces generated by the pore pressure gradient between element faces in the flow direction. The resistance forces are the
forces that retain the elements in place and are generated by vertical and tangential stresses and the friction coefficient. The advantage of using this method is that no calibration parameter for
sanding (such as sand production coefficient) is necessary. The friction coefficient is an empirical parameter which depends on the grain size and mineralogy.
However, the abovementioned coupled hydromechanical erosion models have been criticized due to the following conflicting assumptions. Material mass balance equations are based on rigid porous media
while equilibrium equations are based on deformable porous media. Therefore in order to establish a proper coupled mechanical erosion model in a consistent manner, the porosity changes can be split
into two parts: one related to volume changes as a result of erosion, and the other one due to deformation in the matrix subjected to stress changes [48].
To sum up, a realistic sanding model should ideally account for all failure mechanisms (shear, tensile, and compressional) and must also consider the effect of fluid flow. Therefore, a suitable sand
erosion model consists of a combination of erosion criterion, tensile criterion, and compressional criterion. Considering the physics of sand production, erosion seems more suitable for weak rock
where full decementation and degradation to small particles is more likely [68]. On the other hand strong rocks are more prone to localized failure that result in larger chunks of sandstone which are
not easily eroded away. Lastly, compressional failure is more dominant in highly porous weak materials where void spaces collapse easily under high loading.
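As a deliberately simplified sketch (not any specific author's implementation), a Vardoulakis-type surface-erosion law of the form dphi/dt = lambda * (1 - phi) * c * |q|, with the sand production coefficient treated as a calibration parameter, could be stepped explicitly as:

import numpy as np

def erode(phi, c, q, lam, dt, phi_crit=0.45):
    # phi: porosity field, c: transport concentration of fluidized solids,
    # q: fluid flux magnitude per cell, lam: erosion (calibration) coefficient
    dphi = lam * (1.0 - phi) * c * np.abs(q) * dt
    phi_new = np.minimum(phi + dphi, phi_crit)  # stop eroding at a critical porosity
    eroded_volume = phi_new - phi               # solid volume turned into fluidized sand
    return phi_new, eroded_volume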
2.1.3. Phases Involved in Continuum Approaches
The models can be categorized into two groups based on the phases involved. In the first group, mass balance equation is solved for only fluid and solid phases while the second group recognizes
fluidized solid as a phase and solves for solid concentration in the fluid. Fluidized solids are the particles in suspension that move with the fluid. Any other loose particle which is trapped inside
the void space is seen as part of the solid phase. However, these models use a constant viscosity for the slurry. It is notable that, over time, researchers tend to use the simpler approach (solid and fluid phases) and couple the equations with an erosion model, mainly because good agreement between the model and field observations was obtained when this approach was combined with a suitable sanding criterion.
Multiphase fluid flow may also affect both the onset of sand production and sand rate. Tronvoll et al. [69] and Vaziri et al. [10] observed water cut effects on the onset of sand production. Water
inflow changes the relative permeability and capillary pressure. It also can dissolve cement bonds and weaken the strength of the porous media.
Wang et al. [48] presented an integrated modular approach to predict volumetric sand production and cavity growth under two-phase flow (oil and water) and 3D geometry. In this work, the effect of
water on rock strength reduction is reflected in material properties such as cohesion. The results show that water contact has a significant impact on the sand rate.
Gas flow also may hasten the instability process in sand production. When gas comes out the oil phase due to pressure drop and flows towards the wellbore at high velocities, it applies additional
drag forces on sand particles and increases sand production. Wan and Wang [49] studied the gas effects in 1D model using the finite element method. They assumed that eroded mass in erosion law is
proportional to the total fluid flux, which is referred to the oil and gas flux. The effect of multiphase fluid flow was also considered by Wang and Xue [32] for oil-water phases and later by Wang et
al. [35] for oil-water-gas phases.
The only available numerical works in the literature that incorporate the water contact effect on sanding were performed by adjusting the cohesion or lowering the rock strength [17, 39].
2.1.4. Treating the Sanded Elements
Different strategies have been used for dealing with those elements that satisfy sanding criteria. The first one is to remove such elements from the model assuming cavities and wormholes grow as a
result of sanding. This seems to be a suitable approach in stronger rocks in which stable cavities can form. The other approach is to keep the element in place but alter the material properties to
residual values. We believe that stable cavities cannot develop in weak sandstone. Rather, the space is occupied by infill material, or cohesionless sand particles. In this approach, sandstone
properties are altered to those of degraded cohesionless sand. The property change should also be applied in erosion models as a function of increasing porosity in the eroding elements.
It is evident that the moduli, tension cut off and permeability vary with the production of sand and increase of the porosity. However, the correct method to apply these changes requires experimental
data. Most researchers use an arbitrary choice of permeability change with porosity, with volumetric strain, or even with mean stress. Wang and Xue [32] used two permeability correlations and found that the permeability relationship plays an important role. There is also disagreement on whether permeability should increase or decrease. It is reported that for highly permeable sand formations, the
permeability of the disaggregated sand is much less than that of the intact sand. This is mainly attributed to the sand deposition and plugging of the pore space, which is not the case for less
permeable sand [26].
The moduli of the elements will also vary with porosity to the values of loose cohesionless sand (about 6.9 MPa for bulk modulus and 4.14 MPa for shear modulus). These numbers are the lowest values
reported for loose sand [70].
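One commonly used (but by no means prescribed) way to tie permeability to the evolving porosity is a Kozeny-Carman-type scaling; as noted above, any such relation should be calibrated against experimental data:

def update_permeability(k0, phi0, phi):
    # k0, phi0: reference (intact) permeability and porosity
    # phi: current porosity of the degraded element
    return k0 * (phi / phi0) ** 3 * ((1.0 - phi0) / (1.0 - phi)) ** 2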
2.1.5. Model Design
Openhole completion is often treated with axisymmetrical models. Strictly speaking, this is correct only when the horizontal stresses are equal. However, most of the time, if not always, the maximum and minimum principal horizontal stresses are not equal. In such cases a plane strain model can be a suitable choice for 2D analysis. Plane strain may not always be an appropriate assumption as vertical deformation
may not necessarily be negligible. Papanastasiou and Zervos [52] suggested that generalized plane strain could be an appropriate assumption in the modeling of vertical and inclined wellbores.
As sand production model is commonly used in cased and perforated (C&P) completions, it is important to consider the sand behavior under such geometrical and boundary conditions. For instance the
frequency of shots, the length of perforations, and their orientations may lead to a more intense commingling of the failed zones and finally higher sanding rate. One solution is using 3D simulation
but it is computationally demanding because very fine mesh is required around perforations. In addition, creating the hollow geometry in weak rocks may not be reasonable as in reality the perforation
can collapse upon creation and be filled with infill degraded sand.
Some perforation simulations have used axisymmetric models in which the perforation is assumed to be ring-shaped rather than conical or cylindrical [39]. Such an assumption influences the pressure
gradient around the perforations and also impacts the mechanical response. These models are also unable to capture the perforation direction effect, which can play a significant role in perforation
stability and failure as demonstrated by Papanastasiou and Zervos [71]. They performed a 3D numerical simulation to study the effect of orientation on stability and failure of perforation. Based on
results of this work, it is recommended to avoid perforating the wellbore parallel to the minimum horizontal stress direction as perforations in this direction suffer more compressive stress and
hence more chance of failure and sand production. These models require calibration against field and/or physical model testing before application to real-world sanding problems.
2.1.6. Other Factors for Continuum-Based Numerical Models
Sand production is a moving-boundary problem. As sand is produced, a sanded zone is formed around the perforations. Adaptive meshing can be very useful in processes where the geometry or the boundary
is changing. Yet, there are only two applications of adaptive meshing in the literature [23, 37].
The current models are unable of predicting the sand bridging and fines retention in the rock. Sand particles can aggregate at the perforation cavity and form a stable entity called sand bridge and
act as a filter which may reduce further sand production as long as the flow rate remains constant. The theory for modeling sand deposition was proposed by Vardoulakis et al. [7] but it has been
applied only in one model [30] using a similar but not exactly the same approach. Yi [30] considered a part of the degraded sand as to be deposited in the porous media. The difficult part about
modeling sand deposition is the calibration of the critical porosity or the critical sand concentration after which sand deposition initiates.
An important issue about the current sand models is that almost all of them are applied in modeling production wells. A few researchers [14, 30] performed numerical studies to predict sanding in
injector wells. Sanding mechanisms in injectors have not been investigated thoroughly and may be quite different, for example because of the effects of water hammer (WH) waves. Observations in injection wells are often described as cases of high sand production within a short duration. A few papers [72–75] have tried to explain the sanding problems in injectors. It is stated that sand liquefaction due to WH
pressure pulses is the most likely mechanism for massive sand production. Liquefaction is defined as the process by which saturated sand loses shear strength and stiffness in response to dynamic
loading [76]. WH is a general term describing generation, propagation, and damping of pressure waves in pipes. It occurs due to sudden velocity changes such as quick shutting of the well [72].
When liquefaction occurs, sand particles can flow easily, like a liquid. Nevertheless, no work has been published which investigates liquefaction around the wellbore and the conditions leading to it, and as no investigation has been performed on liquefaction in sand production, it is difficult to confirm its role in massive sand production.
2.2. Numerical Models Based on Discontinuum Approach
Sand production is a continuous and dynamic process that occurs at the microscopic scale and the rock becomes a discontinuum in nature. As mentioned before, conventional continuum approaches cannot
capture local discontinuous phenomena. Therefore, discontinuum approach is promising to simulate phenomena such as detachment of individual particles from the rock matrix.
Cundall [83] first introduced the Discrete Element Method (DEM). The method can be used to simulate the disintegration of granular media subjected to loading. Each particle of the granular media is
considered as an individual entity with a geometric representation of its surface topology and a description of its physical state. Particle bonds are modeled with a spring-dashpot in the normal
direction and a spring-dashpot-frictional slider in the tangential direction. In the DEM, the interaction of the particles is treated as a dynamic process and a state of equilibrium is reached
whenever the internal forces are equal to the external forces. The contact forces and displacements of a stressed assembly of particles are found by tracing the movements of the individual particles.
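As an illustration only (not any particular code's contact law), a linear spring-dashpot normal contact with a Coulomb-capped tangential spring could be written as:

def contact_force(overlap, v_n, v_t, k_n, k_t, c_n, mu, f_t_old, dt):
    # overlap: normal overlap of the two particles (> 0 when in contact)
    # v_n, v_t: relative normal and tangential velocities at the contact
    # k_n, k_t: normal and tangential spring stiffnesses; c_n: normal dashpot
    # mu: friction coefficient; f_t_old: tangential force from the previous step
    if overlap <= 0.0:
        return 0.0, 0.0                      # no contact, no force
    f_n = k_n * overlap + c_n * v_n          # spring-dashpot in the normal direction
    f_t = f_t_old + k_t * v_t * dt           # incremental tangential spring
    f_t_max = mu * abs(f_n)                  # Coulomb slider caps the tangential force
    f_t = max(-f_t_max, min(f_t, f_t_max))
    return f_n, f_t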
Table 3 summarizes some of the discontinuum-based sanding models. At first, O’Connor et al. [77] introduced the application of DEM to model the mechanics of sand production during oil recovery. Using
laser scanning and sieve testing they developed techniques to consistently represent particles with irregular geometries. They used particle bonding scheme to mimic the cement and cohesion due to
capillary forces. The bond also incorporated spring stiffness and a nominal tensile breaking strength assuming a bond dimension proportional to the size of the particles it connects. They
incorporated the fluid flow calculations by combining the continuity equation and Darcy’s law using the finite element method. Darcy flow is formulated with a measure for effective permeability in
the solid medium based on the porosity and average diameter of the solid particles. Their 2D model provides an understanding of the fundamental physics involved in sand production and the relative
importance of various rock and fluid properties.
Jensen and Preece [78] explored the coupling of 2D DEM and finite element implementation of the 2D continuity equation for Darcy flow to assess the sanding potential. The basic particle shape used by
the model was an n-sided polygon and only the tensile mode for bond failure was considered. They concluded that lower strength of the cohesive bonds increased the number of particles breaking free
from the solid matrix.
Li et al. [79] used commercial DEM code PFC2D to simulate hollow cylinder tests with fluid flow to study sanding. PFC2D simulates an assembly of circular disks with the bonds inserted between them.
The disks are assumed to be rigid but they can overlap. The bonds have normal and shear stiffness and strength. In the standard PFC2D code, a bond fails when the tensile or shear stress in the bond
exceeds its strength. Bond breakage may be interpreted as microfailure in the real rock. The growth of such microfailures eventually leads to macroscopic failure of the rock. Li and Holt [80] showed
that DEM model may not result in realistic macroscopic friction coefficients if only circular or spherical grain shapes are used.
Li et al. [79] improved the prediction of macroscopic friction coefficient by setting the bond strength so high that no bonds in the model would fail due to the stress in the bond. Instead, all bonds
associated with a given disk break when the stresses inside the disk satisfy a failure criterion. They used a failure criterion composed of tensile failure, shear failure, and compressive failure. A
simple approach was used to calculate and couple the fluid flow. They found three typical failure patterns in the simulations similar to those observed in laboratory experiments. The slit-like
breakout failure pattern was observed when the material is prone to localized compressive failure due to grain crushing. For those cases where the material was weak and tensile strength was low,
uniform failure around the borehole was observed along with a rather uniform hole enlargement. In those cases with relatively competent rock properties, which were unlikely to fail in localized
compaction, the failure pattern was observed to be in the form of dog-eared breakouts.
Several researchers (e.g., [80, 85]) have modeled fluid flow in 2D DEM codes by introducing fluid flow networks and simulating flow along the flow paths connecting the voids which are referred to as
pipes. The fluid velocities that flow through the pipes and the pore pressures were computed based on Darcy’s theory. The forces arising from the pore fluid were then calculated and applied to the
particles in the DEM model. Li and Holt [80] implemented this type of fluid-solid coupling system in the PFC2D codes. While the geometrical limitations of using a 2D model to investigate breakout
geometry are obvious, the fluid flow simulation through flow networks is computationally expensive.
The entire coupling techniques described above used Darcy’s law to calculate fluid flux or pressure. Darcy’s law has been derived from the Navier-Stokes equations via homogenization that is only
valid for slow and viscous flow. The conditions may not be met around the wellbore where breakout forms [81]. Chan and Tipthavonnukul [86] proposed a method to couple continuum and discontinuum flow
to simulate granular particles movement in a flowing fluid. The fluid flow is modeled using the 2D Navier-Stokes equations solved by a finite volume method. The coupling is achieved by detecting the
presence of the solid in the flow domain and altering the flow resistance accordingly. The method is computationally expensive and therefore it is not applicable in large-scale problems.
In 2D DEM models, only two force components and one moment component are considered. However three force components and three moment components exist in a 3D DEM model. Since sand production is a
three-dimensional problem with complex geometry, where cement bonds and arching of particles are important, there is a need to model this problem using 3D DEM. The 2D DEM models overestimate the
effect of fluid flow on the integrity of the assembly of particles as the resistance to particle dislodgment due to contact forces and bonds normal to the fluid flow is neglected in 2D models. In
addition, 2D models cannot represent three-dimensional pore flow networks.
Cheung [81] used a coupled fluid-solid 3D DEM model for a perforation test simulation to study the sand production problem. They used the Navier-Stokes equations assuming radial flow. Although this
scheme is computationally inexpensive, it has two major problems. First, by assuming radial flow, it is not suitable for investigating the impact of the flow at the tip of the perforation where the
fluid flow is in all directions. Second, the fluid flow scheme does not consider the presence of the cement between particles. The magnitude of the radial pore fluid velocity in each fluid cell is
calculated, considering only the presence of the particles. The presence of cement can highly affect the rock conductivity and therefore the fluid velocity.
Later, Zhou et al. [82] employed DEM with computational fluid dynamics (CFD) and showed that the main features of sand erosion can be captured by the CFD-DEM approach.
The main advantage of DEM models is that they capture the motion and interaction of individual sand grains and the failure micromechanisms in a dynamic process. This enables the model to predict
many real behaviors such as continuously nonlinear stress strain response, behavior that changes in character according to stress state, memory of previous stress or strain excursion in both
magnitude and direction, dilatancy that depends on history, mean stress, and initial states, hysteresis at loading/unloading among others.
To the best of our knowledge, there is no existing continuum constitutive model that reproduces all of these behaviors. However, since the DEM involves many individual particles and interactions
between them, it is computationally expensive and therefore it is not applicable to large-scale problems.
Another disadvantage of the DEM model is the lack of a systematic method for an objective determination of micro material parameters. As opposed to the continuum-based models for which the strength
and elastic properties can be determined directly from laboratory testing, the micro properties cannot be determined by direct measurements of the macro responses on the laboratory specimens. They can be found by means of a calibration process in which a particular assembly of particles with a set of micro parameters is used to simulate a set of material tests and the micro parameters then
are evaluated to reproduce the macro responses measured in such tests [84]. Several researchers (e.g., [5, 6, 84, 87]) have proposed calibration procedures relating the micro parameters to macro
properties of the material. However, the calibration procedure is challenging for the 3D DEM models. There may be several variations in the parameters and it is difficult to conclude which set of
parameters is most appropriate for the material.
2.3. Hybrid Approaches
Considering the advantages and disadvantages of the continuum- and discontinuum-based approaches, a hybrid model that combines them can be practical and efficient in sand production modeling. Continuum-based approaches can be used in the far field, where deformations are small, the continuum assumption remains valid, and the computation is efficient. A discontinuum-based approach can then be used to describe large deformations and discontinuities near the wellbore or the perforations. In this manner, accurate and descriptive simulation of field-scale problems becomes possible with the computational power available today.
Some researchers have used this approach to analyze geomechanical problems. For example, El Shamy and Elmekati [88] and Elmekati and El Shamy [89] combined an FEM code with a DEM code to analyze soil-structure interaction problems. Azevedo and Lemos [90] used the same approach to study fracture growth in tensioned columns. In a similar work, Zeghal and El Shamy [91] coupled a continuum fluid model with a discontinuum particle model to analyze the dynamic liquefaction of granular soils.
To the best of our knowledge, the hybrid scheme has not been used in sand production modeling.
3. Conclusions
Despite the numerous efforts in sand production research and modeling, some fundamental deficiencies still need to be addressed. Considering the work reported in the literature, there is still significant room for improvement in sanding models. Some of the open issues are listed below.
The assessment of the properties of infill materials requires more investigation and experimental data, since the choice of these properties plays a significant role in sanding prediction. More accurate correlations for the changes of rock and flow properties with sand production and increasing porosity are essential.
Methods to capture sand arching have not been explicitly developed, as they require modeling the complex interaction between the geometry of the opening in the completion and the characteristics of the disaggregated rock mass under the prevailing stress state. Sand particles aggregate at the perforation cavity and form a stable entity, called a sand bridge, which acts as a filter that prevents further sand production as long as the flow rate remains constant.
Sand liquefaction around injection wells has not yet been fully investigated. Since there is no recorded measurement of how sanding occurs in an injector, it is not yet clear whether it occurs suddenly or gradually over many cycles, whether it is due to water hammer or to cross- and back-flow, or whether the produced material is sand or shale/clay.
The calibration procedure in the DEM model for determining micro material parameters needs further research.
In order to obtain more accurate analyses with the DEM, the fluid flow scheme needs to be modified. For instance, in current models the permeability is usually related only to porosity changes due to particle removal (an illustrative relation of this kind is given below). However, cement debonding and washout could also affect the pore network and therefore the permeability.
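As a purely illustrative example (not a relation taken from any particular model reviewed here), a Carman-Kozeny-type expression is one common way of tying permeability to porosity; writing k and k_0 for the current and initial permeability and \phi and \phi_0 for the current and initial porosity,

$$k = k_0\, \frac{\phi^3}{(1-\phi)^2}\cdot\frac{(1-\phi_0)^2}{\phi_0^3}.$$

A fluid flow scheme that also accounted for cement debonding and washout would need to modify such a relation, or the underlying pore-network description, rather than relying on porosity change alone.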
A hybrid model, combining DEM for the rock around the wellbore with a continuum approach for the far-field rock, is expected to provide a more realistic and yet practical representation of a number of critical factors governing sanding.
Nomenclature
: Dimensionless erosion onset coefficient
: Constant for permeability equation
: Constant for permeability equation
: Transport concentration
: Constant for permeability equation
: Critical value of c for which the two competing phenomena, erosion and deposition, balance each other
: Permeability
: Initial permeability
: Mass rate of sand production
: Mass rate of sand deposition
: Pore pressure
: Pore pressure at wellbore
: Fluid flow rate ()
: Critical specific discharge
: Specific discharge in the th direction
: Wellbore radius
: Average grain size
: The boundary surface, , (unit length along wellbore axis)
: Time
: Flow velocity
: Solid velocity
: Volume of boundary layer,
: Plugging permeability reduction coefficient
: Volumetric strain
: Volumetric strain rate
: Minimum horizontal stress
: Maximum horizontal stress
: Radial stress
: Tangential stress
: Vertical stress
: Effective stress acting parallel to the boundary
: Porosity
: Initial porosity
: Friction angle
: Sand production coefficient (must be calibrated experimentally)
: Sand deposition coefficient
: Permeability reduction coefficient
: Specific (unit) weight of fluid
: Fluid viscosity
: Friction coefficient
: Fluid density
: Solid density
: Density
: Equivalent density, )
: Proportion of particles with size less than average pore size
: Effective initial stress
: Notation representing the norm of a vector.
Conflict of Interests
The authors of this paper are not involved with any of the software companies whose products are mentioned in this paper in a way that would give rise to a conflict of interests.
Acknowledgments
The authors would like to acknowledge the research funding for this study provided by NSERC through a Collaborative Research and Development program supported by BP.
References
1. S. M. Willson, Z. A. Moschovidis, J. R. Cameron, and I. D. Palmer, “New model for predicting the rate of sand production,” in SPE/ISRM Rock Mechanics Conference, pp. 152–160, Irving, Tex, USA, October 2002.
2. Y. Xiao and H. H. Vaziri, “Import of strength degradation process in sand production prediction and management,” in Proceedings of the 45th U.S. Rock Mechanics/Geomechanics Symposium, San Francisco, Calif, USA, June 2011.
3. E. Papamichos, P. Cerasi, J. F. Stenebråten et al., “Sand production rate under multiphase flow and water breakthrough,” in Proceedings of the 44th U.S. Rock Mechanics Symposium and the 5th US/Canada Rock Mechanics Symposium, Salt Lake City, Utah, USA, June 2010.
4. L. Jing and O. Stephansson, Fundamentals of Discrete Element Methods for Rock Engineering: Theory and Applications, Elsevier, 2007.
5. N. Belheine, J. P. Plassiard, F. V. Donzé, F. Darve, and A. Seridi, “Numerical simulation of drained triaxial test using 3D discrete element modeling,” Computers and Geotechnics, vol. 36, no. 1-2, pp. 320–331, 2009.
6. P. H. S. W. Kulatilake, B. Malama, and J. Wang, “Physical and particle flow modeling of jointed rock block behavior under uniaxial loading,” International Journal of Rock Mechanics and Mining Sciences, vol. 38, no. 5, pp. 641–657, 2001.
7. I. Vardoulakis, M. Stavropoulou, and P. Papanastasiou, “Hydro-mechanical aspects of the sand production problem,” Transport in Porous Media, vol. 22, no. 2, pp. 225–244, 1996.
8. E. Papamichos and M. Stavropoulou, “An erosion-mechanical model for sand production rate prediction,” International Journal of Rock Mechanics and Mining Sciences and Geomechanics Abstracts, vol. 35, no. 4, pp. 531–532, 1998.
9. E. Papamichos, I. Vardoulakis, J. Tronvoll, and A. Skjærstein, “Volumetric sand production model and experiment,” International Journal for Numerical and Analytical Methods in Geomechanics, vol. 25, no. 8, pp. 789–808, 2001.
10. H. Vaziri, B. Barree, Y. Xiao, I. Palmer, and M. Kutas, “What is the magic of water in producing sand?” in SPE Annual Technical Conference and Exhibition, pp. 2973–2985, San Antonio, Tex, USA, October 2002.
11. C. Detournay, C. Tan, and B. Wu, “Modeling the mechanism and rate of sand production using FLAC,” in Proceedings of the 4th International FLAC Symposium on Numerical Modeling in Geomechanics, pp. 8–10.
12. A. Nouri, H. Vaziri, H. Belhaj, and R. Islam, “Sand-production prediction: a new set of criteria for modeling based on large-scale transient experiments and numerical investigation,” SPE Journal, vol. 11, no. 2, pp. 227–237, 2006.
13. A. Nouri, E. Kuru, and H. Vaziri, “Enhanced modelling of sand production through improved deformation and stress analysis,” in Proceedings of the Canadian International Petroleum Conference, pp. 12–14, Calgary, Canada, June 2007.
14. H. Vaziri, A. Nouri, K. Hovem, and X. Wang, “Computation of sand production in water injectors,” SPE Production and Operations, vol. 23, no. 4, pp. 518–524, 2008.
15. C. Detournay, “Numerical modeling of the slit mode of cavity evolution associated with sand production,” in Proceedings of the SPE Annual Technical Conference and Exhibition (ATCE '08), pp. 3422–3431, Denver, Colo, USA, September 2008.
16. A. Nouri, E. Kuru, and H. Vaziri, “Elastoplastic modelling of sand production using fracture energy regularization method,” Journal of Canadian Petroleum Technology, vol. 48, no. 4, pp. 64–71, 2009.
17. A. S. Kim, M. M. Sharma, and H. Fitzpatrick, “A predictive model for sand production in poorly consolidated sands,” in Proceedings of the International Petroleum Technology Conference, pp. 15–17, Bangkok, Thailand, November 2011.
18. P. C. Papanastasiou and I. G. Vardoulakis, “Numerical treatment of progressive localization in relation to borehole stability,” International Journal for Numerical and Analytical Methods in Geomechanics, vol. 16, no. 6, pp. 389–424, 1992.
19. H. B. Muehlhaus and I. Vardoulakis, “The thickness of shear bands in granular materials,” Geotechnique, vol. 37, no. 3, pp. 271–283, 1987.
20. A. Zervos, P. Papanastasiou, and I. Vardoulakis, “Modelling of localisation and scale effect in thick-walled cylinders with gradient elastoplasticity,” International Journal of Solids and Structures, vol. 38, no. 30-31, pp. 5081–5095, 2001.
21. A. Zervos, P. Papanastasiou, and I. Vardoulakis, “A finite element displacement formulation for gradient elastoplasticity,” International Journal for Numerical Methods in Engineering, vol. 50, no. 6, pp. 1369–1388, 2001.
22. T. Crook, S. Willson, J. G. Yu, and R. Owen, “Computational modelling of the localized deformation associated with borehole breakout in quasi-brittle materials,” Journal of Petroleum Science and Engineering, vol. 38, no. 3-4, pp. 177–186, 2003.
23. J. Wang, D. P. Yale, and G. R. Dasari, “Numerical modeling of massive sand production,” in Proceedings of the SPE Annual Technical Conference and Exhibition, Denver, Colo, USA, October 2011.
24. H. Rahmati, A. Nouri, D. Chan, and H. Vaziri, “Validation of predicted cumulative sand and sand rate against physical-model test,” Journal of Canadian Petroleum Technology, vol. 51, no. 5, pp. 403–410, 2011.
25. N. Morita, D. L. Whitfill, I. Massie, and T. W. Knudsen, “Realistic sand-production prediction: numerical approach,” SPE Production Engineering, vol. 4, no. 1, pp. 15–24, 1989.
26. N. Morita, E. Davis, and L. Whitebay, “Guidelines for solving sand problems in water injection wells,” in Proceedings of the SPE International Symposium on Formation Damage Control, pp. 189–200, Lafayette, La, USA, February 1998.
27. R. C. Burton, E. R. Davis, N. Morita, and H. O. McLeod, “Application of reservoir strength characterization and formation failure modeling to analyze sand production potential and formulate sand control strategies for a series of North Sea gas reservoirs,” in Proceedings of the SPE Technical Conference and Exhibition, pp. 159–168, New Orleans, La, USA, September 1998.
28. A. Skjaerstein, M. Stavropoulou, I. Vardoulakis, and J. Tronvoll, “Hydrodynamic erosion: a potential mechanism of sand production in weak sandstones,” International Journal of Rock Mechanics and Mining Sciences, vol. 34, no. 3-4, pp. 292.e1–292.e18, 1997.
29. E. Papamichos and E. M. Malmanger, “A sand erosion model for volumetric sand predictions in a North Sea reservoir,” in Proceedings of the SPE Latin American and Caribbean Petroleum Engineering Conference, Caracas, Venezuela, 1999.
30. X. Yi, “Water injectivity decline caused by sand mobilization: simulation and prediction,” in Proceedings of the SPE Permian Basin Oil and Gas Recovery Conference, pp. 202–209, Midland, Tex, USA, May 2001.
31. H. H. Vaziri, Y. Xiao, R. Islam, and A. Nouri, “Numerical modeling of seepage-induced sand production in oil and gas reservoirs,” Journal of Petroleum Science and Engineering, vol. 36, no. 1-2, pp. 71–86, 2002.
32. Y. Wang and S. Xue, “Coupled reservoir-geomechanics model with sand erosion for sand rate and enhanced production prediction,” in Proceedings of the SPE International Symposium on Formation Damage Control, pp. 373–383, Lafayette, La, USA, February 2002.
33. L. Y. Chin and G. G. Ramos, “Predicting volumetric sand production in weak reservoirs,” in Proceedings of the SPE/ISRM Rock Mechanics Conference, pp. 161–170, Irving, Tex, USA, October 2002.
34. A. Nouri, M. M. Al-Darbi, H. Vaziri, and M. R. Islam, “Deflection criteria for numerical assessment of the sand production potential in an openhole completion,” Energy Sources, vol. 24, no. 7, pp. 685–702, 2002.
35. J. Wang, R. G. Wan, A. Settari, and D. Walters, “Prediction of volumetric sand production and wellbore stability analysis of a well at different completion schemes,” in Proceedings of the 40th U.S. Symposium on Rock Mechanics (USRMS '05), Anchorage, Alaska, USA, June 2005.
36. G. Servant, P. Marchina, Y. Peysson, E. Berner, and J. F. Nauroy, “Sand erosion in weakly consolidated reservoirs: experiments and numerical modeling,” in Proceedings of the 15th SPE-DOE Improved Oil Recovery Symposium: Old Reservoirs New Tricks a Global Perspective, pp. 949–956, Tulsa, Okla, USA, April 2006.
37. A. Nouri, H. Vaziri, and E. Kuru, “Numerical investigation of sand production under realistic reservoir/well flow conditions,” in Proceedings of the Canadian International Petroleum Conference, pp. 13–15, Calgary, Canada, June 2006.
38. A. Nouri, H. Vaziri, H. Belhaj, and M. R. Islam, “Comprehensive transient modeling of sand production in horizontal wellbores,” SPE Journal, vol. 12, no. 4, pp. 468–474, 2007.
39. S. Azadbakht, M. Jafarpour, H. Rahmati, A. Nouri, H. Vaziri, and D. Chan, “A numerical model for predicting the rate of sand production in injector wells,” in Proceedings of the SPE Deep Water and Completions Conference and Exhibition, pp. 20–21, Galveston, Tex, USA, June 2012.
40. J. Sulem, I. Vardoulakis, E. Papamichos, A. Oulahna, and J. Tronvoll, “Elasto-plastic modelling of Red Wildmoor sandstone,” Mechanics of Cohesive-Frictional Materials, vol. 4, no. 3, pp. 215–245, 1999.
41. M. Jafarpour, H. Rahmati, S. Azadbakht, A. Nouri, D. Chan, and H. Vaziri, “Determination of mobilized strength properties of degrading sandstone,” Soils and Foundations, vol. 52, no. 4, pp. 658–667, 2012.
42. R. P. Nordgren, “Strength of well completions,” in Proceedings of the 18th US Symposium on Rock Mechanics (USRMS '77), Golden, Colo, USA, June 1977.
43. G. R. Coates and S. A. Denoo, “Mechanical properties program using borehole analysis and Mohr’s circle,” in Proceedings of the 22nd Annual Logging Symposium (SPWLA '81), pp. 23–26, 1981.
44. R. Risnes, R. K. Bratili, and P. Horsrud, “Sand stresses around a wellbore,” SPE Journal, vol. 22, no. 6, pp. 883–898, 1982.
45. D. P. Edwards, Y. Sharma, and A. Charron, “Zones of sand production identified by log-derived mechanical properties: a case study,” in Proceedings of the 8th European Formation Evaluation Symposium (SPWLA '83), London, UK, 1983.
46. D. Antheunis, P. B. Vriezen, B. A. Schipper, and A. C. van der Vlis, “Perforation collapse: failure of perforated friable sandstones,” in Proceedings of the SPE European Spring Meeting, pp. 8–9, Amsterdam, The Netherlands, April 1976.
47. J. M. Peden and A. A. M. Yassin, “The determination of optimum completion and production conditions for sand-free oil production,” in Proceedings of the SPE Annual Technical Conference and Exhibition, New Orleans, La, USA, October 1986.
48. R. G. Wan and J. Wang, “Analysis of sand production in unconsolidated oil sand using a coupled erosional-stress-deformation model,” Journal of Canadian Petroleum Technology, vol. 43, no. 2, pp. 47–53, 2004.
49. J. Wang, D. Walters, A. Settari, and R. G. Wan, “An integrated modular approach to modeling sand production and cavity growth with emphasis on the multiphase flow and 3D effects,” in Proceedings of the 41st U.S. Symposium on Rock Mechanics, Golden, Colo, USA, June 2006.
50. R. G. Wan and J. Wang, “Modeling of sand production and wormhole propagation in oil saturated sand pack using stabilized finite element methods,” Journal of Canadian Petroleum Technology, vol. 42, no. 12, pp. 1–8, 2003.
51. A. Nouri, H. Vaziri, H. Belhaj, and R. Islam, “Effect of volumetric failure on sand production in oil wellbores,” in Proceedings of the SPE Asia Pacific Oil and Gas Conference and Exhibition, pp. 86–93, Jakarta, Indonesia, April 2003.
52. P. Papanastasiou and A. Zervos, “Wellbore stability analysis: from linear elasticity to postbifurcation modeling,” International Journal of Geomechanics, vol. 4, no. 1, pp. 2–12, 2004.
53. E. Papamichos, J. Tronvoll, A. Skjærstein, and T. E. Unander, “Hole stability of Red Wildmoor sandstone under anisotropic stresses and sand production criterion,” Journal of Petroleum Science and Engineering, vol. 72, no. 1-2, pp. 78–92, 2010.
54. I. Vardoulakis, J. Sulem, and A. Guenot, “Borehole instabilities as bifurcation phenomena,” International Journal of Rock Mechanics and Mining Sciences and Geomechanics Abstracts, vol. 25, no. 3, pp. 159–170, 1988.
55. J. Tronvoll, E. Papamichos, and N. Kessler, “Perforation cavity stability: investigation of failure mechanisms,” in Proceedings of the International Symposium on Geotechnical Engineering of Hard Soils—Soft Rocks, pp. 1687–1693, Balkema, Rotterdam, The Netherlands, 1993.
56. P. J. Van Den Hoek, G. M. M. Hertogh, A. P. Kooijman, P. De Bree, C. J. Kenter, and E. Papamichos, “New concept of sand production prediction: theory and laboratory experiments,” SPE Drilling and Completion, vol. 15, no. 4, pp. 261–273, 2000.
57. P. C. Papanastasiou and I. G. Vardoulakis, “Bifurcation analysis of deep boreholes: II. Scale effect,” International Journal for Numerical and Analytical Methods in Geomechanics, vol. 13, no. 2, pp. 183–198, 1989.
58. B. Haimson, “Micromechanisms of borehole instability leading to breakouts in rocks,” International Journal of Rock Mechanics and Mining Sciences, vol. 44, no. 2, pp. 157–173, 2007.
59. R. Risnes and R. K. Bratli, “Stability and failure of sand arches,” in Proceedings of the 54th Annual Technical Conference and Exhibition of the Society of Petroleum Engineers of AIME, Las Vegas, Nev, USA, September 1979.
60. R. K. Bratli and R. Risnes, “Stability and failure of sand arches,” SPE Journal, vol. 21, no. 2, pp. 236–248, 1981.
61. J. S. Weingarten and T. K. Perkins, “Prediction of sand production in gas wells: methods and Gulf of Mexico case studies,” Journal of Petroleum Technology, vol. 47, no. 7, pp. 596–600, 1995.
62. J. Tronvoll, A. Skjaerstein, and E. Papamichos, “Sand production: mechanical failure or hydrodynamic erosion,” International Journal of Rock Mechanics and Mining Sciences and Geomechanics Abstracts, vol. 34, no. 3-4, pp. 291.e1–291.e17, 1997.
63. I. Vardoulakis, P. Papanastasiou, and M. Stavropoulou, “Sand erosion in axial flow conditions,” Transport in Porous Media, vol. 45, no. 2, pp. 267–281, 2001.
64. M. Stavropoulou, P. Papanastasiou, and I. Vardoulakis, “Coupled wellbore erosion and stability analysis,” International Journal for Numerical and Analytical Methods in Geomechanics, vol. 22, no. 9, pp. 749–769, 1998.
65. E. Papamichos and I. Vardoulakis, “Sand erosion with a porosity diffusion law,” Computers and Geotechnics, vol. 32, no. 1, pp. 47–58, 2005.
66. B. Haimson and H. Lee, “Borehole breakouts and compaction bands in two high-porosity sandstones,” International Journal of Rock Mechanics and Mining Sciences, vol. 41, no. 2, pp. 287–301, 2004.
67. E. Papamichos, “Sand production physical and experimental evidence,” Journal of Geomechanics in Energy Production, vol. 10, no. 6-7, pp. 803–816, 2006.
68. J. Tronvoll and E. Fjær, “Experimental study of sand production from perforation cavities,” International Journal of Rock Mechanics and Mining Sciences and Geomechanics Abstracts, vol. 31, no. 5, pp. 393–410, 1994.
69. J. Tronvoll, M. B. Dusseault, F. Sanfilippo, and F. J. Santarelli, “The tools of sand management,” in Proceedings of the SPE Annual Technical Conference and Exhibition, pp. 3101–3115, New Orleans, La, USA, October 2001.
70. M. F. Trentacoste, Evaluation of LS-DYNA Soil Material Model 147, Report no. FHWA-HRT-04-094, 2004.
71. P. Papanastasiou and A. Zervos, “Three-dimensional stress analysis of a wellbore with perforations and a fracture,” in Proceedings of SPE/ISRM Rock Mechanics in Petroleum Engineering (Eurock '98), pp. 347–355, Trondheim, Norway, July 1998.
72. F. J. Santarelli, E. Skomedal, P. Markestad, H. I. Berge, and H. Nasvig, “Sand production on water injectors: just how bad can it get?” in Proceedings of SPE/ISRM Rock Mechanics in Petroleum Engineering (Eurock '98), pp. 107–115, Trondheim, Norway, July 1998.
73. F. J. Santarelli, E. Skomedal, P. Markestad, H. I. Berge, and H. Nasvig, “Sand production on water injectors: how bad can it get?” SPE Drilling and Completion, vol. 15, no. 2, pp. 132–139, 2000.
74. F. J. Santarelli, F. Sanfilippo, J. Embry, M. White, and J. B. Turnbull, “The sanding mechanisms of water injectors and their quantification in terms of sand production: example of the Buzzard Field (UKCS),” in Proceedings of the SPE Annual Technical Conference and Exhibition, Denver, Colo, USA, October 2011.
75. A. Hayatdavoudi, “Formation sand liquefaction: a mechanism for explaining fines migration and well sanding,” in Proceedings of the SPE Mid-Continent Operations Symposium, Oklahoma City, Okla, USA, March 1999.
76. P. K. Robertson and C. E. Fear, “Liquefaction of sands and its evaluation,” in Proceedings of the 1st International Conference on Earthquake Geotechnical Engineering, 1997.
77. R. M. O'Connor, J. R. Torczynski, D. S. Preece, J. T. Klosek, and J. R. Williams, “Discrete element modeling of sand production,” International Journal of Rock Mechanics and Mining Sciences and Geomechanics Abstracts, vol. 34, no. 3-4, pp. 231.e1–231.e15, 1997.
78. R. P. Jensen and D. S. Preece, Modeling of Sand Production with Darcy’s Flow Coupled with Discrete Elements, OSTI, 2000.
79. L. Li, E. Papamichos, and P. Cerasi, “Investigation of sand production mechanisms using DEM with fluid flow,” in Proceedings of the International Symposium of the International Society for Rock Mechanics (Eurock '06), pp. 241–247, Liège, Belgium, May 2006.
80. L. Li and R. M. Holt, “Particle scale reservoir mechanics,” Oil and Gas Science and Technology, vol. 57, no. 5, pp. 525–538, 2002.
81. L. Y. G. Cheung, Micromechanics of Sand Production in Oil Wells [Ph.D. thesis], Imperial College London, 2010.
82. Z. Y. Zhou, A. B. Yu, and S. K. Choi, “Numerical simulation of the liquid-induced erosion in a weakly bonded sand assembly,” Powder Technology, vol. 211, no. 2-3, pp. 237–249, 2011.
83. P. A. Cundall, “A computer model for simulating progressive large scale movement in blocky rock systems,” in Proceedings of the Symposium of the International Society of Rock Mechanics, 1971.
84. D. O. Potyondy and P. A. Cundall, “A bonded-particle model for rock,” International Journal of Rock Mechanics and Mining Sciences, vol. 41, no. 8, pp. 1329–1364, 2004.
85. S. Thallak, L. Rothenburg, and M. Dusseault, “Hydraulic fracture (parting) simulation in granular assemblies using the discrete element method,” AOSTRA Journal of Research, vol. 6, pp. 141–153.
86. D. Chan and S. Tipthavonnukul, “Numerical simulation of granular particles movement in fluid flow,” International Journal of Nonlinear Sciences and Numerical Simulation, vol. 9, no. 3, pp. 229–248, 2008.
87. J. Yoon, “Application of experimental design and optimization to PFC model calibration in uniaxial compression simulation,” International Journal of Rock Mechanics and Mining Sciences, vol. 44, no. 6, pp. 871–889, 2007.
88. U. El Shamy and A. Elmekati, “An efficient combined DEM/FEM technique for soil-structure interaction problems,” in Proceedings of the International Foundation Congress and Equipment Expo, pp. 238–245, Orlando, Fla, USA, March 2009.
89. A. Elmekati and U. El Shamy, “A practical co-simulation approach for multiscale analysis of geotechnical systems,” Computers and Geotechnics, vol. 37, no. 4, pp. 494–503, 2010.
90. N. M. Azevedo and J. V. Lemos, “Hybrid discrete element/finite element method for fracture analysis,” Computer Methods in Applied Mechanics and Engineering, vol. 195, no. 33–36, pp. 4579–4593, 2006.
91. M. Zeghal and U. El Shamy, “A continuum-discrete hydromechanical analysis of granular deposit liquefaction,” International Journal for Numerical and Analytical Methods in Geomechanics, vol. 28, no. 14, pp. 1361–1383, 2004.
|
{"url":"http://www.hindawi.com/journals/jpe/2013/864981/","timestamp":"2014-04-17T10:06:53Z","content_type":null,"content_length":"145437","record_id":"<urn:uuid:7b9ad82b-4a8c-4402-bc3c-e933d4314fd7>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Middle Village Geometry Tutor
Find a Middle Village Geometry Tutor
...I scored highly on the GRE practice units, particularly on the Math portion, where I obtained 90% correct. I also found that strategies I employed for the verbal questions helped considerably,
for both antonyms and analogies. Finally, I excel in Writing, as I incorporate correct sentence structure with proper grammar, and create well-developed essays.
41 Subjects: including geometry, English, reading, chemistry
...I recently graduated from Yale University with an Intensive Bachelor of Science degree in Molecular, Cellular, and Developmental Biology with distinction in the major, and am a highly
experienced and patient tutor offering personalized, one-on-one instruction in the Manhattan area. Academics, le...
24 Subjects: including geometry, English, reading, Spanish
...Start your future today with the GED. I can help. More and more colleges are accepting ACT scores each year.
8 Subjects: including geometry, algebra 1, GED, SAT math
...I am an economic historian and am currently working on a number of papers for presentation at conferences. I am equally familiar with American and global history. I have taught chemistry
privately and in college for over thirty years and have BS and MS degrees in the subject, as well as R&D experience.
50 Subjects: including geometry, chemistry, physics, GRE
...I am a representation of my culture, my generation, and am consistently showing that I don't have to be a product of, nor a stereotype in, my society. Tutoring is a rare and intimate privilege
to give wisdom, guidance, and direction to students by way of authenticity, compassion, and intellectua...
23 Subjects: including geometry, reading, writing, English
|
{"url":"http://www.purplemath.com/middle_village_geometry_tutors.php","timestamp":"2014-04-16T19:29:45Z","content_type":null,"content_length":"24077","record_id":"<urn:uuid:271737d7-4e7a-49e2-94dd-33e1b9d1973c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Complex integral over a circle
To be honest I can do the problems but the Cauchy integral theorem still perplexes me. I understand the theorem (that the integral equals 0) fails if there is a "hole" (a singularity) inside the contour, and that the integral equals 0 otherwise. So I guess what I'm really trying to integrate is the pole? So the entire point is to find the integral around the pole?
I found this extremely helpful:
But what confused me was that when winding multiple times around the contour the equation becomes 1/(2*pi*i), which seems counterintuitive to me, since if the first integral is equal to 2*pi*i, then if I integrate twice it should simply be 4*pi*i.
The "derivative of f" was expressed as 1/(2pi*i)*(f(zo)/(z-zo)^2, but i thought i could just take the derivative of a complex number as i did a "regular" number...
I also have another problem related to this. Should I create another thread? (I am going to sleep for now so I'll post it here, as I suspect the answer is I shouldn't.)
1. Let C be the square with vertices z = ±2 and ±2i, traversed once in the positive sense.
etc etc etc.
I suspect I evaluate these just as I would the first problem. From how I understand the integral, as long as the pole is within the "area" I'm just going to be integrating around it anyway, so I can effectively ignore the region I'm supposedly integrating over and just treat it as though I'm integrating around the pole. The area, as far as I know, is just there to stipulate whether the pole is within it or not.
|
{"url":"http://www.physicsforums.com/showthread.php?p=3848917","timestamp":"2014-04-20T21:34:39Z","content_type":null,"content_length":"52503","record_id":"<urn:uuid:ced35887-99a4-44ab-ae34-2fc64314b6f1>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
|
need help on a very simple geometry problem
1. The problem statement, all variables and given/known data
A side of a regular hexagon is twice the square root of its apothem. Find the length of the apothem and the side.
2. Relevant equations
a formula for the special 30-60-90 triangle:
the side opposite the 90-degree angle is double the side opposite the 30-degree angle, and the side opposite the 60-degree angle is root 3 times the side opposite the 30-degree angle
3. The attempt at a solution
after drawing the pictures and doing the math, I got:
side: 2 times the square root of a, where a stands for the apothem (an unknown value)
apothem: square root of a times square root of three
In your second line, you say that "a stands for apothem" (You mean that a stands for the length of the apothem -- let's be precise!) but in the third line you say "apothem: square root of a times square root of three". Obviously a is the length of the apothem in that line; what is a?
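For completeness, here is one way to finish (a worked sketch in the same notation, with a the apothem length and s the side length): for a regular hexagon the apothem satisfies

$$a = \frac{s\sqrt{3}}{2}.$$

Substituting the given s = 2\sqrt{a} yields a = \sqrt{3}\,\sqrt{a}, so a^2 = 3a, hence a = 3 and s = 2\sqrt{3} \approx 3.46. As a check, the apothem of a regular hexagon with side 2\sqrt{3} is (2\sqrt{3}\cdot\sqrt{3})/2 = 3, and 2\sqrt{a} = 2\sqrt{3} is indeed the side.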
|
{"url":"http://www.physicsforums.com/showthread.php?p=1260138","timestamp":"2014-04-18T03:14:24Z","content_type":null,"content_length":"33146","record_id":"<urn:uuid:19ab5407-1455-4365-a39d-8ec332307235>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
|
histogram sort
Definition: An efficient 3-pass refinement of a bucket sort algorithm. The first pass counts the number of items for each bucket in an auxiliary array, and then makes a running total so each
auxiliary entry is the number of preceding items. The second pass puts each item in its proper bucket according to the auxiliary entry for the key of that item. The last pass sorts each bucket.
Also known as interpolation sort.
Generalization (I am a kind of ...)
bucket sort.
Specialization (... is a kind of me.)
counting sort.
See also American flag sort.
Note: A bucket sort uses fixed-size buckets. A histogram sort sets up buckets of exactly the right size in a first pass. A counting sort is a histogram sort with one bucket per possible key value.
The following note is due to Richard Harter, cri@tiac.net, http://www.tiac.net/users/cri/, 8 January 2001, and is used by permission.
The run time is effectively O(n log log n). Let S be the data set to be sorted, where n=|S|. R is an approximate rank function to sort the data into n bins. R has the following properties.
• R is an integer valued function into [0, n-1].
• 0 ≤ R(x) ≤ n-1 for x in S.
• For some x,y in S, R(x)=0 and R(y)=n-1.
• x < y implies R(x) ≤ R(y) for x,y in S.
Each bin then has, on average, 1 entry. Under some rather broad assumptions the number of entries in a bin will be Poisson distributed whence the observation that the sort is O(n log log n).
Let T be the final array for the sorted data. Allocate an auxiliary integer array H indexed 0 ... n-1. We make one pass through the data to count the number of items in each bin, recording the counts
in H. The array H is then converted into a cumulative array so each entry in H specifies the beginning bin position of the bin contents in T. We then make a second pass through the data. We copy each
item x in S from S to T at H(R(x)), then increment H(R(x)) so the next item in the bin goes in the next location in T. (The bin number R(x) could be saved in still another auxiliary array to trade
off memory for computation.)
For numeric data, there is a simple R function that works very well: let min, max be the minimum and maximum of S. Then R(x) = floor(n*(x - min)/(max-min)), with x = max clamped to bin n-1 so that R stays in [0, n-1].
This uses quite a bit of extra memory. For large data sets, there could be slow downs because of page faults. For large n it is more efficient to bound the number of bins.
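A minimal Python sketch of the three passes described above (the function and variable names are my own, and the rank function is the clamped numeric R noted above; this is an illustration, not a reference implementation):

def histogram_sort(data):
    """Histogram sort: count per bin, form running totals, place items, then sort bins."""
    n = len(data)
    if n < 2:
        return list(data)
    lo, hi = min(data), max(data)
    if lo == hi:
        return list(data)

    def rank(x):
        # Approximate rank into [0, n-1]; clamp so that x == hi lands in the last bin.
        return min(n - 1, int(n * (x - lo) / (hi - lo)))

    # Pass 1: count the items destined for each bin, then convert the counts into
    # running totals so counts[r] is the starting index of bin r in the output.
    counts = [0] * n
    for x in data:
        counts[rank(x)] += 1
    start = 0
    for r in range(n):
        counts[r], start = start, start + counts[r]

    # Pass 2: place each item at the next free slot of its bin.
    out = [None] * n
    for x in data:
        r = rank(x)
        out[counts[r]] = x
        counts[r] += 1

    # Pass 3: sort each (small) bin; because the bins are contiguous and already in
    # order, a single insertion-sort pass over the whole array finishes the job cheaply.
    for i in range(1, n):
        x, j = out[i], i - 1
        while j >= 0 and out[j] > x:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = x
    return out

For example, histogram_sort([0.3, 7, 2, 5.5, 1]) returns [0.3, 1, 2, 5.5, 7].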
Author: PEB
Entry modified 16 November 2009.
Cite this as:
Paul E. Black, "histogram sort", in Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 16 November 2009. (accessed TODAY) Available from: http://
|
{"url":"http://xlinux.nist.gov/dads/HTML/histogramSort.html","timestamp":"2014-04-18T05:31:21Z","content_type":null,"content_length":"5076","record_id":"<urn:uuid:1d5bde17-daf2-43e5-a466-f4d1e80d6465>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dealing with Missing Data
Another problem faced when collecting data is that some data may be missing. For example, in conducting a survey with ten questions, perhaps some of the people who take the survey don’t answer all
ten questions. In Identifying Outliers and Missing Data we show how to identify missing data using a supplemental data analysis tool provided in the Real Statistics Resource Pack.
A simple approach for dealing with missing data is to throw out all the data for any sample missing one or more data elements. One problem with this approach is that the sample size will be reduced.
This is particularly relevant when the reduced sample size is too small to obtain significant results in the analysis. In this case additional sample data elements may need to be collected. This
problem is bigger than might first be evident. E.g. if a questionnaire with 5 questions is randomly missing 10% of the data, then on average almost 60% of the sample will have at least one question unanswered.
Also it is often the case that the missing data is not randomly distributed. E.g. people filling out a long questionnaire may give up at some point and not answer any further questions, or they may
be offended or embarrassed by a particular question and choose not to answer it. These are characteristics that might be quite relevant to the analysis.
In general there are the following types of remedies for missing data:
• Delete the samples with any missing data elements
• Impute the value of the missing data
• Remove a variable (e.g. a particular question in the case of a questionnaire or survey) which has a high incidence of missing data, especially if there are other variables (e.g. questions) which
measure similar aspects of the characteristics being studied.
Deleting Missing Data
Of particular importance is the randomness of the missing data. E.g. suppose a lot of people didn’t answer question 5 but everyone answered question 7. If the frequency of the responses to question 7
changes significantly when samples which are missing responses to question 5 are dropped, then the missing data is not random, and so dropping samples can bias the results of the analysis. In this
case either another remedy should be employed or the analysis should be run twice: once with samples with missing data retained (e.g. by adding a “no response” for missing data) and once with these
samples dropped.
Missing data can be removed by using the following supplemental Excel functions found in the Real Statistics Resource Pack.
Real Statistics Functions:
DELBLANK(R1, s) – fills the highlighted range with the data in range R1 (by columns) omitting any empty cells
DELNonNum(R1, s) – fills the highlighted range with the data in range R1 (by columns) omitting any non-numeric cells
DELROWBLANK(R1, b, s) – fills the highlighted range with the data in range R1 omitting any row which has one or more empty cells; if b is True then the first row of R1 (presumably containing column
headings) is always copied (even if it contains an empty cell); this argument is optional and defaults to b = False.
DELROWNonNum(R1, b, s) – fills the highlighted range with the data in range R1 omitting any row which has one or more non-numeric cells; if b is True then the first row of R1 (presumably containing
column headings) is always is always copied (even if it contains a non-numeric cell); this argument is optional and defaults to b = False.
The string s is used as a filler in case the output range has more cells/rows than needed. This argument is optional and defaults to the error value #N/A. See Data Conversion and Reformatting for an
example of the use of these functions.
In addition there is the supplemental function CountFullRows(R1, b) where b = True (default) or False and
CountFullRows(R1, True) = the number of rows in range R1 which don’t have any empty cells
CountFullRows(R1, False) = the number of rows in range R1 which don’t have any non-numeric cells
There is also the related supplemental function CountPairs(R1, R2, b) where b = True (default) or False. Here we look at pairs of cells from R1 and R2: the ith cell in R1 is paired with the ith cell
in R2
CountPairs(R1, R2, True) = the number of pairs for which neither cell in the pair is empty
CountPairs(R1, R2, False) = the number of pairs for which neither cell in the pair is empty or non-numeric
Note that in standard Excel the equivalent of CountPairs(R1, R2, True) can be calculated with an ordinary counting formula, e.g. =COUNTIFS(R1,"<>",R2,"<>"), while CountPairs(R1, R2, False) can be calculated, for example, with =SUMPRODUCT(ISNUMBER(R1)*ISNUMBER(R2)).
To calculate the number of pairs with equal numeric entries, a formula such as =SUMPRODUCT((R1=R2)*ISNUMBER(R1)*ISNUMBER(R2)) can be used.
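For readers working outside Excel, a rough pandas analogue of these operations looks as follows (the data frame and column names are made up for illustration; these are not the Real Statistics functions themselves):

import pandas as pd

df = pd.DataFrame({"q1": [1.0, None, 3.0, 4.0],
                   "q2": [2.0, 5.0, None, 8.0],
                   "q3": [1.0, 1.0, 2.0, None]})

complete = df.dropna()                       # listwise deletion, like DELROWBLANK
n_full_rows = len(complete)                  # like CountFullRows(R1, True)
n_pairs = len(df[["q1", "q2"]].dropna())     # like CountPairs(q1, q2, True)
n_equal = int((df["q1"] == df["q2"]).sum())  # pairs with equal (non-missing) entries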
Example 1: Delete any missing data listwise (indicated by an empty cell) from the sample in A3:G22 in Figure 1.
Figure 4.10.1 – Listwise deletion of missing data
Since we want to delete any row which contains one or more empty cells (except the first row which contains column titles), we use the following array formula to produce the output in range I3:O22 of
Figure 1: =DELROWBLANK(A3:G22,TRUE).
Real Statistics Tools: The Real Statistics Resource Pack supplies the Reformatting a Data Range by Rows data analysis tool which provides easier-to-use versions of the supplemental DELROWBLANK and
DELROWNonNum functions described above.
We can also use the supplemental Reformatting a Data Range data analysis tool as substitutes for the DELBLANK and DELNonNum functions. We won’t demonstrate this tool here, but see Data Conversion and
Reformatting for more information about how to use that tool.
Example 2: Repeat Example 1 using the Reformatting a Data Range by Rows data analysis tool.
To use this data analysis tool press Ctrl-m and choose the Reformatting a Data Range by Rows option. A dialog box will appear as in Figure 2. Fill in the dialog box as indicated and click on OK. The
exact same output will appear as we saw previously (namely range I3:O22 of Figure 1).
Figure 2 – Dialog box for Reformat Data Range by Rows
The data analysis tool will output the same number of rows as in the input data range, but any extra rows would be filled in with the values #N/A. Since four rows had at least one empty cell, four
rows are deleted from the output (those for Arkansas, Colorado, Idaho and Indiana) and so the last four rows of the output need to be filled with #N/A.
Actually all the cells in the output range I3:O22 will contain the array formula =DELROWBLANK(A3:G22,True) and so if we change the value of cell B15 to say 10.2, the row for Idaho would now
automatically appear in the output and there would be one less row with values #N/A.
If we had entered an asterisk in the Filler field of Figure 2, then the output would be the same as we saw in Figure 1 except that this time all the cells in range I19:O22 would contain an asterisk
instead of #N/A.
If we had entered the number 0 in the Filler field then all the cells in the output range would contain the array formula =DELROWBLANK(A3:G22,True,””), although the values of all the cells in the
range I19:O22 would be empty. As before if we change the contents of cell B15 to 10.2, then the row for Idaho would appear in the output and only three rows with empty cells would appear. All the
cells in the output range would still have the same array formula, namely =DELROWBLANK(A3:G22,True,””).
If we had checked the Freeze output range size element then the data analysis tool would determine that four rows have missing data and so it would output a range with four fewer rows, namely the
range I3:O18. Although the output would be displayed exactly as in the case described in the previous paragraph, this time only the range I3:O18 would contain the formula =DELROWBLANK(A3:G22,True).
This time if cell B15 is changed to 10.2, then Idaho would be added to the output range, but since the output range only goes down to row 18, the last input row (that for Maine) would not be
displayed, which is probably not what we want.
In conclusion, the Freeze output range size option makes the output cleaner (since all the rows contain data), but should not be used if there is the possibility that some missing data may be added later.
Imputing the values for missing data
Some techniques for imputing values for missing data include:
• Substituting the missing data with another observation which is considered similar, either taken from another sample or from a previous study
• Using the mean of all the non-missing data elements for that variable. This might be acceptable in cases with a small number of missing data elements, but otherwise it can distort the
distribution of the data (e.g. by reducing the variance) or by lowering the observed correlations (see Basic Concepts of Correlation).
• Using regression techniques. In this approach regression (as described in Regression and Multiple Regression) is used to predict the value of the missing data element based on the relationship between that variable and other variables. This approach reinforces existing relationships and so makes it more likely that the analysis will characterize the sample and not the general population. (A small illustration of mean and regression imputation in code follows this list.)
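As a small, purely illustrative sketch of the mean and regression imputation ideas above (again in pandas with made-up data; this is not how the Real Statistics tools implement these methods):

import pandas as pd

df = pd.DataFrame({"x": [1.0, 2.0, 3.0, 4.0, 5.0],
                   "y": [2.1, 4.2, None, 8.1, 9.9]})

# Mean imputation: replace each missing y with the mean of the observed y values.
y_mean_imputed = df["y"].fillna(df["y"].mean())

# Regression imputation: fit y on x using the complete rows, then predict the
# missing y values from their x values.
obs = df.dropna()
slope = (((obs["x"] - obs["x"].mean()) * (obs["y"] - obs["y"].mean())).sum()
         / ((obs["x"] - obs["x"].mean()) ** 2).sum())
intercept = obs["y"].mean() - slope * obs["x"].mean()
y_reg_imputed = df["y"].fillna(intercept + slope * df["x"])

As the text notes, mean imputation tends to shrink the variance, while regression imputation reinforces the relationships already present in the data.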
|
{"url":"http://www.real-statistics.com/descriptive-statistics/missing-data/","timestamp":"2014-04-20T20:14:11Z","content_type":null,"content_length":"41608","record_id":"<urn:uuid:32b7ac77-8b35-467d-929f-0a7eecc66f35>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Yifan Yang
Department of Applied Mathematics, National Chiao Tung University, Hsinchu, Taiwan
Abstract: Let $\Delta(T)$ and $E(T)$ be the error terms in the classical Dirichlet divisor problem and in the asymptotic formula for the mean square of the Riemann zeta function on the critical
strip, respectively. We show that $\Delta(T)$ and $E(T)$ are asymptotic integral transforms of each other. We then use this integral representation of $\Delta(T)$ to give a new proof of a result of
M. Jutila.
Classification (MSC2000): 11M06, 11N37; 11L07
|
{"url":"http://www.emis.de/journals/PIMB/097/9.html","timestamp":"2014-04-18T15:50:47Z","content_type":null,"content_length":"4439","record_id":"<urn:uuid:f1ca8a5b-e8f2-42d8-ad67-f62213898250>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Formal Model of Objects
Objects in Larch/C++ are modeled using several traits. The main trait is TypedObj, which handles the translation between typed objects and values and the untyped objects and values used in the trait
State (see section 2.8.2 Formal Model of States). This model builds on the work of Wing [Wing87], Chen [Chen89], and Lerner [Lerner91], and most recently has benefited from discussions with Chalin and from his work [Chalin95].
The formal model of this trait (and its relation to the sorts in the trait State) may be explained with the help of the following (crudely-drawn) picture. This picture shows a sort T of abstract
values, with a representative element, tval. The trait function injectTVal maps this into the sort WithUnassigned[T], a sort that also includes the special value unassigned. The trait function
extractTVal is its (near) inverse. The sort Loc[T] of typed objects containing T values (or unassigned) has loc as a typical element. The sorts WithUnassigned[T] and Loc[T] in the trait TypedObj are
the typed counterparts of the sorts Value and Object in the trait State. The overloaded trait functions named widen map typed to untyped values and objects. Their inverses are the trait functions
named narrow. In each trait, the eval mapping takes a second argument which is a state, written as st in the picture.
trait TypedObj trait State
Loc[T] Object
!---------------! widen !-----------------------!
! ! --------------> ! !
! loc ! ! widen(loc) !
! ! narrow ! !
! ! <-------------- ! !
!---------------! !-----------------------!
/ !
/ |
/ |
/ |
/ eval(__, st) | eval(__, st)
/ |
/ |
/ |
/ !
/ !
T v WithUnassigned[T] v Value
!--------! injectTVal !------------! widen !-----------------------!
! ! -----------> ! ! --------> ! !
! tval ! !injectTVal( ! !widen(injectTVal(tval))!
! ! extractTVal ! tval)! narrow ! !
! ! <----------- ! ! <-------- ! !
!--------! ! - - - - - -! ! !
! unassigned ! ! widen(unassigned) !
!------------! !-----------------------!
A picture of the sorts in the traits TypedObj and State, and some of
the mappings between them. The second argument to eval is a state, st.
The TypedObj trait itself includes several other traits, and uses them to define the sort Loc[T]. The included traits will be explained individually below.
% @(#)$Id: TypedObj.lsl,v 1.29 1997/02/13 00:21:23 leavens Exp $
TypedObj(Loc, T): trait
includes State, WithUnassigned(T), WidenNarrow(Loc[T], Object),
WidenNarrow(WithUnassigned[T], Value), TypedObjEval(Loc, T),
AllocatedAssigned(Loc, T), ModifiesSemantics(Loc, T),
FreshSemantics(Loc, T), TrashesSemantics
sort Loc[T] generated by narrow
sort Loc[T] partitioned by widen
The conversions to and from typed and untyped versions of objects and values are defined by the two inclusions of the trait WidenNarrow given below.
% @(#)$Id: WidenNarrow.lsl,v 1.3 1997/02/13 00:21:25 leavens Exp $
% Maps between untyped and typed values.
% This could be used to describe any partially inverse pair of mappings.
WidenNarrow(Typed, Untyped): trait
widen: Typed -> Untyped
narrow: Untyped -> Typed
\forall t: Typed
narrow(widen(t)) == t;
\forall u: Untyped
narrow(widen(narrow(u))) == narrow(u);
The sort WithUnassigned[T] is specified by the following trait. (Those who are familiar with denotational semantics [Schmidt86] will recognize this as the "lift" of T, with unassigned used in place
of the usual notation for a bottom element. The mappings injectTVal and extractTVal are explicit conversions to and from this lifted set.)
% @(#)$Id: WithUnassigned.lsl,v 1.1 1995/11/06 05:12:17 leavens Exp $
WithUnassigned(T): trait
injectTVal: T -> WithUnassigned[T]
extractTVal: WithUnassigned[T] -> T
unassigned: -> WithUnassigned[T]
isUnassigned: WithUnassigned[T] -> Bool
sort WithUnassigned[T] generated by injectTVal, unassigned
sort WithUnassigned[T] partitioned by isUnassigned, extractTVal
\forall tval: T
extractTVal(injectTVal(tval)) == tval;
\forall tval: T
injectTVal(extractTVal(injectTVal(tval))) == injectTVal(tval);
isUnassigned, extractTVal
exempting extractTVal(unassigned)
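As an informal analogy only (this is not part of the Larch/C++ model, and the Python names below are ad hoc), the lifted sort WithUnassigned[T] behaves much like an optional value in an ordinary programming language:

from typing import Generic, TypeVar

T = TypeVar("T")

class WithUnassigned(Generic[T]):
    """Either holds a T value or is 'unassigned' (the lifted bottom element)."""
    _UNASSIGNED = object()

    def __init__(self, value=_UNASSIGNED):
        self._value = value

    @classmethod
    def inject(cls, tval):
        # analogue of injectTVal
        return cls(tval)

    def is_unassigned(self):
        # analogue of isUnassigned
        return self._value is WithUnassigned._UNASSIGNED

    def extract(self):
        # analogue of extractTVal; as in the trait, it is undefined (here: an error)
        # on the unassigned value, matching the exemption of extractTVal(unassigned)
        if self.is_unassigned():
            raise ValueError("extractTVal(unassigned) is not defined")
        return self._value

Here extract(inject(tval)) returns tval, mirroring the first equation of the trait.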
The trait TypedObjEval is defined below. Evaluation is, as in the picture above, defined by widening the typed object to an untyped object, using the untyped eval to get the untyped object's value,
and narrowing that value to a WithUnassigned[T] value, then extracting that to a value of type T.
% @(#)$Id: TypedObjEval.lsl,v 1.3 1995/11/10 06:35:44 leavens Exp $
TypedObjEval(Loc, T): trait
assumes State, WithUnassigned(T), WidenNarrow(Loc[T], Object),
WidenNarrow(WithUnassigned[T], Value)
eval: Loc[T], State -> T
\forall loc: Loc[T], st: State
eval(loc, st) == extractTVal(narrow(eval(widen(loc), st)));
eval: Loc[T], State -> T
exempting \forall loc: Loc[T], st: State, typs: Set[TYPE]
eval(loc, bottom), eval(loc, emptyState),
eval(loc, bind(st, widen(loc), widen(unassigned), typs))
The trait AllocatedAssigned defines notions of when a typed object is allocated in a state, and when it is assigned a well-defined value (i.e., is not unassigned). See section 6.2.2 Allocated and
Assigned for details.
The trait ModifiesSemantics defines trait functions that help give a semantics to the Larch/C++ modifies clause. See section 6.2.3 The Modifies Clause for details.
The trait FreshSemantics defines trait functions that help in giving the semantics of the Larch/C++ built-in lcpp-primary fresh. See section 6.3.1 Fresh for details.
The trait TrashesSemantics defines trait functions that help in giving the semantics of the Larch/C++ trashes-clause. See section 6.3.2 The Trashes Clause for details.
Objects in Larch/C++ come in two flavors, mutable and constant (immutable). Mutable objects include global variables and reference parameters. Constant objects include global variables declared using
the C++ cv-qualifier const.
|
{"url":"http://archives.cs.iastate.edu/documents/disk0/00/00/02/43/00000243-02/lcpp_19.html","timestamp":"2014-04-18T15:40:05Z","content_type":null,"content_length":"9980","record_id":"<urn:uuid:3921be79-0e66-40f1-aa3c-00d2cf8aef4f>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Diff-n-Diff of estimates from Structural Equation Models (SEM)
Structural equation modeling (SEM) is generally the practice of estimating latent factor(s) from observable measures. It is similar to factor analysis in that it typically reduces a problem with
potentially many dependent variables to a lower dimension which we hope is interpretable.
Let’s imagine a relatively simple model where this might be useful. We have five responses which are measures of 2 latent traits we are interested in. For half of our sample we administer an
intervention with the hopes that it will increase our latent traits. We administer a pretest measuring the 5 observables before the treatment and a posttest after measuring the 5 observables after.
Stata Code
set obs 4000
gen id = _n
gen eta1 = rnormal()
gen eta2 = rnormal()
* Generate 5 irrelevant factors that might affect each of the
* different responses on the pretest
gen f1 = rnormal()
gen f2 = rnormal()
gen f3 = rnormal()
gen f4 = rnormal()
gen f5 = rnormal()
* Now let's apply the treatment
expand 2, gen(t) // double our data
gen treat=0
replace treat=1 if ((id<=_N/4)&(t==1))
* Now let's generate our changes in etas
replace eta1 = eta1 + treat*1 + t*.5
replace eta2 = eta2 + treat*.5 + t*1
* Finally we generate out pre and post test responses
gen v1 = f1*.8 + eta1*1 + eta2*.4 // eta1 has more loading on
gen v2 = f2*1.5 + eta1*1 + eta2*.3 // the first few questions
gen v3 = f3*2 + eta1*1 + eta2*1
gen v4 = f4*1 + eta1*.2 + eta2*1 // eta2 has more loading on
gen v5 = f5*1 + eta2*1 // the last few questions
* END Simulation
* Begin Estimation
sem (L1 -> v1 v2 v3 v4 v5) (L2 -> v1 v2 v3 v4 v5) if t==0
predict L1 L2, latent
sem (L1 -> v1 v2 v3 v4 v5) (L2 -> v1 v2 v3 v4 v5) if t==1
predict L12 L22, latent
replace L1 = L12 if t==1
replace L2 = L22 if t==1
* Now let's see if our latent predicted factors are correlated with our true factors.
corr eta1 eta2 L1 L2
* We can see already that we are having problems.
* I am no expert on SEM so I don't really know what is going wrong except
* that eta1 is reasonably highly correlated with L1 and L2 and
* eta2 is less highly correlated with L1 and L2 equally each
* individually, which is not what we want.
* Well too late to stop now. Let's do our diff in diff estimation.
* In this case we can easily accomplish it by generating one more variable.
* Let's do a seemingly unrelated regression form to make a single joint estimator.
sureg (L1 t id treat) (L2 t id treat)
* Now we have estimated the effect of the treatment given a control for the
* time effect and individual differences. Can we be sure of our results?
* Not quite. We are treating L1 and L2 like observed variables rather than
* random variables we estimated. We need to adjust our standard errors to
* take this into account. The easiest way, though computationally intensive, is
* to use a bootstrap routine.
* This is how it is done. Same as above but we will use temporary variables.
cap program drop SEMdnd
program define SEMdnd
tempvar L1 L2 L12 L22
sem (L1 -> v1 v2 v3 v4 v5) (L2 -> v1 v2 v3 v4 v5) if t==0
predict `L1' `L2', latent
sem (L1 -> v1 v2 v3 v4 v5) (L2 -> v1 v2 v3 v4 v5) if t==1
predict `L12' `L22', latent
replace `L1' = `L12' if t==1
replace `L2' = `L22' if t==1
sureg (`L1' t id treat) (`L2' t id treat)
drop `L1' `L2' `L12' `L22'
SEMdnd // Looking good
* This should do it, though I don't have the machine time available to wait
* for it to finish.
bs , rep(200) cluster(id): SEMdnd
Formatted By EconometricsbySimulation.com
|
{"url":"http://www.econometricsbysimulation.com/2013/07/diff-n-diff-of-estimates-from.html","timestamp":"2014-04-18T23:17:26Z","content_type":null,"content_length":"188683","record_id":"<urn:uuid:1640d0a6-9b7b-4932-ae04-730a1c6a2851>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Topos Theory in a Nutshell
John Baez
April 12, 2006
Okay, you wanna know what a topos is? First I'll give you a hand-wavy vague explanation, then an actual definition, then a few consequences of this definition, and then some examples.
I'll warn you: despite Chris Isham's work applying topos theory to the interpretation of quantum mechanics, and Anders Kock and Bill Lawvere's work applying it to differential geometry and mechanics,
topos theory hasn't really caught on among physicists yet. Thus, the main reason to learn about it is not to quickly solve some specific physics problems, but to broaden our horizons and break out of
the box that traditional mathematics, based on set theory, imposes on our thinking.
1. Hand-Wavy Vague Explanation
Around 1963, Lawvere decided to figure out new foundations for mathematics, based on category theory. His idea was to figure out what was so great about sets, strictly from the category-theoretic
point of view. This is an interesting project, since category theory is all about objects and morphisms. For the category of sets, this means SETS and FUNCTIONS. Of course, the usual axioms for set
theory are all about SETS and MEMBERSHIP. Thus analyzing set theory from the category-theoretic viewpoint forces a radical change of viewpoint, which downplays membership and emphasizes functions.
Even earlier, this same change of viewpoint was also becoming important in algebraic geometry, thanks to the work of Grothendieck on the Weil conjectures. So topos theory can be thought of as a
merger of ideas from geometry and logic - hence the title of this book, which is an excellent introduction to topos theory, though not the easiest one:
• Saunders Mac Lane and Ieke Moerdijk, Sheaves in Geometry and Logic: a First Introduction to Topos Theory, Springer, New York, 1992.
After a bunch of work, Lawvere and others invented the concept of a "topos", which is a category with certain extra properties that make it a lot like the category of sets. There are lots of different
topoi; you can do a lot of the same mathematics in all of them; but there are also lots of differences between them: for example, the axiom of choice need not hold in a topos, and the law of the
excluded middle ("either P or not(P)") need not hold. Some but not all topoi contain a "natural numbers object", which plays the role of the natural numbers.
It's good to prove theorems about topoi in general, so that you don't need to keep proving the same kind of theorem over and over again, once for each topos you encounter. This is especially true if
you do a lot of heavy-duty mathematics as part of your daily work.
2. Definition
There are various equivalent definitions of a topos, some more terse than others. Here is a rather inefficient one:
A topos is a category with:
A) finite limits and colimits,
B) exponentials,
C) a subobject classifier.
Short and sweet! But it could be made even shorter.
3. Some Consequences of the Definition
Unfortunately, if you don't know some category theory, the above definition will be mysterious and will require a further sequence of definitions to bring it back to the basic concepts of category
theory - object, morphism, composition, identity. Instead of doing all that, let me say a bit about what these items A)-C) amount to in the category of sets:
A) says that there are:
• an initial object (an object like the empty set)
• a terminal object (an object like a set with one element)
• binary coproducts (something like the disjoint union of two sets)
• binary products (something like the Cartesian product of two sets)
• equalizers (something like the subset of X consisting of all elements x such that f(x) = g(x), where f,g: X → Y)
• coequalizers (something like the quotient set of X where two elements f(y) and g(y) are identified, where f,g: Y → X)
In fact A) is equivalent to all this stuff. However, I should emphasize that A) says all this in an elegant unified way; it's a theorem that this elegant way is the same as all the crud I just listed.
B) says that for any objects x and y, there is an object y^x, called an "exponential", which acts like "the set of functions from x to y".
C) says that there is an object called the "subobject classifier" Ω, which acts like {0,1}, in that functions from any set x into {0,1} are secretly the same as subsets of x. You can think of Ω as
the replacement for the usual boolean "truth values" that we work with when doing logic in the category of sets.
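The correspondence in C) is easy to play with concretely. Here is a small Python sketch (not part of the original text; the function names are purely illustrative) showing how maps from a finite set x into {0,1} match up with subsets of x — the role Ω plays in the topos Set.
# Sketch: in Set, maps x -> {0,1} correspond to subsets of x.
# The two-element set {0,1} plays the role of the subobject classifier Omega.
def characteristic(subset, x):
    """Return the map chi: x -> {0,1} classifying `subset` inside `x`."""
    return {elem: 1 if elem in subset else 0 for elem in x}
def subset_of(chi):
    """Recover the subset classified by a map chi: x -> {0,1}."""
    return {elem for elem, value in chi.items() if value == 1}
x = {"a", "b", "c"}
s = {"a", "c"}
chi = characteristic(s, x)
assert subset_of(chi) == s   # the subset and its characteristic map carry the same data
print(chi)                   # e.g. {'a': 1, 'b': 0, 'c': 1}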
Learning more about all these concepts is probably the best use of your time if you want to learn a little bit of topos theory. Even if you can't remember what a topos is, these concepts can help
you become a stronger mathematician or mathematical physicist!
4. Examples
Suppose you're an old fuddy-duddy. Then you want to work in the topos Set, where the objects are sets and the morphisms are functions.
Suppose you know the symmetry group of the universe, G. And suppose you only want to work with sets on which this symmetry group acts, and functions which are compatible with this group action. Then
you want to work in the topos G-Set.
Suppose you have a topological space X that you really like. Then you might want to work in the topos of presheaves on X, or the topos of sheaves on X. Sheaves are important in twistor theory and other
applications of algebraic geometry and topology to physics.
Generalizing the last two examples, you might prefer to work in the topos of presheaves on an arbitrary category C, also known as hom(C^op, Set).
For example, if C = Δ (the category of finite totally ordered sets), a presheaf on Δ is a simplicial set. Algebraic topologists love to work with these, and physicists need more and more algebraic
topology these days, so as we grow up, eventually it pays to learn how to do algebraic topology using the category of simplicial sets, hom(Δ^op, Set).
Or, you might like to work in the topos of sheaves on a topological space - or even on a "site", which is a category equipped with something like a topology. These ideas were invented by Alexander
Grothendieck as part of his strategy for proving the Weil conjectures. In fact, this is how topos theory got started. And the power of these ideas continues to grow. For example, in 2002, Vladimir
Voevodsky won the Fields medal for cracking a famous problem called Milnor's Conjecture with the help of "simplicial sheaves". These are like simplicial sets, but with sets replaced by sheaves on a
site. Again, they form a topos. Zounds!
But if all this sounds too terrifying, never mind - there are also examples with a more "foundational" flavor:
Suppose you're a finitist and you only want to work with finite sets and functions between them. Then you want to work in the topos FinSet.
Suppose you're a constructivist and you only want to work with "effectively constructible" sets and "effectively computable" functions. Then you want to work in the "effective topos" developed by
Martin Hyland.
Suppose you like doing calculus with infinitesimals, the way physicists do all the time - but you want to do it rigorously. Then you want to work in the "smooth topos" developed by Lawvere and Anders Kock.
Or suppose you're very concerned with the time of day, and you want to work with time-dependent sets and time-dependent functions between them. Then there's a topos for you - I don't know a spiffy
name for it, but it exists: an object gives you a set S(t) for each time t, and a morphism gives you a function f(t): S(t) → T(t) for each time t. This too gives a topos!
If you want to learn more about topos theory, this is the easiest place to start:
• F. William Lawvere and Steve Schanuel, Conceptual Mathematics: A First Introduction to Categories, Cambridge U. Press, Cambridge, 1997.
It may seem almost childish at first, but it gradually creeps up on you. Schanuel has told me that you must do the exercises - if you don't, at some point the book will suddenly switch from being too
easy to being way too hard! If you stick with it, by the end you will have all the basic concepts from topos theory under your belt, almost subconsciously.
After that, try this one:
• F. William Lawvere and Robert Rosebrugh, Sets for Mathematics, Cambridge U. Press, Cambridge, 2003.
This is a great introduction to category theory via the topos of sets: it describes ordinary set theory in topos-theoretic terms, making it clear which axioms will be dropped when we go to more
general topoi, and why. It goes a lot further than the previous book, and you need some more sophistication to follow it, but it's still written for the beginner.
I got a lot out of the following book, but many toposophers complain that it's not substantial enough - it shows how topoi illuminate concepts from logic, but it doesn't show you can do lots of cool
stuff with topoi. Perhaps it's been supplanted by Sets for Mathematics, but you should definitely take a look at it if you can find it:
Don't be scared by the title: it starts at the beginning and explains categories before going on to topoi and their relation to logic.
When you want to dig deeper, try this:
• Colin McLarty, Elementary Categories, Elementary Toposes, Clarendon Press, Oxford, 1995.
It's still an introductory text, but of a more muscular sort than those listed above. McLarty is a philosopher by profession, but this is very much a math book.
To dig deeper still, try Mac Lane and Moerdijk's book mentioned above. And after that... well, let's not rush this! For example, this classic is now available for free online:
but it's advanced enough to make any beginner run away screaming! These books are bound to have a similar effect:
• Peter Johnstone, Topos Theory, London Mathematical Society Monographs 10, Academic Press, 1977.
• Peter Johnstone, Sketches of an Elephant: a Topos Theory Compendium, Oxford U. Press, Oxford. Volume 1, comprising Part A: Toposes as Categories, and Part B: 2-categorical Aspects of Topos
Theory, 720 pages, published in 2002. Volume 2, comprising Part C: Toposes as Spaces, and Part D: Toposes as Theories, 880 pages, published in 2002. Volume 3, comprising Part E: Homotopy and
Cohomology, and Part F: Toposes as Mathematical Universes, in preparation.
... but once you get deeper into topos theory, you'll see they contain a massive hoard of wisdom. I'm trying to read them now. McLarty has said that you can tell you really understand topoi if you
can follow Johnstone's classic Topos Theory. It's long been the key text on the subject, but as a referee of his new trilogy wrote, it was "far too hard to read, and not for the faint-hearted". His
Sketches of an Elephant spend more time explaining things, but they're so packed with detailed information that nobody unfamiliar with topos theory would have a chance of seeing the forest for the
trees. Also, they assume a fair amount of category theory. But they're great!
Mathematics is not the rigid and rigidity-producing schema that the layman thinks it is; rather, in it we find ourselves at that meeting point of constraint and freedom that is the very essence of
human nature. - Hermann Weyl
© 2006 John Baez
|
{"url":"http://www.math.ucr.edu/home/baez/topos.html","timestamp":"2014-04-18T23:16:17Z","content_type":null,"content_length":"13334","record_id":"<urn:uuid:4062e6f4-f823-4fe7-b488-e882d128f8b6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Continuity of the cone spectral radius
Lemmens, Bas and Nussbaum, R. (2013) Continuity of the cone spectral radius. Proceedings of the American Mathematical Society, 141 . pp. 2741-2754. ISSN 0002-9939. (In press) (The full text of this
publication is not available from this repository)
This paper concerns the question whether the cone spectral radius of a continuous compact order-preserving homogeneous map on a closed cone in a Banach space depends continuously on the map. Using the
fixed point index we show that if there exist points not in the cone spectrum arbitrarily close to the cone spectral radius, then the cone spectral radius is continuous. An example is presented
showing that continuity may fail, if this condition does not hold. We also analyze the cone spectrum of continuous order-preserving homogeneous maps on finite dimensional closed cones. In particular,
we prove that for each polyhedral cone with m faces, the cone spectrum contains at most m-1 elements, and this upper bound is sharp for each polyhedral cone. Moreover, for each non-polyhedral cone,
there exist maps whose cone spectrum contains a countably infinite number of distinct points.
|
{"url":"http://kar.kent.ac.uk/28465/","timestamp":"2014-04-19T04:37:39Z","content_type":null,"content_length":"19598","record_id":"<urn:uuid:eff45450-d6e1-4530-8f86-1dc0714f3e58>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Larkspur, CA Calculus Tutor
Find a Larkspur, CA Calculus Tutor
...To tutor MATLAB programming I will create a series of exercises designed to show the student the specific tools that they will need for their desired application. I am a trained engineer, with
an M.S. from UC Berkeley, and a B.S. from the University of Illinois at Urbana-Champaign. I have gradu...
15 Subjects: including calculus, Spanish, geometry, ESL/ESOL
...If you wonder if any of these questions relate to your child, have you considered a math tutor? I can help. I will help your child fulfill graduation requirements.
12 Subjects: including calculus, geometry, statistics, algebra 2
...I spent most of the previous school year working with a 7th grader to raise her overall grade in prealgebra from a C. After approximately two months of tutoring, she began receiving A's on her
exams, and, by the end of the year, she was able to raise her cumulative grade to an A-. Because of my ...
29 Subjects: including calculus, English, physics, French
...I have a bachelor's degree in Physics from U.C. Berkeley. I taught Math and Physics at the Orinda Academy for 6 years.
12 Subjects: including calculus, chemistry, physics, geometry
...I have several years of experience tutoring in a wide variety of subjects and all ages, from small kids to junior high to high school, and kids with learning disabilities. I am also available
to tutor adults who are preparing for the GRE, LSAT, or wish to learn a second language. I'm fluent in ...
48 Subjects: including calculus, reading, English, French
Related Larkspur, CA Tutors
Larkspur, CA Accounting Tutors
Larkspur, CA ACT Tutors
Larkspur, CA Algebra Tutors
Larkspur, CA Algebra 2 Tutors
Larkspur, CA Calculus Tutors
Larkspur, CA Geometry Tutors
Larkspur, CA Math Tutors
Larkspur, CA Prealgebra Tutors
Larkspur, CA Precalculus Tutors
Larkspur, CA SAT Tutors
Larkspur, CA SAT Math Tutors
Larkspur, CA Science Tutors
Larkspur, CA Statistics Tutors
Larkspur, CA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/larkspur_ca_calculus_tutors.php","timestamp":"2014-04-20T01:50:33Z","content_type":null,"content_length":"23820","record_id":"<urn:uuid:5b06f9a1-698d-4aeb-a3a4-dd343d261497>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Financial Derivatives/Basic Derivatives Contracts
From Wikibooks, open books for an open world
Spot markets allow the purchase and sale of an asset today. By contrast a forward contract specifies the price at which an asset can be purchased or sold at some future date. Although a forward
contract is classified as a derivative in many markets it is difficult to distinguish between the underlying and the forward contract. Large trading volumes in OTC forwards can in fact make them more
significant than spot markets.
A forward contract does not require upfront payment. It is simply the purchase or sale of an asset at some future date at a fixed price (the forward price). Therefore the assumption is that the
forward price reflects the value of this asset on this date. If this assumption is based on a market view, characterising a forward contract as a derivative is misleading.
The primary reason for the classification of a forward contract as a derivative is that in many cases its price can be derived through a no-arbitrage argument that relates the forward price of an
asset to its spot price. For assets like oil this is not possible; given the spot price of a barrel of oil it is not possible to construct an arbitrage argument that relates it to the forward price.
In the oil markets forwards or futures are effectively the underlying and cannot be understood as derivatives. In these markets the forward price of oil is similar in nature to the price of a stock:
it reflects the current consensus of the market and has nothing to do with risk-neutral valuation.
In financial markets forwards can be determined through a no-arbitrage argument. Consider for example a forward on the USD vs EUR exchange rate. If today one euro can be exchanged for 1.3 dollars (
$FX_{spot}$) then in order to determine the forward exchange rate one year from now we can look at the following set of trades,
• We buy a one year forward that guarantees an exchange rate of $FX_{one year}$ dollars per euro.
• We borrow one dollar today.
• We exchange it for (1/1.3) euros and invest this amount in a deposit account.
• After one year we withdraw the principal and the interest earned and exchange them into dollars at $FX_{one year}$.
The net cashflow of this trade at expiry is,
$-(1+r_{USD}) + \frac{1}{FX_{spot}} (1+r_{EUR}) FX_{one year}$
In the absence of arbitrage opportunities the net cashflow of this trade should be zero and therefore,
$FX_{one year}=FX_{spot} \frac{(1+r_{USD})}{(1+r_{EUR})}$
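As a quick check of the covered-interest argument above, here is a small Python sketch using the illustrative numbers (1.3 dollars per euro and made-up deposit rates): it computes the forward from the no-arbitrage formula and verifies that the round-trip cashflow nets to zero.
# No-arbitrage FX forward: FX_forward = FX_spot * (1 + r_usd) / (1 + r_eur)
# The rates below are illustrative, not market data.
fx_spot = 1.30   # dollars per euro today
r_usd = 0.05     # one-year USD deposit rate
r_eur = 0.03     # one-year EUR deposit rate
fx_forward = fx_spot * (1 + r_usd) / (1 + r_eur)
# Check: borrow USD, convert to EUR, deposit, convert back at the forward
# rate, repay the USD loan -- the net cashflow should be zero.
net_cashflow = -(1 + r_usd) + (1 / fx_spot) * (1 + r_eur) * fx_forward
print(f"forward = {fx_forward:.4f}, net arbitrage cashflow = {net_cashflow:.2e}")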
Another example is a forward contract on a zero coupon one year bond, one year from now. Given the price of a one year bond $P_{1 year}$ and a two year bond $P_{2 year}$ we look at the following set
of trades,
• Sell a one year zero coupon bond one year from now at forward price $P_{1,1}$.
• Buy a one year zero coupon bond today.
• Sell a two year zero coupon bond today.
Since $P_{1 year}>P_{2 year}$ we must borrow the difference. After one year we receive $1 from the one year bond and pay interest and principal on the amount borrowed. The two year bond has one year
to maturity and we transfer it to the buyer of the forward in return for $P_{1,1}$. Therefore the net cashflow in one year is,
$-(P_{1 year}-P_{2 year})(1+r_{1 year})+1+ P_{1,1}$
In the absence of arbitrage opportunities this cashflow must be zero. Since,
$P_{1 year}= \frac{1}{(1+r_{1 year})}$
we conclude that,
$P_{1,1}=P_{2 year}/P_{1 year}$
It is interesting to note that the formula,
$P_{1 year}= \frac{1}{(1+r_{1 year})}$
is based on a "no-arbitrage" argument itself and the one year bond can be viewed as the "forward contract" for one dollar received in one year. Given the value of $r_{1 year}$, if the price of the
one year bond was different from $1/(1+r_{1 year})$ one could sell a one-year bond at a price $P^* > P_{1 year}$. At expiry one dollar would be paid to the buyer of the bond but since the proceeds
from the sale would have earned $P^* (1+r_{1 year})$ they would cover this payment and leave a clear profit. Only if $P^* = 1/(1+r_{1 year}) = P_{1 year}$ the condition of no arbitrage holds.
The main features of a forward contract are:
• It is an agreement between two parties to buy or sell an asset at a pre-agreed future point in time.
• The buying or selling price of the asset is determined at the present time.
• The transfer or delivery of the asset takes place in the agreed future period.
• Any terms of the contract can be negotiated (not standardized) between the parties involved in the forward contract.
• Transactions in forward contracts are not transparent.
• The difference between the spot and the forward price is the forward premium or forward discount.
• If the price of the stock increases in the future, the investor (buyer) gains and the seller bears a loss.
Futures contracts, like forward contracts, specify the delivery of an asset at some future date. Futures contracts, unlike forward contracts,
1. Require the buyer or the seller of the futures contract to post margin.
2. Have minimum margin requirements; these requirements are achieved through a margin call.
3. Use the process of mark-to-market.
These three requirements in practice are not unique to futures contracts. The best way to understand them is by looking at a specific futures contract.
The corn futures contract trades at the Chicago Board of Trade (CBOT). The specifications of the contract are very strict and require the delivery of "no. 2 yellow" corn; if other grades of corn are
delivered instead the price paid is adjusted [1]. The contract size can be in multiples of 5,000 bushels of corn. Futures can be purchased for delivery of corn in months December, March, May, July
and September only. Trading this contract ceases on the business day nearest to the 15th calendar date of the delivery month. Delivery takes place two business days after the 15th calendar date of
the delivery month.
Assume that one lot (5,000 bushels) of the Jul-07 contract was bought at 418 cents/bushel on 24th January 2007. The exchange would require the buyer to post initial margin of $900. If the buyer does
not post this amount of money in her account with the exchange, her order cannot be executed. For this contract the maintenance margin is the same; during the life of this futures contract the
balance of the account cannot go below this level; if for any reason the balance of the account falls below the maintenance margin, the buyer of this contract will receive a margin call.
On the date on which the trade was executed the mark-to-market of the futures contract is zero. Assume that on the next trading date, the settlement price of the futures contract is 418 3/4 cents/
bushel (settlement price is the price traded for a futures contract at the close of the trading session). The mark-to-market of the Jul-07 corn futures is,
$MtM = 5{,}000 \times (418.75 - 418)\ \text{cents} = \$37.50$
The balance on the buyer's account will now be $937.50. The account is like a normal deposit account and earns interest on its balance.
If the market price of the Jul-07 corn contract drops in the following day, the mark-to-market could drop from $37.50 to $12.50. In this case $25 are withdrawn from the buyer's account and the
balance is now $912.50.
If on the last trading date of this contract (13th July 2007) the settlement price is 420 1/4 cents/bushel then the mark-to-market is $112.50. The final balance of the buyer's account is $1012.50
plus interest earned. Since the corn that will be delivered on the 17th July 2007 is worth $21,012.50, the buyer will pay this amount to the clearing house. The clearing house acts as counterparty in
the transaction between the corn producer and the buyer and makes sure payments are made and corn is delivered to the warehouse nominated by the buyer.
Since the trader has earned $112.50 (plus interest) in effect the net payment for delivery of corn is $20,900. This is equivalent to paying 418 cents/bushel on the corn delivered. The futures
contract has therefore enabled the buyer to purchase corn at the original price of 418 cents/bushel and hedge against price changes.
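The daily mark-to-market mechanics above are easy to tabulate. This Python sketch replays the illustrative corn numbers (5,000 bushels bought at 418 cents/bushel); interest earned on the margin account is ignored for simplicity, and the intermediate settlement prices are the ones used in the example.
# Mark-to-market of a long corn futures position (margin interest ignored).
contract_size = 5_000                     # bushels
entry_price = 418.00                      # cents per bushel
settlements = [418.75, 418.25, 420.25]    # illustrative daily settlement prices
balance = 900.0                           # initial margin, in dollars
prev = entry_price
for price in settlements:
    mtm_change = contract_size * (price - prev) / 100.0   # cents -> dollars
    balance += mtm_change
    prev = price
delivery_payment = contract_size * settlements[-1] / 100.0
effective_cost = delivery_payment - (balance - 900.0)     # subtract futures gains
print(f"final margin balance: ${balance:,.2f}")            # $1,012.50
print(f"effective cost of corn: ${effective_cost:,.2f}")   # $20,900.00 = 418 cents/bushel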
In order to compare the price of a forward contract $F_0$ and the price of a futures contract $\Phi_0$ we look at the following set of trades:
• We sell a forward contract to deliver a specific quantity of corn at some future date for price $F_0$.
• For dates $i=0,1,2, ... , N-1$ we purchase a quantity $q_i$ of the futures contract so that the following conditions are satisfied:
□ $q_0 (1+r_1)^{N-1} =1$
□ $(q_0+q_1) (1+r_2)^{N-2}=1$
□ ...
□ $(q_0+q_1+...+q_{N-1}) (1+r_{N-1})=1$
where $r_i$ is the daily interest rate applicable for period starting on date $i$ and ending on date $N$. This set of equations can be solved recursively. The value of the margin account on date $N$
will be,
$\sum_{i=1}^{N-1} \left( \sum_{j=0}^{i-1} q_j \right) (1+r_i)^{N-i} [\Phi_i - \Phi_{i-1}]$
To understand the last equation, we know that on date $i$ the total quantity of futures purchased is $q_0+q_1+...+q_{i-1}$. By the end of date $i$ the mark-to-market change is equal to $
(q_0+q_1+...+q_{i-1})(\Phi_{i}-\Phi_{i-1})$. Depending on the direction of the change $\Phi_{i}-\Phi_{i-1}$ this is a gain or a loss and earns or requires the payment of principal plus interest at
the expiry date of the contract.
Given the conditions that give rise to the solutions for $q_i$, the last equation is equal to $\Phi_N - \Phi_0$. Since at expiry the price of the futures is equal to the spot price of the asset and
therefore $F_N = \Phi_N$, if $\Phi_0$ is different from $F_0$ a risk-free profit can be generated. Note that there is no cost in entering into the series of futures contracts and depending on the
sign of the difference $F_0 - \Phi_0$ the strategy can be reversed. Therefore the forward price must be equal to the futures price.
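The recursion for the quantities $q_i$ can be carried out numerically. The following Python sketch assumes a flat, deterministic daily rate and illustrative futures prices, and checks that the tailed futures position pays exactly $\Phi_N - \Phi_0$ at expiry; the bookkeeping follows the conditions above, compounding each day's mark-to-market to the expiry date.
# Tailing a futures position so its terminal payoff equals Phi_N - Phi_0.
# Assumes a flat deterministic daily rate r; the prices are illustrative.
r = 0.0002                                   # daily interest rate
phi = [100.0, 100.4, 99.8, 100.9, 101.5]     # futures prices Phi_0 .. Phi_N
N = len(phi) - 1
# Solve (q_0 + ... + q_{i}) * (1+r)^(N-1-i) = 1 recursively for i = 0..N-1.
cum = 0.0
q = []
for i in range(N):
    target = (1 + r) ** -(N - 1 - i)
    q.append(target - cum)
    cum = target
# Accumulate the margin account: each day's gain/loss earns interest to date N.
terminal = 0.0
for i in range(1, N + 1):
    held = sum(q[:i])                        # position held during day i
    terminal += held * (1 + r) ** (N - i) * (phi[i] - phi[i - 1])
print(f"terminal margin value: {terminal:.6f}")
print(f"Phi_N - Phi_0:         {phi[-1] - phi[0]:.6f}")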
The strategy used in this analysis assumes that when we purchase an additional quantity $q_i$ of the futures contract we know the interest rate for the period $i+1$ to $N$. Since in practice the
actual value of the interest rate is not known, assume that we can lock in a forward rate. However since we cannot predict the change in the mark-to-market of the futures contract in the period $i$ to
$i+1$ we do not know the notional amount we must purchase.
Assume that on date $i$ we make the assumption that there will be no change in the mark-to-market of the futures and therefore there is no need to lock in a forward rate for the period $i+1$ to $N$.
Since the most likely scenario is that we will be wrong, we will have to borrow or deposit the actual change in the mark-to-market at the spot rate for the period $r_{i+1}$.
As long as the error in our estimate of the mark-to-market change is independent of the spot rate we can expect that the costs/benefits will balance to zero. But if the mark-to-market change of the
futures contract is a function of spot rate the costs/benefits will not balance to zero and the futures strategy described above will not be able to replicate the payoff of the forward. We conclude
that when the futures contract is a function of the interest rate the futures price will not be equal to the forward price.
Another exception occurs when the futures price can change by large amounts from one date to the next. The term "large amounts" here means that a one day move accounts for a large percentage of the
difference between $\Phi_0$ and $\Phi_N$. In this case on the date when this large price change occurs the error in the notional locked in at the forward rate is large enough to magnify the error in
our estimate of the change in the mark-to-market. Furthermore, all subsequent mark-to-market changes are much smaller and cannot balance this cost/benefit. Fortunately, most exchanges limit the
maximum change in the futures price that can occur from one date to the next. But if such large price moves are possible then, even if the futures price is not a function of the interest rate, the
assumption that it is equal to the forward price is wrong.
In general, the relation between the futures and the forward price cannot be derived through a static arbitrage strategy unless interest rates have a deterministic term-structure. The derivation of
the relation between the futures and the forward price of an asset is one of the first applications of dynamic hedging [Black 1976].
The main features of a futures contract are:
• It is a standardized contract in terms of quantity, expiration date, settlement procedures, etc.
• Transactions in a futures contract are fully transparent.
• It is traded in organized exchange and is "marked to market" daily.
• Physical delivery of underlying assets is virtually never taken.
A Swap is an agreement to exchange a sequence of cash flows over a period of time in the future in same or different currencies. Mainly used for hedging various interest rate exposures, they are very
popular and highly liquid instruments. Some of the very popular swap types are
Fixed - Float (Same currency) Party P pays/receives fixed interest in currency A to receive/pay floating rate in currency A indexed to X on a notional N for a tenor T years. For example, you pay
fixed 5.32% monthly to receive USD 1M Libor monthly on a notional USD 1 mio for 3 years. Fixed-Float swaps in same currency are used to convert a fixed/floating rate asset/liability to a floating/
fixed rate asset/liability. For example, if a company has a fixed rate USD 10 mio loan at 5.3% paid monthly and a floating rate investment of USD 10 mio that returns USD 1M Libor +25 bps monthly, and
wants to lock in the profit as they expect the USD 1M Libor to go down, then they may enter into a Fixed-Floating swap where the company pays floating USD 1M Libor +25 bps and receives a 5.5% fixed rate,
locking in 20bps profit.
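As an illustration of the lock-in arithmetic in that example, here is a hedged Python sketch; the Libor path below is invented for the demonstration and day-count details are ignored. Whatever Libor does, the net monthly cashflow stays at (5.5% − 5.3%) of the notional, i.e. the 20 bps lock-in.
# Fixed-float swap lock-in sketch (illustrative numbers, simple monthly accrual).
notional = 10_000_000
loan_fixed = 0.053          # company pays 5.3% fixed on its loan
invest_spread = 0.0025      # investment returns 1M Libor + 25 bps
swap_receive_fixed = 0.055  # swap: receive 5.5% fixed, pay 1M Libor + 25 bps
libor_path = [0.050, 0.048, 0.045, 0.042]   # made-up monthly Libor fixings
for libor in libor_path:
    monthly = notional / 12
    net = (-loan_fixed                        # loan interest paid
           + (libor + invest_spread)          # investment income received
           + swap_receive_fixed               # swap fixed leg received
           - (libor + invest_spread)) * monthly   # swap floating leg paid
    print(f"Libor {libor:.3%}: net monthly cashflow = {net:,.2f}")
    # = (5.5% - 5.3%) * notional / 12, independent of Libor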
Fixed - Float (Different currency) Party P pays/receives fixed interest in currency A to receive/pay floating rate in currency B indexed to X on a notional N at an initial exchange rate of FX for a
tenor T years. For example, you pay fixed 5.32% on the USD notional 10 mio quarterly to receive JPY 3M Tibor monthly on a JPY notional 1.2 bio (at an initial exchange rate of USDJPY 120) for 3 years.
For non-deliverable swaps, the USD equivalent of the JPY interest will be paid/received (as per the FX rate on the FX fixing date for the interest payment day). Note that in this case no initial exchange of
notional takes place unless the Fx fixing date and the swap start date fall in the future. Fixed-Float swaps in different currency are used to convert a fixed/floating rate asset/liability in one
currency to a floating/fixed rate asset/liability in a different currency. For example, if a company has a fixed rate USD 10 mio loan at 5.3% paid monthly and a floating rate investment of JPY 1.2
bio that returns JPY 1M Libor +50 bps monthly, and wants to lock in the profit in USD as they expect the JPY 1M Libor to go down or USDJPY to go up (JPY depreciating against USD), then they may enter
into a Fixed-Floating swap in a different currency where the company pays floating JPY 1M Libor +50 bps and receives a 5.6% fixed rate, locking in 30 bps of profit against both the interest rate and the FX risk.
Float - Float (Same Currency, different index) Party P pays/receives floating interest in currency A indexed to X to receive/pay floating rate in currency A indexed to Y on a notional N for a tenor T
years. For example, you pay JPY 1M Libor monthly to receive JPY 1M Tibor monthly on a notional JPY 1 bio for 3 years.
In this case, the company wants to protect itself against the spread between the two indices widening or narrowing. For example, if a company has a floating rate loan at JPY 1M Libor and the company has an investment that returns
JPY 1M Tibor +30 bps and currently JPY 1M Tibor = JPY 1M Libor +10 bps. At the moment, this company has a net profit of 40 bps. If the company thinks JPY 1M Tibor is going to come down or JPY 1M
Libor is going to increase in the future and wants to insulate itself from this risk, they can enter into a Float-Float swap in the same currency where they pay JPY Tibor +10 bps and receive JPY Libor +35 bps.
With this, they have effectively locked in a 35 bps profit instead of running with the current 40 bps gain and index risk. The 5 bps difference comes from the swap cost which includes the market
expectations of the future rates in these two indices and the bid/offer spread which is the swap commission for the swap dealer.
Float - Float (Different Currency) Party P pays/receives floating interest in currency A indexed to X to receive/pay floating rate in currency B indexed to Y on a notional N at an initial exchange
rate of FX for a tenor T years. For example, you pay floating USD 1M libor on the USD notional 10 mio quarterly to receive JPY 3M Tibor monthly on a JPY notional 1.2 bio (at an initial exchange rate
of USDJPY 120) for 4 years.
To explain the use of this type of swap, consider a US company operating in Japan. To fund their Japanese growth, they need JPY 10 bio. The easiest option for the company is to issue debt in Japan.
As the company might be new in the Japanese market without a well-known reputation among Japanese investors, this can be an expensive option. On top of this, the company might not have an
appropriate debt issuance program in Japan and might lack a sophisticated treasury operation there. To overcome these problems, they can issue USD debt and convert it to JPY in the FX market.
Although this option solves the first problem, it introduces two new risks to the company. 1. FX risk: if the USDJPY spot rate goes up at the maturity of the debt, then when the company converts
the JPY to USD to pay back its matured debt, it receives less USD and suffers a loss. 2. USD and JPY interest rate risk: if JPY rates come down, the return on the investment in Japan might go down,
and this introduces an interest rate risk component.
The first exposure above can be hedged using long-dated FX forward contracts, but this introduces a new risk: the implied rate from the FX spot and the FX forward is a fixed rate, while the JPY
investment returns a floating rate. Although there are several alternatives to hedge both exposures effectively without introducing new risks, the easiest and most cost-effective
alternative would be to use a Float-Float swap in different currencies. In this, the company raises USD by issuing USD debt and swaps it to JPY. It receives a USD floating rate (matching the interest
payments on the USD Debt) and pays JPY floating rate matching the returns on the JPY investment.
Fixed - Fixed (Different Currency) Party P pays/receives fixed interest in currency A to receive/pay fixed rate in currency B for a tenor T years. For example, you pay JPY 1.6% on a JPy notional of
1.2 bio and receive USD 5.36% on the USD equivalent notional of 10 mio at an initial exchange rate of USDJPY 120.
Usage is similar to above but you receive USD fixed rate and pay JPY Fixed rate.
--192.147.54.3 05:07, 29 June 2007 (UTC)M G Naidu
Primarily used as hedging instruments, against varying interest payments. The base concept is quite easy to follow; you swap a fixed rate for a floating rate or vice-versa. In the case of companies
that offer Variable Rate Bonds, they can enter into a swap agreement with a broker/dealer; where the company pays the broker a fixed rate as per agreement and the broker provides them with the
floating rate, which can be used to make periodic coupon payments. In essence, the company has hedged its risk against a sudden rate increase, as it has locked in a fixed rate over time. Swaps may be
terminated with one party paying its counterpart a certain fee, which may have been determined at the time of the initial agreement or may be based on future payments if interest rates were to remain unchanged.
An option is a financial instrument that gives the holder the right to purchase or sell the stated number of shares at a predetermined price (exercise price) on or before a certain future date. It can be defined
as a contract between two investors (i.e., the option writer and the option buyer).
There are two types of stock options:
• Call Option: A call option gives the buyer the right to purchase the given stock at the strike price. Thus a call option is generally bought when the buyer is bullish about the underlying security. The
value of a call option can be calculated by the following equation:
V[c]= Max.(V[s]- E,0)
V[c] = value of call option
Max = Maximum
0 = Zero
V[s] = Value of stock
E = Exercise price or strike price
Profit or Loss
Buyer = V[c] - Premium
Seller = Premium - V[c]
Break- Even point
Buyer = Exercise Price + Premium Paid
Seller = Exercise Price + premium Received
• Put Option: Similarly buying a put option gives you the right to sell the underlying stock at the strike price. Put option is bought when the buyer has bearish views about the underlying
security. The value of a put option can be calculated by the following equation:
V[p]= Max(E - V[s],0)
V[p] = value of Put option
Max = Maximum
0 = Zero
V[s] = Value of stock
E = Exercise price or strike price
Profit or Loss
Buyer = V[p] - Premium
Seller = Premium - V[p]
Break- Even point
Buyer = Exercise Price - Premium Paid
Seller = Exercise Price - premium Received
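These payoff and break-even formulas translate directly into code. A minimal Python sketch with an illustrative strike and premium (not taken from the text):
# Call and put values at expiry, buyer P&L and break-even (illustrative numbers).
def call_value(stock, strike):
    return max(stock - strike, 0.0)
def put_value(stock, strike):
    return max(strike - stock, 0.0)
strike, premium = 100.0, 5.0
for stock in (80.0, 100.0, 105.0, 120.0):
    vc, vp = call_value(stock, strike), put_value(stock, strike)
    print(f"S={stock:6.1f}  call buyer P&L={vc - premium:7.2f}  "
          f"put buyer P&L={vp - premium:7.2f}")
print("call break-even:", strike + premium)   # exercise price + premium paid
print("put break-even: ", strike - premium)   # exercise price - premium paid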
Each option comes with an "Exercise Date". European options may only be exercised on the exercise date, whereas American options may be exercised at any time up till the exercise date.
Due to the put-call parity, it is possible to create artificial call or put options if the other is not available. Put options may also be used as a hedging instrument, against possible decline in
value of the underlying stock.
While stocks with high volatility (modified duration) are high risk, options whose underlying stock has high volatility are actually better. They provide the possibility of a higher payout if the
stock goes up in proportion to its volatility, while the potential loss stays the same (limited to the premium paid).
Options on Forwards[edit]
In this case, the underlying asset on which the option is written is a forward contract. A market exists in which forward contracts are traded. We do not impose the martingale property on the s.d.e.
for a forward price. Rather, given the current forward price $F(t,T)$,
$\frac{dF(t,T)}{F(t,T)}=\mu dt + \sigma dW(t)$
In order to simplify the analysis we assume that $\mu$ and $\sigma$ are positive constants. The mark-to-market of a forward contract with arbitrary strike $K$ is,
$V(t,T)=B(t,T) [F(t,T)-K]$
where $B(t,T)=\exp[-r(T-t)]$ and $r$ is the risk-free rate. An option on a forward gives the buyer of the option the right to purchase a forward contract with strike $K$ and expiry $T^*$ at some
future date $T < T^*$. Let's price this option blindly using the actuarial approach. This approach requires that the price of the option is given by taking the expectation of its payoff under the 'true'
distribution of the forward price,
$C(t,T)=B(t,T) \mathbf{E}_t \left\{B(T,T^*) [F(T,T^*)-K]^+ \right\}$
where $F(T,T^*)=F(t,T^*) \exp\left[ \left( \mu - \frac{1}{2} \sigma^2 \right) (T-t) + \sigma \sqrt{T-t} U \right]$
and $U$ is a standard normal random variable. The expectation has the following simple solution,
$C(t,T^*)=B(t,T^*) \left[F\exp[\mu(T-t)]N(d_1) - K N(d_2)\right]$
where
$d_1=\frac{\ln\left(\frac{F}{K}\right)+\left(\mu+\frac{1}{2}\sigma^2\right)(T-t)}{\sigma \sqrt{T-t}}$
$N(x)=\mathrm{Prob}(U \leq x)$ is the standard normal cumulative distribution function, $F=F(t,T^*)$, and $d_2=d_1 - \sigma \sqrt{T-t}$. In the same way the price of a put is given by,
$P(t,T^*)=B(t,T^*) \left[-F\exp[\mu(T-t)]N(-d_1) + K N(-d_2)\right]$
In the absence of arbitrage, put-call parity requires the following equation to hold,
$C(t,T^*)-P(t,T^*)=B(t,T^*) \left[F(t,T^*)-K\right]$
This is equivalent to,
$F(t,T^*)\exp[\mu(T-t)]=F(t,T^*)$
This is only possible if $\mu=0$. This transparent approach, first proposed by Emanuel Derman and Nassim Taleb [2], generates the arbitrage-free option price without the need for unrealistic
assumptions about the viability of dynamic hedging. The only assumption we made was regarding the 'true' probability distribution function of the forward price. If we choose a more general approach,
where $F(T,T^*)$ has an arbitrary probability distribution function, then the value of a call option is given by,
$\mathbf{E}_t \left\{[F(T,T^*)-K]^+ \right\}= \mathbf{E}_t \left\{F(T,T^*)\right\} \tilde{P}(F(T,T^*)>K)-KP(F(T,T^*)>K)$
where $P(\cdot)$ is the 'true' probability distribution and $\tilde{P}(\cdot)$ is a probability distribution with the property,
$d\tilde{P}(F(T,T^*))=\frac{F(T,T^*)}{\mathbf{E}_t \{ F(T,T^*) \}}dP(F(T,T^*))$
In the same way, the value of a put option is given by,
$\mathbf{E}_t \left\{[K-F(T,T^*)]^+ \right\}= -\mathbf{E}_t \left\{F(T,T^*)\right\} \tilde{P}(F(T,T^*)<K)+KP(F(T,T^*)<K)$
By put-call parity,
$B(t,T^*) [\mathbf{E}_t \left\{F(T,T^*)\right\}-K]=B(t,T^*) [F(t,T)-K]$
$\mathbf{E}_t \left\{F(T,T^*)\right\}=F(t,T)$
i.e. under an arbitrary 'true' distribution the option on the forward is priced by using the martingale property for the forward price.
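To see the parity argument numerically, here is a hedged Python sketch of the actuarial call and put prices above, taking N to be the standard normal cumulative distribution function. With μ ≠ 0 the call-minus-put value disagrees with the discounted forward payoff B(t,T*)[F − K]; with μ = 0 the two agree. The parameter values are illustrative only.
# Actuarial pricing of an option on a forward; ncdf is the standard normal CDF.
from math import exp, log, sqrt, erf
def ncdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))
def call_put(F, K, r, sigma, mu, tau, tau_star):
    B = exp(-r * tau_star)
    d1 = (log(F / K) + (mu + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    call = B * (F * exp(mu * tau) * ncdf(d1) - K * ncdf(d2))
    put = B * (-F * exp(mu * tau) * ncdf(-d1) + K * ncdf(-d2))
    return call, put
F, K, r, sigma, tau, tau_star = 100.0, 95.0, 0.03, 0.2, 1.0, 1.5
for mu in (0.05, 0.0):
    c, p = call_put(F, K, r, sigma, mu, tau, tau_star)
    parity_rhs = exp(-r * tau_star) * (F - K)   # value of a forward struck at K
    print(f"mu={mu:5.2f}  C-P={c - p:8.4f}  B*(F-K)={parity_rhs:8.4f}")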
Options on the Product of Two Asset Prices[edit]
The growth of the financial sector has resulted in products which are covered under the broad term "exotic derivatives". These derivatives are often written on indices which are derived from traded
prices but which themselves are not traded. Depending on investor preferences an index can be a function of more than one asset prices and can be determined from the value of these asset prices from
a single or a series of observations. Exotic derivatives can either be priced using analytic methods or numerical techniques. The framework used to price all exotic derivatives is based on the
Black-Scholes option pricing theory, in which dynamic hedging is used to obtain an arbitrage-free equation for the option price. Although we can always obtain a p.d.e. for all exotic derivatives, an
analytic solution cannot always be obtained. However, there exists a large range of exotics where an analytic solution is possible. An option on the product of two asset prices has an analytic
Given two traded assets, an index can be created where the value of the index at some time $t$ is defined as,
$S(t)=\frac{P_1(t)P_2(t)}{P_1(0)P_2(0)}$
where $t=0$ is the time at which the index is created and $S(0)=1$. An option can be written on this index with payoff at expiry $T$,
$C(T)=\max [ S(T) - 1 ,0]$
Since the option is only a function of $P_1$, $P_2$ and $t$, given the s.d.e.s for the prices of the two assets,
$\frac{dP_1(t)}{P_1(t)}=m_1 dt + \sigma_1 dW_t^1$
$\frac{dP_2(t)}{P_2(t)}=m_2 dt + \sigma_2 dW_t^2$
(where $dW_t^1 dW_t^2 = \rho dt$) Itô's lemma can be applied to the price of the option to give,
$dC=\left[ \frac{\partial C}{\partial t} + m_1 P_1(t)\frac{\partial C}{\partial P_1} + m_2 P_2(t)\frac{\partial C}{\partial P_2} + \frac{1}{2} \sigma_1^2 P_1(t)^2 \frac{\partial^2 C}{\partial P_1^2}
+ \frac{1}{2} \sigma_2^2 P_2(t)^2 \frac{\partial^2 C}{\partial P_2^2} + \sigma_1 \sigma_2 \rho P_1(t) P_2(t) \frac{\partial^2 C}{\partial P_1\partial P_2}\right] dt + \sigma_1 \frac{\partial C}{\
partial P_1} P_1(t) dW_t^1+ \sigma_2 \frac{\partial C}{\partial P_2}P_2(t) dW_t^2$
A portfolio consisting of $1 of the option, $-\partial C / \partial P_1$ of asset 1 and $-\partial C / \partial P_2$ of asset 2 must therefore have an s.d.e. given by,
$d\left( C - \frac{\partial C}{\partial P_1} P_1(t) - \frac{\partial C}{\partial P_2} P_2(t)\right)=\left[ \frac{\partial C}{\partial t} + \frac{1}{2} \sigma_1^2 P_1(t)^2 \frac{\partial^2 C}{\partial
P_1^2} + \frac{1}{2} \sigma_2^2 P_2(t)^2 \frac{\partial^2 C}{\partial P_2^2} + \sigma_1 \sigma_2 \rho P_1(t) P_2(t) \frac{\partial^2 C}{\partial P_1\partial P_2}\right] dt$
Since this portfolio has no sources of risk, in the absence of arbitrage it must have an instantaneous return equal to the risk-free rate $r$. Therefore the last equation gives rise to the following p.d.e.,
$rC= \frac{\partial C}{\partial t} + r P_1\frac{\partial C}{\partial P_1} + r P_2\frac{\partial C}{\partial P_2} + \frac{1}{2} \sigma_1^2 P_1^2 \frac{\partial^2 C}{\partial P_1^2} + \frac{1}{2} \
sigma_2^2 P_2^2 \frac{\partial^2 C}{\partial P_2^2} + \sigma_1 \sigma_2 \rho P_1 P_2 \frac{\partial^2 C}{\partial P_1\partial P_2}$
From the payoff function of this option we can deduce that the pricing equation can be transformed into a two-dimensional one with variables $t$ and $P = P_1 P_2$. Note that,
$\frac{\partial C}{\partial P_1}=P_2\frac{\partial C}{\partial P}$
$\frac{\partial C}{\partial P_2}=P_1\frac{\partial C}{\partial P}$
$\frac{\partial^2 C}{\partial P_1^2}=P_2^2\frac{\partial^2 C}{\partial P^2}$
$\frac{\partial^2 C}{\partial P_2^2}=P_1^2\frac{\partial^2 C}{\partial P^2}$
$\frac{\partial^2 C}{\partial P_1 \partial P_2}=P_1 P_2\frac{\partial^2 C}{\partial P^2}+\frac{\partial C}{\partial P}$
Therefore the p.d.e. can be simplified to,
$rC= \frac{\partial C}{\partial t} + m P\frac{\partial C}{\partial P} + \frac{1}{2} \sigma^2 P^2 \frac{\partial^2 C}{\partial P^2}$
where
$m=2r+\sigma_1 \sigma_2 \rho$
$\sigma=\sqrt{\sigma_1^2+\sigma_2^2+2 \sigma_1 \sigma_2 \rho}$
and boundary condition $C(T) = \max[P(T)/P(0) - 1, 0]$. This p.d.e. is the Black-Scholes p.d.e. for a call option and can be solved to give,
$C(0)= \exp[(m-r)T]N(h_1) - \exp[-rT]N(h_2)$
$h_1=\frac{\left(m + \frac{1}{2}\sigma^2 \right)\sqrt{T}}{\sigma}$
$h_2=\frac{\left(m - \frac{1}{2}\sigma^2 \right)\sqrt{T}}{\sigma}$
The same result can be obtained by starting with the risk-neutral processes for the two assets,
$\frac{dP_1(t)}{P_1(t)}=r dt + \sigma_1 d \tilde{W}_t^1$
$\frac{dP_2(t)}{P_2(t)}=r dt + \sigma_2 d \tilde{W}_t^2$
Using Itô's lemma, the process for the product of the two prices is,
$\frac{dP(t)}{P(t)}=m dt + \sigma d W_t$
and the pricing equation derived using the p.d.e. follows.
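For completeness, here is a small Python sketch of the closed-form price derived above; the parameters are illustrative only.
# Closed-form price of the option on the product of two asset prices.
from math import exp, sqrt, erf
def ncdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))
def product_option_price(r, sigma1, sigma2, rho, T):
    m = 2 * r + sigma1 * sigma2 * rho
    sigma = sqrt(sigma1**2 + sigma2**2 + 2 * sigma1 * sigma2 * rho)
    h1 = (m + 0.5 * sigma**2) * sqrt(T) / sigma
    h2 = (m - 0.5 * sigma**2) * sqrt(T) / sigma
    return exp((m - r) * T) * ncdf(h1) - exp(-r * T) * ncdf(h2)
print(product_option_price(r=0.03, sigma1=0.2, sigma2=0.3, rho=0.5, T=1.0))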
Advanced Structures[edit]
Theoretically, the price of an option (or option premium) consists of two elements: the intrinsic value and the time value of the option. Therefore, Premium = Intrinsic value + Time value.
The price of an option depends on five things: the strike price, the price of the underlying asset, the time to maturity, the risk-free interest rate and the volatility. Since the first four can be read from the
markets, the only unknown factor in the price of the option is volatility.
|
{"url":"http://en.wikibooks.org/wiki/Financial_Derivatives/Basic_Derivatives_Contracts","timestamp":"2014-04-20T13:35:14Z","content_type":null,"content_length":"72331","record_id":"<urn:uuid:5a1943e2-4145-4db7-baab-a01f3cf5defe>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help with direct & inverse variations
October 3rd 2009, 09:17 AM #1
Oct 2009
Help with direct & inverse variations
I understand the basics of variations, but am having trouble when numbers are applied. For example, I have the following problem:
"Assume that y is directly proportional to x. Use the given x-value and y-value to find a linear model that relates y and x.
x = 9, y = 36 "
How do I find the linear model?
The same goes for this problem:
"Find a mathematical model representing the statement. (In each case, determine the constant of proportionality.) y is inversely proportional to x. (y = 7 when x = 32.)"
Any help is greatly appreciated. Thanks
Directly proportional means y = k x for some constant k. Plugging in those numbers you get:
36 = k 9
Find k = 4
So the model is y= 4x
Good luck!
I understand the basics of variations, but am having trouble when numbers are applied. For example, I have the following problem:
"Assume that y is directly proportional to x. Use the given x-value and y-value to find a linear model that relates y and x.
x = 9, y = 36 "
y is directly proportional to x ... y = kx
y = 4x ... would be a linear equation, right?
The same goes for this problem:
"Find a mathematical model representing the statement. (In each case, determine the constant of proportionality.) y is inversely proportional to x. (y = 7 when x = 32.)"
y is inversely proportional to x ... $\textcolor{red}{y = \frac{k}{x}}$
$\textcolor{red}{y = \frac{224}{x}}$
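The same constant-of-proportionality computation, as a quick Python check using the numbers from this thread:
# Direct variation: y = k*x   -> k = y/x
x, y = 9, 36
k_direct = y / x
print(f"y = {k_direct:g}x")        # y = 4x
# Inverse variation: y = k/x  -> k = y*x
x, y = 32, 7
k_inverse = y * x
print(f"y = {k_inverse:g}/x")      # y = 224/x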
|
{"url":"http://mathhelpforum.com/pre-calculus/105844-help-direct-inverse-variations.html","timestamp":"2014-04-20T07:42:28Z","content_type":null,"content_length":"39262","record_id":"<urn:uuid:31a9767c-c39b-427d-a9b2-2de52755b516>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Technical report Frank / Johann-Wolfgang-Goethe-Universität, Fachbereich Informatik und Mathematik, Institut für Informatik
On conservativity of concurrent Haskell (2011)
David Sabel Manfred Schmidt-Schauß
The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads, monadic concurrent evaluation, and includes a pure
functional lambda-calculus which comprises data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive
unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in
CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid
if Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO.
Computing overlappings by unification in the deterministic lambda calculus LR with letrec, case, constructors, seq and variable chains (2011)
Conrad Rau Manfred Schmidt-Schauß
Correctness of program transformations in extended lambda calculi with a contextual semantics is usually based on reasoning about the operational semantics which is a rewrite semantics. A
successful approach to proving correctness is the combination of a context lemma with the computation of overlaps between program transformations and the reduction rules. The method is similar to
the computation of critical pairs for the completion of term rewriting systems. We describe an effective unification algorithm to determine all overlaps of transformations with reduction rules
for the lambda calculus LR which comprises a recursive let-expressions, constructor applications, case expressions and a seq construct for strict evaluation. The unification algorithm employs
many-sorted terms, the equational theory of left-commutativity modeling multi-sets, context variables of different kinds and a mechanism for compactly representing binding chains in recursive
let-expressions. As a result the algorithm computes a finite set of overlappings for the reduction rules of the calculus LR that serve as a starting point to the automatization of the analysis of
program transformations.
Fast equality test for straight-line compressed strings (2011)
Manfred Schmidt-Schauß Georg Schnitger
The paper describes a simple and fast randomized test for equality of grammar-compressed strings. The thorough running time analysis is done by applying a logarithmic cost measure. Keywords:
randomized algorithms, straight line programs, grammar-based compression
A contextual semantics for concurrent Haskell with futures (2011)
David Sabel Manfred Schmidt-Schauß
In this paper we analyze the semantics of a higher-order functional language with concurrent threads, monadic IO and synchronizing variables as in Concurrent Haskell. To assure declarativeness of
concurrent programming we extend the language by implicit, monadic, and concurrent futures. As semantic model we introduce and analyze the process calculus CHF, which represents a typed core
language of Concurrent Haskell extended by concurrent futures. Evaluation in CHF is defined by a small-step reduction relation. Using contextual equivalence based on may- and should-convergence
as program equivalence, we show that various transformations preserve program equivalence. We establish a context lemma easing those correctness proofs. An important result is that call-by-need
and call-by-name evaluation are equivalent in CHF, since they induce the same program equivalence. Finally we show that the monad laws hold in CHF under mild restrictions on Haskell’s
seq-operator, which for instance justifies the use of the do-notation.
Pattern matching of compressed terms and contexts and polynomial rewriting (2011)
Manfred Schmidt-Schauß
A generalization of the compressed string pattern match that applies to terms with variables is investigated: Given terms s and t compressed by singleton tree grammars, the task is to find an
instance of s that occurs as a subterm in t. We show that this problem is in NP and that the task can be performed in time O(n c^{|Var(s)|}), including the construction of the compressed
substitution, and a representation of all occurrences. We show that the special case where s is uncompressed can be performed in polynomial time. As a nice application we show that for an
equational deduction of t to t0 by an equality axiom l = r (a rewrite) a single step can be performed in polynomial time in the size of compression of t and l; r if the number of variables is
fixed in l. We also show that n rewriting steps can be performed in polynomial time, if the equational axioms are compressed and assumed to be constant for the rewriting sequence. Another
potential application are querying mechanisms on compressed XML-data bases.
|
{"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/series/id/16122/start/0/rows/10/yearfq/2011","timestamp":"2014-04-20T21:59:28Z","content_type":null,"content_length":"24061","record_id":"<urn:uuid:678b17c3-db8a-4ada-92a3-19048ddf2e77>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Examples and Notation
Okay, let’s do some simple examples of differentials, which will lead to some notational “syntactic sugar”.
First of all, if we pick an orthonormal basis $\left\{e_i\right\}_{i=1}^n$ we can write any point as $x=x^ie_i$. This gives us $n$ nice functions to consider: $x^i:\mathbb{R}^n\rightarrow\mathbb{R}$
is the function that takes a point and returns its $i$th coordinate. This is actually a sort of subtle point that’s important to consider deeply. We’re used to thinking of $x^i$ as a variable, which
stands in for some real number. I’m saying that we want to consider it as a function in its own right. In a way, this is just extending what we did when we considered polynomials as functions and we
can do everything algebraically with abstract “variables” as we can with specific “functions” as our $x^i$.
Analytically, though, we can ask how the function $x^i$ behaves as we move our input point around. It's easy to find the partial derivatives. If $k\neq i$ then
$\displaystyle D_kx^i(x)=0$
since moving in the $e_k$ direction doesn't change the $i$th component. On the other hand, if $k=i$ then
$\displaystyle D_kx^i(x)=1$
since moving a distance $t$ in the $e_k$ direction adds exactly $t$ to the $i$th component. That is, we can write $D_kx^i=\delta_k^i$ — the Kronecker delta.
Of course, since ${0}$ and ${1}$ are both constant, they’re clearly continuous everywhere. Thus by the condition we worked out yesterday the differential of $x^i$ exists, and we find
$\displaystyle dx^i(x;t)=\delta_k^it^k=t^i$
We can also write the differential as a linear functional $dx^i(x)$. Since this takes a vector $t$ and returns its $i$th component, it is exactly the dual basis element $\eta^i$. That is, once we
pick an orthonormal basis for our vector space of displacements, we can actually write the dual basis of linear functionals as the differentials $dx^i$. And from now on that’s exactly what we’ll do.
So, for example, let’s say we’ve got a differentiable function $f:\mathbb{R}^n\rightarrow\mathbb{R}$. Then we can write its differential as a linear functional
In the one-dimensional case, we write $df(x)=f'(x)dx$, leading us to the standard Leibniz notation
If we have to evaluate this function, we use an “evaluation bar” $\frac{df}{dx}\bigr\vert_{x}=f'(x)$, or $\frac{df}{dx}\bigr\vert_{x=a}=f'(a)$ telling us to substitute $a$ for $x$ in the formula for
$\frac{df}{dx}$. We also can write the operator that takes in a function and returns its derivative by simply removing the function from this Leibniz notation: $\frac{d}{dx}$.
Now when it comes to more than one variable, we can’t just “divide” by one of the differentials $dx^i$, but we’re going to use something like this notation to read off the coefficient anyway. In
order to remind us that we’re not really dividing and that there are other variables floating around, we replace the $d$ with a curly version: $\partial$. Then we can write the partial derivative
$\displaystyle\frac{\partial f}{\partial x^i}=D_if$
and the whole differential as
$\displaystyle df=\frac{\partial f}{\partial x^1}dx^1+\dots+\frac{\partial f}{\partial x^n}dx^n=\frac{\partial f}{\partial x^i}dx^i$
Notice here that when we see an upper index in the denominator of this notation, we consider it to be a lower index. Similarly, if we find a lower index in the denominator, we’ll consider it to be
like an upper index for the purposes of the summation convention. We can even incorporate evaluation bars
$\displaystyle df(a)=\frac{\partial f}{\partial x^1}\biggr\vert_{x=a}dx^1+\dots+\frac{\partial f}{\partial x^n}\biggr\vert_{x=a}dx^n=\frac{\partial f}{\partial x^i}\biggr\vert_{x=a}dx^i$
or strip out the function altogether to write the “differential operator”
$\displaystyle d=\frac{\partial}{\partial x^1}dx^1+\dots+\frac{\partial}{\partial x^n}dx^n=\frac{\partial}{\partial x^i}dx^i$
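The coordinate viewpoint is easy to check with a computer algebra system. Here is a small sympy sketch (the function $f$ is just an example) that computes the partial derivatives and assembles the differential $df=\frac{\partial f}{\partial x^i}dx^i$ as a formal expression.
# Assembling df = (∂f/∂x^i) dx^i for a sample function, using sympy.
import sympy as sp
x1, x2, x3 = sp.symbols('x1 x2 x3')
dx = sp.symbols('dx1 dx2 dx3')          # formal basis differentials dx^i
f = x1**2 * x2 + sp.sin(x3)             # an example function R^3 -> R
partials = [sp.diff(f, v) for v in (x1, x2, x3)]
df = sum(p * d for p, d in zip(partials, dx))
print(df)   # 2*x1*x2*dx1 + x1**2*dx2 + cos(x3)*dx3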
|
{"url":"http://unapologetic.wordpress.com/2009/10/02/examples-and-notation/?like=1&source=post_flair&_wpnonce=18b5244a97","timestamp":"2014-04-17T12:39:45Z","content_type":null,"content_length":"81136","record_id":"<urn:uuid:04e4e1c3-4b57-459a-b693-82b1b28e330f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Please help! Find the excluded value of the rational expression. (2x + 6)/(4x – 8) A. -3 B. -2 C. 0 D. 2 A, B, C, or D? Please explain (:
instead of x put in a number from your answers and see what you get
(2(-3)+6) / (4(-3)-8) = 0/-20 (2(-2)+6) / (4(-2)-8) =2/-16 (2(0)+6) / (4(0)-8) =6/-8 (2(2)+6) / (4(2)-8) =10/0
That didn't really get us anywhere; let's just use algebra: \[\frac{ (2x+6) }{(4x-8) } = \frac{ 2(x+3) }{ 4(x-2) }\]
We want to find the value where the expression is undefined, which happens when the denominator is zero. So what we have to do is make the denominator zero; what would make \[4(x-2) =0?\] When x=2, because \[4(2-2)=4(0)=0\], so D is your answer.
Thank You!
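For what it's worth, the same check can be scripted; here is a small sketch using Python's SymPy library (not part of the original thread):

import sympy as sp

x = sp.symbols('x')
expr = (2*x + 6) / (4*x - 8)

# The excluded values are exactly the real zeros of the denominator.
excluded = sp.solveset(sp.denom(expr), x, domain=sp.S.Reals)
print(excluded)   # {2}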
Math Forum Discussions
Topic: fmincon problem
Replies: 5 Last Post: Jul 18, 2011 3:22 PM
Re: fmincon problem
Posted: Jul 18, 2011 3:22 PM
I think that using the 'interior-point' algorithm solves my problem. If the program is feasible, it gives a proper solution. For an infeasible program, the code gives weird output, but I can check the
exitflag to know that it is infeasible.
Thanks a lot for all the help. I really appreciate it. I may reply here again if I get stuck in fmincon again.
"Matt J" wrote in message <j01r6b$sml$1@newscl01ah.mathworks.com>...
> "Nazmul Islam" wrote in message <j01jlq$5dm$1@newscl01ah.mathworks.com>...
> >
> > I have not set the 'AlwaysHonorConstraints' option. It is probably set to the default value.
> =====================
> Then you should turn it on (it's off by default).
> > Some constraints of c(i) are off by as much as 10 or 2000! So, it's not coming from round-off error, I guess.
> ================
> Check the exitflag to see if the algorithm stopped prematurely.
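The same pattern, running the solver and then inspecting its status flag instead of trusting the returned point blindly, carries over to other optimizers. Here is a rough Python/SciPy analogue (an illustrative sketch only, with a made-up objective and constraint; it is not the poster's MATLAB code):

import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# A made-up smooth objective and nonlinear constraint, purely for illustration.
objective = lambda v: (v[0] - 1.0)**2 + (v[1] + 2.0)**2
constraint = NonlinearConstraint(lambda v: v[0]**2 + v[1]**2, 0.0, 4.0)

# 'trust-constr' plays a role loosely similar to fmincon's 'interior-point'.
result = minimize(objective, x0=np.zeros(2), method='trust-constr',
                  constraints=[constraint])

# Always check the status (the analogue of fmincon's exitflag) before using
# result.x; an infeasible or prematurely stopped run still returns numbers.
if result.success:
    print("solution:", result.x)
else:
    print("solver did not converge:", result.message)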
Seminar on Stochastic Processes
The Seminar on Stochastic Processes is an ongoing series of conferences. Many of the participants work on such topics as Markov processes, Brownian motion, Superprocesses, and Stochastic Analysis.
SSP encourages interaction between young researchers and more senior ones, and typically devotes half its program to informal and problem sessions. An alphabetical list of former speakers is also available.
From 1981 to 1992, the Proceedings of SSP were published as volumes in Birkhäuser's Progress in Probability series. For more information see: http://www.math.yorku.ca/Probability/ssparch.html
Speaker list
Ted Cox (Syracuse)
Robert Griffiths (Oxford)
Chuck Newman (Courant Institute)
Kavita Ramanan (Carnegie Mellon)
Balazs Szegedy (University of Toronto)
Short presentations
Michelle Boue, Trent University
Dimitris Cheliotis, University of Toronto
Nikolai Dokuchaev, Trent University
Anna Savu, University of Toronto
Deniz Sezer, York University
David Steinsaltz, Queen's University
Benedek Valko (University of Toronto)
Vladimir Vinogradov (Ohio University)
Michelle Boue, Trent University
Phase transitions for some interacting particle systems with movement
We consider a system of interacting moving particles on the d-dimensional lattice. This system introduces disorder to a model proposed by Kesten and Sidoravicius for the spread of epidemics. We will
discuss and compare the phase transitions of both systems.
Dimitris Cheliotis (University of Toronto)
The noise of perturbed random walk on some regular graphs
We consider random walk in a mildly random environment on finite transitive d-regular graphs of increasing girth. After centering and scaling, the analytic spectrum of the transition matrix converges
in distribution to a Gaussian noise. An interesting phenomenon occurs at d=2: as the limiting object changes from a regular tree to the integers, the noise becomes localized. The talk is based on
joint work with Balint Virag.
Ted Cox (University of Syracuse)
Survival and coexistence for a stochastic Lotka-Volterra model
In 1999, Neuhauser and Pacala introduced a stochastic spatial version of the Lotka-Volterra model for interspecific competition. In this talk I will discuss some recent work with Ed Perkins analyzing
this model. Our approach, which works when the parameters of the process are "nearly critical," is to (1) show that suitably scaled Lotka-Volterra models converge to super-Brownian motion, and (2)
"transfer" information from the super-Brownian motion back to the Lotka-Volterra models. We are able to show survival and coexistence for certain parameter values this way.
Deniz Sezer (York University)
Conditioning super-Brownian motion on its exit measure
Let $X$ be a super-Brownian motion defined on a domain $E$ in the euclidean space and $(X_D)$ be its exit measures indexed by sub-domains of $E$. We pick a sub-domain $D$ and condition the
super-Brownian motion inside this domain on its exit measure $X_D$. We give an explicit construction of the resulting conditional law in terms of a particle system, which we call the ``backbone'',
along which a mass is created uniformly. In the backbone, each particle is assigned a measure $\nu$ at its birth. The spatial motion of the particle is an h-transform of Brownian motion, where $h$ is
a potential that depends on $\nu$. $\nu$ represents the particle's contribution to the exit measure. At the particle's death two new particles are born and $\nu$ is passed to the newborns by
fragmentation into two bits. (Joint work with Tom Salisbury.)
Nikolai Dokuchaev, Trent University
Mean-reverting market model: Novikov condition, speculative opportunities, and non-arbitrage
We study arbitrage opportunities and possible speculative opportunities for diffusion mean-reverting market models. We found that the Novikov condition is satisfied for any time interval and for any
set of parameters. It is non-trivial because the appreciation rate has Gaussian distribution converging to a stationary limit. It follows that the mean-reverting model is arbitrage free for any
finite time interval. However, we found that this model still allows some speculative opportunities: a gain for a wide enough set of expected utilities can be achieved for a strategy that does not
require any hypothesis on market parameters and does not use estimation of these parameters.
Bob Griffiths (University of Oxford)
Diffusion processes and coalescent trees.
Diffusion process models for evolution of neutral genes have the coalescent process underlying them. Models are reversible with transition functions having a diagonal expansion in orthogonal
polynomial eigenfunctions of dimension greater than one, extending classical one-dimensional diffusion models with Beta stationary distribution and Jacobi polynomial expansions to models with
Dirichlet or Poisson Dirichlet stationary distributions. Another form of the transition functions is as a mixture depending on the mutant and non-mutant families represented in the leaves of the
infinite- leaf coalescent tree.
Charles Newman (Courant Institute)
Percolation methods for spin glasses
Percolation methods, e.g., those based on the Fortuin-Kasteleyn random cluster representation (of vacant and occupied bonds), have been enormously important in the mathematical analysis of
ferromagnetic Ising models. There exists a Fortuin-Kasteleyn representation for non-ferromagnetic Ising models (including spin glasses) but to date that has not been terribly useful in the
non-ferromagnetic context. We will discuss why this is so and the prospects for this to change in the future. Although our motivation is to study short-range models, we may also describe the
percolation situation in the mean-field Sherrington-Kirkpatrick spin glass. Much of the talk is joint work with Jon Machta and Dan Stein.
Kavita Ramanan (Carnegie Mellon)
Measure-valued Process Limits of Some Stochastic Networks
Markovian representations of certain classes of stochastic networks give rise naturally to measure-valued processes. In the context of two specific examples, we will describe some techniques that
have proved useful in obtaining limit theorems for such processes. In particular, we will discuss the role of certain mappings, which can be viewed as a generalization to the measure-valued setting
of the Skorokhod map that has been used to analyze stochastic networks admitting a finite-dimensional representation. This talk is mainly based on various joint works with Haya Kaspi, Lukasz Kruk,
John Lehoczky and Steven Shreve.
Anna Savu (University of Toronto)
Convergence of a process of Wishart matrices to free Poisson process
Free Poisson process is the free analogue of the classical Poisson process and can be obtained as a limit of a process of Wishart matrices of arbitrarily large size. A large deviation principle for
this convergence is studied. The analogous large deviation principle for the convergence of the Hermitian Brownian motion towards the free Brownian motion has been obtained by P. Biane, M. Capitaine
and A. Guionnet.
David Steinsaltz, Queen's University
Measure-valued dynamical systems, with applications to the evolution of aging
We consider an infinite population, described at any time by a probability distribution on a space of "genotypes", each of which is a subset of a space of possible "mutations". The probability
distribution changes in time according to a mutation rule, which augments the genotypes with more mutations, and a selection rule, which reduces the frequency of genotypes with more deleterious
mutations. The Feynman-Kac formula enables us to write down a closed-form solution to this dynamical system, amenable to various kinds of approximations.
This talk is based on joint work with Steve Evans and Ken Wachter.
Balazs Szegedy (University of Toronto)
Limits of Discrete Structures
Take a family of discrete objects, define a limit notion on them and take the topological closure of the family. We study the discrete objects through the analytic properties of their closure. An
example for this strategy is classical ergodic theory by Furstenberg where the discrete structures are subsets of intervals of the integers and the limit objects are certain group invariant measures.
This theory, in particular, leads to various strengthenings of the famous theorem by Szemeredi on arithmetic progressions. We present analogous theories where the discrete objects are graphs or
hypergraphs. Among the applications we show various results about group invariant random processes. The talk is based on joint works with Gábor Elek and László Lovász.
Benedek Valko (University of Toronto)
t^{1/3} Superdiffusivity of Finite-Range Asymmetric Exclusion Processes on Z
We give bounds on the diffusivity of finite-range asymmetric exclusion processes on Z with non-zero drift. We use the resolvent method to make a direct comparison with the totally asymmetric simple
exclusion process, for which the recent works of Ferrari and Spohn, and Balazs and Seppalainen provide sharp bounds.
Vladimir Vinogradov, Ohio University
On local limit theorems related to Levy-type branching mechanism
We prove local limit theorems for total masses of two branching-diffusing particle systems which converge to discontinuous $(2,d,\beta)$-superprocess. We establish new properties of the total mass
for these superprocesses. Both particle systems are characterized by the same heavy-tailed branching mechanism. One of them starts from a Poisson field, whereas the initial number of particles for
the other system is non-random. The poissonization is related to Gnedenko's method of accompanying infinitely divisible laws. We observe a worse discrepancy between the extinction probabilities than
in the continuous case.
Presentation Requests:
If you wish to schedule a short presentation, please e-mail Balint Virag at balint@math.toronto.edu
Thursday March 15
9:30 coffee
10:10 - 11:00 Kavita Ramanan
11:10 - 12:00 Bob Griffiths
2:30-4:00 Problem/Informal session
4:00-4:30 Coffee
4:30-6:00 Short presentations
4:30-4:55 David Steinsaltz
5:00-5:25 Nikolai Dokuchaev
5:30-5:55 Vladimir Vinogradov
8:00-10:00 Banquet, Bright Pearl Restaurant
(tickets must be reserved in advance)
Friday March 16
9:30 coffee
10:10 - 11:00 Balazs Szegedy
11:10 - 12:00 Chuck Newman
2:30-4:00 Problem/Informal session
4:00-4:30 Coffee
4:30-6:00 Short presentations
4:30-4:55 Dimitris Cheliotis
5:00-5:25 Michelle Boue
5:30-5:55 Deniz Sezer
Saturday March 17
9:30 coffee
10:10 - 11:00 Ted Cox
11:10 - 12:10 Short presentations
11:10-11:35 Anna Savu
11:40-12:05 Benedek Valko
There will be a banquet on Thursday March 15 at the Bright Pearl Restaurant. Please contact gensci@fields.utoronto.ca
or 416-348-9710 ext. 3018 to reserve a ticket @$30.00 each. **All tickets must be picked up and paid for at Fields by Thursday March 15.**
Limits problem
April 3rd 2010, 02:37 AM #1
Hi everyone,
I've been struggling with 2 limits problems that I couldn't solve, and I'm asking for help.
Please help with the attached problems; basically I need to find an example of a function that does NOT follow the rules specified in the question.
Please advise if you can, thanks in advance.
For the first one:
take $f(x)= 1, g(x)= e^{-x}$
$\lim_{x\to\infty} |\frac{1}{e^{-x}}|=\infty$ but $\lim_{x\to\infty}|1-e^{-x}|= 1$.
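(A quick numerical sanity check of this first counter-example in Python, added here for illustration:)

import math

for x in [1, 5, 10, 20]:
    f, g = 1.0, math.exp(-x)
    print(x, abs(f / g), abs(f - g))
# abs(f/g) = e^x grows without bound, while abs(f - g) approaches 1.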
The second one is kind of weird, a limit $\lim_{x\to a}f(x)$ only exists when $\lim_{x\to a^-}f(x)$ and $\lim_{x\to a^+}f(x)$ both exist and are equal.
So if we take $f(x)= 4x$ then $\lim_{x\to 1}f(x)= 4$ and $\lim_{x\to 0}4\cdot \frac{|x|}{x}$ is undefined since $\lim_{x\to 0^+}4\cdot \frac{|x|}{x}=4$ and $\lim_{x\to 0^-}4\cdot \frac{|x|}{x}=-4$.
Is this the kind of counter-example you're looking for?
thanks for your help.
I thought that f(x)=4x is an example of a function that does not follow the rules the question specified. However, when I tried to answer the question using f(x) = 4x, the computer application
claimed it is not the correct example.
I still think that f(x) = 4x satisfies my condition, thanks for your help. Let me know if you have another function that does not follow the rules.
as for the first problem, it is correct...and thank you for your help.
MATH(3) BSD Library Functions Manual MATH(3)
math -- mathematical library functions
#include <math.h>
The header file math.h provides function prototypes and macros for working with floating point values.
Each math.h function is provided in three variants: single, double and extended precision. The single
and double precision variants operate on IEEE-754 single and double precision values, which correspond
to the C types float and double, respectively.
On Intel Macs, the C type long double corresponds to 80-bit IEEE-754 double extended precision. On iOS
devices using ARM processors, long double is mapped to double, as there is no hardware-supported wider format.
Details of the floating point formats can be found via "man float".
Users who need to repeatedly perform the same calculation on a large set of data will probably find
that the vector math library (composed of vMathLib and vForce) yields better performance for their
needs than sequential calls to the libm.
Users who need to perform mathematical operations on complex floating-point numbers should consult the
man pages for the complex portion of the math library, via "man complex".
Each of the functions that use floating-point values are provided in single, double, and extended precision; the double precision prototypes are listed here. The man pages for the individual functions
provide more details on their use, special cases, and prototypes for their single and extended precision versions.
int fpclassify(double)
int isfinite(double)
int isinf(double)
int isnan(double)
int isnormal(double)
int signbit(double)
These function-like macros are used to classify a single floating-point argument.
double copysign(double, double)
double nextafter(double, double)
copysign(x, y) returns the value equal in magnitude to x with the sign of y. nextafter(x, y) returns
the next floating-point number after x in the direction of y. Both are correctly-rounded.
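Python's math module is essentially a thin wrapper over these libm routines, so the semantics are easy to check interactively; a quick sketch (math.nextafter assumes Python 3.9 or later):

import math

# copysign(x, y): magnitude of x, sign of y (note that -0.0 counts as negative).
print(math.copysign(3.0, -0.0))    # -3.0

# nextafter(x, y): the next representable double after x, stepping toward y.
print(math.nextafter(1.0, 2.0))    # 1.0000000000000002
print(math.nextafter(1.0, 0.0))    # 0.9999999999999999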
double nan(const char *tag)
The nan() function returns a quiet NaN, without raising the invalid flag.
double ceil(double)
double floor(double)
double nearbyint(double)
double rint(double)
double round(double)
long int lrint(double)
long int lround(double)
long long int llrint(double)
long long int llround(double)
double trunc(double)
These functions provide various means to round floating-point values to integral values. They are correctly rounded.
double fmod(double, double)
double remainder(double, double)
double remquo(double x, double y, int *)
These return a remainder of the division of x by y with an integral quotient. remquo() additionally
provides access to a few lower bits of the quotient. They are correctly rounded.
double fdim(double, double)
double fmax(double, double)
double fmin(double, double)
fmax(x, y) and fmin(x, y) return the maximum and minimum of x and y, respectively. fdim(x, y) returns
the positive difference of x and y. All are correctly rounded.
double fma(double x, double y, double z)
fma(x, y, z) computes the value (x*y) + z as though without intermediate rounding. It is correctly rounded.
double fabs(double)
double sqrt(double)
double cbrt(double)
double hypot(double, double)
fabs(x), sqrt(x), and cbrt(x) return the absolute value, square root, and cube root of x, respectively.
hypot(x, y) returns sqrt(x*x + y*y). fabs() and sqrt() are correctly rounded.
double exp(double)
double exp2(double)
double __exp10(double)
double expm1(double)
exp(x), exp2(x), __exp10(x), and expm1(x) return e**x, 2**x, 10**x, and e**x - 1, respectively.
double log(double)
double log2(double)
double log10(double)
double log1p(double)
log(x), log2(x), and log10(x) return the natural, base-2, and base-10 logarithms of x, respectively.
log1p(x) returns the natural log of 1+x.
double logb(double)
int ilogb(double)
logb(x) and ilogb(x) return the exponent of x.
double modf(double, double *)
double frexp(double, int *)
modf(x, &y) returns the fractional part of x and stores the integral part in y. frexp(x, &n) returns
the mantissa of x and stores the exponent in n. They are correctly rounded.
double ldexp(double, int)
double scalbn(double, int)
double scalbln(double, long int)
ldexp(x, n), scalbn(x, n), and scalbln(x, n) return x*2**n. They are correctly rounded.
double pow(double, double)
pow(x,y) returns x raised to the power y.
double cos(double)
double sin(double)
double tan(double)
cos(x), sin(x), and tan(x) return the cosine, sine and tangent of x, respectively. Note that x is
interpreted as specifying an angle in radians.
double cosh(double)
double sinh(double)
double tanh(double)
cosh(x), sinh(x), and tanh(x) return the hyperbolic cosine, hyperbolic sine and hyperbolic tangent of
x, respectively.
double acos(double)
double asin(double)
double atan(double)
double atan2(double, double)
acos(x), asin(x), and atan(x) return the inverse cosine, inverse sine and inverse tangent of x, respectively. Note that the result is an angle in radians. atan2(y, x) returns the inverse tangent of y/x
in radians, with sign chosen according to the quadrant of (x,y).
double acosh(double)
double asinh(double)
double atanh(double)
acosh(x), asinh(x), and atanh(x) return the inverse hyperbolic cosine, inverse hyperbolic sine and
inverse hyperbolic tangent of x, respectively.
double tgamma(double)
double lgamma(double)
tgamma(x) and lgamma(x) return the values of the gamma function and its logarithm evaluated at x, respectively.
double j0(double)
double j1(double)
double jn(int, double)
double y0(double)
double y1(double)
double yn(int, double)
j0(x), j1(x), and jn(x) return the values of the zeroth, first, and nth Bessel function of the first
kind evaluated at x, respectively. y0(x), y1(x), and yn(x) return the values of the zeroth, first, and
nth Bessel function of the second kind evaluated at x, respectively.
double erf(double)
double erfc(double)
erf(x) and erfc(x) return the values of the error function and the complementary error function evaluated at x, respectively.
In addition to the functions listed above, math.h defines a number of useful constants, listed below.
CONSTANT VALUE
M_E base of natural logarithm, e
M_LOG2E log2(e)
M_LOG10E log10(e)
M_LN2 ln(2)
M_LN10 ln(10)
M_PI pi
M_PI_2 pi / 2
M_PI_4 pi / 4
M_1_PI 1 / pi
M_2_PI 2 / pi
M_2_SQRTPI 2 / sqrt(pi)
M_SQRT2 sqrt(2)
M_SQRT1_2 sqrt(1/2)
The libm functions declared in math.h provide mathematical library functions in single-, double-, and
extended-precision IEEE-754 floating-point formats on Intel macs, and in single- and double-precision
IEEE-754 floating-point formats on PowerPC macs.
float(3), complex(3)
The <math.h> functions conform to the ISO/IEC 9899:2011 standard.
BSD August 16, 2012 BSD
Turning Word Problems into Equations
Date: 04/15/2002 at 14:14:07
From: Amanda Beasley
Subject: Math
Dear Doctor Math,
Algebra is sometimes hard for me and the thing I hate most about Math
is word problems. I don't like turning word problems into equations
because it is too confusing. Is there a certain technique that might
help me out?
Thanks a lot,
Amanda Beasley
Date: 04/15/2002 at 14:59:47
From: Doctor Ian
Subject: Re: Math
Hi Amanda,
It _can_ be confusing turning problems into equations, but the
interesting thing is that once you've done that, it's usually _less_
confusing to work with the equations than to try to work with the
words directly.
In fact, the whole _point_ of converting things to equations is that
once you know that a certain trick works on a certain kind of
equation, you can use that same trick no matter _what_ kind of story
led to the equation. It doesn't matter if it's a story about mowing
lawns, or the ages of some family members, or a boat going across a
river, or two guys painting a house.
In fact, that's one of the things that makes math so powerful. Because
everyone uses the same representation, if somebody in Berlin comes up
with a trick to help her solve a problem that comes up while trying to
build a bridge, somebody else in San Francisco can use that same trick
to help him solve a problem about loading and unloading ships.
You can find some tips on doing the translations at
Algebraic Sentences
and in the Dr. Math FAQ:
Word Problems
But getting good at this is more a matter of practice than anything
else, and getting enough practice is often just a matter of convincing
yourself that the practice is going to be worth your while.
I hope this helps. Write back if you'd like to talk more
about this, or anything else.
- Doctor Ian, The Math Forum
3rd Grade Math: Division Help
Do you need help with Division Of Fractions in your 3rd Grade Math class?
Do you need help with Division Of Decimals in your 3rd Grade Math class?
Do you need help with Dividend in your 3rd Grade Math class?
Do you need help with Divisor in your 3rd Grade Math class?
Do you need help with Quotient in your 3rd Grade Math class?
Do you need help with Remainder in your 3rd Grade Math class?
Do you need help with Division Of Integers in your 3rd Grade Math class?
Do you need help with Division Of Whole Numbers in your 3rd Grade Math class?
Do you need help with Properties Of Division in your 3rd Grade Math class?
3rd Grade: Division Videos
division video clips for 3rd grade math students.
3rd Grade: Division Worksheets
Free division printable worksheets for 3rd grade math students.
3rd Grade: Division Word Problems
division homework help word problems for 3rd grade math students.
Third Grade: Division Practice Questions
division homework help questions for 3rd grade math students.
Set Tolerance Limits
In this step, you specify the tolerance limits for each tolerance key for each company code.
When processing an invoice, the R/3 System checks each item for variances between the invoice and the purchase order or goods receipt. The different types of variances are defined in tolerance keys.
The system uses the following tolerance keys to check for variances:
• AN: Amount for item without order reference
If you activate the item amount check, the system checks every line item in an invoice with no order reference against the absolute upper limit defined.
• AP: Amount for item with order reference
If you activate the item amount check, the system checks specific line items in an invoice with order reference against the absolute upper limit defined. Which invoice items are checked depends on
how you configure the item amount check.
• BD: Form small differences automatically
The system checks the balance of the invoice against the absolute upper limit defined. If the upper limit is not exceeded, the system automatically creates a posting line called Expense/Income from
Small Differences, making the balance zero and allowing the system to post the document.
• BR: Percentage OPUn variance (IR before GR)
The system calculates the percentage variance between the following ratios: quantity invoiced in order price quantity units : quantity invoiced in order units and quantity ordered in order price
quantity units : quantity ordered in order units. The system compares the variance with the upper and lower percentage tolerance limits.
• BW: Percentage OPUn variance (GR before IR)
The system calculates the percentage variance between the following ratios: quantity invoiced in order price quantity units: quantity invoiced in order units and goods receipt quantity in order price
quantity units : goods receipt quantity in order units. The system compares the variance with the upper and lower percentage limits defined.
• DQ: Exceed amount: quantity variance
If a goods receipt has been defined for an order item and a goods receipt has already been posted, the system multiplies the net order price by (quantity invoiced - (total quantity delivered - total
quantity invoiced)).
If no goods receipt has been defined, the system multiplies the net order price by (quantity invoiced - (quantity ordered - total quantity invoiced)).
The system compares the outcome with the absolute upper and lower limits defined.
This allows relatively high quantity variances for invoice items for small amounts, but only small quantity variances for invoice items for larger amounts.
You can also configure percentage limits for the quantity variance check. In this case, the system calculates the percentage variance from the expected quantity, irrespective of the order price, and
compares the outcome with the percentage limits configured.
The system also carries out a quantity variance check for planned delivery costs. (A small illustrative sketch of the DQ arithmetic appears after the tolerance table below.)
• DW: Quantity variance when GR quantity = zero
If a goods receipt is defined for an order item but none has as yet been posted, the system multiplies the net order price by (quantity invoiced + total quantity invoiced so far).
The system then compares the outcome with the absolute upper tolerance limit defined.
If you have not maintained tolerance key DW for your company code, the system blocks an invoice for which no goods receipt has been posted yet. If you want to prevent this block, then set the
tolerance limits for your company code for tolerance key DW to Do not check.
• KW: Variance from condition value
The system calculates the amount by which each delivery costs item varies from the product of quantity invoiced * planned delivery costs/ planned quantity. It compares the variance with the upper and
lower limits defined (absolute limits and percentage limits).
• LA: Amount of blanket purchase order
The system calculates the sum of the value invoiced so far for the order item and the value of the current invoice and compares it with the value limit of the purchase order. It then compares the
difference with the upper percentage and absolute tolerances defined.
• LD: Blanket purchase order time limit exceeded
The system determines the number of days by which the invoice is outside the planned time interval. If the posting date of the invoice is before the validity period, the system calculates the number
of days between the posting date and the start of the validity period. If the posting date of the invoice is after the validity period, the system calculates the number of days between the posting
date and the end of the validity period. The system compares the number of days with the absolute upper limit defined.
• PP: Price variance
The system determines by how much each invoice item varies from the product of quantity invoiced * order price. It then compares the variance with the upper and lower limits defined (absolute limits
and percentage limits).
When posting a subsequent debit/credit, the system first checks if a price check has been defined for subsequent debits/credits. If so, the system calculates the difference between (value of
subsequent debit/credit + value invoiced so far) / quantity invoiced so far * quantity to be debited/credited and the product of the quantity to be debited/credited * order price and compares this
with the upper and lower tolerance limits (absolute limits and percentage limits).
• PS: Price variance: estimated price
If the price in an order item is marked as an estimated price, for this item, the system calculates the difference between the invoice value and the product of quantity invoiced * order price and
compares the variance with the upper and lower tolerance limits defined (absolute limits and percentage limits).
When posting a subsequent debit/credit, the system first checks whether a price check has been defined for subsequent debits/credits. If so, the system calculates the difference between (value of
subsequent debit/credit + value invoiced so far) / quantity invoiced so far * quantity to be debited/credited and the product quantity to be debited/credited * order price. It then compares the
variance with the upper and lower tolerance limits defined (absolute limits and percentage limits).
• ST: Date variance (value x days)
The system calculates for each item the product of amount * (scheduled delivery date - date invoice entered) and compares this product with the absolute upper limit defined. This allows relatively
high schedule variances for invoice items for small amounts, but only small schedule variances for invoice items for large amounts.
• VP: Moving average price variance
When a stock posting line is created as a result of an invoice item, the system calculates the new moving average price that results from the posting. It compares the percentage variance of the new
moving average price to the old price using the percentage tolerance limits defined.
Variances are allowed within predefined tolerance limits. If a variance exceeds a tolerance limit, however, the system issues a message informing the user. If an upper limit (except with BD and VP)
is exceeded, the invoice is blocked for payment when you post it. You must then release the invoice in a separate step. If the tolerance limit for BD is breached, the system cannot post the invoice.
Note that if you set all limits for a tolerance key to Do not check, the system does not check that tolerance limit. Therefore any variance would be accepted. This does not make sense, particularly
in the case of the tolerance key Form small differences automatically.
Configure the tolerance limits for the individual tolerance keys.
Lower limit Upper limit
Absolute Percentage Absolute Percentage
AN - - X -
AP - - X -
BD X - X -
BR - X - X
BW - X - X
DQ - - X -
DW - - X -
KW X X X X
LA - - X X
LD X - X -
PP X X X X
PS X X X X
ST - - X -
VP - X - X
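To make the DQ arithmetic above concrete, here is a small sketch in Python (the variable names are invented for illustration; this is not SAP code, and it only covers the absolute-limit variant of the check):

def dq_exceed_amount(net_order_price, qty_invoiced, qty_invoiced_so_far,
                     qty_delivered=None, qty_ordered=None):
    # Expected open quantity: what was delivered (or ordered, if no goods
    # receipt is defined for the item) minus what has already been invoiced.
    if qty_delivered is not None:
        expected = qty_delivered - qty_invoiced_so_far
    else:
        expected = qty_ordered - qty_invoiced_so_far
    # Exceed amount = net order price * (quantity invoiced - expected quantity).
    return net_order_price * (qty_invoiced - expected)

def dq_outside_limits(exceed_amount, lower_limit, upper_limit):
    # One plausible reading of the comparison: positive variances are checked
    # against the upper limit, negative ones against the lower limit.
    return exceed_amount > upper_limit or -exceed_amount > lower_limit

# Example: order price 10.00, 8 units invoiced, 5 delivered, none invoiced yet
# -> exceed amount 10 * (8 - 5) = 30.00, compared with the configured limits.
amount = dq_exceed_amount(10.0, 8, 0, qty_delivered=5)
print(amount, dq_outside_limits(amount, lower_limit=25.0, upper_limit=25.0))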
Solve the equation. Identify any extraneous solutions. sqrt a = -8 A. –64 is a solution of the original equation. 64 is an extraneous solution. B. 64 is a solution of the original equation. C. 64 is
a solution of the original equation. –64 is an extraneous solution. D. no solution <<<
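(A quick check with Python's SymPy library, added for illustration: squaring both sides gives a = 64, but sqrt(64) = 8, not -8, so 64 is extraneous and the equation has no real solution.)

import sympy as sp

a = sp.symbols('a', real=True)
# Solve sqrt(a) = -8 over the reals; a principal square root is never negative.
print(sp.solveset(sp.sqrt(a) + 8, a, domain=sp.S.Reals))   # EmptySet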
Physical Chemistry
CLASS CODE: CHEM 461 CREDITS: 3
DIVISION: PHYSICAL SCIENCE & ENGINEERING
DEPARTMENT: CHEMISTRY
GENERAL EDUCATION: This course does not fulfill a General Education requirement.
CATALOG DESCRIPTION: First semester of a course covering the fundamental concepts of physical chemistry. This course provides a theoretical and mathematical description of the physical behavior of chemical systems. (Offered Fall Semester)
DESCRIPTION: First semester of a course covering the fundamental concepts of physical chemistry. This course provides a theoretical and mathematical description of the physical behavior of chemical systems.
TOPICS: States of Matter, Thermodynamics and Equilibria, Kinetics.
OBJECTIVES: *Develop a fundamental understanding of the stated topics.
*Broaden the students' understanding of physical and mathematical sciences by integrating mathematical principles with chemistry- and physics-based concepts.
REQUIREMENTS: Typically 5 unit exams, 10 quizzes, and a final exam.
PREREQUISITES: Completion of Math 113, Math 215 or Math 119 is required. Completion of Math 316 or Math 341 is recommended.
EFFECTIVE DATE: August 2001
Row Index Number Increasing Automatically
I am doing an HLOOKUP on a range (possibly over 100 rows). My formula is working, but my problem is that it takes too much time to manually edit the formula to change the row index number and
increase it by one. For example, the row index number has to change in order to pull in the right data, e.g. 5, 6, 7, 8, etc. up to over 100. Is there a quick way to do this or do I have to manually enter
the row index numbers in over 100 rows?
I am attaching a SAMPLE of data. The actual worksheet is much larger.
Related Forum Messages:
Automatically Number An Index Column
I am working with a group of individuals that will be passing around an
excel spreadsheet to one another, and wanted to come up with a way to have
the 1st column act as an index, with the key component requiring that the
index column would automatically re-number itself, if someone entered a new
A typical value in the first column looks like this: 8-5-012-005
Which in our case means that there are 4 series of number sets, separated by
dashes. So the first set is 8, the second 5, the third 012, and the fourth
005. The available range for the sets are 7 or 8 for the first, 5 through 9
for the second, and 0 through 130, and 0 through 200, respectively.
So the user can pick any of these ranges for when they decide to create a
new record (row).
Here is the way the spreadsheet columns currently look (always sorted by
Row-1 Tract_ID Parcel_ID
Row-2 7-5-065-105 01245787
Row-3 7-5-112-005 01245787
Row-4 8-5-012-005 01245787
Row-5 8-6-030-125 01245787
Now, here is the way I'd like to have the spreadsheet columns look with the
Index_No (can be either Numeric or Text - depending on your recomendations).
The sort order is based on 1st, the index number, then 2nd the Tract_ID:
Row-1 Index_No Tract_ID Parcel_ID
Row-2 1 7-5-065-105 01245787
Row-3 2 7-5-112-005 0126A560
Row-4 3 8-5-012-005 01005147
Row-5 4 8-6-030-125 01000541
Then, let's say the user wants to enter a new value like say, 7-5-105-021.
That value would need to go between Row-1 and Row-2, which, if they just
inserted the value in the row of their choice, would screw up the indexing.
What I need is a way to ALWAYS create an index (automatically), no matter
where they decide to put the value in the spreadsheet, AND it would update
all of the other Indexes as well (very important requirement).
So the end result would be this:
Row-1 Index_No Tract_ID Parcel_ID
Row-2 1 7-5-065-105 01245787
Row-3 2 7-5-105-021 00547419
Row-4 3 7-5-112-005 5126A560
Row-5 4 8-5-012-005 00005147
Row-6 5 8-6-030-125 00001541
Match Index: The Value Yes To Return The Row Number
in one column I am looking up the value Yes to return the row number.
=MATCH("Yes",C:C,0) in this case it returns a 2
I want to use this row number in a sum...
i.e. =sum(b2:b&x) where x is the row number from the formula above, but it just errors out.
Formula To Incrementally ADD Increasing Number Of Columns
I have a worksheet where over time I add columns that need to be added in a "Total" cell.
Above example, the cell being added is 10 columns after the previous one.
What formula can I use to automatically pull the value from every 10th cell starting with M3?
Preferably every 10th cell till a value I determine... i.e every 10th cell but only for the first 15 occurrences.
(Is this anything to do with the Series command?)
Autofill; Copy Down It Doesn’t Automatically Update The Cell References Because It Want To Update Them By Column Number Instead Of Row Number
I have a basic formula =C17+'Asset Depreciation 2008 Onwards'!C24, and I want to copy it down just using the drag function. Problem is that the second reference range of cells are in rows and hence
when I copy it down it doesn’t automatically update the cell references the way I want, because I want to update them by column number instead of row number. I.e. I want it to display =C17+'Asset Depreciation 2008
Onwards'!D24, instead of C25. Do you know if there is any way of telling Excel that I want it to increase the column number by 1 every time, instead of the row number for this part of the formula?
Increasing A Value In A Cell By Adding A New Number To A Connected Cell?
First I would like to say that I am not a native English speaker and not very good at explaining myself, so I hope the title is according to the forum's rules. Now to my problem:
I would like to put in, for instance, the number 100 in cell A and then have the number appear in cell B. I would like to remove the number in cell A without the number in cell B disappearing.
Then add, for instance, 50 in cell A to get the number in cell B to add up to 150, and so on. How can I do this? I would like to add that cell B is already connected to a different cell. And I am using
VLOOKUP -- Automatically Change Column Index
Is there a way to automatically change the column index number in the VLOOKUP formula when copying the formula to columns? For example, when I copy a VLOOKUP formula from column A to column B, the
cell references will change, but the column index remains the same. I'd like the column index to be increased by 1.
Identify Row Number Based On Value In A Cell And Use That Row Number In A Macro
I have Sheet with 40 employees who each proposes their work schedule, so I have to give each Employee access to the same sheet and want highlight and unlock only those cells that specific employee
can use.
Each employee has to login from a drop-down (sourced from Sheet.Employee Master), so their unique Employee Number is in "A13" of Sheet.LOGIN
Can I identify the ROW number and then use that ROW number in a macro to highlight and unlock specific Range of Cells in Sheet.PROPOSED SCHEDULE?
---where "Sheet.LOGIN("A13") = (the value in the cell Col A:"row" of Sheet.PROPOSED SCHEDULE)
I have attached a scaled down version of the Workbook.
Following code is scaled down-- this is for Employee 02 who appears on ROW 16 of the sheet. (macro is same for each employee, just uses a different row)
Get Sheet Index Number
Is there a straightforward way to find a sheetnumber based on the name of that sheet? Eg if I have 10 sheets and the fifth sheet is called "number5", I want to get a 5 based on an expression that
uses the sheetname "number5".
Automatically Fill Formulas In Newly Inserted Row From Row Above
What I would like to do is on a sheet when I insert a new row that it will "FILL" the formulas that are the row above it. For example I have cells A1-F1. On cell A1 there is 1, B1 there is 2...etc.
When I then insert a new row I would like the row below A1-F1 to read. A2 = 2, B2=3 so it had a linear growth. I want to do this with my formulas so whenever someone adds a new line it knows to copy
the formula as well but only in certain cells if possible.
Change The Index Number When Match
I have to do a vlookup in 12 sheets (named ranges)
I use the formula '=vlookup(a1;choose(1;range1;range2; etc);2;0)
In this case I have to change the 'choose index_num' every time to match.
Is it possible to do the lookup without changing the 'choose index_num'?
Do Userform Controls Have An Index Number
The form that I have created has a number of controls that are created at runtime. If a user triggers the event that originally created the controls, I want to be able to delete all of the controls
before recreating them.
Is there a loop that can delete all of the newly created controls? I have 9 controls on the form that need to stay put but everything after those 9 controls I want to delete. Is there something like:
Select Column By Index Number
How can you select a column by it's index? For example, if I do the following, I get an error.
Dim Counter As Integer
Counter = 157
Columns(, Counter).Select
Find Column Number And Then Use Index Function
I have a database with over 100 products listed across the first row.
Column a has a list of over 500 projects. Across each project various columns are marked with a number depending on how many of each products are being used on that project.
For Example
A B C D E etc.
Products --> X Y Z AA
Proj 1 2 3
Proj 2 1 4 5
Proj 3 2 4
I want to be able to create a report for any given product.
The report could look like,
Product Z
Proj 1 3
Proj 3 2
So I need to lookup the product code across row 1 and determine the column number and then INDEX down that column and find all non blank cells and read the project names from column A.
I am familiar with formulas with INDEX and V/H LOOKUP functions. I am not very good with VBA codes.
Index Function With No Row Entry
My formula is: = INDEX (Lastsales,$022,$S$5)
O22 is blank
S5 =1
I am not getting an error message. I am getting data that is in Lastsales in column 1, row 19. What is Excel using for the row since $O22 was a blank?
Vlookup Or Similar With Variable Col Index Number Value
i am looking to create a vlookup but with the ability to easily change the column number index so that i can use different columns.
As an example. In a worksheet i have a table with the names of cars in column A starting row 3. Column B to m Row 1 is headed Jan to dec, row 2 same columns is a country name eg UK. Column N to Y row
1 is Jan to Dec again and row 2 of these columns is a diff country say Germany. This repeats for a few more countries. The data within Row 4 for these columns i.e per car is all prices. The table
therefore shows the prices of cars per country per month.
I then have a seperate worksheet for each country where the cars are again listed in column A and Jan to Dec is in column b to M but the data is hard coded being the number of cars. I would like to
use column N to link to 1 of these months hard coded counts dependant on what month i decide to forecast on. The easy way being that if i wanted to use Jan count number i would link the count for
that car type to =b4 etc. Is there an easy way to allow me to change the link should i decide i want Feb ?
The second question is within each countries worksheet i want to bring into column p the countries related car price for a month i select. It may be that the count number differs from the price i
Return Array Index Number For Specific Date
I looking for a macro which will help to open a file with current week number in name.
The problem is week 1 is starting on 30/03/2008 (finacial year) and ends on 28/03/2009.
I've made two dimensional array (week number, weekday) with all the dates from that period.
I have problem with code to search through the array for given key, return index and write it into variables.
For now my code looks like:
Sub week()
Dim i As Long
Dim j As Long
Dim k As Long
Dim week(51, 6) As String
Automatically Add Formatted Row After Last Row
I am trying to do is have an additional row, or rows entered after after the last available row is filled. I've written a macro that searches for the last data set in a column, and will then copy the
row above and insert, then copy the formating and formulas down, but I can only make this work by having the user click on a control button.
I would like to make this macro work automatically when data is enterted into the last row in the quote form. Below is what my current macro looks like:
Sub AddLine()
    With ActiveSheet
        .Unprotect Password:="*********"
        .Range("m17").End(xlDown).Select
        ActiveCell.Offset(1, 0).Select
        ActiveCell.Offset(-1, 0).Select
        ActiveCell.Offset(1, 0).Select
        Selection.PasteSpecial Paste:=xlPasteFormats, Operation:=xlNone, _
            SkipBlanks:=False, Transpose:=False
        ActiveCell.Offset(1, 0).Select
        .Protect Password:="*********"
    End With
End Sub
Automatically Insert Row When Row Value In Column Changes
I'm unable to find VBA code to insert a blank row when the value in Column L changes. For example if cell L2 = 400 and cell L3=500 I need to insert a blank row between L2 and L3. I need the macro to
search the entire sheet which will have variable numbers of rows but Column L will always have data.
INDEX SMALL ROW Array Function
Please see the attached worksheet for details. I would like the array function to search for instances of the word "FALSE" in column E and return the values of columns A:D when a match is found. I
have done this successfully when the lookup value is a value in the first column of the range, but cannot seem to do so when the lookup value is in the last column of the range. I have received a #NUM! error each time.
Copy Row Based On Color Index
to loop through each row in sheets("Layer Layout") and check if there are any red fonts in its cells. If there is, I need to copy the header ("A1") and the rows containing the red fonts to sheets
SUMIF Formula With Sum_Range Based On Column Index Number
Following is a summarized example of my data and what I am trying to accomplish.
[Column A] contains a list of account numbers. [Column B] contains current balances, [column C] contains balances from one month ago, [column D] contains balances from two months ago. Within the same
spreadsheet I want the ability to type in the account number in one cell and then the column number in another cell. For example, If I type in the account number 1234 and the column number 3, I would
get the balance from [column c]...if I typed in the column number 4, I would get the balance from [column D].
My first thought was to use a simple SUMIF formula that would compare the account number I type with the account numbers found in [column A]. The problem is getting a formula that can translate the
number 3 to [column C] or the number 4 to [column D]. Note: the actual spreadsheet I am using extends out to column BI.
This is simuilar to the Column Index Number used in a VLOOKUP formula.
Index Function (isolate One Number At A Time And Evaluate Usage)
The value that is returned is off by 2 rows everytime. When I evaluate the formula, it shows the correct row just before the indexing function does it's thing.
I have a cell phone bill for 20 or so phones and am trying to isolate one number at a time and evaluate usage. The first sheet is my data, the second is sheet ("Breakdown") is where I enter the
number in A2 that I want to look at. When I do, it misses the first 2 rows and picks up 2 extra from the following phone number.
INDEX SMALL ROW Formula Showing #REF!
I have the formula (found in cell "C2") on the Report sheet. I need to perform a function, but I cannot get it to work on the sheet I need to pull information from. The sheet RecapWk12 has a small
section pasted (with some cells edited for obvious reasons) from the actual workbook. I can get the formula in Report cell (A10) to work on pulling information from sheet2. You can see I am getting
(#REF!) in cell C2.
VLookup/Index (round-down E3 And Find Its Corresponding Row In The Table To The Right)?
I need a formula for F3 that will round-down E3 and find its corresponding row in the table to the right and find its intersection with the coating listed in I3. Does that make sense?
I've tried, to no avail:
Relative Lookup Or Index With Negative Row Values
Below are cell values a1:c6
a 2 1
b 3 3
a 4 5
b 5 3
a 3 7
a 4 2
I want to grab the value from a cell whose position is relative to cell C5 (value= 7).
the value from column B
of the first row ABOVE cell c5
with 'b' in column A.
I presume an index statement might do it, but I am unsure how to search for a row above a reference cell.
TextBox: Show Cell Using ComboBox Index As Row
Forms – Combobox with Lookup function
From an Excel form combobox I can select one number from a list (from column A). Once selected I want the value in the adjacent column D to show in a text box, with the option to change that text box
value, with the change reflected in column D cell.
Increasing A Cell Value By +1 Every Day
I'm just fiddling around with excel at the moment and have a created a cell with a value of 16. It represents the number of days an event has been running for.
I'm not sure how to make it increase by +1 every day without me having to open excel and change the value in the cell.
Increasing Value Of Cell On Save
I have a worksheet that I need a piece of code for, Cell E1 is an amendment number that increases every time a new one is put out,
so they can be tracked.
Every time the sheet is saved the value in cell E1 needs to increase by 1.
Index Or Match Formula: When A Reference Number Is Used - It Populates Cells From A List
I am looking for a formula or something - that when a reference number is used - it populates cells from a list. Attached is a sample spreadsheet - 2 worksheets are being used - 1 is Purchase List
and the 2nd is Fax Commitment. When reference no is filled in on the Fax Commitment sheet and it = the same reference no as on the Purchase List - I need it to populate the appropriate fields (in
this case I have colour coded)
If Any Cell Is (red) Has A Color Index Of 3, Bring The Whole Row To The Top
I have a worksheet with several columns and 1,000's of rows. I have code that makes all "good cells" grey (color index 15) and all "bad cells" red (color index 3).
I would like to do 2 things...
1. If ANY cell is RED, cut the WHOLE ROW and "insert cut cells" below the header row (even if ALL other cells are grey), then repeat the process up the whole worksheet until ANY row with a red cell
is at the top.
2. Create a new worksheet named "Trouble Cells", copy the header row along with any rows with red cells.
I would like to keep the formatting the same (for example, the title row is always yellow and is "28" high and all other rows are a height of "12").
I would also like to keep the column width of each column in the new worksheet as well.
Excel 2002
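A sketch of part 1 only (assumptions: a single header row in row 1, no more than 20 used columns, and red really is ColorIndex 3 as described; part 2 could reuse the same red test while copying rows to a new "Trouble Cells" sheet):
Sub MoveRedRowsToTop()
    Dim ws As Worksheet, cell As Range
    Dim r As Long, lastRow As Long, moved As Long
    Dim hasRed As Boolean
    Set ws = ActiveSheet
    lastRow = ws.Cells(ws.Rows.Count, "A").End(xlUp).Row
    moved = 0
    r = lastRow
    Do While r > 1 + moved 'rows 2..(1+moved) already hold rows we moved
        hasRed = False
        For Each cell In ws.Range(ws.Cells(r, 1), ws.Cells(r, 20))
            If cell.Interior.ColorIndex = 3 Then hasRed = True: Exit For
        Next cell
        If hasRed Then
            ws.Rows(r).Cut
            ws.Rows(2).Insert Shift:=xlDown 'insert cut cells below the header row
            moved = moved + 1 'an unchecked row slid into position r, so do not decrement r
        Else
            r = r - 1
        End If
    Loop
End Sub
Because the loop works from the bottom up and only steps upward when the current row has no red cell, rows keep their relative order and nothing is checked twice.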
Match/Index Formula :: Multiply Last 3 Cells In A Row And Subtract 1
I am having a little trouble with multiplying a few formulas. I am looking for a formula that will multiply the last three cells in a row that contain data and subtract 1. Below is an example of the type of data I am working with and the formula I am trying to use but it is not working. The formula is for the cell highlighted in red. Every quarter the last three cells being referenced will change.
INDEX Or MATCH: List In Row 1, Starting With Column A, Which Colors Have A Value Next To Them
Let's say I have a list in worksheet 1. It's in column A, starts in row 1 and goes down.
In worksheet 2 I want to list in row 1, starting with column A, which colors have a value next to them. I want the list to match the first worksheet's order. I'm looking for a formula solution.
Example 1
WS 1
Red 3
Orange 4
Yellow 5
Blue 1
Index & Match Formula: Multiple Row Criteria
I am trying to get my INDEX & MATCH formula to retreive data from my table.
This is what I can do so far:
=INDEX(table,MATCH(B13,balance),MATCH(C13, date))
But I am trying to get it to get another row to look up as well.
I want it to look up the color then the 100 or 250, then the date.
red100 12
red250 45
blue100 78
blue250 1011
I think I need to insert another MATCH in the row section but can't seem to get it to work.
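One sketch of adding the extra criterion (assuming the colors live in their own named range, here called colors, and that the color to find is typed in A13, while B13 and C13 keep their current roles) is an array formula confirmed with Ctrl+Shift+Enter:
=INDEX(table, MATCH(1, (colors=A13)*(balance=B13), 0), MATCH(C13, date, 0))
The multiplication returns 1 only on the row where both the color and the 100/250 value match. If the table really stores combined labels like "red100", the alternative is a plain MATCH(A13&B13, labels, 0) against that column.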
Increasing Value Of One Cell Based On Another Incremental Value
I have cell F15 which is blank by default, and cell D14 which pulls a value from another sheet (D14's value is =Info!X20). For D14's properties I have it set to show thirds (custom number format "# ?/3").
I want to make D14 increase by 1/3 for every increment of 60 that F15 contains. For example, let's say D14 is 12. If F15 is 59, it won't change. If it's 60, D14 will be 12 1/3, and if it's 180, it'll be 13. I think I'm close, but just can't quite get it.
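A sketch of a formula for D14 that should behave this way (it simply wraps your existing link to Info!X20, so it replaces the current =Info!X20 entry):
=Info!X20+INT(F15/60)/3
INT(F15/60) counts the complete increments of 60 sitting in F15 and each one contributes 1/3, so with the "# ?/3" format a value of 59 changes nothing, 60 shows 12 1/3 and 180 shows 13, matching the examples.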
Macro To Print Rows That Keep Increasing?
I'm searching for a macro that will allow me to print rows that are constantly changing in number. Attached is a sample of my workbook. The workbook has worksheets by month. There are data from five
sales people on each sheet so each sales person has his own section. I have a print button within each section so he can print only his section of the page. No problem creating that macro.
However, at least twice a week the sales people are adding rows or moving a row of data from one month to another month, so the print area is constantly changing.
Index And Match Formula: Return The Correct Serial Number Based On Both The Matching
I'm trying to create a formula in cell f13 of my attached spreadsheet "Sample 1" that will search the 2nd attached spreadsheet "Sample 2" and return the correct serial number based on both the
matching PO # (located in cell E10 on Sample Sheet 1 and in Column 5 on Sample Sheet 2) and Product # (cell A13 on my Sample Sheet 1). My current formula is not returning the correct result and I'm
not sure why.
Macro To Work On Increasing/Descreasing Data
I have a spreadsheet with a worksheet for each month, so as a new month begins I add a new worksheet using a macro
Each worksheet has 5 columns:
A = Vendor, B = Date, C = Debits, D = Credits, E = Balance
Row 30 contains the totals for columns C, D, & E, cell A30 contains the text Totals
The problem I have is occasionally extra rows are added so the totals may not be in row 30.
Is there any way that the macro can be changed so that it looks for the word Total in column A and then reads the contents of the corresponding cell E? to transfer that total to cell E2 on the next month's sheet.
Increasing/decreasing; Rate Of Decrease - Spreadsheet
I’m trying to make this:
Amount of money: 1000 (changeable)
Beginning # of units: 0,1 (changeable)
Delta: 200 (changeable)
Rate of decrease: 50% (changeable)
0,1 unit should be added for every 200$ increase (1000+200=1200; 1200+200=1400; 1400+500=1900; 1900+256=2156 etc)
0,1 unit should be taken away after decreasing by 100$ (because the rate of decrease = 50% (changeable))
After increasing of 1000 + 200, quantity of units should also increase for 0,1, so it should be 0,2 (1200), 0,3(1400), 0,4(1600)….. Amount of money can increase not only by 200, but for any sum. If
that sum is (for example) 500, we should increase by 0,2 etc.
But when this sum is decreasing, we use the rate of decrease (for example 50%, (changeable)):
0,4 units – 1600
0,3 units – 1500
0,2 units – 1400
0,1 unit – 1300
Chart Based On Changing/Increasing Data
I'm facing a charting problem and i can not find any solution with a search here, anyway this is the problem: When I choose a week I would like to see the results of the 5 previous week also.
Listbox Increasing In Size As List Grows
I have a listbox on a worksheet which is linked to a named range; the named range is the result of a database query. Now the problem I have is every time the query is refreshed the listbox expands in size. Is there any way to stop the listbox from growing, thus 'locking' the size?
Place In Sequence Increasing The Numbers Of One I Creak In A Cell
I would like to place in sequence increasing the numbers of one I creak in a cell.
In the formula I determine it I creak and the corresponding frame number to the placed ones.
The problem is that he is accumulated only the greater and not sequencia it.
Function ordenar2(Myrange As Range, num As Integer) As String
Dim Myorder As Double
Dim X2 As String
Dim n As Integer
n = 1
Do While n
Formula To Decrease A Margin, In Connection With Increasing Basic Value
I'm trying to build a formula to form a price-list. I have some basic prices from a supplier and want to build my prices with a simple rule: the higher the basic price is (column A), the lower my
profit margin (in %) should be (column B). Example:
Basic value is $50, my price is $75 (50% margin)
Basic value is $100, my price is $130 (30% margin)
Basic value is $150, my price is $172,5 (15% margin)
And so on...
I forgot most of what I've learned on Excel at my university (long time ago...), so I tried to do it by using simple thresholds, with "if" function:
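The IF attempt did not make it into the post, but one low-maintenance way to express "higher basic price, lower margin" (a sketch - the table location and break-points below are only examples) is to keep the margin schedule in a small two-column table sorted by its first column, e.g. 0 and 50% in D2:E2, 100 and 30% in D3:E3, 150 and 15% in D4:E4, and compute the price with an approximate-match lookup:
=A2*(1+VLOOKUP(A2,$D$2:$E$4,2,TRUE))
Each basic value then picks up the margin of the highest threshold it reaches (50 gives 75, 100 gives 130, 150 gives 172,5), and adding or moving a break-point only means editing the little table instead of nesting more IFs.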
Create Drop-Down List Using Increasing/Decreasing Range
I have a list of jobs being displayed using the following code. All sheet names that start with AJ, CJ and PJ within the workbook are how the list is created.
Sub ListSheets()
Dim sht As Worksheet
Dim lRow As Long
Dim rCell As Range
With Sheet1
Set rCell = .Cells(2, 12)
End With
For Each sht In ActiveWorkbook.Worksheets
Select Case UCase(Left(sht.Name, 2))
Case Is = "AJ", "CJ", "PJ"
lRow = lRow + 1
rCell(lRow, 1) = sht.Name
Case Else
End Select
Next sht
End Sub
What I want to do is create a drop-down list within each job sheet within the workbook that will display the names of the jobs above. Now the thing is I cannot choose the range like normal from Data - Validation - List as I will not know how many job names will be displayed, so I don't know how many cells to include in the range.
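A sketch of one way around the unknown length (it assumes the macro above keeps writing the job names into column L of Sheet1 starting at L2, which is what Cells(2, 12) points at, and that nothing else occupies column L): define a workbook-level name such as JobList with the formula
=OFFSET(Sheet1!$L$2,0,0,COUNTA(Sheet1!$L:$L),1)
and then set Data - Validation - List on each job sheet to =JobList. Because COUNTA counts the names actually present, the drop-down grows or shrinks with the list; if column L ever gains a header or other entries, subtract them inside the COUNTA.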
Calculate Future Value Of Monthly Recurring, Annually Increasing Payments
follows in paragraph 5 - but first, background!
I have a specific formula (received courtesy of some clever person here at Ozgrid (thanks!)) which I use to calculate the Future Value of a series of future payments that increase at a fixed annual
rate and earn interest at a fixed rate.
Here it is: =Pmt1* SUMPRODUCT((1+Increase_in_payment)^(ROW( OFFSET($A$1,0,0,Term,1))-1),(1+Return_on_investment)^(Term-ROW(OFFSET($A$1,0,0,Term,1))+1))
(Example: $1000 per annum (Pmt1) is invested for 20 years (Term). The interest earned on the $1000 is 10% per annum (Return_on_investment). The $1000 increases by 5% (Increase_in_payment) each year -
i.e. 19 increases - answer: $89,632 (rounded))
This formula assumes that the payment is made at the beginning of the period.
Question: I would like to change the formula to use MONTHLY payments made in advance, and interest earned on a monthly basis.
Because I REALLY do not know what the formula does, maybe I could ask for a detailed explanation thereof - maybe even from the person who supplied it to me (I cannot see who did!) - and then I can
start fiddling with it myself if answers do not come.
Two previous posts of mine that dealt with somewhat different issues on the same formula are:
Determine Present Value From Future Value
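Since the request was for an explanation, here is what the formula is doing, piece by piece. ROW(OFFSET($A$1,0,0,Term,1)) just generates the series 1, 2, ..., Term. For year k, the factor (1+Increase_in_payment)^(k-1) is the payment actually made at the start of that year (Pmt1 after k-1 annual increases), and the factor (1+Return_on_investment)^(Term-k+1) compounds that payment for the Term-k+1 years it remains invested (payments in advance, hence the extra year). SUMPRODUCT multiplies the two factors year by year and adds everything up, which reproduces the $89,632 in the example. A cautious sketch of a monthly variant - a starting point to fiddle with, not a finished answer - is to work in months: use Term*12 as the OFFSET height, replace Return_on_investment by its monthly equivalent (1+Return_on_investment)^(1/12)-1, and, because the payment only steps up once a year, raise (1+Increase_in_payment) to INT((k-1)/12) instead of k-1, where k is the month number coming out of the ROW(OFFSET(...)) part.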
Changing A Number Automatically
In the following sheet I'm tracking daily numbers against a monthly total. In cell E4, for example, I have a minimum-per-day number which is based on the monthly goal divided by the number of days available to work. What I would like to be able to do is have the number auto-adjust if a letter (i.e. V=Vacation, S=Sick, etc...) is used in place of a number on any given day.
Find Cell Value Row Number & Use For Column Number
I want to update these values via a form in this sheet. I can find the correct row to be edited by entering a value from column A and B. The problem is if I want to display the values of that row first and then change it. If I want to change row 10 data, how can I bring back the value in ROW 3 AND THE COLUMN VALUE? The next step would be to do the actual update if I want to change ROW 10 to "Ooi" and a sales value of 200?
This is what I have done so far:
Dim myRows As Integer
With Sheets("Mrt")
    'Retrieve history information for the matching row
    For myRows = 4 To 49
        If comboxDay.Text = .Range("A" & myRows).Value And textboxdescription.Text = .Range("B" & myRows).Value Then
            textboxbedrag.Text = .Range("C" & myRows).Value
            chkBTW_Ja.Value = .Range("D" & myRows).Value
            txtNota.Text = .Range("S" & myRows).Value
            Exit For 'stop at the first matching row
        End If
    Next myRows
End With
A picture is attached to show how the sheet looks.
Automatically Go Into New Row
In sheet1, create a template so new data can be added into sheet2 in a new row. I made an example with adding last and first name, years, data, and job. The cell "Number" should hold the next row in sheet2 where the data is added. The input template doesn't need to be in one row - it's better to make it several rows - but on sheet2 the data should be put into one row. Also, in sheet2 the data is not in one row; note there is a column with the difference of the data.
|
{"url":"http://excel.bigresource.com/Row-Index-Number-Increasing-Automatically-zustByrN.html","timestamp":"2014-04-20T11:26:45Z","content_type":null,"content_length":"76515","record_id":"<urn:uuid:dcb35a14-d3fb-458b-aea4-2c9c72ac6494>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Modeling and solution environments for MPEC: GAMS & MATLAB
- Computational Optimization and Applications , 1998
"... Several new interfaces have recently been developed requiring PATH to solve a mixed complementarity problem. To overcome the necessity of maintaining a different version of PATH for each
interface, the code was reorganized using object-oriented design techniques. At the same time, robustness issues ..."
Cited by 48 (17 self)
Add to MetaCart
Several new interfaces have recently been developed requiring PATH to solve a mixed complementarity problem. To overcome the necessity of maintaining a different version of PATH for each interface,
the code was reorganized using object-oriented design techniques. At the same time, robustness issues were considered and enhancements made to the algorithm. In this paper, we document the external
interfaces to the PATH code and describe some of the new utilities using PATH. We then discuss the enhancements made and compare the results obtained from PATH 2.9 to the new version. 1 Introduction
The PATH solver [12] for mixed complementarity problems (MCPs) was introduced in 1995 and has since become the standard against which new MCP solvers are compared. However, the main user group for
PATH continues to be economists using the MPSGE preprocessor [36]. While developing the new PATH implementation, we had two goals: to make the solver accessible to a broad audience and to improve the
"... . We describe a technique for generating a special class, called QPEC, of mathematical programs with equilibrium constraints, MPEC. A QPEC is a quadratic MPEC, that is an optimization problem
whose objective function is quadratic, first-level constraints are linear, and second-level (equilibrium) co ..."
Cited by 20 (5 self)
Add to MetaCart
. We describe a technique for generating a special class, called QPEC, of mathematical programs with equilibrium constraints, MPEC. A QPEC is a quadratic MPEC, that is an optimization problem whose
objective function is quadratic, first-level constraints are linear, and second-level (equilibrium) constraints are given by a parametric affine variational inequality or one of its specialisations.
The generator, written in MATLAB, allows the user to control different properties of the QPEC and its solution. Options include the proportion of degenerate constraints in both the first and second
level, ill-conditioning, convexity of the objective, monotonicity and symmetry of the second-level problem, and so on. We believe these properties may substantially effect efficiency of existing
methods for MPEC, and illustrate this numerically by applying several methods to generator test problems. Documentation and relevant codes can be found by visiting http://www.maths.mu.OZ.AU/~danny/
- Department of Mathematics and Computer Science, University of Dundee, Dundee , 2002
"... This paper describes numerical experience with solving MPECs as NLPs on a large collection of test problems. The key idea is to use off-the-shelf NLP solvers to tackle large instances of MPECs.
It is shown that SQP methods are very well suited to solving MPECs and at present outperform Interior Poin ..."
Cited by 19 (1 self)
Add to MetaCart
This paper describes numerical experience with solving MPECs as NLPs on a large collection of test problems. The key idea is to use off-the-shelf NLP solvers to tackle large instances of MPECs. It is
shown that SQP methods are very well suited to solving MPECs and at present outperform Interior Point solvers both in terms of speed and reliability. All NLP solvers also compare very favourably to
special MPEC solvers on tests published in the literature.
- Journal of Economic Dynamics and Control , 1998
"... A fundamental mathematical problem is to find a solution to a square system of nonlinear equations. There are many methods to approach this problem, the most famous of which is Newton's method.
In this paper, we describe a generalization of this problem, the complementarity problem. We show how such ..."
Cited by 17 (6 self)
Add to MetaCart
A fundamental mathematical problem is to find a solution to a square system of nonlinear equations. There are many methods to approach this problem, the most famous of which is Newton's method. In
this paper, we describe a generalization of this problem, the complementarity problem. We show how such problems are modeled within the GAMS modeling language and provide details about the PATH
solver, a generalization of Newton's method, for finding a solution. While the modeling format is applicable in many disciplines, we draw the examples in this paper from an economic background.
Finally, some extensions of the modeling format and the solver are described. Keywords: Complementarity problems, variational inequalities, algorithms AMS Classification: 90C33,65K10 This paper is an
extended version of a talk presented at CEFES '98 (Computation in Economics, Finance and Engineering: Economic Systems) in Cambridge, England in July 1998 This material is based on research supported
by Nationa...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3145417","timestamp":"2014-04-21T03:08:07Z","content_type":null,"content_length":"21636","record_id":"<urn:uuid:c3a9cd44-6678-44f5-abbd-73ee28623f4e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Two-state systems: MASERs
The Feynman lectures on physics are special for many reasons.
Feynman delineates the big picture rather clearly; is able to be very concise about various points that make many other authors excessively talkative; isn't afraid to address questions that some
people incorrectly consider a domain of philosophers even though they have become a domain of physics many years ago; isn't afraid to clarify many widespread misconceptions and explain why e.g.
Einstein was wrong about quantum mechanics; offers some original cute research which may have had pedagogical motivations but that is more important than that e.g. the derivation of general
relativity from a consistent completion of the coupling of the stress-energy tensor to a new spin-2 field (it was a really beautiful, pragmatic, modern, and string-theory-like approach to general
relativity); and for many other reasons.
Two-state systems are among the topics in quantum mechanics to which Feynman dedicated much more attention than most other textbooks of quantum mechanics do, which is one reason why Feynman's students were much more likely to understand the foundations of quantum mechanics properly.
On this blog, two-state systems have played a central role in many older articles such as the
introduction to quantum computation
(a qubit is a very important two-state system);
two-fermion double-well system
(a ramification of the Brian Cox telekinetic diamond insanity), and several others.
There are hundreds of important examples of two-state systems. The underlying maths governing their evolution is isomorphic in all these cases but we still tend to think about the individual examples
"differently" because we have different intuitions about the different system. The electron's spin is one example; the ammonia molecule is another. The mathematical isomorphism between the two
systems is a fact; nevertheless, people tend to incorrectly assume that the mathematics of the ammonia molecule we will focus on is much more "classical" than the example of the electron's spin.
Well, it's not. All states in the Universe obey the same laws of quantum mechanics.
Looking at Feynman's chapter on MASERs
The third, final volume of the Feynman lectures on physics is mostly dedicated to quantum mechanics. There are many chapters on two-state systems. Chapter 9 of Volume III is about MASERs – cousins of
LASERs that use a particular transition between two states of the ammonia molecule.
Ammonia, \(NH_3\), is a stinky gas excreted in urine. It escapes from the liquid which is why you could smell it on most toilets, especially in the third world and the Soviet Bloc during socialism. ;-) I apologize to the French and others but Germans have the cleanest toilets. But we want to discuss a closely related issue, namely the states of the ammonia molecule. :-)
The model above shows us that the molecule looks like a pyramid with a triangle of hydrogen atoms at the base and a nitrogen atom at the top of the pyramid (above the center of the triangle). All the
nuclei may move relatively to each other, in principle. However, the relative motion that would change the distances between the four atoms looks like a collection of harmonic oscillators with pretty
high frequencies i.e. energies. That's why if we only look at low-lying energy states (at most a tiny fraction of an electronvolt above the minimum possible energy), we are restricted to the ground
states of the harmonic oscillators, to the lowest-lying state in which the distances between the four nuclei are fixed at values that minimize the energy with this accuracy. The vibrations changing
the distances are thus forbidden. The wave functions for the electrons are completely determined by the low-energy constraints.
How many states of the ammonia molecule are there?
Well, even if the lengths of the six edges of the pyramid are determined, we may do something with the pyramid: we may rotate it. If you define the vector \(\vec d\) pointing from the nitrogen atom
to the center of the hydrogen triangle, \(\vec d\) may be any direction on the two-sphere \(S^2\). I've chosen the direction of the arrow in the opposite way than what you may find natural because \(\vec d\) is really proportional to the dipole moment. Note that the hydrogens are "somewhat more positive" (much like they tend to lose the electrons elsewhere: compare the \(H^+\) or \(H_3 O^+\)
and \((OH)^-\) ions in water) while the nitrogen is more "negative", like in \(N^- H_3^+\), so the dipole moment has to start from the negative nitrogen.
For each direction on the sphere, there is a distinct quantum state orthogonal to others. Also, the orientation of the hydrogen triangle within its plane may change (rotate) and is given by an
arbitrary angle defined modulo 120 degrees (because the rotation by 120 degrees maps the triangle to itself). The low-lying states will pick a superposition of these states (triangles rotated by \(\gamma\)) that minimize the energy, pretty much the "symmetric combination" of all directions.
So the vector \(\vec d\) is the only "light degree of freedom" we are allowed to vary without raising the energy too much.
In classical physics, you could take \(\vec d\) to point along any axis, e.g. the negative \(z\) semi-axis ("nitrogen is above the triangle"). If the nitrogen had the same properties as the classical
plastic model, the value of \(\vec d\) would be conserved. We could just forget about all the states with different directions of \(\vec d\); we could consistently demand that \(\vec d\) has a
preferred direction.
Let's try to impose as similar a condition as we can in quantum mechanics, too. Quantum mechanics allows \(\vec d\) to change arbitrarily (the dipole moment isn't conserved) so the true energy
eigenstates would be some spherical harmonics that depend on \(\vec d\). However, the transitions to different values of \(\vec d\) are unusually slow and we may forget about them.
However, there is one transition that is actually fast enough: the transition from \(+\vec d\) to \(-\vec d\). We can't neglect it at all. Why does it exist? You may explain it in many ways but one of
them is the "quantum tunneling". It's just possible for the nitrogen atom to be pushed through the triangle and appear on the opposite side of the pyramid. The ammonia molecule is exactly the kind of
a microscopic object where things like quantum tunneling are inevitable.
So there's a significant probability that an ammonia molecule starting with a direction of \(+\vec d\) ends up with the opposite direction, \[
\vec d\to -\vec d.
\] Note that the position of the center of mass is conserved. You can't "ban" this process which is fast enough (the frequency is high enough for the purposes we consider but low enough to be
compatible with our low-energy constraints). That's why you can't study the state of the ammonia molecule with a given value of \(\vec d\) only. You must study the states with \(\vec d\) and \(-\vec d\) at the same moment.
The states "nitrogen above the triangle" and "nitrogen below the triangle" will be called \(\ket 1\) and \(\ket 2\), respectively. The picture above should really have the same orientation of the
hydrogen triangle in both states, if you wanted it to be really natural, but it's a detail because I said that the relevant states are averaged over the orientation of the triangle, anyway.
At any rate, it is legitimate to require that the dipole moment \(\vec d\) lies in a particular line – the transitions to non-parallel values of the dipole moments are so slow that they can be
neglected. However, the flipping of the sign of \(\vec d\) is something that simply cannot be neglected. It's possible, allowed, inevitable, and it's a vital reason why the MASERs work at all.
You see that this is already a big deviation from classical physics in which there's no tunneling effect. In classical physics, \(\vec d\) seems to be conserved. Some people could argue that the
ammonia molecule is already pretty large so classical physics should more or less apply. They would be wrong. Classical physics never applies exactly. Any physical system in this Universe, regardless
of the size, obeys the laws of quantum mechanics. Classical physics may sometimes be a good enough approximation but it's never precise and it's always wrong at least for some questions when we talk
about small molecules.
We have restricted our attention to a two-state system. The different shapes of the molecule (different lengths of edges) were banned because the change of the geometry (or excited states of the
electrons) would raise the energy too much; different directions of the dipole moment than \(\vec d\) and \(-\vec d\) where \(\vec d\) is the (measured) initial value may be ignored because the
transition to non-parallel values of \(\vec d\) is too slow.
We know that the state vector will belong to a two-dimensional Hilbert space. The relevant Hamiltonian is\[
\hat H = \pmatrix{ E_0 & A\\ A& E_0 }.
\] The off-diagonal matrix elements \(A\) are due to the quantum tunneling. In classical physics, we would have to have \(A=0\) but that's definitely not the case in quantum physics. By redefining
the phase of \(\ket 1\) and \(\ket 2\) (only the relative phase matters), we could change the phase of \(A\) and choose a basis in which \(A\) is real and positive. Note that in general, \(\hat H\)
would be Hermitian so the upper-right matrix element would be the complex conjugate of the lower-left matrix element.
The two diagonal entries are the "best approximations for the energy" of the pyramids in which we neglect the tunneling. These matrix elements are equal to each other due to the rotational symmetry
of the laws of physics. After all, \(\ket 2\) may be obtained from \(\ket 1\) by a rotation and because rotations are symmetries, they don't change the expectation value of the energy. It has to be
the same for both states, \(E_0\).
However, \(E_0\) isn't an eigenvalue of the energy. Instead, to find the eigenvalues of the energy, we have to diagonalize \(\hat H\). We find out that the eigenstates are\[
\frac{\ket 1- \ket 2}{\sqrt{2}}, \quad
\frac{\ket 1+ \ket 2}{\sqrt{2}}.
\] I divided the sum and difference by \(\sqrt{2}\) to make the states normalized but I didn't really have to do that. Again, note that the phases (and, if you choose this convention, general
normalization factors) of the eigenstates are undetermined. In the column notation, the eigenstates are\[
\pmatrix{ +\frac{1}{\sqrt 2} \\ -\frac{1}{\sqrt{2}} },\quad
\pmatrix{ +\frac{1}{\sqrt 2} \\ +\frac{1}{\sqrt{2}} }.
\] If you multiply the matrix \(\hat H\) by these vectors, you will get multiples of themselves. The coefficients in the multiples are the energy eigenvalues:\[
E_I = E_0 - A, \quad E_{II} = E_0 + A.
\] Now you see why I chose the first sign to be minus – I wanted the first eigenvalue to be the lower one. In other words, I wanted the first eigenstate to be the genuine ground state. The Roman
numerals were picked to label the eigenvalues from the lowest one to the highest one.
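As a quick check of the multiplication mentioned above (nothing new, just the arithmetic spelled out):\[
\pmatrix{ E_0 & A\\ A& E_0 } \pmatrix{ +\frac{1}{\sqrt 2} \\ -\frac{1}{\sqrt{2}} } = \pmatrix{ \frac{E_0-A}{\sqrt 2} \\ \frac{A-E_0}{\sqrt 2} } = (E_0-A)\pmatrix{ +\frac{1}{\sqrt 2} \\ -\frac{1}{\sqrt{2}} },
\] and the same computation with the plus-sign eigenvector reproduces the factor \(E_0+A\).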
So if you really cool down the molecule, it won't sit in the shape of a particular pyramid. It just can't sit there because there exists quantum tunneling, even at vanishing temperature \(T=0\,{\rm K}\). It is an inevitable process of quantum mechanics. Instead, the molecule will ultimately emit photons and drop to the lowest-energy state which means the ground state, the energy eigenstate with
the lowest energy, and it has the same probability amplitude to be in the "pyramid up" and "pyramid down" states.
Because many people make a mistake, let me emphasize one more thing. The energy eigenstates \[
\ket{I,II} = \frac{\ket 1 \mp \ket 2}{\sqrt 2}
\] are not just some dull "statistical mixtures" that have a 50% probability to be in the state \(\ket 1\) and 50% probability to be in the state \(\ket 2\) (pyramid up or down). Instead, the
relative phase between the states \(\ket 1\) and \(\ket 2\) is absolutely crucial for the physical properties of the linear superposition state.
In particular, if the relative phase is anything else than \(+1\) or \(-1\), the superposition fails to be an energy eigenstate. It is only an energy eigenstate if the relative phase is real and when
it's real, it's damn important whether the relative sign is \(+1\) or \(-1\). The negative relative sign gives us the lower-energy state – the quantum tunneling has the effect of maximally lowering
the energy from the "intermediate" level \(E_0\) down to \(E_0-A\), while the positive relative sign does exactly the opposite: it raises the energy to \(E_0+A\).
Almost all the anti-quantum zealots would try to talk about a preferred basis and because they're obsessed with position eigenstates, they would probably allow \(\ket 1\) and \(\ket 2\) to be the
only states that may be "truly" realized. But this is of course completely wrong because even if you start with \(\ket 1\), you can't have \(\ket 1\) forever. Because of the quantum tunneling, the
initial state \(\ket 1\) inevitably evolves into general linear superpositions of \(\ket 1\) and \(\ket 2\): it oscillates between \(\ket 1\) and \(\ket 2\).
As emphasized repeatedly on this blog, there's no way to ban general linear superpositions. The states in a given basis inevitably evolve into general complex combinations of such basis vectors. And
all the complex coefficients, including the relative phases – and especially the relative phases – are absolutely critical. Only a basis of energy eigenstates is completely "sustainable": each energy
eigenstate only evolves into a multiple of itself (well, it evolves into itself with a different phase).
And it is not true that the relevant energy eigenstates are always "equal mixtures" of \(\ket 1\) and \(\ket 2\). For example, in the case of our molecule, things change e.g. when we add the electric field.
Adding electric fields: MASER
The surrounding electric field \(\vec E\) doesn't change anything about the existence of the tunneling. However, it will add the interaction energies between the electric dipole and the electric
field so that the Hamiltonian is\[
\hat H = \pmatrix{ E_0+d E & A\\ A& E_0 - dE }.
\] The expectation value of energy of \(\ket 1\) i.e. the upper-left matrix element (associated with the pyramid pointing up, the dipole is down) was increased by the product of \(|\vec d|\) and \(|\vec E|\) because the two vectors are pointing in the opposite direction; the expectation value of energy of \(\ket 2\) was lowered for the same reason.
Again, this matrix (the Hamiltonian) may be diagonalized. In this case of a real matrix (it was made real by our having redefined the phases of the basis vectors), the coordinates of the eigenvectors
are still real. But it is no longer the case that the absolute values of both coordinates are equal. They're rather general. The energy eigenvalues are\[
E_I = E_0 - \sqrt{A^2+ d^2 E^2},\quad
E_{II} = E_0 + \sqrt{A^2+ d^2 E^2}.
\] Note that those formulae reduce to \(E_0\mp A\) for \(E=0\). Also, they reduce to \(E_0\mp dE\) for \(dE\gg A\). Note that if you drew graphs of \(E_0\mp dE\) as a function of the electric field \(E\), you would get two straight lines that intersect each other. However, when you draw the graphs of the exact results \(E_I,E_{II}\) written in the displayed equation above, the two curves never
cross. One of them never drops below \(E_0+A\) and the other one never jumps above \(E_0-A\) so they obviously cannot cross. In fact, the right upper arm of the curve (which is clearly a hyperbola)
"bends" and continuously connects to the left upper arm of the curve; the same thing holds for the lower curve. See
Avoided crossing
(= "repulsion of eigenvalues" or "level repulsion") at Wikipedia.
A few final steps are needed to explain why such molecules are able to emit and absorb some radiation whose frequency is \(f\) where\[
hf = E_{II} - E_{I} \sim 2A + \frac{d^2 E^2}{A}
\] where the final form of the expression is a Taylor expansion of the square root that is OK for all realistic (small enough) values of the electric field \(E\). No, you won't be really able to
produce \(dE\gg A\) in the lab: these would be too strong electric fields.
For example, start with the absorption. Place the ammonia molecule to a variable field \(\vec E\) which has the right frequency \(f\sim 2A/h\). In the \(\vec E=0\) energy eigenstate basis which is
composed of the vectors \(\ket 1\mp \ket 2\), the Hamiltonian was diagonal (that's what the energy eigenstates mean). When you add the electric field \(\vec E\) going like \(\cos (2\pi f t)\), it
will contribute some off-diagonal elements in this basis and when the frequency is right, the resonance will be able to "accumulate" the probability amplitude of \(\ket{II}\) even if the initial
state is \(\ket{I}\). If you choose a wrong frequency, the amplitude of the state to be in \(\ket{II}\) will receive contributions with different phases each cycle and they will cancel out after a while.
So the ammonia molecule is able to increase its energy by capturing some energy from electromagnetic waves at the right frequencies. That's absorption. There's also stimulated emission, the
time-reversed process to the absorption. If the molecule is already mostly at the higher level \(\ket{II}\), the electric field oscillating at the right frequency will encourage the molecule to drop
to \(\ket{I}\) and deposit the energy difference to the electromagnetic waves.
In the text above, the electric field \(\vec E\) was assumed to be large enough and treated as a classical background that only affects the Hamiltonian of the molecule via "classical parameters".
However, if the electric field is weak enough, it becomes important that \(\vec E\) is a quantum observable as well. The energy carried by the electromagnetic waves of frequency \(f\) is no longer
continuous – it is quantized i.e. forced to be a multiple of the photon's energy \(E_\gamma = hf\).
When you calculate the transitions properly, you will find out that there's actually a nonzero probability amplitude for the ammonia molecule in the excited state \(\ket{II}\) to emit a photon even
if there's no oscillating electric field around the molecule to start with. This is the spontaneous emission. A proper look at the "time reversal argument" is enough to see why the spontaneous
emission has to exist. Even one last photon may be absorbed (absorption is always "stimulated") and we end up with zero photons; the time-reversed process therefore has to start with zero photons and
end with one photon and its "invariant" probability amplitude has to be the same i.e. nonzero. The ability of systems to emit even if the initial number of photons is zero is called "spontaneous
emission"; the total "stimulated plus spontaneous emission" has the probability proportional to \(N_\gamma+1\) – it's the squared matrix element of a harmonic oscillator's position matrix element.
However, for MASERs, the stimulated emission is more important because the intensity of the electromagnetic waves is high – "stimulated" is what the letter "S" in "MASER" (or "LASER") stands for. I
recommend you e.g. Feynman's treatment in the Volume III of his lectures for sketched calculations of the formulae for the transition rates, why there is a resonance, what is the width of the curve,
and so on.
My main goal was more specific here: to convince you that the superpositions of "classically intuitive" states are absolutely inevitable and natural. This wisdom holds for all systems in quantum
mechanics, including many-level and many-body systems and including infinite-dimensional Hilbert spaces whose bases are labeled by positions or any other continuous or discrete observables. Physical
systems may be found in any superpositions and if you need to identify a basis that is a bit more "sustainable" than other bases, it's a/the basis of the energy eigenstates, not e.g. position
eigenstates. What these slightly preferred energy eigenstates are depends on the Hamiltonian. For example, in our case, the form of the eigenstates depended on the surrounding electric field. So
there can never be any "a priori preferred basis". There is never a preferred basis but if some basis is more well-behaved than others, it's because it's closer to a basis of energy eigenstates, and
such a basis always depends on the Hamiltonian as well as the environment: it can only be determined "a posteriori". In particular, the wave functions for the ground states are more important than
all others and that's the wave functions into which cool enough systems want to sit down. These ground state wave functions describe what the degrees of freedom are doing – and it doesn't matter that
you may find these wave functions "complicated" or "different from those you would like to prescribe".
The ammonia molecule is just another system that invalidates any non-quantum or "realist" (anti-quantum zealots prefer to use the term "realist" for themselves over the much more accurate but less
flattering term "classical and optimized for cranks eternally and dogmatically stuck in the concepts of the 17th century physics") replacement for the proper rules of quantum mechanics.
For example, coherence of the ammonia molecule is totally essential and testable so if the Universe decided to "split" during any of the processes discussed above, as Everett liked to imagine, it
would immediately have tangible consequences that disagree with the experiment.
Also, if there were any pilot waves envisioned by de Broglie or Bohm, one couldn't explain "where" the photon is created during the spontaneous emission. Note that during absorption or emission of
electromagnetic waves, the number of photons isn't conserved, in contradiction with an elementary property of the flawed pilot wave paradigm. The photon emitted by an excited ammonia molecule is
"everywhere". You can't fix this problem of the pilot wave theory by forcing the system to remember the "truly real classical field configuration" instead of the position of particles because that
would be similarly incompatible with the particle-like properties of the electromagnetic field. The electromagnetic field – and all other fields in the world – exhibits both particle-like and
wave-like behavior and which of them is more relevant depends on the situation, on the relevant terms in the Hamiltonian, on the frequency of the waves and the occupation numbers etc. If you want to
declare one of the behaviors (particle-like or wave-like) to be "more real" than the other, you're guaranteed to end up with a fundamentally wrong theory. If you keep on doing such things for years,
then you become an irreversible crackpot.
Theories with GRW collapses would also predict unacceptable effects whose existence may be safely ruled out experimentally. And I am not even talking about theories that would love to completely
"ban" the complex superpositions because the authors of these theories must be misunderstanding everything, including the content of Schrödinger's equation that makes the evolution into general
complex combinations inevitable.
Of course, the main point of all such texts is always the same: quantum mechanics has been demonstrated to be the right framework to describe the world for more than 85 years and everyone who is
hoping that a completely, qualitatively different description will replace quantum mechanics – e.g. the incredible idiots that clearly gained a majority in similar threads on the
Physics Stack Exchange
(holy crap, the users such as "Ron Maimon" and "QuestionsAnswers" are just so unbelievably stupid!) – is a crackpot.
And that's the memo.
12 comments:
1. Thanks for the ongoing QM lessons---despite one graduate course in it, QM was a hole in my math/physics trek which I am slowly remedying after many years.
I went to the Physics Stack link to read the t'Hooft (if it is indeed him) and comments. Holy crap, indeed.
Questions and Answers tells you basically to f$%k off,
and calls what you say "pathetic lies and mischaracterisations"... Maybe it is a good thing that I haven't been following it for ages.
I see nothing wrong with t'Hooft (again, him or a troll?)
framing models and testing them and tinkering with them. t'Hooft deserves a lot of slack because of his accomplishments. Maimon and QuestionsandAnswers are a different matter. What all three are
doing is simply
looking for a deterministic substructure to QM, and denying locality rather than reality. For QandA to say you are denying objective reality is a distortion. Obviously there is an objective
reality. That is not the same as saying that reality we experience is not crystallized until a "measurement", ie any interaction, occurs. We live in the reality of events that have and are
interacting resulting in the decoherence of probability wave amplitudes.
IMO a problem that many people (me included) have is that describing QM using words rather than math leads to confusion about determinism and other things, particularly foundational issues.
QandA's responses to you are amazingly condescending.
2. Thanks, Gordon, for the comment which is still entirely off-topic and I will refrain from responding to any physics in your comment because it's only a path to trouble.
3. Off topic, sorry, but there is an interesting new paper by Connes:
Any comments ? I am having hard times to understand it. Is it a threat to string theory ?
4. Dear Raisonator,
the paper makes no sense. Connes (and collaborator) has been trying to claim that the Standard Model may be reformulated as a compactification on a non-commutative manifold which is able to
predict relationships between some coupling constants in the Standard Model. It wasn't really ever true - the values of the coupling constants that may look "more natural" in his variables are
still not physically privileged over other values so it isn't possible for a quantum field theory to become predictive in this way.
At any rate, Connes cared about this "new simplicity in the non-commutative variables" - a formalism that wasn't really worked out beyond the classical level (he uses ordinary tools of QFT for
the loops only) - so he had predicted the Higgs mass of 170 GeV. It just happened that 170 GeV was the very first value of the Higgs mass that got excluded by the Tevatron:
So he was shooting into the wrongest place of the real axis. One must be pretty lucky for that. ;-) Now, when we know that the Higgs is near 126 GeV, he's trying to retroactively correct the
wrong prediction by adding random components that would make it bad science even if they were used in an otherwise healthy context - but when they're used within the framework that makes invalid
claims about the possibility to predict things that cannot be predicted, it's even worse.
Best wishes
5. sorry , OT
zdravim Lubosi,
I have an old problem of understanding, perhaps trivial.
Probably you know the solution immediately.
In stimulated emission, one photon comes in, interacts somehow with the atomic field (how is not important), and if everything fits, two identical photons come out. OK!
On the other hand, quantum
cryptography (one photon communication) protects themselves with
statements like : due to the non-cloning theorem , one photon can not
be.... => quantum crypto is secure.
But what is with Laser/Maser, Radio ?
All these devices produce identical photons from, in principle, one photon.
Or is it so that those photons are in reality not perfectly identical, but identical enough for lasers etc.?
Thanks and ciao
6. Dear Luďku, an extremely good question.
7. Dear Lubos,
stupid question maybe, Would anything change if we used the good old H2O molecule as the basis for our discussion? After all it has a dipole moment as well.
8. Dear Mikael, H2O molecule isn't a 2-state system in any sense, so you can't construct a MASER out of it and you can't use in discussions on 2-state systems.
I may be puzzled by your question but this whole article, every single topic in this blog entry, is about 2-state systems. They're systems for which you can actually count the relevant
microstates satisfying certain conditions that may evolve into each other - and you get two. I got 2 because the states essentially come from the pyramid-up and pyramid-down states of the molecule.
But H2O isn't a pyramid. Its shape can't ever be organized "just in two ways". There isn't any group of atoms (the triangle) that would define a plane and allow another atom to be above or below
it. For H2O, the counting just doesn't give 2. It gives either 1 or infinity. Your suggestion that this discussion boils down to a nonzero dipole moment indicates that you haven't even started to
understand this topic yet. Pretty much every system in the world has a nonzero dipole moment. That doesn't mean that every system is a 2-state system.
9. Dear Lubos,
I can see that H2O has an infinite number of states. But if you allow for rotations NH3 has them as well. Only if you keep the H atoms in place it becomes a two state system. So my question is
why can I neglect the rotations in one case but not in the other?
10. There's still a maser transition in water, around 22 GHz, similar to the 24 GHz ammonia maser frequency, but the rotational modes for H2O have almost the same frequency so the decoupling of the
problem isn't legitimate.
For ammonia, the triangle has a larger number of H-atoms and may be approximated by a circle. Due to the identical character of the atoms, forcing the 120-degree periodicity the rotational modes
of NH3 correspond to higher frequencies that may be considered "parameterically higher". For H2O, this decoupling of the scale is not possible as both energies are of the same order.
11. Thanks, Lubos. I think it is the kind of answer I should have guessed from the geometrical initution. But it is always reassuring to get a true expert answer.
12. Dear Lubos,
after spending more time on this topic I think your answer is just not right, at least partially.
For the rotational modes the only thing that should matter is the quantized angular momentum
and therefore the moment of inertia of the molecule for the rotation axis.
The bigger the moment of inertia the lower the transition frequencies of course.
Instead the key point for the two states of the ammonia molecule should be the fact
that if you push the N through the triangle of the three H you create a state which is different
from the original state not just by a rotation. Instead you must additionally exchange two
H atoms to come back to the original situation.
Regarding the maser frequency of 22 GHz for water are you sure about its origin?
|
{"url":"http://motls.blogspot.cz/2012/08/two-state-systems-masers.html?m=1","timestamp":"2014-04-20T15:51:18Z","content_type":null,"content_length":"132941","record_id":"<urn:uuid:fa96eb18-e046-4bd1-8f4d-3d46bbaedd4a>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00483-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculator ban in primary tests
9 November 2012 Last updated at 01:30
Government bans calculators from primary maths tests
The government says calculators will be banned in maths tests for 11-year-olds in England from 2014.
Education and Childcare Minister Elizabeth Truss said pupils should only use them once they were confident in basic mathematical skills.
The move follows a government review of calculator use in primary schools.
Teaching unions responded that fluent use of calculators was essential, with the NUT's Christine Blower calling the ban "a retrograde step".
Ms Truss said an over-reliance on calculators meant children missed the rigorous grounding in mental and written arithmetic they needed to progress.
"All young children should be confident with methods of addition, subtraction, times tables and division before they pick up the calculator to work out more complex sums," she said.
"By banning calculators in the maths test, we will reduce the dependency on them in the classroom for the most basic sums."
Complex problems
She said maths "influences all spheres of our daily lives".
"The irony is that while maths is all around us, it seems to have become acceptable to be 'bad with numbers'," Ms Truss said.
"The habit of simply reaching for the calculator to work things out only serves to worsen that problem."
Prof Celia Hoyles, director of the National Centre for Excellence in the Teaching of Mathematics, said: "Children develop greater confidence and success in mathematics if they know a range of methods
- for example mental and written calculation alongside quick recall of relevant number facts.
"It is important that calculators are used appropriately, so children do not become dependent on them for arithmetic but at the same time are able to use them as a tool to support their own problem
But teaching unions argued banning calculator use in the tests would risk pupils' ability to use them to tackle more complex mathematical problems.
Christine Blower, general secretary National Union of Teachers, said: "It is entirely appropriate for children in primary school to learn to use a range of tools to solve maths problems and the skill
of deciding which tool and method to use for a particular problem is an important one.
"It may not be appropriate to use calculators for the whole of the maths test paper, but it is a retrograde step to ban them completely as it will diminish the skills set for primary pupils and leave
them floundering in secondary school".
Russell Hobby, general secretary of the National Association of Head Teachers, said: "As long as they alter the test design and marking to reflect the changed conditions, it shouldn't be too
disturbing. One of the papers is already done without calculators of course.
"It is indeed good to be sure that children can perform routine calculations in their heads, but the advantage of a limited use of calculators is that children can focus on the problem itself. "
Chris Keates, general secretary of the Nasuwt union, said: "If the test is a mental arithmetic test, then clearly you wouldn't expect children to have calculators, but the government needs to come
clean about what its expectations are for the maths curriculum and what kind of skills it believes young people need in the 21st Century.
"Surely we should be expecting to nurture from an early age skills in young people to master complex mathematical challenges. This should include learning how to use the tools which can support them
in that process."
• Comment number 222.
9th November 2012 - 12:16
It is necessary to gain an understanding of the principles underlying the functioning of numbers and to see clearly what is happening when they are manipulated. Learning styles are important for
individual differences in learning, not rote.
Calculators seem neither to detract nor add to an already gained understanding of concepts, so why does it make sense to ban them?
• Comment number 221.
9th November 2012 - 12:16
@200. I have a maths degree (1975), and two As at Alevel maths. I didn't formally learn tables because I was educated in the USA until age 7. I finally mastered 9x5=45 (not 40!) while at uni.
Maths is a huge subject, and arithmetic is only a small (though important) part of it, even at primary school. We need to test arithmetic 'by hand', but if a calculator helps with the rest of
• Comment number 220.
9th November 2012 - 12:16
Unfortunately it's because of misguided people like Christine Blower that we have this problem in the first place. Calculators should never have been introduced at primary school level. It's
totally unnecessary. Teachers should be concentrating on helping kids consolidate their basic mathematical understanding and improving their mental arithmetic skills. Save calculator use for
• Comment number 219.
Edwin Cheddarfingers
9th November 2012 - 12:16
This isn't about knowing or not knowing mental arithmetic.
It's about grading people on how quickly they can do mental arithmetic due to a test with a time limit which is stupid, because doing mental arithmetic fast does not equate to strong mathematical
problem solving.
Someone graded D in math for mental arithmetic could well be an A* candidate at doing actual real world math with a calculator
• Comment number 218.
9th November 2012 - 12:14
192 david h, dont quite get where you are coming from,i have never said that calcs shouldn't be used, what i have said in my posts is that kids should be able to u@stand the principles & workings
and be able to explain them,basic education at the end of the day,i do use calcs for some stuff,i'm no einstein,i can one,you come across as tech dependant,a reason why the uk is so far behind
• Comment number 217.
9th November 2012 - 12:13
@news_monitor, I also have an A* at Maths and A at Further Maths when I did my A Levels and I know my tables. Its a matter of how you were taught maths. I studied mostly in Singapore before I
came here and did my A Levels and there they placed a lot of emphasis on mental maths etc and didn't allow you to touch a calculator till you entered secondary school. I did well at exams as a
• Comment number 216.
Dave Muir
9th November 2012 - 12:10
Well done Ms TRuss and shame on you Christine Blower. this was long overdue.
• Comment number 215.
David H
9th November 2012 - 12:08
189.Chorley Lass
20 Minutes ago
locked away in sealed rooms nursed by men in white coats,
I remember being told that they were wonderful provided you remembered GINGO:
G arbage IN
G arbage O ut
And that is why calculators, and spell checkers are no good, until the basics are understood.
men,, white coats,, sealed rooms
It's "GIGO"
• Comment number 214.
9th November 2012 - 12:06
Thank goodness light is dawning. Kids don't get the arithmetic practice they need if they can use a calculator instead. The calculator in primary school is a handicap.
I have taught maths both at primary & secondary level, worked in outside industry and am a parent yet I have almost never used a calculator & don't possess one.
• Comment number 213.
9th November 2012 - 12:05
There is a time and a place to use calculators. The time to use them is once you have completely mastered the mathematical concepts of the calculations you're using the machine to do. The place
is secondary school - once you've moved on to more advanced mathematical concepts. It doesn't really take that long to become familiar with a calculator and learn how to use one.
• Comment number 212.
Edwin Cheddarfingers
9th November 2012 - 12:04
Some people are simply slower at mental arithmetic, I'm one of those people and couldn't complete an exam in the usual time given as a result of doing calculations manually.
Despite this I have a degree in pure math, and am currently studying for another focussing on stats. I also have real world expertise of writing software to solve complex COPs.
Math is about problem solving. It's not a race.
• Comment number 211.
9th November 2012 - 12:03
Calculators are not used for tests in numeracy except where it is part of the test which uses calculators, i.e. a separate test paper.
I think schools got this subject sorted out some time ago but here is another person wanting some headline time saying what is already delivered.
• Comment number 210.
9th November 2012 - 12:03
In hospital recently, based on the dosage the nurse wrote down, and the instructions the doctor then gave me, I instinctively knew that somebody had got their maths wrong. It turned out the nurse
was right, and the doctor had just told me to heavily overdose my 15 month old daughter three times a day for five days. Without mental arithmetic, I would have simply trusted the doctor and done
• Comment number 209.
9th November 2012 - 12:03
not sure I agree with the test, but if a school pupil wanted a career in science or engineering or something that requires a Maths A-level, it is a LOT easier to obtain if they know how the basic
calculations work- I really struggled at this subject, and while I don't blame calculators I wish someone had explained how important it was when I was a kid
• Comment number 208.
9th November 2012 - 12:02
I am trying to calculate the benefit of such a move.. be right back just getting some batteries..
• Comment number 207.
9th November 2012 - 12:01
It seems the older generation are more opinionated on this one.
We never did much mental arithmetic at school and to be honest it doesn't make any difference in my day to day life.
I do computer programming which involves doing more arithmetic than a lot of jobs but I don't have any problems. If something's too complex there's a calculator on my phone or laptop.
A bit of a non-issue imo
• Comment number 206.
Ian Gledhill
9th November 2012 - 11:59
Calculator skills are essential these days - but so are basic arithmetic. Do you really want to rely on a calculator just to add up your shopping before taking to the till, for instance?
Then there's more complicated maths - without a basic arithmetic understanding, it will be MUCH more difficult to spot errors made. It is necessary to have both skills - mental and mechanical -
for complex maths.
• Comment number 205.
9th November 2012 - 11:58
The very last time I went into a Woolworths, just before they went bust, I bought three items at 99p each. I had exactly £2.97 ready. The girl at the till asked me "How did you do that?"
• Comment number 204.
9th November 2012 - 11:57
I never used a calculator to do any maths questions till I was 12 and entered secondary school and because of that my mental arithmetic is excellent. I agree with this step and it would be good
especially later on when these kids are doing Mathematics A Levels. There's an entire module in it which is non-calculator and while I saw people struggling with it, I aced it and got full marks.
• Comment number 203.
9th November 2012 - 11:57
Here we Gove again, wasn't used in the tory toffs educations so shouldn't be used now! Pity they didn't learn then, because they are obviously not using them now and running the countries
finances on 2+2=7 style maths!
My Maths GCSE had one test with a calc, one without. What is so wrong with that, 2 elements, both tested. Simple.
Additive Identity and Other Properties
Date: 01/03/99 at 19:50:17
From: Elizabeth
Subject: Properties
I don't understand properties. Can you explain the identity of zero,
the commutative property, and the multiplicative properties?
Date: 01/04/99 at 12:40:21
From: Doctor Peterson
Subject: Re: Properties
Hi, Elizabeth. I'm not sure which multiplicative properties you mean;
we talk about the multiplicative property of equality, about the
commutative and associative properties of multiplication, and about
the distributive property of multiplication over addition, and you
might mean any of those. I hope my explanation of the others will help
you. Write back if you need more.
We call zero the additive identity because when you add it to anything,
the result is identical to the original number - you haven't changed
it. This means that
n + 0 = 0 + n = n for any number n
There's no other number for which you can say this, so zero is very special.
Similarly, there is a multiplicative identity. It is a number you can
multiply anything by and not change it. What is that? It will be the
number X in this statement (where "*" means multiply):
n * X = X * n = n for any number n.
The commutative property means that it doesn't matter which order you
do things in. For addition, this means that
a + b = b + a for any numbers a and b.
You can do the same thing with multiplication.
These are things you've probably understood for years, but haven't
thought of as "properties." You just knew that 5+6 and 6+5 are the
same. They are important because not everything you can do has an
identity, or is commutative. Suppose we picture operations like
addition and multiplication as machines with two input pipes and one
output pipe:
    a       b
    in      in
     \      /
    +--------+
    |   +    |
    +--------+
        |
       out
Commutativity says that if you switch the two input hoses, the same
thing will come out. That's true for addition. It's also true if the
machine is a paint mixer that takes blue and yellow paint, or yellow
and blue paint, and makes green. But it's not true for every machine
you can imagine. Maybe the left pipe has to take water and the right
air. It probably wouldn't work right if you switched them!
When we list properties like these, we're reminding ourselves of all
the ways in which addition "works nicely," all the things we can depend
on when we work with numbers. That's what a property is.
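A quick numerical check of these properties, using Python as a calculator with a few arbitrary sample values:

for n in [5, -3.5, 0.25]:
    assert n + 0 == 0 + n == n      # adding zero changes nothing (additive identity)

a, b = 5, 6
assert a + b == b + a               # addition is commutative
assert a * b == b * a               # multiplication is commutative
assert a - b != b - a               # subtraction is not: 5 - 6 is not 6 - 5
print("all property checks passed")

The same checks would pass for any numbers you substitute, which is exactly what makes them properties rather than coincidences.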
You might want to look through our archives. You can search for words
like "commutative" and find lots of helpful explanations like this:
Meanings of Properties
- Doctor Peterson, The Math Forum
Linear Interpolation FP1 Formula
Re: Linear Interpolation FP1 Formula
It is the only way to relax it. I am slightly better today.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
My mother has to do that sometimes but not for as long as you do.
Re: Linear Interpolation FP1 Formula
I just lay there for a while then I am usually okay. Sometimes though it may take longer, especially if I spasm.
Re: Linear Interpolation FP1 Formula
A lot of my elderly relatives suffer from that also.
Re: Linear Interpolation FP1 Formula
Suffer is a good word. Never ending pain of varying severity,
Re: Linear Interpolation FP1 Formula
And everyone experiences it at least once.
Re: Linear Interpolation FP1 Formula
It is an easy thing to injure, Easy to put too much pressure on it.
Re: Linear Interpolation FP1 Formula
Yes, the lower back especially.
Re: Linear Interpolation FP1 Formula
Funny thing is some people told me it happened to them when they sneezed, for me it happened when I picked up a floppy disk!
Re: Linear Interpolation FP1 Formula
People still use those?
Re: Linear Interpolation FP1 Formula
No longer but it was a long while back. I believe it was circa 1998 - 99.
Re: Linear Interpolation FP1 Formula
Seems like such a long time ago now...
Re: Linear Interpolation FP1 Formula
Sure does. Sometimes, I can not even remember what I was doing or thinking or feeling back then. It seems like a thousand years ago. But other times, it is a sharp and as vivid as yesterday and it
seems that it all happened in the blink of an eye.
Re: Linear Interpolation FP1 Formula
True -- and sometimes we are not even aware how valuable it is, and sometimes it is only appreciated when it has past.
Re: Linear Interpolation FP1 Formula
I usually only feel that way about missed opportunities. The ones that got away.
Going to get a little sleep see you later.
Re: Linear Interpolation FP1 Formula
Yes, and there are usually a lot of them...
Okay, see you later.
Re: Linear Interpolation FP1 Formula
Depends on how wasteful a person is. In my case there was a lot of waste.
Re: Linear Interpolation FP1 Formula
And you regret it?
Re: Linear Interpolation FP1 Formula
Of course, they are right when they say the missed chances hurt the most.
Re: Linear Interpolation FP1 Formula
I have been thinking about this and I don't think I have any regrets (regarding waste) -- I still think dumping H was a wise decision. Maybe, I am too young to have regrets.
Re: Linear Interpolation FP1 Formula
Yes, probably they are all in your future.
Re: Linear Interpolation FP1 Formula
Something to look forward to?
Re: Linear Interpolation FP1 Formula
One of the thrills of life, I guess. Some good ones are going to get away for various reasons.
Re: Linear Interpolation FP1 Formula
It is a frustrating game.
I have a closed-form expression for Agnishom's integral without using a CAS but it appears that it is wrong (only accurate to 16 decimal places).
Also, I have been working through other chapters in Spiegel's book -- there is a limit I am stuck on, I cannot seem to get the answer.
Re: Linear Interpolation FP1 Formula
What is your closed form?
Equations solving
You should be able to reduce the number of equations, let's try an example:
a + b = 2c and a - b = c
That has 3 unknowns but only 2 equations, so let's do our best to solve it:
Combine the two: a+b = 2c = 2(a-b)
Therefore: a+b = 2a - 2b
Subtract a from both sides: b=a-2b
Add 2b to both sides: 3b = a
So we end up reducing this to:
a = 3b
So, we have a range of solutions.
For example: b=1, a=3b=3, a+b=4, a-b=2, c=2
Or: b=2, a=3b=6, a+b=8, a-b=4, c=4
etc ...
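A quick Python check of this family of solutions (treating b as the free parameter, and assuming the two starting equations were a + b = 2c and a - b = c):

for b in [1, 2, 7.5, -4]:
    a = 3 * b                # the relation derived above
    c = a - b
    assert a + b == 2 * c    # first equation holds
    assert a - b == c        # second equation holds
    print(b, a, c)

Every value of b generates another valid solution, which is what it means for the system to be under-determined.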
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Maclaurin Series
Nearly identical to the Taylor Series, an approximation of a function. Unlike the Taylor Series, which can approximate a function around any number, the Maclaurin Series is specifically for approximating around 0.
The formula for finding the Maclaurin Series is:
f(x) = sum from n=0 to ∞ of [ f^(n)(0) / n! ] x^n, where f^(n) is the n^th derivative of f.
Why is the Maclaurin series useful? This simple formula gives us a way to find polynomial approximations of many functions. This can be used to approximate the integral of f, among many other things.
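A short Python sketch of this idea: for f(x) = e^x every derivative at 0 equals 1, so the Maclaurin coefficients are simply 1/n!, and adding more terms drives the approximation error down.

from math import exp, factorial

def maclaurin_exp(x, terms):
    # sum of f^(n)(0)/n! * x^n for n = 0 .. terms-1, with f = exp so f^(n)(0) = 1
    return sum(x**n / factorial(n) for n in range(terms))

x = 1.5
for terms in (2, 4, 8, 12):
    approx = maclaurin_exp(x, terms)
    print(terms, approx, exp(x) - approx)   # the error shrinks rapidly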
Inequality query
December 29th 2011, 11:06 PM #1
Junior Member
Dec 2010
Inequality query
I'd be grateful if anyone could elucidate why this
$\sum_{k=1}^2 \binom{2}{k} (1-p)^k p^{2-k} > \sum_{k=2}^4 \binom{4}{k} (1-p)^k p^{4-k}$
is equivalent to this
$\sum_{k=0}^1 \binom{4}{k} (1-p)^k p^{4-k}> \binom{2}{0} (1-p)^0 p^{2}$
I guess it should be obvious but I can't quite see why. Thanks in advance. MD
Re: Inequality query
I'd be grateful if anyone could elucidate why this
$\sum_{k=1}^2 \binom{2}{k} (1-p)^k p^{2-k} > \sum_{k=2}^4 \binom{4}{k} (1-p)^k p^{4-k}$
is equivalent to this
$\sum_{k=0}^1 \binom{4}{k} (1-p)^k p^{4-k}> \binom{2}{0} (1-p)^0 p^{2}$
I guess it should be obvious but I can't quite see why. Thanks in advance. MD
Consider the binomial expansions of $((1-p)+p)^2$ and $((1-p)+p)^4$
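Spelling the hint out: since $((1-p)+p)^2 = 1$ and $((1-p)+p)^4 = 1$, each full binomial sum equals 1, so both sides can be rewritten as 1 minus the missing terms,
$\sum_{k=1}^2 \binom{2}{k} (1-p)^k p^{2-k} = 1 - \binom{2}{0}(1-p)^0 p^{2}$ and $\sum_{k=2}^4 \binom{4}{k} (1-p)^k p^{4-k} = 1 - \sum_{k=0}^{1}\binom{4}{k}(1-p)^k p^{4-k}$.
Substituting these into the first inequality, cancelling the 1s and multiplying through by $-1$ (which reverses the inequality) gives exactly the second statement.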
Dimensional Regularization
February 20th 2006, 02:49 PM #1
Dimensional Regularization
This may be a bit out of place here...
Does anyone know anything about "dimensional regularization" of integrals? It is one method to renormalize integrals in Quantum Field Theory. I'm not sure where it crops up in Math, but it's
probably somewhere. (Odds are no Physicist could come up with something this clever! I'm a Physicist, so I can say that.)
Anyway, the basic method is this: We start with the divergent integral:
$\int d^4x F(x^{\mu})$. The form of the function won't come into play here, so don't worry about it. The idea is to isolate the divergent part of the integral by letting the dimension of the
integral be variable, then at the end of the calculation take $\lim_{d \rightarrow 4}$ of the resulting expression. It sounds cheesy, but it does work, at least for certain forms of F.
Now I was thinking about something I saw in another thread...a trick question, but the logic seems to apply here. Consider the equation:
$x^2=x+x+...+x+x$ where x appears on the RHS x times. Now take the derivative: $2x=1+1+...+1+1=x$. Apparently, then, 2=1. The reason this fails is that we are taking the derivative of x that is
variable on the LHS, but a constant on the RHS.
Aren't we doing something similar in the integral above? I have yet to see a case in QFT where the dimension is anything but an integer, so taking the limit as d approaches 4 seems a bit
ridiculous as a dimension of $d+ \epsilon$ for small $\epsilon$ doesn't exist.
I admit that the method works, but how can it work??
Maybe the process is telling you something about the structure of space
time. maybe this is indicative of the Hausdorff dimension of Space Time
being close to but not actually 4?
(Google for Hausdorff dimension for an explanation)
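For concreteness, the standard textbook prototype (not specific to this thread) is the Euclidean loop integral evaluated in $d$ dimensions,
$\int \frac{d^d\ell}{(2\pi)^d}\,\frac{1}{(\ell^2+\Delta)^2} = \frac{1}{(4\pi)^{d/2}}\,\frac{\Gamma(2-d/2)}{\Delta^{2-d/2}},$
which is an analytic function of the continuous parameter $d$. Near $d=4$ the Gamma function has a simple pole, $\Gamma(2-d/2) \approx \frac{2}{4-d} - \gamma + \dots$, so the divergence of the original four-dimensional integral reappears as an explicit pole that can be isolated and subtracted before taking $\lim_{d \rightarrow 4}$.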
Binominal distr with a twist !
September 20th 2013, 03:02 AM #1
Junior Member
Sep 2013
Hi! I'm a newbe to this forum, hope you can help!
I'm looking at probability function for the sum of flushes during a fixed hour at night (this would be the sum for a district).
And let's say I measure in 10 min interval, that would make 6 values.
Let's say that I know there are 10 people up during this hour, and each could flush in all of those timeslots then:
Y1 is the sum of flushes for timeslot1,
all X are equal distibuted with p=1/6, and independent
Then Y1 would be Bin(n,p)=Bin(10;1/6) right?
E[Y1]=np = 10*1/6 = 1,6666.. Var[Y1] =np(1-p) = 1,3888..
If I'm looking att a 1hour series,
All Y could have value 0 if no one flushes, and 10 for that matter.
One example of outcome of the first two timeslots
Y1=1 + 1 +...+1
Y2=1 + 1 +...+1
Now to the twist:
If each flushes once and an only once , how to I go about that?
Y1=1+ 0+.. +0
Y2=0+ 0+..+1 (if X1 has been 1 the other must be 0)
All X are independent from each other in the same timeslot (different persons)
But all Y are now dependent since you can and must only flush once
Not knowing what to do I've been fooling around with Hypergeometric Hyp(N,n,m)
and sum of Hypergeometric, saying flush is 1 white ball and the other 5 black.
But I'm on very thin Ice here, I don't know what to do.
How do I get Y2? How do I get Var[Y1]?
E[Y1] would still be 1/6 i guess.
Hope you can help, I read some stocastics a long long (long) time ago..
Last edited by laban1; September 20th 2013 at 03:37 AM.
Re: Binominal distr with a twist !
Just to confirm, before deciding what distribution fits your situation, when you say they flush "once and only once" is it not possible for someone to flush 0 times in an hour?
Re: Binominal distr with a twist !
Thanks for your reply!
No, flushing is just once and zero in all other timeslots for each Xn
One realisation could be (With Y=x1+x2+x3+x4 to make it shorter)
1 =1 0 0 0
1 =0 1 0 0
0 =0 0 0 0
2 =0 0 1 1
0 =0 0 0 0
0 =0 0 0 0
E[y1+y2+..+y6] would be completely deterministic, 4 every time
and E[y1] would be 4/6 (non deterministic)
In my case with X1+X2+..+X10 same principle 10 and 10/6
Last edited by laban1; September 20th 2013 at 01:09 PM.
Re: Binominal distr with a twist !
Bump! Shakarri? Anyone?
Re: Binominal distr with a twist !
Suppose you are trying to find Y[k]. And suppose j people have not flushed before the k^th interval.
There are $10Cj$ combinations of j people from 10. 10Cj is the Combination function
j people must have not flushed k-1 times. The probability of that is $(\frac{5}{6})^{j(k-1)}$
10-j people must have flushed already.
$\text{Probability of having flushed + Probability of not having flushed = 1}$
For one person
$\text{Probability of having flushed} + (\frac{5}{6})^{k-1}=1$
$\text{Probability of having flushed} = 1-(\frac{5}{6})^{k-1}$
Then the probability that 10-j people have flushed is $\Big(1-(\frac{5}{6})^{k-1}\Big)^{10-j}$
Therefore the probability that j out of 10 people have not flushed k-1 times is $(10Cj)(\frac{5}{6})^{j(k-1)}\Big(1-(\frac{5}{6})^{k-1}\Big)^{10-j}$
Given that j have not flushed, the probability that i people will flush in the k^th interval is $(jCi)(\frac{1}{6})^i(\frac{5}{6})^{j-i}$
j can be anywhere between i and 10. So the probability that i people will flush in the k^th interval is
$\sum_{j=i}^{10} (jCi)(\frac{1}{6})^i(\frac{5}{6})^{j-i}(10Cj)(\frac{5}{6})^{j(k-1)}\Big(1-(\frac{5}{6})^{k-1}\Big)^{10-j}$
This simplifies a bit to
$\sum_{j=i}^{10} \frac{10!}{(10-j)!} \times \frac{1}{(10-j)!i!} (\frac{5}{6})^{jk-i} \Big(1-(\frac{5}{6})^{jk-i}\Big)^{10-j}(\frac{1}{6})^i$
And that is the expression for the probability that Y[k] =i
Last edited by Shakarri; September 27th 2013 at 04:01 AM.
Re: Binominal distr with a twist !
I made a mistake but we can't edit posts after a few hours. It should be this
$\sum_{j=i}^{10} \frac{10!}{(10-j)!} \times \frac{1}{(j-i)!i!} (\frac{5}{6})^{jk-i} \Big(1-(\frac{5}{6})^{jk-i}\Big)^{10-j}(\frac{1}{6})^i$
Hi !
I saw your answer first time today!
What is you conclusion compared to the Bionom distribution?
I was so confused a few days ago that I wrote down every combination by hand
taken 4 people and 3 timeslots (3 x 20min)
that is 3^4 = 81 combinations! Here is the first 6
4 1111 3 1110 ......
And I arrived at
sum:     0   1   2   3   4
freq:   16  32  24   8   1
Pascal:  1   4   6   4   1
I also made the numbers in Excel
I believe this is nothing but the (n over k) here (4 over k) k=0,1,.. 4
with the Pascal triangle nr above
And I checked with the Binom(4,1/3) and got exactly the same numbers.
I came to realise that I took a long ride to arrive at Binom never the less.
There are one differens though.
In the "each person must flush once and only once" I Have a constant sum = 4 over every 3 trials.
Hence it's more "stable" than the Binom(4,1/3) that can have the result sum over 3 trials =0=0+0+0 or 12=4+4+4
I belive in the long run they generate the same "population" but Binom is taking longer time doin so.
Have you come to the same conclusion?
Thanks agan for your interest Shakarri!
Last edited by laban1; September 30th 2013 at 03:51 AM.
Re: Binominal distr with a twist !
I realized what is going on. If you assume that each of the 81 combinations has the same probability then it should be a binomial distribution, but they are not all equally common so the
expression is more complicated.
Re: Binominal distr with a twist !
"but they are not all equally common"
That's right! thats why the combination sum=4 has probability 1/81 and the sum 0 has 16/81.
I have listed all possible outfall. Can you explain why I can't use Binom?
Re: Binominal distr with a twist !
When you listed every combination you assumed that every one of the 81 outcomes were of equal probability equal to 1/81. There were 16 times when 0 people flushed and 1 time when 4 people flushed
so you thought that the probability that 0 flush is 16/81 and the probability that 4 people flushed was 1/81. But this is not true, the probability of each of the 81 outcomes is not equal 1/81 so
the probability that 0 flush is not equal to 16/81. The probability of each outcome depends on whether someone had flushed previously so it is not simply 1/81.
The other thing is that $\sum_{j=i}^{4} \frac{10!}{(4-j)!} \times \frac{1}{(j-i)!i!} (\frac{5}{6})^{jk-i} \Big(1-(\frac{5}{6})^{jk-i}\Big)^{4-j}(\frac{1}{6})^i$ does not simplify to the binomial form $\frac{4!}{n!(4-n)!}p^n(1-p)^{4-n}$
Last edited by Shakarri; October 5th 2013 at 01:54 AM.
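A quick Monte Carlo sketch (my own check, under the assumption that each of 10 people flushes exactly once, in a uniformly random one of the 6 slots), so the per-slot count can be compared against the Binomial(10, 1/6) pmf:

import random
from math import comb

trials = 200_000
counts = [0] * 11                 # counts[i] = how often slot 1 received exactly i flushes
for _ in range(trials):
    slots = [random.randrange(6) for _ in range(10)]   # each person picks one slot
    counts[slots.count(0)] += 1

for i in range(6):
    simulated = counts[i] / trials
    binomial = comb(10, i) * (1/6)**i * (5/6)**(10 - i)
    print(i, round(simulated, 4), round(binomial, 4))

Under this uniform-slot assumption the simulated frequencies for a single slot track the binomial probabilities closely; the "exactly once" constraint shows up instead in the dependence between different slots, whose counts must add to 10.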
A kind of foliation on the figure eight knot complement
Let $N$ be the figure 8 knot complement. What can we say about the following kind of dim 2 foliation $F$ on $N$: (1) no Reeb components (2 dim); (2) $F$ intersects $\partial N$ transversely in $n$ parallel Reeb components (1 dim)?
Here is a natural example. Let $T^2$ be a torus and $f=(2,1;1,1)$ be a diffeomorphism of $T^2$. Suppose $(M,\phi_t)$ is the suspension of $(T^2,f)$, which is a transitive Anosov flow. Then cutting out a standard small
solid torus neighborhood of the closed orbit of 0, we obtain a manifold $N$ which is homeomorphic to the figure 8 knot complement. The stable manifolds of $\phi_t$ (restricted to $N$) give $N$ a dim 2
foliation $F$ which satisfies: (1) no Reeb components; (2) $F$ intersects $\partial N$ transversely in two parallel Reeb components.
This example comes from the paper link text where Franks and Williams constructed the first example of nontransitive Anosov flow.
Can we construct more such kind of foliation? How? Can we classify them in some sense? ...
gt.geometric-topology ds.dynamical-systems knot-theory
You might have a look at the thesis of Timothy Schwider: dl.dropbox.com/u/8592391/dissertation.pdf – Ian Agol Feb 8 '12 at 0:29
@Agol, Thank you. – Bin Yu Feb 8 '12 at 6:53
1 Answer
Choose a Seifert surface which minimizes the Thurston norm in its relative homology class. (This exists because Seifert surfaces are not 0-homologous.) Gabai's Theorem says that every
Thurston-norm minimizing surface is the leaf of a taut foliation. Taut foliations do not have Reeb components. http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.jdg/1214437784
@thku, thank you. But, I feel for the figure 8 knot, since it is a fibered knot, such a foliation maybe intersects the boundary torus (at least, for the lowest genus Seifert surface) in a
parallel circles foliation (doesn't satisfy that "$F$ intersects $\partial N$ transversely in $n$ parallel Reeb components (1 dim)"). – Bin Yu Feb 7 '12 at 19:52
Zenithal Stereographical Polar Projection
Stereographic projections are created by projecting from a point at one end of a diameter of a circle to a projection plane which sits tangential to the other end of the diameter. This situation is
clearly depicted in Figure A where the light source comes from point A with white rays radiating out of it. Geometrically, the light source sits on the bottom end of diameter AB. In terms of
geography, the light source sits on the south pole of the earth with the projection plane CI on the north pole, thus making the projections on this plane a polar projection. Visually, one can think
of the stereographic projection as being created by the shadows cast upon the projection plane CI by the parallels and meridians.
With the yellow lines representing parallels and the angle the blue lines make with the equator representing angular distance away from the equator EF, shadows of parallels are located at the
intersection of the light rays with the plane of projection such as C, B, D, E, F, G, H, and I. As noted in the picture, these rays all go through the same intersection point made by the parallel
with the circle. When looking upon this plane, latitudes will appear as circles whose radius is equal to the distance the polar axis AB is away from such intersections as C, B, D, E, F, G, H, and I
as illustrated in Figure B. Mathematically, this distance can be calculated using circle geometry. Referring to Figure A, if an arbitrary latitude JK is chosen, its angular distance is <EOJ. Because
the polar axis and the equator are perpendicular to each other, <EOJ and <BOJ are complementary angles. As such <BOJ is also known as a co-latitude. In other words, <BOJ = 90° - <EOJ. A similar
situation occurs on the supplementary side of the latitude. Thus, <FOK=angular distance of latitude and <KOB = co-latitude. Therefore, <KOB=<BOJ and <EOJ=<FOK. Utilizing a fact from circle geometry,
there is a central angle <JOK subtended by the arc JK. Since <JAK is also subtended by the same arc JK, but touching the side of the circle, the relationship between <JOK and <JAK is as follows: <JAK
= 1/2 <JOK. Because JK is perpendicular to the polar axis BA which goes through the center of the circle, BA bisects JK and similarly <JOK and <JAK. This is because <JAK and <JOK are opposite angles
to the bisected latitude. The relationship <JAK = 1/2 <JOK becomes <JAB+<KAB=1/2 (<JOB+<KOB). Since <KAB = <JAB and <JOB = <KOB, 2 <JAB = 1/2 (2 <JOB) therefore <JAB=1/2<JOB. With AB equivalent to
the diameter of the earth or 2 radius and the plane CI tangential to the circle, therefore, <CBO is a right angle. Thus the following relationship can be formed: tan (<JAB) = JB (radius of the
latitude) / circle's diameter. In other words, radius of the latitude = 2 × circle's radius × tan(½<JOB). In conclusion, radius of latitude = 2 × radius × tan(½ × co-latitude). Since the tangent reaches infinity
when its argument reaches 90°, i.e. when the co-latitude reaches 180°, the single point diametrically opposite the projection plane (here the south pole) can never be shown, and regions near it are enormously stretched. Another interesting fact about the stereographic polar projection is that smaller latitudes are farther away from the
center point of the projection (which represents the north pole) at the point where the polar axis intersects the projection plane.
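A small Python sketch of this relationship (unit sphere radius assumed; the values are only illustrative): the projected radius of each parallel is r = 2R·tan(co-latitude/2), which grows without bound as the parallel approaches the point opposite the projection plane.

from math import tan, radians

R = 1.0                                 # sphere radius (arbitrary units)
for latitude in (80, 60, 40, 20, 0, -20, -40):
    colatitude = 90 - latitude          # angular distance from the north pole
    r = 2 * R * tan(radians(colatitude) / 2)
    print(latitude, round(r, 3))

The equator (latitude 0°) comes out at r = 2R, a circle whose radius equals the sphere's diameter, and every more southerly parallel is larger still.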
By visually imagining the meridians from a bird's-eye view, one can see that the meridians, which are the semicircles following the shape of the earth from pole to pole, will be projected as the
straight radii of the outermost latitude as seen in Figure C. These equally spaced radii are rotated around the center with the desired angular interval. Therefore, meridians drawn with an
angular spacing of n° from each other give 360/n meridians in total.
Dear All, I have got to the following statement (which might be wrong): A.B=2AB(cos^2a+sin^2b) and I am trying to get to A.B=ABcos(theta)=|A||B|cos(theta) can anyone help, thank you and best wishes.
This statement (A.B=2AB(cos^2a+sin^2b) ) analysed says just that AB has the same sign as A.B because (cos squared plus sin squared) is positive. If A and B are vectors (which i suspect they are)
then you should clarify what AB means in the second statement. This statement here A.B=|A||B|cos(theta) is indeed the definition of dot product but what AB without any dot or cross signifies i
cannot understand. If we are talking about just numbers (in which case i cannot understand y there is a dot on the left hand side of both statements) then there are two cases for AB or A.B (again
if numbers not any difference). AB being negative, considering the second statement, there are two possibilites. Either A, B have different signs and also θ is equal to π or A,B have the same
sign and thus θ=0. Similarly AB being positive gives the same two possibilities. Both possibilities do not contradict the original statement A.B=2AB(cos^2a+sin^2b) which again says that A.B has
the same sign as AB (whatever this AB means). There is the case that you have misstyped the statements (rather unlikely) and in this case i would think that A.B=2AB(cos^2a+sin^2b) is actually A.B
=2|A||B|(cos^2a+sin^2b) where a, b, are the angles of the A, B vectors to a given (unknown) third vector and thus the second statement seems to be A.B=|A||B|cos(theta)=|A||B| where θ is the angle
between vectors A, B. In that case A and B are pointing at the same direction or if A.B=|A||B|cos(theta)=-|A||B| are colinear but oposite direction which somehow must derive from the first
statement A.B=2|A||B|(cos^2a+sin^2b) (not difficult i think substituting cos squared and sin squared by their double angle equals). Will be happy to assist more if clarified what is the case....
Thank you for replying. To clarify; this is the correct assumption; A.B=2AB(cos^2a+sin^2b) is actually A.B=2|A||B|(cos^2a+sin^2b) where a, b, are the angles of the A, B vectors to a given
(unknown) third vector and thus the second statement seems to be A.B=|A||B|cos(theta)=|A||B| where θ is the angle between vectors A, B
So now that it's clear we have to show that A.B=|A||B|cos(theta)=|A||B| or cosθ = 1. Given that θ is the angle between A, B and 0 ≤ θ ≤ π, then θ=0. Whatever the case θ=a-b or θ=b-a , a must be
equal to b. If that is the case then A.B=2|A||B|(cos^2a+sin^2b) turns into A.B=2|A||B|(cos^2a+sin^2a) = 2|A||B| which contradicts with A.B=|A||B| that you are trying to prove. Now if A.B=2|A||B|
(cos^2a+sin^2a) = |A||B|cosθ , θ=a-b or θ=b-a is equal to 2|A||B|(cos^2a+sin^2a) = |A||B|cosθ or 2(cos^2a+sin^2a) = cosθ, again θ=a-b or θ=b-a. This equation has solutions acording to
WolframAlpha but i don't think it's what you are looking for. Probably there is something wrong with the A.B=2|A||B|(cos^2a+sin^2b) equation. Where did this derive from if i may ask?
Thank you for an excellent response. The actual original question is as follows. Derive AdotB = A*B*cos(theta) = |A|*|B|*cos(theta) from AdotB = AxBx + AyBy +AzBz where x,y,z are components.
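One standard route to that derivation (sketched here, not quoted from the thread) is the law of cosines applied to the triangle with sides A, B and A-B, where theta is the angle between A and B:
|A-B|^2 = |A|^2 + |B|^2 - 2|A||B|cos(theta).
Expanding the left-hand side in components gives
|A-B|^2 = (Ax-Bx)^2 + (Ay-By)^2 + (Az-Bz)^2 = |A|^2 + |B|^2 - 2(AxBx + AyBy + AzBz),
and comparing the two expressions yields AxBx + AyBy + AzBz = |A||B|cos(theta), which is the required identity.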
Astronomical Optical Interferometry
A Literature Review by Bob Tubbs
St John's College Cambridge
This report documents the development of optical interferometry and provides a physical explanation of the processes involved. It is based upon scientific papers published over the last 150 years,
and I have included references to the ones which are most relevant. The reader is assumed to have an understanding of modern optical theory up to undergraduate level - References 28 and 29 give
explanations at a more basic level. The formation of images from interferometric measurements is discussed and several example images are included.
Fizeau first suggested that optical interferometry might be used for the measurement of stellar diameters at the Academie des Sciences in 1867. The short wavelength of light and the absence of
sensitive calibrated detectors precluded more sophisticated interferometric measurements in the optical spectrum for over a century. After the Second World War most researchers instead turned to the
radio spectrum, where macroscopic wavelengths and electronic detection greatly simplified the measurement of interferometric quantities. Modern computers, lasers, optical detectors and the data
processing techniques developed for radio interferometry have recently enabled astronomers to produce high resolution images with optical arrays. At present only a few optical interferometer arrays
are capable of image formation but many more are planned or under construction. The basic principles underlying the operation of optical interferometers have not changed, so I begin with a look at
some of the earliest instruments.
• Superscript numbers 1) link to the References section of this report and relate to relevant reference numbers.
• All unusual symbols are presented as GIF images.
For a more detailed description of astronomical optical interferometry I would recommend this review article by John Monnier (68 pages).
The American physicist A. A. Michelson demonstrated the practicability of measuring light sources using optical interferometry^2 in 1890 with the experimental apparatus shown in Figure 1.
Figure 1 - Michelson's experimental apparatus
Various masks were placed in front of incoherent light sources, acting as "artificial stars" for the experiment. Light from a distant artificial star passed through slits O and O' and was then
focused by a lens of focal length y to form an image on the screen. In a mathematical analysis of this experiment it is easier to first consider a monochromatic point source at Q on the optic axis.
Spherical wavefronts will radiate from the source reaching slits O and O' simultaneously. Light passing through slit O will interfere with light passing through slit O' forming intensity fringes on
the screen either side of point P. The optical path length from Q to point P on the screen is the same for rays travelling through either slit. This will not be the general case for light rays
travelling to an arbitrary point on the screen from Q. The difference in optical path length between light rays travelling via slit O and those travelling via O' will then be xv/y, where v is the co-ordinate on
the screen shown in Figure 1. When light rays from the two slits are combined on the screen they will interfere producing intensity proportional to 1 + cos(kxv/y), where k is the wavenumber defined as 2π/λ. If the point source is displaced from Q by an angle, the whole fringe pattern shifts across the screen, so an extended incoherent source produces a superposition of shifted fringe patterns with reduced visibility.
Michelson was not able to make quantitative measurements of the visibility of interference fringes on the screen but did make measurements of the slit separation x which gave minimum fringe
visibility. The size of the artificial star can be calculated from this measurement provided its shape and distance are known. With modern photodiode detectors it is possible to make accurate
intensity measurements and hence calculate fringe visibilities. The viewing screen is replaced by four light intensity detectors as shown in Figure 2. Detector 1 is positioned so that the optical
path lengths from the detector to slit O and from the detector to slit O' are equal. Detector 2 is positioned so that the optical path lengths to O and O' differ by 1/4 of the mean wavelength.
For detectors 3 and 4 the path differences are 1/2 of a wavelength and 3/4 of a wavelength respectively. If A is the complex amplitude of the light arriving at detector 1 along the path through
slit O, the amplitude of the light arriving via slit O' will be Aexp[-iφ], where φ is the phase lag of the O' beam set by the source position, giving a total amplitude of A+Aexp[-iφ]. The intensity at detector 1 will be:
I[1] = |A + Aexp[-iφ]|² = 2|A|²(1 + cos φ)
Similarly if A is the amplitude of the light arriving at detector 2 along the path through slit O, the intensity at the detector will be:
I[2] = 2|A|²(1 + sin φ)
For detector 3:
I[3] = 2|A|²(1 - cos φ)
For detector 4:
I[4] = 2|A|²(1 - sin φ)
I have defined the complex fringe intensity I as (I[1]-I[3])+i(I[2]-I[4]), where I[1] to I[4] are the intensities shown above and i is the square root of -1, so that I = 4|A|²exp[iφ].
Figure 2 - Visibility measurement Figure 3 - Alternative optical arrangement
As the complex intensity I is a linear combination of intensities, the complex intensity of an extended incoherent source can be calculated by summing the contributions from each point on the source.
For light arriving from a direction at a small angle α to the optic axis, the phase lag between the two slit paths is φ = kxα (provided α is small), so the complex intensity contributed by light received between angles α and α + dα is dI = B(α) exp[ikxα] dα, where B(α) is the one-dimensional source brightness distribution.
If the variable u is defined as u=kx, then I[TOTAL] = ∫ B(α) exp[iuα] dα is proportional to the Fourier transform of the one dimensional source brightness distribution B(α). If this Fourier transform is normalised to have
a total intensity of unity we obtain the complex visibility:
V(u) = ∫ B(α) exp[iuα] dα / ∫ B(α) dα
Michelson did not have sensitive electronic detectors so his measurements relied on human eyesight. He succeeded in calculating the diameters of Jupiter's satellites ^3 using an aperture mask with
two slits of adjustable separation placed over the objective of a 12-inch telescope. He measured the slit separations at which the fringes were least visible, and calculated the diameters of the
satellites by assuming them to be circular disks with uniform illumination. His results agreed well with visual estimations of the satellite diameters which had been made using large optical telescopes.
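As a rough modern illustration of this procedure (my own sketch with made-up numbers, not part of the original report): for a uniformly illuminated circular disk of angular diameter θ the fringe visibility first falls to zero when the aperture separation reaches about 1.22λ/θ, so locating that minimum gives the diameter.

import numpy as np
from scipy.special import j1

wavelength = 550e-9                          # observing wavelength in metres (assumed)
theta = 0.047 * np.pi / (180 * 3600)         # a 47 milliarcsecond disk, in radians

separations = np.linspace(0.1, 5.0, 2000)    # aperture separations in metres
arg = np.pi * separations * theta / wavelength
visibility = np.abs(2 * j1(arg) / arg)       # uniform-disk visibility amplitude

first_null = separations[np.argmin(visibility)]
print(first_null, 1.22 * wavelength / theta) # both come out close to 2.9 m here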
With the optical arrangement of Figure 2 a large objective lens or mirror is required for measurements with large slit separations and much of the light that passes through the slits in the aperture
mask is wasted. Figure 3 shows an alternative optical arrangement which uses separate optical elements for the two beams. The incident light is from a distant point source at a small angle to the optic axis. The optical path lengths from the two input apertures O and O' to each
of the detectors are the same as in Figure 2, but in this arrangement all the light entering the apparatus is used efficiently. In practice glass blocks might produce reflections within the apparatus
and would probably not be used. Instead, the appropriate difference in optical path length from the detectors to each of the slits could be produced by careful adjustment of the mirror positions. By
varying the optical path length of one of the beams it is possible to calculate the complex visibility with just one detector. As the optical path length is varied the interference fringes will be
scanned past the detector. The amplitude and phase of the intensity variations at the detector will be linearly related to the amplitude and phase of the complex visibility. In most modern
interferometers the intensity variation with time is Fourier transformed to give an amplitude and phase for the complex visibility.
In 1891 Michelson ^4 discussed the possibility of obtaining information about the brightness distribution within a source from interferometric measurements. He conceded that this was not practicable
as it would require accurate measurements of fringe visibility at many different slit separations. Over the next sixty years most of the work on optical interferometry concentrated instead on the
measurement of stellar diameters and the separation of binary stars^5. In 1920 A. A. Michelson and F. G. Pease^6 constructed a separate-element Michelson stellar interferometer as shown in Figure 4.
The separation of the siderostat mirrors was equivalent to the slit separation in his earlier interferometers. Separations of over 20ft were possible, enabling measurements of the diameters of
several large stars to be performed. An interferometer with a 50ft siderostat separation ^7 was built in 1930, with mirrors attached to 9 tons of steel girderwork on the front of a 40 inch optical
telescope. Very few astronomical measurements were made with this instrument due to the difficulty of operating it. With both of these interferometers atmospheric fluctuations produced phase
variations which caused the fringes to "shimmer", making observation extremely difficult. R. Hanbury Brown^8 estimated that atmospheric fluctuations may have led to errors of between ten and twenty
percent in Michelson and Pease's stellar diameter calculations. Hanbury Brown produced more accurate measurements using an intensity interferometer at Narrabri ^8. Intensity interferometers look at
the statistical relationship between the intensities at two separated detectors observing a distant source. Quantum mechanics suggests that this is related to the amplitude of the complex visibility
function, allowing measurements of visibility with large detector separations. Unfortunately the phase of the complex visibility cannot be determined, and accurate visibility amplitudes can only be
calculated for bright astronomical sources.
Figure 4 - Simple separate element interferometer
Much of the early work in interferometric imaging was done by radio astronomers. Cosmic radio emissions were discovered in the 1930s^9 and radio interferometry developed after the Second World War.
In 1946 Ryle and Vonberg^10 constructed a radio analogue of the Michelson interferometer and soon located a number of new cosmic radio sources. The signals from two radio antennas were added
electronically to produce interference. Ryle and Vonberg's telescope used the rotation of the Earth to scan the sky in one dimension. Fringe visibilities could be calculated from the variation of
intensity with time. Later interferometers included a variable delay between one of the antennas and the detector as shown in Figure 5.
Figure 5 - Radio interferometer
In Figure 5 radio waves from a source at an angle θ to the vertical must travel an extra distance l = a sin θ in order to reach the left-hand antenna. These signals are thus delayed relative to the signals received at the right hand antenna by a
time τ = l/c = (a sin θ)/c, where c is the speed of the radio waves. The signal from the right hand antenna must be delayed artificially by the same length of time for constructive interference to occur. Interference
fringes will be produced by sources with angles in a small range either side of θ. As the artificial delay t is varied, the angle θ at which fringes are produced changes. The effective baseline is x = a cos θ, where a is the actual telescope separation.
An interferometer constructed from two antennas with separation variable in one direction can only provide information about the sky brightness distribution in one dimension. However, a two
dimensional map of the sky can be produced if the separation vector is varied in two dimensions. In Figure 6 the separation between two radio antennas is described by the vector (a,b) constructed
from two cartesian co-ordinates. The position of the source in the sky is described using the angles θ[a] and θ[b] measured from the a axis and b axis. As in Figure 5, the effective baseline (x,y) will be the projection of the
separation vector onto a plane perpendicular to the source direction: (x,y)=(a cos θ[a], b cos θ[b]). The co-ordinates u and v are conjugate to the angles measured along the two axes, with u=kx and v=ky, where k is the wavenumber of the radio source defined
as 2π/λ. A single measurement with antenna separation (a,b) can thus provide values of the complex visibility function at two points in the u-v plane: (u,v) and (-u,-v).
Figure 6 - The telescope separation vector (a,b)
In order to produce a perfect map of the sky brightness distribution the complex visibility would have to be known for all points in the u-v plane (Fourier transform plane). The complex visibility
must be known at all points in an n×m rectangular array in the u-v plane for a portion of the sky to be mapped with resolution equivalent to n×m pixels. The radio source brightness distribution B(θ[a],θ[b]) shown in Figure 7 is used as an example. A model computed from a 40×40 array of complex visibility values in the u-
v plane gives a relatively accurate model of the source brightness distribution, as shown in Figure 8. Figure 9 shows the cruder model formed from a 9×9 array of complex visibility measurements.
Figure 7 - Source brightness distribution Figure 8 - brightness distribution with 40x40 Fourier components Figure 9 - brightness distribution with 9x9 components
Axes and brightness key
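A minimal Python sketch of the idea behind these figures (my own toy example, not the actual images from the report): build a small model sky, take its 2-D Fourier transform, keep only a central block of components, and transform back to see how the reconstruction degrades.

import numpy as np

N = 64
sky = np.zeros((N, N))
sky[28:36, 30:34] = 1.0                     # a simple rectangular "source"

vis = np.fft.fftshift(np.fft.fft2(sky))     # full grid of complex visibilities

def truncated_image(vis, keep):
    # keep only a central (keep x keep) block of Fourier components
    mask = np.zeros_like(vis)
    c = vis.shape[0] // 2
    h = keep // 2
    mask[c - h:c - h + keep, c - h:c - h + keep] = 1
    return np.fft.ifft2(np.fft.ifftshift(vis * mask)).real

for keep in (9, 40):
    model = truncated_image(vis, keep)
    print(keep, round(float(np.abs(sky - model).mean()), 4))   # residual shrinks as keep grows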
For direct measurement of the complex visibility at a rectangular array of points in the u-v plane a large number of different baselines is required. The cost of radio antennas soon led astronomers
to try and find methods for calculating the complex visibility throughout the u-v plane using measurements from only a small number of antennas. The most important of these is the Earth rotation
aperture synthesis technique.
If an interferometer is constructed from two antennas with a separation which is not parallel to the Earth's axis of rotation, the effective baseline of the interferometer will rotate. Figure 10
shows an interferometer in the northern hemisphere with antennas located at A and B. During the day antenna A will move to A' and then A'' whilst B moves to B' and B''. Only the relative positions of
the two antennas are relevant when constructing a map of complex visibility in the u-v plane. To an irrotational observer standing beside antenna A, antenna B would appear to rotate in a circle, and
vice-versa. In a twelve hour period the complex visibility can be measured at all points on an ellipse in the u-v plane. If one of the antennas is mobile, the antenna separation can be altered every
day so as to measure complex visibilities in a different part of the u-v plane. A mathematical function which approximates the complex visibility is created by interpolation from the measurements
made. This can then be Fourier transformed to give an approximation to the source brightness distribution.
Figure 10 - Rotation of the Earth
Information about the fine structural detail of a radio source is found at large values of u and v due to the reciprocal nature of the Fourier transform plane. In order to produce a radio map of high
angular resolution it is therefore necessary to measure fringe visibilities over very long baselines. The radio signal received at an antenna cannot be sent further than a few tens of kilometers by
electrical cable due to the signal loss incurred. Electronic amplification en route introduces delays and distortion to the signal. The most effective method for measuring the complex visibility for
very long baseline interferometry (VLBI) is to first record the signals received by each antenna along with timing signals from a local atomic clock. The recorded signals from each antenna can then
be sent to a laboratory where they are replayed to produce interference. Figure 11 shows the received signals from three antennas being recorded onto magnetic tapes along with timing signals from
local atomic clocks. From these tapes the complex visibility can be calculated at six points in the u-v plane corresponding to the antenna separations a[1], -a[1], a[2], - a[2], a[3] and -a[3] in
Figure 11.
Figure 11 - Recording radio signals for very long baseline interferometry
Each antenna will be a different distance from the radio source, and as with the short baseline radio interferometer (Figure 5) the delays incurred by the extra distance to one antenna must be added
artificially to the signals received at each of the other antennas. The approximate delay required can be calculated from the geometry of the problem. The tapes are played back in synchronous using
the recorded signals from the atomic clocks as time references, as shown in Figure 12. If the position of the antennas is not known to sufficient accuracy or atmospheric effects are significant, fine
adjustments to the delays must be made until interference fringes are detected. If the signal from antenna A is taken as the reference, inaccuracies in the delay will lead to errors e [B] and e [C]
in the phases of the signals from tapes B and C respectively. As a result of these errors the phase of the complex visibility cannot be measured with a very long baseline interferometer.
Figure 12 - Visibility measurements in very long baseline interferometry
The phase of the complex visibility depends on the symmetry of the source brightness distribution. Any brightness distribution B(θ) can be written as the sum of a symmetric component B[S](θ) and an anti-symmetric component B[A](θ). The symmetric component B[S] of the brightness distribution only contributes to the real part
of the complex visibility, while B[A] only contributes to the imaginary part. To demonstrate the dependence of the phase of the complex visibility on the symmetry of the source I separated the 9×9
array of complex visibility used to produce Figure 9 into real and imaginary parts. Figure 13 was produced using only the real component of the visibility, with the imaginary component set to zero.
As the phase of the complex visibility is zero throughout the u-v plane the image is symmetric about its centre. In Figure 14 the real component was removed instead, giving an anti-symmetric image.
As the phase of each complex visibility measurement cannot be determined with a very long baseline the symmetry of the corresponding contribution to the source brightness distributions is not known.
Figure 13 - Symmetric components; Figure 14 - Anti-symmetric components
R. C. Jennison developed a novel technique for obtaining information about visibility phases when delay errors are present, using an observable called the closure phase. Although his initial
laboratory measurements of closure phase had been done at optical wavelengths, he foresaw greater potential for his technique in radio interferometry. In 1958^11 he demonstrated its effectiveness
with a radio interferometer, but it only became widely used for long baseline radio interferometry in 1974^12. A minimum of three antennas are required. I will initially look at the simplest case,
with three antennas in a line separated by the distances a[1] and a[2] shown in Figure 11. The radio signals received are recorded onto magnetic tapes and sent to a laboratory as described above. The
effective baselines for a source at an angle θ to the vertical will be x[1]=a[1]cos θ, x[2]=a[2]cos θ and x[3]=(a[1]+a[2])cos θ. The true fringe phases on baselines x[1], x[2] and x[3] I will call φ[1], φ[2] and φ[3] respectively. The phase of interference fringes on each
baseline will contain errors resulting from the errors e[B] and e[C] in the signal phases. The measured phases for baselines x[1], x[2] and x[3], denoted ψ[1], ψ[2] and ψ[3], will be:
ψ[1] = φ[1] + e[B] - e[C]
ψ[2] = φ[2] - e[B]
ψ[3] = φ[3] - e[C]
Jennison defined the quantity φ[C] for the three antennas as:
φ[C] = ψ[1] + ψ[2] - ψ[3] = φ[1] + φ[2] - φ[3]
φ[C] is often called the closure phase^12.
The contributions to φ[C] from the errors e[B] and e[C] in the signal phases cancel out, allowing accurate measurement. Using measurements of φ[C], φ[3] can be written in terms of φ[1] and φ[2], the unknown
phases. If many closure phase measurements are made the complex visibility can be written as a function of several unknown phases. In order to produce an image of the sky the unknown phases must be
estimated so that the complex visibility function can be calculated. This is usually done using iterative algorithms^13,14,15 which attempt to minimise unphysical properties of the image, such as
areas of negative brightness (black areas above and below the source in figures 8 and 9) and large fluctuations in the background radio noise well away from the known location of the source. In radio
astronomy visibilities are typically measured on more than three baselines simultaneously, providing more information about the source than Jennison's closure phase technique. The mapping algorithms
are designed to retrieve the maximum amount of information from the measurements performed without adding artificial detail. Images have been produced with baselines of many thousands of kilometers
and resolution higher than one milliarcsecond.
Re: st: RE: -expand-, -expandcl-, and -set mem-; limit to the number of obs?
From Maarten buis <maartenbuis@yahoo.co.uk>
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: -expand-, -expandcl-, and -set mem-; limit to the number of obs?
Date Sun, 11 Oct 2009 23:42:28 -0700 (PDT)
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
--- Misha wrote:
> > "Why am I asking for so much memory?", you might
> > ask. Well, I have a data set that, when expanded,
> > ought to give me about 2.63e+09 (i.e., nearly
> > three billion) observations.
Many commands allow you to work with the unexpanded
data by using fweights, see -help weight-.
> How large is the .dta file you are using (not in
> observations, but in terms of disk space)?
> Keep in mind that the "size" of your dataset is more than
> just the number of observations; so, how many variables are
> in the dataset? how many characters/digits are in your
> variables? are there labels, notes, or other
> characteristics stored in your .dta file? All of these
> will contribute to the amount of memory needed open the file
> in Stata (this is also why it is difficult to do a simple
> "back-of-the-envelope" calculation of the exact amount of
> memory you need).
Here is how to do such back of the envelope calculation:
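As a rough sketch, assuming a hypothetical layout of two 4-byte float and three 2-byte integer variables per observation (the actual variables are not listed in this thread), the estimate is simply observations times bytes per observation:

# Hypothetical variable layout, not the original poster's dataset.
n_obs = 2.63e9              # observations after -expand-
bytes_per_obs = 2*4 + 3*2   # two 4-byte floats + three 2-byte ints = 14 bytes
total_bytes = n_obs * bytes_per_obs
print(total_bytes / 2**30)  # roughly 34 GiB for this layout alone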
Hope this helps,
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Bug in SciPy’s erf function
Last night I produced the plot below and was very surprised at the jagged spike. I knew the curve should be smooth and strictly increasing.
My first thought was that there must be a numerical accuracy problem in my code, but it turns out there’s a bug in SciPy version 0.8.0b1. I started to report it, but I saw there were similar bug
reports and one such report was marked as closed, so presumably the fix will appear in the next release.
The problem is that SciPy’s erf function is inaccurate for arguments with imaginary part near 5.8. For example, Mathematica computes erf(1.0 + 5.7i) as -4.5717×10^12 + 1.04767×10^12 i. SciPy computes
the same value as -4.4370×10^12 + 1.3652×10^12 i. The imaginary component is off by about 30%.
Here is the code that produced the plot.
from scipy.special import erf
from numpy import linspace, exp, sqrt
import matplotlib.pyplot as plt

def g(y):
    # -Im/Re of exp(z^2)*(1 - erf(z)) for complex z; the plot of g shows the spike
    z = (1 + 1j*y) / sqrt(2)
    temp = exp(z*z) * (1 - erf(z))
    u, v = temp.real, temp.imag
    return -v / u

x = linspace(0, 10, 101)
plt.plot(x, g(x))
plt.show()
Update: in the development branch the value now agrees with Mathematica:
In [31]: sp.__version__
Out[31]: '0.9.0.dev'
In [32]: sp.special.erf(1.0 + 5.7j)*1e-12
Out[32]: (-4.5717045780553551+1.0476748318787288j)
The fix is also included in the current Scipy 0.8.0 release.
erf(x) = (2/√π) ∫₀ˣ exp(−t²) dt   (1)
erf(x) ≈ 1 − [1 + 0.278393x + 0.230389x² + 0.000972x³ + 0.078108x⁴]⁻⁴ + ε(x)   (2)
I don't understand how to get (2) from (1).
Very challenging; Simplify the trigonometric complex expression.
June 16th 2008, 12:52 PM #1
Junior Member
Jun 2008
Very challenging; Simplify the trigonometric complex expression.
$tan[ln(\sqrt[2i]{x^2 + 1}) - ln(\sqrt[i]{x-i})]$
Stopping at
$\frac{1}{2i} ln(x+i) - \frac{1}{2i} ln(x-i)$,
note that $ln(x+i) = \tfrac{1}{2}\,ln(x^2+1) + i\ arctan(1/x)$ and
$ln(x-i) = \tfrac{1}{2}\,ln(x^2+1) - i\ arctan(1/x)$ for real $x>0$.
The rest follows.
Now try it again using the assumption that x is complex, if you choose to take it on.
Last edited by mathwizard; June 17th 2008 at 02:31 PM.
Hello !
Your problems are really challenging
$tan[ln(\sqrt[2i]{x^2 + 1}) - ln(\sqrt[i]{x-i})]$
I don't know if I'm going to do it correctly...but well, what happens if one doesn't try anything ?
$\tan \left(\ln(\sqrt[2i]{x^2+1})-\ln(\sqrt[i]{x-i})\right)$
I'll study first $\left(\ln(\sqrt[2i]{x^2+1})-\ln(\sqrt[i]{x-i})\right)=S$
$S=\ln\left((x^2+1)^{\frac{1}{2i}}\right)-\ln\left((x-i)^{\frac 1i}\right)$
$S=\frac{1}{2i} \ln [(x-i)(x+i)]-\frac 1i \ln[(x-i)]$
$S=\frac{1}{2i} \ln(x-i)+\frac{1}{2i} \ln(x+i)-\frac 1i \ln(x-i)$
$S=\frac{1}{2i} \ln(x+i)-\frac{1}{2i}\ln(x-i)$
$S=\frac{1}{2i} \ln \left(\frac{x+i}{x-i}\right)$
(I know we can do it in a more direct way, but it doesn't matter)
Now, remember that :
$\cos x=\frac{e^{ix}+e^{-ix}}{2}$
$\sin x=\frac{e^{ix}-e^{-ix}}{2i}$
$\tan x=\frac{\sin x}{\cos x} \implies \tan S=\frac 1i \cdot \frac{e^{iS}-e^{-iS}}{e^{iS}+e^{-iS}}$
$e^{iS}=e^{i \cdot \frac{1}{2i} \ln \left(\frac{x+i}{x-i}\right)}=e^{\frac 12 \ln \left(\frac{x+i}{x-i}\right)}=\sqrt{\frac{x+i}{x-i}}$
Similarly, we get :
$\tan S=\frac 1i \cdot \left(\sqrt{\frac{x+i}{x-i}}-\sqrt{\frac{x-i}{x+i}}\right) \cdot \frac{1}{\sqrt{\frac{x+i}{x-i}}+\sqrt{\frac{x-i}{x+i}}}$
Multiply by $1=\frac{\sqrt{\frac{x+i}{x-i}}-\sqrt{\frac{x-i}{x+i}}}{\sqrt{\frac{x+i}{x-i}}-\sqrt{\frac{x-i}{x+i}}}$ :
$\tan S=\frac 1i \cdot {\color{blue}\left(\sqrt{\frac{x+i}{x-i}}-\sqrt{\frac{x-i}{x+i}}\right)^2} \cdot \frac{1}{\color{red}\frac{x+i}{x-i}-\frac{x-i}{x+i}}$
${\color{blue}\left(\sqrt{\frac{x+i}{x-i}}-\sqrt{\frac{x-i}{x+i}}\right)^2}=\frac{x+i}{x-i}+\frac{x-i}{x+i} -2 \underbrace{\sqrt{\frac{x+i}{x-i}} \cdot \sqrt{\frac{x-i}{x+i}}}_{=1}$
$\tan S=\frac 1{\color{red}i} \cdot \left(\frac{2(x^2-1)}{x^2+1}-2\right) \cdot \frac{x^2+1}{4{\color{red}i}x}$
$\tan S={\color{red}-} \frac{1}{4x} \cdot \left(2(x^2-1)-2(x^2+1)\right)$
$\tan S=-\frac{1}{4x} \cdot (-4)$
$\boxed{\tan S=\frac{1}{x}}$
Yay !
Edit: and this was done assuming that x takes values for which none of these steps break down...
Re-edit: new version, mistakes corrected
Last edited by Moo; June 16th 2008 at 02:33 PM.
Ok... I have found a quicker method (takin' a shower makes things easier)
$\tan \left(\ln(\sqrt[2i]{x^2+1})-\ln(\sqrt[i]{x-i})\right)$
\begin{aligned} S=\ln(\sqrt[2i]{x^2+1})-\ln(\sqrt[i]{x-i}) &=\ln(\sqrt[2i]{(x-i)(x+i)})-\ln(\sqrt[2i]{(x-i)^2}) \\ &=\ln \left(\sqrt[2i]{\frac{(x-i)(x+i)}{(x-i)^2}}\right) \\ &=\frac{1}{2i} \cdot \ln \left(\frac{x+i}{x-i}\right) \end{aligned}
\begin{aligned} \tan S &=\frac 1i \cdot \frac{e^{iS}-e^{-iS}}{e^{iS}+e^{-iS}} \quad \leftarrow \quad \text{multiply by } \frac{e^{iS}}{e^{iS}} \\ &=\frac 1i \cdot \frac{e^{2iS}-1}{e^{2iS}+1} \end{aligned}
$e^{2iS}=e^{2i \cdot \frac{1}{2i} \cdot \ln \left(\frac{x+i}{x-i}\right)}=\frac{x+i}{x-i}$
\begin{aligned} \tan S&=\frac 1i \cdot \frac{\frac{x+i}{x-i}-1}{\frac{x+i}{x-i}+1} \\ &=\frac 1i \cdot \frac{x+i-x+i}{x+i+x-i} \\ &=\boxed{\frac 1x} \end{aligned}
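A quick numerical check of the boxed result (a sketch using Python's cmath and principal branches of the complex logarithm, which is safe for real x > 0):

import cmath

def original_expression(x):
    # S = (1/(2i)) ln(x^2 + 1) - (1/i) ln(x - i); the expression in question is tan(S)
    s = cmath.log(x**2 + 1) / 2j - cmath.log(x - 1j) / 1j
    return cmath.tan(s)

for x in (0.5, 1.0, 3.7):
    # The first two values agree (up to rounding error) for every x tested.
    print(x, original_expression(x), 1 / x)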
Princeton Junction Precalculus Tutor
Find a Princeton Junction Precalculus Tutor
...I teach Linear Algebra (MTH 201) at Burlington County College. This subject includes topics such as linear systems, matrix operations, vectors and vector spaces, linear independence, basis and
dimension, homogeneous systems, rank, coordinates and change of basis, orthonormal bases, linear transf...
17 Subjects: including precalculus, calculus, geometry, statistics
...I've always excelled in all academic areas and taking standardized tests. When I wanted to begin tutoring the LSAT, I took the test, scoring 175. I've also devoured the teaching materials from
several well known test prep companies in order to understand how they break the test down to help students with a variety of learning styles.
16 Subjects: including precalculus, calculus, geometry, algebra 2
I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because
this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including precalculus, calculus, physics, ACT Math
...I've been using Apple products ever since I received my iphone years ago. I currently use a macbook air and have taught friends and family to use their iPhones, apple computers, iPads, and
iTunes. Using these products requires an understanding of their software and competitor products.
26 Subjects: including precalculus, chemistry, calculus, physics
I graduated from West Point with a Bachelor of Science degree in Engineering Management, and I currently teach mathematics, physics and engineering at an independent school in the Philadelphia
suburbs. I have tutored middle and high school students in the areas of PSAT/SAT/ACT preparation, math (Al...
19 Subjects: including precalculus, English, calculus, GRE
Geodesic of a Sphere in spherical polar coordinates (Taylor's Classical Mechanics)
1. The problem statement, all variables and given/known data
"The shortest path between two point on a curved surface, such as the surface of a sphere is called a geodesic. To find a geodesic, one has to first set up an integral that gives the length of a path
on the surface in question. This will always be similar to the integral (6.2) but may be more complicated (depending on the nature of the surface) and may involve different coordinates than x and y.
To illustrate this, use spherical polar coordinates (r, θ, φ) to show that the length of a path joining two points on a sphere of radius R is
$L = R \int_{\theta_1}^{\theta_2} \sqrt{1 + \sin^2\theta \, \phi'(\theta)^2} \, d\theta$
if (θ1, φ1) and (θ2, φ2) specify the two points and we assume that the path is expressed as φ = φ(θ)."
2. Relevant equations
3. The attempt at a solution
I'm unsure of how much this question is asking for. I was able to quickly work out the solution after looking up the line element for the surface of a sphere in spherical polar and using that in
place of the Cartesian form of ds.
But then I was wondering if whether the question was asking me to derive the line element. It doesn't seem likely since this is one of the * questions which are supposed to be the easiest and I've
already completed the ** questions with no difficulty.
In any case, I worked on deriving the line element for the sake of it and got stuck. I found the differentials for x,y and z (dx, dy and dz) and put them into the equation for ds. What do I do from
here? Do I tediously expand the brackets involving three terms or is there something that I'm missing?
Thanks in advance.
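For reference, a sketch of the expansion described in the attempt above (not part of the original post): with r = R fixed and x = R sinθ cosφ, y = R sinθ sinφ, z = R cosθ,
$dx = R(\cos\theta\cos\phi\,d\theta - \sin\theta\sin\phi\,d\phi)$
$dy = R(\cos\theta\sin\phi\,d\theta + \sin\theta\cos\phi\,d\phi)$
$dz = -R\sin\theta\,d\theta$
Squaring and adding, the cross terms in dθ dφ cancel and $\cos^2\phi+\sin^2\phi=1$ collapses the rest, leaving
$ds^2 = dx^2 + dy^2 + dz^2 = R^2(d\theta^2 + \sin^2\theta\,d\phi^2)$
so along a path φ = φ(θ),
$L = \int ds = R\int_{\theta_1}^{\theta_2}\sqrt{1 + \sin^2\theta\,\phi'(\theta)^2}\,d\theta.$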
Convert cubic centimeters per minute to cubic inch per hour - Conversion of Measurement Units
›› Convert cubic centimetre/minute to cubic inch/hour
›› More information from the unit converter
How many cubic centimeters per minute in 1 cubic inch per hour? The answer is 0.273117737269.
We assume you are converting between cubic centimetre/minute and cubic inch/hour.
You can view more details on each measurement unit:
cubic centimeters per minute or cubic inch per hour
The SI derived unit for volume flow rate is the cubic meter/second.
1 cubic meter/second is equal to 60000000 cubic centimeters per minute, or 219685475.576 cubic inch per hour.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between cubic centimeters/minute and cubic inches/hour.
Type in your own numbers in the form to convert the units!
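The quoted factor can be reproduced from the exact definition of the inch (a sketch, not the site's own code):

CM_PER_INCH = 2.54                     # exact by definition
cc_per_cubic_inch = CM_PER_INCH ** 3   # 16.387064 cm^3 in one cubic inch
# One cubic inch per hour expressed in cubic centimetres per minute:
factor = cc_per_cubic_inch / 60.0
print(factor)                          # ~0.2731177, close to the factor quoted above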
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
Saturnian Ring System Simulation
If the Java applet fails to start due to Java Security issues, click here.
With this applet, you can investigate mass ratios and eccentricities that lead to stable ring systems vs. unstable ones.
The parameter ecc controls the eccentricity. A value of zero produces circular orbits. Positive values produce an eccentric ring.
For a circular ring, if the mass of each ring body is no more than 2.3 times the mass of Saturn divided by the cube of the number of ring particles, then the system can be expected to be stable;
otherwise not. See Linear Stability of Ring Systems for the derivation of this inequality.
In the applet below, M is the mass of Saturn (in Earth-masses), m is the mass of an individual ring body, and n is the number of ring bodies.
The textfield labeled gamma is the ratio m*n^3/M. If this value is smaller than 2.3, the system will be stable. Large values will be unstable.
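A rough illustration of the stability check (a sketch, not the applet's code; Saturn's mass in Earth masses is approximate):

EARTH_MASSES_SATURN = 95.16   # Saturn's mass in Earth masses (approximate)

def ring_stability(m, n, M=EARTH_MASSES_SATURN):
    """Return (gamma, stable?) for n ring bodies each of mass m (in Earth masses)."""
    gamma = m * n**3 / M
    return gamma, gamma < 2.3

print(ring_stability(m=1e-5, n=100))   # gamma ~ 0.11 -> stable
print(ring_stability(m=3e-4, n=100))   # gamma ~ 3.2  -> unstable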
You will find that you don't need to increase m very much to make the system unstable. If you set "warp" to 100, the integrator will show the instability very quickly. Give it a whirl.
Notes: (1) The warp parameter only controls how often the screen is updated---large values mean that many time steps of the integrator are performed between each screen update. This makes the
simulation run much faster as updating the screen image is more time consuming that a step of the integrator.
(2) Due to unresolved technicalities, the time-step parameter dt can only be changed if 'ecc' is set to zero (i.e., circular orbits).
(3) For WebGL version of applet, click here.
Homework Help
Posted by Steven on Sunday, March 21, 2010 at 11:01am.
Solve using elimination method
• math - drwls, Sunday, March 21, 2010 at 11:54am
Add the two equations together, by adding the left and right sides separately. That gives you
10y = 40.
Next divide both sides by 10 for the value of y.
Once you know y, substitute its value into either of your original equations to calculate x.
• math - Steven, Sunday, March 21, 2010 at 1:14pm
So would the ordered pair be (1,4)?
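For what it's worth, a quick sketch with a hypothetical system consistent with the working above (the original pair of equations is not shown on this page; the example system below is assumed, chosen so that adding the equations gives 10y = 40) confirms the ordered pair:

# Hypothetical system:  x + 5y = 21  and  -x + 5y = 19
# Elimination: adding the two equations removes x and leaves 10y = 40.
y = 40 / 10          # y = 4
x = 21 - 5 * y       # back-substitute into x + 5y = 21, giving x = 1
print(x, y)          # 1.0 4.0  -> ordered pair (1, 4)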
{"url":"http://www.jiskha.com/display.cgi?id=1269183666","timestamp":"2014-04-19T11:29:40Z","content_type":null,"content_length":"8498","record_id":"<urn:uuid:618305a6-d975-4bd7-bf6a-eb52f14b7b87>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computationally Efficient Probabilistic Linear Regression
Patent application title: Computationally Efficient Probabilistic Linear Regression
A computationally efficient method of performing probabilistic linear regression is described. In an embodiment, the method involves adding a white noise term to a weighted linear sum of basis
functions and then normalizing the combination. This generates a linear model comprising a set of sparse, normalized basis functions and a modulated noise term. When using the linear model to perform
linear regression, the modulated noise term increases the variance associated with output values which are distant from any data points.
1. A method comprising: adding a large number of uncorrelated basis functions to a weighted linear sum of M basis functions; normalizing the sum of all the basis functions to provide a normalized linear model; and using the normalized linear model to perform probabilistic linear regression to predict a value and an associated variance, wherein the value is one of: a position of an image feature, a relevance score, a matching score, a usage rate and a load on an entity.
2. A method according to claim 1, wherein the large number of uncorrelated basis functions comprise an infinite number of uncorrelated basis functions.
3. A method according to claim 1, wherein the sum of all the basis functions is normalized to a pre-defined prior envelope.
4. A method according to claim 1, wherein the sum of all the basis functions is normalized to a constant prior variance.
5. A method according to claim 1, wherein the normalized linear model is given by: f(x) = √c [ Σ_{m=1}^{M} w_m φ_m(x)/√d(x) + w_0(x)/√d(x) ] where c is the pre-defined prior variance envelope, φ_m is a basis function, w_m is a weight associated with the basis function, w_0(x) is a white noise Gaussian process and d(x) is a diagonal of a covariance function.
6. A method according to claim 1, wherein using the normalized linear model to perform probabilistic linear regression comprises: using training data to learn weights associated with each of the M basis functions; receiving at least one input value; and computing a predicted output value and a variance associated with the output value based on the at least one input value and using the normalized linear model and the weights associated with each of the M basis functions.
7. A method according to claim 6, wherein the training data comprises at least one input and output value pair.
8. A method according to claim 6, wherein the at least one input value comprises a map of pixel intensities in an image and the predicted output value comprises a predicted position of a feature in the image.
9. A method according to claim 1, further comprising: making a decision based on a variance associated with an output value generated by the probabilistic regression.
10. One or more tangible device-readable media with device-executable instructions for performing steps comprising: adding a white noise term to a regression function comprising a linear model; normalizing the linear model with the white noise term to a pre-defined prior variance envelope; and using the normalized linear model to perform probabilistic regression to predict a value and an associated confidence factor.
11. One or more tangible device-readable media according to claim 10, wherein the pre-defined prior variance envelope comprises a constant prior variance.
12. One or more tangible device-readable media according to claim 10, wherein the normalized linear model is given by: f(x) = √c [ Σ_{m=1}^{M} w_m φ_m(x)/√d(x) + w_0(x)/√d(x) ] where c is the pre-defined prior variance envelope, φ_m is a basis function, w_m is a weight associated with the basis function, w_0(x) is a white noise Gaussian process and d(x) is a diagonal of a covariance function.
13. One or more tangible device-readable media according to claim 12, wherein using the normalized linear model to perform probabilistic regression comprises: using training data to learn the weights associated with the basis functions; receiving at least one input value; and calculating an output value corresponding to the at least one input value and a confidence factor associated with the output value using the normalized linear model.
14. One or more tangible device-readable media according to claim 10, wherein using the normalized linear model to perform probabilistic regression comprises: using the normalized linear model to perform probabilistic regression in order to track a location of a feature in a sequence of images.
15. One or more tangible device-readable media according to claim 10, further comprising device-executable instructions for performing steps comprising: making a decision based on a confidence factor associated with an output of the probabilistic regression.
16. A method comprising: adding a white noise term to a regression function comprising a linear model, the linear model comprising a sum of M weighted basis functions; normalizing the linear model with the white noise term to a pre-defined prior variance envelope; using training data to train the normalized linear model; and performing probabilistic regression using the trained normalized linear model to predict the position of a feature in an image and to compute a confidence parameter associated with said position.
17. A method according to claim 16, wherein the pre-defined prior variance envelope comprises a constant prior variance.
18. A method according to claim 16, wherein the normalized linear model is given by: f(x) = √c [ Σ_{m=1}^{M} w_m φ_m(x)/√d(x) + w_0(x)/√d(x) ] where c is the pre-defined prior variance envelope, φ_m is a basis function, w_m is a weight associated with the basis function, w_0(x) is a white noise Gaussian process and d(x) is a diagonal of a covariance function.
19. A method according to claim 16, further comprising: determining whether to update parameters associated with said feature based on the confidence parameter.
20. A method according to claim 16, further comprising: if said confidence parameter exceeds a threshold value, updating a stored position of the feature.
COPYRIGHT NOTICE [0001]
A portion of the disclosure of this patent contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent
document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND [0002]
Regression is used in many applications to predict the continuous value of an output, such as the value of the stock market or the pixel intensity in an image, given a new input. Regression uses
training data which is a collection of observed input and output pairs to perform the prediction. Probabilistic regression provides a degree of uncertainty in addition to the prediction of an output.
The degree of uncertainty provides an indication of the confidence associated with the predicted output value and this may be very useful in decision making. For example, a different decision may be
made if the regression indicates a low confidence in a value compared to a high confidence in the same value.
There are a number of known techniques for performing accurate probabilistic regression; however, all these techniques have a high computational cost of learning from data and of making predictions.
This means that they are not suitable for many applications; in particular they are not suitable for applications where decisions need to be made quickly. A number of techniques have been proposed to
make probabilistic regression more efficient and these are based on sparse linear models. Sparse linear models use linear combinations of a reduced number of basis functions. These sparse linear
models are, however, not suitable for use in decision making because they are overconfident in their predictions, particularly in regions away from any training data.
The embodiments described below are not limited to implementations which solve any or all of the disadvantages of known techniques for performing probabilistic regression.
SUMMARY [0005]
The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not
identify key/critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more
detailed description that is presented later.
A computationally efficient method of performing probabilistic linear regression is described. In an embodiment, the method involves adding a white noise term to a weighted linear sum of basis
functions and then normalizing the combination. This generates a linear model comprising a set of sparse, normalized basis functions and a modulated noise term. When using the linear model to perform
linear regression, the modulated noise term increases the variance associated with output values which are distant from any data points.
Many of the attendant features will be more readily appreciated as the same becomes better understood by reference to the following detailed description considered in connection with the accompanying drawings.
DESCRIPTION OF THE DRAWINGS [0008]
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:
[0009] FIG. 1 is a schematic diagram of an improved method for building probabilistic sparse linear models;
[0010] FIG. 2 shows a comparison between the performance of RVM, normalized RVM and the method shown in FIG. 1;
[0011] FIG. 3 shows the covariance matrices for each of RVM, normalized RVM and the method shown in FIG. 1;
[0012] FIG. 4 shows results for each of RVM, normalized RVM and the method shown in FIG. 1 when applied to Silverman's motorcycle data set;
[0013] FIG. 5 shows schematic diagrams of a visual tracking method;
[0014] FIG. 6 shows two example methods of performing computationally efficient probabilistic linear regression;
[0015] FIG. 7 shows an example of one of the method steps from FIG. 6 in more detail;
[0016] FIG. 8 shows a further method step which may follow on from the methods shown in FIG. 6; and
[0017] FIG. 9 illustrates an exemplary computing-based device in which embodiments of the methods described herein may be implemented.
Like reference numerals are used to designate like parts in the accompanying drawings.
DETAILED DESCRIPTION [0019]
The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the
present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or
equivalent functions and sequences may be accomplished by different examples.
As described above, sparse linear models are attractive for probabilistic regression for computational reasons and because they are easily interpreted. In these models, the regression function is simply a weighted linear sum of M basis functions φ_1(x), . . . , φ_M(x):

f(x) = Σ_{m=1}^{M} w_m φ_m(x) = w^T φ(x)   (1)

where x is a (vectorial) input. A popular Bayesian treatment is the relevance vector machine (RVM) in which a Gaussian prior is placed on the weights: p(w) = N(0, A), where A is a diagonal matrix of variance parameters a_1, . . . , a_M. The observed outputs y are assumed to be corrupted by Gaussian white noise of variance σ² from the underlying regression function f(x). Therefore, given a data set of N input-output pairs (x_1, y_1), . . . , (x_N, y_N), it is possible to compute the Gaussian posterior distribution on the weights p(w|y) and make a Gaussian prediction at a new point x*: p(y*|x*, y). In the RVM model it is customary to use localized basis functions centered on the training inputs. The model evidence p(y|A) is maximized to learn the variances of the weights A. An attractive property of the RVM is that most of the weights tend to zero, effectively pruning the corresponding basis functions. The result is a sparse, and hence computationally efficient, linear model with M<<N. The combination of a finite linear model with sparsity inducing priors on the weights is known as the Sparse Bayesian learning framework.
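For orientation, a minimal sketch in Python of such a finite linear model with a Gaussian prior on the weights is given below; the Gaussian basis functions, the toy data and all parameter values are illustrative assumptions and are not taken from the text above.

import numpy as np

def phi(x, centres, lam=1.0):
    # Gaussian basis functions centred on the training inputs (M of them).
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / lam**2)

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 20)                       # toy training inputs
y = np.sin(X) + 0.1 * rng.standard_normal(20)    # toy noisy outputs

centres = X                                      # one basis function per input
A = np.eye(len(centres))                         # diagonal prior weight variances
noise = 0.1 ** 2                                 # measurement noise variance sigma^2

Phi = phi(X, centres)
# Gaussian posterior over the weights: Sigma = (Phi^T Phi / sigma^2 + A^-1)^-1
Sigma = np.linalg.inv(Phi.T @ Phi / noise + np.linalg.inv(A))
mu = Sigma @ Phi.T @ y / noise

x_star = np.array([0.5])
phi_star = phi(x_star, centres)
mean = phi_star @ mu                             # predictive mean at x*
var = noise + phi_star @ Sigma @ phi_star.T      # predictive variance at x*
print(mean, var)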
As a Bayesian regression model, the RVM gives predictive distributions for new inputs, i.e., it supplies error bars, however, as described above, these uncertainties are often unreasonable and in
particular the model is overconfident away from observed data points.
[0022]FIG. 1
is a schematic diagram of an improved method for building probabilistic sparse linear models (i.e. linear combinations of a reduced set of basis functions). As shown in
FIG. 1
, an infinite number of totally uncorrelated basis functions 101 are added to the model (in addition to the M basis functions 102, as described above). The total a priori uncertainty is then
normalized (block 103) to a desired pre-specified envelope (e.g. to a constant prior variance). The normalization takes into account the shapes of all existing basis functions 102 in the model as
well as the model uncertainty about the value of the weight associated with each basis function. The output of the normalization (block 103) is the normalized basis functions 104 (as is standard)
with the addition of the modulated infinitely many basis functions 105 which provide a linear model 106. The linear model 106 can be used to make predictions 107 based on inputs 108.
By adding the infinite number of uncorrelated (delta) basis functions 101 prior to the normalization step, any very long term correlations which might otherwise be introduced by the normalization
step, are avoided. These long term correlations are introduced in naive normalization because whenever one basis function is dominant, it is normalized to a saturated non-zero constant value whilst
all other basis functions are normalized to zero. This introduces very strong long term correlation and results in overconfidence in predictions. The method shown in
FIG. 1
, avoids the long term correlation and hence also avoids the overconfidence. As a result the normalization step (block 103) may be referred to as a decorrelation and normalization step.
The modulated infinitely many basis functions 105, which may also be referred to as a modulated noise term (where the infinite number of totally uncorrelated basis functions 101 are considered a
white noise term), is suppressed in regions where data is available and is dominant in regions away from data points, thus leading to increased uncertainty in predictions away from data points. This
is shown graphically in FIG. 2 and described below. This suppression occurs within the normalization step (block 103).
Although the description herein refers to adding an infinite number of uncorrelated basis functions, in some examples a large number of uncorrelated basis functions may be used. In such an example,
the large number of basis functions are located in the regions relevant for prediction. The infinite number of basis functions (or white noise term) is, however, easier to describe mathematically and
there is no requirement to specify the position and/or density of basis functions.
FIG. 2 shows a comparison between the performance of RVM (graphs 201-202), normalized RVM, which has a constant prior variance by construction (graphs 203-204) and the method shown in
FIG. 1
, which is referred to herein as `decorrelation and normalization`, (graphs 205-206). The graphs on the left of FIG. 2 (graphs 201, 203, 205) show samples from the prior with the basis functions,
including normalization where appropriate, shown above the prior graphs. The graphs on the right (graphs 202, 204, 206) show the predictive distribution for a small training set drawn from the RVM
prior, given by the crosses in the graphs. Two standard deviation prior and predictive envelopes are shaded in grey. The parameters of the models are fixed to those of the generating RVM prior and no
learning is taking place.
Graph 202 in FIG. 2 shows that with RVM the predictions are extremely confident away from the data points (e.g. in region indicated by arrow 212 where the predictive envelope is very narrow). By
comparison, in normalized RVM (as shown in graph 204), the uncertainty is increased away from the data points (e.g. in region indicated by arrow 214); however, the uncertainty saturates at a constant
level which still results in overconfidence. Using the method shown in
FIG. 1
(see graphs 205-206), the envelope of the modulated infinitely many basis functions (or noise) 105, which is shown as a dashed line 215, dominates away from data points and as a result the
uncertainty increases away from data points (e.g. in regions indicated by arrows 216-218).
The decorrelation and normalization method (e.g. as shown in
FIG. 1
) provides a prior which decorrelates but does not decay away from the basis functions, whilst maintaining the computational sparsity. This method is described in more detail below. The large number
of uncorrelated basis functions provide a large degree of freedom away from the data points which provides the high levels of uncertainty in the model.
In the decorrelation and normalization method, a white noise Gaussian process w_0(x) of constant variance a_0 is added to the linear model before normalization, i.e. equation (1) becomes:

f(x) = Σ_{m=1}^{M} w_m φ_m(x) + w_0(x)   (2)

The prior distribution on f(x) can be described as a Gaussian process (GP) with degenerate covariance function:

k(x, x') = φ(x)^T A φ(x') + a_0 δ_{xx'}   (3)

Here, degenerate refers to the fact that any covariance matrix K formed from the covariance function k(x,x') will have a maximum rank M. The prior variance envelope (e.g. as shown in graph 205 of FIG. 2) is given by the diagonal of the covariance function:

d(x) = k(x, x) = Σ_{m=1}^{M} a_m φ_m²(x) + a_0   (4)

The covariance function may be normalized to achieve a constant prior variance using:

k̃(x, x') = c k(x, x') / √( k(x, x) k(x', x') ) = c k(x, x') / √( d(x) d(x') )   (5)

This provides a finite linear model with normalized basis functions plus a modulated white noise Gaussian process:

f(x) = √c [ Σ_{m=1}^{M} w_m φ_m(x)/√d(x) + w_0(x)/√d(x) ]   (6)

The effect of this normalized white noise process can be seen in graph 205 of FIG. 2, which shows samples from the prior for a particular choice of a_0. The basis functions are flattened to a degree, but unlike the noiseless normalized solution (graph 203), as one moves away from the basis function centers the white noise process takes over and decorrelates the sample functions. The original constant variance white noise process is normalized to have variance c·a_0/d(x), which means it dominates when the basis functions decay. Its envelope is shown as a dashed line 215 with the basis functions in graph 205. The relative magnitude of a_0 to A determines the strength of the noise process relative to the basis functions. Graph 206 shows predictions from the model, where the desired behavior of the predictive variances can now be observed: they grow as one moves away from the data.
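To make equations (4)-(6) concrete, here is a small sketch in Python of drawing samples from the normalized prior; the basis functions, the variances A and a_0 and the envelope c are illustrative choices rather than values from the text, and the square roots follow the reconstruction of equation (6) above.

import numpy as np

def phi(x, centres, lam=1.0):
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / lam**2)

centres = np.linspace(-2, 2, 5)   # M = 5 basis function centres (illustrative)
a = np.full(5, 1.0)               # prior weight variances a_1..a_M (diagonal of A)
a0 = 0.2                          # variance of the added white noise process
c = 1.0                           # desired constant prior variance

def d(x):
    # Prior variance envelope before normalization, equation (4).
    return phi(x, centres) ** 2 @ a + a0

def sample_normalized(x, rng):
    # One draw from the normalized prior, equation (6).
    w = rng.normal(0.0, np.sqrt(a))                    # weights w_m ~ N(0, a_m)
    w0 = rng.normal(0.0, np.sqrt(a0), size=len(x))     # white noise process w_0(x)
    return np.sqrt(c) * (phi(x, centres) @ w + w0) / np.sqrt(d(x))

rng = np.random.default_rng(1)
xs = np.linspace(-5, 5, 200)
draws = np.stack([sample_normalized(xs, rng) for _ in range(2000)])
pointwise_var = draws.var(axis=0)
print(pointwise_var.min(), pointwise_var.max())   # both close to c everywhere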
[0033]FIG. 3
shows the covariance matrices for each of RVM (graph 301), normalized RVM (graph 302) and decorrelation and normalization (graph 303), with dark areas indicating high covariance. Graph 303
illustrates the new covariance matrix, using the method described above, with constant diagonal and reduced blocks of high correlation.
By adding a weight function w_0(x) to the model it might at first seem that this implies the addition of infinitely many new basis functions, and potentially an increase in computational cost. However, since w_0(x) is a white noise process, no additional correlations are introduced in the model, and hence the computational cost remains the same as for the RVM. This can be seen by looking at the covariance matrix:

K̃ = c D^{-1/2} K D^{-1/2} = c [ Φ̃ A Φ̃^T + a_0 D^{-1} ]   (7)

where D = diag[d(x_1), . . . , d(x_N)] and Φ̃ = D^{-1/2} Φ are the normalized basis functions. This is no longer a low-rank covariance matrix, but rather a low-rank matrix plus a diagonal. The inversion of this matrix (plus the measurement noise σ²) can still be performed in NM² time. Also the cost of the predictions remains the same as for a finite linear model: M for the mean and M² for the variance per test case. Just like in the RVM the parameters of the model may be learned by maximizing the evidence p(y) using gradient ascent. Details of the prediction equations and evidence are given in appendix A.
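The NM² cost claimed above is essentially the matrix inversion (Woodbury) lemma applied to a low-rank-plus-diagonal matrix; the following sketch (illustrative only, with random matrices rather than anything from the text) checks that identity numerically.

import numpy as np

def inv_lowrank_plus_diag(U, A, lam):
    # Woodbury identity for (diag(lam) + U A U^T)^{-1}: only an M x M system is
    # solved, so the cost is O(N M^2) rather than O(N^3).  The full N x N inverse
    # is returned here only for checking; in practice it is kept in factored form.
    Lam_inv = np.diag(1.0 / lam)
    inner = np.linalg.inv(A) + U.T @ Lam_inv @ U          # M x M
    return Lam_inv - Lam_inv @ U @ np.linalg.inv(inner) @ U.T @ Lam_inv

rng = np.random.default_rng(0)
N, M = 200, 10
U = rng.standard_normal((N, M))
A = np.diag(rng.uniform(0.5, 2.0, M))
lam = rng.uniform(0.1, 1.0, N)        # diagonal part, e.g. c*a0/d(x_n) + sigma^2

K = np.diag(lam) + U @ A @ U.T
print(np.allclose(inv_lowrank_plus_diag(U, A, lam), np.linalg.inv(K)))   # True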
Although the above examples show normalization assuming that the desired prior variance was constant, normalization to achieve any arbitrary (and valid) prior envelope c(x) may be used. In order to achieve this, the constant c is replaced by the function c(x) (for instance in equation (5)). For example, if the prior variance of a model linear in the inputs was desired, c(x) would be a quadratic form. In such an example, equation (6) becomes:

f(x) = √c(x) [ Σ_{m=1}^{M} w_m φ_m(x)/√d(x) + w_0(x)/√d(x) ]
The method described above allows an arbitrary set of basis functions to be chosen and normalized to produce sensible priors, and the set of basis functions need not necessarily be derived from any
kernel function. This is unlike other techniques, such as the FITC (fully independent training conditional) approximation which requires an underlying desired GP covariance function for its
construction. Additionally, the method described above uses the adjustable A variance parameters to automatically prune out unnecessary basis functions, thereby finding a very sparse solution.
The method described above also enables modeling of non-stationarity and heteroscedasticity. Heteroscedasticity is a property of a series of random variables and in the context of regression may be
described as an input-dependent noise level (i.e. the variance/noise of the output variable depends on the value/location of the input variable). The white noise term added can be used to model both
uncertainty and genuine noise in the system such that the resultant uncertainty in the prediction may be caused by the model uncertainty and/or the noise. This can be demonstrated by applying the
method described above to Silverman's motorcycle data set (as described in `Some aspects of the spline smoothing approach to nonparametric regression curve fitting` by B. W. Silverman and published
in J. Roy. Stat. Soc. B, 47(1):1-52, 1985), which comprises accelerometer readings as a function of time in a simulated impact experiment on motorcycle crash helmets, with 133 recordings. This is a
classic benchmark dataset which exhibits both heteroscedastic (variable noise levels) and non-stationary properties.
The results are shown in
FIG. 4
, with the first graph 401 showing the results for decorrelation and normalization and the other graphs 402-404 showing the corresponding results using RVM (graph 402), GP (graph 403) and FITC (graph
404). The following Gaussian basis functions were used:
φ_m(x) = exp( −‖x − x_m‖² / λ² )   (8)

and the parameters of the model (A, a_0, c, λ, σ²) were learnt by maximizing the evidence with gradient ascent as described in appendix A. Initially there was a basis function centred on every data point, but as the upper section of graph 401
shows, only a handful of significant basis functions remain after training: learning A prunes almost all of them away leaving a very sparse solution. Also note that the shapes of the remaining basis
functions have changed through normalization, adapting well to the non-stationary aspects of the data (for example the left-most flat section 411). The normalization process also results in
modulation of the added noise process such that it not only gives uncertain predictions away from the data, but it also models very well the heteroscedastic noise in the data.
In comparison, using RVM (as shown in graph 402), the noise level is constant, and so it cannot model the heteroscedasticity, and its predictive variances do not grow away from the data (resulting in
overconfidence, as described above). Full GP with Gaussian covariance (as shown in graph 403) can only learn a single global noise level, and so it is not a good model for this data. Graph 404 shows
the FITC sparse GP approximation, where 8 support points are used and which is learnt as described in `Sparse Gaussian processes using pseudo-inputs` by E. Snelson and Z. Ghahramani, published in
`Advances in Neural Information Processing Systems 18` from the MIT Press. This model is of comparable sparsity to decorrelation and normalization; however it shows a tendency to overfit slightly by
`pinching in` at the support points (e.g. as indicated by arrow 414), and its underlying Gaussian stationary covariance is too smooth to model the data well.
There are many different applications for the decorrelation and normalization method described above. In particular the method may be used where a number of different methods are being used to
predict a value and a decision needs to be made as to which method to rely upon. In such an example, if the uncertainty associated with a prediction using one of the methods is overconfident, the
wrong method may be selected and result in an error in the ultimate prediction relied upon. Similarly, the prediction may be used to decide whether to update parameters (e.g. of a filter) on the
basis of a particular prediction and an example of this is real-time probabilistic visual tracking and this is described below.
In order to perform visual tracking a displacement expert may be created by training RVM regression to predict the true location of a target object given an initial estimate of its position in an
image (e.g. as described in `Sparse bayesian learning for efficient visual tracking` by O. Williams, A. Blake, and R. Cipolla and published in IEEE Trans. on Pattern Analysis and Machine
Intelligence, 27(8):1292-1304, 2005). This uses the pixel intensities sampled from an initial image region as (high dimensional) input vectors and as a consequence evaluating a basis function is
expensive. By pruning many of the basis functions from the model, the RVM yields an extremely efficient tracker.
The Gaussian RVM displacement predictions can be fused with a dynamical motion model over time with a Kalman filter, typically yielding improved accuracy. However, when a target changes appearance
significantly or becomes occluded, the small variances (i.e. small error bars) predicted by the RVM corrupt the Kalman filter estimate of the state and consequently the tracker fails. This is shown
in the first three schematic diagrams 501-503 in
FIG. 5
, which show a person 511 walking behind a tree 512 with the tracked facial (or head) position shown by rectangle 511. The bottom row of
FIG. 5
(diagrams 504-506) shows the displacement expert tracking a target (the head of person 511) through an occlusion (behind tree 512) when the decorrelated and normalized linear model described herein
is used. When the tracked person 511 is occluded by a tree 512, the new model correctly makes predictions with a large variance (i.e. with large error bars) which consequently contribute very little
to Kalman filter state updates, which instead relies on the (alternative) constant velocity dynamical model (which is generally less accurate). Once the occlusion is over (e.g. in schematic diagram
506), the displacement expert is again able to make confident predictions and accurate tracking resumes.
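A simplified one-dimensional sketch (not the tracker's actual code) of why the large variances matter: in a Kalman measurement update the displacement-expert prediction is weighted by the inverse of its variance, so an occluded frame reporting a huge variance barely moves the state.

def kalman_update(state_mean, state_var, meas, meas_var):
    # Standard scalar Kalman measurement update.
    gain = state_var / (state_var + meas_var)
    new_mean = state_mean + gain * (meas - state_mean)
    new_var = (1.0 - gain) * state_var
    return new_mean, new_var

# Prediction from the dynamical model: target believed to be near 100 px.
prior = (100.0, 25.0)

# Confident displacement-expert measurement (small variance) pulls the state.
print(kalman_update(*prior, meas=110.0, meas_var=4.0))    # mean moves most of the way to 110

# During occlusion the regression reports a huge variance, so the same
# measurement value hardly changes the state at all.
print(kalman_update(*prior, meas=110.0, meas_var=1e6))    # mean stays ~100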
Whilst the same successful tracking performance could be achieved by using a full GP (instead of decorrelation and normalization), this would come at a significantly higher computational cost and
would fail to meet real-time requirements. The difficulty with using FITC (a sparse GP approximation that produces sensible error bars) is that finding the inducing inputs requires an optimization in
a space that is of very high dimension and the computational cost increases with the number of dimensions (i.e. with the number of variables in the input space). As a result FITC is also not a
practical solution for such real-time applications.
Visual tracking is just one example of an application which may use the probabilistic regression method described herein. Other example applications include information retrieval, recommending
friends, products etc, matching people, items etc or any other application where a query is used to perform rapid (e.g. real-time) retrieval of data. Further examples include predicting the rate of
clicking of users on web search results or on web adverts and predicting the load (e.g. in terms of power, data etc) on nodes of a network. There are many online applications which perform real-time
processes and where decisions may be made based on the confidence associated with a prediction or the confidence associated with a particular selection. Some of these examples are described in more
detail below.
In information retrieval (IR), the methods described herein may be used to perform a selection or ranking of documents (or other information elements) which takes into consideration the confidence
with which the documents were selected. For example, where a document is identified as a good match to a query with a high degree of confidence, this document may be ranked more highly than another
document which is identified as a better match to the query but with a much lower degree of confidence. In another example, a set of documents which are identified as a match to a query may be
screened so that documents which are selected with a confidence level which is below a defined threshold are removed from the set of documents. This information retrieval may be performed in
real-time and therefore computationally efficient methods are most suitable.
In online gaming, the methods described herein may be used in matching players, particularly where a number of features or characteristics are used in matching players (e.g. user ID, demographics
etc) and where the features used may be different dependent on whether a player is a new player or is an experienced player (about which the system may have actual performance metrics).
Further examples include medical image analysis, where the confidence associated with a prediction may change a course of treatment or a diagnosis which is made, and designing experiments (e.g. where
the experiments themselves are expensive to run), where the model predicts the uncertainty associated with a particular output value.
By incorporating an infinite set of uncorrelated basis functions to the model in the decorrelation and normalization method described above, the prior over functions is enriched. Normalization
ensures a constant prior variance (or a variance which is user-defined), and introduces decorrelations. The role of the initial localized basis functions is now to introduce local correlations, that
do not overconstrain the posterior. The resultant predictive variances increase away from the observed data. The new model can still be treated as a finite linear model and retains the same
propensity to sparsity as the RVM, with the corresponding computational advantage. This is due to the fact that the new basis functions do not correlate to anything, and the number of sources of
correlation remains unchanged: M, the number of original basis functions. For large data sets, the computationally efficient inference schemes that have been devised for the RVM may also be used.
As described above, the treatment of finite linear models as described herein makes them suitable for fitting non-stationary and heteroscedastic data. By individually varying the ratio of the M prior
variances A to the variance a_0 of the uncorrelated process, the model can both change the shape of the basis functions and the level of input dependent noise.
Whilst the decorrelation and normalization method is described above in comparison to RVM, the methods described above apply to any probabilistic linear model. RVM is described by way of example and
provides one example of an existing method which suffers from over-confidence at positions away from data points.
[0051]FIG. 6
shows two example methods of performing computationally efficient probabilistic linear regression, as described in detail above. In the first example method (blocks 601-603), a large number (such as
an infinite number) of uncorrelated basis functions are added to a weighted linear sum (e.g. as in equation (1)) of M basis functions (block 601). The resultant sum of all the basis functions is then
normalized (block 602) to provide a finite linear model and this finite linear model may be used to perform probabilistic regression (block 603). In the second example method (blocks 611-613), white
noise is added to a regression function comprising a linear model (block 611, e.g. as in equation (2)) and the linear model (with the added white noise) is then normalized to a pre-defined prior
variance envelope (block 612). As in the first example method, the normalized linear model may be used to perform probabilistic regression (block 613).
The probabilistic regression (in block 603 or 613) may be performed as shown in
FIG. 7
. Training data is used to learn the posterior distribution (which in the Gaussian case consists of the mean vector and the covariance matrix) of the weights associated with each of the M basis
functions (block 701, e.g. as described in more detail in the Appendix) and then, on receipt of an input value (in block 702), a predicted output value and a variance associated with the output value
is computed based on the input model, the weights and the linear model (block 703).
As described above, there are many different applications for the methods shown in FIGS. 6 and 7. In an example, the methods may be used in tracking a feature in a sequence of images (e.g. in a video
clip), by predicting the position of the feature based on input parameters, such as the intensities of pixels in the image. The training data (used in block 701) may, for example, comprise
input-output pairs of an IR query and one or more ranked documents (or other objects) or maps of pixel intensity in an image and the position of a feature in the image. The input value (received in
block 702) may, for example, comprise an IR query, a map of pixel intensity in an image and the output value (predicted in block 703) may, for example, comprise one or more ranked documents or the
position of a feature in the image. The variance (predicted in block 703) may comprise a confidence level, a percentage, error bars, a range etc.
In many examples, the output of the probabilistic regression may be used to make a decision (block 801), as shown in
FIG. 8
. In an example, the decision may be whether to update a filter based on the predicted value or whether to discard a piece of data. The decision is made based on the variance associated with the
output value (e.g. as generated in block 703).
[0055]FIG. 9
illustrates various components of an exemplary computing-based device 900 which may be implemented as any form of a computing and/or electronic device, and in which embodiments of the methods
described herein may be implemented. As described above, the methods described herein may be used for online applications and in such a situation the computing-based device 900 may comprise a web
server. The computing-based device 900 may be referred to as a decision making system and may be used to control another apparatus or the methods described herein may be used to control other
processes running on the computing-based device.
Computing-based device 900 comprises one or more processors 901 which may be microprocessors, controllers or any other suitable type of processors for processing computing executable instructions to
control the operation of the device in order to perform probabilistic regression as described above (e.g. as shown in FIGS. 6-8). Platform software comprising an operating system 902 or any other
suitable platform software may be provided at the computing-based device to enable application software 903 to be executed on the device. The application software 903 may comprise software for
performing the methods described herein.
The computer executable instructions may be provided using any computer-readable media, such as memory 904. The memory is of any suitable type such as random access memory (RAM), a disk storage
device of any type such as a magnetic or optical storage device, a hard disk drive, or a CD, DVD or other disc drive. Flash memory, EPROM or EEPROM may also be used. The memory 904 may also be used
to store training data 905 (e.g. as used in block 701 of
FIG. 7).
The computing-based device 900 may comprise one or more inputs 906 which are of any suitable type for receiving user input, media content, Internet Protocol (IP) input and/or a communication
interface 907. An input 906, the communication interface 907 or another element may be used to receive input data (e.g. in block 702 of
FIG. 7
) which is used in performing the probabilistic regression.
One or more outputs 908 may also be provided such as an audio and/or video output to a display system integral with or in communication with the computing-based device. The display system may provide
a graphical user interface, or other user interface of any suitable type. An output 908, the communication interface 907 or other element may be used to output the predicted output value and variance
associated with the output value (e.g. as generated in block 703 of
FIG. 7
) or alternatively, the outputs of the probabilistic regression may be used internally within the computing device 900 to drive another process (e.g. in a decision making step such as block 801).
Although the present examples are described and illustrated herein as being implemented in online applications, the system described is provided as an example and not a limitation. As those skilled
in the art will appreciate, the present examples are suitable for application in a variety of different types of systems.
The term `computer` is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realize that such processing capabilities are
incorporated into many different devices and therefore the term `computer` includes PCs, servers, mobile telephones, personal digital assistants and many other devices.
The methods described herein may be performed by software in machine readable form on a tangible storage medium. The software can be suitable for execution on a parallel processor or a serial
processor such that the method steps may be carried out in any suitable order, or simultaneously.
This acknowledges that software can be a valuable, separately tradable commodity. It is intended to encompass software, which runs on or controls "dumb" or standard hardware, to carry out the desired
functions. It is also intended to encompass software which "describes" or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon
chips, or for configuring universal programmable chips, to carry out desired functions.
Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the
process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may
download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also
realize that by utilizing conventional techniques known to those skilled in the art that all, or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP,
programmable logic array, or the like.
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or
all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to `an` item refers to one or more of those items.
The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without
departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form
further examples without losing the effect sought.
The term `comprising` is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may
contain additional blocks or elements.
It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art. The above
specification, examples and data provide a complete description of the structure and use of exemplary embodiments of the invention. Although various embodiments of the invention have been described
above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without
departing from the spirit or scope of this invention.
Appendix A [0070]
All that is needed to make predictions with a finite linear model in general, and with the RVM in particular, is the posterior over the M-dimensional weights vector:
$$p(\mathbf{w}\,|\,\mathbf{y}) = N(\mu, \Sigma) \quad\text{with}\quad \Sigma = \left(\Phi^{T} B^{-1} \Phi + A^{-1}\right)^{-1} \quad\text{and}\quad \mu = \Sigma\, \Phi^{T} B^{-1} \mathbf{y} \qquad (9)$$
where $B = \sigma^{2} I_{N}$ is a unit matrix of size N proportional to the variance of the measurement noise $\sigma^{2}$. Given a new test input $x_{*}$, the response $\Phi_{*}$ of all M basis functions is first evaluated, and the posterior over the weights is used to obtain the mean and the variance of the Gaussian predictive distribution:
$$E\left(f(x_{*})\right) = \Phi_{*}\, \mu \quad\text{and}\quad \mathrm{Var}\left(f(x_{*})\right) = \Phi_{*}\, \Sigma\, \Phi_{*}^{T} \qquad (10)$$
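A minimal NumPy sketch of these prediction equations (assuming, as in equation (9), that B = σ²·I and that A holds the prior variances of the weights; the function and variable names are illustrative and not taken from the patent text):

```python
import numpy as np

def rvm_predict(Phi, y, prior_var, noise_var, phi_star):
    """Posterior over the weights and Gaussian predictive distribution,
    following the structure of equations (9)-(10)."""
    N, M = Phi.shape
    B_inv = np.eye(N) / noise_var                        # B = sigma^2 I, so B^-1 = I / sigma^2
    A_inv = np.diag(1.0 / prior_var)                     # A holds variances, so A^-1 are precisions
    Sigma = np.linalg.inv(Phi.T @ B_inv @ Phi + A_inv)   # posterior covariance of the weights
    mu = Sigma @ Phi.T @ B_inv @ y                       # posterior mean of the weights
    mean_star = phi_star @ mu                            # predictive mean at x*
    var_star = phi_star @ Sigma @ phi_star               # predictive variance of f(x*)
    return mean_star, var_star
```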
Although the normalized model described herein contains a weight process w(x), to make predictions only the posterior over the M weights associated with the original basis functions needs to be computed. The posterior is again Gaussian, with mean and covariance very similar to those of the RVM:
$$\tilde{\Sigma} = \left(\tilde{\Phi}^{T} \tilde{B}^{-1} \tilde{\Phi} + (cA)^{-1}\right)^{-1} \quad\text{and}\quad \tilde{\mu} = \tilde{\Sigma}\, \tilde{\Phi}^{T} \tilde{B}^{-1} \mathbf{y} \qquad (11)$$
but with a new definition of the diagonal noise variance matrix $\tilde{B}$ (again based on the measurement noise $\sigma^{2}$), and where the normalized basis functions $\tilde{\Phi}$ are used.
As described above, $D = \operatorname{diag}\left(d(x_{1}), \ldots, d(x_{N})\right)$ with:
$$d(x) = a_{0} + \sum_{m=1}^{M} a_{m}\, \varphi_{m}^{2}(x) \qquad (4)$$
In the model described herein, the mean and the variance of the predictive distribution are given by:
$$E\left(f(x_{*})\right) = \tilde{\Phi}_{*}\, \tilde{\mu} \quad\text{and}\quad \mathrm{Var}\left(f(x_{*})\right) = \tilde{\Phi}_{*}\, \tilde{\Sigma}\, \tilde{\Phi}_{*}^{T} + \frac{c\, a_{0}}{d(x_{*})} \qquad (14)$$
Although the expression for the predictive mean remains unchanged (up to normalization), the predictive variance gets an additional additive term that comes from the modulated white noise process.
For this model, the evidence is an N-variate Gaussian distribution with zero mean, and covariance given by:
$$\tilde{C} = \tilde{\Phi}\,(cA)\,\tilde{\Phi}^{T} + \tilde{B} \qquad (15)$$
Using the matrix inversion lemma
, the negative log evidence can be written as:
$$\mathcal{L} = \frac{1}{2}\left[\, N \log(2\pi) + \log|cA| + \log|\tilde{B}| - \log|\tilde{\Sigma}| + \mathbf{y}^{T} \tilde{B}^{-1} \mathbf{y} - \mathbf{y}^{T} \tilde{B}^{-1} \tilde{\Phi}\, \tilde{\Sigma}\, \tilde{\Phi}^{T} \tilde{B}^{-1} \mathbf{y} \,\right] \qquad (16)$$
The computational cost of evaluating the evidence is O(NM²), as is that of computing its gradients with respect to the prior variances of the weights A, the prior variance a₀ of the w(x) process, the variance of the output noise σ², the prior overall variance of the function c, and the lengthscale λ of the isotropic Gaussian basis functions:
$$\varphi_{m}(x) = \exp\!\left(-\frac{\|x - x_{m}\|^{2}}{\lambda^{2}}\right). \qquad (8)$$
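For illustration, the design matrix of these basis functions can be assembled as follows (a short NumPy sketch assuming the inputs and basis centres are stored as arrays of shape (N, d) and (M, d) respectively):

```python
import numpy as np

def gaussian_design_matrix(X, centers, lengthscale):
    """N x M matrix of isotropic Gaussian basis responses
    phi_m(x) = exp(-||x - x_m||^2 / lambda^2), as in equation (8)."""
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / lengthscale ** 2)
```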
calculus help!
October 28th 2008, 06:27 PM
calculus help!
please help with these problems, any of them! it would be of great help! thanks!
1. suppose that A is a constant. Verify that x(t)=1+t+Ae^t is a solution of the differential equation x'=x-t
2. Suppose that A and B are constants. Verify that y(x)=A ln(x)+B+x is a solution of the differential equation xy''+y'=1
October 29th 2008, 07:30 AM
Chris L T521
please help with these problems, any of them! it would be of great help! thanks!
1. suppose that A is a constant. Verify that x(t)=1+t+Ae^t is a solution of the differential equation x'=x-t
2. Suppose that A and B are constants. Verify that y(x)=A ln(x)+B+x is a solution of the differential equation xy''+y'=1
All you need to do is substitute the $x(t)$ values given into the DE. I'll help you start the first one:
since $x(t)=1+t+Ae^t$, we see that $x'(t)=1+Ae^t$
Substituting this into the DE, we see that $(1+Ae^t)=(1+t+Ae^t)-t$
I leave the simplification for you.
This should give you enough help to do the second problem on your own.
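For reference, the second problem yields to exactly the same substitution: from $y(x)=A\ln(x)+B+x$ we get $y'(x)=\frac{A}{x}+1$ and $y''(x)=-\frac{A}{x^2}$, so $xy''+y'=-\frac{A}{x}+\frac{A}{x}+1=1$, as required.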
Patent application title: METHOD FOR STRENGTHENING THE IMPLEMENTATION OF ECDSA AGAINST POWER ANALYSIS
A method of inhibiting the disclosure of confidential information through power analysis attacks on processors in cryptographic systems. The method masks a cryptographic operation using a generator
G. A secret value, which may be combined with the generator G to form a secret generator is generated. The secret value is divided into a plurality of parts. A random value is generated for
association with the plurality of parts. Each of the plurality of parts is combined with the random value to derive a plurality of new values such that the new values when combined are equivalent to
the secret value. Each of the new values is used in the cryptographic operation, thereby using the secret generator in place of the generator G in the cryptographic operation. The introduction of
randomness facilitates the introduction of noise into algorithms used by cryptographic systems so as to mask the secret value and provide protection against power analysis attacks.
A method of masking a cryptographic operation using a generator G, said method comprising the steps of:a) generating a secret value;b) generating a masking value for association with said secret
value;c) applying said masking value to said secret value and said generator to obtain a new value corresponding to the combination of said secret value, said generator and said masking value for use
as a session public key;d) using said new value in said cryptographic operation, thereby using a secret generating point corresponding to a combination of said masking value and said generator G in
place of said generator G in said cryptographic operation.
The method of claim 1, wherein said further values are updated by said random value each time a digital signature is generated.
The method of claim 1, wherein said masking value, when divided into first and second parts, has said random value added to said first part and subtracted from said second part such that the sum of said first
and second parts and said associated random value is equivalent to said original secret value.
The method of claim 1, wherein said cryptographic system is an elliptic curve digital signature algorithm.
The method of claim 1, wherein said masking value is divided into a plurality of parts and each of said parts is combined with a random value to provide a plurality of further values such that when said
further values are combined a value equivalent to a session private key corresponding to said session public key is obtained.
A method of computing a digital signature on a message m, said signature being computed by a signer having a private key d and a public key dG, where G is a generator of a cryptographic group, said
method comprising:a) dividing said private key d into a plurality of private key parts;b) presenting a masking value β;c) generating an ephemeral private key k;d) obtaining a first signature
component from an ephemeral public key kG' where G' corresponds to a product of said masking value β and said generator G;e) computing a value e derived from said message m by application of
cryptographic function;f) computing a second signature component utilizing said masking value β, said plurality of private key parts, said ephemeral private key k, said first signature component, and
said value e.
A method according to claim 6, wherein said masking value is presented as a plurality of masking value parts, and at least one of the computation of said ephemeral public key kG' and the computation
of said signature component uses said plurality of masking value parts.
A method according to claim 7, wherein said plurality of private parts comprises a pair of values d_1 and d_2 with d = d_1 + d_2.
A method according to claim 8, wherein said plurality of masking value parts comprises a pair of values β_1 and β_2 that combine to form β.
A method according to claim 9, wherein said second signature component is computed as s = (kβ)^-1(e + d_1 r + d_2 r) mod n, where n is an order of said cryptographic group.
A method according to claim 9, further comprising the step of generating a random value w, and wherein the step of computing said signature component utilizes said random value w.
A method according to claim 10, wherein said signature component is computed as s = w(kwβ)^-1(e + d_1 r + d_2 r) mod n, where n is an order of said cryptographic group.
A method of computing a public key corresponding to a private key d in a cryptosystem, wherein the cryptosystem uses a generator G, said method comprising the steps of:a) representing a masking value
β as a plurality of values which may be combined to obtain said masking value;b) combining each of said plurality of values with said private key to obtain a plurality of private key components;c)
combining each of said plurality of private key components with said generator to obtain a plurality of public key components;d) combining said public key components to obtain said public key.
A method according to claim 12, wherein said plurality of public key components are combined by addition.
A method according to claim 12, wherein said plurality of values are combined with said private key by multiplication.
A method according to claim 12, wherein said plurality of private key components are combined with said generator by exponentiation.
A method according to claim 12, wherein said public key is computed as (dβ
A method of computing an ECDSA signature on a message m, said method being performed by a signer having a private key d and a public key dG, where G is a generator, said signature comprising an
ephemeral public key r obtained from an ephemeral masked public key k and a signature component s derived from said message, m, said ephemeral private key k, and said private key d, said method
characterised in that the computation of said signature comprises the steps of:a) dividing said private key d into a plurality of private key parts;b) presenting a masking β;c) generating an
ephemeral private key k;d) obtaining a first signature component r from an ephemeral public key kG' where G' corresponds to a product of said masking β and said generator G;e) computing a value e
derived from said message m by application of cryptographic function;f) computing a second signature component utilizing said masking value, said plurality of private key parts, said ephemeral
private key k, said first signature component r and said value e.
A method according to claim 17, wherein said masking value is presented as a plurality of masking value parts, and at least one of the computation of said ephemeral public key kG' and the computation
of said signature component uses said plurality of masking value parts.
A method according to claim 18, wherein said plurality of private parts comprises a pair of values d_1 and d_2 with d = d_1 + d_2.
A method according to claim 20, wherein said signature component is computed as s = (kβ)^-1(e + d_1 r + d_2 r) mod n, where n is an order of said cryptographic group.
A method according to claim 19, wherein said plurality of masking value parts comprises a pair of values β_1 and β_2 that combine to form β.
A method according to claim 20, further comprising the step of generating a random value w, and wherein the step of computing said signature component utilizes said random value w.
A method according to claim 21, wherein said signature component is computed as s = w(kwβ)^-1(e + d_1 r + d_2 r) mod n, where n is an order of said cryptographic group.
A method of inverting an element k of a finite field, comprising the steps of: a) generating a random value w; b) computing wk and the inverse (wk)^-1 thereof; c) computing k^-1 as w(wk)^-1.
A method for masking a secret value k used in an elliptic curve cryptographic operation requiring use of a generator G comprising:generating a masking value β for association with said generator G;
associating said masking value β with said generator G to obtain a secret generating point G'; andutilizing said secret generating point G' in performing a cryptographic operation using said secret
value k.
The method according to claim 26 wherein said cryptographic operation is performed mod n, where n is the order of the secret generating point G'.
The method according to claim 26 wherein said cryptographic operation comprises generation of a first signature component.
The method according to claim 28 comprising generating a second signature component using said first signature component and said secret value k and applying said masking value β to said secret value k.
The method according to claim 29 wherein said masking value β is divided into a plurality of parts each being applied to said secret value k.
The method according to claim 28 wherein said first signature component is an ECDSA signature component r.
A method for masking a secret value k used in a cryptographic operation comprising:generating a masking value β;dividing said masking value β into a plurality of components;applying each of said
plurality of components to said secret value k in performing said cryptographic operation; andupdating said plurality of components by applying a random value to each said plurality of parts such
that said plurality of parts, when combined, equal said masking value β.
The method according to claim 32 wherein said cryptographic operation comprises generation of a signature component.
The method according to claim 33 wherein said signature component is for an ECDSA signature.
CROSS REFERENCE TO RELATED APPLICATIONS [0001]
This application is a continuation of U.S. application Ser. No. 10/119,803 filed on Apr. 11, 2002 which is a continuation-in-part of U.S. application Ser. No. 09/900,959 filed on Jul. 10, 2001, now
U.S. Pat. No. 7,092,523; which is a continuation-in-part of application No. PCT/CA00/00021 filed on Jan. 11, 2000 claiming priority from Canadian Application No. 2,258,338 filed Jan. 11, 1999, and a
continuation-in-part of application No. PCT/CA00/00030 filed on Jan. 14, 2000 claiming priority from Canadian Application No. 2,259,089 filed on Jan. 15, 1999. The contents of all the above
applications are incorporated herein by reference.
FIELD OF THE INVENTION [0002]
This invention relates to a method for minimizing the vulnerability of cryptographic systems to power analysis-type attacks.
BACKGROUND OF THE INVENTION [0003]
Cryptographic systems generally owe their security to the fact that a particular piece of information is kept secret. When a cryptographic algorithm is designed, it is usually assumed that a
potential attacker has access to only the public values. Without the secret information it is computationally infeasible to break the scheme or the algorithm. Once an attacker is in possession of a
piece of secret information they may be able to forge the signature of the victim and also decrypt secret messages intended for the victim. Thus it is of paramount importance to maintain the secrecy
and integrity of the secret information in the system. The secret information is generally stored within a secure boundary in the memory space of the cryptographic processor, making it difficult for
an attacker to gain direct access to the secret information. Manufacturers incorporate various types of tamper-proof hardware to prevent illicit access to the secret information. In order to decide
how much tamper proofing to implement in the cryptographic system, the designers must consider the resources available to a potential attacker and the value of the information being protected. The
magnitude of these resources is used to determine how much physical security to place within the device to thwart attackers who attempt to gain direct access to the secure memory. Tamper-proof
devices can help prevent an attacker who is unwilling or unable to spend large amounts of time and money from gaining direct access to the secret information in the cryptographic system. Typically,
the amount of work that is required to defeat tamper proof hardware exceeds the value of the information being protected.
However, a new class of attacks has been developed on cryptographic systems that are relatively easy and inexpensive to mount in practice since they ignore the tamper-proof hardware. Recent attacks
on cryptographic systems have shown that devices with secure memory may leak information that depends on the secret information, for example in the power usage of a processor computing with private
information. Such attacks take advantage of information provided by an insecure channel in the device by using the channel in a method not anticipated by its designers, and so render redundant any
tamper proofing in the device. Such insecure channels can be the power supply, electromagnetic radiation, or the time taken to perform operations. At particular risk are portable cryptographic
tokens, including smart cards, pagers, personal digital assistants, and the like. Smart cards are especially vulnerable since they rely on an external power supply, whose output may be monitored
non-intrusively. Access to the power supply is required for proper functioning of the device and so is not usually prevented with tamper-proof hardware.
Further, constrained devices tend not to have large amounts of electromagnetic shielding. Since the device is self-contained and dedicated, the power consumption and electromagnetic radiation of the
smart card may be monitored as the various cryptographic algorithms are executed. Thus in a constrained environment, such as a smart card, it may be possible for an attacker to monitor an unsecured
channel that leaks secret information. Such monitoring may yield additional information that is intended to be secret which, when exposed, can significantly weaken the security of a cryptographic system.
In response to the existence of such unsecured channels, manufacturers have attempted to minimize the leakage of information from cryptographic devices. However, certain channels leak information due
to their physical characteristics and so it is difficult to completely eliminate leakage. A determined attacker may be able to glean information by collecting a very large number of samples and
applying sophisticated statistical techniques. In addition, there are severe restrictions on what can be done in hardware on portable cryptographic tokens that are constrained in terms of power
consumption and size. As a result, cryptographic tokens are particularly vulnerable to these types of attacks using unsecured channels.
The more recent attacks using the power supply that can be performed on these particularly vulnerable devices are simple power analysis, differential power analysis, higher order differential power
analysis, and other related techniques. These technically sophisticated and extremely powerful analysis tools may be used by an attacker to extract secret keys from cryptographic devices. It has been
shown that these attacks can be mounted quickly and inexpensively, and may be implemented using readily available hardware.
The amount of time required for these attacks depends on the type of attack and varies somewhat by device. For example it has been shown that simple power analysis (SPA) typically takes a few seconds
per card, while differential power analysis (DPA) can take several hours. In order to perform SPA, the attacker usually only needs to monitor one cryptographic operation. To perform DPA, many
operations must be observed. In one method used, in order to monitor the operations, a small resistor is connected in series with the smart card's power supply and the voltage across the resistor is
measured. The current used can be found by a simple computation based on the voltage and the resistance. A plot of current against time is called a power trace and shows the amount of current drawn
by the processor during a cryptographic operation. Since cryptographic algorithms tend to perform different operations having different power requirements depending on the value of the secret key,
there is a correlation between the value of the secret key and the power consumption of the device.
Laborious but careful analysis of end-to-end power traces can determine the fundamental operation performed by the algorithm based on each bit of a secret key and thus, be analyzed to find the entire
secret key, compromising the system. DPA primarily uses statistical analysis and error correction techniques to extract information that may be correlated to secret keys, while the SPA attacks use
primarily visual inspection to identify relevant power fluctuations. In SPA, a power trace is analyzed for any discernible features corresponding to bits of the secret key. The amount of power
consumed varies depending on the executed microprocessor instructions. For example, in a typical "square-and-multiply" algorithm for exponentiation, a bit 1 in the exponent will cause the program to
perform both squaring and multiply operations, while a bit 0 will cause the multiply operation to be skipped. An attacker may be able to read off the bits of a secret exponent by detecting whether
the multiply operation is performed at different bit positions.
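As an illustration of this kind of key-dependent branching, a textbook left-to-right square-and-multiply loop makes the leak explicit (a generic sketch, not code from any particular device):

```python
def square_and_multiply(base, exponent, modulus):
    """Textbook modular exponentiation. The multiply step runs only for 1-bits
    of the exponent, which is exactly what simple power analysis can read off."""
    result = 1
    for bit in bin(exponent)[2:]:                    # exponent bits, most significant first
        result = (result * result) % modulus         # squaring happens on every iteration
        if bit == '1':
            result = (result * base) % modulus       # extra multiply only for 1-bits
    return result
```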
A DPA attack attempts to detect more subtle features from the power traces and is more difficult to prevent. To launch a DPA attack, a number of digital signatures are generated and the corresponding
power traces are collected. The power trace may be regarded as composed of two distinct parts, namely signal and noise. The patterns that correspond to private key operations tend to remain more or
less constant throughout all power traces. These patterns may be regarded as the signal. The other parts of the computation, which correspond to changing data, result in differing patterns in each
power trace. These patterns can be regarded as the noise. Statistical analysis can be performed on all the power traces to separate the signal from the noise. The secret value is then derived using
the identified signal.
Various techniques for preventing these power analysis attacks have been attempted to date. Manufacturers of smart cards and smart card processors have introduced random wait states and address
scrambling. Smart card algorithms avoid performing significantly different operations depending on the value of a secret key and also avoid conditional jump instructions. Hardware solutions include
providing well-filtered power supplies and physical shielding of processor elements or the addition of noise unrelated to secrets. However, the vulnerabilities to DPA result from transistor and
circuit electrical behaviors that propagate to exposed logic gates, microprocessor operation, and ultimately the software implementations. Cryptographic algorithms to date have been designed with the
assumption that there is no leakage of secret information, however with the advent of successful power analysis attacks, it is no longer prudent to assume that a cryptographic device which will leak
no secret information can be manufactured. Information stored in constrained environments is particularly difficult to protect against leakage through an unsecured channel during cryptographic operations.
Accordingly, there is a need for a system for reducing the risk of a successful power analysis attack and which is particularly applicable to current hardware environments.
SUMMARY OF THE INVENTION [0013]
In accordance with this invention, there is provided a method of inhibiting the disclosure of confidential information through power analysis attacks on processors in cryptographic systems. The
method of masking a cryptographic operation using a generator G comprises the steps of:
a) generating a secret value, which may be combined with the generator G to form a secret generator;
b) dividing the secret value into a plurality of parts;
c) generating a random value for association with the plurality of parts;
d) combining each of the plurality of parts with the random value to derive a plurality of new values such that the new values when combined are equivalent to the secret value; and
e) using each of the new values in the cryptographic operation, thereby using the secret generator in place of the generator G in the cryptographic operation.
The introduction of randomness facilitates the introduction of noise into algorithms used by cryptographic systems so as to mask the secret value and provide protection against power analysis
BRIEF DESCRIPTION OF THE DRAWINGS [0020]
An embodiment of the invention will now be described by way of example only with reference to the accompanying drawings in which:
FIG. 1 is a schematic diagram of a constrained device;
FIG. 2 is a schematic representation of steps of a method performed by the device of FIG. 1; and
FIG. 3 is a flow diagram illustrating an embodiment of the invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS [0024]
A mechanism for protection against power analysis attacks on cryptographic systems involves the introduction of random values into existing algorithms employed by cryptographic systems. These random
values are intended to introduce noise into the system.
This technique can be applied to a number of cryptographic systems, including encryption algorithms, decryption algorithms, signature schemes, and the like. In the preferred embodiment, the technique
is applied to the ECDSA (elliptic curve digital signature algorithm) on a constrained device, typically a smart card, in order to inhibit the leakage of secret information.
In the ECDSA, as described in the ANSI X9.62 standard, the public values are:
The domain parameters: An elliptic curve group E generated by a point G, and a finite field F.
The signer's long-term public key D (corresponding to a long-term private key d).
The signature (r, s).
FIG. 1 shows generally a smart card (10) for use in a cryptographic system. The smart card incorporates a random number generator (RNG) (11), which may be implemented as hardware or software. The
card also includes a cryptographic module (CRYPTO) (14), which may be for example a cryptographic co-processor or specialized software routines. The card includes a memory space (13) for storage
needed while making computations, and a parameter storage space (17, 18, 19, 21) for storing the parameters G, G', β_1 and β_2 of the system. The card also includes a secure memory space (15, 16) for storing its private key d split into two parts d_1 and d_2, and a processor (12) which may be, for example, an arithmetic logic unit, an integrated circuit, or a general purpose processing unit.
In order to generate a digital signature using an elliptic curve, the signer first computes an elliptic curve point K=kG, where k is a random number and G is the generating point of the elliptic
curve group. The value k is selected as a per-message secret key and the point K serves as the corresponding per-message public key. The values k and K are also referred to as an ephemeral private
key and an ephemeral public key respectively. These values are used to generate a signature (r, s) wherein:
r = K_x mod n, where K_x is the x coordinate of K and n is the order of the generating point G; and
s = k^-1(e + dr) mod n, where e is the message to be signed.
The ANSI X9.62 standard provides techniques for interpreting the bit strings corresponding to finite field elements as integers in the above calculations. The standard also provides some guidelines
on what elliptic curve groups and finite fields can be used.
Several algorithms, using both direct and indirect methods, may be used to compute kG in order to obtain the elliptic curve point K. Algorithms to compute signature components are potentially
vulnerable to power analysis attacks since they perform different operations depending on the bits in the secret values. Repeated iterations of the algorithm use the same secret values, and so their
power traces are statistically correlated to the secret values.
In order to mask a private key or other secret value to improve resistance to DPA-like attacks, a random value is introduced into the algorithm as shown in FIG. 2. This random value avoids repeated
use of a secret value in order to eliminate correlation among the power traces. There will be no signal to differentiate from the background noise since no operation is repeated on subsequent
iterations of the algorithm.
In the case of a long-term private key, the private key d is split into two parts d_1 and d_2 such that d = d_1 + d_2. As seen in FIG. 2, the card generates its private key d (110), then computes the public key dG (112). The public key is sent to the server (114), which keeps it in a directory for future use. A smart card is initialized with a private key d being split into the values d_1 = d (118) and d_2 = 0 (116) as is illustrated in FIG. 2. The initialization is performed either by embedding the private key at manufacture or by instructing the smart card to generate its own private key. These initial values d_1 and d_2 are stored in the device instead of storing the value for d. Each time a digital signature is generated, a random value Δ is generated using the hardware random number generator 11 and d_1 and d_2 are updated as follows:
d_1 = d_1(old) + Δ (mod n), and d_2 = d_2(old) - Δ (mod n).
The formula for s, one component of the digital signature, then becomes:
s = k^-1(e + (d_1 r + d_2 r)) mod n.
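A short Python sketch of this splitting-and-refreshing idea (n is the group order; the helper names and the use of the secrets module are illustrative assumptions rather than anything specified above):

```python
import secrets

def init_key_parts(d, n):
    """Initialise the halves as d1 = d and d2 = 0, so that d1 + d2 = d (mod n)."""
    return d % n, 0

def refresh_key_parts(d1, d2, n):
    """For each signature, add a fresh random delta to d1 and subtract it from d2:
    the sum stays congruent to d while each individual half changes."""
    delta = secrets.randbelow(n)
    return (d1 + delta) % n, (d2 - delta) % n

def s_component(k_inv, e, r, d1, d2, n):
    """s = k^-1 (e + d1*r + d2*r) mod n, computed without ever re-forming d."""
    return (k_inv * (e + d1 * r + d2 * r)) % n
```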
When computing the above formula, the quantities d_1 and d_2 are essentially random values because of the random quantity Δ that is introduced after each signature. When comparing subsequent signatures, there is no correlation in the side channels to either the calculation of d_1 r or d_2 r corresponding to the secret key d, since the quantities d_1 and d_2 are randomized in each successive signature; only together does the correlation to d emerge, and this changes every time. As a result, leakage of the private key d is minimized when computing the component s of the digital signature. However, the component r of the digital signature is also calculated using the per-message secret key k, and the calculation of r has in the past been vulnerable to power analysis type attacks. In order to compute r, the signer must compute kG, and so information about the value of the secret key k may leak during the repeated group operations.
In order to protect the per-message secret key k during computation of r, the signer modifies the group generator used. In order to mask the value of k, a random value β is introduced and stored for
each smart card such that G' = βG, where β is a random number generated for each smart card. The point G' can be used as a secret generating point for each user, thus using the random value β to hide
some information about k.
It is recognized that the signer's effective per-message secret key is kβ, corresponding to the public key kβ G. The security is thus based on the secrecy of the derived value kβ, which could be
computed from k and β, both of which are secret. It is also recognized that the per-message secret key may be regarded as k and the per-message public key as kG'. However, unless the point G' were
shared publicly, knowledge of k alone would not permit the computation of shared keys based on kG'.
During smart card personalization, when the private/public key pair is generated on the smart card, the point G' is computed. The introduction of β in the calculation of a digital signature means the
formula still contains a constant value, making it vulnerable to power analysis type attacks. In order to overcome these attacks, β is split into two parts β_1 and β_2, and those parts are updated by a random value π every time a signature is generated. This process is detailed in FIG. 3.
In order to verify signatures produced in this manner, the verifier uses standard ECDSA verification from ANSI X9.62 since the signer's secret key remains unchanged when using this technique.
Thus the formulae for the ECDSA signature scheme in the preferred embodiment are:
r = K_x mod n, where K_x is the x coordinate of K and n is the order of the point G'; and
s = (kβ)^-1(e + (d_1 r + d_2 r)) mod n.
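Putting the masked equations together, a schematic signing routine might look as follows (scalar_mult stands in for the elliptic curve point multiplication, e is the message value or its hash, and pow(x, -1, n) is the Python 3.8+ modular inverse; this is a sketch of the formulas above, not a hardened implementation):

```python
import secrets

def masked_ecdsa_sign(e, d1, d2, beta, G_prime, n, scalar_mult):
    """r = x-coordinate of k*G' mod n, with G' = beta*G the card's secret
    generating point, and s = (k*beta)^-1 (e + d1*r + d2*r) mod n."""
    k = secrets.randbelow(n - 1) + 1           # per-message (ephemeral) private key
    K = scalar_mult(k, G_prime)                # K = k*G' = (k*beta)*G
    r = K[0] % n                               # first signature component
    kb_inv = pow((k * beta) % n, -1, n)        # inverse of the effective ephemeral key k*beta
    s = (kb_inv * (e + d1 * r + d2 * r)) % n   # second signature component
    return r, s
```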
Using these formulae to compute ECDSA signatures reduces the vulnerability of the algorithm to power analysis attacks. It is recognized that similar techniques may be applied to other signatures. For
example, ECNR or any other signature form could be used. These techniques may also be used individually, not necessarily in combination. Also, the ECDSA signature equation is not a necessary
component of these techniques.
FIG. 3 shows the generation of a digital signature in accordance with the above protocol. First, the signer generates a random private session key k (200), and stores k (210) for future use in the
algorithm. The signer updates the values β_1 (224) and β_2 (226) as described above by generating a random π (222) and then computes the public session key r (220). The signer then obtains the input message e or hash thereof (250). The signer then computes the signature s (260). The signer updates the private key parts d_1 (264) and d_2 (266) as described earlier by generating a random Δ (262).
The inverse algorithm used in the generation of the digital signature to compute k^-1 is also potentially vulnerable to power analysis attacks since it performs repeated operations on the secret key every time a signature is generated. This vulnerability is reduced in a further embodiment by introducing a random w and computing (kw)^-1 instead of k^-1. The signing formula works since k^-1 = w(kw)^-1.
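A minimal sketch of that masked inversion (again using Python's pow(x, -1, n) for the modular inverse; the names are illustrative):

```python
import secrets

def masked_inverse(k, n):
    """Compute k^-1 mod n without inverting k directly: pick a random w,
    invert only the product w*k, and recover k^-1 = w * (w*k)^-1 mod n."""
    w = secrets.randbelow(n - 1) + 1           # random non-zero mask
    wk_inv = pow((w * k) % n, -1, n)           # only the masked value w*k is inverted
    return (w * wk_inv) % n
```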
Thus the formulae for the ECDSA signature scheme in this embodiment are:
r = K_x mod n, where K_x is the x coordinate of K and n is the order of the point G'; and
s = w(kwβ)^-1(e + (d_1 r + d_2 r)) mod n.
Updating the parts of the private key may occur before or after the generation of the random w.
In a further embodiment, since G' = βG, the value of kG' can be computed as (kβ)G. In this way, the value of k is masked when computing kG', even if the value of β is determined. The formula for K then becomes: K = (kβ)G.
Although the invention has been described with reference to certain specific embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the spirit
and scope of the invention as outlined in the claims appended hereto. For example, it is not necessary that there be two components combining to make the private key.
Table of definite Integrals elliptic Integrals
Table of Integrals
A. Dieckmann, Physikalisches Institut der Uni Bonn
This integral table contains hundreds of expressions: indefinite and definite integrals of elliptic integrals, of square roots, arcustangents and a few more exotic functions. Most of them are not
found in Gradsteyn-Ryzhik.
Sometimes m, n, k denote real parameters and are restricted mostly to 0<{m, n, k}<1, at times they represent natural numbers.
Results may be valid outside of the given region of parameters, but should always be checked numerically!
Definite Integrals:
[The integral formulas themselves are rendered as images on the original page and are not reproduced here. The entries covered include: a substitution under which the Feynman-Hibbs integral can be calculated with Mathematica; an expression exhibiting a cancellation of singularities around c = negative integer (a special case, m = -1/2, of the following integral); Bessel-function sums where Z stands for J or Y (the sum being zero when a = nπ); families of integrals of the form ∫ f(x)/(a x^2 + b x + c) dx and ∫ f(x)/(a x^4 + b x^2 + c) dx, each stated with an abbreviation s, with values at integer n found approximately by setting n near an integer; and the master formula of Boros and Moll, whose result is a threefold sum given in Mathematica syntax using KSubsets[aList, k] from the DiscreteMath`Combinatorica` package (which gives all k-element subsets of aList), together with its n = 3 case.]
Galvez, Enrique J. "Kiko" - Department of Physics and Astronomy, Colgate University
• Existence and Absence of Geometric Phases Due to Mode Transformations of High-Order Modes
• Qubit quantum mechanics with correlated-photon experiments Enrique J. Galvez
• Preparing photon pairs entangled in any desired spatial modes via interference
• Undergraduate Laboratories Using Correlated Photons: Experiments on the Fundamentals of Quantum Mechanics
• To appear in Coherence and Quantum Optics VIII (Plenum). Measurements of the geometric phase of
• July 1, 2001 / Vol. 26, No. 13 / OPTICS LETTERS 971 Achromatic polarization-preserving beam displacer
• 15 November 1999, Optics Communications 171 (1999) 713
• Phase shifting of an interferometer using nonlocal quantum-state correlations E. J. Galvez, M. Malik, and B. C. Melius
• Quantum Optics Experiments with Single Photons for Undergraduate Laboratories
• Non-integral vortex structures in diffracted light beams S. Baumann and E.J. Galvez
• Composite vortices of displaced Laguerre-Gauss beams Daniel M. Kalb and Enrique J. Galvez
• Applications of Geometric Phase in Optics Enrique J. Galvez
• Correlated-Photon Experiments for Undergraduate Labs
• Gaussian Beams Enrique J. Galvez
• Composite Optical Vortices Formed by Collinear Laguerre-Gauss Beams
• Nonlocal labeling of paths in a single-photon interferometer M. J. Pysher,* E. J. Galvez, K. Misra, K. R. Wilson, B. C. Melius, and M. Malik
• Geometric Phase Associated with Mode Transformations of Optical Beams Bearing Orbital Angular Momentum
• Photon quantum mechanics and beam splitters C. H. Holbrow,a)
• Research Signpost 37166I (2), Fort P.O.,Trivandrum-695 023, Kerala,India
• Imaging Spatial-Helical Mode Interference of Single Photons E.J. Galvez, E. Johnson, B.J. Reschovsky, L.E. Coyle, and A. Shah
• IOP PUBLISHING JOURNAL OF PHYSICS B: ATOMIC, MOLECULAR AND OPTICAL PHYSICS J. Phys. B: At. Mol. Opt. Phys. 42 (2009) 015503 (9pp) doi:10.1088/0953-4075/42/1/015503
• Interference with correlated photons: Five quantum mechanics experiments for undergraduates
• Blackbody-radiation-induced resonances between Rydberg-Stark states of Na E. J. Galvez, C. W. MacGregor,* B. Chaudhuri,
• Propagation dynamics of optical vortices due to Gouy phase
• Poincare modes of light Enrique J. Galvez and Shreeya Khadka
• Poincare-beam patterns produced by non-separable superpositions of Laguerre-Gauss and polarization modes of light
• Proposal to produce two and four qubits with spatial modes of two photons
Weekly Challenge 25: Trig Trig Trig
Differentiating $f(x) = \cos(\sin(\cos x))$ gives
$$f'(x) = \sin(\sin(\cos x)) \cdot \cos(\cos(x)) \cdot \sin(x)$$
This is zero if and only if
$$\sin(\sin(\cos x))=0 \mbox{ or } \cos(\cos(x)) = 0 \mbox{ or } \sin(x)=0$$
Consider the first of these three conditions:
$$\sin(\sin(\cos x))=0 \Rightarrow \sin(\cos x) = n\pi, n\in \mathbb{Z}$$
Since $|\sin(X)|\leq 1$ for any real $X$ and $\pi > 1$ we must choose $n=0$ in the previous equation.
$$\sin(\sin(\cos x))=0 \Rightarrow \sin(\cos x) = 0 \Rightarrow \cos x = m\pi, m\in \mathbb{Z}$$
Similarly, we must choose $m=0$ in this expression. We can thus conclude that
$$\sin(\sin(\cos x))=0 \Leftrightarrow x = \left(r+\frac{1}{2}\right)\pi, r \in \mathbb{Z}$$
Consider the second of the three conditions:
$$\cos(\cos(x)) = 0 \Leftrightarrow \cos(x) = \left(r+\frac{1}{2}\right)\pi, r \in \mathbb{Z}$$
Since $\frac{1}{2}\pi> 1$ there are no real solutions to this condition.
Consider the third of the three conditions:
$$\sin(x) =0 \Leftrightarrow x = n\pi, n \in \mathbb{Z}$$
Combining all three conditions gives us the locations of the turning points:
$$f'(x)=0 \Leftrightarrow x = \frac{N\pi}{2}, N\in \mathbb{Z}$$
We now need to consider whether they are maxima, minima or something else. We could look at the second derivative, but this will be complicated and the boundedness of $\sin(x)$ and $\cos(x)$ allows
us to make shortcuts as follows:
Notice that $f(x) = 1$ when $x = \pm \frac{\pi}{2}, \pm \frac{3\pi}{2}, \pm \frac{5\pi}{2}, \dots$. Since $f(x)$ is continuous and differentiable and $|f(x)|\leq 1$ these points must be maxima. The
even multiples of $\frac{\pi}{2}$ must therefore be minima, at which the function takes the values $f(x) = \cos(\sin 1) \approx 0.666$.
A plot of the graph confirms this calculation:
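A quick numerical check (a short NumPy/matplotlib sketch, with $f(x)=\cos(\sin(\cos x))$ as above) reproduces the maxima of $1$ at odd multiples of $\frac{\pi}{2}$ and the minima of roughly $0.666$ at the even multiples:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2 * np.pi, 2 * np.pi, 2000)
f = np.cos(np.sin(np.cos(x)))                     # the function analysed above

plt.plot(x, f)
plt.axhline(np.cos(np.sin(1)), linestyle='--')    # predicted minimum value, about 0.666
plt.xlabel('x')
plt.ylabel('f(x)')
plt.title('Turning points at integer multiples of pi/2')
plt.show()
```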
ShareMe - Free Algebra For Dummies download
Free Algebra For Dummies
From Title
1. free High School algebra 1 Help eBook - Educational/Mathematics
... This is a short eBook that describes how to get free high school algebra 1 help online without having to spend any money, buy anything, join any free trials, or anything like that. ...
2. Java For dummies - Utilities/Other Utilities
... This is the Java For dummies book that are people that want to learn how to program in Java ...
3. Forex For dummies - Educational/Teaching Tools
... free Forex eBook for people that want to learn about trading foreign currency. This book is a helpful guide for the total Forex newbie. ...
4. Torrent - Transmission for dummies - Utilities/Other Utilities
... This is the far-easiest command line tool for running torrents under Linux. It only has 4 options, but a little-more advanced user can finetune it by editing. Created by Salcay's Boring Hours
5. algebra - One On One - Educational/Mathematics
... algebra One on One is an educational game for those wanting a fun way to learn and practice algebra. This program covers 21 functions which includes maximums, minimums, absolute values,
averages, x/y, ax + b, axy + b, ax + by + c, squares, cubes, and so on. It has a practice and a game area. It has a great help system that makes it easy for the beginner to do and understand
algebra. It also has a "Einstein" level that even algebra experts will find fun and challenging. You can choose from a ten ...
6. Linear algebra - Educational/Mathematics
... Performs computations associated with matrices, including solution of linear systems of equations (even least squares solution of over-determined or inconsistent systems and solution by LU
factors), matrix operations (add, subtract, multiply), finding the determinant, inverse, adjoint, QR or LU factors, eigenvalues and eigenvectors, establish the definiteness of a symmetric matrix,
perform scalar multiplication, transposition, shift, create matrices of zeroes or ones, identity, symmetric or ...
7. EMSolution algebra - Educational/Mathematics
... This bilingual program offers 70690 of fully explained step by step solutions of algebra problems together with test authoring tools. Problems of 11 levels of complexity vary from basic to
advanced: linear, quadratic, biquadratic, reciprocal, cubic, high degree and complex fractional expressions - computational problems, proofs of identities, solutions of equations and inequalities
and more. Fully explained step by step solutions and proofs. Each solution step is provided with the corresponding ...
8. MathAid algebra II - Educational/Mathematics
... Highly interactive tutorials and self-test system for individual e-learning, home schooling, college and high school computer learning centers, and distance learning. The product emphasizes on
building problem-solving skills. Tutorials include the reviews of basic concepts, interactive examples, and standard problems with randomly generated parameters. The self-test system allows
selecting topics and length for a test, saving test results, and getting the test review. Topics covered: rectangular ...
9. EMMentor algebra - Educational/Mathematics
... Interactive multilingual mathematics software for training problem-solving skills offers 70439 algebraic problems, a variety of appropriate techniques to solve problems and a unique system of
performance analysis with methodical feedback. Included are linear, quadratic, biquadratic, reciprocal, cubic and complex fractional algebraic expressions, identities, equations and inequalities.
The software allows students at all skill levels to practice at their own pace, learn from both errors and ...
10. Infinite algebra 2 - Educational/Other
... Stop searching through textbooks, old worksheets, and question databases. Infinite Pre-algebra helps you create questions with the characteristics you want them to have. It enables you to
automatically space the questions on the page and print professional-looking assignments. Give Infinite algebra 2 a try to fully assess its capabilities! FEATURES: TE Change All Questions to free
-Response TE Change All Questions to Multiple-Choice TE Change the Heading TE Change the Starting Number TE ...
Free Algebra For Dummies
From Short Description
1. PyNaC - Utilities/Mac Utilities
... PyNaC is a Python package for coordinate-free symbolic math, based on Geometric algebra (Clifford algebra) and Geometric Calculus (Clifford Analysis). ...
2. MathProf - Educational/Education
... MathProf can display mathematical correlations in a very clear and simple way. The program covers the areas Analysis, Geometry, algebra, Stochastics, Vector algebra. It helps Junior High
School students with problems in Geometry and algebra. High School and College students, seeking to expand their knowledge into further reaching mathematical concepts find this program very useful
as well. ...
3. Task Light - Educational/Mathematics
... free multilingual test authoring mathematics software enables math teachers and tutors to easily prepare math tests, quizzes and homeworks from a repository of more than 500 of solved math
problems in arithmetic, pre-algebra, algebra, trigonometry and hyperbolic trigonometry, and develop numerous variant tests around each prepared test. All tests with or without the solutions can be
printed out. The software includes basic math problems and advanced tasks, such as solutions of linear, quadratic, ...
4. EMTask Light - Educational/Teaching Tools
... free multilingual test authoring mathematics software enables math teachers and tutors to easily prepare math tests, quizzes and homeworks from a repository of more than 500 of solved math
problems in arithmetic, pre-algebra, algebra, trigonometry and hyperbolic trigonometry, and develop numerous variant tests around each prepared test. All tests with or without the solutions can be
printed out. The software includes basic math problems and advanced tasks, such as solutions of linear, quadratic, ...
5. AlgeWorksheets - Educational/Mathematics
... Generate and print pre-algebra and algebra worksheets or test papers in minutes. The sums may be from any one or combination of the following topics: pre-algebra (integers), algebraic
expressions, algebraic expansion (multiplication of binomials and trinomials), algebraic factors, algebraic fractions, indices, inequalities, simple equations, simultaneous equations, quadratic
equations, indices, algebraic fractions, inequalities, fractions, decimals, significant figures, standard form and whole ...
6. algebra Vision - Educational/Mathematics
... algebra Vision is a unique educational software tool to help students develop algebraic problem solving strategies. It provides an environment to play and see algebra in a more tangible light.
You can literally move expressions around! Draw lines connecting distributive elements! ...
7. Linear algebra Class Library - Utilities/Other Utilities
... The Linear algebra class library for Java provides a full set of tools to programmers who need linear algebra operations to use in their own projects.Currently the library supports elementary
matrix operations. ...
8. Basic algebra Shape-Up - Educational/Mathematics
... Basic algebra Shape-Up helps students master specific basic algebra skills, while providing teachers with measurable results. Concepts covered include creating formulas; using ratios,
proportions, and scale; working with integers, simple and multi-step equations, and variables. Students start with an assessment and receive immediate instructional feedback throughout.
Step-by-step tutorials, which introduce each level, can be referred to during practice. Problems are broken down into small, ...
9. Automatically Tuned Linear algebra Soft. - Utilities/Other Utilities
... ATLAS (Automatically Tuned Linear algebra Software) provides highly optimized Linear algebra kernels for arbitrary cache-based architectures. ATLAS provides ANSI C and Fortran77 interfaces for
the entire BLAS API, and a small portion of the LAPACK AP ...
10. PHP-Broadcast - Utilities/Mac Utilities
... (NOW OBSOLETE) PHP-Broadcast is a web publishing tool for dummies. It allows anyone within an organisation to publish small amounts of news on any section of a web site or to "broadcast" it
to all sites and intranets. Soon security and other featu ...
Free Algebra For Dummies
From Long Description
1. Innoexe Visual algebra - Educational/Mathematics
... Innoexe Visual algebra works in three modes. Work with others over the internet, network, or alone. Chat with others and solve problems at the same time. IVA is perfect for tutors teaching
students over the internet or a network connection. Innoexe Visual algebra will solve your problems step by step and explain as it goes. Innoexe Visual algebra will change the way you look at
algebra problems. All registered users will receive free up grades. ...
2. Gaigen - Utilities/Other Utilities
... Gaigen is a Geometric algebra Implementation Generator. You specify the geometric algebra you want to use in your (C++) project, and then Gaigen generates C++ code that implemenents this
algebra. Requires FLTK library for the user interface. ...
3. Mathniac - Utilities/Mac Utilities
... Mathniac is a computer algebra system and a small, portable C++ library with symbolic math capabilities. Features: - Symbolic math with multivariable functions and expressions. - Symbolic
derivation. - Matrix algebra, etc... ...
4. Math.NET - Utilities/Mac Utilities
... Math.NET aims to provide a self contained clean framework for symbolic mathematical (Computer algebra System) and numerical/scientific computations, including a parser and support for linear
algebra, complex differential analysis, system solving and more ...
5. Clifford algebra and Utilities Library - Utilities/Other Utilities
... The CLU (CLifford algebra and Utilities) library is a C++ library that implements Clifford algebra and visualizes the geometric meaning of multivectors. ...
6. MiJAL (Minor Java algebra Library) - Utilities/Mac Utilities
... The Minor Java algebra Library aims to become an open, standard Java library for common problems in the domain of algebra and selected optimisation problems such as TSP and others. MiJAL is
aimed to be used in educational envirements but not limited to. ...
7. Equator for Mac OS - Educational/Mathematics
... A word-processor-like editor specifically designed for use in high school and college-level algebra-based physics courses. Equator helps high school and college physics students to easily
navigate the algebra. The program integrates a math editor, drawing palette, formula reference library, drag-and-drop algebra generator, and calculator. It records each work step, accepts figures
and comments, collects all of an assignment into a single file, and in one click prints homework-quality documents. ...
8. AlgebraNet - Educational
... Having trouble doing your Math homework? This program can help you master basic skills like reducing, factorising, simplifying and solving equations. Step-by-step explanations teach you how to
solve problems concerning fractions, binomials, trinomials etc. Each type of problem has three levels to help you start with easy problems and slowly build up your skills. AlgebraNet has three
different functionalities: you start learning each type of problem by seeing the program solve it step by ...
9. Universal Java Matrix Package - Utilities/Mac Utilities
... The Universal Java Matrix Package (UJMP) is a Java library which provides implementations for sparse and dense matrices, as well as linear algebra calculations such as matrix decomposition,
inverse, multiply, mean, correlation, standard deviation, etc. Tags: Linear algebra, Matrix Visualization, Matrix Decomposition, Matrix Inverse, Matrix Import/Export ...
10. webrel - Utilities/Other Utilities
... This tool executes simple relational algebra expressions. It is useful for learning in a database course. JavaScript and XHTML are used to develop this tool, so it can be run on any platform
that has a web browser. Mirror: http://launchpad.net/webrel/ Tags: relational database, relational algebra, javascript, web-based, xhtml, portable ...
|
{"url":"http://shareme.com/programs/free/algebra-for-dummies","timestamp":"2014-04-20T23:28:31Z","content_type":null,"content_length":"53545","record_id":"<urn:uuid:cf6c2552-2f37-4b3a-8845-91593a1f461a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
- math-teach
Discussion: math-teach
A discussion of teaching mathematics, including conversations about the NCTM Standards. It is not officially sponsored by or affiliated with the NCTM.
To subscribe, send email to majordomo@mathforum.org with only the phrase subscribe math-teach in the body of the message.
To unsubscribe, send email to majordomo@mathforum.org with only the phrase unsubscribe math-teach in the body of the message.
|
{"url":"http://mathforum.org/kb/forum.jspa?forumID=206&start=19575","timestamp":"2014-04-16T22:11:20Z","content_type":null,"content_length":"38682","record_id":"<urn:uuid:f59afef4-c8fc-486f-98de-c83fafb20069>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Princeton Junction Precalculus Tutor
Find a Princeton Junction Precalculus Tutor
...I teach Linear Algebra (MTH 201) at Burlington County College. This subject includes topics such as linear systems, matrix operations, vectors and vector spaces, linear independence, basis and
dimension, homogeneous systems, rank, coordinates and change of basis, orthonormal bases, linear transf...
17 Subjects: including precalculus, calculus, geometry, statistics
...I've always excelled in all academic areas and taking standardized tests. When I wanted to begin tutoring the LSAT, I took the test, scoring 175. I've also devoured the teaching materials from
several well known test prep companies in order to understand how they break the test down to help students with a variety of learning styles.
16 Subjects: including precalculus, calculus, geometry, algebra 2
I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because
this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including precalculus, calculus, physics, ACT Math
...I've been using Apple products ever since I received my iphone years ago. I currently use a macbook air and have taught friends and family to use their iPhones, apple computers, iPads, and
iTunes. Using these products requires an understanding of their software and competitor products.
26 Subjects: including precalculus, chemistry, calculus, physics
I graduated from West Point with a Bachelor of Science degree in Engineering Management, and I currently teach mathematics, physics and engineering at an independent school in the Philadelphia
suburbs. I have tutored middle and high school students in the areas of PSAT/SAT/ACT preparation, math (Al...
19 Subjects: including precalculus, English, calculus, GRE
|
{"url":"http://www.purplemath.com/Princeton_Junction_Precalculus_tutors.php","timestamp":"2014-04-19T19:37:40Z","content_type":null,"content_length":"24669","record_id":"<urn:uuid:1325d273-435e-43ce-a8bb-6807903ebf32>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Papers Published
1. Wechsatol, W. and Lorente, S. and Bejan, A., Optimal tree-shaped networks for fluid flow in a disc-shaped body, Int. J. Heat Mass Transf. (UK), vol. 45 no. 25 (2002), pp. 4911-24 [S0017-9310(02)00211-9].
(last updated on 2007/04/06)
In this paper we consider the fundamental problem of how to design a flow path with minimum overall resistance between one point (O) and many points situated equidistantly on a circle centered at
O. The flow may proceed in either direction, from the center to the perimeter, or from the perimeter to the center. This problem is an integral component of the electronics cooling problem of how
to bathe and cool with a single stream of coolant a disc-shaped area or volume that generates heat at every point. The smallest length scale of the flow structure is fixed (d), and represents the
distance between two flow ports on the circular perimeter. The paper documents a large number of optimized dendritic flow structures that occupy a disc-shaped area of radius R. The flow is
laminar and fully developed in every tube. The complexity of each structure is indicated by the number of ducts (n[0]) that reach the central point, the number of levels of confluence or
branching between the center and the perimeter, and the number of branches or tributaries (e.g., doubling vs. tripling) at each level. The results show that as R/d increases and the overall size
of the structure grows, the best performance is provided by increasingly more complex structures. The transition from one level of complexity to the next, higher one is abrupt. Generally, the use
of fewer channels is better, e.g., using two branches at one point is better than using three branches. As the best designs become more complex, the difference between optimized competitors
becomes small. These results emphasize the robustness of optimized tree-shaped networks for fluid flow.
computational complexity;cooling;electronics industry;flow simulation;laminar flow;minimisation;optimisation;pipe flow;
|
{"url":"http://fds.duke.edu/db/pratt/mems/faculty/sylvie.lorente/publications/57252","timestamp":"2014-04-19T04:51:44Z","content_type":null,"content_length":"15767","record_id":"<urn:uuid:0e707ff4-b9c1-4d46-82b8-6a96bbb9262d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Prove digital line is not homeomorphic
Prove that the digital line is not homeomorphic to $\mathbb{Z}$ with the finite complement topology.
If two topological spaces are homeomorphic, they share the same topological invariants, such as connectedness, compactness, Hausdorffness, and homotopy groups. So if one of the spaces is compact and the other is not, the two spaces cannot be homeomorphic. The finite complement topology on $\mathbb{Z}$ is compact: in any open cover of $\mathbb{Z}$, once you have chosen a single nonempty open set, only finitely many points remain uncovered, so finitely many further members of the cover suffice to cover all of $\mathbb{Z}$, and a finite subcover exists. In contrast, the digital topology on $\mathbb{Z}$ is not compact: the basis elements of the digital topology form an open cover of $\mathbb{Z}$, but this cover has no finite subcover, since each basis element is a finite set. Thus the two topological spaces are not homeomorphic.
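To see the non-compactness concretely, here is one explicit cover (a sketch using the usual digital-line basis, in which each odd integer $2n+1$ is an open point and each even integer $2n$ has smallest open neighbourhood $\{2n-1,2n,2n+1\}$):
$$\mathcal{U}=\bigl\{\{2n+1\} : n\in\mathbb{Z}\bigr\}\ \cup\ \bigl\{\{2n-1,2n,2n+1\} : n\in\mathbb{Z}\bigr\}.$$
Every member of $\mathcal{U}$ is finite, so any finite subfamily covers only finitely many integers and cannot cover all of $\mathbb{Z}$.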
|
{"url":"http://mathhelpforum.com/differential-geometry/85273-prove-digital-line-not-hemeomorphic.html","timestamp":"2014-04-23T16:23:41Z","content_type":null,"content_length":"34233","record_id":"<urn:uuid:9fe7ed15-c9a6-4f65-936c-22ddfb2ebcd3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Small probabilities
From: Iain Strachan <igd.strachan@gmail.com> Date: Wed Nov 09 2005 - 08:19:53 EST
Thanks for the thoughtful reply.
Your example of the repeat-deal with a pack of cards is of course
susceptible to the same analysis within the description-length framework -
to transmit the information relating to two deals of a pack of cards
requires in general that you send 104 card descriptors in the message. But
if the second deal was the same as the first, then you'd only have to send
52, plus an indicator that the same sequence would then be repeated. Hence
the description length of the repeat-deal case is about half that of the
general case & can therefore by the same token be treated as a meaningful
low probability.
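A rough back-of-the-envelope version of that comparison, measured in bits (a minimal sketch, assuming each of the 52! orderings of the deck is equally likely and that a single-bit flag is enough to mark a repeat):

    # Rough description lengths, in bits, for two deals of a 52-card deck.
    from math import log2, factorial

    bits_per_deal = log2(factorial(52))   # ~225.6 bits to specify one ordering
    independent = 2 * bits_per_deal       # two unrelated deals
    repeated = bits_per_deal + 1          # one deal plus a 1-bit "same again" flag

    print(f"independent deals: {independent:.1f} bits")
    print(f"repeated deal:     {repeated:.1f} bits")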
Regarding the "pi in Gen 1:1" phenomenon. I agree that it would be no more
than a curiosity if that were all there was to it. The internet has many
peculiar people who do tortuous mathematical calculations to prove something
(often the date of the "rapture", which to my knowledge no one has got right
yet! ;-). So if you get pi to 5 sig figures from Gen 1:1 using an apparently
arbitrarily concocted mathematical formula, then so what? From the vast
range of mathematical formulae you could apply it would be easy to find one
that gave you any answer you wanted. What makes it more interesting is that
Vernon reported that the value of e to a similar accuracy could be obtained
from the related NT "beginnings" verse, John 1:1, _by applying exactly the
same formula_ . It's the fact that it is the same formula that makes it
noteworthy. In description length terms, you have to transmit the model as
well as the parameters. In this case, the model only has to be transmitted
once (as the 52-card sequence has to be transmitted once), thus reducing the
description length. I don't think this example is capable of being used to
derive a formal probability, however, but it reminds me of the story,
printed in the UK newspapers a year or so ago, of a little girl called Laura
Buxton. Laura Buxton was a girl who released a helium balloon from her back
garden at a party. The balloon travelled 150 kms and then came down in
another back garden, where it was picked up by another little girl whose
name was also Laura Buxton. If that sort of thing happens to you, then you
immediately think it's an amazing coincidence, or something special has
happened and it gets reported in the local press. But when you consider the
vast range of amazing coincidences that could happen, then it's not
surprising that such things happen, and when they do, get reported. But
consider this; suppose next week you read in the newspaper that exactly the
same thing had happened to two girls called Sarah Harding who lived 135 Km
apart. Now that really would be amazing; note how much shorter my
description was the second time, because the basic model ( one girl releases
a balloon that travels a large distance to be found by another girl of the
same name) is taken as read the second time, and only the parameters of the
model ( girls' name, distance travelled) need to be described. In the same
way, it takes Vernon's web-page some considerable space to show how pi can
be derived from Gen 1:1, but very little to say that the same formula yields
e from John 1:1.
However, I think the geometric features of Vernon's work are more
susceptible to this kind of formal analysis than the pi/e derivations.
On 11/9/05, Randy Isaac <randyisaac@adelphia.net> wrote:
> Iain,
> Thanks for getting the discussion back to the original key point and for
> the good analysis. There isn't a probability so low that it eliminates
> chance simply because it isn't the whole story. We have to consider the
> bigger picture and the span of possible events. This is also an area where
> one must differentiate between the past and the future. Events in the past
> can have extremely low probabilities of occurrence but they nevertheless
> occurred because of the large number of possible outcomes. That same event
> predicted for the future is virtually guaranteed not to occur by chance.
> Dealing a deck of cards is still a good example. Deal a hand of bridge and
> the result has an infinitesimally small probability of occurring, namely
> 1/52!, if you also count the sequence of cards in each hand. But there are
> also 52! possibilities so the probability of getting one of them is unity.
> But put that same sequence in the future, and predict a particular sequence
> and the probability reverts to exactly 1/52! which is essentially zero.
> Similarly, in evolution the "design space" is indeed vast but so is the
> set of possibilities. Net: calculation of probabilities of what has been
> observed is not only impossible to do because of our lack of knowledge, but
> it is also meaningless because of the range of possibilities. On the other
> hand, predicting a specific result in the future has a near zero chance of
> occurring.
> That's why I keep saying that finding numerical or geometrical
> curiosities in a text is fun but not meaningful. Pi to 5 significant digits
> in Gen. 1:1 is simply that, a curious observation. (on the other hand, if we
> were to find Hubble's constant in Gen. 1:1 to 5 significant figures, now
> THAT would be fascinating!)
> Randy
> ----- Original Message -----
> *From:* Iain Strachan <igd.strachan@gmail.com>
> *To:* Bill Hamilton <williamehamiltonjr@yahoo.com>
> *Cc:* asa@lists.calvin.edu
> *Sent:* Tuesday, November 08, 2005 11:08 AM
> *Subject:* Re: Small probabilities
> While everyone has got interested in the point-picking-from-a-line
> example, I don't believe that anyone has really addressed Bill's question
> about low probability "eliminating chance". One can get lost in the
> philosophy of picking a point from an infinite number of points, without
> seeing the real point (which was to argue against Dembski's notion that low
> probability can eliminate chance). I'd like to re-address this point. This
> is not to say that low probability can detect "design", which is a separate
> issue.
> Low probability by itself cannot "eliminate chance", because if every
> event is low probability, then one of them has to happen. .......
There are 3 types of people in the world.
Those who can count and those who can't.
Received on Wed Nov 9 08:22:47 2005
|
{"url":"http://www2.asa3.org/archive/asa/200511/0129.html","timestamp":"2014-04-20T03:36:20Z","content_type":null,"content_length":"13294","record_id":"<urn:uuid:954666df-8ae7-4108-bf31-4028c4039c57>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Need help with a review question
May 19th 2009, 01:30 PM #1
May 2009
Need help with a review question
Write the line parallel to the line with the equation x-5y=6 and through point (10,2) in standard form.
If anyone can explain this to me, i would greatly appreciate it.
Standard form for a line is $Ax+By=C$, where $\frac{-A}{B}$ is the slope of the line. So in order for the line you want to be parallel to $x-5y=6$, it needs to have the same slope. Solving this
equation by subtracting $x$ from both sides gives $-5y=6-x$, and dividing both sides by $-5$ gives $y=\frac{6-x}{-5}=-\frac{6}{5}+\frac{x}{5}$, so the slope is $\frac{1}{5}$.
So our line needs to be y=mx+b and we now know $m=\frac{1}{5}$
now we need it to go through the point (10,2), so we can plug that point into the equation to solve for b. 10 is x and 2 is y so $2=\frac{1}{5}(10)+b$ and solving for b, $b=2-2=0$
So $y=\frac{1}{5}x$ which is parallel to the given line and contains the point (10,2)
To get it in standard form, simply subtract $\frac{1}{5}x$ from both sides to get $\frac{-1}{5}x+y=0$, or, multiplying through by $-5$ to clear the fraction, $x-5y=0$.
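A quick numerical sanity check of this result (a minimal sketch in Python; the helper name slope_of is just for illustration):

    # Check that x - 5y = 0 is parallel to x - 5y = 6 and passes through (10, 2).
    def slope_of(a, b):
        """Slope of the line Ax + By = C, assuming B != 0, is -A/B."""
        return -a / b

    given = (1, -5, 6)    # x - 5y = 6
    answer = (1, -5, 0)   # x - 5y = 0

    assert slope_of(given[0], given[1]) == slope_of(answer[0], answer[1])  # same slope, so parallel
    assert answer[0] * 10 + answer[1] * 2 == answer[2]                     # (10, 2) satisfies x - 5y = 0
    print("check passed: x - 5y = 0 is the parallel line through (10, 2)")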
|
{"url":"http://mathhelpforum.com/trigonometry/89695-need-help-review-question.html","timestamp":"2014-04-24T10:19:00Z","content_type":null,"content_length":"33864","record_id":"<urn:uuid:699ea8b5-4916-4c5a-9c60-6baec5aa2c05>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Results 1 - 10 of 38
, 2003
"... A bigraphical reactive system (BRS) involves bigraphs, in which the nesting of nodes represents locality, independently of the edges connecting them; it also allows bigraphs to reconfigure
themselves. BRSs aim to provide a uniform way to model spatially distributed systems that both compute and comm ..."
Cited by 1000 (29 self)
Add to MetaCart
A bigraphical reactive system (BRS) involves bigraphs, in which the nesting of nodes represents locality, independently of the edges connecting them; it also allows bigraphs to reconfigure
themselves. BRSs aim to provide a uniform way to model spatially distributed systems that both compute and communicate. In this memorandum we develop their static and dynamic theory. In part I, we
- In Proc. of ICALP, volume 2380 of LNCS , 2001
"... We study a spatial logic for reasoning about labelled directed graphs, and the application of this logic to provide a query language for analysing and manipulating such graphs. We give a graph
description using constructs from process algebra. We introduce a spatial logic in order to reason loca ..."
Cited by 62 (5 self)
Add to MetaCart
We study a spatial logic for reasoning about labelled directed graphs, and the application of this logic to provide a query language for analysing and manipulating such graphs. We give a graph
description using constructs from process algebra. We introduce a spatial logic in order to reason locally about disjoint subgraphs. We extend our logic to provide a query language which preserves
the multiset semantics of our graph model. Our approach contrasts with the more traditional set-based semantics found in query languages such as TQL, Strudel and GraphLog.
, 2004
"... A bigraphical reactive system (BRS) involves bigraphs, in which the nesting of nodes represents locality, independently of the edges connecting them; it also allows bigraphs to reconfigure
themselves. BRSs aim to provide a uniform way to model spatially distributed systems that both compute and comm ..."
Cited by 59 (6 self)
Add to MetaCart
A bigraphical reactive system (BRS) involves bigraphs, in which the nesting of nodes represents locality, independently of the edges connecting them; it also allows bigraphs to reconfigure
themselves. BRSs aim to provide a uniform way to model spatially distributed systems that both compute and communicate. In this memorandum we develop their static and dynamic theory. In Part I we
, 2005
"... Bigraphs are graphs whose nodes may be nested, representing locality, independently of the edges connecting them. They may be equipped with reaction rules, forming a bigraphical reactive system
(Brs) in which bigraphs can reconfigure themselves. Following an earlier paper describing link graphs, a c ..."
Cited by 50 (5 self)
Add to MetaCart
Bigraphs are graphs whose nodes may be nested, representing locality, independently of the edges connecting them. They may be equipped with reaction rules, forming a bigraphical reactive system (Brs)
in which bigraphs can reconfigure themselves. Following an earlier paper describing link graphs, a constituent of bigraphs, this paper is devoted to pure bigraphs, which in turn underlie various
more refined forms. Elsewhere it is shown that behavioural analysis for Petri nets, π-calculus and mobile ambients can all be recovered in the uniform framework of bigraphs. The paper first develops
the dynamic theory of an abstract structure, a wide reactive system (Wrs), of which a Brs is an instance. In this context, labelled transitions are defined in such a way that the induced bisimilarity
is a congruence. This work is then specialised to Brss, whose graphical structure allows many refinements of the theory. The latter part of the paper emphasizes bigraphical theory that is relevant to
the treatment of dynamics via labelled transitions. As a running example, the theory is applied to finite pure CCS, whose resulting transition system and bisimilarity are analysed in detail. The
paper also mentions briefly the use of bigraphs to model pervasive computing and
- In ESOP, volume 4421 of LNCS , 2007
"... Abstract. Service Level Agreements are a key issue in Service Oriented Computing. SLA contracts specify client requirements and service guarantees, with emphasis on Quality of Service (cost,
performance, availability, etc.). In this work we propose a simple model of contracts for QoS and SLAs that a ..."
Cited by 47 (5 self)
Add to MetaCart
Abstract. Service Level Agreements are a key issue in Service Oriented Computing. SLA contracts specify client requirements and service guarantees, with emphasis on Quality of Service (cost,
performance, availability, etc.). In this work we propose a simple model of contracts for QoS and SLAs that also allows to study mechanisms for resource allocation and for joining different SLA
requirements. Our language combines two basic programming paradigms: name-passing calculi and concurrent constraint programming (cc programming). Specifically, we extend cc programming by adding
synchronous communication and by providing a treatment of names in terms of restriction and structural axioms closer to nominal calculi than to variables with existential quantification. In the
resulting framework, SLA requirements are constraints that can be generated either by a single party or by the synchronisation of two agents. Moreover, restricting the scope of names allows for local
stores of constraints, which may become global as a consequence of synchronisations. Our approach relies on a system of named constraints that equip classical constraints with a suitable algebraic
structure providing a richer mechanism of constraint combination. We give reduction-preserving translations of both cc programming and the calculus of explicit fusions.
, 2004
"... A framework is defined within which reactive systems can be studied formally. The framework is based upon s-categories, a new variety of categories, within which reactive systems can be set up
in such a way that labelled transition systems can be uniformly extracted. These lead in turn to behavi ..."
Cited by 26 (5 self)
Add to MetaCart
A framework is defined within which reactive systems can be studied formally. The framework is based upon s-categories, a new variety of categories, within which reactive systems can be set up in
such a way that labelled transition systems can be uniformly extracted. These lead in turn to behavioural preorders and equivalences, such as the failures preorder (treated elsewhere) and
bisimilarity, which are guaranteed to be congruential. The theory rests upon the notion of relative pushout previously introduced by the authors. The framework
, 2006
"... This document presents two different paradigms of description of communication behaviour, one focussing on global message flows and another on end-point behaviours, as formal calculi based on
session types. The global calculus originates from Choreography Description Language, a web service descript ..."
Cited by 26 (9 self)
Add to MetaCart
This document presents two different paradigms of description of communication behaviour, one focussing on global message flows and another on end-point behaviours, as formal calculi based on session
types. The global calculus originates from Choreography Description Language, a web service description language developed by W3C WS-CDL working group. The end-point calculus is a typed π-calculus.
The global calculus describes an interaction scenario from a vantage viewpoint; the endpoint calculus precisely identifies a local behaviour of each participant. After introducing the static and
dynamic semantics of these two calculi, we explore a theory of endpoint projection which defines three principles for well-structured global description. The theory then defines a translation under
the three principles which is sound and complete in the sense that all and only behaviours specified in the global description are realised as communications among end-point processes. Throughout the
theory, underlying type structures play a fundamental role. The document is divided in two parts: part I introduces the two descriptive frameworks using simple but non-trivial examples; the second
part establishes a theory of the global and end-point formalisms.
- PROCEEDINGS OF THE INTERNATIONAL CONFERENCE OF MATHEMATICIANS , 2001
"... A notion of bigraph is proposed as the basis for a model of mobile interaction. A bigraph consists of two independent structures: a topograph representing locality and a monograph representing
connectivity. Bigraphs are equipped with reaction rules to form bigraphical reactive systems (BRSs), which ..."
Cited by 25 (6 self)
Add to MetaCart
A notion of bigraph is proposed as the basis for a model of mobile interaction. A bigraph consists of two independent structures: a topograph representing locality and a monograph representing
connectivity. Bigraphs are equipped with reaction rules to form bigraphical reactive systems (BRSs), which include versions of the π-calculus and the ambient calculus. Bigraphs are shown to be a
special case of a more abstract notion, wide reactive systems (WRSs), not assuming any particular graphical or other structure but equipped with a notion of width, which expresses that agents,
contexts and reactions may all be widely distributed entities. A behavioural theory is established for WRSs using the categorical notion of relative pushout; it allows labelled transition systems to
be derived uniformly, in such a way that familiar behavioural preorders and equivalences, in particular bisimilarity, are congruential under certain conditions. Then the theory of bigraphs is
developed, and they are shown to meet these conditions. It is shown that, using certain functors, other WRSs which meet the conditions may also be derived; these may, for example, be forms of BRS
with additional structure. Simple examples of bigraphical systems are discussed; the theory is developed in a number of ways in preparation for deeper application studies.
- IN ICALP’99, LNCS 1644:513–523 , 1999
"... We present a calculus of mobile processes without prefix or summation, and using two different encodings we show that it can express both action prefix and guarded summation. One encoding gives
a strong correspondence but uses a match operator; the other yields a slightly weaker correspondence but u ..."
Cited by 21 (4 self)
Add to MetaCart
We present a calculus of mobile processes without prefix or summation, and using two different encodings we show that it can express both action prefix and guarded summation. One encoding gives a
strong correspondence but uses a match operator; the other yields a slightly weaker correspondence but uses no additional operators.
- Proceedings of the Graph Transformation for Verification and Concurrency workshop (GT-VC'05) , 2006
"... Bigraphs have been introduced with the aim to provide a topographical meta-model for mobile, distributed agents that can manipulate their own linkages and nested locations, generalising both
characteristics of the π-calculus and the Mobile Ambients calculus. We give the first bigraphical presentatio ..."
Cited by 17 (10 self)
Add to MetaCart
Bigraphs have been introduced with the aim to provide a topographical meta-model for mobile, distributed agents that can manipulate their own linkages and nested locations, generalising both
characteristics of the π-calculus and the Mobile Ambients calculus. We give the first bigraphical presentation of a non-linear, higher-order process calculus with nested locations, non-linear active
process mobility, and local names, the calculus of Higher-Order Mobile Embedded Resources (Homer). The presentation is based on Milner’s recent presentation of the λ-calculus in local bigraphs. The
combination of non-linear active process mobility and local names requires a new definition of parametric reaction rules and a representation of the location of names. We suggest localised bigraphs
as a generalisation of local bigraphs in which links can be further localised. Key words: bigraphs, local names, non-linear process mobility
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=8781","timestamp":"2014-04-21T12:10:05Z","content_type":null,"content_length":"38453","record_id":"<urn:uuid:36776239-edec-4417-91ec-096ab811ad74>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SymMath Application
Computing Enthalpies of Reaction©
Theresa Julia Zielinski
Department of Chemistry, Medical Technology, and Physics
Monmouth University
West Long Branch, NJ 07764-1898
United States
mail to: tzielins@monmouth.edu
The goal of this document is to provide hands-on practice in computing heats of reaction, and the heat of reaction as a function of temperature, given a constant value for Cp. The thermodynamic data
required for a large variety of reactions are provided in the file ThDATA.xls, which should be in the same directory as the computational template. The ThDATA file has the following column
headings: Compound Name, ΔfH, ΔfG, S, and Cp; units are kJ/mol or J/(K·mol). Simple instructions for using the template are also included. This document is based on the Mathematica notebook directions
given in "Physical Chemistry Using Mathematica" by Joseph H. Noggle; Harper Collins College Publishers, New York, 1996, pp. 151-155. The data used in the calculations were initially prepared by Dr. Noggle
using tables from his Physical Chemistry textbook. TJZ transformed the RXDATA file into an Excel file for use with Mathcad and updated the Mathematica workbook to Mathematica 5. The CPDATA.nb and
FXDATA.nb files were also updated for use in the Mathematica notebook. The CPDATA and FXDATA files must be loaded before using the processes in the Enthalpy of Reaction Mathematica notebook. Be sure to
edit the Enthalpy of Reaction notebook to include the correct path to the data files.
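The underlying calculation is simple enough to sketch outside Mathcad or Mathematica. Below is a minimal Python illustration of the two steps the template automates (the standard reaction enthalpy from formation enthalpies, then Kirchhoff's law with constant Cp); the species data are illustrative placeholders, not values read from ThDATA.xls, and the function names are not part of the distributed files:

    # Sketch: standard reaction enthalpy and its temperature dependence,
    # assuming constant heat capacities (Kirchhoff's law).
    T_REF = 298.15  # K

    def reaction_sum(nu, table):
        """Sum of nu_i * x_i over all species (products +, reactants -)."""
        return sum(n * table[sp] for sp, n in nu.items())

    # Example: H2 + 1/2 O2 -> H2O(g); placeholder data, NOT from ThDATA.xls
    dHf = {"H2": 0.0, "O2": 0.0, "H2O": -241.8}   # kJ/mol
    Cp = {"H2": 28.8, "O2": 29.4, "H2O": 33.6}    # J/(K mol)
    nu = {"H2": -1.0, "O2": -0.5, "H2O": 1.0}     # stoichiometric coefficients

    dH_298 = reaction_sum(nu, dHf)                # reaction enthalpy at 298.15 K, kJ/mol
    dCp = reaction_sum(nu, Cp)                    # reaction Cp, J/(K mol), taken as constant

    def dH_at(T):
        """Kirchhoff's law with constant delta-Cp; result in kJ/mol."""
        return dH_298 + dCp * (T - T_REF) / 1000.0

    print(dH_298, dH_at(500.0))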
Audiences: Upper-Division Undergraduate
Pedagogies: Computer-Based Learning
Domains: Physical Chemistry
Topics: Mathematics / Symbolic Mathematics, Thermodynamics
File Name Description Software Type Software Version
Enthalpy of Reactions.mcd Computational Document. Mathcad 2001i or higher required Mathcad 2001i
Enthalpy of Reactions11.mcd Computational Document. Mathcad 11 required Mathcad 11
Enthalpy of reaction.nb Mathematica Data File Mathematica
ThDATA.xls Excel Data File
FXDATA.nb Mathematica Data File Mathematica
CPDATA.nb Mathematica Data File Mathematica
Enthalpymcd.pdf Read-Only Document for Mathcad
Enthalpy of Reaction nb.pdf Read-Only Document for Mathematica
©Copyright, Theresa Julia Zielinski, 2004. All rights reserved. You are welcome to use this document in your own classes but commercial use is not allowed without the permission of the author.
|
{"url":"http://www.chemeddl.org/alfresco/service/org/chemeddl/symmath/app?app_id=121&guest=true","timestamp":"2014-04-21T07:18:34Z","content_type":null,"content_length":"9276","record_id":"<urn:uuid:b23d2841-2bec-46d8-ac6a-50bf367c605a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Ammann-Beenker Tiling
There are also tilings with octagonal symmetry that are like the Penrose tiling. One of them is the Ammann-Beenker tiling. This tiling uses two pieces, a square and a diamond. It can be formed either
by recursion or by matching rules.
Two generations of the recurrence relation for the Ammann-Beenker tiling are shown below:
Since the recurrence for the square reduces its symmetry to bilateral symmetry, this recurrence does not, of course, tell the whole story; the orientation of the expanded version of a square would
have to be inferred from the surrounding rhombs.
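One thing that can be computed without drawing anything is how fast the tiling grows under this recursion. A minimal sketch (assuming the usual area-counting convention, in which each inflated square decomposes into 3 squares and 4 rhombs and each inflated rhomb into 2 squares and 3 rhombs) tracks the tile counts through the 2-by-2 substitution matrix, whose dominant eigenvalue is the square of the silver ratio, (1 + √2)² = 3 + 2√2:

    # Sketch: tile counts under repeated Ammann-Beenker substitution,
    # using the area-counting rule  square -> 3 squares + 4 rhombs,
    #                               rhomb  -> 2 squares + 3 rhombs.
    from math import sqrt

    def substitute(squares, rhombs):
        return 3 * squares + 2 * rhombs, 4 * squares + 3 * rhombs

    s, r = 1, 0                        # start from a single square
    for _ in range(8):
        s, r = substitute(s, r)
        print(s, r, r / s)             # ratio of rhombs to squares -> sqrt(2)

    print("limit:", sqrt(2))           # about 1.41421

The counts grow by a factor of 3 + 2√2 each generation, and the ratio of rhombs to squares tends to √2; since a rhomb has √2/2 the area of a square, the two shapes end up covering equal total area.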
And here is a small portion of this tiling:
Note the obvious linear details in this image. From symmetry, it would appear that where long rows of rhombs all point in the same direction, the corresponding Ammann bars must run down the middle
of the rhombs. However, I have seen no reference for the alignment of Ammann bars relative to the other obvious linear feature, in which one sees a line of rhombs whose directions point to one side
of the line or the other, deviating by 22 1/2 degrees, next to a line of squares. Nor do I know where the Ammann bars of the Socolar tiling are located.
As the matching rules for those two pieces are somewhat complicated, the diagram below, which illustrates them, does so by means of adding a third piece, an octagon, which enforces the matching rules
for the corners of the diamonds and squares.
Note that although these component tiles are drawn similarly to those on the previous page, the outer row of octagons on adjacent pieces does not overlap; instead, the squares nestled into the
indentations between adjacent octagons are used to indicate matching rules.
Unfortunately, taking a little bite out of the obtuse corners of the diamond makes it impossible to draw from component pieces. Using conventional diamonds and squares, as shown on the previous page,
to make an Ammann-Beenker tiling is, of course, entirely possible, and so recursion from one into a few Keplerian layers can be done, one just has to observe the matching rules through some other
device than explicit shape fitting. This would lead to results like this:
Note also the symmetrical version of the octagon with an even number of octagons on its sides; this could be used as an alternative in the recursion relations on the previous page.
|
{"url":"http://www.quadibloc.com/math/oct01.htm","timestamp":"2014-04-17T09:58:38Z","content_type":null,"content_length":"3703","record_id":"<urn:uuid:d4703aa4-a03f-4238-85d4-da1f46156be4>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
|