Introduction to the Math of Neural Networks
Book Name: Introduction to the Math of Neural Networks
ISBN: 9781604390339
Author: Jeff Heaton
Pages: 112
Last Update: 2011-10-15 11:58:29
Status: First Draft
View Errata Sheet: [Click Here]
Note: Our PDF books contain no DRM and can be printed, copied to multiple computers owned by you, and once downloaded do not require an internet connection.
Note: This book is currently in a beta state.
You can buy an ebook now, and will receive all upgrades as it completes. However, since it is currently in beta it may be incomplete or unedited. The current beta status of this book is:
Book is complete, but first draft. Corrections and edits will follow.
Purchase From:
Source Price
Buy DRM-Free PDF eBook:
$9.99 (USD)
You will download a regular PDF file directly from Heaton Research. Some books provide multiple file formats. This ebook includes the following file format(s): PDF/MOBI/Amazon/ePUB.
This book introduces the reader to the basic math used for neural network calculation. This book assumes the reader has only knowledge of college algebra and computer programming. This book begins by
showing how to calculate output of a neural network and moves on to more advanced training methods such as backpropagation, resilient propagation and Levenberg Marquardt optimization. The mathematics
needed by these techniques is also introduced.
Mathematical topics covered by this book include first and second derivatives, Hessian matrices, gradient descent and partial derivatives. All mathematical notation introduced is explained. Neural networks covered
include the feedforward neural network and the self-organizing map. This book provides an ideal supplement to our other neural network books. This book is ideal for the reader without a formal mathematical
background who seeks a more mathematical description of neural networks.
The first chapter, “Neural Network Activation”, shows how the output from a neural network is calculated. Before you can see how to train and evaluate a neural network you must understand how a
neural network produces its output.
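The output calculation the first chapter covers can be sketched in a few lines. This is my own minimal illustration, not code from the book; the sigmoid activation is just one common choice of activation function:

```python
import math

def neuron_output(inputs, weights, bias):
    """One neuron: an activation function applied to a weighted sum plus a bias."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid activation

# A feedforward network just chains such neurons layer by layer.
out = neuron_output([1.0, 0.5], [0.2, -0.4], 0.1)  # a value between 0 and 1
```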
The second chapter, named “Error Calculation”, demonstrates how to evaluate the output from a neural network. Neural networks begin with random weights. Training adjusts these weights to produce
meaningful output.
The third chapter, “Understanding Derivatives”, focuses entirely on a very important Calculus topic. Derivatives, and partial derivatives, are used by several neural network training methods. This
chapter will introduce you to those aspects of derivatives that are needed for this book.
Chapter 4, “Training with Backpropagation”, shows you how to apply the knowledge from Chapter 3 to training a neural network. Backpropagation is one of the oldest training techniques for neural
networks. There are newer, and much superior, training methods available.
However, understanding backpropagation provides a very important foundation for RPROP, QPROP and LMA.
Chapter 5, “Faster Training with RPROP”, introduces resilient propagation (RPROP) which builds upon backpropagation to provide much quicker training times.
Chapter 6, “Weight Initialization”, shows how neural networks are given their initial random weights. Some sets of random weights perform better than others. This chapter looks at several
weight-initialization methods that are less than fully random.
Chapter 7, “LMA Training”, introduces the Levenberg Marquardt Algorithm (LMA). LMA is the most mathematically intense training method in this book. LMA sometimes offers very rapid training for a
neural network.
Chapter 8, “Self Organizing Maps”, shows how to create a clustering neural network. The SOM can be used to group data. The structure of the SOM is similar to the feedforward neural networks seen in
this book.
Chapter 9, “Normalization”, shows how numbers are normalized for neural networks. Neural networks typically require that input and output numbers be in the range of 0 to 1, or -1 to 1. This chapter
shows how to transform numbers into that range.
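As a taste of what Chapter 9 covers, range normalization is a one-line linear rescale. This sketch is my own example, not code from the book:

```python
def normalize(x, lo, hi, new_lo=-1.0, new_hi=1.0):
    """Linearly rescale x from [lo, hi] into [new_lo, new_hi]."""
    return new_lo + (x - lo) * (new_hi - new_lo) / (hi - lo)

# Map values measured on a 0..10 scale into the -1..1 range.
mid = normalize(5.0, 0.0, 10.0)            # midpoint maps to 0.0
top = normalize(10.0, 0.0, 10.0, 0.0, 1.0) # top of range maps to 1.0
```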
Inflation, decelerating expansion and accelerating expansion due to negative mass
2012-Feb-12, 02:20 PM #1
I'm sorry; I don't speak English well.
I have run a new computer simulation.
We set up each model from the birth of the universe to the present, and calculated the GPE using computer simulation at each stage.
As a result, we could verify that the “pair creation model of negative mass and positive mass” explains the inflation of the early universe, the decelerating expansion, and the present accelerating expansion,
in time sequence.
This simulation shows remarkable results.
It not only explains the total energy of the universe, flatness, and the essence of the process of the birth of the universe (total zero energy, pair creation of negative energy and positive energy),
but it also explains inflation, decelerating expansion in the early stage, accelerating expansion in the late stage, and dark matter through a single term, negative energy. Moreover, this negative
energy is one that is essentially required by the law of energy conservation.
Please see the links below!
1. Dark energy - Accelerating expansion of distant galaxy due to negative mass
2. Inflation, decelerating expansion and accelerating expansion with pair creation of negative mass and positive mass
3. Paper: The change of Gravitational Potential Energy and Dark Energy in the Zero Energy Universe.
A. Birth of the universe from zero energy state
1) computer simulation
Fig14. m+ = +1 (1,000 each), m- = -1 (1,000 each),
U++ = -5190.4707907,
U-- = -5308.0373689,
U+- = +10499.2712222,
U_tot = +0.7630625
The total rest mass energy is zero. The total gravitational potential energy (U_tot) is +0.7630625.
We could not make the GPE exactly 0, for there were too many particles. Therefore, we partitioned the original set into two parts and simulated both; one part has a potential of +0.763 and the other of -0.533, and both yield similar results.
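The three potential sums quoted above can be reproduced schematically as pairwise sums over all particles. This is my own reconstruction, not the OP's code; G = 1 and the random layout are placeholder assumptions. Note the signs come out as in the figure: ++ and -- pairs give negative GPE, mixed pairs give positive GPE.

```python
import itertools, math, random

G = 1.0

def gpe_terms(positions, masses):
    """Split the total pairwise GPE, U_ij = -G*m_i*m_j/r_ij, into ++, -- and +- parts."""
    upp = unn = upn = 0.0
    for (pi, mi), (pj, mj) in itertools.combinations(zip(positions, masses), 2):
        r = math.dist(pi, pj)
        u = -G * mi * mj / r
        if mi > 0 and mj > 0:
            upp += u        # both positive: attractive pair, negative GPE
        elif mi < 0 and mj < 0:
            unn += u        # both negative: product positive, negative GPE
        else:
            upn += u        # mixed pair: positive GPE
    return upp, unn, upn

random.seed(0)
pos = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(200)]
ms = [+1.0] * 100 + [-1.0] * 100
upp, unn, upn = gpe_terms(pos, ms)
# upp < 0, unn < 0, upn > 0; U_tot = upp + unn + upn
```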
2) Accelerating expansion of the universe (inflation)
It can be confirmed that even though the total energy starts at 0, the universe expands: positive masses combine with one another due to their attractive interaction, while negative
masses cannot form massive structures because of their repulsive interaction.
The pair creation model of negative mass and positive mass naturally explains both energy conservation at the birth of the universe and the expansion after the birth; it does not need
the introduction of a new mechanism or field such as the inflaton, or inflation itself, and it explains this effect with gravity alone.
3) Change of GPE
Figure15-a. The ratio of negative GPE to positive GPE in the early universe. We can confirm that as the universe expands, the (+GPE/-GPE) ratio decreases, and the total GPE approaches 0.
The graph above shows the change of the GPE related with positive mass and of the total GPE.
As we have observed the activity of only positive masses, only the GPE related with positive mass (U++ and U+-) has observable significance.
a) Nevertheless, the value of the total GPE is negative, so the universe should expand, even though the GPE related with positive mass has a very big positive value.
b) Note that even though the total energy is 0, the GPE related with positive mass has a very big positive value, and this value approaches 0 very rapidly. This explains a dramatic expansion like
the early-universe inflation, and the end of this inflation mechanism.
c) The thing we can notice from this and the next simulation is that if time goes on a bit more, the GPE related with positive mass actually becomes negative, initiating a decelerating phase of the universe.
d) In order to explain the flatness of the universe, researchers typically assume the inflation mechanism and explain it that way. But the Zero Energy Universe does not need the introduction of a new field,
for it guarantees flatness itself; additionally, the simulation above means that the accelerating expansion of the early universe can be explained with gravity alone, without introducing a new field.
4) Change of the GPE related with positive mass in three cases
Fig16-a. Total rest mass energy = 0.
Fig16-b. Change of the GPE related with positive mass in three cases.
In all three cases the GPE related with positive mass has a very big positive value, and this value approaches 0 very rapidly.
B. GPE among distant galaxies and accelerating expansion
1) When positive mass is spread through relatively large area
Fig17. Distant galaxy: the structure in which negative mass surrounds a galaxy composed of positive mass.
After the birth of the universe, positive masses bind together by attractive interaction. Meanwhile, negative masses become almost uniformly distributed because of their repulsive interaction.
Negative masses are gravitationally bound to massive positive masses (a galaxy or galaxy cluster), for a massive positive mass has an attractive effect on negative mass.
Figure18-a. The ratio of +GPE to -GPE of a distant galaxy. Figure18-b. Note that the GPE value related with positive mass changes from a positive value to a negative value, and back to a positive value again.
This represents accelerating expansion, decelerating expansion, and accelerating expansion in sequence.
a) The ratio of +GPE to -GPE of a distant galaxy
i) In the early state, the positive GPE is smaller than the negative GPE, and the total GPE is negative; the negative GPE comes from the gravitational binding of the masses.
ii) As time goes by, the binding of positive mass increases due to attractive interaction, and the absolute value of the negative gravitational potential reaches its maximum.
iii) The absolute value of the negative GPE then decreases, because the positive mass is gravitationally bound and the negative mass undergoes gravitational contraction around it, and the total GPE eventually becomes positive.
iv) Eventually the total GPE and the GPE related with positive mass become positive, and accelerating expansion begins again.
v) In the simulation above, we can confirm that +GPE converges to about 200% of the value of -GPE, and since we deduce the components of the universe through the GPE, we can guess that the repulsive dark energy
is about 200% of the attractive mass energy (matter + dark matter, as a general deduction).
b) The total GPE and the GPE related with positive mass on distant masses
i) In the early universe, the GPE related with positive mass had a very big positive value, but this value gets smaller as positive masses bind together and form galaxy structures. In the simulation
above, it still has a positive value, and so the universe is in a state of accelerating expansion.
ii) We can notice that both values then become negative, leading to a phase of decelerating expansion.
iii) The GPE related with positive mass is converted back to a positive value as negative mass undergoes gravitational contraction around a massive positive mass (a galaxy or galaxy cluster). Therefore, the
universe enters an era of accelerating expansion again.
iv) The decelerating expansion and accelerating expansion are naturally explained through the “pair creation model of negative mass and positive mass”, and the conversion from accelerating expansion
to decelerating expansion and from decelerating expansion to accelerating expansion is explained in sequence.
v) The conversion from negative value to positive value should be smoother than in the graph above, for there exist thousands of billions of galaxies in our universe.
Figure19-a.The change of distance and relative speed among distant galaxies. Figure19-b. GPE related with positive mass and negative mass.
c) The change of distance and relative speed among distant galaxies
Massive positive masses are born from the 8th stage onward due to gravitational contraction. We calculated the distance between the two massive positive masses (corresponding to a galaxy or galaxy
cluster) and their relative speeds from then on.
We can notice that there exists a positive acceleration, and it corresponds to accelerating expansion.
d) GPE related with positive mass and negative mass
Positive mass and negative mass have different GPE values from each other; therefore their motions are different from each other.
Last edited by pzkpfw; 2012-Feb-13 at 08:07 PM. Reason: Unembed video
C. Change of GPE among close galaxies
Refer to the paper.
D. Gravitational contraction due to positive mass and negative mass
1) When positive mass does gravitational contraction
The structure in which negative mass surrounds a galaxy composed of positive mass.
negative mass distribution: center1(-1000,0,0), center2(+1000,0,0), within R=220~250.
positive mass distribution : center1(-1000,0,0), center2(+1000,0,0), within
a)R0-R200, b)R0-R150, c)R0-R100, d)R0-R50
It is shown that as the positive mass undergoes gravitational contraction, the GPE related with positive mass decreases.
This means that our universe is converted from accelerating expansion (inflation) in the early universe to decelerating expansion.
2) When negative mass does gravitational contraction
positive mass distribution: center1(-1000,0,0), center2(+1000,0,0).
negative mass distribution : center1(-1000,0,0), center2(+1000,0,0), within
a)R50-R250, b)R50-R200, c)R50-R150, d)R50-R100
It is shown that as the negative mass undergoes gravitational contraction around a massive positive mass, the GPE related with positive mass increases.
This means that our universe is converted from decelerating expansion to accelerating expansion (the dark energy effect).
E. Six distant galaxies
fig26. Six distant galaxies.
Each +100 at (±1000,0,0),(0,±1000,0),(0,0,±1000).
center(±1000,0,0), center(0,±1000,0), center(0,0,±1000) negative mass is spread within
1) The ratio of +GPE to –GPE and GPE related with positive mass on six galaxies
2) The change of distance and relative speed among six galaxies
We can notice that there exists a positive acceleration, and it corresponds to accelerating expansion.
F. The change of GPE in the whole time of the universe
fig29. The change of
2) The change of
a) The GPE approaches 0 at last as the universe gets larger, for it is proportional to 1/r.
c) It seems that
Although the total GPE starts at 0 or a positive value in the early stage, it changes to a negative value as time goes by and positive masses form galaxies by binding together; then, as negative mass
undergoes gravitational contraction, it is converted back to a positive value.
This provides a natural explanation of the accelerating expansion of the early universe, the decelerating expansion in the first half, and the accelerating expansion in the second half.
fig30.The change of
The reason dark energy seems to be constant is that our universe has been passing through this section (slope) during the last 5~7 Gyr. Refer to Figures 15-b,c, 16-a,b, 18-b, 21-b, 23-c, 27-b.
3) The change of
In the early universe, even if the total GPE is 0, the universe can expand with acceleration.
The typical matter we observe in the universe is positive mass, and this is because there are two GPE categories related with positive masses.
More information ( Paper ) :
The change of Gravitational Potential Energy and Dark Energy in the Zero Energy Universe.
In my paper on the Refractive Field Theory I discuss the possibility that an original distribution with a small variance about 0 everywhere would lead to the positive mass-energy regions
shrinking while the negative mass-energy regions expand. The negative mass-energy is exactly the gravitational field of the positive mass-energy regions.
Do you have a published paper on this that I can cite in this section when I respond to the reviewers' inevitable comments on my paper?
I have also seen that you have posted on this topic before. If this thread is closed, I will welcome your thoughts along these lines as they apply to the theory I am currently presenting.
ATM is not a collaborative effort. It is for one person to present their idea, and others to ask questions about it. You can not use someone else's thread to solicit information for your idea or
to present your idea. If you want to chat with icarus2, do it by PM.
I am fascinated by this research showing that assuming a negative energy density equal and opposite to the known positive energy density can account for a qualitative explanation of the expansion
profiles of our current cosmological models. It would appear that your work is a very good first step, but I think the issue of temperature needs to be addressed.
The existing cosmological models are driven largely by the needs to produce the distribution of atomic nuclei we observe.
Is there a quantitative difference between positive and negative energy separating, and positive and negative energy being generated in equal and opposite quantities? It would appear to me that
this is not the case, regardless of the nature of the negative energy. I would suggest further that nullification of positive and negative mass would decrease the entropy of a system, so the
reverse process would be tightly constrained.
This then suggests that the temperature relations of this model would be significantly lower than the standard cosmologies at the earlier time frames. Can this be shown to not be the case, or
what mechanism alters the nucleon synthesis relative to the standard cosmology to produce the ratios we see today?
Positive and negative mass being attracted to positive mass while negative mass repels negative mass is asymmetrical. How would that work in a presumed symmetrical universe where positive mass
equals negative mass and so forth? If the total mass of the universe is zero, then does positive and negative mass cancel when they come in contact like anti-matter? Let's say that an equal
amount of positive and negative mass is close to each other at some point in space, +m1 and -m1, while positive mass +m2 is nearby. If close enough, the fields of +m1 and -m1 should largely
counteract each other, like a point in empty space with overall zero mass/energy, shouldn't they? If so, then positive mass +m2 would not be attracted or repelled by the positive and negative
masses +m1 and -m1 at that point, they would be neutral to +m2, while we know that positive mass is attracted to positive mass, so +m1 and +m2 attract each other, therefore the positive and
negative masses +m2 and -m1 must repel in order for positive mass +m2 to be neutral to the point where both positive and negative masses +m1 and -m1 are. Hopefully you followed that.
I see nothing in the OP that did not follow the mainstream treatment of negative mass, as shown to be consistent in the context of Newtonian and relativistic equations by Forward and Bondi. This
REQUIRES the equivalence principle of sign for inertial mass and both passive and active gravitational mass.
This complicates things because force and momentum are opposed to acceleration and velocity for negative mass, complicating the meaning of attracts and repels, and making your question highly
ambiguous. Try wording your question using push and pull for force/momentum and attract/repel for acceleration/velocity. This will be particularly helpful for the English-as-a-second-language author.
If the masses have the same sign we have a pull. If they are different we have a push. Positive mass attracts both types of mass. Negative mass repulses both.
An equal magnitude positive and negative mass next to each other accelerate to infinity in the direction of the positive mass, but their net zero mass make the total momentum and energy 0 no
matter how fast the system goes.
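The self-accelerating pair described here can be checked with a few lines of Newtonian integration. This is my own sketch, not code from any paper in this thread; G = 1 and the simple Euler step are placeholder choices. The key point is that the inertial mass of body i cancels out of its own acceleration, so only the sign of the *source* mass matters:

```python
# a_i = sum_j G * m_j * (x_j - x_i) / |x_j - x_i|**3  (1-D)
# Positive source masses attract everything; negative ones repel everything.
G = 1.0

def accelerations(xs, ms):
    """1-D accelerations; each body feels only the other bodies."""
    accs = []
    for i, xi in enumerate(xs):
        a = 0.0
        for j, (xj, mj) in enumerate(zip(xs, ms)):
            if i == j:
                continue
            dx = xj - xi
            a += G * mj * dx / abs(dx) ** 3
        accs.append(a)
    return accs

# +1 mass at x=1.0, -1 mass at x=0.0: the pair chases itself in +x.
xs, vs, ms = [1.0, 0.0], [0.0, 0.0], [+1.0, -1.0]
dt = 0.01
for _ in range(1000):
    accs = accelerations(xs, ms)
    vs = [v + a * dt for v, a in zip(vs, accs)]
    xs = [x + v * dt for x, v in zip(xs, vs)]

# Both bodies end up moving in the same direction (+x) with growing speed
# and a roughly constant gap, while the total momentum m1*v1 + m2*v2 stays
# near 0 because the masses cancel.
```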
If a model has anything else, it can not rely on the work of Forward and Bondi for consistency. If I missed something, wording your question as indicated above will make it more clear to the
author and myself what you are questioning about the model presented here.
I'm sorry; I don't speak English well. My native language is not English.
So my expression is very limited.
With negative mass, we must pay attention to the fact that the direction of the force can be different from the direction of motion.
1) Positive mass & positive mass
+m1 ------ +m2
Fig01. Positive mass +m1 and positive mass +m2 (initial velocity =0, m1 >0, m2 >0)
Positive mass and positive mass: The force acting between the positive masses is attractive, and the two objects move toward the center of mass. Because the force is attractive, their potential energy has a
negative value. The direction of acceleration is along -r, so the distance between the two objects is reduced gradually.
Force is attraction, and motion is attractive.
2) Negative mass & negative mass
- m1 ------ - m2
fig02. Negative mass -m1 and negative mass -m2 (initial velocity = 0, m1 > 0, m2 > 0)
Negative mass and negative mass: Both objects are accelerated in the direction of +r, which extends the distance r, so as time passes the distance between them becomes greater than the initially given
condition. The force between them is attraction, but the effect is repulsive.
If negative mass and positive mass were born together at the beginning of the universe, positive masses have an attractive effect on each other, so they form the star and galaxy structures we see now, but negative
masses have a repulsive effect on each other, so they cannot form massive structures like stars or galaxies.
Force is attraction, but motion is repulsive.
3) Positive mass & negative mass
-m1 ------- +m2
fig03. Negative mass -m1 and positive mass +m2 (initial velocity = 0, m1 > 0, m2 > 0)
Negative mass and positive mass: The negative mass is accelerated in the direction of the positive mass, and the positive mass is accelerated in the direction away from the negative mass.
The direction of the acceleration a1 acting on the negative mass -m1 is -r, so -m1 moves in the direction of reducing the distance r; the direction of the acceleration a2 acting on the positive mass +m2 is
+r, so +m2 is accelerated in the direction in which the distance r increases, namely away from the negative mass.
If the absolute value of the positive mass is bigger than that of the negative mass, they will meet within a finite time (attractive effect); if the absolute value of the positive mass is smaller than that
of the negative mass, the distance between them will grow, and they cannot meet (repulsive effect). The type of force is repulsion, so the potential energy has a positive value.
==> Uniformly distributed negative mass receives an attractive effect from massive positive masses (galaxies and galaxy clusters), so dark matter which has negative mass is clustered around galaxies because
of the attraction of the galaxy.
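The three cases above can be condensed into two sign rules. This is my own sketch, not the OP's code, assuming plain Newtonian gravity with signed masses: the force follows the sign of m1*m2, while the separation r obeys r'' = -G*(m1+m2)/r**2, so the actual motion follows the sign of m1+m2 (using utesfan100's pull/push vs. approach/recede wording):

```python
def pair_behavior(m1, m2):
    """Two-body rules for signed masses.

    Force: the mutual force is a pull when m1*m2 > 0, a push when m1*m2 < 0.
    Motion: the separation r obeys r'' = -G*(m1 + m2)/r**2, so the gap
    shrinks when m1 + m2 > 0 and grows when m1 + m2 < 0.
    """
    force = "pull" if m1 * m2 > 0 else "push"
    s = m1 + m2
    motion = "approach" if s > 0 else ("recede" if s < 0 else "constant gap")
    return force, motion

case1 = pair_behavior(+1, +1)   # two positive masses: pull, approach
case2 = pair_behavior(-1, -1)   # two negative masses: force pulls, gap grows
case3a = pair_behavior(+2, -1)  # |positive| larger: push, yet they meet
case3b = pair_behavior(+1, -2)  # |negative| larger: push, they never meet
```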
In my negative mass model (and Bondi's and Forward's):
Inertial mass < 0
(Active and passive) gravitational mass < 0
In this model, the principle of equivalence is valid.
In your guess,
[positive masses attract positive masses and negative masses attract negative masses while positive and negative masses repel]
this guess corresponds to the model below:
Inertial mass > 0
(Active and passive) gravitational mass < 0
In this model, the principle of equivalence is not valid. In pair creation of positive mass and negative mass, energy conservation is not valid.
For the motion of negative mass, please refer to the simulation video below.
--- Icarus2
Last edited by pzkpfw; 2012-Feb-13 at 08:09 PM. Reason: Unembed video
Thanks, icarus2 and utesfan100. Okay, so the force between two like masses is pull and between two unlike masses is push, but when the mass is divided out to find the acceleration involved, we
get negative mass repelling both negative and positive, and positive mass attracting both negative and positive. Interesting. This actually keeps the symmetry because negative mass and positive
mass don't really attract in that case, which was the original impression I got from the OP, rather the negative mass is attracted to the positive mass and the positive mass is repelled by the
negative mass. The Wiki link says that this also keeps the equivalence principle of GR intact. Looking at it that way, positive mass has a positive curvature, the geometry is curved inward,
attracting all negative and positive mass equally, while negative mass has a negative curvature, the geometry is curved outward, repelling all negative and positive mass. So an equal amount of
negative mass and positive mass that are close together at some point in space will curve the geometry inward and outward equally, behaving the same as flat empty space to all other mass. Cool.
I want to help you express your ideas better in English. The biggest issue is that you use attraction and repulsion for both force and acceleration. These words suggest motion to a native English
speaker, but for positive mass this makes no difference for a force. We thus use these interchangeably with pull and push, which suggest force to a native English speaker.
If negative mass is allowed, this equivalence of direction is broken. We should then use different words to express whether we are talking about force or acceleration.
Attraction and repulsion should be used only to refer to acceleration or velocity.
Pull and push should be used only to refer to a force or momentum.
I think taking care to use this notation in the future will significantly clarify the English presentation of your ideas when negative mass is considered.
Thanks utesfan100.
In my opinion,
Nullification (annihilation) of positive and negative mass would be tightly constrained (energy conservation, momentum conservation, etc.). And nullification (annihilation) of positive and
negative mass would decrease the entropy of a system. But the law of entropy is not a constraint condition, in my opinion.
If the energy vanishes, the entropy also vanishes.
This then suggests that the temperature relations of this model would be significantly lower than the standard cosmologies at the earlier time frames. Can this be shown to not be the case, or
what mechanism alters the nucleon synthesis relative to the standard cosmology to produce the ratios we see today?
(Temperature or heat) and (kinetic energy and potential energy) are related.
The kinetic energy of all particles can have a non-zero value in the initial state.
Even if the kinetic energy of all particles is zero in the initial state, a very big GPE exists in the early universe.
Density of GPE = GPE / V
Present universe's radius ~ 10^26 m ~ 10^27 m
Planck radius ~ 10^-35 m
In the early universe, the GPE density can be (10^240) x (the present GPE density).
Maybe the temperature will suffice.
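The quoted enhancement factor can be checked with one line of arithmetic. This is my own back-of-envelope check, assuming GPE ~ 1/r and V ~ r^3, so the GPE density scales as 1/r^4; with the quoted radii the exponent actually comes out at 244 or more, so the OP's 10^240 corresponds to a present radius nearer 10^25 m, i.e. the same rough order of magnitude:

```python
from math import log10

r_now = 1e26     # quoted present radius, metres (OP gives 10^26..10^27)
r_early = 1e-35  # Planck-scale radius, metres

# GPE density ~ 1/r^4, so the early/present density ratio is (r_now/r_early)^4.
exponent = 4 * (log10(r_now) - log10(r_early))
# exponent ~ 244: early GPE density ~ 10^244 x today's for r_now = 10^26 m.
```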
You have an interesting idea. But, as you say, you do not speak English well. Could you please find someone who is a native speaker of English and is fluent in your language to restate your ATM idea,
and present it here, without the links to other sites?
Good luck, John M.
Last edited by John Mendenhall; 2012-Feb-14 at 04:55 PM. Reason: typo
I do not speak icarus2's native language, but I have read through his viXra paper and have considered theories along these lines, in order to understand what is being communicated. I also have a vested
interest in seeing this cosmology vetted, as it relates strongly to my own ideas.
This does present a slight conflict of interest, which should warrant requesting the author to confirm the accuracy of this post before it is taken as an accurate representation of this thread.
Inflation, decelerating expansion and accelerating expansion due to negative mass
I apologize for my poor English.
I have new computer results to provide significant advances to the theory I presented at http://www.bautforum.com/showthread.php/105870 to warrant the reconsideration of these ideas.
These models track the gravitational potential energy (GPE) over the life of the universe. In particular, these computer models show that this model predicts an expansion profile of the universe
similar to what is required by modern cosmologies.
The negative energy, required by the conservation of energy, is shown to account for the flatness of the universe[isotropy?], an early expansion, a deceleration phase and the current
acceleration, which also accounts for dark matter.
[Video links, and a link to a fuller description provided in OP]
1) Computer simulation of 1000 positive and 1000 negative unit masses [Potential values omitted]
U++: The potential considering only positive masses
U--: The potential considering only negative masses
U+-: The potential considering only mixed masses
U_tot: U++ + U-- + U+-
[For simplicity the translator will also use (as clarified by post #14):
+GPE=U+-, the positive energy density component of the GPE.
-GPE=U++ + U--, the negative energy density component of the GPE.
+U=U++ + U+-, the GPE experienced by a positive mass.
-U=U-- + U+-, the GPE experienced by a negative mass.]
The total mass is 0. The total GPE=+0.763.
With this many particles it is difficult to make the total potential 0. Thus we partitioned the original set into two parts, the one shown with a potential of +0.763 and another with a potential
of -0.533. Both yield similar results.
2) Accelerating Universe
This model shows that, starting from a total energy of 0, a concentrated distribution of positive and negative energy will expand, with the positive mass concentrating due to its attractive
gravity and negative energy expanding uniformly from its repulsive gravity.
Thus cosmic inflation, and the separation of masses into gravitationally interacting units, are predicted using only gravity and the conservation of energy from a 0 energy initial condition.
3) Change of GPE
[Graphs omitted showing +GPE/-GPE decreases with time, Utot approaches 0 from below and that +GPE vanishes with time.]
Since our observations have been limited to positive masses, only U++ and U+- have an observable significance.
a) Utot is negative, so the universe should expand, even though +U has a large positive value.
b) Utot remains small as +U vanishes, ending the initial period of rapid expansion.
c) Running this further shows that +U actually becomes negative, initiating a deceleration phase of the universe.
d) This eliminates the need for an additional rapid expansion mechanism, explaining the flatness of the early universe, using only gravity.
4) [Graphs provided showing that three different +U initial conditions yield the same asymptotic behavior]
B) GPE on distant galaxies [gravitationally interacting units?] and accelerating expansion
[Link provided to video of simulation]
Initially, positive mass will collect due to their mutual gravitational attraction, while negative mass will disperse due to their mutual gravitational repulsion. Negative mass will still be
attracted to positive mass and could form a region of negative mass around large objects, similar in magnitude to the object itself.
[Figures showing potential profile for model, noting the same early expansion->deceleration->expansion profile]
a) Ratio of +GPE to -GPE
i)+GPE starts smaller than -GPE, and Utot is negative. -GPE is negative due to the gravitational binding of the masses.
ii) As time goes by this gravitational binding stabilizes, and -GPE reaches a minimum.
iii) The magnitude of -GPE decreases, as the negative mass is also bound tighter to the central positive mass. This results in Utot eventually becoming positive.
iv) Eventually Utot and +U become positive and accelerating expansion begins again.
v) +GPE appears to converge to 200% of -GPE, allowing us to calculate -GPE and Utot from the known observations of U++ and U+-. This +GPE is observed as the repulsive dark energy affecting positive matter.
b) Utot and +U on distant masses.
i) The universe started with a very large +U, but this value is reduced as the positive masses become gravitationally bound. The simulation above started in this early phase of the universe.
ii) We note that both values become negative, leading to a deceleration phase of expansion.
iii) Negative mass then concentrates around the centers of positive mass, increasing +U and leading to another period of expansion.
iv) Again we see the initial acceleration, followed by deceleration and then renewed acceleration: the expansion history of the universe that our standard cosmologies require.
v) In the real universe the graphs presented here should be smoother, since it contains far more large masses than the limited number used in the model.
c) Change in velocity and distance of galaxies with time
[included pictures of distance, velocity and also +U and -U vs. time]
We begin the distance and velocity profiles at the 8th time step, to allow the initial gravitational contraction to complete.
We note that there is a positive acceleration, corresponding to an accelerating expansion.
d) Comparison of the +U and -U
+U and -U are different, so their motions are also different.
D. Gravitational contraction due to positive and negative masses.
1) When positive masses contract, negative mass will form a structure around the center.
[image of GPE components with time, showing decrease, except near constant U+- and U--]
Utot is shown to become increasingly negative, along with U++ and +U.
This transitions the universe from expansion to contraction in the early universe.
2) When negative mass contracts around positive centers of mass.
[Graph showing divergence of all GPE components]
This shows that Utot and +U increase as negative mass concentrates around positive mass, increasing the total effect on positive mass.
This transitions our universe from deceleration back to acceleration.
E) 6 distant galaxies[gravitationally interacting units?]
[video embedded, along with graphs GPE components with time]
1) The ratio of +GPE, -GPE and +U for six positive mass galaxies.
Utot and +U are positive, so the expansion of the universe is accelerating.
[Image of distance and velocity with time]
This is visible from the plot of distance and velocity with time.
F. The change of GPE over the lifetime of the universe.
[graph given, for three conditions outlined above.]
2) Utot of universe
a) GPE approaches 0 as 1/r at the final phase of the universe.
b) The shape of Utot is not expected to depend on its initial value.
c) It is most likely that Utot is non-negative. An initially homogeneous solution would have a positive value, and is shown in red.
Although Utot starts non-negative, the gravitational binding of positive masses drives this value negative. The concentration of negative mass around the positive mass centers then drives this
value positive again.
This provides a natural explanation of the rapid expansion of the early universe, the decelerating expansion in its first half, and the accelerating expansion in its second half.
[graph showing life of universe, including period where dark energy would appear constant]
Dark energy appears constant because we are currently in this region of the graph.
3) The change of Utot and +U according to time.
In the early universe, even if Utot is 0, the universe can have an accelerating early expansion. The typical matter we observe is positive mass, [unclear] and the observed acceleration is from
the two GPE components that impact positive masses.[??]
Although Utot and +U can have large differences in the early universe, as time goes by they converge.
[Link to a more detailed, but still broken-English, viXra article provided]
Last edited by utesfan100; 2012-Feb-15 at 05:15 PM. Reason: To fix an error in translation of +GPE and -GPE (introducing +U and -U for clarity), as explained in post #14
I really appreciate utesfan100
I really appreciate utesfan100. And I apologize for my poor English.
If negative mass and positive mass coexist, the gravitational potential energy consists of the three items below.
The GPE between positive masses has a negative value.
The GPE between negative masses has a negative value.
The GPE between a positive mass and a negative mass has a positive value.
(GPE = gravitational potential energy.)
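The three sign rules follow directly from U = -G m1 m2 / r. A quick check in Python, with unit masses and unit separation chosen purely for illustration:

```python
G = 1.0  # illustrative units

def gpe(m1, m2, r):
    """Newtonian gravitational potential energy of one pair of masses."""
    return -G * m1 * m2 / r

print(gpe(+1, +1, 1.0))  # between positive masses: -1.0 (negative)
print(gpe(-1, -1, 1.0))  # between negative masses: -1.0 (negative)
print(gpe(+1, -1, 1.0))  # mixed pair:              +1.0 (positive)
```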
utesfan100's explanation
==> my explanation
================================================== =========================
+GPE=U++ + U+-, for GPE on the positive masses
-GPE=U-- + U+-, for GPE on the negative masses
+GPE = positive gravitational potential energy = U-+
-GPE = negative gravitational potential energy = U++ + U--
Utot = Total gravitational potential energy = (+GPE)+(-GPE)=(U-+) + (U++) + (U--)
A new concept is introduced:
"GPE related with positive mass" = U++ + U-+
Since we have observed the activities of only positive masses, "GPE related with positive mass" is the quantity that matters. Both of its terms (U++ and U-+) contain an m+.
U++ can only take negative values (-Gmm/r), yet the universe's expansion is accelerating.
Therefore the new concept, "GPE related with positive mass", is needed.
Since our observations have been limited to positive masses, only +GPE has an observable significance.
Since our observations have been limited to positive masses, only "GPE related with positive mass" has an observable significance.
a) Utot is negative, so the universe should expand, even though +GPE has a large positive value.
a) Even though the value of Utot changes from 0 to a negative value, the universe expands because "GPE related with positive mass" has a positive value.
b) Utot remains small as +GPE vanishes, ending the initial period of rapid expansion.
b) Note that even though the total energy is 0, "GPE related with positive mass" has a very large positive value, and this value approaches 0 very rapidly. This explains a dramatic expansion like the early-universe inflation, and the end of that inflation mechanism.
c) Running this further shows that +GPE actually becomes negative, initiating a deceleration phase of the universe.
Running this further shows that "GPE related with positive mass(U++ + U-+)" actually becomes negative, initiating a deceleration phase of the universe.
flatness :
[Figures showing potential profile for model, noting the same early expansion->deceleration->expansion profile]
Figures showing potential profile for model, noting the same early accelerating expansion-> decelerating expansion-> accelerating expansion profile
i)+GPE starts smaller than -GPE, and Utot is negative. -GPE is negative due to the gravitational binding of the masses.
i)+GPE(U-+) starts smaller than -GPE(U++ + U--), and Utot is negative. "Utot is negative" due to the gravitational binding of the masses.
ii) As time goes by this gravitational binding stabilizes, and -GPE reaches a minimum.
ii) As time goes by this gravitational binding stabilizes, and Utot reaches a minimum.
iii) The magnitude of -GPE decreases, as the negative mass is also bound tighter to the central positive mass. This results in Utot eventually becoming positive.
iii) The magnitude of Utot decreases, as the negative mass is also bound tighter to the central positive mass. This results in Utot eventually becoming positive.
iv) Eventually +GPE becomes positive and accelerating expansion begins again.
iv) Eventually both Utot and "GPE related with positive mass" become positive and accelerating expansion begins again.
"GPE related with positive mass" = U++ + U-+
v) +GPE appears to converge to 200% of -GPE, allowing us to calculate -GPE and Utot from the known observations of U++ and U+-.
v) +GPE(U-+) appears to converge to 200% of -GPE(U++ + U--).
i) The universe started with a very large +GPE, but this value is reduced as the positive masses become gravitationally bound.
i) The universe started with a very large positive "GPE related with positive mass", but this value is reduced as the positive masses become gravitationally bound.
ii) We note that both values become negative, leading to a deceleration phase of expansion.
ii) We note that both("Utot" and "GPE related with positive mass") values become negative, leading to a deceleration phase of expansion.
iii) Negative mass then concentrates around the centers of positive mass, increasing +GPE and leading to another period of expansion.
iii) Negative mass then concentrates around the centers of positive mass, increasing "GPE related with positive mass(U++ +U-+)" and leading to another period of accelerating expansion.
+GPE and -GPE are different, so their motions are also different.
"GPE related with positive mass" and "GPE related with negative mass" are different, so their(positive mass and negative mass) motions are also different.
(U++ +U-+) and (U-- + U-+) are different, so their(positive mass and negative mass) motions are also different.
Utot is shown to become increasingly negative with U++ and +GPE, while U+- remains constant.
"Utot" and "GPE related with positive mass" are shown to become increasingly negative, while U+- remains constant.
This transitions the universe from expansion to contraction in the early universe.
This transitions the universe from accelerating expansion to decelerating expansion in the early universe.
This shows that Utot and +GPE increase as negative mass concentrates around positive mass, increasing the total effect on positive mass.
This shows that "Utot" and "GPE related with positive mass" increase as negative mass concentrates around positive mass, increasing the total effect on positive mass.
"Utot" and "GPE related with positive mass" are positive, so the expansion of the universe is accelerating.
In the early universe, even if Utot is 0, the universe can have an accelerating early expansion. The typical matter we observe is positive mass, [unclear] and the observed acceleration is from
the two GPE components that impact positive masses.[??]
In the early universe, even if Utot is 0, the universe can have an accelerating early expansion, because U-+ exists.
In the early universe, even if Utot is 0, the universe can have an accelerating early expansion, because "GPE related with positive mass" has a positive value.
Although Utot and +GPE can have large differences in the early universe, as time goes by they converge.
Although "Utot" and "GPE related with positive mass" can have large differences in the early universe, as time goes by they converge.
I apologize for my poor English. And, I really appreciate utesfan100.
--- Icarus2
Inflation, decelerating expansion and accelerating expansion due to negative mass
I apologize for my poor English.
I have new computer results to provide significant advances to the theory I presented at http://www.bautforum.com/showthread.php/105870 to warrant the reconsideration of these ideas.
These models track the gravitational potential energy (GPE) over the life of the universe. In particular, the computer models show that this theory predicts an expansion profile of the universe similar to what modern cosmologies require.
The negative energy, required by the conservation of energy, is shown to account for the flatness of the universe, an early expansion, a deceleration phase and the current acceleration, which
also accounts for dark matter.
[Video links, and a link to a fuller description provided in OP]
1) Computer simulation of 1000 positive and 1000 negative unit masses [Potential values omitted]
U++: The potential considering only positive masses
U--: The potential considering only negative masses
U+-: The potential considering only mixed masses
U_tot: U++ + U-- + U+-
[For simplicity the translator will also use:
+GPE = positive gravitational potential energy = U-+
-GPE = negative gravitational potential energy = U++ + U--
Utot = Total gravitational potential energy = (+GPE)+(-GPE)=(U-+) + (U++) + (U--)
A new concept is introduced:
"GPE related with positive mass" = U++ + U-+
Since we have observed the activities of only positive masses, "GPE related with positive mass" is the quantity that matters. Both of its terms (U++ and U-+) contain an m+.
U++ can only take negative values (-Gmm/r), yet the universe's expansion is accelerating.
Therefore the new concept, "GPE related with positive mass", is needed.
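Sorting the pairwise energies by the sign pattern of each pair gives U++, U--, U+- and their total. A toy sketch with a handful of random particles; the seed, counts, and positions are illustrative stand-ins, not the posts' actual 1000 + 1000 configuration:

```python
import itertools
import random

G = 1.0
random.seed(0)

def make(n, m):
    """n particles of mass m at random positions in the unit cube."""
    return [(m, (random.uniform(-1, 1),
                 random.uniform(-1, 1),
                 random.uniform(-1, 1))) for _ in range(n)]

particles = make(20, +1.0) + make(20, -1.0)

def pair_gpe(a, b):
    (m1, r1), (m2, r2) = a, b
    r = sum((x - y) ** 2 for x, y in zip(r1, r2)) ** 0.5
    return -G * m1 * m2 / r

# Accumulate each component over all unordered pairs.
U = {"++": 0.0, "--": 0.0, "+-": 0.0}
for a, b in itertools.combinations(particles, 2):
    key = "++" if a[0] > 0 and b[0] > 0 else \
          "--" if a[0] < 0 and b[0] < 0 else "+-"
    U[key] += pair_gpe(a, b)

U_tot = sum(U.values())
print(U, U_tot)  # U++ and U-- come out negative, U+- positive
```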
The total mass is 0. The total GPE=+0.763.
With this many particles it is difficult to make the total potential 0. Thus we partitioned the original set into two parts, the one shown with a potential of +0.763 and another with a potential
of -0.533. Both yield similar results.
2) Accelerating Universe
This model shows that, starting from a total energy of 0, a concentrated distribution of positive and negative energy will expand, with the positive mass concentrating due to its attractive
gravity and negative energy expanding uniformly from its repulsive gravity.
Thus cosmic inflation, and the separation of masses into gravitationally interacting units, are predicted using only gravity and the conservation of energy from a 0 energy initial condition.
3) Change of GPE
[Graphs omitted showing +GPE/-GPE decreases with time, Utot approaches 0 from below and that +GPE vanishes with time.]
Since our observations have been limited to positive masses, only "GPE related with positive mass" has an observable significance.
a) Even though the value of Utot changes from 0 to a negative value, the universe expands because "GPE related with positive mass" has a positive value.
b) Note that even though the total energy is 0, "GPE related with positive mass" has a very large positive value, and this value approaches 0 very rapidly. This explains a dramatic expansion like the early-universe inflation, and the end of that inflation mechanism.
c) Running this further shows that "GPE related with positive mass(U++ + U-+)" actually becomes negative, initiating a deceleration phase of the universe.
d) This eliminates the need for an additional rapid expansion mechanism, explaining the flatness of the early universe, using only gravity.
flatness :
4) [Graphs provided showing that three different +GPE initial conditions yield the same asymptotic behavior]
B) GPE on distant galaxies [gravitationally interacting units?] and accelerating expansion
[Link provided to video of simulation]
Initially, positive masses will collect due to their mutual gravitational attraction, while negative masses will disperse due to their mutual gravitational repulsion. Negative mass will still be attracted to positive mass and could form a region of negative mass around large objects, similar in magnitude to the object itself.
Figures showing potential profile for model, noting the same early accelerating expansion-> decelerating expansion-> accelerating expansion profile
a) Ratio of +GPE to -GPE
i)+GPE(U-+) starts smaller than -GPE(U++ + U--), and Utot is negative. "Utot is negative" due to the gravitational binding of the masses.
ii) As time goes by this gravitational binding stabilizes, and Utot reaches a minimum.
iii) The magnitude of Utot decreases, as the negative mass is also bound tighter to the central positive mass. This results in Utot eventually becoming positive.
iv) Eventually both Utot and "GPE related with positive mass" become positive and accelerating expansion begins again.
"GPE related with positive mass" = U++ + U-+
v) +GPE(U-+) appears to converge to 200% of -GPE(U++ + U--).
b) Utot and +GPE on distant masses.
i) The universe started with a very large positive "GPE related with positive mass", but this value is reduced as the positive masses become gravitationally bound. The simulation above started in
this early phase of the universe.
ii) We note that both("Utot" and "GPE related with positive mass") values become negative, leading to a deceleration phase of expansion.
iii) Negative mass then concentrates around the centers of positive mass, increasing "GPE related with positive mass(U++ +U-+)" and leading to another period of accelerating expansion.
iv) Again we see the initial acceleration, followed by deceleration and then renewed acceleration: the expansion history of the universe that our standard cosmologies require.
v) In the real universe the graphs presented here should be smoother, since it contains far more large masses than the limited number used in the model.
c) Change in velocity and distance of galaxies with time
[included pictures of distance, velocity and also +GPE and -GPE vs. time]
We begin the distance and velocity profiles at the 8th time step, to allow the initial gravitational contraction to complete.
We note that there is a positive acceleration, corresponding to an accelerating expansion.
d) Comparison of the GPE of positive and negative masses
"GPE related with positive mass" and "GPE related with negative mass" are different, so their(positive mass and negative mass) motions are also different.
(U++ +U-+) and (U-- + U-+) are different, so their(positive mass and negative mass) motions are also different.
D. Gravitational contraction due to positive and negative masses.
1) When positive masses contract, negative mass will form a structure around the center.
[image of GPE components with time, showing decrease, except near constant U+- and U--]
"Utot" and "GPE related with positive mass" are shown to become increasing negative, while U+- remains constant.
This transitions the universe from accelerating expansion to decelerating expansion in the early universe.
2) When negative mass contracts around positive centers of mass.
[Graph showing divergence of all GPE components]
This shows that "Utot" and "GPE related with positive mass" increase as negative mass concentrates around positive mass, increasing the total effect on positive mass.
"Utot" and "GPE related with positive mass" are positive, so the expansion of the universe is accelerating.
This transitions our universe from deceleration back to acceleration.
E) 6 distant galaxies[gravitationally interacting units?]
[video embedded, along with graphs GPE components with time]
1) The ratio of +GPE, -GPE and Utot for six positive mass galaxies.
Utot and +GPE are positive, so the expansion of the universe is accelerating.
[Image of distance and velocity with time]
This is visible from the plot of distance and velocity with time.
F. The change of GPE over the lifetime of the universe.
[graph given, for three conditions outlined above.]
2) Utot of universe
a) GPE approaches 0 as 1/r at the final phase of the universe.
b) The shape of Utot is not expected to depend on its initial value.
c) It is most likely that Utot is non-negative. An initially homogeneous solution would have a positive value, and is shown in red.
Although Utot starts non-negative, the gravitational binding of positive masses drives this value negative. The concentration of negative mass around the positive mass centers then drives this
value positive again.
This provides a natural explanation of the rapid expansion of the early universe, the decelerating expansion in its first half, and the accelerating expansion in its second half.
[graph showing life of universe, including period where dark energy would appear constant]
Dark energy appears constant because we are currently in this region of the graph.
3) The change of Utot and +GPE according to time.
In the early universe, even if Utot is 0, the universe can have an accelerating early expansion, because U-+ exists.
In the early universe, even if Utot is 0, the universe can have an accelerating early expansion, because "GPE related with positive mass" has a positive value.
Although "Utot" and "GPE related with positive mass" can have large differences in the early universe, as time goes by they converge.
Have a nice day!
--- Icarus2
+GPE=U++ + U+-, for GPE on the positive masses
-GPE=U-- + U+-, for GPE on the negative masses
+GPE = positive gravitational potential energy = U-+
-GPE = negative gravitational potential energy = U++ + U--
Utot = Total gravitational potential energy = (+GPE)+(-GPE)=(U-+) + (U++) + (U--)
Most of the rest of your differences rest on this misunderstanding of your notation. I believe I have corrected my previous post to accurately reflect your use of these values, and to introduce +U and -U for the potential observed by a positive and a negative mass, respectively.
Question: You state that your model predicts flatness, but the computer simulations do not appear to allow motion in higher dimensions. Thus the observed flatness appears to be an assumed initial
condition, not a derived fact of your theory.
Why must the initial distribution be limited to three spatial dimensions?
What would prevent non trivial deformations in curvature from radiating to infinity, breaking flatness?
On a different line of thought,
Would it make sense to consider your model from an initial infinite cubic lattice of positive mass, with a negative mass at the body center of each cube?
Would this arrangement be attractive or repulsive?
My model starts with a single assumption: "There was a pair creation of negative energy (mass) and positive energy (mass) in the early universe", or in other words, "The law of energy conservation came into existence when the universe was born."
This single assumption explains the total energy of the universe, flatness, inflation, decelerating expansion, dark energy, and dark matter.
The diverse extra assumptions that the typical theories require (the driver of inflation, the cosmological constant, vacuum energy, dark-matter candidates like WIMPs) are not needed; negative energy is the essential energy needed to satisfy energy conservation at the birth of the universe.
Flatness and isotropy originate from local energy conservation; in other words, they originate from the pair creation of negative energy and positive energy.
Local energy conservation means that in each pair creation, energy is conserved.
Because positive and negative energy offset each other, the problem of expansion from a state exceeding black-hole density does not arise.
In my opinion, infinity is only a mathematical concept; in the real world, infinity does not exist.
In any case, if total m+ = |total m-|, it would probably expand.
My model starts with a single assumption: "There was a pair creation of negative energy (mass) and positive energy (mass) in the early universe", or in other words, "The law of energy conservation came into existence when the universe was born."
This single assumption explains the total energy of the universe, flatness, inflation, decelerating expansion, dark energy, and dark matter.
The rest of your response seems adequate to convince me.
I agree that your computer models show that dispersion of equal magnitudes of positive and negative mass from a concentrated point origin in a flat 3D space-time has very compelling descriptive power.
I am suggesting that a flat 3D space-time is an implicit assumption of your model, or at least a constraint of your computer models, and should be viewed as a second principle.
This would simply reduce flatness to an assumption, one most would consider reasonable and well tested asymptotically. Otherwise, you need to explain why matter only went in three dimensions, not
4 or 5; or 103.
In my model, flatness is not an assumption.
Flatness originates from the offset of positive energy (positive curvature) and negative energy (negative curvature). Therefore, it is induced from energy conservation, that is, from the pair creation of positive energy and negative energy.
Refer to Wikipedia:
A gravitational field has negative energy. Matter has positive energy. The two values cancel out provided the universe is completely flat.
Flatness means space-time is Euclidean. How does your model show that space-time must be Euclidean?
I will concede that an initially flat space-time will remain flat under your model. How does your model show that space was initially flat?
III-H. Observation value of WMAP
2)Some interpretation
According to the WMAP observations, the current fractions of dark energy, dark matter, and matter are predicted to be approximately 72.1%, 23.3%, and 4.6%, respectively.
Now, let's correspond to the GPE as follows.
Matter =
Dark Matter =
Dark Energy =
fig13. m+ = +100 X 6 = + 600. (±1200,0,0), (0,±1200,0), (0,0,±1200), each 100.
Negative mass distribution : center(±1200,0,0), center(0,±1200,0), center(0,0,±1200), negative mass is spread within R=3-120.
U++ : -83.2 (1)
U-- : -459.6 (5.523)
U-+ : +1286.9 (15.463)
The ratio above holds between the three physical parameters; a case in which it holds was found through simulation.
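The claimed correspondence can be checked arithmetically by normalizing the three simulated magnitudes. Mapping matter to |U++|, dark matter to |U--|, and dark energy to U-+ is my reading of the figure, so treat that mapping as an assumption:

```python
# Simulated GPE magnitudes quoted in the post (fig. 13 run):
U_pp, U_mm, U_pm = 83.2, 459.6, 1286.9  # |U++|, |U--|, U-+

total = U_pp + U_mm + U_pm
fractions = [round(100 * u / total, 1) for u in (U_pp, U_mm, U_pm)]
print(fractions)  # -> [4.5, 25.1, 70.3]

ratios = [round(u / U_pp, 3) for u in (U_mm, U_pm)]
print(ratios)     # roughly [5.524, 15.468]

# WMAP fractions quoted in the post, for comparison:
wmap = [4.6, 23.3, 72.1]
```

The normalized fractions land in the neighborhood of the WMAP values, though not exactly on them, which is the level of agreement the post seems to claim.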
Very interesting.
What determined why some galaxies formed envelopes of negative mass, while others did not? How do you explain the large elliptical galaxies with no observed dark matter, which I gather your model would predict for them?
Have a good time!
Additional explanation:
With a total rest-mass energy of zero:
U++ = -83.2 (1)
U-- = -459.6 (5.523)
U-+ = 1286.9 (15.463)
When we judge the components of the universe, we judge them by their gravitational effects rather than by their mass energy.
Therefore, when the gravitational potential energy U-+ is larger than the gravitational potential energy U generated by matter, we will be misled into thinking that some mass energy larger than the mass energy of the matter exists.
Since repulsive effects occur between negative masses, negative mass cannot form large structures like stars and will instead be distributed all over space. Negative mass within a galaxy is cancelled out by attraction from the large positive mass during the galaxy-formation process, while the space outside the galaxy retains its distribution of negative mass.
Factors that break the uniform distribution of negative mass:
1. Negative mass feels an attractive effect from massive positive mass; thus, near a massive positive object such as a galaxy or galaxy cluster, the density of negative mass is higher closer to it and lower farther away.
2. If a positive mass with strong gravity (such as a galaxy cluster) or an interstellar cloud of positive mass passes through a region where negative mass is distributed, the negative mass can disappear on meeting positive mass, or be drawn in by the attractive effect of the massive positive mass; so there can be regions where negative mass, namely dark matter, is not uniformly distributed.
3. For complete spherical symmetry, the gravitational effects of the negative mass outside the galaxy cancel each other, and there may be no additional gravitational effect inside the galaxy.
4. When galaxy clusters of uneven shape exist, the surrounding negative mass does not always surround the clusters evenly.
Have a good time!
--- Icarus2
1. Dark energy - Accelerating expansion of the universe due to negative mass
2. Inflation, decelerating expansion and accelerating expansion with pair creation of negative mass and positive mass
3. Paper: The change of Gravitational Potential Energy and Dark Energy in the Zero Energy Universe.
2012-Feb-12, 02:26 PM #2
Seminars and Colloquia
DnA Seminar
Tuesday, October 15, 2013
12:00 pm - 01:00 pm
DnA Seminar, Han Li (Yale): "Effective Discreteness of the 3 Dimensional Markov Spectrum"
Abstract: Let O denote the set of non-degenerate, indefinite, real quadratic forms in 3-variables. We define for every such quadratic form Q, the Markov infimum m(Q)=inf{|Q(v)|^3/|det(Q)|: v is a
nonzero integral vector in R^3}. This normalization makes the infimum invariant after rescaling the quadratic form. The set M={m(Q): Q in O} is called the 3-dimensional Markov spectrum. An early
result of Cassels-Swinnerton-Dyer combined with Margulis' proof of the Oppenheim conjecture asserts that M consists of rational numbers, and for every a>0 there are only finitely many numbers in M
which are greater than a. In this lecture we will discuss an effective improvement of this result. This is an ongoing joint work with Prof. Margulis. The key ingredient is to study the compact orbits
of the SO(2,1) action on SL(3, R)/SL(3, Z), and our method involves techniques from the geometry of numbers, dynamics on homogeneous spaces and automorphic representations.
ESC 638
Tuesday, November 27, 2012
12:00 pm - 01:00 pm
DnA Seminar, Ron Blei - UCONN: "Measurements of randomness and interdependence"
Abstract: Interdependence and randomness can be calibrated by indices based separately on combinatorial measurements, p-variations, and tail-probability estimates. These notions had naturally
originated in a context of harmonic analysis, and appeared later in stochastic settings. I intend in this talk to survey and explain these ideas, and (hopefully) also shed some new light on them. No
formal proofs will be given. I will speak heuristically, but will try to be precise.
ESC 638
Tuesday, November 13, 2012
12:00 pm - 01:00 pm
DnA Seminar, Jane Hawkins - UNC Chapel Hill: "Some ergodic measures coming from complex dynamics: properties and open questions"
Abstract: There are a variety of natural measures that one can use when studying rational or meromorphic maps of the sphere. We give an overview of the measures of interest and discuss some of what
is known about their ergodic properties, both the classical old results and recent ones. There are some open questions that are very easy to describe to ergodic theorists.
ESC 638
Tuesday, November 06, 2012
12:00 pm - 01:00 pm
DnA Seminar, Jon Chaika - University of Chicago: "Diophantine approximation for interval exchange transformations"
Abstract: Interval exchange transformations (IETs) are invertible, piecewise order preserving isometries of the unit interval with finitely many discontinuities. They generalize rotations of the
circle. Motivated by this connection several diophantine properties will be presented. In particular, this talk will answer the questions of typically how quickly does the orbit of a point under an
IET approach another point or itself. This is joint work with M. Boshernitzan and D. Constantine.
ESC 638
Tuesday, October 30, 2012
12:00 pm - 01:00 pm
DnA Seminar, Giulio Tiozzo - Harvard: "Renormalization and alpha-continued fractions"
Abstract: Alpha-continued fraction transformations are a one-parameter family of maps which arise from generalized continued fraction algorithms. The average speed of convergence of these algorithms
(which corresponds to the entropy of the maps) varies wildly with the parameter, and is known to be locally monotone outside a closed fractal set E. Surprisingly, such a set has the same
structure as the real slice of the Mandelbrot set, making it possible to apply ideas from complex dynamics to continued fractions: in this talk, we will investigate the self-similar structure of E
and characterize the plateaux occurring in the graph of entropy.
ESC 638
Tuesday, October 23, 2012
12:00 pm - 01:00 pm
DnA Seminar, Stephen Shea - St. Anselm: "Topological Conjugacy on the Complement of the Periodic Points"
Abstract: Let Per(X) denote the periodic points of a subshift X. I will say two subshifts X and Y are essentially conjugate if there exists a topological conjugacy from X\Per(X) to Y\Per(Y). In 1990,
Susan Williams presented an example of a sofic shift that is not topologically conjugate to a renewal system. I will show that this example is essentially conjugate to a renewal system. I will also
present an example of a renewal system that is essentially conjugate to a shift of finite type but not topologically conjugate to a shift of finite type. Lastly, we will provide a sufficient
condition for a renewal system to be essentially conjugate to a shift of finite type.
ESC 638
Tuesday, October 09, 2012
12:00 pm - 01:00 pm
DnA Seminar, David Constantine, Wesleyan: "Frame flow and rank-rigidity"
Abstract: I will describe the dynamics of the frame flow on a compact manifold of negative curvature, as well as some extensions to certain non-positively curved manifolds. I will then use this to
prove that a compact manifold with "higher hyperbolic rank" is actually hyperbolic, under a few curvature assumptions. This fits into a family of rank-rigidity theorems/open problems.
ESC 638
Tuesday, November 15, 2011
12:00 pm - 01:00 pm
DnA Seminar: The Mathematics of Iterated Paperfolding
Speaker: Michel Dekking, Delft, Wesleyan. Abstract: I will present some old and some new results in a project which has been going on for 35 years. There are bits of algebra
(representation theory), bits of topology (the Jordan curve theorem, Hausdorff metric), bits of graph theory (Euler's 1736 theorem), bits of number theory (Gaussian primes, Löschian numbers), bits of
theoretical computer science (automatic sequences) and bits of measure theory (Hausdorff dimension). NOTE: This talk will be appropriate for undergraduates.
ESC 638
Tuesday, November 08, 2011
12:00 pm - 01:00 pm
DnA Seminar: Sequences of Integers Associated with Infinite Measure Preserving Transformations
Speaker: Yuji Ito. Abstract: Infinite measure preserving ergodic transformations cannot, because of ergodicity, admit a finite, equivalent, invariant measure. Because of this fact, there
exist various kinds of infinite sequences of integers that one can associate with such transformations, which are isomorphism invariants. I will discuss some of these sequences, such as weakly
wandering sequences, dissipative sequences, recurrent sequences and so on and describe some of their properties.
ESC 638
Tuesday, November 01, 2011
12:00 pm - 01:00 pm
DnA Seminar: Sequences of Integers Associated With Infinite Measure Preserving Transformations
Speaker: Yuji Ito. Abstract: Infinite measure preserving ergodic transformations cannot, because of ergodicity, admit a finite, equivalent, invariant measure. Because of this fact, there
exist various kinds of infinite sequences of integers that one can associate with such transformations, which are isomorphism invariants. I will discuss some of these sequences, such as weakly
wandering sequences, dissipative sequences, recurrent sequences and so on and describe some of their properties.
ESC 638
Tuesday, October 18, 2011
12:00 pm - 01:00 pm
DnA Seminar: How fractal is the sum of two random fractals?
Speaker: Michel Dekking, Delft/Wesleyan. Abstract: The Palis conjecture states that if one takes two Cantor sets, such that the sum of their Hausdorff dimensions is greater than 1, then
generically it should be true that their algebraic sum should contain an interval. I shall discuss this conjecture in a probabilistic setting.
ESC 638
Tuesday, October 11, 2011
12:00 pm - 01:00 pm
Dynamics 'n Analysis Seminar: Weakly Wandering, Exhaustive Weakly Wandering and Strongly Weakly Wandering Sequences for Measure Preserving Transformations
Speaker: Vidhu Prasad (UML). Abstract: We consider sequences of distinct integers associated to T, an ergodic invertible measure preserving transformation of an infinite measure space. It is
well known that such a transformation always has associated to it weakly wandering (WW) sequences, and exhaustive weakly wandering sequences (EWW). Despite this there are many well-known T
for which not a single EWW is known.
In this talk, we consider a class of sequences for T called strongly weakly wandering (SWW). All SWW sequences are also EWW. Their advantage is that the
conditions implying SWW are easier to verify than the conditions implying EWW.
We will examine SWW sequences for type zero ergodic transformations. An example of a type zero transformation
is simple random walk on the integers. All concepts will be defined in the talk. If time permits we connect these sequences to tiling the integers with a single infinite tile. This represents joint
work with Eigen, Hajian and Ito.
ESC 638
Tuesday, October 04, 2011
12:00 pm - 01:00 pm
The dynamics of the Nash map for 2 by 2 Games
Speaker: Bruce Kitchens, IUPUI, BU. Abstract: J. Nash defined a continuous map from a product of simplices, corresponding to the strategies for a finite game, to itself and used it to show
that every finite game has a Nash equilibrium. The map is known as the Nash better response map, and a point is a fixed point for the map if and only if it is a Nash equilibrium for the game. In the case
of a 2 by 2 game the map reduces to a continuous, piecewise rational map of the unit square to itself. I will discuss the dynamics of the map for several 2 by 2 games - prisoner's dilemma, matching
pennies and chicken. The dynamics of the map divide the games into three classes, those having a dominant strategy, those having elliptic dynamics and those having hyperbolic dynamics. The dynamics
of the map for the game of matching pennies will be discussed in detail.
ESC 638
The Schrodinger-Virasoro algebra: a mathematical structure between
conformal field theory and non-equilibrium dynamics
We explore the mathematical structure of the infinite-dimensional Schrodinger-Virasoro algebra, and discuss possible applications to the integrability of anisotropic or out-of-equilibrium statistical
systems with a dynamic exponent z different from 1 by defining several correspondences with conformal field theory.
How to prove as an identity?
How do I prove that this is an identity? $\frac{\tan^3\theta}{1+\tan^2\theta}+\frac{\cot^3\theta}{1+\cot^2\theta}=\frac{1-2\sin^2\theta \cos^2\theta}{\sin\theta \cos\theta}$
The left-hand side, having two fractions, is (or at least looks to be) more complicated than the right-hand side, so try working on the left-hand side. A good start would probably be to convert
everything to sines and cosines, convert to a common denominator, combine the two fractions, and see where that leads.... | {"url":"http://mathhelpforum.com/trigonometry/96423-how-prove-identity.html","timestamp":"2014-04-19T09:01:40Z","content_type":null,"content_length":"39244","record_id":"<urn:uuid:019c94e3-ab67-4cb1-8e9a-7f9b3bc31ad6>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00629-ip-10-147-4-33.ec2.internal.warc.gz"} |
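Following that hint, here is a sketch of the computation (assuming the first numerator is $\tan^3\theta$, which matches the $\cot^3\theta$ term and is what makes the identity hold):

```latex
\begin{aligned}
\frac{\tan^3\theta}{1+\tan^2\theta}+\frac{\cot^3\theta}{1+\cot^2\theta}
&= \frac{\tan^3\theta}{\sec^2\theta}+\frac{\cot^3\theta}{\csc^2\theta}
 = \frac{\sin^3\theta}{\cos\theta}+\frac{\cos^3\theta}{\sin\theta}\\[4pt]
&= \frac{\sin^4\theta+\cos^4\theta}{\sin\theta\cos\theta}
 = \frac{(\sin^2\theta+\cos^2\theta)^2-2\sin^2\theta\cos^2\theta}{\sin\theta\cos\theta}
 = \frac{1-2\sin^2\theta\cos^2\theta}{\sin\theta\cos\theta}.
\end{aligned}
```

The only non-routine step is the last line, where $\sin^4\theta+\cos^4\theta$ is rewritten by completing the square of $\sin^2\theta+\cos^2\theta=1$.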
Betti Numbers and number of generators
Suppose that $R:=k[x_0,\dots,x_n]$ and $I$ is an ideal. Is there any relation between finding the minimal generators of $I$ and the graded betti numbers of the module $R/I$?
The answer to your question ('yes') is basically part of the definition of graded Betti numbers. You might want to review that. – J.C. Ottem Oct 22 '12 at 1:27
As far as I understand, the Betti numbers give the number of generators of each specific degree for each module in the free resolution; how can that contribute to the minimal number of generators of
the whole $R/I$? Any extra hint or reference? – abd Oct 22 '12 at 2:42
I think you mean minimal generators of $I$ rather than $R/I$. – Mahdi Majidi-Zolbanin Oct 22 '12 at 2:55
Thanks Mahdi. I've edited it. – abd Oct 22 '12 at 3:01
1 Answer
To support J.C. Ottem's answer, let me present one example.
Let $R = \mathbb{C}[x,y]$ and $I = (x,y^2)R$. What is the minimal graded free resolution of $R/I$, equivalently of $I$? It is

$0 \rightarrow R(-3) \stackrel{d_1}{\rightarrow} R(-1) \oplus R(-2) \stackrel{d_0}{\rightarrow} R \rightarrow R/I \rightarrow 0$

where $d_1 = (-y^2 \;\; x)$ and $d_0 = (x \;\; y^2)$.
Now, ask what are the graded Betti numbers and minimal number of generators for $R/I$. I agree with J.C. Ottem's opinion on reviewing the definitions. I hope this helps.
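To spell out my reading of the example (using the standard definition that $\beta_{i,j}$ counts the $R(-j)$ summands sitting in homological degree $i$ of the minimal graded free resolution), the resolution above gives

```latex
\beta_{0,0}(R/I)=1,\qquad \beta_{1,1}(R/I)=1,\qquad \beta_{1,2}(R/I)=1,\qquad \beta_{2,3}(R/I)=1,
```

so $\mu(I)=\beta_{1,1}+\beta_{1,2}=2$: the ideal is minimally generated by one element of degree 1 (here $x$) and one element of degree 2 (here $y^2$). In general, for a homogeneous ideal $I$, the number of minimal generators of $I$ in degree $j$ is $\beta_{1,j}(R/I)=\beta_{0,j}(I)$.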
Thanks Youngsu, your example helped me to understand the definition better. Please correct me if I understood it wrong: from the free resolution we can find the Betti numbers
easily, and the first module in the sequence, $R(-1)\oplus R(-2)$, is generated by two polynomials of degree 1 and 2 respectively. Does that mean the ideal $I$ is generated
by polynomials of degree 1 and 2? Also, how does the differential map relate to the generators of $I$? And what can we get from the Betti numbers about $R/I=\langle 1,y \rangle$? Is there anything we
can extract about the quotient? – abd Oct 23 '12 at 17:28
Hi. You always need to put the condition "graded minimal free resolution". Check this condition with your definition of (graded) Betti numbers. The entries of $d_0$ are a generating
set of $I$; here $(x, y^2) = I$. This can be understood from how you build a resolution. I am a bit confused by your example $R/I=\langle 1,y \rangle$. Did you mean that $I = (1,y)$? If so, then you
wouldn't get anything, since then $I = R$. – Youngsu Oct 24 '12 at 14:42
AA Similarity
What if you were given a pair of triangles and the angle measures for two of their angles? How could you use this information to determine if the two triangles are similar? After completing this
Concept, you'll be able to use AA Similarity to decide if two triangles are similar.
Watch This
CK-12 Foundation: Chapter7AASimilarityA
Watch this video beginning at the 2:09 mark.
James Sousa: Similar Triangles
James Sousa: Similar Triangles by AA
The Third Angle Theorem states that if two angles in one triangle are congruent to two angles in another triangle, then the third angles are congruent too, because the angles of a triangle always sum to $180^\circ$.
Investigation: Constructing Similar Triangles
Tools Needed: pencil, paper, protractor, ruler
1. Draw a $45^\circ$ angle and a $60^\circ$ angle at the two endpoints of a horizontal segment, and extend their sides until they intersect to form a triangle.
2. Repeat Step 1, but make the horizontal side between the $45^\circ$ and $60^\circ$ angles a different length.
3. Find the ratios of the corresponding sides: put the sides opposite the $45^\circ$ angles over each other, and likewise for the sides opposite the $60^\circ$ angles. Compare the ratios.
AA Similarity Postulate: If two angles in one triangle are congruent to two angles in another triangle, the two triangles are similar.
The AA Similarity Postulate is a shortcut for showing that two triangles are similar: knowing that two angles in one triangle are congruent to two angles in another is enough
information to show that the two triangles are similar. Then, you can use the similarity to find the lengths of the sides.
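As a small made-up illustration (the side lengths here are assumptions, not taken from the figures below): if $\triangle ABC \sim \triangle DEF$ with $AB = 6$, $DE = 9$, and $BC = 4$, then corresponding sides are proportional, so

```latex
\frac{AB}{DE}=\frac{BC}{EF}
\quad\Longrightarrow\quad
\frac{6}{9}=\frac{4}{EF}
\quad\Longrightarrow\quad
EF=\frac{4\cdot 9}{6}=6.
```

Once AA establishes similarity, every missing side can be found this way from one known pair of corresponding sides.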
Example A
Determine if the following two triangles are similar. If so, write the similarity statement.
Find the measure of the third angle in each triangle: $m \angle G = 48^\circ$ and $m \angle M = 30^\circ$. Two pairs of corresponding angles are congruent, so by AA, $\triangle FEG \sim \triangle MLN$.
Example B
Determine if the following two triangles are similar. If so, write the similarity statement.
$m \angle C = 39^\circ$ and $m \angle F = 59^\circ$. The corresponding angles are not congruent, so $\triangle ABC$ and $\triangle DEF$ are not similar.
Example C
Are the following triangles similar? If so, write the similarity statement.
Because $\overline{AE} \ || \ \overline{CD}$, we have $\angle A \cong \angle D$ and $\angle C \cong \angle E$ (alternate interior angles). By AA, $\triangle ABE \sim \triangle DBC$.
Watch this video for help with the Examples above.
CK-12 Foundation: Chapter7AASimilarityB
Two triangles are similar if all their corresponding angles are congruent (exactly the same) and their corresponding sides are proportional (in the same ratio).
Guided Practice
Are the following triangles similar? If so, write a similarity statement.
1. Yes, $\triangle DGE \sim \triangle FGD \sim \triangle FDE$
2. Yes, $\triangle HLI \sim \triangle HMJ$
3. No; though $\angle MNQ \cong \angle ONP$ (vertical angles), nothing is known about the remaining pairs of angles.
Use the diagram to complete each statement.
1. $\triangle SAM \sim \triangle \underline{\;\;\;\;\;\;\;\;\;}$
2. $\frac{SA}{?}=\frac{SM}{?}=\frac{?}{RI}$
3. $SM = \underline{\;\;\;\;\;\;\;\;\;}$
4. $TR = \underline{\;\;\;\;\;\;\;\;\;}$
5. $\frac{9}{?}=\frac{?}{8}$
Answer questions 6-9 about trapezoid $ABCD$
6. Name two similar triangles. How do you know they are similar?
7. Write a true proportion.
8. Name two other triangles that might not be similar.
9. If $AB = 10$, $AE = 7$, and $DC = 22$, find $AC$.
10. Writing How many angles need to be congruent to show that two triangles are similar? Why?
11. Writing How do congruent triangles and similar triangles differ? How are they the same?
Use the triangles below for questions 12-15.
$AB = 20$, $DE = 15$, and $BC = k$.
12. Are the two triangles similar? How do you know?
13. Write an expression for $FE$ in terms of $k$.
14. If $FE = 12$, what is $k$?
15. Fill in the blanks: If an acute angle of a _______ triangle is congruent to an acute angle in another ________ triangle, then the two triangles are _______.
Use the diagram below to answer questions 16-20.
16. Draw the three separate triangles in the diagram.
17. Explain why $\triangle GDE \sim \triangle DFE \sim \triangle GFD$
Complete the following proportionality statements.
18. $\frac{GF}{DF}=\frac{?}{FE}$
19. $\frac{GF}{GD}=\frac{?}{GE}$
20. $\frac{GE}{DE}=\frac{DE}{?}$
Entering Commands Directly
In addition to using controllers, you can type in commands directly from the keyboard. Type your input and press the evaluation keystroke; CalculationCenter automatically adds the In and Out labels.
Every calculation that you perform using a controller can also be done by typing and evaluating input in your notebook. With experience, you may find this method faster and more convenient than using
the controllers.
Here is an example from algebra. To help you match brackets, the first bracket appears colored until the closing bracket is typed.
Here is an example from calculus using the Integrate command.
You can look up the syntax of any command in the Help Browser. The Browser includes examples for each function, which you can edit and evaluate in place.
See Using Help for more information on the Help Browser.
Here are some conventions governing the syntax of CalculationCenter commands.
Built-in functions are capitalized. Arguments to functions are wrapped with square brackets.
Each of these represents multiplication:
a*b a b a(b+1)
2x means 2*x.
These are standard arithmetic operations:
2^3 means 2 to the power of 3.
Uppercase and lowercase letters are recognized as different. Lists are wrapped with curly brackets.
{a, b, B}
Built-in functions and symbols are capitalized. Commas are used to separate arguments. A semicolon prevents output, but the command is still evaluated.
Variables are usually in lowercase letters. Entire words can be used as variables.
x = 5
xvalue = 3
To auto-complete a command:
1. Type a part of the command into your notebook and select it.
2. Press the command-completion keystroke.
If more than one command matches the text, a pop-up menu showing a list of matching commands appears. Select a command from the pop-up menu to paste it into your notebook.
To get a template for a command:
1. Select a command name in your notebook.
2. Press the template keystroke.
The template shows the full syntax of the command with dummy variables to indicate parameters you must specify.
st: RE: baseline adjustment in mixed models
st: RE: baseline adjustment in mixed models
From Clyde Schechter <clyde.schechter@einstein.yu.edu>
To statalist@hsphsun2.harvard.edu
Subject st: RE: baseline adjustment in mixed models
Date Mon, 16 Nov 2009 09:48:56 -0800
Yes, in general. One could concoct a data generating process in which the
baseline value y0 exerted some outsize influence over all subsequent
values (e.g., repeated measures in which the subject was fed back his/her
response at the baseline measurement shortly prior to each subsequent one
and asked to try to achieve consistency with that) that would necessitate
inclusion of y0 as a covariate as well. But I can't think of any examples
that aren't really artificial. So, unless there is something about what
you are studying that specifically suggests y0 is needed as a covariate,
the standard growth model represented just by
xtmixed y group time groupXtime || id:
(or the corresponding random-slopes version) should do the trick. For most
situations it adequately accounts for any baseline group difference.
Again, as coded this model assumes that the y-time relationship is linear.
If that is not the case, time needs to be transformed or recoded as
dummies or splines, etc., accordingly. And again, since the coefficient
of group represents the mean group difference conditional on the other
model variables all being zero, life is simplest for this purpose if time
is coded so that it (or, more generally, all variables representing it) is
zero at baseline.
I've never really thought about using random slopes as a way of optimally
regressing extreme values to the mean. It seems to me that the
distinction between a random intercept and random slopes model depends on
what the science says about the evolution of y over time. If it is
credible that a single growth rate (coefficient of time) applies within
each group and that individual deviations from that reflect either
baseline differences being carried forward or simple random errors (e.g.
measurement error), then the random intercept model is a complete
specification. If, however, it is more reasonable to suppose that, within
each group, subjects may differ not just in their baseline values but also
in the rate at which y varies over time, then a random slopes model is a
better specification. If the science is not clear, one could test this
empirically by seeing whether the random slopes model turns up appreciable
variance for the slope(s) or not.
I don't think I understand your question regarding a model of choice with
respect to regression to the mean, so I won't say any more about it here.
Clyde Schechter
> Date: Sun, 15 Nov 2009 10:42:58 -0500
> From: "Visintainer PhD, Paul" <Paul.Visintainer@baystatehealth.org>
> Subject: st: RE: baseline adjustment in mixed models
> Clyde, thanks for the very clear explanation. You're getting to the root
> of my question. So, if I understand you correctly, the following model is
> unnecessary:
> xtmixed y y0 group time groupXtime || id:
> or the random slope equivalent, because the group variable accounts for
> differences at Y0. Two related questions:
> 1) You mentioned that coding baseline as Y0 makes life simpler. Suppose
> time is coded as baseline plus time1 through time4. Is there any utility
> to the model:
> xtmixed y baseline group time groupXtime || id: , where the time variable
> does not include baseline Y0.
> 2) Controlling for baseline attempts to account for group differences at
> the start of the trial, and also for control for those observations with
> extreme values (i.e., regression to the mean). Am I correct in assuming
> that the random coefficient model is the model of choice for correcting
> regression to the mean? My logic (or illogic) here is that the more
> extreme the baseline values, the greater the effect of regression to the
> mean (i.e., an individual's slope is a composite of the group assignment
> plus the effect of regression to the mean depending on his initial value).
> Thanks, this has been really helpful.
> - -Paul
Clyde Schechter, MA MD
Associate Professor of Family & Social Medicine
Please note new e-mail address: clyde.schechter@einstein.yu.edu
I am trying to finish a set of problems give to me by my teacher. I am using C I need to know about Arrays and How to find the mode inside an array.
what do you mean by "mode"??
The mode is the most frequently occurring element. This problem seems simple enough at first glance, but then you realize that an efficient solution requires playing with different kinds of data
structures :) A C example is here: http://rosettacode.org/wiki/Averages/Mode#C but it seems way too complicated for my tastes.
A simple way to do it is to sort the thing and then search for the longest-recurring element.
It can be done without sorting as well. Just create a hash table storing each element as key and its total number of repetitions as value. Traverse the source array and update the hash table for each
element encountered. After the traversal, iterate through the hash table to find the key with the maximum value.
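To make the sort-then-scan suggestion above concrete, here is one possible C sketch (a hypothetical `mode` helper, not from the thread; when several values tie for the longest run, the smallest one wins because the array is scanned in sorted order):

```c
#include <stdlib.h>
#include <string.h>

/* qsort comparison callback for ints, avoiding overflow-prone subtraction. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Return the most frequent value in v[0..n-1] (n must be > 0).
 * Works on a sorted copy, so the caller's array is left untouched;
 * after sorting, equal elements are adjacent and the longest run is the mode. */
int mode(const int *v, size_t n)
{
    int *w = malloc(n * sizeof *w);
    size_t i, run = 1, best_run = 1;
    int best;

    memcpy(w, v, n * sizeof *w);
    qsort(w, n, sizeof *w, cmp_int);

    best = w[0];
    for (i = 1; i < n; ++i) {
        run = (w[i] == w[i - 1]) ? run + 1 : 1;  /* length of current run */
        if (run > best_run) {
            best_run = run;
            best = w[i];
        }
    }
    free(w);
    return best;
}
```

Sorting costs O(n log n); the hash-table approach in the last reply brings the expected cost down to O(n), at the price of extra bookkeeping.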
Harder, Better, Faster, Stronger
A runcible nonce
April 15, 2014
In the recent panic surrounding the Heartbleed bug, we ask ourselves why, and how, these bugs still happen. We know that it was a preventable bug, with a simple fix, but with potentially serious consequences.
Of Sets and bitmaps
April 8, 2014
When we represent sets, we have many options. We can use a language-specific primitive, like std::set<T> (which is likely list- or tree-like in its detail), or use a bitmap that marks, for each
element (and therefore assumes that there is an universal set that contains all elements) whether or not it is included in the set. Bitmaps are simple to implement (especially when one uses something
like std::vector<bool>) but need an amount of memory proportional to the universal set, not to the actual subset you’re trying to encode.
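For illustration, here is a minimal fixed-universe bitmap set, written in C rather than with the std::vector<bool> mentioned above; the names and the universe size are my own assumptions:

```c
#include <stdint.h>
#include <stddef.h>

/* Fixed-universe bitmap set: element i (0 <= i < UNIVERSE) is in the set
 * iff bit i of w[] is 1. Memory use is proportional to the universe,
 * not to the number of elements actually stored. */
#define UNIVERSE 1024
#define WORDS ((UNIVERSE + 63) / 64)

typedef struct { uint64_t w[WORDS]; } bitset_t;

static void bs_add(bitset_t *s, size_t i)    { s->w[i / 64] |=  (uint64_t)1 << (i % 64); }
static void bs_remove(bitset_t *s, size_t i) { s->w[i / 64] &= ~((uint64_t)1 << (i % 64)); }
static int  bs_contains(const bitset_t *s, size_t i) { return (s->w[i / 64] >> (i % 64)) & 1; }

/* Cardinality: count set bits word by word. */
static size_t bs_count(const bitset_t *s)
{
    size_t n = 0, k;
    for (k = 0; k < WORDS; ++k) {
        uint64_t x = s->w[k];
        while (x) { x &= x - 1; ++n; }  /* clear the lowest set bit */
    }
    return n;
}
```

Note how bs_count walks every word of the universe even for a tiny subset: exactly the memory-and-time-proportional-to-the-universe cost described above.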
We can also use lists, or interval lists. But which one is the most efficient? Under what conditions? Let’s have a look.
Hidden lunch
April 2, 2014
What delicious lunch hides in the equation
$\displaystyle \pi=\left(\frac{n_{om}}{z\sqrt{a}}\right)^2$?
Rearrange the equation to find out!
Consider Simplicity, Verily.
April 1, 2014
If you’re a perfectionist, it’s really hard to limit the efforts you put into developing code. A part of you wants to write the perfect code, while another reminds you that you haven’t time for that,
and you will have to settle for good enough code. Today’s entry is exactly this: an ambitious design that was reduced to merely OK code.
I needed to have an exporter (but no importer) to CSV format for C++. One of the first things that came to mind was a variant-like hierarchy that can store arbitrary values, each specific class
having its own to_string function, and then some engine on top that can scan a data structure and spew it to disk as CSV. That design is very general, but ridiculously complicated.
The Great Sultan of the Indies
March 25, 2014
The Great Sultan of the Indies was sent by one of his viziers 99 bags of gold, each containing exactly 50 coins. Hidden somewhere in those 99 bags is a hollowed coin (indistinguishable to the naked
eye from the others) containing a secret message destined to His Most Excellent Majesty. Using a simple two-pan balance, in how many weighings can you find the bag containing the lighter coin? With
how many further weighings can you find the coin within the bag?
The solution of the problem is not that complicated, but depends entirely on the assumptions you make about the balance. If one supposes that the balance is large enough to pile the 99 bags into
either one of its pans, and that it is locked while you're loading it, giving its reading only once you're done loading it, a natural solution comes to mind. Read the rest of this entry »
Converting PDFs to Hard B+W
March 18, 2014
Nothing too this week: how to convert a Djvu or PDF to hard black and white PDF—not shades of gray. Why would you want to do that anyway? Well, you may, like me, have a printer that has no concept of
color calibration and has dreadful half-toning algorithms, resulting in unreadable text and no contrast when you print a Djvu or a PDF of a scanned book.
[DEL:Walk:DEL] Count like an Egyptian (Part I)
March 11, 2014
The strangest aspect of the Ancient Egyptian’s limited mathematics is how they wrote fractions. For example, they could not write $\frac{5}{7}$ outright, they wrote
$\displaystyle \frac{5}{7}=\frac{1}{2}+\frac{1}{5}+\frac{1}{70}$,
and this method of writing fractions leads to absurdly complicated solutions to trivial problems—at least, trivial when you can write a vulgar fraction such as $\frac{3}{5}$. But how much more
complicated is it to write $\frac{1}{2}+\frac{1}{5}+\frac{1}{70}$ rather than $\frac{5}{7}$? | {"url":"http://hbfs.wordpress.com/","timestamp":"2014-04-18T03:49:17Z","content_type":null,"content_length":"56910","record_id":"<urn:uuid:75f79bf2-09eb-4e08-8732-bde78af11f6c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00009-ip-10-147-4-33.ec2.internal.warc.gz"} |
Artificial magnetism for ultracold atoms
Recent progress in cooling and trapping of atoms has stimulated intense studies of the physical properties of the atomic Bose-Einstein condensates (BECs) and the degenerate Fermi gases. These systems
are formed typically at densities of $1014$ atoms per $cm3$ and temperatures in the nano-Kelvin range. Much current research in atomic quantum gases is motivated by the possibility of simulating a
variety of condensed-matter phenomena [1, 2, 3, 4] such as the transition between a superfluid state and an insulating state. An advantage of using atomic systems is the high degree of control: One
can relatively easily change the physical parameters of the system including the number of the trapped atoms, the shape of the trapping potential, and the strength of the atom-atom coupling.
On the other hand, the atoms forming the quantum gases are electrically neutral particles and there is no Lorentz force acting on them. Thus there is no direct analogy between the properties of the
degenerate atomic gases and magnetic phenomena involving electrons in solids, such as the quantum Hall effect discussed in a recent Viewpoint [5]. Now, Yu-Ju Lin and colleagues at NIST in
Gaithersburg, Maryland, report in Physical Review Letters an important experimental advance towards producing an artificial magnetic field for an atomic Bose-Einstein condensate [6].
The usual way to imitate the magnetic field in a cloud of electrically neutral atoms is to rotate the system [7, 8, 9]. In the rotating frame of reference the Hamiltonian for the atomic motion
acquires a vector-potential-type term describing the Coriolis force. The latter has the same mathematical structure as the Lorentz force for a charged particle in a uniform magnetic field.
Additionally, inertial effects push the atoms away from the center of the rotating cloud. When the rotation frequency approaches the frequency of the atomic trap, the trapping is canceled by the
centrifugal effect and the problem reduces to the cyclotron motion of a charged particle in a constant magnetic field. Using this method it is possible to produce vortex lattices in the atomic
Bose-Einstein condensates [1].
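The Coriolis/Lorentz correspondence invoked here is the standard textbook identification (not written out in the article; added for reference). For a particle of mass $m$ in a frame rotating at angular velocity $\mathbf{\Omega}$:

```latex
\mathbf{F}_{\mathrm{Cor}} = 2m\,\mathbf{v}\times\mathbf{\Omega}
\quad\longleftrightarrow\quad
\mathbf{F}_{\mathrm{Lor}} = q\,\mathbf{v}\times\mathbf{B},
\qquad\text{so that}\qquad
\mathbf{B}_{\mathrm{eff}} = \frac{2m}{q}\,\mathbf{\Omega}.
```

Both forces are velocity-dependent and perpendicular to the motion, which is why rotation mimics a uniform magnetic field.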
Limitations in the experimental techniques have so far prevented researchers from reaching higher artificial magnetic fields where cold atoms could enter the fractional quantum Hall regime.
Furthermore, the method of rotation usually applies to atomic clouds confined by cylindrically symmetric traps with a small anisotropic rotating potential added [1, 7, 8, 9]. However, there are
important trapping configurations like atom chips [10] lacking axial symmetry. Therefore it is desirable to have alternative methods for producing artificial magnetic fields for cold atoms without
rotation. For atoms trapped in an optical lattice, the artificial magnetic field can be generated by inducing an asymmetry in the atomic tunneling between the lattice sites [11, 12, 13]. To simulate
a magnetic flux there should be a nonvanishing phase for the atomic transfer around an elementary cell of the lattice known as the Peierls phase.
It is also possible to create an artificial magnetic field for cold atoms without a trapping lattice. In proposals of this kind, several laser beams are used to induce transitions between different atomic
internal states in a position-dependent manner [14, 15, 16, 17]. The laser fields alter (“dress”) the internal atomic eigenstates, which become superpositions of the original atomic states. It is the
position-dependence of these dressed eigenstates that changes the momentum operator $\mathbf{p}=-i\hbar\nabla$ to $\mathbf{p}-\mathbf{A}$ for the atomic center-of-mass motion in the $n$th internal state [18, 19]. The resulting
effective vector potential $\mathbf{A}$ represents an atomic momentum associated with the $n$th internal dressed state. Adopting the adiabatic approximation, one can ignore transitions to other atomic levels.
The atomic motion in the $n$th dressed state then becomes equivalent to the motion of a particle of unit charge in the magnetic field $\mathbf{B}=\nabla\times\mathbf{A}$. Several schemes have been proposed providing a nonzero
effective magnetic field $B$ for electrically neutral atoms in the laser fields [15, 16, 17]. Yet up to now there has been no experimental evidence of generating the artificial magnetic field using
methods other than rotation.
The experiment by Y.-J. Lin et al. [6] makes use of two counter-propagating laser beams that act on a BEC of $^{87}$Rb atoms characterized by a total spin $F=1$. The atoms have three magnetic
ground states $|m_F\rangle$ corresponding to different spin projections $m_F=0,\pm 1$. Two laser beams induce the Raman transitions between the magnetic states with $\Delta m_F=1$. The Raman process involves the
absorption (emission) of a photon with a wave vector $k_r$ from one beam, and emission (absorption) of a photon from another beam with the opposite wave vector $k_r'\approx -k_r$, as shown in Fig. 1. As a
result the laser beams couple the atomic states $|m_F,k\rangle$ differing in spin projection by $\Delta m_F=1$ and in linear momentum by $\Delta k=2k_r$. In such a situation the laser-dressed atomic states are the
superpositions of the combined spin and center-of-mass motion states $|k\rangle=|{-1},k-2k_r\rangle c_{-1}+|0,k\rangle c_0+|1,k+2k_r\rangle c_1$, where $c_j$ is the probability amplitude for the atom to have the spin projection
$m_F=j$. The states $|k\rangle$ have a minimum energy at a certain wave vector $k=k_{min}$, which depends on the detuning from the two-photon resonance. The corresponding momentum $\hbar k_{min}$ plays the role of a
spatially uniform effective vector potential, shifting the origin of the atomic momentum. To simulate a Lorentz force $F=v\times B$, the effective vector potential $A=\hbar k_{min}$ should be spatially
dependent and have a nonzero curl: $B=\nabla\times A\neq 0$.
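To see numerically how a position-dependent effective vector potential produces a nonzero field, here is an illustrative finite-difference sketch. The linear spatial profile below is invented for the example (standing in for $\hbar k_{min}(\delta(y))$ with $\hbar=1$); any dependence with nonzero curl would do:

```python
import numpy as np

# Illustrative model (all functional forms invented for this sketch):
# A = k_min(y) * x_hat, with k_min depending linearly on y.
x = np.linspace(-1.0, 1.0, 201)
y = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, y)          # arrays of shape (len(y), len(x))

k_min = 3.0 * Y                   # made-up linear spatial dependence
A_x, A_y = k_min, np.zeros_like(k_min)

# z-component of the curl: B_z = dA_y/dx - dA_x/dy (finite differences)
dAy_dx = np.gradient(A_y, x, axis=1)
dAx_dy = np.gradient(A_x, y, axis=0)
B_z = dAy_dx - dAx_dy

print(B_z.mean())                 # ~ -3.0: a uniform, nonzero effective field
```

A uniform gradient of $k_{min}$ gives a uniform effective field, mirroring the text's statement that a spatially varying detuning yields $B\neq 0$.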
An important ingredient of the experiment by Y.-J. Lin et al. [6] is the application of a real magnetic field in addition to the laser beams. The magnetic field removes the initial degeneracy of the
spin levels with $m_F=0,\pm 1$ via the linear and quadratic Zeeman effects. By changing the strength of the magnetic field one can increase or decrease the two-photon detuning $\delta$ and thus alter the
effective vector potential $A=\hbar k_{min}(\delta)$. Lin et al. have managed to transfer adiabatically the condensate atoms to the state with $k=k_{min}(\delta)$ in the presence of the Raman laser beams and the
magnetic field. Subsequently, the condensate was allowed to expand by removing the trap. The time-of-flight measurements have determined the atomic momentum $\hbar k_{min}(\delta)$ as a function of the detuning
$\delta$, showing a good agreement with the theoretical calculations [6].
The vector potential $A=\hbar k_{min}(\delta)$ produced in this way is uniform and thus corresponds to a zero effective magnetic field: $B=\nabla\times A=0$ [6]. The technique can be extended to generate nonuniform
vector potentials for cold atoms. To accomplish this Lin et al. propose to apply a nonhomogeneous real magnetic field [6], resulting in the distance-dependent detuning $\delta$. This provides the spatial
dependence of the wave vector $k_{min}(\delta)$ and leads to the nonzero effective magnetic field $B=\hbar\,\nabla k_{min}(\delta)\times\hat{k}_{min}$. In this way, it is a combination of the inhomogeneous real magnetic field and the
counter-propagating laser beams that results in the artificial magnetic field and the Lorentz force acting on the electrically neutral atoms. The suggested scheme [6] can serve as an alternative to
the previous proposals relying on the spatial variation of the laser beam profiles [15, 16, 17] or the optical lattices [11, 12, 13]. The future will show which of these approaches works better in
creating the artificial magnetism.
If this could be achieved, we would have within our grasp the experimental realization of gauge potentials for cold atoms that would allow us to properly simulate a range of fascinating
condensed-matter and high-energy physics. An important issue to be explored would be the creation of the non-Abelian gauge potentials for cold atoms [21, 22, 23, 24, 25, 26]. The non-Abelian effects
can appear if the atoms have degenerate dressed states and hence the atomic center-of-mass motion is described by a multicomponent wave function. The light-induced vector potential then becomes a
matrix, the Cartesian components of which do not necessarily commute. By choosing the proper light fields, one can simulate a number of intriguing phenomena, such as formation of non-Abelian magnetic
monopoles for cold atoms [22, 26].
1. I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008).
2. M. Greiner, O. Mandel, T. Esslinger, T. Hänsch, and I. Bloch, Nature 415, 39 (2002).
3. Z. Hadzibabic, P. Krüger, M. Cheneau, B. Battelier, and J. Dalibard, Nature 441, 1118 (2006).
4. M. Greiner, C. A. Regal, and D. S. Jin, Phys. Rev. Lett. 94, 110401 (2005).
5. H. A. Fertig, Physics 2, 15 (2009).
6. Y-J. Lin, R. L. Compton, A. R. Perry, W. D. Phillips, J. V. Porto, and I. B. Spielman, Phys. Rev. Lett. 102, 130401 (2009).
7. K. W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard, Phys. Rev. Lett. 84, 806 (2000).
8. J. R. Abo-Shaeer, C. Raman, J. M. Vogels, and W. Ketterle, Science 292, 476 (2001).
9. E. Hodby, G. Hechenblaikner, S. A. Hopkins, O. M. Marago, and C. J. Foot, Phys. Rev. Lett. 88, 010405 (2002).
10. R. Folman, P. Krüger, J. Schmiedmayer, J. Denschlag, and C. Henkel, Adv. At. Mol. Opt. Phys. 48, 263 (2002).
11. D. Jaksch and P. Zoller, New J. Phys. 5, 561 (2003).
12. E. J. Mueller, Phys. Rev. A 70, 041603 (2004).
13. A. S. Sørensen, E. Demler, and M. D. Lukin, Phys. Rev. Lett. 94, 086803 (2005).
14. R. Dum and M. Olshanii, Phys. Rev. Lett. 76, 1788 (1996).
15. G. Juzeliūnas and P. Öhberg, Phys. Rev. Lett. 93, 033602 (2004).
16. G. Juzeliūnas, J. Ruseckas, P. Öhberg, and M. Fleischhauer, Phys. Rev. A 73, 025602 (2006).
17. K. J. Günter, M. Cheneau, T. Yefsah, S. P. Rath, and J. Dalibard, Phys. Rev. A 79, 011604 (2009).
18. F. Wilczek and A. Shapere, Geometric Phases in Physics (World Scientific, Singapore, 1989).
19. A. Bohm, A. Mostafazadeh, H. Koizumi, Q. Niu, and J. Zwanziger, The Geometric Phase in Quantum Systems (Springer, New York, 2003).
20. M. Cheneau, S. P. Rath, T. Yefsah, K. J. Günter, G. Juzeliūnas, and J. Dalibard, Europhys. Lett. 83, 60001 (2008).
21. K. Osterloh, M. Baig, L. Santos, P. Zoller, and M. Lewenstein, Phys. Rev. Lett. 95, 010403 (2005).
22. J. Ruseckas, G. Juzeliūnas, P. Öhberg, and M. Fleischhauer, Phys. Rev. Lett. 95, 010404 (2005).
23. J. Y. Vaishnav and C. W. Clark, Phys. Rev. Lett. 100, 153002 (2008).
24. G. Juzeliūnas, J. Ruseckas, A. Jacob, L. Santos, and P. Öhberg, Phys. Rev. Lett. 100, 200405 (2008).
25. N. Goldman, A. Kubasiak, P. Gaspard, and M. Lewenstein, Phys. Rev. A 79, 023624 (2009).
26. V. Pietilä and M. Möttönen, Phys. Rev. Lett. 102, 080403 (2009).
What is the "strongest" non-local property of a ring/module that is true of all localizations at maximal ideals?
Given a commutative ring $A$ we say that a property P is local if
$A$ has P $\leftrightarrow$ $A_{p}$ has P for all prime ideals $p$ of $A$
It is usually the case that this requirement is equivalent to $A_{m}$ having P for all maximal ideals $m$ of $A$. I was wondering which (if any) are the strongest/most interesting local properties
$P$ of a commutative ring that do not satisfy the second equivalence. Similarly, I would like to know the strongest/most interesting non-local properties P that are true at all localizations at $p$.
That is to say, what are the most interesting properties P of $A$ such that:
(1) $A_{p}$ has P for all prime ideals $p$ of $A$ but P is NOT local
(2) P is local BUT it is NOT true that if $A_m$ has P for all maximal ideals $m$ of $A$ then $A$ has P.
EDIT: After comments and answers received have edited (and expanded) the question. Hope it is clear and unambiguous now.
2 I think the more natural and common definition is: $P$ is local if it can be tested locally with respect to the Zariski topology. This means that $A$ has $P$ iff there is a partition of unity
$f_1,...,f_n$ such that each $A_{f_i}$ has $P$. Anyway, your question is interesting. So you're asking for a property such that "$A_m$ has P for all maximal ideals $m$ iff $A$ has P", but it is
not true for prime ideals? – Martin Brandenburg Oct 23 '11 at 7:47
@Martin Yes, i.e. there is a prime ideal for which P doesn't hold at $A_p$ even though it holds for all maximal $m$ at $A_m$ – Chuck Oct 23 '11 at 14:50
A standard example is the ideal class group of a Dedekind domain $R$, which measures the failure of freeness for projective rank one modules over $R$. Locally the group is trivial. – Tommaso
Centeleghe Oct 23 '11 at 21:35
1 @Martin: this is a pretty widely used notion of a property being local. The idea is that if your property is open and local in this sense, then it is local in your sense. A lot of the properties
for which this interpretation of local is used is a singularity condition, such as being regular, Cohen-Macaulay, Gorenstein, $S_n$, etc. are inherently open, so the two interpretations of local
agrees for them. – Sándor Kovács Oct 23 '11 at 23:44
Dear Chuck, I think your formulation is slightly ambiguous. My interpretation is that you want a property that holds for the local rings of $A$ at maximal ideals, but does not hold for some (all?)
localizations at non-maximal primes. Since the other two answers interpret your question as finding a property that holds at all primes but is not true for the ring itself, it might be useful if you
merged your two requirements in grey boxes into a statement that we can all agree upon. Thanks in advance and sorry for giving you some work: your interesting question deserves a crystal-clear
formulation. – Georges Elencwajg Oct 24 '11 at 8:38
4 Answers
A simple example is obtained by taking $P$ to mean "has positive dimension".
Every local domain of positive dimension $(A,\mathfrak m)$ has $P$ at all maximal ideals (i.e. just at $\mathfrak m$!) since $A_{\mathfrak m}=A$, but $P$ fails at the generic point $\eta=(0)$ since $A_\eta=\mathrm{Frac}(A)$ has dimension zero, being a field.
Edit In order to address Chuck's comment, let me emphasize that the answer above is very easily adapted to non-local rings.
For example any finitely generated domain $A$ of positive dimension $d$ over a field has property $P$ when localized at a maximal ideal $\frak m$ but not at the zero ideal.
More precisely, $\dim A_{(0)}=0$ and $\dim A_{\frak m}=d$ for any maximal ideal ${\frak m}$: this equidimensionality result follows from Noether's normalization theorem.
This shows that if property $P_d$ is "has dimension $d$", then $P_d$ holds for $A_{\frak m}$ if ${\frak m}$ is maximal and does not hold for $A_{\frak p}$ if the prime $\frak p$ is not maximal.
Thanks for this Georges - I considered local rings too just to make sure that there is such a P as the one I am asking about. I would like to know about more interesting non-local P's
with the maximal property though - the question was about the most 'interesting/strongest (i.e. local-flavoured)' non-local P for which this holds. I actually can't think of any
(non-trivial ones) for a non-local ring, which is why I asked the question. – Chuck Oct 23 '11 at 17:11
3 Dear Chuck, I have added a class of non-local examples in an edit. Needless to say, however, I will never write an answer claiming that it contains "the most interesting/strongest "
property, example,... in any subject whatsoever. – Georges Elencwajg Oct 23 '11 at 18:38
That was stupid of me, apologies – Chuck Oct 24 '11 at 0:35
A more general version of this would be "having dimension at least $d$" for some positive $d$. – Sándor Kovács Oct 24 '11 at 2:38
Dear Chuck, no need to apologize: it is perfectly legitimate for you to require the most interesting and strongest answer! It's another matter with the answerer... – Georges Elencwajg
Oct 24 '11 at 8:00
On the class of noetherian rings, the property "having finite Krull dimension" holds for every local ring, hence is equivalent for $A_m$ at all $m$ maximal, or for $A_p$ at all $p$.
However the property is not local since there are noetherian rings of infinite Krull dimension (Nagata).
If you want the property to be defined over all commutative rings, just build noetherianity in by changing P into: "is non-noetherian or of finite Krull dimension". ;-)
As to the final such property, it is probably P = "being local". Indeed, it is a non-local property but it holds for all $A_m$, or equivalently for all $A_p$. At this stage I'm wondering whether I understood the question right. :)))
Grüezi, Paul ! – Georges Elencwajg Oct 24 '11 at 8:27
1 +1 P="being local" – Reimundo Heluani Oct 24 '11 at 10:43
3 No, "being local" is not nearly as strong as some other such properties, for example "being isomorphic to $\mathbb C$". – Tom Goodwillie Oct 24 '11 at 14:28
1 Right! I update strongest into "final". – Paul Balmer Oct 25 '11 at 0:11
Let P be the property of "being an integral domain". Then
1: If $A$ is an integral domain, then $A_p$ is an integral domain for every prime ideal $p\subseteq A$.
On the other hand.
2: Let $A=A_1\oplus A_2$ be a direct sum of two integral domains. Then it is obviously not an integral domain, although $A_p$ is an integral domain for every prime ideal $p\subseteq A$. So P is not local.
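To make the verification concrete in the simplest case (fields as the integral domains; a routine check spelled out here, not part of the original answer):

```latex
A = k_1 \oplus k_2, \qquad
\operatorname{Spec} A = \{\, \mathfrak p_1 = 0 \oplus k_2,\ \ \mathfrak p_2 = k_1 \oplus 0 \,\},
\qquad
A_{\mathfrak p_1} \cong k_1, \quad A_{\mathfrak p_2} \cong k_2 .
```

Every localization is a field (hence an integral domain), while $A$ itself is not a domain: $(1,0)\cdot(0,1)=(0,0)$.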
If I understand correctly (1), the following properties are not local
• being a PID (including fields): take a non principal Dedekind domain;
• being Noetherian (consider an infinite product of $\mathbb F_2$);
• being Artinian (same example as above).
An equation of the line containing the given point and perpendicular to the line
May 12th 2011, 09:28 AM
An equation of the line containing the given point and perpendicular to the line
I need help with the steps and solving for an equation of the line containing the given point and perpendicular to the line:
(5, -2); 9x + 8y =3
I can find the slope (-8/9x?)... I just can't figure out the rest(Headbang)
May 12th 2011, 09:33 AM
The gradient of the normal to the line will be the negative reciprocal of the original gradient. That is, if your gradient is $m$, the gradient of the normal will be $\frac{-1}{m}$. Then, once
you've found the gradient, substitute it, along with your point, into $y-y_1=m(x-x_1)$ or $y=mx+c$ to find the equation.
May 12th 2011, 09:48 AM
Hi jay1,
The slope of $9x+8y=3$ is not $-\frac{8}{9}x$
Solve for y to put the equation in slope-intercept form ($y=mx+b$ where $m$ is the slope): $9x+8y=3 \;\Rightarrow\; y=-\frac{9}{8}x+\frac{3}{8}$
Now do you know what the slope is?
May 12th 2011, 11:18 AM
Here is a useful trick.
Suppose that $A\ne 0~\&~B\ne 0$; then the lines $Ax+By=C$ and $Bx-Ay=D$
are perpendicular.
So for your problem just use your point in $8x-9y=k$ to find the value of k.
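As a sketch of how the pieces above fit together (exact rational arithmetic via Python's Fraction; the numbers are the ones from this thread):

```python
from fractions import Fraction

# Line 9x + 8y = 3 has slope -9/8, so a perpendicular line has slope 8/9.
m_perp = Fraction(8, 9)
x0, y0 = Fraction(5), Fraction(-2)

# Plato's trick: the perpendicular family is 8x - 9y = k; plug in the point.
k = 8 * x0 - 9 * y0
print(f"8x - 9y = {k}")                      # 8x - 9y = 58

# Cross-check against the point-slope construction y - y0 = m(x - x0):
for xv in [Fraction(0), Fraction(5), Fraction(14)]:
    yv = m_perp * (xv - x0) + y0
    assert 8 * xv - 9 * yv == k              # both constructions agree
```

Both routes give the same line, $8x - 9y = 58$, through $(5,-2)$ and perpendicular to $9x+8y=3$.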
May 12th 2011, 12:18 PM
You've got a point and the slope; go find out:
Straight-Line Equations: Point-Slope Form
General Relativity
General relativity, or GR, is Einstein's general theory of relativity (or the general theory of relativity, gtr), the single work on which his status as an outstanding genius rests (of course, there
are plenty of other works which make Einstein a remarkable physicist, but GR stands head and shoulders above the rest).
It is called ‘general’ because it is intended to apply generally, to everything in the universe, at all times, and in all circumstances (compared with the special theory of relativity – or special
relativity (SR) – which applies only when and where gravity is not important).
General relativity rests on an astonishingly simple assumption (or postulate), which can be written as “everybody, everywhere in the universe and at every time, will find the laws of nature to be
exactly the same“. Well, actually there’s a bit more, which John Wheeler summed up best: “spacetime tells matter how to move; matter tells spacetime how to curve.”
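Wheeler's slogan is the verbal form of the Einstein field equations, which equate a measure of spacetime curvature (left-hand side) with the energy and momentum content of matter (right-hand side). In standard notation (added here for reference, not part of the original article):

```latex
G_{\mu\nu} \;=\; R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\,T_{\mu\nu}
```

Here $g_{\mu\nu}$ is the metric, $R_{\mu\nu}$ and $R$ are the Ricci tensor and scalar built from it, and $T_{\mu\nu}$ is the stress-energy tensor of matter.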
As a theory of gravity, general relativity takes over from Newton’s universal law of gravitation … not that Newton was necessarily wrong, but that GR provides a more accurate way to describe gravity
(in the sense that observations and experiments are consistent with GR, but not – in their entirety – with Newton’s law). Because general relativity goes beyond Newtonian gravitation, and because
Newton’s theory works very well (you don’t need GR to navigate spaceprobes around the solar system, for example), testing GR is not easy. Nevertheless, over the nine decades since GR was published,
thousands of such tests have been done … and general relativity passed every one! Clifford Will keeps a thorough list of all of these, in the Living Reviews of Relativity The Confrontation Between
General Relativity and Experiment.
Further reading: General Relativity (University of Illinois); for an introduction to the basic equation John Baez’ The General Relativity Tutorial; and for GR and cosmology, Ned Wright’s Relativity
Tutorial (UCLA) should get you started. There’s lots more on the internet of course, but do be careful (sadly, there’s lots of cranky and crackpot material besides the good).
Given the universal scope of general relativity, and given that GR is the basis of modern cosmology (and much more besides), it is the central topic of a great many Universe Today stories; here are a
few: Cassini Confirms General Relativity, Flyby Anomalies Explained?, and LISA Will Watch Snacking Black Holes.
Einstein’s Theory of General Relativity, an Astronomy Cast episode, covers this topic well.
Menlo Park Algebra 2 Tutor
Find a Menlo Park Algebra 2 Tutor
...I have worked with students who have weak math skills and are currently struggling to keep up, students who are doing well and want to do advanced work, as well as students who fall somewhere
in between. I have tutored in all junior high and high school math subject areas. I am comfortable with and have ample experience tutoring students of all ages.
5 Subjects: including algebra 2, geometry, algebra 1, prealgebra
...Calculus can be really easy and fun to learn (and use in real world), if you approach it right, otherwise you may end up not liking it. Geometry is really fun to learn, if you approach it
right, otherwise you may find yourself liking algebra better; I've been told more often than not that I've i...
9 Subjects: including algebra 2, calculus, physics, geometry
...I have extensive problem-solving experience including winning 1st place at the ICTM State Math Contest in 2007 and placing in the top 500 in the national Putnam competition. My tutoring methods
vary student-by-student, but I specialize in breaking down problems and asking questions to guide the ...
17 Subjects: including algebra 2, chemistry, statistics, calculus
I am a Mathematics and Statistics graduate from UC Berkeley. I have more than 5 years experience in private tutoring. I started my tutoring from De Anza College, where I tutored accounting in
their tutorial center.
13 Subjects: including algebra 2, geometry, statistics, trigonometry
...I have also run an after school program for grade school students, taught a current events class for inner city high school students, and been a catechism teacher for 8 years. I taught six high
school science classes for a summer in Cebu City, Philippines, and ran a weekly "Entertaining Science"...
27 Subjects: including algebra 2, reading, Spanish, English
The Frequency Response Of A System Is Given By ... | Chegg.com
Image text transcribed for accessibility: The frequency response of a system is given by $H(j\omega) = \frac{10}{j\omega + 10}$. Give the output for $x(t) = 2 + 2\cos(50t + \pi/2) - 3\sin(80t)$. Plot using Matlab
(or sketch) the magnitude response, $|H(j\omega)|$. Show on the graph how to obtain your answer for part (a). What is the frequency that the filter has a gain ?
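One way to work the first part numerically (a sketch, not a posted solution; it uses the standard sinusoidal steady-state rule that each input component at frequency $\omega$ is scaled by $|H(j\omega)|$ and phase-shifted by $\arg H(j\omega)$):

```python
import cmath
import math

def H(w):
    """Frequency response H(jw) = 10 / (jw + 10) from the problem statement."""
    return 10 / (1j * w + 10)

# Input: x(t) = 2 + 2cos(50t + pi/2) - 3sin(80t); evaluate H at each frequency.
for w, label in [(0, "DC term"), (50, "cos component"), (80, "sin component")]:
    gain, phase = abs(H(w)), cmath.phase(H(w))
    print(f"{label:13s} w={w:3d}  gain={gain:.4f}  phase={math.degrees(phase):+7.2f} deg")

# Putting the pieces together, the steady-state output is approximately
#   y(t) = 2 + 0.392*cos(50t + pi/2 - 1.373) - 0.372*sin(80t - 1.446)
# (phase shifts in radians).  If the truncated last part asks for the -3 dB
# point (an assumption), note |H(j10)| = 1/sqrt(2), i.e. w = 10 rad/s.
```

The filter is a first-order lowpass, so the DC term passes unchanged while the 50 and 80 rad/s components are strongly attenuated and delayed.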
Electrical Engineering
Triangle PQR
Ok. Let's say P is (a,b) Q is (c,d) and R is (e,f)
a,b,c,d,e and f are all rational ie they are fractions
Midpoint PQ is $\left(\frac{a+c}{2},\ \frac{b+d}{2}\right)$
and the centroid is $\left(\frac{a+c+e}{3},\ \frac{b+d+f}{3}\right)$
The rationals are closed under +, -, × and ÷,
i.e. adding two fractions, subtracting one fraction from another, multiplying two fractions or dividing one by another will always give another fraction.
Therefore both the midpoint and the centroid have rational coordinates.
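The closure argument can be checked mechanically with exact rational arithmetic (the sample coordinates are invented for the demonstration):

```python
from fractions import Fraction

# Sample rational coordinates for P, Q, R (values invented for the demo)
P = (Fraction(1, 3), Fraction(2, 5))
Q = (Fraction(-7, 4), Fraction(1, 2))
R = (Fraction(9, 8), Fraction(-3, 7))

midpoint_PQ = tuple((p + q) / 2 for p, q in zip(P, Q))
centroid = tuple((p + q + r) / 3 for p, q, r in zip(P, Q, R))

print("midpoint:", midpoint_PQ)   # (Fraction(-17, 24), Fraction(9, 20))
print("centroid:", centroid)      # (Fraction(-7, 72), Fraction(11, 70))

# Closure of the rationals under +, -, x, / means no non-rationals appear:
assert all(isinstance(c, Fraction) for c in midpoint_PQ + centroid)
```

Because only sums and divisions by integers occur, the results stay exact Fractions, which is precisely the closure property used in the argument.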
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
How do I find a random point on the hypotenuse?
June 20th 2007, 01:45 PM #1
How do I find a random point on the hypotenuse?
I'm trudging through 5-6 years worth of forgotten math lessons and can't remember how to do this. I have a right triangle, the two sides both are equal to 30 so the angles are a clean 45 degrees
a piece. I need to know how to determine the x,y coords of a random point on the hypotenuse. I have a diagram but forgive its crudeness, it's just an example:
Those red dots are what I need to figure out. Ignore their position, since it's just an example. If I pointed to a random spot on that line, what math do I need to figure out the coords?
Hello, worldspawn!
The points are on the line: . $x + y \:=\:30\quad\Rightarrow\quad y \:=\:30- x$
So given an $x$-value, say, $x = 7$, then: . $y \:=\:30 - 7 \:=\:23$
Therefore, the point is: . $(7,\,23)$
I'm trudging through 5-6 years worth of forgotten math lessons and can't remember how to do this. I have a right triangle, the two sides both are equal to 30 so the angles are a clean 45 degrees
a piece. I need to know how to determine the x,y coords of a random point on the hypotenuse. I have a diagram but forgive its crudeness, it's just an example:
Those red dots are what I need to figure out. Ignore their position, since it's just an example. If I pointed to a random spot on that line, what math do I need to figure out the coords?
You need to use Deep Blue.
Selecting any point on that line segment is a uniform distribution. We can model it this way: select a number uniformly in the interval [0,30]. Call it a, then the corresponding point on the line
segment is (a,30-a).
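That model translates directly into code (a minimal sketch; `random.uniform` plays the role of the uniform selection described above):

```python
import random

def random_point_on_hypotenuse(size=30):
    """Uniform random point on the hypotenuse x + y = size of the right
    triangle with both legs of length `size` along the axes."""
    a = random.uniform(0, size)
    return (a, size - a)

x, y = random_point_on_hypotenuse()
print((x, y))          # the two coordinates always sum to 30
```

Any returned point satisfies $x + y = 30$, so it lies exactly on the hypotenuse described in the thread.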
Please tell me how to make a highscore list?
April 14th, 2008, 12:56 PM #1
Please tell me how to make a highscore list?
Well, i need to make a highscore list, but i need to make it using textfiles! but since i am very new at programming i cant figure out how to sort the the list...i want to sort the list within
flash, but how do i make it so that the names and the scores get sorted together? currently i am thinking of somehow using sortOn on the array generated using "split"....but since the information
resides on a external text file, therefore this method might be very slow in practice, and especially when my target viewers are dial-up users. So what do you guys suggest?
MY BLOG
I need a SPRITER who can do pixel arts for an arcade fighter project. If you can help out, please pm me.I also need someone who can write simple xml files.
Well, i need to make a highscore list, but i need to make it using textfiles! but since i am very new at programming i cant figure out how to sort the the list...i want to sort the list within
flash, but how do i make it so that the names and the scores get sorted together? currently i am thinking of somehow using sortOn on the array generated using "split"....but since the information
resides on a external text file, therefore this method might be very slow in practice, and especially when my target viewers are dial-up users. So what do you guys suggest?
What has dial up to do with the method of sorting the array?
What is the lay out of the textfile?
You don't need to return all highscores to flash, only the amount you want to list, usually around 5 to 10, as well as the user's.
Pass the name/score pair to your backend script, have it update the text file (if using PHP, the split, search and replace functions are similar to flash's), and return the above amount of data
in a string of "name,score,name,score,name,score" etc. You can then do something like this example simulation to parse the returned data and sort it for display:
ActionScript code:
// create some textfields
createTextField('playerText', getNextHighestDepth(), 100, 100, 100, 20);
playerText.text = 'Enter Player Name';
createTextField('player', getNextHighestDepth(), 100, 140, 100, 20);
player.border = true;
player.type = 'input';
createTextField('scoreText', getNextHighestDepth(), 100, 200, 100, 20);
scoreText.text = 'Enter Sample Score';
createTextField('score', getNextHighestDepth(), 100, 240, 100, 20);
score.border = true;
score.type = 'input';
createEmptyMovieClip('activator', getNextHighestDepth());
activator._x = 150; activator._y = 300;
activator.createTextField('activatorButton', activator.getNextHighestDepth(), 0, 0, 0, 0);
with(activator.activatorButton){ border = true; selectable = false; autoSize = 'center'; text = 'Press to Simulate'; }
createTextField('highScoreList', getNextHighestDepth(), 300, 50, 300, 450);
with(highScoreList){ border = wordWrap = multiline = true; }
// this string is a simulation of the string you will be receiving after sending your name/score to the server
updatedList = 'Adam,'+Math.round(Math.random() * 10000)
    +',Barry,'+Math.round(Math.random() * 10000)
    +',Charles,'+Math.round(Math.random() * 10000)
    +',Darren,'+Math.round(Math.random() * 10000)
    +',Ernest,'+Math.round(Math.random() * 10000)
    +',Frank,'+Math.round(Math.random() * 10000)
    +',George,'+Math.round(Math.random() * 10000)
    +',Arianna,'+Math.round(Math.random() * 10000)
    +',Brittany,'+Math.round(Math.random() * 10000)
    +',Christine,'+Math.round(Math.random() * 10000)
    +',Dorothy,'+Math.round(Math.random() * 10000)
    +',Elizabeth,'+Math.round(Math.random() * 10000)
    +',Francine,'+Math.round(Math.random() * 10000)
    +',Gretchen,'+Math.round(Math.random() * 10000);
// this would be in an onLoad or onData type of function
activator.onPress = function(){
    // split it up at the commas
    splitList = updatedList.split(',');
    // seed your score list with the name/score pairs as properties of an object in an array
    scoreList = new Array();
    for(var l = 0; l < splitList.length; l += 2){
        scoreList.push( { name: splitList[l], score: Number(splitList[l + 1]) } );
    }
    // this next line is just for this simulation, it will be handled by the backend script in your app
    scoreList.push( { name: player.text, score: Number(score.text) } );
    // sort by numeric scores in descending order
    scoreList.sortOn('score', Array.NUMERIC | Array.DESCENDING);
    // display the highscore list (clear it first so repeated presses don't append)
    highScoreList.text = '';
    for(var i = 0; i < scoreList.length; i++){
        highScoreList.text += scoreList[i].name + ' ' + scoreList[i].score + '\n';
    }
};
Last edited by Jerryscript; April 14th, 2008 at 06:04 PM.
Thanks Jerryscript, this is kind of like what I did. My text file is like:
var_text=john,26 sam,24 dead,18 last,19
I, similar to you, split the string first at " " and then at ",", giving the required array, and then used the same technique as yours. But the problem is, though I can get the highscore, it is showing up a bit late, and I don't want that time lag.
But anyway, I really learnt a lot from your code. Thanks for the help again, JerryScript.
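For reference, the two-level split described above can be sketched in JavaScript, whose string/array syntax is nearly identical to AS2 (the variable names here are illustrative):

```javascript
// Simulated contents of the text file, after stripping the "var_text=" prefix
var varText = "john,26 sam,24 dead,18 last,19";

// First split on spaces to get one "name,score" entry each,
// then split each entry on the comma.
var pairs = varText.split(" ");
var scores = [];
for (var i = 0; i < pairs.length; i++) {
    var parts = pairs[i].split(",");
    // Convert with Number() so any later sort is numeric, not lexical
    scores.push({ name: parts[0], score: Number(parts[1]) });
}

// Sort descending by numeric score
scores.sort(function (a, b) { return b.score - a.score; });

console.log(scores[0].name + " " + scores[0].score); // "john 26"
```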
EDIT: Now I have a problem.
p0.text = "Game"; works, but
this["p"+i].text = "Game"; doesn't work. Anybody know why?
MY BLOG
I need a SPRITER who can do pixel arts for an arcade fighter project. If you can help out, please pm me.I also need someone who can write simple xml files.
I'm not sure what the problem is with your code without seeing more of it. Maybe a scope issue since you're using "this".
The lag issue is something you have to learn to live with, but hopefully it's not more than 150ms or so. Just toss in some sort of fancy particle effect or other animation that your highscore
list emerges from, and by the time the effect is done, enough time should have elapsed for your list to be loaded, parsed, sorted, and rendered.
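For what it's worth, bracket access itself is equivalent to dot access; the usual culprit is what `this` refers to inside a handler. A JavaScript sketch with hypothetical names:

```javascript
// Two text-field-like objects on a stand-in "stage" object
var stage = { p0: { text: "" }, p1: { text: "" } };

// Dot access and bracket access address the same property
stage.p0.text = "Game";
var i = 0;
stage["p" + i].text = "Game";   // same field as stage.p0.text

// Inside a callback, "this" is whatever the caller binds it to --
// if it isn't the object holding p0, p1, ..., the lookup fails.
function label(n) { return this["p" + n].text; }
console.log(label.call(stage, 0)); // "Game" -- correct scope
try {
    label.call({}, 0);              // no "p0" on this object...
} catch (e) {
    console.log("wrong scope");     // ...so the lookup blows up
}
```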
Well, it is fixed now, and I have a new problem. The texts which are rendered do not have the formatting of the dynamic textbox. By formatting I mean the font and "bold": when the text comes up it is not bold and is in Times New Roman. Any ideas why?
Did you embed the font?
Nope, because as soon as I embed the font, for some weird reason, the texts disappear.
AS2 - check the textFormat and newTextFormat
AS3 - check the defaultTextFormat
Hmm, JerryScript, does that mean I have to set the formats for all the fields in AS?
Anyway, I have a BIG PROBLEM. I am using split_array.sortOn('plscore', Array.DESCENDING | Array.NUMERIC). The array was 99,88,69,7,8, and it is coming out as 99,8,88,7,69.
Why is this?
It looks like it's sorting them as strings, not as numbers.
I'm pretty sure the problem is that you are not seeding your plscores as numbers. They do not become numbers automatically, you have to convert them using Number() or int() or uint().
In the code I posted above, it sorts your list of numbers fine. However, if you leave out the Number() statement when creating the array of objects, it gives the same results as you showed, because the string is never converted to a number.
ActionScript code:
// note the "score: Number(score.text)" statement
scoreList.push( { name: player.text, score: Number(score.text) } );
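The difference is easy to reproduce. In JavaScript (whose default sort, like AS2's sortOn without Array.NUMERIC, compares values as strings), the same interleaving of magnitudes appears:

```javascript
var scores = ["99", "88", "69", "7", "8"];

// Default sort is lexicographic: "8" sorts above "69" because "8" > "6"
var asStrings = scores.slice().sort().reverse();
console.log(asStrings.join(","));   // "99,88,8,7,69"

// Converting to numbers (Number(), parseInt(), ...) restores numeric order
var asNumbers = scores.map(Number).sort(function (a, b) { return b - a; });
console.log(asNumbers.join(","));   // "99,88,69,8,7"
```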
Actually, I managed to solve it using parseInt(). But thanks for your replies.
Anyway, this might sound a bit off-topic, but recently I saw a Firefox extension that lets you modify the outgoing Flash data, like the score, resulting in a possible hack. For those who know about this, is there any solution against it?
Try some form of encryption and code obfuscation. Nothing is 100% safe, but that should stop most.
Hmm, but since I don't know PHP well, I can't really decrypt it properly on the server side. Ah well, I guess I will have to learn PHP now.
Design optimization of spasers considering the degeneracy of excited plasmon modes
Optics Express, Vol. 21, Issue 13, pp. 15335-15349 (2013)
We model the spaser as an n-level quantum system and study a spasing geometry comprising a metal nanosphere resonantly coupled to a semiconductor quantum dot (QD). The localized surface plasmons are assumed to be generated at the nanosphere due to the energy relaxation of the optically excited electron-hole pairs inside the QD. We analyze the total system, formed by hybridizing the spaser's electronic and plasmonic subsystems, using the density matrix formalism, and then derive an analytic expression for the plasmon excitation rate. Here, the QD with three nondegenerate states interacts with a single plasmon mode of arbitrary degeneracy with respect to angular momentum projection. The derived expression is analyzed in order to optimize the performance of a spaser operating at the triply degenerate dipole mode by appropriately choosing the geometric parameters of the spaser. Our method is applicable to different resonator geometries and may prove useful in the design of QD-powered spasers.
© 2013 OSA
1. Introduction
The emerging era of nanoplasmonics is expected to improve the speed and efficiency of optical devices by allowing miniaturization beyond the diffraction limit using surface plasmons in circuits [1-3]. Despite their advantages in miniaturization, surface plasmons, which are excited at metal-dielectric interfaces, are highly dissipative and vanish within a few wavelengths of propagation [1]. Therefore, energy must be transferred from an external source to the surface plasmon wave to sustain its existence in nanoplasmonic circuits [1]. The spaser, which is the nanoplasmonic counterpart of a conventional laser, is the prime device for generating these surface plasmon waves and amplifying them during propagation [5,6].
Bergman and Stockman proposed the theory of the spaser [5] by claiming that a nanosystem with an excited active medium can transfer energy nonradiatively to a closely located plasmonic resonator and excite localized fields inside it. This energy transfer takes place due to the interaction between the resonator and the active medium through the near field. It was stated that electronic transitions in the active medium are stimulated by the surface plasmons already present in the system, resulting in a multiplication of the plasmon population, closely resembling the coherent feedback strategy used in conventional laser cavities [5,7].
Since its theoretical formulation, many experimental efforts have been carried out to fabricate a spaser. Seidel et al. [8] proved the possibility of the stimulated emission of surface plasmons by amplifying surface plasmons at the interface between a flat continuous silver film and a liquid containing organic dye molecules. The first demonstration of a spaser was done by Noginov et al. [9] using a gold nanosphere with a 7 nm radius, surrounded by the active medium, a 15 nm thick silica shell containing dye molecules. When the dye is optically pumped, it releases energy to the gold nanosphere, causing excitation of localized surface plasmons. Another realization of a spaser was reported by Flynn et al. [10], who sandwiched a gold-film plasmonic waveguide between optically pumped InGaAs quantum wells.
Several other structures have also been proposed for spasers, including a V-shaped metallic nanoparticle attached to quantum dots (QDs) [5], a two-dimensional array of split-ring plasmonic resonators supported by a substrate acting as the active medium [11], a bowtie-shaped metallic structure in which QDs are placed in the bowtie gap and multiple quantum wells are located in the substrate [12], and a metal groove with QDs placed at its bottom [13]. Even though most of these spaser designs employ QDs in the active medium, it is possible to have optically pumped rare-earth ions, dyes, or bulk electrical injection as the excitable gain element [1]. QDs are widely used in many lasing setups. As the charge carriers in a QD are confined in all three dimensions to a very small size, its density of states almost resembles a delta function, mimicking an atomistic behavior with a well-defined spectral response. QDs also promise better stability over temperature variations [14-17].
Although ample research has been done on spasers, the results are seen to depend on the model. Furthermore, most of these studies focus heavily on the role of the active medium. In this paper, instead of considering only the states of the active medium, we analyze the electronic and plasmonic states of the whole spaser as a single quantum system. Degeneracy of the plasmon modes is also considered. This is a more general quantum-mechanical description of the spaser than those available in the literature. We rigorously describe our model using a simple spaser geometry comprising a spherical metal nanoparticle and a QD. A major importance of the spaser is its ability to be used effectively in nanoplasmonic circuits to generate surface plasmon waves. This paper studies the capability of a spaser to excite these surface plasmons and identifies all the parameters affecting the excitation rate. We also consider dissipation to the environment and pay attention to all the material properties influencing spaser operation. The present work also provides clear design guidelines for a spherical spaser. These guidelines allow one to select its operating wavelength, possible values for the size parameters of the spaser components, and the optimum QD placement.
Among the quantum mechanical models of the spaser, the work of Stockman [5,18,19] and Protsenko et al. [20] is noteworthy. However, they analyze only the quantum states of the active medium, treating it as a two-level system. We improve this model by considering the whole spaser as a single n-level quantum system with a three-level active medium. Stockman considers a third level [18] but assumes that its population is negligible and that electron-hole pairs rapidly relax to the second level. In our model, there is a finite third-level population, and electron-hole pairs relax to the second level by interacting with the environment at a defined rate. With a three-level system, a designer has more control in choosing a suitable pump frequency such that it does not overlap with any surface plasmon resonance frequency of the resonator. In addition, we consider dissipation through the generalized master equation in the form of interaction with the system's bath. Here, all the major dissipation rates at each level are well represented. Since our intention is to find all the parameters affecting the plasmon excitation rate, it is required to consider all the forms of dissipation.
In our analysis, we first find the electric field and the eigenfrequencies of the localized surface plasmon modes in the resonator, then the characteristics of the active medium, followed by the derivation of the spaser Hamiltonian, which is needed to analyze the spaser kinetics using density matrix theory. We then obtain an expression for the plasmon excitation rate of the spaser and study how the geometric parameters affect the operation of a dipole spaser.
2. Spaser model
A spaser consists of a plasmonic resonator (or cavity), which supports surface plasmon modes, and an active medium, which amplifies the surface plasmons [5]. Figure 1 shows a schematic diagram of our spaser model, in which the resonator is a metal nanosphere surrounded by a dielectric shell, and a QD is embedded in the shell as the active medium. Spasing occurs as a consequence of the nonradiative energy transfer from the QD to the nanosphere, exciting localized surface plasmon modes inside it.
The spherical core-shell structure is one of the most studied geometries for surface plasmon resonance [32-34], because it is one of the few geometries for which Maxwell's equations can be solved analytically [35]. We suppose that the inner radius of the metal nanosphere is smaller than the skin depth of the metal [37], which is usually around 25 nm for noble metals. This assumption is necessary to allow the localized surface plasmon modes to penetrate everywhere within the nanosphere and excite coherent electron cloud oscillations [18,38]. Hence, the nanosphere radius is very small compared to the wavelength of the incident light. Further, to avoid the effects of Landau damping, we assume that the nanosphere radius is greater than the ratio of the Fermi velocity to the surface plasmon frequency, which is about 1 nm for noble metals [38]. The outer radius of the shell must be chosen such that the shell thickness is large enough for the QD to be entirely embedded within the dielectric (i.e., shell thickness > QD diameter).
When the nanosphere radius and the shell thickness are fixed, the resonator supports a series of plasmon modes (dipole, quadrupole, and so on) with unique energies [32,33]. These plasmon modes may overlap with the QD to different degrees, and hence experience different amounts of gain. The plasmon mode receiving the highest gain survives and becomes the dominant spaser mode [9,38]. The gain to the spasing mode is received as a result of the electronic transitions in the QD causing recombination of an electron-hole pair. To ensure continuous spasing, we facilitate population inversion of the two energy levels pertaining to these transitions using a suitable pump source. Here we opt for optical pumping for the sake of design simplicity. Even though the QD is the target of the pumping field, the field may also excite the resonator in the vicinity [39]. To minimize this extraneous effect, we intentionally select the optical pumping frequency with considerable detuning from the surface plasmon resonances of the resonator. Therefore, the excited field is much weaker than the spaser mode, which overlaps with the QD emission lines. However, we have not made any restrictive assumption here, because one may completely avoid this situation by adopting a different pumping mechanism, such as direct electrical injection into the QD, as suggested in [40].
2.1. Localized surface plasmon modes of the resonator
To find the electric field and eigenfrequencies of the localized surface plasmon modes, we assume that the QD's presence does not perturb the electric field in the system, because the QD is very small compared to the nanosphere and the resultant permittivity change of the shell is therefore insignificant. We adopt a standard spherical coordinate system $(r, \theta, \varphi)$ whose origin coincides with the center of the spherical shell (see Fig. 1). The supported electromagnetic modes can be found by solving the vector Helmholtz equation for the spherical shell following the method of Debye potentials [35]. The solution gives a series of localized surface plasmon modes denoted by the angular momentum number $l$ and its projection $m$, each with a well-defined angular frequency. The stationary electric field of a mode, Eq. (1), is built from spherical Bessel functions (one for each of the three regions: metal, dielectric, and ambient), associated Legendre polynomials, the wavenumber, and a set of constants assuring the field's continuity at the boundaries.
Each plasmon mode expressed by Eq. (1) possesses a unique energy determined by its eigenfrequency, which can be found by solving the dispersion relation of the resonator. To obtain the dispersion relation, we first write a set of equations, Eq. (2), expressing the field constants by equating the tangential components of the field at the metal-dielectric and dielectric-ambient interfaces. This results in a homogeneous system of linear equations, to which we apply the condition for a nontrivial solution, obtaining the dispersion relation, Eq. (3), for the $l$-th mode surface plasmon resonance. The dispersion relation is written in terms of Riccati-Bessel functions of the form $\hat{j}_l(x) = x\,j_l(x)$ and $\hat{y}_l(x) = -x\,y_l(x)$, together with the wavenumbers of the electromagnetic field in the metal, dielectric, and ambient media. According to Eq. (3), the energy of the plasmon mode is a function of the inner and outer radii of the shell, which implies that only the size parameters of the resonator determine the energy of the spaser mode once the spaser materials are chosen. Furthermore, this nontrivial solution of the boundary conditions allows us to express the field coefficients in terms of a single remaining constant.
To find an expression for this remaining constant, which serves as a normalization constant, we follow the procedure of secondary quantization for dispersive media [41] and equate the total energy of the electric field of the plasmon mode, integrated over three-dimensional Euclidean space, to the energy of a single plasmon. Owing to this equality, the normalization constant implicitly depends on the spaser shell parameters.
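For reference, in secondary quantization for dispersive media this normalization condition typically takes the following form (a sketch in Gaussian units; the prefactor and symbols follow common convention rather than the exact notation of [41], so treat them as illustrative):

```latex
\frac{1}{8\pi}\int_{\mathbb{R}^3}
\frac{\partial\left[\omega\,\varepsilon(\mathbf{r},\omega)\right]}{\partial\omega}
\bigg|_{\omega=\omega_l}
\left|\mathbf{E}_{lm}(\mathbf{r})\right|^{2}\,\mathrm{d}^{3}r
= \hbar\omega_l ,
```

i.e., the Brillouin expression for the field energy in a dispersive medium, evaluated for one mode, is set equal to the energy quantum of that mode, which pins down the amplitude of the normalization constant.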
2.2. Active medium
Characteristics of the active medium determine the strength of its electron-hole pairs' interaction with the plasmon modes in the resonator and the amount of amplification that each plasmon mode receives. The characteristics of the QD include its wavefunction and the energy levels of the electron-hole pairs. Electronic transitions between these levels fuel spasing, and we assume that, as in the case of lasers, the transitions representing recombination of electron-hole pairs excite the plasmon modes in the resonator. Such an excitable QD can normally be described effectively using three energy levels, to account for pumping and stimulated transitions. We denote the ground level of the QD by the state vector $|0\rangle$ and the two excited levels by $|1\rangle$ and $|2\rangle$. To assign energies to these three levels, we have to take quantum confinement effects into account. To do this, we approximate the QD by a sphere of radius $R_q$ confined by an infinite potential barrier (i.e., the potential vanishes inside the sphere and is infinite otherwise). The resulting time-independent Schrödinger equation is separable in spherical coordinates and possesses a solution similar to that of the hydrogen atom, given by [43]
$$\psi_{n_q,l_q,m_q}(r,\theta,\varphi) \propto j_{l_q}\!\left(\xi_{n_q,l_q}\,\frac{r}{R_q}\right) Y_{l_q,m_q}(\theta,\varphi),$$
where $n_q$, $l_q$, and $m_q$ are the principal, azimuthal, and magnetic quantum numbers describing the states, $\xi_{n_q,l_q}$ is the $n_q$-th root of the spherical Bessel function of the first kind [i.e., $j_{l_q}(\xi_{n_q,l_q}) = 0$], and $Y_{l_q,m_q}$ are spherical harmonics.
The corresponding eigenvalue of the wavefunction $\psi_{n_q,l_q,m_q}$ gives the excitable energy levels of an electron-hole pair:
$$\mathcal{E}_{n_q,l_q} = \mathcal{E}_g + \frac{\hbar^2}{2\mu R_q^2}\,\xi_{n_q,l_q}^2,$$
where $\mu = m_e^* m_h^*/(m_e^* + m_h^*)$ is the reduced mass of an electron-hole pair, $m_e^*$ and $m_h^*$ are the effective masses of an electron and a hole, and $\mathcal{E}_g$ is the bandgap of the QD material. These energy levels, denoted by $\mathcal{E}_{n_q,l_q}$, are $2l_q + 1$ times degenerate. For a transition from an initial state $|s_i\rangle$ with quantum numbers $s_i = (n_i, l_i, m_i)$ to a final state $|s_f\rangle$ with quantum numbers $s_f = (n_f, l_f, m_f)$, the energy absorbed from the system is $\Delta\mathcal{E}_{s_f,s_i} = \mathcal{E}_{n_f,l_f} - \mathcal{E}_{n_i,l_i}$. Energy is released when this quantity is negative.
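As a numerical sanity check, the confinement formula above can be evaluated directly. The sketch below uses illustrative material parameters: a reduced mass of 0.1 electron masses and a 5 nm radius are assumptions, not values from the paper:

```javascript
// Physical constants (SI units)
var HBAR = 1.054571817e-34;   // J s
var M0   = 9.1093837015e-31;  // electron mass, kg
var EV   = 1.602176634e-19;   // J per eV

// Illustrative (assumed) QD parameters
var mu = 0.1 * M0;            // reduced electron-hole mass
var Rq = 5e-9;                // QD radius, 5 nm

// Roots of the spherical Bessel functions j0 and j1:
// xi(1,0) = pi exactly; xi(1,1) is the first zero of j1
var XI_10 = Math.PI;
var XI_11 = 4.493409457909064;

// Confinement shift above the bandgap: (hbar^2 / 2 mu Rq^2) * xi^2
function shift_eV(xi) {
    return (HBAR * HBAR * xi * xi) / (2 * mu * Rq * Rq) / EV;
}

var e10 = shift_eV(XI_10);    // level (n_q, l_q) = (1, 0)
var e11 = shift_eV(XI_11);    // level (n_q, l_q) = (1, 1)
console.log(e10.toFixed(3), e11.toFixed(3), (e11 - e10).toFixed(3));
// ~0.150, 0.308, 0.157 eV for these assumed numbers
```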
With this knowledge, we map the three QD energy levels. For $|1\rangle$ and $|2\rangle$, we select the two lowest energy levels [i.e., $(n_q, l_q) = (1, 0)$ and $(1, 1)$], assuming that the probability of populating the higher energy levels is very small, and $|0\rangle$ is mapped to the ground level. This mapping also enables us to calculate the QD radius required for an efficient energy transfer to the resonator, where the energy received by the spaser mode matches the energy released by the QD, giving the resonance QD radius in Eq. (6). Although achieving a perfectly matched resonance may be difficult in practice, a value close to the quantity in Eq. (6) will be adequate for spasing. This analysis of the QD reveals how its geometric parameters affect the spaser operation.
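The resonance radius follows from inverting the confinement formula: setting the released transition energy equal to the spaser-mode energy and solving for the radius. A sketch, in which the 0.157 eV target energy and the 0.1 electron-mass reduced mass are illustrative assumptions rather than the paper's values:

```javascript
// Physical constants (SI units)
var HBAR = 1.054571817e-34;   // J s
var M0   = 9.1093837015e-31;  // electron mass, kg
var EV   = 1.602176634e-19;   // J per eV

// Roots of j0 and j1 for the two lowest levels
var XI_10 = Math.PI;
var XI_11 = 4.493409457909064;

// Resonance radius Rq = hbar * sqrt((xi11^2 - xi10^2) / (2 mu dE)),
// which follows from dE = (hbar^2 / 2 mu Rq^2) * (xi11^2 - xi10^2)
function resonanceRadius(dE_eV, mu) {
    var dE = dE_eV * EV;
    return HBAR * Math.sqrt((XI_11 * XI_11 - XI_10 * XI_10) / (2 * mu * dE));
}

// Assumed spaser-mode energy of 0.157 eV, reduced mass 0.1 m0
var Rq = resonanceRadius(0.157, 0.1 * M0);
console.log((Rq * 1e9).toFixed(2) + " nm"); // ~5.00 nm
```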
3. Spaser kinetics
In the previous section we analyzed the characteristics of the isolated electronic and plasmonic subsystems. Here, we let them interact with each other to create the functioning spaser system, as shown in Fig. 2. In the electronic subsystem, we have three states denoted by $|0_e\rangle$, $|1_e\rangle$, and $|2_e\rangle$, as described in Section 2.2. We assume that the active medium strongly interacts only with the plasmon mode that becomes the spaser mode; we note that this assumption is not valid for higher modes, when the frequency spacing between them becomes smaller than the QD emission linewidth. We define the vacuum state with zero plasmons, together with the $2l_p + 1$ states with one plasmon, as the plasmonic subsystem. We then amalgamate these two subsystems to make up the total system shown in Fig. 2(c), comprising $n = 2l_p + 4$ product states defined as $|1_s\rangle \equiv |0_e\rangle|0\rangle$, $|2_s\rangle \equiv |1_e\rangle|0\rangle$, $|3_s\rangle \equiv |2_e\rangle|0\rangle$, and $|4_s\rangle, |5_s\rangle, \ldots, |n_s\rangle$, which combine $|0_e\rangle$ with the $2l_p + 1$ one-plasmon states.
The product states $|1_s\rangle$, $|2_s\rangle$, and $|3_s\rangle$ are associated with zero plasmons, and $|4_s\rangle, |5_s\rangle, \ldots, |n_s\rangle$ possess one plasmon of the spaser mode. The $|1_s\rangle \to |3_s\rangle$ transition, which is the excitation of ground electron-hole pairs to the highest energy level in our model, occurs due to the electron-hole pairs' interaction with the pump light, which we analyze classically. Transitions from the state $|2_s\rangle$ to one of the states $|4_s\rangle, |5_s\rangle, \ldots, |n_s\rangle$ are the driving force behind spasing, because they excite plasmon modes in the resonator. Some transitions may occur from a state $|j_s\rangle$ to $|i_s\rangle$ due to the interaction with the bath; these can be considered dissipation. With this model, we analyze the kinetics of the $n$-level system by first constructing its Hamiltonian and then deriving the density matrix equations to find the corresponding active state populations.
3.1. Hamiltonian of the spaser
The Hamiltonian $H$ of the spaser contains the noninteracting electronic and plasmonic Hamiltonians, $H_e$ and $H_{pl}$, and the Hamiltonian $H_i$ of the interacting subsystems:
$$H = H_e + H_{pl} + H_i,$$
where $H_e = \hbar\omega_{1_e}|1_e\rangle\langle 1_e| + \hbar\omega_{2_e}|2_e\rangle\langle 2_e|$ and $H_{pl} = \sum_{m_p=-l_p}^{l_p} \hbar\omega_{l_p}\, b^{\dagger}_{l_p,m_p} b_{l_p,m_p}$. Here $b^{\dagger}_{l_p,m_p}$ and $b_{l_p,m_p}$ are the creation and annihilation operators of the surface plasmons corresponding to the quantum numbers $l_p$ and $m_p$.
The interaction Hamiltonian H[i] can be decomposed into a term describing the interaction between the electron-hole pairs and the pump light, and a term describing the interaction between the electron-hole pairs and the surface plasmons. The former involves the matrix element for the transition |i〉 → |f〉 with i, f = {0, 1, 2}, the normalization volume, and the envelope function and frequency of the pump light; c.c. represents the complex conjugate [44, 46]. The relaxation process from the state |2〉 to |1〉 is taken into account in the spaser kinetics through a relaxation constant.

The matrix element corresponding to the plasmons' interaction with the electron-hole pairs can be written in terms of the Bloch functions, the envelope wavefunctions of the initial and final electronic states (characterized by their sets of quantum numbers), and the displacement vector of the electron-hole pairs. Assuming that the mode field can be written as the product of its amplitude E[l[p]m[p]] and unit polarization vector e[l[p]m[p]], we arrive at Eq. (12). The first matrix element in Eq. (12) can be expressed through Kane's parameter, and the second matrix element is given by an integral over the quantum dot's volume involving the electron position inside the quantum dot and the radius vector of the quantum dot's center; the electric field entering this integral is derived from Eq. (2), with the mode energy taken from Eq. (5). In the case where the QD is very small compared to the nanosphere, it is reasonable to assume that the field is approximately constant over the QD's volume, so the integral in Eq. (13) simplifies accordingly. Hence, from Eq. (12), the matrix element for the spaser mode's interaction with the electron-hole pairs follows. Since this quantity determines the contribution of the spaser mode's interaction with the active medium to the total Hamiltonian, it also defines the strength of spasing. If the QD is moved away from the nanosphere, this matrix element becomes much smaller; therefore, the closer the QD is to the nanosphere, the stronger the spasing. When calculating the matrix element for the pump interaction, the electric field term in Eq. (11) should be replaced by the electric field of the pump light.
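For a concrete feel of this structure, the n-level Hamiltonian can be assembled as a matrix whose diagonal holds the product-state energies and whose off-diagonal entries hold the interaction matrix elements. The sketch below is only a generic illustration under the hedge that the energies and couplings shown are hypothetical values, not parameters from the text.

```python
import numpy as np

def spaser_hamiltonian(energies, couplings):
    """Assemble an n-level Hamiltonian (units of hbar = 1): diagonal
    entries are the product-state energies, off-diagonal entries are
    the interaction matrix elements, filled in Hermitian pairs."""
    n = len(energies)
    H = np.diag(np.asarray(energies, dtype=complex))
    for (i, j), g in couplings.items():
        H[i, j] = g
        H[j, i] = np.conj(g)  # enforce Hermiticity
    return H

# Hypothetical 3-level cut of the system: one pump coupling (0 <-> 1)
# and one plasmon coupling (1 <-> 2); all numbers are illustrative.
H = spaser_hamiltonian([0.0, 2.385, 2.385], {(0, 1): 0.01, (1, 2): 0.005})
```

The same routine extends directly to the full set of product states once the pump and plasmon matrix elements are known.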
3.2. Plasmon excitation rate of the spaser
Having calculated the Hamiltonian, we analyze the n-state system comprising the product states |1〉, |2〉,...,|n〉 using the density matrix formalism. We define the populations of those states and assume that the system has a short-term memory [49] and is coupled to a bath that acts as the reservoir for the system's dissipations. The relaxation superoperator, which is added to the commutator of the Hamiltonian and the density operator to incorporate dissipations, consists of a set of constants that define the relaxation kinetics of the diagonal and off-diagonal elements of the reduced density matrix [44, 49]. Using the Markov and secular approximations, the master equation for the system can be given by Eq. (15), whose constants are the population relaxation rate of a state |i〉, the coherence relaxation rate between states |i〉 and |j〉, the pure dephasing rate, and the transition rate from state |j〉 to state |i〉 due to the interaction with the bath. We assume that the lifetime of the ground state |1〉 is very large by setting its decay rate to zero. Further parameters define the dissipation of the degenerate states of the plasmon mode denoted by the (l[p], m[p]) quantum numbers. However, since the energy and dielectric properties are common to all the degenerate plasmon states, we assume that all these plasmon dissipation constants are equal. The corresponding source terms represent the excitation rates of the plasmon modes (l[p], m[p]) = (l[p], −l[p]), (l[p], −l[p] + 1),...,(l[p], l[p]), respectively. The sum of the plasmon populations of the l[p]th mode gives the number of plasmons excited in the spaser at a given time; hence, this quantity can be referred to as the effective plasmon excitation rate of the spaser. We solve the system of differential equations given in Eq. (15) for continuous wave (CW) operation, assuming a unit pump envelope, and obtain an expression for the total plasmon excitation rate, Eq. (16), in which the detunings are defined through the energies of the states |2〉, |3〉,...,|n〉 and of the pump light. As all the degenerate plasmon modes have the same energy, their detunings are equal. We can assume that the detuning of the pump light energy from the energy of the state |3〉 is negligible. Further, we also assume that all the degenerate states of the plasmon modes decay equally. With these simplifications, Eq. (16) reduces to Eq. (17).

The system's interaction with the bath introduces various relaxations to the electronic subsystem. We assume that these relaxation rates and the decay rate of the dominant spaser mode depend only on the materials of the spaser [38]. Since the nanosphere is much smaller than the wavelength, we can neglect the radiation losses [51] and only consider the nonradiative decay terms. This is not an overly restrictive assumption because, in case the radiation losses are not negligible, the decay rate of the spaser mode can incorporate the resultant of both radiative and nonradiative decay rates. In addition, the decay rates could be different for other plasmon modes [52]; these are not taken into account here on the assumption that they weakly overlap with the QD emission spectrum. As discussed in Section 2.1, the energy of the spaser mode depends on the resonator's size parameters once the materials of the spaser components are chosen. Therefore, it can be observed from Eq. (17) that the total plasmon excitation rate of the spaser mainly depends on the matrix elements for the electron-hole pair-plasmon interaction of each degenerate state and on the detuning, because we assume that the matrix element for the pump light-QD interaction is constant under CW operation. In the case of exact resonance (i.e., when the QD radius is given by Eq. (6)), the detuning vanishes and we achieve the highest plasmon excitation rate.
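Finding the CW steady state of rate equations of this kind reduces to a linear-algebra problem: set the time derivatives to zero and impose normalization of the populations. The sketch below illustrates this on a hypothetical three-level pump, relaxation, and decay cycle; it is not the full Eq. (15), which also contains coherences, and all rate values are illustrative assumptions.

```python
import numpy as np

def steady_state(M):
    """Steady-state populations of a linear rate-equation system
    dp/dt = M @ p: solve M p = 0 subject to sum(p) = 1 by replacing
    the last row with the normalization condition."""
    n = M.shape[0]
    A = M.astype(float).copy()
    A[-1, :] = 1.0            # normalization row: p0 + p1 + ... = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Hypothetical cycle: pump |0> -> |2> at P, relaxation |2> -> |1> at R,
# decay |1> -> |0> at G (mimicking pump, emission, and ground return).
P, G, R = 0.2, 1.0, 0.5
M = np.array([[-P,   G,  0.0],   # |0>: depleted by pump, fed by |1> decay
              [0.0, -G,    R],   # |1>: fed by |2> relaxation, decays to |0>
              [  P, 0.0,  -R]])  # |2>: pumped from |0>, relaxes to |1>
p = steady_state(M)
```

At steady state the flux P p0 = R p2 = G p1 is the same through every link of the cycle, which is the population-balance picture behind the plasmon excitation rate discussed above.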
For fixed materials and size parameters R[1] and R[2], there is a unique spaser mode energy, and the QD radius R[q] may be chosen according to Eq. (6) to achieve a higher plasmon excitation rate. The position of the QD then plays a major role in deciding the amount of plasmon amplification, as the electric field of the spaser mode changes with location according to Eq. (2). To analyze these factors, we investigate the spaser's behavior with respect to the spaser size parameters and the QD's location in the following section, taking a dipole spaser as a case study.
4. Case study: A dipole spaser
Let us consider a spaser whose dipole mode (l[p] = 1) is amplified by the active medium. We construct this spaser using gold for the nanosphere, and coating silica (SiO[2]) over it, making up a
dielectric shell. We embed a CdSe QD in this shell. Once the materials for spaser components are chosen, we are ready to investigate the dipole spaser’s operation according to its geometric
parameters. Here, we use the analytical results obtained in Sections 2 and 3 on the resonator’s plasmonic properties and the characteristics of the resonator-QD interactions.
Knowledge of the frequency dependence of the permittivities is required to calculate the electric field of the spaser mode and the plasmon decay rate. For silica we assume a non-dispersive permittivity of 2.15, in contrast to gold, for which we assume a frequency-dependent permittivity. Furthermore, since our gold nanosphere is very small, we also have to consider the size-dependent modification of the permittivity [53, 54]. In order to incorporate this size effect, we use the model [54] ε(ω) = ε[∞] − ω[p]²/[ω(ω + iΓ)], where ω[p] is the bulk plasma frequency of gold, Γ = Γ[bulk] + v[F]/R[1] is the electron collision frequency including surface damping, Γ[bulk] is the bulk electron collision frequency of gold, v[F] is the Fermi velocity, and ε[∞] is the contribution from the interband transitions obtained by fitting to the experimental data published by Johnson and Christy [55] for bulk material. We take ω[p] = 1.36 × 10^16 rad/s, Γ[bulk] = 3.33 × 10^13 s⁻¹, v[F] = 1.4 × 10^6 m/s, and ε[∞] = 9.84 [54, 56]. The spaser's outer boundary is assumed to be in free space, and hence the permittivity of the surrounding medium is 1.
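As a numerical sketch, the size-corrected Drude permittivity of the gold nanosphere can be evaluated as follows. The parameter values are those quoted above, but the code (including the surface-damping form Γ[bulk] + v[F]/R[1]) is an illustrative assumption, not the authors' implementation.

```python
# Size-corrected Drude model for the gold nanosphere's permittivity.
EPS_INF = 9.84        # interband contribution (fit to Johnson & Christy)
OMEGA_P = 1.36e16     # bulk plasma frequency of gold, rad/s
GAMMA_BULK = 3.33e13  # bulk electron collision frequency, 1/s
V_F = 1.4e6           # Fermi velocity, m/s

def eps_gold(omega, r1):
    """Permittivity of a gold sphere of radius r1 (m) at angular
    frequency omega (rad/s), including surface-scattering damping."""
    gamma = GAMMA_BULK + V_F / r1
    return EPS_INF - OMEGA_P**2 / (omega * (omega + 1j * gamma))

# Example: a 2.385 eV photon (~520 nm) and a 6 nm nanosphere.
omega = 2.385 * 1.602e-19 / 1.055e-34   # eV -> rad/s
eps = eps_gold(omega, 6e-9)
```

For a metal in this spectral range the real part comes out negative, and the imaginary part (the loss) grows as the radius shrinks, which is the size effect the text refers to.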
Using these permittivities, we solve the dispersion relation given in Eq. (3) for the dipole mode and then plot the energy of the spaser mode as a function of the nanosphere radius and shell thickness, as shown in Fig. 3(a). The contour plot shows that many (R[1], R[2]) pairs can result in the same spaser mode energy. For example, if the spaser mode energy is 2.385 eV (i.e., an equivalent wavelength of approximately 520 nm), it traces a curve on the contour plot as marked. It is important to note that, although we refer to energies in electron volts for convenience, they should be converted to SI units when substituted into equations.
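The eV-to-SI conversion mentioned here is easy to get wrong by powers of ten, so a minimal helper is worth sketching; the 520 nm figure quoted above follows directly from it.

```python
# Conversions between electron volts, joules, and vacuum wavelength,
# for substituting energies quoted in eV into SI-unit equations.
E_CHARGE = 1.602176634e-19  # J per eV
H_PLANCK = 6.62607015e-34   # J s
C_LIGHT = 2.99792458e8      # m/s

def ev_to_joule(e_ev):
    return e_ev * E_CHARGE

def ev_to_wavelength_nm(e_ev):
    # lambda = h c / E, reported in nanometres
    return H_PLANCK * C_LIGHT / ev_to_joule(e_ev) * 1e9

lam = ev_to_wavelength_nm(2.385)  # the 2.385 eV operating point, ~520 nm
```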
To find the appropriate QD radius to match the operating point, we substitute the corresponding spaser mode energy into Eq. (6). For the previously set operating point of 2.385 eV, the substitution gives a resonant QD radius of 2.4 nm for the CdSe material parameters, including the bulk bandgap of 1.74 eV [57, 58]. Calculating the energy levels of the QD with this radius, and considering the fact that the dipole mode is triply degenerate with m[p] = −1, 0 and 1, we redraw the total system diagram given in Fig. 2(c) for this dipole spaser to attain the system illustrated in Fig. 4. The corresponding electronic levels |0〉, |1〉 and |2〉 of the QD in this system possess energies of 0, 2.385 and 3.059 eV respectively. The resulting total system has n = 6 states denoted by |1〉, |2〉,...,|6〉.
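The resonant radius quoted here can be reproduced with a simple particle-in-a-sphere estimate standing in for Eq. (6). The reduced effective mass used below is a typical literature value for CdSe and an assumption on our part, not a parameter quoted in the text.

```python
import math

HBAR = 1.0546e-34  # J s
M0 = 9.109e-31     # free electron mass, kg
EV = 1.602e-19     # J per eV

def resonant_radius(e_spaser_ev, e_gap_ev=1.74, mu=0.101 * M0):
    """Radius (m) at which the lowest electron-hole transition of a
    spherical QD matches the spaser mode energy, using the
    confinement estimate E = Eg + hbar^2 pi^2 / (2 mu R^2)."""
    de = (e_spaser_ev - e_gap_ev) * EV   # confinement energy, J
    return math.pi * HBAR / math.sqrt(2.0 * mu * de)

r_q = resonant_radius(2.385)  # close to the 2.4 nm quoted in the text
```

Higher mode energies demand smaller dots, consistent with the trend implied by Eq. (6).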
Evaluating this 6-state system of the dipole spaser for the highest plasmon excitation rate gives the optimum size parameters. Just picking an arbitrary (R[1], R[2]) pair on the preferred spaser mode energy curve may not result in the highest plasmon generation. To investigate the behavior of our dipole spaser, we follow the results derived in Section 3 with l[p] = 1. As discussed there, the total plasmon excitation rate of the dipole spaser is represented by the sum of the populations of the states |4〉, |5〉, and |6〉. We use the expression given in Eq. (17) to study the plasmon excitation rate of the dipole spaser. This equation contains several relaxation constants, which depend on the environment and the materials used but not on the spaser's geometry. Therefore, we keep them constant and compare plasmon excitation rates by investigating the normalized plasmon population in the relevant plots, as our objective is to study how the spaser's geometrical parameters result in relatively high or low plasmon populations. While calculating the matrix elements, for the sake of simplicity, we assume that the QD is located on the nanosphere's dipole axis in the θ = 0 direction.
The only relaxation rate influenced by the spaser's geometry is the decay rate of the surface plasmons, which is a function of the spaser mode energy [19]. Since the nanosphere is very much smaller than the wavelength, we consider only the nonradiative decay, assuming that the radiation loss is negligible [51]. By substituting the model for the nanosphere's permittivity into this expression, the plasmon decay rate can be simplified. Since both Γ and the permittivity depend on the size parameters, the decay rate also depends on them for a given material. Continuing with these simplifications, we plot the plasmon excitation rate of the dipole spaser with respect to the nanosphere radius and shell thickness, as shown in Fig. 3(b), when the QD's location is fixed to the middle of the dielectric shell and R[q] is fixed to the resonant radius given by Eq. (6). It can be noted that the plasmon excitation rate is higher for smaller nanosphere radii and shell thicknesses; it monotonically decreases as the total volume increases. This plot helps us figure out the (R[1], R[2]) pair that gives the highest plasmon excitation rate for a preferred spaser mode energy marked on Fig. 3(a). Hence, a designer has the freedom to optimize the spaser geometry by tuning either the nanosphere radius or the dielectric shell thickness.
Let us continue with the previous example and draw the curve corresponding to the spaser mode energy of 2.385 eV on Fig. 3(b). Based on this curve, we can observe that the plasmon excitation rate varies with the geometric parameters, even for the same spaser mode energy. Hence, we can select parameters for the optimum design by picking the (R[1], R[2]) pair that results in the highest plasmon excitation rate. In our example, the parameters (R[1], R[2]) = (6 nm, 11 nm) offer the highest spaser mode amplification along the 2.385 eV energy curve. Here we emphasize that, although we obtained this result for a single-QD dipole spaser, a better spaser configuration may consist of many QDs to achieve a higher gain. If this dipole spaser consists of many QDs that are uniformly distributed inside the dielectric shell, the total plasmon excitation rate scales with their number. In that case, having a bigger shell volume to accommodate more QDs may result in a higher plasmon excitation rate because all QDs contribute to the total amplification. However, the QDs do not contribute evenly in the many-QD case, as the location of each QD and its size play vital roles in deciding the plasmon excitation rate.
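The selection step described here, scanning (R[1], R[2]) pairs along a fixed-energy contour and keeping the one with the highest excitation rate, can be sketched as follows. Here `excitation_rate` is a hypothetical stand-in for a full Eq. (17) evaluation, chosen only to mimic the monotonic decrease with total volume seen in Fig. 3(b), and the contour points are illustrative.

```python
def excitation_rate(r1_nm, r2_nm):
    """Placeholder rate model: falls off with total spaser size,
    mimicking the trend of Fig. 3(b); not the actual Eq. (17)."""
    return 1.0 / (r1_nm + r2_nm) ** 3

def best_pair(contour):
    """Pick, from (R1, R2) pairs sharing one spaser mode energy,
    the pair giving the highest plasmon excitation rate."""
    return max(contour, key=lambda pair: excitation_rate(*pair))

# Hypothetical points along one iso-energy curve of Fig. 3(a), in nm.
contour = [(6, 11), (8, 14), (10, 18), (12, 23)]
r1, r2 = best_pair(contour)  # smallest total size wins under this model
```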
In order to examine how the location of the QD affects the plasmon excitation rate of the spaser, we fix R[1] and R[2], and plot the excitation rate for the case of the resonant QD radius (i.e., R[q] is also fixed). Here we vary the QD's location within the dielectric shell from the innermost to the outermost position with respect to the nanosphere's center. Such plots for four different (R[1], R[2]) pairs are shown in Fig. 3(d). It can be observed from these plots that the plasmon excitation rate rapidly decreases when the QD is moved away from the nanosphere. This happens because the interaction between the plasmon modes and the electron-hole pairs in the QD gets weaker towards the shell's outer boundary, as the matrix element for the interaction decreases in the radial direction according to Eq. (2). The rate of decrease is higher for smaller nanosphere radii. In particular, when there are many QDs, their contributions to the spasing mode will not be uniform. Hence, when designing a multiple-QD spaser, we must set the optimum size parameters such that a large number of QDs are located close to the nanosphere.
Thus far, we have only considered the case where the QD radius is tuned according to Eq. (6) such that its emission energy exactly equals the energy of the spaser mode. To investigate the influence of the QD radius on spasing, we vary R[q] within an interval of 1–3 nm and plot the plasmon excitation rate, as shown in Fig. 3(c), for different shell thicknesses, keeping the nanosphere radius fixed at 10 nm. According to this plot, the highest plasmon excitation rate occurs in the case of exact resonance, and the rate rapidly decreases when R[q] deviates from the resonant QD radius. For example, if the QD radius deviates by as much as about 0.5 nm from the resonant value, the resulting plasmon excitation rate tends to zero.
The plots in Figs. 3(a)–3(d) clearly explicate how the geometrical parameters of the spaser design can be optimized to attain an elevated plasmon excitation rate. In addition, we need to discuss the threshold power required from the optical pumping, which is a condition for spasing. Since we use a normalized electric field in this analytical treatment, it is important to calculate the required threshold gain. The threshold gain can be found by applying the condition of population inversion. It can be shown that it does not depend on the spaser geometry explicitly, being a function of the dielectric constants of the spaser materials and the spaser mode frequency [19]. Since the spaser mode frequency is a function of R[1] and R[2], and the nanosphere's permittivity is a function of R[1], the threshold gain also becomes a function of these two geometrical parameters, as we plot in Fig. 3(e). According to the graph, the gain does not vary much with the size parameters, but the required threshold gain for spasing is a little higher when the nanosphere is smaller. We mark the 2.385 eV curve, which we used in the previous example, on the threshold gain plot, and it provides an idea of how to choose the correct geometrical parameters. It should be noted that the expression for the threshold gain is derived assuming that stimulated emission dominates in the spaser. However, there can be cases where spontaneous emission dominates the spaser kinetics when the number of plasmons is of the order of one. In such situations, the semiclassical concept of threshold is not applicable, and the plasmon population in the resonator increases linearly with the pump rate [59]. In addition, extending this analysis to find the required pumping intensity will be useful in practical implementations. The developed model can also be used to study the polarization of the resultant plasmons, which has been discussed in [20], as it allows us to consider the case where the QD dipole moment is neither parallel nor orthogonal to the line connecting the centers of the nanosphere and the QD. However, in this study, we only focus on the optimization of the spaser geometry to enhance plasmon generation based on the introduced quantum mechanical model.
The case study which we performed throughout this section provides design guidelines on choosing the optimum size parameters of the spaser components and on where to place the QDs to achieve a maximum plasmon excitation rate. In the process of designing a spaser, one might choose its operating wavelength as the first step, then pick possible values for the size parameters R[1], R[2] and R[q] by examining the resulting plasmon excitation rate. Then, it is necessary to figure out where to embed the QD; placing the QD closer to the nanosphere will give a higher plasmon excitation rate. However, if multiple QDs are to be placed, then one needs to consider the precise placement, because not all QDs can be placed close to the nanosphere's boundary.
Although we studied a spaser realization with a spherical structure, the derivations done in Section 3 are valid for a spaser with any geometry. It might not be possible to obtain the electric field and the energies of the plasmon modes analytically for most other resonator geometries, as we did in Section 2.1, but numerical solutions for the field can be admitted into the expressions derived in Section 3 to analyze the spaser kinetics and investigate the plasmon excitation rate which characterizes its performance.
5. Conclusions
We have theoretically modeled the spaser as an n-level quantum system formed by amalgamating the spaser's electronic and plasmonic subsystems. With this model, we studied a simple spaser geometry consisting of a metal nanosphere resonantly coupled to a QD. The energy transfer between the QD and the nanosphere accompanies the relaxation of electron-hole pairs resonantly generated inside the QD by a continuous-wave laser. By employing density matrix theory, we analytically found the excitation rate of surface plasmons for the case where three nondegenerate electron-hole pair states are coupled to a single plasmon mode of arbitrary angular momentum. The obtained expression was then examined numerically for the special case of a spaser operating at the triply degenerate dipole mode. It was shown that the plasmon excitation rate can be significantly enhanced by appropriately choosing the geometric parameters of the spaser.
The work of C. Rupasinghe is supported by the Monash University Institute of Graduate Research. The work of I. D. Rukhlenko and M. Premaratne is supported by the Australian Research Council, through
its Discovery Early Career Researcher Award DE120100055 and Discovery Grant DP110100713, respectively.
References and links
1. M. Premaratne and G. P. Agrawal, Light Propagation in Gain Media: Optical Amplifiers(Cambridge University, 2011) [CrossRef] .
2. D. K. Gramotnev and S. I. Bozhevolnyi, “Plasmonics beyond the diffraction limit,” Nature Photon. 4, 83–91 (2010) [CrossRef] .
3. S. A. Maier and H. A. Atwater, “Plasmonics: Localization and guiding of electromagnetic energy in metal/dielectric structures,” J. Appl. Phys. 98, 011101 (2005) [CrossRef] .
4. S. Maier, Plasmonics: Fundamentals and Applications(Springer, 2007).
5. D. J. Bergman and M. I. Stockman, “Surface plasmon amplification by stimulated emission of radiation: Quantum generation of coherent surface plasmons in nanosystems,” Phys. Rev. Lett. 90, 027402
(2003) [CrossRef] [PubMed] .
6. R. F. Oulton, “Plasmonics: Loss and gain,” Nature Photon. 6, 219–221 (2012) [CrossRef] .
7. M. I. Stockman, “Spasers explained,” Nature Photon. 2, 327–329 (2008) [CrossRef] .
8. J. Seidel, S. Grafström, and L. Eng, “Stimulated emission of surface plasmons at the interface between a silver film and an optically pumped dye solution,” Phys. Rev. Lett. 94, 177401 (2005)
[CrossRef] [PubMed] .
9. M. A. Noginov, G. Zhu, A. M. Belgrave, R. Bakker, V. M. Shalaev, E. E. Narimanov, S. Stout, E. Herz, T. Suteewong, and U. Wiesner, “Demonstration of a spaser-based nanolaser,” Nature 460,
1110–1112 (2009) [CrossRef] [PubMed] .
10. R. A. Flynn, C. S. Kim, I. Vurgaftman, M. Kim, J. R. Meyer, A. J. Mäkinen, K. Bussmann, L. Cheng, F. S. Choa, and J. P. Long, “A room-temperature semiconductor spaser operating near 1.5 μm,” Opt.
Express 19, 8954–8961 (2011) [CrossRef] [PubMed] .
11. N. Zheludev, S. Prosvirnin, N. Papasimakis, and V. Fedotov, “Lasing spaser,” Nature Photon. 2, 351–354 (2008) [CrossRef] .
12. S. W. Chang, C. Y. A. Ni, and S. L. Chuang, “Theory for bowtie plasmonic nanolasers,” Opt. Express 16, 10580–10595 (2008) [CrossRef] [PubMed] .
13. A. Lisyansky, I. Nechepurenko, A. Dorofeenko, A. Vinogradov, and A. Pukhov, “Channel spaser: Coherent excitation of one-dimensional plasmons from quantum dots located along a linear channel,”
Phys. Rev. B 84, 153409 (2011) [CrossRef] .
14. M. Grundmann, J. Christen, N. N. Ledentsov, J. Böhrer, D. Bimberg, S. S. Ruvimov, P. Werner, U. Richter, U. Gösele, J. Heydenreich, V. M. Ustinov, A. Y. Egorov, A. E. Zhukov, P. S. Kop’ev, and Z.
I. Alferov, “Ultra-narrow luminescence lines from single quantum dots,” Phys. Rev. Lett. 74, 4043–4046 (1995) [CrossRef] [PubMed] .
15. S. Mukamel, Principles of Nonlinear Optical Spectroscopy, Oxford series on optical sciences (Oxford University, 1999).
16. A. V. Baranov, A. V. Fedorov, I. D. Rukhlenko, and Y. Masumoto, “Intraband carrier relaxation in quantum dots embedded in doped heterostructures,” Phys. Rev. B 68, 205318 (2003) [CrossRef] .
17. A. V. Fedorov, A. V. Baranov, I. D. Rukhlenko, T. S. Perova, and K. Berwick, “Quantum dot energy relaxation mediated by plasmon emission in doped covalent semiconductor heterostructures,” Phys.
Rev. B 76, 045332 (2007) [CrossRef] .
18. M. I. Stockman, “The spaser as a nanoscale quantum generator and ultrafast amplifier,” J. Opt. 12, 024004 (2010) [CrossRef] .
19. M. I. Stockman, “Nanoplasmonics: past, present, and glimpse into future,” Opt. Express 19, 22029–22106 (2011) [CrossRef] [PubMed] .
20. I. E. Protsenko, A. V. Uskov, O. A. Zaimidoroga, V. N. Samoilov, and E. P. O‘reilly, “Dipole nanolaser,” Phys. Rev. A 71, 063812 (2005) [CrossRef] .
21. J. B. Khurgin and G. Sun, “Injection pumped single mode surface plasmon generators: threshold, linewidth, and coherence,” Opt. Express 20, 15309–15325 (2012) [CrossRef] [PubMed] .
22. J. B. Khurgin and G. Sun, “How small can nano be in a nanolaser?” Nanophotonics 1, 3–8 (2012) [CrossRef] .
23. E. Andrianov, A. Pukhov, A. Dorofeenko, A. Vinogradov, and A. Lisyansky, “Dipole response of spaser on an external optical wave,” Opt. Lett. 36, 4302–4304 (2011) [CrossRef] [PubMed] .
24. E. Andrianov, A. Pukhov, A. Dorofeenko, A. Vinogradov, and A. Lisyansky, “Forced synchronization of spaser by an external optical wave,” Opt. Express 19, 24849–24857 (2011) [CrossRef] .
25. E. S. Andrianov, A. A. Pukhov, A. V. Dorofeenko, A. P. Vinogradov, and A. A. Lisyansky, “Rabi oscillations in spasers during nonradiative plasmon excitation,” Phys. Rev. B 85, 035405 (2012)
[CrossRef] .
26. A. Ridolfo, O. Di Stefano, N. Fina, R. Saija, and S. Savasta, “Quantum plasmonics with quantum dot-metal nanoparticle molecules: influence of the fano effect on photon statistics,” Phys. Rev.
Lett. 105, 263601 (2010) [CrossRef] .
27. A. Rosenthal and T. Ghannam, “Dipole nanolasers: A study of their quantum properties,” Phys. Rev. A 79, 043824 (2009) [CrossRef] .
28. A. K. Sarychev and G. Tartakovsky, “Magnetic plasmonic metamaterials in actively pumped host medium and plasmonic nanolaser,” Phys. Rev. B 75, 085436 (2007) [CrossRef] .
29. M. Wegener, J. L. García-Pomar, C. M. Soukoulis, N. Meinzer, M. Ruther, and S. Linden, “Toy model for plasmonic metamaterial resonances coupled to two-level system gain,” Opt. Express 16,
19785–19798 (2008) [CrossRef] [PubMed] .
30. S. Wuestner, A. Pusch, K. L. Tsakmakidis, J. M. Hamm, and O. Hess, “Overcoming losses with gain in a negative refractive index metamaterial,” Phys. Rev. Lett. 105, 127401 (2010) [CrossRef]
[PubMed] .
31. A. Fang, T. Koschny, and C. M. Soukoulis, “Lasing in metamaterial nanostructures,” J. Opt. 12, 024013 (2010) [CrossRef] .
32. D. Sarid and W. Challener, Modern Introduction to Surface Plasmons: Theory, Mathematica Modeling, and Applications(Cambridge University, 2010) [CrossRef] .
33. P. K. Jain, K. S. Lee, I. H. El-Sayed, and M. A. El-Sayed, “Calculated absorption and scattering properties of gold nanoparticles of different size, shape, and composition: Applications in
biological imaging and biomedicine,” J. Phys. Chem. B 110, 7238–7248 (2006) [CrossRef] [PubMed] .
34. K. L. Kelly, E. Coronado, L. L. Zhao, and G. C. Schatz, “The optical properties of metal nanoparticles: The influence of size, shape, and dielectric environment,” J. Phys. Chem. B 107, 668–677
(2003) [CrossRef] .
35. A. L. Aden and M. Kerker, “Scattering of electromagnetic waves from two concentric spheres,” J. Appl. Phys. 22, 1242–1246 (1951) [CrossRef] .
36. C. Bohren and D. Huffman, Absorption and scattering of light by small particles(Wiley, 1983).
37. E. Sondheimer, “The mean free path of electrons in metals,” Adv. Phys. 1, 1–42 (1952) [CrossRef] .
38. M. I. Stockman, “Nanoplasmonics: The physics behind the applications,” Phys. Today 64, 39–44 (2011) [CrossRef] .
39. J. B. Khurgin, G. Sun, and R. Soref, “Practical limits of absorption enhancement near metal nanoparticles,” Appl. Phys. Lett. 94, 071103–071103 (2009) [CrossRef] .
40. J. B. Khurgin, G. Sun, and R. Soref, “Electroluminescence efficiency enhancement using metal nanoparticles,” Appl. Phys. Lett. 93, 021120–021120 (2008) [CrossRef] .
41. I. D. Rukhlenko, D. Handapangoda, M. Premaratne, A. V. Fedorov, A. V. Baranov, and C. Jagadish, “Spontaneous emission of guided polaritons by quantum dot coupled to metallic nanowire: Beyond the
dipole approximation,” Opt. Express 17, 17570–17581 (2009) [CrossRef] [PubMed] .
42. L. Landau, E. M. Lifshitz, and L. P. Pitaevskii, Course of Theoretical Physics Vol 8: Electrodynamics of Continuous Media (Elsevier, 2004).
43. M. Ventra, S. Evoy, and J. Heflin, Introduction to Nanoscale Science and Technology, Nanostructure Science and Technology (Springer, 2004) [CrossRef] .
44. A. Fedorov, A. Baranov, and Y. Masumoto, “Coherent control of optical-phonon-assisted resonance secondary emission in semiconductor quantum dots,” Opt. Spectrosc. 93, 52–60 (2002) [CrossRef] .
45. I. Rukhlenko, A. Fedorov, A. Baymuratov, and M. Premaratne, “Theory of quasi-elastic secondary emission from a quantum dot in the regime of vibrational resonance,” Opt. Express 19, 15459–15482
(2011) [CrossRef] [PubMed] .
46. A. Fedorov and I. Rukhlenko, “Study of electronic dynamics of quantum dots using resonant photoluminescence technique,” Opt. Spectrosc. 100, 716–723 (2006) [CrossRef] .
47. A. Ansel’m, Introduction to Semiconductor Theory (Mir, 1981).
48. D. Bimberg, R. Blachnik, P. Dean, T. Grave, G. Harbeke, K. Hübner, U. Kaufmann, W. Kress, and O. Madelung, Physics of Group IV Elements and III–V Compounds / Physik der Elemente der IV. Gruppe
und der III–V Verbindungen, v. 17 (Springer, 1981).
49. U. Fano, “Description of states in quantum mechanics by density matrix and operator techniques,” Rev. Mod. Phys. 29, 74–93 (1957) [CrossRef] .
50. K. Blum, Density Matrix Theory and Applications(Springer, 2010).
51. F. Wang and Y. R. Shen, “General properties of local plasmons in metal nanostructures,” Phys. Rev. Lett. 97, 206806 (2006) [CrossRef] [PubMed] .
52. G. Sun, J. B. Khurgin, and C. Yang, “Impact of high-order surface plasmon modes of metal nanoparticles on enhancement of optical emission,” Appl. Phys. Lett. 95, 171103–171103 (2009) [CrossRef] .
53. J. Lim, A. Eggeman, F. Lanni, R. D. Tilton, and S. A. Majetich, “Synthesis and single-particle optical detection of low-polydispersity plasmonic-superparamagnetic nanoparticles,” Adv. Mater. 20,
1721–1726 (2008) [CrossRef] .
54. R. Averitt, S. Westcott, and N. Halas, “Linear optical properties of gold nanoshells,” J. Opt. Soc. Am. B 16, 1824–1832 (1999) [CrossRef] .
55. P. B. Johnson and R. W. Christy, “Optical constants of the noble metals,” Phys. Rev. B 6, 4370–4379 (1972) [CrossRef] .
56. K. Kolwas, A. Derkachova, and M. Shopa, “Size characteristics of surface plasmons and their manifestation in scattering properties of metal particles,” J. Quant. Spectrosc. Radiat. Transfer 110,
1490–1501 (2009) [CrossRef] .
57. L. Liu, Q. Peng, and Y. Li, “An effective oxidation route to blue emission CdSe quantum dots,” Inorg. Chem. 47, 3182–3187 (2008) [CrossRef] [PubMed] .
58. W. Kwak, T. Kim, W. Chae, and Y. Sung, “Tuning the energy bandgap of CdSe nanocrystals via Mg doping,” Nanotechnology 18, 205702 (2007) [CrossRef] .
59. I. E. Protsenko, “Quantum theory of dipole nanolasers,” J. Russ. Laser Res. 33, 559–577 (2012) [CrossRef] .
OCIS Codes
(140.3430) Lasers and laser optics : Laser theory
(230.0230) Optical devices : Optical devices
(250.5403) Optoelectronics : Plasmonics
(250.5590) Optoelectronics : Quantum-well, -wire and -dot devices
ToC Category:
Optics at Surfaces
Original Manuscript: May 8, 2013
Revised Manuscript: June 12, 2013
Manuscript Accepted: June 12, 2013
Published: June 19, 2013
Chanaka Rupasinghe, Ivan D. Rukhlenko, and Malin Premaratne, "Design optimization of spasers considering the degeneracy of excited plasmon modes," Opt. Express 21, 15335-15349 (2013)
The Virial Theorem Made Easy
John Baez
August 10, 2000
Suppose you have a finite collection of point particles interacting gravitationally via good old Newtonian mechanics. And suppose that:
1. The time averages of the total kinetic energy and the total potential energy are well-defined.
2. The positions and velocities of the particles are bounded for all time.
Then we have
<T> = -<V>/2
where <T> is the time average of the total kinetic energy, and <V> is the time average of the total potential energy.
I always found this to be a bit magical. It seems surprising at first that such a simple law could hold so generally. But in fact, it's just a special case of something called the "virial theorem",
which also applies to forces other than gravity, and impacts everything from astronomy to the theory of gases.
For example, out in space, very often a bunch of particles will collapse to form a gravitationally bound system. If the system is roughly in equilibrium, so the time averages of kinetic and potential energy are close to their current values, the virial theorem implies that T = -(1/2) V. This is a terrific thing, because it lets you find the masses of bound systems. In fact, it's really the reason we think that dark matter exists.
To be specific, suppose you measure the speeds of a bunch of visible objects in your system, and infer T. Then the virial theorem tells you V. If you find out that the potential well is deeper than
what you'd get by adding up the contributions from the masses of everything you see, you know there's dark matter. People do this for spiral galaxies, elliptical galaxies, and galaxy clusters,
getting strong evidence for dark matter in all cases.
For applications of the virial theorem to astrophysics, this book is good:
• William C. Saslaw, Gravitational physics of stellar and galactic systems, Cambridge U. Press, Cambridge, 1985.
The Simplest Example
Before I sketch the proof of the virial theorem, let's consider the simplest possible case: a single light particle in circular orbit around a heavy one. Say the light one has mass m and the heavy
one has mass M. And suppose the orbit has radius R. Then the potential energy is
V = -GmM/R (1)
where G is Newton's constant. To figure out the kinetic energy, remember that the gravitational force is
F[grav] = -GmM/R^2
while the centrifugal force is
F[centrif] = mv^2/R
In a circular orbit these counteract each other perfectly, so we must have
mv^2/R = GmM/R^2
Thus the kinetic energy of the light particle is
T = mv^2/2 = GmM/2R (2)
while the kinetic energy of the heavy one is negligible. Comparing (1) and (2), we see that
T = -V/2
just as the virial theorem says!
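This little calculation is easy to check numerically. Here is a sketch in Python; the values of G, m, M and R are arbitrary illustrative choices, not physical constants:

```python
# Quick numerical check of equations (1) and (2) for a circular orbit.
G, m, M, R = 1.0, 1.0, 1000.0, 10.0

V = -G * m * M / R          # potential energy, equation (1)
v = (G * M / R) ** 0.5      # orbital speed from m v^2/R = G m M/R^2
T = 0.5 * m * v**2          # kinetic energy, equation (2)

print(T, -V / 2)            # prints 50.0 50.0 for these values
assert abs(T + V / 2) < 1e-12
```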
The virial theorem lets us generalize this fact to arbitrary gravitationally bound systems. Of course, in a more general system of this sort - even a particle in an elliptical orbit - the kinetic and
potential energy change with time. That's why the virial theorem refers to time averages of the kinetic and potential energy. But the basic idea is the same. And the proof is surprisingly simple.
The Proof
Here's how it goes. We consider a quantity called the "virial":
G = ∑[i] p[i] ^. r[i]
that is, the sum over all the particles of the dot product of each particle's momentum with its position. A little calculation shows that
dG/dt = 2T + ∑[i] F[i] ^. r[i]
where F[i] is the total force exerted on the ith particle. Now let's compute the time average of both sides. Integrate both sides from time 0 to time t and then divide by t. Then take the limit as t
-> ∞. On the left hand side, we get
lim[t -> ∞] (G(t) - G(0))/t = 0
since by assumption 2 the function G(t) is bounded. We thus obtain
0 = 2<T> + <∑[i] F[i] ^. r[i]>
at least if the time averages here are well-defined. We know that <T> is well-defined by assumption 1. Why is that other time average well-defined? Well, the force on the ith particle is caused by
all the other particles, so we have
∑[i] F[i] ^. r[i] = ∑[i ≠ j] -grad(V[ij]) ^. r[i]
where V[ij] is the potential energy for the interaction between the ith and jth particles. Rewriting this a bit, we get
∑[i] F[i] ^. r[i] = ∑[i < j] -grad(V[ij]) ^. r[i] + ∑[j < i] -grad(V[ij]) ^. r[i]
= ∑[i < j] -grad(V[ij]) ^. r[i] + ∑[i < j] -grad(V[ji]) ^. r[j]
= ∑[i < j] -grad(V[ij]) ^. (r[i] - r[j])
where in the second step we switched the dummy indices i and j on the second term. Now, since V[ij] is proportional to the reciprocal of the distance between the ith and jth particles, we have
grad(V[ij]) ^. (r[i] - r[j]) = - V[ij]
and therefore
<∑[i] F[i] ^. r[i]> = <∑[i < j] V[ij]> = <V>
so this time average is well-defined by assumption 1. We also see what it equals! So we get
0 = 2<T> + <V>
or in other words
<T> = -<V>/2
Voila! The virial theorem!
You can find this proof in any good textbook on classical mechanics, for example:
• Herbert Goldstein, Classical Mechanics, Addison-Wesley, Reading, Massachusetts, 1950.
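Incidentally, the "little calculation" asserted in the proof above is just the product rule plus Newton's second law, since p[i] = m[i] v[i]:

```latex
\[
\frac{dG}{dt}
  = \sum_i \dot{\mathbf{p}}_i \cdot \mathbf{r}_i
  + \sum_i \mathbf{p}_i \cdot \dot{\mathbf{r}}_i
  = \sum_i \mathbf{F}_i \cdot \mathbf{r}_i
  + \sum_i m_i \mathbf{v}_i \cdot \mathbf{v}_i
  = 2T + \sum_i \mathbf{F}_i \cdot \mathbf{r}_i .
\]
```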
Some Spinoffs and Caveats
Having done all that work proving the virial theorem, it's nice to note some spinoffs.
First of all, if the motion of our particles is periodic, we don't need to average over all time: we can just average over a period. This applies to one particle in elliptical orbit about another,
for example. You could also handle that case using Kepler's laws, but I like the greater generality of what we just did.
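That periodic case is also easy to test numerically. The sketch below integrates one bound Kepler orbit with a leapfrog scheme and averages T and V over exactly one period; the units (GM = 1, m = 1) and the initial conditions are arbitrary illustrative choices:

```python
import math

# Numerical test of <T> = -<V>/2 for a bound (elliptical) Kepler orbit,
# averaged over exactly one period.
GM = 1.0
x, y = 1.0, 0.0            # start at an apsis
vx, vy = 0.0, 0.8          # slow enough that the orbit is bound

E = 0.5 * (vx**2 + vy**2) - GM / math.hypot(x, y)   # total energy (< 0)
a = -GM / (2.0 * E)                                 # semi-major axis
period = 2.0 * math.pi * a**1.5                     # Kepler's third law

steps = 200_000
dt = period / steps
sum_T = sum_V = 0.0

def accel(x, y):
    r3 = math.hypot(x, y) ** 3
    return -GM * x / r3, -GM * y / r3

ax, ay = accel(x, y)
for _ in range(steps):
    # kick-drift-kick leapfrog step
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    x += dt * vx
    y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    sum_T += 0.5 * (vx**2 + vy**2)
    sum_V += -GM / math.hypot(x, y)

avg_T, avg_V = sum_T / steps, sum_V / steps
print(avg_T, -avg_V / 2)   # the two numbers should nearly coincide
```

For these initial conditions the semi-major axis is a ≈ 0.735, so both printed numbers should come out near GM/(2a) ≈ 0.68.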
Also, the virial theorem can be adapted to some other forces! Suppose the potential between particles is proportional to the nth power of their distance. This only changes the above argument a little
bit. We get
grad(V[ij]) ^. (r[i] - r[j]) = n V[ij]
so we get
<T> = (n/2) <V>.
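For instance, the harmonic oscillator has an n = 2 potential (V = k x^2/2), so the theorem predicts <T> = <V>. A quick numerical average over one period confirms it; the spring constant, mass and amplitude below are arbitrary choices:

```python
import math

# Harmonic oscillator: for V = k x^2 / 2 (an n = 2 potential) the
# virial theorem predicts <T> = <V> when averaged over one period.
k, m, A = 1.0, 1.0, 1.0
w = math.sqrt(k / m)

N = 100_000
dt = (2.0 * math.pi / w) / N
sum_T = sum_V = 0.0
for i in range(N):
    t = i * dt
    x = A * math.cos(w * t)            # exact solution x(t)
    v = -A * w * math.sin(w * t)
    sum_T += 0.5 * m * v**2
    sum_V += 0.5 * k * x**2

print(sum_T / N, sum_V / N)            # both tend to k A^2 / 4 = 0.25
```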
With extra work, we can generalize the ideas behind the virial theorem to obtain useful results about other more complicated forces. This is especially important in the theory of gases, where we
measure the deviation from being an ideal gas using "virial coefficients".
But finally, before you walk away feeling too happy, I should warn you that in astrophysics, assumption 2 is usually not quite true! For example, a galaxy will occasionally fling stars into the
vastness of space, making their position unbounded as a function of time. This process of "boiling off" is very important in the long run, as explained in my webpage on the end of the universe. But
it's very slow, so the conditions of the virial theorem seem to be "approximately true" in the short run. Most people go ahead and use it without worrying about this subtlety. To justify this, we
should modify the above argument by averaging not over an infinite time, but a finite time. This time should be long compared to the time it takes stars to go around the galaxy, while still short
compared to the time it takes for them to boil off. Then we'll approximately get <T> = -<V>/2.
I talk about this a bit more in another webpage, where I use the virial theorem to study the thermodynamics of gravitating systems. By the way, both that webpage and this one are heavily indebted to
discussions with people on sci.physics.research, especially Ted Bunn and Jim Means. I even borrowed some of Ted's exact words in the above discussion about applications to astronomy! He's an
astrophysicist; I'm not.
© 2000 John Baez
Theorem 2.4.1: No Square Roots in Q
Suppose there was such an x = sqrt(2). Being a rational number, we could write it as
• x = a / b (with no common divisors)
Since
• x^2 = x * x = 2
we have
• a^2 / b^2 = 2, or a^2 = 2 b^2
In other words, a^2 is even, and therefore a must be even as well. (Can you prove this?) Hence,
• a = 2 c for some integer c.
But then we have that
• 4 c^2 = 2 b^2, or 2 c^2 = b^2
As before, this means that b is even. But then both a and b are divisible by 2. That's a contradiction, because a and b were supposed to have no common divisors.
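One can illustrate (though of course not prove) the theorem with a brute-force search: the equation a^2 = 2 b^2 has no solution in positive integers, at least up to any bound we care to try.

```python
from math import isqrt

# Illustrative check (not a proof): a^2 = 2 b^2 has no positive
# integer solutions with b up to the limit below.
LIMIT = 10_000
for b in range(1, LIMIT + 1):
    a = isqrt(2 * b * b)       # floor of sqrt(2)*b, the only candidate
    assert a * a != 2 * b * b  # the relation is never exact
print("no solution with b <=", LIMIT)
```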
[–]darkbeanie
Many years ago I downloaded a bootlegged copy of Mathematica, mainly because I was curious about what it could do. I then discovered that there's a certain type of person who is capable of
understanding not only how to use Mathematica, but what sorts of things you would use it for -- and I am not that kind of person.
I felt like a border collie sniffing at a book, imagining that if I kept poking at it, I'd eventually figure out why humans like to open them and stare at them for long periods of time.
[–]BugeyeContinuum
I know lots of people who use Mathematica for its symbolic math capabilities. The general idea is that if you know math, and have to do calculations, you can do the conceptual parts yourself, and
have mathematica do all the boring cog turning.
Calculations in general relativity or quantum field theory or even fluid mechanics can often involve incredibly complicated mathematical expressions that are 10-15 lines long on paper. Back in the
day people would trudge through them on paper, using different colored pens and bookkeeping tricks to not lose track of things and it would take enormous amounts of time to complete, with a high
possibility of human error because of the sheer size.
Theorists no longer have to contend with spending endless hours doing brute-force calculations, only to realize they have to redo it because they accidentally missed a minus sign or a 2 along the way. Nowadays, people plug them into Mathematica, hit enter, and voila. It also gives people the ability to do things like calculate simpler forms of unwieldy math by making approximations, to get an intuitive idea of what's going on.
Re: st: RE: Table with medians of a variable with many categorical variables
From Jeph Herrin <junk@spandrel.net>
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: Table with medians of a variable with many categorical variables
Date Thu, 15 Mar 2007 16:48:43 -0400
I don't think "quick question" deserves quite this
much discredit. In particular:
1. I would agree with you that just as simple question
may have unsimple answers, quick questions may have
unquick answers; but I don't see why one would
therefore refrain from framing either as what
they are.
2. "quick question" doesn't indicate to me either
triviality (see 1 above) or apology.
Nick Cox wrote:
Never say "I have a quick question".
1. If the question's quick, the answer may not be!
2. If you think your question is trivial, then don't ask it. If you think your question is interesting, it needs no apology.
That said, -findit tabout-. I think -tabout- can do this.
Nick n.j.cox@durham.ac.uk
Mehmet Eris
I have a quick question about tables in Stata. I want to have a table of
medians for one variable, say income, for a bunch of categorical
variables, say household head's sex, educational level etc.
The table would look something like the following:
median of income
Educational level:
table A, c(median X) is sort of what I am looking for. It gives the
median of the variable X for each category in A. But this one
works for only one variable. I don't want to do it one by one for
each row categorical variable. It will be really tedious as I have to
do it for so many times.
Are there any tricks or commands that I can use to achieve what I want
to do? I also want to be able to export the table I get to Excel.
Thanks so very much in advance!
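[As an aside, the shape of table being asked for here (one median column, rows stacked over several categorical variables) can be sketched outside Stata in a few lines of plain Python; the data below are invented purely for illustration:]

```python
from collections import defaultdict
from statistics import median

# Sketch of the requested table: the median of one variable (income)
# for each category of several categorical variables, stacked into a
# single two-column layout.
records = [
    {"income": 100, "sex": "male",   "education": "primary"},
    {"income": 200, "sex": "female", "education": "primary"},
    {"income": 300, "sex": "male",   "education": "secondary"},
    {"income": 400, "sex": "female", "education": "secondary"},
    {"income": 500, "sex": "male",   "education": "tertiary"},
    {"income": 600, "sex": "female", "education": "tertiary"},
]

table = {}
for var in ("sex", "education"):
    groups = defaultdict(list)
    for rec in records:
        groups[rec[var]].append(rec["income"])
    for category, incomes in sorted(groups.items()):
        table[(var, category)] = median(incomes)

for (var, category), med in table.items():
    print(f"{var:10} {category:10} {med}")
```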
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Re: Csum and csum copy routines benchmark
>>>>> "Denis" == Denis Vlasenko <vda@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> writes:
Denis> [please drop libc from CC:]
Denis> On 25 October 2002 05:48, Momchil Velikov wrote:
>>> Short conclusion:
>>> 1. It is possible to speed up csum routines for AMD processors
>>> by 30%.
>>> 2. It is possible to speed up csum_copy routines for both AMD
>>> andd Intel three times or more.
>> Additional data point:
>> Short summary:
>> 1. Checksum - kernelpii_csum is ~19% faster
>> 2. Copy - kernelpii_copy is ~6% faster
>> Dual Pentium III, 1266Mhz, 512K cache, 2G SDRAM (133Mhz, ECC)
>> The only changes I made were to decrease the buffer size to 1K (as I
>> think this is more representative to a network packet size, correct
>> me if I'm wrong) and increase the runs to 1024. Max values are
>> worthless indeed.
Denis> Well, that makes it run entirely in L0 cache. This is unrealistic
Denis> for actual use. movntq is x3 faster when you hit RAM instead of L0.
Oops ...
Denis> You need to be more clever than that - generate pseudo-random
Denis> offsets in large buffer and run on ~1K pieces of that buffer.
Here it is:
Csum benchmark program
buffer size: 1 K
Each test tried 1024 times, max and min CPU cycles are reported.
Please disregard max values. They are due to system interference only.
csum tests:
kernel_csum - took 8678 max, 808 min cycles per kb.
kernel_csum - took 941 max, 808 min cycles per kb.
kernel_csum - took 11604 max, 808 min cycles per kb.
kernelpii_csum - took 28839 max, 664 min cycles per kb.
kernelpiipf_csum - took 9163 max, 665 min cycles per kb.
pfm_csum - took 2788 max, 1470 min cycles per kb.
pfm2_csum - took 1179 max, 915 min cycles per kb.
copy tests:
kernel_copy - took 688 max, 263 min cycles per kb.
kernel_copy - took 456 max, 263 min cycles per kb.
kernel_copy - took 11241 max, 263 min cycles per kb.
kernelpii_copy - took 7635 max, 246 min cycles per kb.
ntqpf_copy - took 5349 max, 536 min cycles per kb.
ntqpfm_copy - took 769 max, 425 min cycles per kb.
ntq_copy - took 672 max, 469 min cycles per kb.
ntqpf2_copy - took 8000 max, 579 min cycles per kb.
Ran on a 512K (my cache size) buffer, choosing each time a 1K
piece. (making the buffer larger (2M, 4M) does not make any
And the modified 0main.c is attached.
Description: Text Data
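[The access pattern Denis suggested (~1 KB pieces at pseudo-random offsets in a buffer larger than the cache) can be sketched as follows. Python is used here purely to illustrate the pattern; the buffer size, piece size and stand-in checksum are invented, and the real harness is the attached 0main.c:]

```python
import random
import time

# Checksum ~1 KB pieces at pseudo-random offsets inside a buffer much
# larger than the CPU cache, so each piece is fetched from RAM rather
# than from a cache-warm block.
BUF_SIZE = 8 * 1024 * 1024          # 8 MB, bigger than a 512 KB cache
PIECE = 1024

buf = bytearray(BUF_SIZE)
view = memoryview(buf)

def checksum(piece):                # stand-in for the routine under test
    return sum(piece) & 0xFFFF

offsets = [random.randrange(BUF_SIZE - PIECE) for _ in range(1024)]

t0 = time.perf_counter()
acc = 0
for off in offsets:
    acc = (acc + checksum(view[off:off + PIECE])) & 0xFFFF
elapsed = time.perf_counter() - t0

print(f"{len(offsets)} pieces in {elapsed:.4f} s, checksum {acc:#06x}")
```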
D'Deridex Class Weapon Power
In 'Unification, Part II', a Warbird decloaked and fired disruptor beams at the three stolen Vulcan ships - appearing to vaporize each with a one-half second discharge.
Where size is concerned, we can scale the ships next to a Romulan Warbird. Warbirds, at slightly over 1,200 metres, have a forward section approximately one-quarter of overall length, i.e. 300m.
Scaled to this forward section, the cargo vessels are of equal length and, as a rough guess, perhaps one-quarter the height of said section. The Enterprise-A is said to weigh some 1,000,000 metric
tons; that being the case, the Vulcan ships of comparable length and greater volume are certainly no less than that figure. 1 x 10^6 tons is 1,002,000,000 kg. To be conservative, let us assume it is
primarily composed of a metal with similar properties to Aluminium (900 J/kg K specific heat and boiling point of 2740 K).
Assuming a relatively high 305 K initial temp, we have the following:
E = Mass x specific heat x Temperature increase
= 1.002 x 10^9 x 900 x 2435
= 2.195883 x 10^15
= 2,195,883,000,000,000 joules
or about 2,195 terajoules
I captured this scene at a rate of twelve frames per second, and each shot lasted a total of six frames - hence each shot lasted exactly one half second. The lower-limit power of the Warbird's disruptor beams is, therefore, no less than 4,390 Terawatts.
This compares unfavorably with the 100,000 TW figure for the Type X phaser arrays of the Galaxy class starship, but of course there is no reason to suppose that the warbird would fire on maximum
power against defenceless targets. In addition, we know that at least some Federation vessels are composed of materials which are exceptionally difficult to heat - see the entry under the 'Materials'
entry for details.
Had the Vulcan ships been comprised of Tritanium instead of Aluminium, the blast would have had to be some 450 times more powerful, or 1,975,500 Terawatts! Obviously this is also out of line with the
Warbird being an approximate match for a Galaxy class starship.
On the other hand, it is not improbable that a civilian interstellar vessel would have a significantly weaker hull than a Starship, whilst still being much stronger than present day materials. If we
take the 78,000 Terawatt figure generated under the 'Pegasus' entry below as fact, then the Vulcan ships would have a hull some eighteen times tougher than Aluminium. This is certainly feasible, but
we are in the realms of manipulating the data to fit the conclusion here, so look at this one with caution.
In 'The Pegasus', a Romulan Warbird melts a significant portion of a large asteroid in order to seal the Enterprise-D inside a large chamber in the interior. The asteroid is described by the
Encyclopedia (2nd edition, page 24) as 'moon sized'. No clear indication of the overall size is possible from the episode itself, but the when the Enterprise is on its way out of the asteroid
Lieutenant Worf reports 'We have passed through two kilometres of the asteroid. Now within one kilometre of the surface.', indicating that the fissure is three kilometres deep. The fissure must have
a diameter of at least six hundred metres in order to accomodate the Enterprise-D. The Warbird melts sufficient rock to fill the entire fissure.
The time scale is uncertain. We see Admiral Pressman and Commander Riker on the Starship Pegasus when the attack begins; they beam to the Enterprise, then the scene cuts and shows them arriving on
the bridge to see the rock cooling. Twenty five seconds of screen time elapse, but some twenty or so extra seconds would be required for the two officers to reach the bridge from the transporter
room. The total time of the Warbirds barrage is therefore approximately forty five seconds.
The volume of asteroidal material melted can be calculated by the equation :
V = pi x average radius of fissure^2 x depth of fissure
= 3.142 x 300^2 x 3000
= 848,340,000 cubic metres.
Assuming that the asteroid is rock, the density, boiling point and specific heat capacity should be approximately 2,300 kg/cubic metre, 2500 K, and 720 J/kg/K respectively. The energy required to melt
this volume can be calculated thus :
E = 8.4834 x 10^8 x 2300 x 2500 x 720
= 3.512 x 10^18 Joules
The average power output of the Warbird was therefore equal to :
P = 3.512 x 10^18 / 45
= 7.805 x 10^16 Watts
Or 78,047 Terawatts. This fits in exactly with the Warbird having slightly less firepower than the 100,000 TW output calculated earlier for the Galaxy class.
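Again as a check of the arithmetic above (Python, with the same assumed figures, using pi rather than the rounded 3.142):

```python
import math

# Re-running the asteroid-melting estimate with the assumed figures.
radius, depth = 300.0, 3000.0             # m, fissure radius and depth
volume = math.pi * radius**2 * depth      # m^3 of melted rock

density = 2300.0                          # kg/m^3
c_rock = 720.0                            # J/(kg K)
delta_T = 2500.0                          # K, as used in the text

energy = volume * density * c_rock * delta_T
power = energy / 45.0                     # 45-second barrage

print(f"V = {volume:.4g} m^3, E = {energy:.3g} J, P = {power / 1e12:.0f} TW")
```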
Copyright Graham Kennedy
Idaho Public Television NTTI Lesson Plan: Bead It!
Lesson 1
Step 1: Pre-assessment. This step will help you design instruction that supports the development of incomplete or missing concepts and pinpoints errors or misunderstandings in student
thinking. Give each student a copy of the pre-assessment sheet to complete. Conduct one-on-one interviews to check their level of understanding of proportional thinking. Record this information
so you can keep track of each student's progress.
Step 2: View the video. Your questioning and explanations throughout the video will provide the FOCUS FOR MEDIA INTERACTION. Say, “Let’s see how this class works together to solve their problem.”
PLAY showing how the students use drawing, discussing, numbers and writing to solve their problem.
STOP and ask what your students saw happening in the classroom. SKIP to the spot where a group of 3 girls and 1 boy work through the problem together.
PAUSE before the teacher talks about the 6 packs of beads. Ask what they saw happening in the group. Point out how they all wrote on their pads using numbers.
PLAY through the segment showing the problem-solvers using drawings. PAUSE and point that out. PLAY through the segment that shows the teacher explaining the next step until after the students
explain their work at the board. STOP. Ask what your students saw on the group’s paper taped on the board. Did they notice how well the students explained their work?
Step 3: Use mathematicians’ names as you form groups. Some suggestions are: Fibonacci, Pythagoras, Euclid, Archimedes, Pascal, Descartes, Gauss, Boole. (We study historical figures in mathematics
throughout the year. They come alive through their fascinating stories.) Don’t forget to include women who contributed to math.
Step 4: In your problem, the original necklace contained 20 beads (2 colors) and 5 are left from the broken necklace. Can you recreate the whole necklace from these 5 beads? Discuss the problem.
Volunteers will explain their reasoning. Hand out pre-made packages of beads that contain either 2, 4, or 5 beads of 2 different colors. Each group gets one package so that different groups have
different numbers of beads. Groups will answer the question: How many packs of beads will it take to make the original 20 bead necklace? Hand out chart paper and markers to each group. Check each
group’s progress. One member of each group will present their solution at the board using their chart paper. Introduce the words, “1 package is to 2 beads as blank packages is to 20 beads, or 1
package is to 4 beads as blank is to 20, or 1 package is to 5 beads as blank is to 20.”
Lesson 2
Step 1: A new problem. Students will solve a similar problem. They will also assign values to the bead bags and determine the cost of the necklace. Give each group a pack of 4, 8, or 12 beads with 2
colors, chart paper and markers. Ask how many packs of beads they would need to make a necklace of 32 beads. They will create a chart that shows their reasoning and answer. Explain that there might
be more than one way to get an answer. They can make any pattern they choose and must show their pattern on the chart. They will get the necessary packs and make the necklace they designed. One member
of each group will present their solution at the board using their chart paper, their necklace, and the language of proportions.
Step 2: What is the cost of our necklace? Explain that a pack of 4 beads costs $4, a pack of 8 beads costs $5, and a pack of 12 beads costs $7. They will determine the cost of their necklace.
Using the same procedures as the previous problem, each group will present their conclusions and reasoning.
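For reference, the arithmetic behind both problems can be sketched in a few lines (prices as given above; whole packs are assumed, which also surfaces the fact that 12 does not divide 32 evenly — a good discussion point):

```python
import math

necklace_size = 32
prices = {4: 4, 8: 5, 12: 7}  # beads per pack -> cost of one pack, in dollars

for pack_size, price in prices.items():
    # "1 pack is to pack_size beads as ? packs is to 32 beads"
    packs = math.ceil(necklace_size / pack_size)  # round up to whole packs
    leftover = packs * pack_size - necklace_size
    print(f"{pack_size}-bead packs: {packs} packs, ${packs * price}, "
          f"{leftover} beads left over")
```

This prints 8 packs at $32 for the 4-bead packs, 4 packs at $20 for the 8-bead packs, and 3 packs at $21 (with 4 spare beads) for the 12-bead packs.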
Step 3: In the classroom, display all the work.
Lesson 3
Step 1: Students will look for patterns in nature and in the manmade world. Take a walk around your community and, using a digital camera or Polaroid, let students take photos of any patterns they
see (at least one for each student).
The student will decide if his pattern repeats (not all will) and will draw and describe the pattern in his math journal.
Students will apply their knowledge of: ____ is to _____ as _____ is to _____ in their descriptions.
Students will present their work at the board.
Step 2: Make a mural of the photos and students’ charts. Student Assessment: Read the math journals for improved understanding.
Look for patterns in the animal world and discuss why animals might need this characteristic. (camouflage, mate attraction, a signal to stay away.)
Language arts:
Daily use of a math journal.
Students can apply their new knowledge as they design and make a beaded bracelet on small looms. This will take a few days to accomplish. You’ll demonstrate the method in the first lesson. Students
will design the bracelet using graph paper and colored pencils. The bracelet will be 3 beads wide.
Let the students work independently on this however you wish. They must make a repeating pattern and they must calculate how many beads they will need to complete a 5” beaded section. Make a class
quilt with repeating patterns.
Students can use proportional reasoning to determine how many pieces of each color they will need.
Community Connections:
Weavers and quilters often use proportional reasoning. Invite one to the class to explain his/her craft.
Proposition 10
Of magnitudes which have a ratio to the same, that which has a greater ratio is greater; and that to which the same has a greater ratio is less.
Let A have to C a greater ratio than B has to C.
I say that A is greater than B.
If not, then A either equals B or is less than it.
Now A does not equal B, for in that case each of the magnitudes A and B would have the same ratio to C, but they do not, therefore A does not equal B.
Nor is A less than B, for in that case A would have to C a less ratio than B has to C, but it does not, therefore A is not less than B.
But it was proved not to be equal either, therefore A is greater than B.
Next, let C have to B a greater ratio than C has to A.
I say that B is less than A.
If not, it is either equal or greater.
Now B does not equal A, for in that case C would have the same ratio to each of the magnitudes A and B, but it does not, therefore B does not equal A.
Nor is B greater than A, for in that case C would have to B a less ratio than it has to A, but it does not, therefore B is not greater than A.
But it was proved that it is not equal either, therefore B is less than A.
Therefore, of magnitudes which have a ratio to the same, that which has a greater ratio is greater; and that to which the same has a greater ratio is less.
This converse to proposition V.8 has two statements.
If a : c > b : c, then a > b.
If c : b > c : a, then b < a.
Part of the law of trichotomy for ratios is used in this proof, the part which says at most one of the three cases a : c < b : c, a : c = b : c, or a : c > b : c, can occur.
Euclid’s proof relies on using V.Def.4 as an axiom of comparability since it uses proposition V.8 and the law of trichotomy for ratios. But the proposition can also be proved without the axiom.
Suppose a : c > b : c. Then there are numbers m and n such that na > mc but nb is not greater than mc. Therefore na > nb. Therefore a > b. Thus a : c > b : c implies a > b.
The other implication of the proposition can be proved similarly.
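For completeness, the second implication can be written out in the same modern style, again using the numerical characterisation of the greater ratio from V.Def.7:

```latex
Suppose $c : b > c : a$. Then there are numbers $m$ and $n$ such that
\[
nc > mb \quad \text{but} \quad nc \le ma.
\]
Combining these, $mb < nc \le ma$, so $mb < ma$, and therefore $b < a$.
Thus $c : b > c : a$ implies $b < a$.
```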
This proposition is used a few times in book V starting with V.14.
What is a Concatenative Language
December 31, 2008
Recently there has been some discussion on the concatenative discussion group about the term "concatenative language" and what it actually means. In this post I provide my definition and attempt to
deconstruct it.
The canonical example of a concatenative language is the Joy programming language by Manfred von Thun. I consider my language Cat to be concatenative as well. Factor by Slava Pestov is another
language that describes itself as concatenative. For more examples of concatenative languages see the concatenative wiki and Wikipedia.
One problem the concatenative community currently has is a lack of a rigorous definition. This makes it hard to determine whether a given language is concatenative or not, and makes any
formal study difficult. The following is my attempt to provide a rigorous definition for a concatenative programming language:
"A concatenative programming language is a language in which terms correspond to functions and in which the juxtaposition of terms denotes the composition of functions."
For those readers not scared away yet, allow me to elaborate. A term is a valid and complete syntactic phrase that can be generated from the concrete (syntactic) grammar of a language. For example in
Scheme "(f a 5)" is a term, as is "a" or "5" or "f". However, "(f" is not a term. Juxtaposition is just a fancy way of saying "two terms side by side". A concatenative language differs from the
functional programming language paradigm where terms correspond to values (including functions) and the fundamental operation is function application. In the Lambda calculus for example -- which is
the basis for functional programming -- all terms correspond to values, function application, or function abstraction (i.e. lambda expressions). However, in a concatenative language all terms
correspond to functions on a tuple (e.g. a single stack, a pair of stacks, a pair of stacks plus a dictionary, or even a deque if you are really masochistic), or the composition of functions (which
yields a new function, so is really a function).
So for example a literal term such as "42" in Joy or Cat is in fact a function that maps a stack to a new stack that is a copy of the original with the value 42 added to the top. For most practical
purposes a programmer may think of the term "42" as being equivalent to the value "42". This in fact aids computations, but it can confuse the theory a bit. To have a correct and formal understanding
of programming languages we have to understand the term/value distinction. In very down-to-earth terms: the operation that pushes a value on the stack is different from the value that is on the stack.
At this point, some people may say, hey what about quotations? In Joy and Cat a quotation is a function that yields a new stack with a function on the top. This is consistent with what I have been saying.
While in theory a concatenative language does not require a stack, in practice most concatenative languages are stack-based, and at the same time most stack-based languages can be modeled formally as
a concatenative language.
An interesting side note: it is not strictly necessary to evaluate a concatenative language like Joy or Cat using a stack, it is simply convenient. One could for example devise an evaluator (e.g. an
interpreter) that used term rewriting.
In practical terms a concatenative language is not really different from a functional language, except that there is less nesting of functions. Rather than writing
(f0 (f1 (f2 ... (fn x) ...)))
We could write:
x fn ... f2 f1 f0
One of the interests of concatenative languages is that it is a formal computation model that closely models the actual processor of a lot of computers. It also corresponds nicely to both imperative
and functional reasoning about code.
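The idea that terms denote stack functions and juxtaposition denotes composition can be made concrete in a few lines. The sketch below is plain Python, not Joy or Cat syntax; the names `lit` and `compose` and the small word set are illustrative only:

```python
def lit(v):
    # A literal term like "42" denotes a *function* that pushes v,
    # not the value v itself.
    return lambda stack: stack + [v]

def add(stack):
    *rest, a, b = stack
    return rest + [a + b]

def dup(stack):
    return stack + [stack[-1]]

def compose(*terms):
    # Juxtaposition "t1 t2 ... tn" is left-to-right function composition.
    def program(stack):
        for term in terms:
            stack = term(stack)
        return stack
    return program

# The program "3 4 + dup +" maps the empty stack to [14]:
program = compose(lit(3), lit(4), add, dup, add)
print(program([]))  # [14]

# The factoring property: name any sub-sequence and reuse it.
f = compose(lit(3), lit(4), add)  # "f" is defined as "3 4 +"
print(compose(f, dup, add)([]))   # [14] again
```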
One interesting property of some concatenative languages that has me particularly interested is that we can replace any referentially transparent sub-sequence of terms with a new function that is
defined as that sub-sequence. For example "a b c d a b c" can be replaced with "f d f" where "f" is defined as "a b c". This makes automated code refactoring and analysis much easier. Hence the
moniker chosen by Slava for his language Factor. This is also why I am actively studying the usage of concatenative languages for code size optimization (let me know if this interests you, and I'll
tell you more).
I think that it may benefit the community to distinguish between pure point-free concatenative languages (e.g. those with no environment and control structures such as Joy and Cat) and those with an
explicit environment (e.g. Postscript and Forth), as is done in the functional community.
Hopefully this blog entry didn't get too esoteric for my readers. Maybe I'll write about some neat C++ hacks next time. :-)
What examples of distributions should I keep in mind?
I'm learning a bit about the theory of distributions. What examples of distributions will help me develop good intuition?
Definitions: Let $U$ be an open subset of $\mathbb{R}^n$. Write $C_c^\infty(U)$ for the complex vector space of infinitely differentiable functions $U \to \mathbb{C}$ with compact support. A
distribution on $U$ is a linear map $C^\infty_c(U) \to \mathbb{C}$, continuous with respect to a certain topology on $C^\infty_c(U)$.
Examples: If $\mu$ is a signed measure on $U$, finite on compact subsets, then $f \mapsto \int_U f \mathrm{d}\mu$ is a distribution. (This covers, for instance, the Dirac distribution, $f \mapsto f(0)$.)
More generally, write $D_i = \partial/\partial x_i$. Then $$ f \mapsto \int_U D_{i_1} D_{i_2} \cdots D_{i_r} f \mathrm{d}\mu $$ is a distribution, for any indices $i_1, \ldots, i_r$ and measure $\mu$.
Any linear combination of such things is again a distribution, since distributions form a vector space. E.g. if $n \geq 3$ then there's a distribution $$ f \mapsto \int_U D_3 D_1 f \mathrm{d}\mu + \int_U D_2^2 D_3 f \mathrm{d}\nu $$ for any measures $\mu$ and $\nu$. I guess we can also take infinite linear combinations, subject to convergence conditions.
My question: Is it OK if I go round thinking of things like the last example as being a typical sort of distribution? Or is the concept of distribution much more general than I'm realizing? The texts
I've seen are short on this kind of intuition.
The measure has to be finite on compact sets, right? – MLevi Nov 9 '09 at 3:31
Right. Thanks. – Tom Leinster Nov 9 '09 at 4:12
Yeah... gets confusing sometimes when so general ;) – MLevi Nov 9 '09 at 4:17
7 Answers
From the way your question is phrased, it seems as though you want to get a handle on particular distributions rather than the space of all distributions. In which case, the result cited by
Debraj is probably the most comprehensive. Properly stated, the result is:
Theorem: If $T \in C^\infty(\mathbb{R},\mathbb{C})'$ (continuous dual) with $supp T \subseteq K$ ($K$ compact) then there are integers $n_1$, $n_2$, ..., $n_p$ and continuous functions $f_1$,
$f_2$, ..., $f_p$ with supports in $K$, such that
$$ \sum_{j=1}^p f_j^{(n_j)} = T $$
The references for this are: Schwartz, Théorie des distributions (1965) and Vo Khac Khoan, Distributions, Analyse de Fourier, Opérateurs aux dérivées partielles (1972).
Then, of course, any arbitrary distribution can be written as the sum of distributions with compact support in a "nice" way.
From this point of view, the best examples are ones that are close enough to continuous functions that they are accessible (sorry, I know you're a category theorist but read that in British
not categorish) but far enough away that you see some weird behaviour that you wouldn't expect if everything was a nice, continuous function. The examples mentioned in other answers are all
good from this point of view: delta functions, derivatives of delta functions, $L^p$ functions, derivatives thereof. I'd add a few things like the Dirac comb, $\Delta_{a} = \sum_{n \in \mathbb{Z}} \delta_{n a}$ for $a \in \mathbb{R}$, $a \ne 0$, which has a particularly nice Fourier transform. You could integrate this to get an infinite staircase function (the floor
function, that is). Indeed, any piecewise continuous function is actually a limit of a sequence of variations on the theme of Dirac's comb (i.e. where the tines can vary in length and
separation) so Dirac's comb and its derivatives are the "only" distributions you need to know about.
But for me, this is the wrong way to think about distributions. If you want to understand distributions by looking at specific examples then you should really say that distributions are just
smooth functions with compact support but in a slightly different topology. Once you've grokked the topology, then there's no reason not to simply think about really nice smooth functions.
And if you haven't grokked the topology, then none of the "examples" is going to give you a good intuition as to how distributions behave. Indeed, I'd say that most of the examples are
designed to make you think about the topology and to "shock" you into realising that the topology isn't what you naturally assume it should be when thinking about smooth functions.
I think of distributions simply as dual to smooth functions. The fact that we can think of functions as distributions is simply down to the fact that we have a pairing
$$ (f,g) \mapsto \int_{\mathbb{R}} f(t) g(t) d t $$
between many of the different function spaces that we can define. (Note the lack of conjugation.) This pairing defines a map from the one function space into the dual of the other and we can
ask how much of the dual we can see in this way. That's essentially what the results about representing distributions try to answer. But this doesn't give much intuition as to what the dual
space looks like as a whole because it tries to build it up piece by piece, each time saying "have we got it all yet"?
For example, many of the answers you got talk about differentiation of distributions. How do we know that we can differentiate these? In one answer, you got the formula $\partial \phi (f) = -\phi( \partial f)$. Where did that minus sign come from? After all, if I'm in tempered distributions then I can define the Fourier transform of a distribution and then the formula is $\mathcal{F}(\phi)(f) = \phi(\mathcal{F}(f))$. Why a minus sign on the one and not on the other? And I can multiply smooth functions, so why can't I multiply distributions? What's going on?
The truth is that by simply embedding functions into distributions you miss out on the whole duality story and the difference between defining a dual operator versus an extension operator.
But I've already written up this part on the n-lab so I'll simply refer you to there for the next chapter. Take a look over there. And while you're there, add your favourite of the above
examples and correct the statement of the theorem.
Hey! Not fair! My first answer that gets more than 10 votes, and it's a community wiki question. Where's the referee when you need them?! – Andrew Stacey Nov 10 '09 at 7:55
Andrew, you'll just have to bask in the warm glow of appreciation. Your answer was very helpful, especially to me. – Tom Leinster Nov 11 '09 at 23:26
Bask. Bask. Given that it's below freezing here, then a warm glow of appreciation is more use than the reputation points so I'll content myself with that. – Andrew Stacey Nov 12 '09 at 8:25
A rather crazy (and very useful) example is a fundamental solution of an arbitrary differential equation with constant coefficients, i.e., a distribution $u$ satisfying $P(D)u=\delta_0$
where $P$ is a polynomial and $D$ is the differentiation operator. The construction can be found in many decent PDE textbooks. It is as far from the standard "take a non-smooth function,
differentiate a few times" idea of how to get distributions as possible.
Another thing to understand is that, like with everything else, it is even more important to learn what you can and what you cannot do with distributions than what they can be.
This is a good family of examples, but it seems more algebraic than analytic. The construction with P yields a D-module supported on the zero set of P. – S. Carnahan♦ Nov 9 '09 at 15:09
I believe you should start with the theory of tempered distributions, which are the continuous linear functionals $\phi:\mathcal S(\mathbb R^n) \to \mathbb C$ where $\mathcal S(\mathbb R^n)$ is the
Schwartz space on $\mathbb R^n$, i.e. the $C^\infty$ functions on $\mathbb R^n$ which, together with all their derivatives, decay faster than any polynomial.
You can get more intuition in $\mathcal S'$, since the tempered distributions behave pretty much as functions. In fact, every $f\in L^p$ is a distribution, via $$f(g) = \int fg$$ for every
$g\in\mathcal S$. You can take a derivative $\partial$ of a distribution $\phi$ via $$\partial \phi(f) = -\phi(\partial f),$$ or the Fourier transform via $$\hat\phi(f) = \phi(\hat f).$$
A good reference is Folland's Real Analysis book, Chapter 9.
Two comments:
(1) Every distribution can be locally represented as a (distributional) partial derivative of a continuous function. For example, for the Dirac delta at 0, we can start from the function
which is 0 for negative x and equal to x for positive x, and take two derivatives. Therefore, it is important to understand that not all distributions are made equal -- the more
complicated ones are made by taking more derivatives of continuous functions.
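Point (1) is easy to check numerically. Writing the ramp function $r(x) = \max(x, 0)$, moving both derivatives onto a test function by integration by parts gives $\langle r'', \varphi\rangle = \langle r, \varphi''\rangle = \int r(x)\varphi''(x)\,dx$, which should equal $\varphi(0)$. The sketch below verifies this with a Gaussian test function (not compactly supported, but it decays fast enough for the purpose) and a plain midpoint Riemann sum:

```python
import math

def phi(x):
    return math.exp(-x * x)  # test function (decays rapidly)

def phi_pp(x):
    return (4 * x * x - 2) * math.exp(-x * x)  # its second derivative

def ramp(x):
    return x if x > 0 else 0.0  # 0 for negative x, x for positive x

# <ramp'', phi> = <ramp, phi''> = integral of ramp(x) * phi''(x) dx,
# approximated by a midpoint rule on [-10, 10].
n, a = 200_000, 10.0
h = 2 * a / n
integral = h * sum(ramp(-a + (i + 0.5) * h) * phi_pp(-a + (i + 0.5) * h)
                   for i in range(n))

print(integral, phi(0.0))  # both approximately 1.0
```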
(2) Some examples to definitely keep in mind (to emphasize the subtlety of the notion) while thinking about distributions are the principal value p.v. $\frac{1}{x}$ and the
pseudofunctions p.f. $\frac{1}{x^n}$.
I thought p.f. was pronounced as "finite part", but "pseudofunction" makes sense too. – timur Jun 27 '11 at 1:47
Stepping back from the problem a little bit, I'd say that focusing on distributions is not the right approach. It's obvious from the way you've written your question that you understand
the basics of distribution theory. Distributions are meant to fade into the background once you've established their theory. I'd say concentrate on Sobolev spaces, their embedding
theorems, and their applications.
Absolutely not! Distributions are fascinating in their own right and deserve to be centre stage, not simply as a background for Sobolev spaces and other such "constructed" spaces. – Andrew Stacey Nov 9 '09 at 9:35
Andrew, maybe instead of implying that distributions should fade into the background, I should have said you may let distributions fade into the background if their applications are your main interest. – John D. Cook Nov 9 '09 at 22:17
Yes, that's better. – Andrew Stacey Nov 19 '09 at 8:55
Although $L^1_{loc}$ does not contain the Dirac distribution, it may be useful to distinguish $L^1_{loc}$-distributions from say distributions represented by Radon measures. Your question
is interesting, because it is definitely important to understand examples of distributions. That said, perhaps the motivation for distributions is equally important. Distributions help us
take weak derivatives. The definition of a derivative of a distribution is motivated by Integration by Parts.
As you may know, many mathematicians are more interested in working with specific distributions such as those in Sobolev spaces such as $W^{k,p}(U)$ ($1\leq p\leq \infty$), $BV(U)$
(integrable functions whose first order (weak) derivatives are signed measures with finite variation), or even tempered distributions. Then there are distributions like $$T(\phi):=\sum_{k=1}^{\infty} \int_{0}^{1}\frac{\phi(x)\sin(k\pi x)}{x}\, dx$$ for $\phi\in C_c^{\infty}((0,1))$. I guess the point is, be careful not to think that all distributions somehow behave the
same way.
Every distribution is, locally, a finite number of derivatives of measures. You can prove this with Hahn Banach: restricted to an open set $\Omega$ with compact closure, your distribution
belongs to the dual of $C^k(\Omega)$ for some $k$. Note that $C^k$ embeds into $(C^0)^N$ for some large $N$ by taking $f \mapsto (f, D f, D^2 f, \ldots, D^k f)$. By Hahn Banach your
distribution is the restriction of some continuous linear functional on $(C^0)^N$ (i.e. an element of its dual), which has the form $u(f) = \sum_{|\alpha| \leq k} \int \partial^\alpha f(x) d\mu_\alpha$. This
characterization can then be used to deduce the one mentioned in other responses where you replace $\mu_\alpha$ with some continuous functions, but you have to take more than $k$ derivatives
so the latter characterization can be a bit misleading.
So your intuition is right, all a distribution does is take a few derivatives and integrate. In this sense, distribution theory is the natural setting to combine differential calculus with
measure theory. Once you understand how crazy measures can be (e.g. measures on hypersurfaces; the derivative of the Cantor function is a measure supported on the Cantor set), you basically
have the extent of the pathologies of distributions. But a lot of distributions don't come given to you as derivatives of measures ($p.v. \frac{1}{x}$ is a good example; it requires one
derivative to define, but the Hilbert transform $f \mapsto \frac{1}{\pi} p.v. \int \frac{f(x-y)}{y} dy$ is a bounded operator on $L^2({\mathbb R})$!).
Copyright © University of Cambridge. All rights reserved.
The film strip shows the steps in the construction of the regular pentagon.
Copy this straight edge and compass construction for yourself and explain why it produces a regular pentagon.
The description of the construction below, and the information in the notes, should help you to explain the construction.
Here are the steps shown in the film sequence:
1. Draw a circle $C_1$ centre $O$ diameter $PQ$.
The circle $C_1$ has radius 1 unit; what is its equation?
2. Draw the perpendicular bisector of $PQ$ cutting $PQ$ at $O$ and $C_1$ at $A$ and $Y$.
3. Draw perpendicular bisectors of $PO$ and $OQ$ cutting $PQ$ at $R$ and $S$.
Find the length $YS$.
4. Draw circles $C_2$ and $C_3$ centres $R$ and $S$ and radii $RO$ and $SO$.
5. Join $R$ and $S$ to the point $Y$ cutting $C_2$ at $T$ and $U$ and $C_3$ at $V$ and $W$.
6. Draw circle $C_4$ centre $Y$ radius $YW=YU$ cutting $C_1$ at $D$ and $C$.
What is the equation of $C_4$? Find the value of $y$ at the intersection of $C_1$ and $C_4$.
7. Draw circle $C_5$ centre $Y$ radius $YT=YV$ cutting $C_1$ at $E$ and $B$.
What is the equation of $C_5$ ?
Find the value of $y$ at the intersection of $C_1$ and $C_5$.
At $B$ and $E$, $x^2 + y^2 + 2y + 1 = 2y + 2 = (3 + \sqrt 5)/2$, so $y = (\sqrt 5 - 1)/4 = \sin 18^\circ$, the $y$-coordinate of two vertices of a regular pentagon inscribed in $C_1$.
8. Join $AB$, $BC$, $CD$, $DE$, $EA$.
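One way to convince yourself the construction works is to compute the points numerically. The sketch below places $O = (0,0)$, $A = (0,1)$, $Y = (0,-1)$, derives the two chord heights from the radii as in the hints, and checks that all five sides come out equal (plain Python; the pairing of the larger radius with $B$ and $E$ follows the hint above):

```python
import math

# |RY| with R = (-1/2, 0) and Y = (0, -1); the line RY meets C2 at
# distances |RY| -/+ 1/2 from Y, giving the two radii used in steps 6 and 7.
ry = math.hypot(0.5, 1.0)             # sqrt(5)/2
r_small, r_large = ry - 0.5, ry + 0.5

def cut(r):
    # Intersect C1 (x^2 + y^2 = 1) with the circle of radius r about Y:
    # x^2 + (y + 1)^2 = r^2  =>  2y + 2 = r^2.
    y = (r * r - 2) / 2
    x = math.sqrt(1 - y * y)
    return (x, y), (-x, y)

A = (0.0, 1.0)
B, E = cut(r_large)   # y = (sqrt(5) - 1)/4, as in the hint above
C, D = cut(r_small)   # y = -(sqrt(5) + 1)/4

sides = [math.dist(p, q) for p, q in [(A, B), (B, C), (C, D), (D, E), (E, A)]]
print([round(s, 9) for s in sides])  # five equal sides, each 2*sin(36 degrees)
```

(`math.dist` needs Python 3.8 or later.)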
How would you adapt this construction to produce a regular decagon?
Michigan City, IN Algebra Tutor
Find a Michigan City, IN Algebra Tutor
...I often times ask for feedback so that I can aid individuals in a way that meets their needs. Everyone is different. Also, I never bill for a lesson if a student or parent is not completely
satisfied with my tutoring.
11 Subjects: including algebra 1, algebra 2, calculus, physics
...I am a non-smoker. I believe that anyone can do whatever they set their minds to.I have passed the Reading Specialist Praxis test to obtain my transition to teaching certificate from IWU. I
enjoy working with the grammar and the English language.
25 Subjects: including algebra 2, algebra 1, reading, English
...With WyzAnt, I hope to be able to reach my desire to personally help more people who have not been able to catch up with the required skills in mathematics to improve their prospects in their
lives. I have taught Algebra and Algebra II at several high schools, and I'm constantly working with a l...
11 Subjects: including algebra 1, algebra 2, geometry, SAT math
I have spent in excess of 30 years in the chemical and environmental industry as an industrial trainer, research engineer, supervisor and manager. I have authored technical articles and made
numerous presentations to both technical and public audiences. I believe that in order for anyone to unders...
13 Subjects: including algebra 2, biology, calculus, algebra 1
...I have three children of my own (14, 7, and 4). My strengths are in Math, Writing, and strategic test-taking. I look forward to working with you.Algebra 1 is one of my favorite subjects to
teach. I am intimately familiar with multiple-step equations, graphing, systems of equations, and all of the rules that are associated with algebra.
28 Subjects: including algebra 1, algebra 2, English, writing
Collision Response Question
Ok, so I need some help with the math of a collision response. The detection part is finished and works properly - but that's the easy part. The following is a diagram:
Basically the ball has a direction vector of Vector A, and it's going to bounce off of the inside of the circle. I understand that it needs to reflect off the vector drawn between the large circle's
origin and the point of collision (Vector B), thus becoming Vector C. I think Vector D gets involved somewhere in the calculations - that's why I've included it.
What's the easiest way to do this? Seems a bit complicated, but I'm sure there's an easy solution to this one... Thanks in advance.
Sir, e^iπ + 1 = 0, hence God exists; reply!
Or more succinctly:
perpendicular = (A • B) × B
parallel = A – perpendicular
C = parallel × friction + perpendicular × –bounce
where friction and bounce > 0 and < 1
and B is a unit vector
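To make the notation concrete, here is a minimal Python sketch of the formula above (the function names are my own; B is assumed to be a unit normal, and × is read as plain scalar multiplication, as clarified later in the thread):

```python
def dot(a, b):
    """2D dot product."""
    return a[0] * b[0] + a[1] * b[1]

def reflect(A, B, friction=1.0, bounce=1.0):
    """Split A into a component along B (the unit normal) and a component
    along the wall, then recombine with the damping factors."""
    d = dot(A, B)                                  # scalar (A . B)
    perpendicular = (d * B[0], d * B[1])           # (A . B) x B
    parallel = (A[0] - perpendicular[0], A[1] - perpendicular[1])
    return (parallel[0] * friction - perpendicular[0] * bounce,
            parallel[1] * friction - perpendicular[1] * bounce)
```

With friction = bounce = 1, a ball falling straight down onto a floor with normal (0, 1), i.e. A = (0, -1), comes back as (0, 1).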
Ok, the Mathematica page looks great - but very confusing to someone who hasn't taken math courses in 6 years.
Will: that explanation looks decent, but I have to wrap my mind around it for a bit. It's important to me that I *understand* this - makes debugging easier. Assuming there is no friction and the
bounce is constant, would I use values of 1.0 for each?
And when you suggest to multiply floats by vectors, do you mean simply to multiply the value by both elements of the vector?
Excuse my math skills, or lack thereof. It's been a long time since high school math, and I find this all a bit confusing.
Yes, when you multiply a vector by a float, just multiply both elements of the vector. With a bounce of exactly 1 the ball never stops bouncing. With a bounce greater than 1 it gets exponentially faster until something goes wrong.
The maths above splits the ball's velocity vector into two orthogonal components: one along vector B, perpendicular (pp) to the surface at the collision point, and one along vector D, parallel (p) to the surface at the collision point. It is then easy to combine these back to get the reflection vector.
Wow, thanks Will! Great diagram - what did you use to make it? I grow quickly tired of Sketch.app!
I'll try to implement that tonight.
I was just reading on this page that the cross product can only be computed for 3D vectors. Since this is 2D space, what does (A • B) × B mean (how do you calculate it)?
A dot B
the dot product is
a.x*b.x + a.y*b.y
in 3D
a.x*b.x + a.y*b.y + a.z*b.z
an important property is that
a.b/(|a||b|) = cos (the angle between a and b)
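A quick Python check of that cosine property (the vectors are just illustrative):

```python
import math

a, b = (1.0, 0.0), (1.0, 1.0)

dot = a[0] * b[0] + a[1] * b[1]                  # 2D dot product
norm_a = math.hypot(a[0], a[1])
norm_b = math.hypot(b[0], b[1])

cos_angle = dot / (norm_a * norm_b)
angle_deg = math.degrees(math.acos(cos_angle))   # 45 degrees for these vectors
```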
unknown Wrote:A dot B
the dot product is
a.x*b.x + a.y*b.y
in 3D
a.x*b.x + a.y*b.y + a.z*b.z
an important property is that
a.b/(|a||b|) = cos (the angle between a and b)
But what about the cross product between the resulting scalar and B? Do I just multiply each of B's components by the scalar?
For example, say the dot product turned out to be 3. Would I just stretch each of B's components by a factor of 3?
Quote:But what about the cross product between the resulting scalar and B?
You can only do the cross product in 3D.
Quote:Would I just stretch each of B's components by a factor of 3?
That would give you (A • B) x B.
Technically it is not a cross product; cross products are only defined between vectors. Here it is just a scalar multiply, so yes, multiply each of B's components by the scalar. In other words, (A • B) × B is just B scaled by the dot product of A and B.
Quote:Wow, thanks Will! Great diagram - what did you use to make it? I grow quickly tired of Sketch.app!
If you are interested in beta-testing let me know.
my brother just told me it's probably better to write it like (A•B)B in order to avoid confusion
Well, the × means multiply, and you can also write it like that, or *, or •; the last one is probably quite confusing if you're using it for the dot product as well!
ARIGHT! It works! Thanks so much, Will!
Code:
/* General mathematical idea:
   define vector B as the vector from the origin through the point of collision
   D = (A • B) × B
   parallel = A - D
   C = parallel + D × -bounce */

// Define a temporary vector for calculations ...
vector A = [ball direction];

// Calculate the point where the ball is colliding with the play area
vector pointOfCollision = (vector){ ([ball position].x + (A.x * [ball radius])),
                                    ([ball position].y + (A.y * [ball radius])) };

// ... continue by defining 3 more temporary vectors ...
vector B = (vector){ (pointOfCollision.x - 250.0), (pointOfCollision.y - 250.0) };
B = vectorNormalise(B);
vector D = vectorMultiply(B, vectorDotProduct(A, B));
vector parallel = vectorSubtract(A, D);

// ... and finish by calculating the last "temporary" vector, which is the resulting vector
vector C = vectorAdd(parallel, vectorMultiply(D, -BOUNCE));
C = vectorNormalise(C);

// Apply the change to the ball object's direction vector
[ball setDirection:C];
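For anyone who wants to experiment outside Objective-C, here is a rough Python translation of the routine above. The circle centre (250, 250) and the BOUNCE constant are taken from the post; the helper names are illustrative:

```python
import math

BOUNCE = 1.0               # perfectly elastic, as discussed above
CENTRE = (250.0, 250.0)    # centre of the play-area circle (from the post)

def normalise(v):
    m = math.hypot(v[0], v[1])
    return (v[0] / m, v[1] / m)

def bounce_off_circle(position, direction, radius):
    """Reflect the ball's direction off the inside of the big circle."""
    # Point where the ball touches the circle
    px = position[0] + direction[0] * radius
    py = position[1] + direction[1] * radius
    # B: unit vector from the circle's centre through the collision point
    B = normalise((px - CENTRE[0], py - CENTRE[1]))
    # D: component of the direction along B
    d = direction[0] * B[0] + direction[1] * B[1]
    D = (d * B[0], d * B[1])
    parallel = (direction[0] - D[0], direction[1] - D[1])
    # C = parallel + D * (-BOUNCE)
    C = (parallel[0] - D[0] * BOUNCE, parallel[1] - D[1] * BOUNCE)
    return normalise(C)
```

A ball moving straight up, direction (0, 1), hitting the top of the circle comes back as (0, -1).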
IMA Newsletter #340
Natalia Alexandrov (NASA Langley Research Center) Mathematical Perspectives on NASA Applications
Abstract: NASA is an unending source of spectacularly interesting problems for an applied mathematician. Although it has traditionally been an "engineering shop", in recent years the growing
complexity of goals and the ever increasing computational power clearly necessitate the development of sophisticated computational models and rigorous numerical procedures, thus providing an
opportunity for a closer collaboration between NASA engineers, scientists and applied mathematicians. I will give an overview of some interesting problems in modeling and design, as well as some
ideas of working for and with NASA.
Allison Baker (Lawrence Livermore National Laboratories) Scalable conceptual interfaces in hypre
Abstract: The hypre software library provides high performance preconditioners and solvers for massively parallel computers. For ease of use, hypre's conceptual interfaces allow users to describe a
problem in a natural way, such as in terms of grids and stencils. In anticipation of machines with tens or hundreds of thousands of processors, we recently re-examined these interfaces and made
substantial design changes to improve scalability. In this poster, we describe the challenges we faced and present solutions.
Katherine Bartley (University of Nebraska-Lincoln) Decoding Algebraic Geometric Codes over Rings
Abstract: Many techniques of algebraic geometry have been applied to study of linear codes over finite fields, beginning with the definition of algebraic geometry codes by Goppa in 1977. In 1996
Walker defined algebraic geometric codes over rings after it had been shown that certain nonlinear binary codes are nonlinear projections of linear codes over Z/4. Many algorithms have been developed
for the efficient decoding of algebraic-geometric codes over fields. We will show that we can modify the 'Basic Algorithm' to decode algebraic geometric codes over rings with respect to the Hamming
distance. We would also like to find a decoding algorithm that decodes algebraic geometric codes over rings with respect to the squared Euclidean distance.
Margaret Cheney (Rensselaer Polytechnic Institute) Radar imaging
Abstract: This talk will survey some of the mathematical ideas behind the formation of high-resolution images from radar data, and will outline some of the open problems in the field.
Agata Comas (Rice University) The Numerical Solution of Linear Quadratic Optimal Control Problems by Time-Domain Decomposition
Abstract: Optimal control problems governed by time-dependent partial differential equations (PDEs) lead to large-scale optimization problems. While a single PDE can be solved marching forward in
time, the optimality system for time-dependent PDE constrained optimization problems introduces a strong coupling in time of the governing PDE, the so-called adjoint PDE, which has to be solved
backward in time, and the gradient equation. This coupling in time introduces huge storage requirements for solution algorithms. We study a time-domain decomposition based method that addresses the
problem of storage and additionally introduces parallelism into the optimization algorithm. The method reformulates the original problem as an equivalent optimization problem using ideas from
multiple shooting methods for PDEs. For convex linear-quadratic problems, the optimality conditions of the reformulated problems lead to a linear system in state and adjoint variables at
time-domain interfaces and in the original control variables. This linear system is solved using a preconditioned Krylov subspace method.
Brenda Dietrich (IBM Corporation) Math inside IBM
Abstract: In this talk I will discuss several IBM Research projects in which advanced mathematics is used to dramatically improve IBM products and processes. Examples include product design,
manufacturing process design, and supply chain operations. I will also discuss ways in which our ability to deploy mathematics, by embedding the math in automated processes or tools, has dramatically
improved in the past 20 years.
Elena Dimitrova (Virginia Tech) Graph-theoretic method for the discretization of gene expression measurements
Abstract: The poster introduces a method for the discretization of experimental data into a finite number of states. While it is of interest in various fields, this method is particularly useful in
bioinformatics for reverse engineering of gene regulatory networks built from gene expression data. Many of these applications require discrete data, but gene expression measurements are continuous.
Statistical methods for discretization are not applicable due to the prohibitive cost of obtaining sample sets of sufficient size. We have developed a new method of discretizing the variables of a
network into the same optimal number of states while at the same time preserving maximum information. We employ a graph-theoretic method to effect the discretization of gene expression measurements.
Our C++ program takes as an input one or more time series of gene expression data and discretizes these values into a number of states that best fits the data. The method is being validated on a
recently published computational algebra approach to the reverse engineering of gene regulatory networks by Laubenbacher and Stigler.
Emad S. Ebbini (University of Minnesota) Noninvasive Two-dimensional Temperature Estimation Using Diagnostic Ultrasound
Abstract: The talk will cover basic principles and reconstruction algorithms with examples from experimental image data from tissue and tissue-mimicking phantoms. This is part of our ongoing research
on developing ultrasonic systems for noninvasive image-guided surgery.
Maria Emelianenko (Pennsylvania State University) Uniform convergence of a multigrid energy-based quantization scheme
Abstract: We propose a new multigrid quantization scheme in a nonlinear energy-based optimization setting. The problem of constructing an optimal vector quantizer based on the Centroidal Voronoi
Tessellation is nonlinear in nature and hence cannot in general be analyzed using a standard linear multigrid approach. We try to overcome this difficulty by essentially relying on the energy
minimization. Since the energy functional is in general non-convex, a dynamic nonlinear preconditioner is proposed to relate our problem to a sequence of convex optimization problems. In the case of
the one-dimensional problem, we have shown that for a large class of density functions, the nonlinear multigrid algorithm enjoys uniform convergence properties independent of k, the problem size,
thus a significant speedup compared to the traditional Lloyd-Max iteration is achieved. We show some results of numerical experiments and discuss analytical extensions of our theoretical framework
to higher dimensions.
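For readers unfamiliar with the baseline being compared against, a plain one-dimensional Lloyd (Lloyd-Max) iteration can be sketched as follows; this is an illustrative single-grid version, not the multigrid scheme of the abstract:

```python
def lloyd_1d(levels, samples, iterations=60):
    """Plain 1D Lloyd iteration: assign each sample to its nearest
    quantizer level, then move each level to the centroid of its cell."""
    levels = list(levels)
    for _ in range(iterations):
        cells = [[] for _ in levels]
        for s in samples:
            i = min(range(len(levels)), key=lambda j: abs(s - levels[j]))
            cells[i].append(s)
        # Empty cells keep their old level
        levels = [sum(c) / len(c) if c else l for c, l in zip(cells, levels)]
    return sorted(levels)
```

For a uniform density on [0, 1] and two levels, the iteration converges to roughly 0.25 and 0.75.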
M. Gregory Forest (University of North Carolina) Modeling the pipeline of high performance, nano-composite materials and effective properties
Abstract: We focus these lectures on the class of nano-composites comprised of nematic polymers, either rod-like or platelet-like macromolecules, together with a matrix or solvent. These materials
are designed for high performance, multifunctional properties, including mechanical, thermal, electric, piezoelectric, aging, and permeability. The ultimate goal is to prescribe performance features
of materials under conditions they are likely to be exposed, and then to reverse engineer the pipeline by picking the composition and processing conditions which generate properties with those
performance characteristics. These lectures will address two critical phases of this nano-composite materials pipeline. First, we model flow processing of nematic polymer films, providing information
about anisotropy, dynamics, and heterogeneity of the molecular orientational distributions and associated stored elastic stresses. Second, we determine various effective property tensors of these
materials based on the processing-induced orientational distribution data. Underlying these technological applications is a remarkable sensitivity of nematic polymer liquids to shear-dominated flow,
which must be understood from rigorous multiscale, multiphysics theory, modeling and simulation in order to approach the ultimate goal stated above. This research is based on multiple collaborations
and supported by various federal sponsors, to be highlighted during the lectures.
Laura JD Frink (Sandia National Laboratories) Complex fluid systems in nanotechnology, biology, and life
Abstract: Complex fluids are ubiquitous in nanoscale materials, at interfaces, and in biology. They are typically modeled with either molecular simulation or molecular theory approaches. Our research
has emphasized implementation of large scale algorithms for density functional theory based approaches to these problems. In density functional theories a free energy functional is minimized to
determine an optimal solution. It turns out that many women also find their lives to be complex fluid systems that require daily optimization around the constraints of their career, their home, and their families. This seminar will briefly present the content of one applied math career in the context of a national lab, and also discuss how the work-family balance can be achieved in this setting.
Yuliya Gorb (Pennsylvania State University) Discrete network approximation for highly-packed composites with irregular geometry in three dimensions
Abstract: In this poster, a discrete network approximation to the problem of the effective conductivity of a high contrast, densely packed composite in three dimensions is introduced. The inclusions
are irregularly (randomly) distributed in a host medium. For this class of arrays of inclusions a discrete network approximation for effective conductivity is derived and a priori error estimates are
obtained. A variational duality approach is used to provide a rigorous mathematical justification for the approximation and its error estimate.
Genetha Anne Gray (Sandia National Laboratories) Multifidelity optimization using asynchronous parallel pattern search and space mapping
Abstract: We present a new method designed to improve optimization efficiency using interactions between multifidelity models. It optimizes a high fidelity model over a reduced design space using a
direct search algorithm and a specialized oracle. The oracle employs a space mapping technique to map the design space of this high fidelity model to that of a computationally cheaper low fidelity
model. Then, in the low fidelity space, an optimum is obtained using gradient based optimization and is mapped back to the high fidelity space. We will review our algorithm, discuss the suitability
of APPSPACK for multifidelity optimization, and present some preliminary results.
Giovanna Guidoboni (University of Houston) New perspective for simulating incompressible fluid flows with free boundary
Abstract: The investigation of a fast way of performing numerical simulation of fluid flow with free boundary is motivated by many applications in sciences. The main difficulty lies in the fact that
the computational domain is not given a priori but it is another unknown of the problem. Taking advantage of operator splitting techniques, we have been able to avoid the iteration between the
solution of the fluid flow and the position of the boundary at each time step and as a consequence our solver is very simple and fast.
Jennifer Suzanne Hruska (Indiana University) Rigorous numerical computations in complex dynamical systems
Abstract: We demonstrate our work in establishing rigorously, via controlled computer arithmetic, certain phenomena of interest in discrete dynamical systems of two complex variables. In particular,
we study the family of Hénon mappings f(x,y) = (x^2 + c - ay, x), first studied by the astronomer Hénon in the late 1960s, which shares some qualitative similarities to the famed Lorenz differential
equations. This family of maps has been widely studied as a diffeomorphism of two real variables, and has a rich variety of chaotic behavior. We extend to consider x,y complex variables, and a,c
complex parameters, with the goal of using the extra tools and structure provided by complex analysis to gain insights about the real system contained in the complex system.
Erica Zimmer Klampfl (Ford Motor Company) Women mathematicians: We can do more than teach
Abstract: How many times has someone asked you what your degree is in and when you respond, "Math," they ask, "Oh, do you teach?" While teaching is a noble profession, it is not for everyone. There
are other career options for women in the mathematical sciences. I will describe career options that I stumbled upon while job searching during the last phases as a graduate student in applied
mathematics, the path I chose, and a brief sampling of some of the research in which I am currently involved.
Satish Kumar (University of Minnesota) Microscale flow and transport problems arising in surfactant rheology, surface patterning, and polymer electrophoresis
Abstract: Fluid flow and transport processes occurring on length scales of microns or less often involve phenomena which are unimportant at larger length scales. Although such phenomena can
complicate our ability to understand and design microscale flow and transport processes, they also offer opportunities to engineer novel and useful effects. Three examples will be presented in this
talk in support of this idea. In the first example, we consider an instability that arises when a fluid flows past a soft elastic solid. Experiments and theoretical calculations suggest that this
instability is responsible for certain rheological phenomena observed in surfactant solutions, and that it may also be useful for enhancing mixing in microscale flows. In the second example, we
consider a thin liquid film dewetting near a polymer gel. Numerical simulations using a lubrication-theory-based model which couples the fluid and gel dynamics indicate that the dewetting process can
be used to template topographical structures on the gel surface. In the third example, we consider polymer electrophoresis through a narrow slit. Brownian dynamics simulations show that the
relationship between the chain transit velocity and chain length depends in a sensitive way on slit dimensions, and suggest the existence of an optimum slit width for electrophoretic separations.
Xiantao Li (University of Minnesota) A multiscale model for the dynamics of solids
Abstract: At the atomic scale, solids can be modeled by molecular mechanics or molecular dynamics, which have become very useful tools in studying crystal structure, defect dynamics and material
properties. However, due to the computational complexity, the application of these models is usually limited to very small spatial and temporal scales. On the other hand, continuum models, such as
elasticity, elastodynamics and their finite element (or finite volume) formulations, have been widely used to study processes at much larger scales. But the constitutive relation involved in these
continuum models may be ad hoc, and fails to account for the presence of microstructure in the material. In this talk I will present a multiscale model, which couples the atomistic and continuum
models concurrently. The macroscale model evolves the system at the continuum scale, and the atomistic model, which only involves a small number of atoms, estimates the constitutive data and defect
structure. I will show the estimate of the modeling error as well as various applications of this new model.
Martha Paola Vera Licona (Virginia Tech) An Optimization Algorithm for the Identification of Biochemical Network Models
Abstract: An important problem in computational biology is the modeling of several types of networks, ranging from gene regulatory networks and metabolic networks to neural response networks. In
[LS], Laubenbacher and Stigler presented an algorithm that takes as input time series of system measurements, including certain perturbation time series, and provides as output a discrete dynamical
system over a finite field. Since functions over finite fields can always be represented by polynomial functions, one can use tools from computational algebra for this purpose. The key step in the
algorithm is an interpolation step, which leads to a model that fits the given data set exactly. Due to the fact that biological data sets tend to contain noise, the algorithm leads to over-fitting.
Here we present a genetic algorithm that optimizes the model produced by the Laubenbacher-Stigler algorithm, balancing model complexity against data fit. This algorithm too uses tools from computational
algebra in order to provide a computationally simple description of the mutation rules. We describe applications of the combined algorithm to the modeling of gene regulatory networks, as well as a
computational neuroscience project. [LS] Laubenbacher, R. and B. Stigler, A computational algebra approach to the reverse-engineering of gene regulatory networks, J. Theor. Biol. 229 (2004) 523-537.
Hyeona Lim (Mississippi State University) On efficient high-order schemes for acoustic waveform simulation
Abstract: We present new high-order implicit time-stepping schemes for the numerical solution of the acoustic wave equation, as a variant of the conventional modified equation method. For an
efficient simulation, the schemes incorporate a locally one-dimensional (LOD) procedure having the fourth-order splitting error. It has been observed from various experiments for 2D problems that (a)
the computational cost of the implicit LOD algorithms is only about 40% higher than that of the explicit methods, for the problems of the same size, (b) the implicit LOD methods produce less
dispersive solutions in heterogeneous media, and (c) their numerical stability and accuracy match well those of the explicit methods.
Robert P. Lipton (Louisiana State University) Composite properties and microstructure
Abstract: We begin with an overview of composite materials and their effective properties. Most often only a statistical description of the microstructure is available and one must assess the
effective behavior in terms of this limited information. To this end approximation schemes such as effective medium schemes and differential schemes are discussed. Variational methods for obtaining
tight bounds on effective properties for statistically defined microgeometries are reviewed. Formulas for the effective properties of extremal microgeometries are presented. Such microgeometries
include layered materials and sphere and ellipsoid assemblages. Next we focus on physical situations where the interface between component materials plays an important role in determining effective
transport properties. This is relevant to the study of nanostructured materials in which the interface or interphase between materials can have a profound effect on overall transport properties.
Variational methods and bounds are presented that illuminate the effect of particle size and shape distribution inside random composites with coupled heat and mass transport on the interface. We
conclude by introducing methods for quantifying load transfer between length scales. This is motivated by the fact that many composite structures are hierarchical in nature and are made up of
substructures distributed across several length scales. Examples include aircraft wings made from fiber reinforced laminates and naturally occurring structures like bone. From the perspective of
failure initiation it is crucial to quantify load transfer between length scales. The presence of geometrically induced stress or strain singularities at either the structural or substructural scale
can have influence across length scales and initiate nonlinear phenomena that result in overall structural failure. We examine load transfer for statistically defined microstructures. New
mathematical objects beyond the well known effective elastic tensor are presented that facilitate a quantitative description of the load transfer in hierarchical structures. Several physical examples
are provided illustrating how these quantities can be used to quantify the stress and strain distribution inside multi-scale composite structures.
Hailiang Liu (Iowa State University) Wave breaking in a class of nonlocal dispersive wave equations
Abstract: The Korteweg de Vries (KdV) equation is well known as an approximation model for small amplitude and long waves in different physical contexts, but wave breaking phenomena related to short
wavelengths are not captured by it. We introduce a class of nonlocal dispersive wave equations which incorporate the physics of short wavelength scales. The model is identified by the renormalization of an
infinite dispersive differential operator and the number of associated conservation laws. Several well-known models are thus rediscovered. Wave breaking criteria are obtained for several typical
models including the Burgers-Poisson system, the Camassa-Holm type equation.
Miriam Lucian (Boeing Company) Becoming an applied mathematician - From mathematical logic to airplanes
Abstract: I will briefly describe some of the projects I worked on during my Boeing career, emphasizing the role of a mathematician in a manufacturing environment. In this context I will discuss the
advantages and drawbacks of working in industry and offer some practical advice to mathematicians at the beginning of their careers.
Maeve McCarthy (Murray State University) Numerical analysis of the Exponential Euler method and its suitability for dynamic clamp experiments
Abstract: Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real-time with neurophysiological experiments. The most demanding of these techniques is
known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for
implementing the numerical integration of the gating variables in real-time typically employ first-order numerical methods, either Euler (E) or Exponential Euler (EE). EE is often used for rapidly
integrating ion channel gating variables. We find via simulation studies that for small time-steps, both methods are comparable, but at larger time-steps, EE performs worse than Euler. We derive
error bounds for both methods, and find that the error can be characterized in terms of two ratios: time-step over time-constant, and voltage measurement error over the slope-factor of the
steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds
quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step-sizes. Finally, we demonstrate that Euler can be computed with identical
computational efficiency as EE.
Elena Nagaeva A convergence analysis of generalized iterative methods in finite-dimensional lattice-normed spaces
Abstract: This poster introduces a lattice-normed space approach to study convergence of iterative methods for solving systems of nonlinear operator equations. Systems of nonlinear operator equations
appear in various fields of applied science, e.g. magnetohydrodynamics. A numerical solution of such a system is a multidimensional real vector, which is formed of several "subvectors". Each
subvector corresponds to a certain physical quantity of the problem in hand (pressure, temperature, etc.). We formulate local and semilocal convergence conditions for generalized two-step iterative
methods in finite-dimensional lattice-normed spaces. Using the lattice-normed space approach makes it possible to determine the convergence domain for each physical quantity of the problem.
Myunghyun Oh (Ohio State University) Evans function for periodic waves in infinite cylindrical domain
Abstract: An infinite dimensional Evans function theory is developed for the elliptic eigenvalue problem. We consider an elliptic equation with periodic boundary conditions and define a stability index with the Evans function. The key to defining the index is exponential dichotomies for the system. This system has infinite dimensional stable and unstable spaces. We need to address the issue of
how to determine Evans function if two infinite dimensional subspaces have nontrivial intersections. We use Galerkin approximation to reduce down these dimensions to finite and show persistence of
dichotomies. Our work reveals a geometric criterion, the relative orientation of the linear unstable subspace, and its relation to the momentum, for instability of periodic waves in infinite cylindrical domains.
Sarah K. Patch (General Electric) Thermoacoustic tomography - Inversion of a spherical radon transform with partial data
Abstract: Thermoacoustic tomography (TCT) is a hybrid imaging technique proposed as an alternative to x-ray mammography. Radiofrequency (RF) energy is deposited into the breast tissue uniformly in
space, but impulsively in time. This heats the tissue causing thermal expansion. Cancerous masses absorb more RF energy than healthy tissue, creating a pressure wave which is detected by standard
ultrasound transducers placed on the surface of a hemisphere surrounding the breast. Assuming constant sound speed, the data represent integrals of the tissue's RF absorptivity over a sphere centered
about the transducers. The inversion problem for TCT is to recover data from integrals over spheres centered on a hemisphere. We present an inversion formula for the complete data case, where
integrals are measured for centers on the entire sphere. We will derive consistency conditions upon TCT data and discuss their implications for reconstructing clinically realizable 1/2-scan data.
Natalya Popova (University of Illinois - Chicago) The effect of gravity modulation on the onset of filtrational convection
Abstract: The effect of vertical harmonic oscillations on the onset of convection in an infinite horizontal layer of fluid saturating a porous medium is investigated. Constant temperature
distribution is assigned on the rigid impermeable boundaries. The mathematical model is described by equations of filtrational convection in the Darcy-Oberbeck-Boussinesq approximation. Linear
analysis of the stability of the quasi-equilibrium state is performed by using the Floquet method. Employment of the continued fractions method allows derivation of the dispersion equation for the
Floquet exponent in the explicit form. The Floquet spectrum is investigated analytically and numerically for different values of oscillation frequency and amplitude, and the Rayleigh number. The
neutral curves of the Rayleigh number as a function of the horizontal wave number are constructed for the synchronous and subharmonic resonant modes. The regions of parametric instability contoured
by these neutral curves are investigated under different values of oscillation frequency and amplitude. Asymptotes for the neutral curves are constructed for the case of high frequency using the
method of averaging and, for the case of low frequency, using the WKB method. Analytical, asymptotic and numerical investigation of the system indicates that vertical vibration can be used to control
convective instability in a layer of fluid saturating a porous medium.
Lea Popovic (University of Minnesota) Stochastic modeling of macroevolution
Abstract: The use of stochastic models of evolution has been extensively applied on each level of taxonomy (species, genera, families, etc) separately. It is however desirable to ensure hierarchical
consistency between them, so that the phylogenetic tree on species is consistent with the phylogenetic tree on genera containing those species. We present the fundamental model that allows for such
hierarchical structure. We start with a stochastic model for evolution of species and extend it to higher taxonomic levels allowing for several different grouping schemes. We illustrate the wide range of probabilistic calculations possible within such a model: for the shape of trees at each taxonomic level, the fluctuations of population sizes at each level, etc.
Natalie Rojkovskaia (University of Wisconsin - Madison) A story about Yangians
Abstract: We describe the connection between some remarkable matrices with non-commutative coefficients and the quantum groups known as Yangians.
Ping Sheng (Hong Kong University of Science & Technology) Nanoparticle suspensions with giant electrorheological response
Abstract: In this talk I wish to tell the story of a 10-year effort in search of a better electrorheological (ER) fluid material, leading to the discovery of the giant ER effect, and the crucial role
that mathematics and simulations have played in the whole process. Electrorheology denotes the control of a material's flow properties (rheology) through the application of an electric field. ER fluid
was discovered sixty years ago. In the early days the ER fluids, generally consisting of solid particles suspended in an electrically insulating oil, exhibited only a limited range of viscosity
change under an electric field, typically in the range of 1-3 kV/mm. The study of ER fluid was revived in the 1980's, propelled by the envisioned potential applications, as well as the successful
fabrication of new ER solid particles that, when suspended in a suitable fluid, can "solidify" under an electric field, with the strength of the high-field solid state characterized by a yield stress
(breaking stress under shear). However, further progress was hindered by the barrier of low yield stress (typically in the range of a few kPa). Starting in 1994, we have adapted the mathematics of
composites, in particular the Bergman-Milton representation of effective dielectric constant, to the study of ER mechanism(s) [1-4]. The questions we aim to answer are: (1) the role of conductivity
in the ER effect, (2) the role of multipole interaction, (3) the ground state microstructure of the high-field state and most importantly (4) the upper bounds in the yield stress and shear modulus of
the high field solid state. Finding the answer to (4) led to the suggestion of the coating geometry for the ER solid particles which can optimize the ER effect, but at the same time also pointed out
the limitation of the ER mechanism based on induced polarization. The subsequent study of adding a controlled amount of water to the ER fluid pointed to the intriguing possibility of using molecular
dipoles as the new "agent" for enhancing the ER effect [5]. Working along this direction, the experimentalist W.J. Wen was able to synthesize urea-coated nanoparticles of barium titanyl oxalate which
exhibited yield stress in excess of 100 kPa, breaking the yield stress upper bound and pointing to a new paradigm in ER effect in which the molecular dipoles can be harnessed to advantage in
controllable, reversible liquid-solid transitions with a time constant on the order of 1 msec. We propose the model of aligned surface dipole layers in the contact area of the coated nanoparticles to
explain the observed giant ER effect [6], with the electric-field induced dissociation (the Poole-Frenkel effect) of the molecular dipoles accounting for the observed ionic conductivity. Quantitative
agreement between theory and experiment was obtained. The talk concludes with an outline of the intriguing questions yet to be answered, and the problems to be solved before ER fluids can become a
commercial reality. [1] H.R. Ma, W.J. Wen, W.Y. Tam, and P. Sheng, Phys. Rev. Lett. 77, 2499 (1996). [2] W.Y. Tam, G.H. Yi, W.J. Wen, H.R. Ma, M.M. T. Loy, and P. Sheng, Phys. Rev. Lett. 78, 2987
(1997). [3] W.J. Wen, N. Wang, H.R. Ma, Z.F. Lin, W.Y. Tam, C.T. Chan, and P. Sheng, Phys. Rev. Lett. 82, 4248 (1999). [4] H.R. Ma, W.J. Wen, W.Y. Tam and P. Sheng, Adv. Phys. 52, 343 (2003). [5]
W.J. Wen, H.R. Ma, W.Y. Tam and P. Sheng, Phys. Rev. E55, R1294 (1997). [6] W.J. Wen, X.X. Huang, S.H. Yang, K.Q. Lu and P. Sheng, Nature Materials 2, 727 (2003).
Suzanne Sindi (University of Maryland) A symbolic dynamical system for reconstructing repetitive DNA
Abstract: The task of assembling a genome is a complicated lengthy process. When a genome is first published it is usually little more than a draft of the regions of the genome that can be uniquely
reconstructed. The repetitive regions of the genome are much harder to assemble and are usually finished at later phases with more expensive processes. Here we describe a method for using a Symbolic
Dynamical System to reconstruct sufficiently complex regions of repetitive DNA. We demonstrate the ability of our method to reconstruct repetitive DNA using only information available in the early
stages of genome assembly.
Nicoleta Tarfulea (University of Minnesota) A mathematical model for cell movement in tumor induced angiogenesis
Abstract: Angiogenesis - proliferation of new capillaries from preexisting ones - is a natural and complicated process. It is regulated by the interaction between various cell types (e.g. endothelial
cells (ECs), macrophages) and factors (angiogenic promoters such as VEGF and inhibitors such as angiostatin, extracellular matrix). It involves a series of changes in expression of genes, enzymes,
and signaling molecules in tumor cells and ECs, as well as changes in the motility of ECs. In recent years, tumor-induced angiogenesis has become an important field of research since it represents a
crucial step in the development of malignant tumors. In this poster, a biologically realistic model for motile endothelial cells is proposed. A new reaction-diffusion system is used to incorporate
the signaling mechanism in early stages of tumor angiogenesis (signal transduction as well as cell-cell signaling). The ECs are being modeled as deformable viscoelastic ellipsoids. We present
preliminary results that mimic the experiments done in endothelial cell cultures placed on Matrigel film. Also, the model gives further insights into the aggregation patterns by investigating factors
that influence stream formation.
Michelle Wagner (National Security Agency) Building a career at the NSA
Abstract: If you are looking for an environment where you can bring your mathematical background and talents to bear on problems that really make a difference, then the National Security Agency (NSA)
could be the place for you. In this talk I will describe our training programs for new mathematicians, the many ways in which mathematics comes into play at the NSA, some of the opportunities for
advancement throughout a career at the NSA, and the unique opportunities for female mathematicians at the Agency.
Diana Woodward (Societe Generale) Mathematics of risk management
Abstract: In 1996, I made the transition to Wall Street from academia, where I had been an assistant professor of mathematics for almost 10 years. Based on my experiences, I will give an overview of
some of the jobs available to mathematicians on the sell-side and buy-side of the street: from investment banking to hedge fund management. I will then briefly discuss the mathematical skills needed
to work in quantitative finance today, and introduce the basic mathematical framework of quantitative finance. I will present some of the numerical and analytical research problems I have worked on
in stochastic volatility modeling and credit derivatives, and potential research directions in these areas.
Baisheng Yan (Michigan State University) Singular Solutions to a Regular Problem
Abstract: The n-dimensional (quasi)conformal mappings are defined by a first-order partial differential relation (pdr): ∇u(x) ∈ K, with a set K of n x n matrices that is very regular in the sense of Morrey's quasiconvexity. However, such a pdr can have very singular solutions if considered outside the natural Sobolev space. In this talk, I will discuss how Mueller-Sverak's idea of the Gromov convex integration method can be applied to construct singular solutions with a dense set of singularities.
Re: [Fwd: technical notes on data used by Martin Braun] (Paul Boersma )
Subject: Re: [Fwd: technical notes on data used by Martin Braun]
From: Paul Boersma <paul.boersma(at)HUM.UVA.NL>
Date: Mon, 11 Jun 2001 19:09:17 +0200
>> Was the pitch contour presented as a continuous line, or was it
>> quantized in quarter semitones?
>Well, it was displayed as a series of frame-by-frame values, not as a
>continuous line. At the 16k sampling frequency that we used, the
>frame-by-frame values have a resolution of a quarter of a semitone.
Is it likely that the measured F0 values had a spacing of 1/4 semitone?
It seems more likely that they were expressed as an integer number
of samples per period. For instance, for an F0 of 200 Hz you would
get 16k/200 = 80 samples per period. In that vicinity, then, the
spacing is 1 sample per period, or 1.25%, or just below one quarter
of a semitone, which seems to be consistent with what Bob Ladd remembers.
However, if you "bin" such values into 1/4-semitone buckets,
you should control for the number of possible values that fit into
such a bucket. In the range 115-230 Hz, that number is 1, 2, or 3.
Of course, I don't expect any researcher to have overlooked such
a correction, but it is interesting to see what results such a mistake
would have led to. The following is a Praat script that generates
random F0 values between 115 and 230 Hz, and bins them according
to the above faulty criterion:
for bin from 10 to 57
   n'bin' = 0
endfor
for i to 10000
   f0 = randomUniform (115, 230)
   samplesPerPeriod = round (16000 / f0)
   f0_gipos = 16000 / samplesPerPeriod
   semitones = hertzToSemitones (f0_gipos)
   quarterSemitoneBin = round (semitones * 4)
   n'quarterSemitoneBin' = n'quarterSemitoneBin' + 1
endfor
echo Bin n
for bin from 10 to 57
   n = n'bin'
   printline 'bin' 'n'
endfor
In the bin range from 10 to 57 (that is, 48 bins = one octave),
we now find distinctive peaks at bin numbers
25, 27, 29, 31, 33, 35, 38, 41, 45, and 52.
Note that this series contains several subseries at distances
of 4, 6, or 8 bins (4 bins = 1 semitone).
Can the authors show us how they corrected for this sampling-and-binning effect?
Does a reanalysis with a program that has an F0 measurement accuracy of,
say, 0.00001 Hz instead of 1/4 semitone, still lead to the reported result?
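As a cross-check, the same sampling-and-binning simulation can be reproduced in Python (a sketch, not part of the original message; it assumes Praat's hertzToSemitones is relative to 100 Hz):

```python
import math
import random
from collections import Counter

# Draw uniform F0 values, quantize each to an integer number of samples per
# period at 16 kHz, then bin the quantized F0 in quarter semitones re 100 Hz.
random.seed(0)
counts = Counter()
for _ in range(10000):
    f0 = random.uniform(115, 230)
    f0_quantized = 16000 / round(16000 / f0)
    semitones = 12 * math.log2(f0_quantized / 100)
    counts[round(semitones * 4)] += 1

# Strict local maxima in the bin range 10..57 (one octave).
peaks = [b for b in range(11, 57)
         if counts[b] > counts[b - 1] and counts[b] > counts[b + 1]]
print(sorted(counts), peaks)
```

Even though the underlying F0 values are uniformly distributed, the quantization to whole samples per period produces an uneven, peaked histogram of the same kind as the bin series quoted above.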
Best wishes,
Paul Boersma
Institute of Phonetic Sciences, University of Amsterdam
Herengracht 338, 1016CG Amsterdam, The Netherlands
phone +31-20-5252385
This message came from the mail archive
The Birthday Paradox
How many people need to be crowded into a room before two of them are likely to have the same birthday? The answer is a mere 23 to have a fifty-fifty shot. To bring the probability to ninety-nine
percent, you need a crowd of only fifty-seven people. And yet there are three hundred and sixty-five days in a year. What's going on?
Is it your birthday? Happy birthday to you! If you're in a room with about 22 other people and you've read the above paragraph, you're thinking that it's likely that at least one other person in the
room has your birthday. Fool! That's not the way the Birthday Paradox works. In fact, it's not a paradox at all. When reckoning how likely it is to have matching birthdays in a room, people think of
one particular date, perhaps the one that they were born on, out of three hundred and sixty-five days of the year, and assume that, since people are as likely to be born on one day as any other, there have to be three hundred and sixty-five more people in the room before there's a chance that another of them will be born on that particular date.
But the question wasn't about whether or not two people had one particular birthday in common, it was whether they had any birthday in common. That changes the game, although it may not seem like it.
The best way to understand the Birthday Paradox is not to calculate the likelihood of two people having the same birthday, but two people not having the same birthday. Let us say that my birthday is
New Year's Day, January 1st. If a random person were to come along while I was celebrating my birthday/New Year's Day by, say, throwing up behind a park bench, chances are one out of 365 that they
would have the same birthday as me. (Odds are a little higher that they'd be a cop, but we'll leave that alone.) So the odds are that 364 out of 365 that they wouldn't have the same birthday as me.
Translated into decimals, the odds would be 0.99726 that we'd have different birthdays. Those are good odds.
Since I wouldn't be up to moving anytime soon, and the second person (being a concerned citizen) would want to make sure that I didn't die from alcohol poisoning, we'd both wait there until a third
person showed up. Now what are the odds that no one present at that bench at that time has a birthday match?
The odds are already 0.99726 that the first two people don't have a match. The third person would have a 0.99726 chance of not having my birthday. They'd also have a 0.99726 chance of not matching
the Good Samaritan's birthday.
Total odds of the non-match? 0.99726 × 0.99726 × 0.99726, or 0.99180.
That is 0.99726 to the power of the number of unique pairs that can be made of three people.
The number of unique pairs that can be made of 23 people? Two hundred and fifty-three.
So what is 0.99726^253, the chance of there not being a match between anyone and anyone? It's just under fifty percent.
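Both numbers can be checked in a few lines of Python (a sketch assuming 365 equally likely birthdays; it runs the post's pairwise approximation alongside the exact product):

```python
from math import comb

n = 23
pairs = comb(n, 2)                       # 253 unique pairs among 23 people
approx_no_match = (364 / 365) ** pairs   # the post's method: ~0.4995

# Exact version: person k must avoid the k birthdays already taken.
exact_no_match = 1.0
for k in range(n):
    exact_no_match *= (365 - k) / 365    # ~0.4927

print(pairs, approx_no_match, exact_no_match)
```

The pairwise approximation treats the 253 pairs as independent, which they aren't quite; the exact product gives a match probability of about 50.7 percent rather than just under 50, but the conclusion is the same.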
So only a small crowd, say, a grade school class taking a field-trip to the park, would have to gather around that bench before it became more likely than not that two people shared a birthday. Just
not my birthday.
Via Better Explained and the University of Illinois.
complex line
A complex line is a complex vector space of dimension 1 (of complex dimension that is, as a $\mathbb{C}$-vector space, meaning that as a vector space over the real numbers it is a plane).
In particular the complex plane $\mathbb{C}$ itself is a complex line; conversely, any two complex lines are isomorphic (to each other and to the complex plane). However, there are many such
isomorphisms; the automorphism group is $\mathbb{C} \setminus \{0\}$.
Revised on February 5, 2013 20:28:52 by
Urs Schreiber
Markov Operators on Banach Lattices
A brief search on www.ams.org with the keyword “Markov operator” produces some 684 papers, the earliest of which dates back to 1959. This suggests that the term “Markov operator” emerged around the
1950’s, clearly in the wake of Andrey Markov’s seminal work in the area of stochastic processes and Markov chains. Indeed, [17] and [6], the two earliest papers produced by the ams.org search, study
Markov processes in a statistical setting and “Markov operators” are only referred to obliquely, with no explicit definition being provided. By 1965, in [7], the situation has progressed to the point
where Markov operators are given a concrete definition and studied more directly. However, the way in which Markov operators originally entered mathematical discourse, emerging from Statistics as
various attempts to generalize Markov processes and Markov chains, seems to have left its mark on the theory, with a notable lack of cohesion amongst its propagators. The study of Markov operators in
the Lp setting has assumed a place of importance in a variety of fields. Markov operators figure prominently in the study of densities, and thus in the study of dynamical and deterministic systems,
noise and other probabilistic notions of uncertainty. They are thus of keen interest to physicists, biologists and economists alike. They are also a worthy topic to a statistician, not least of all
since Markov chains are nothing more than discrete examples of Markov operators (indeed, Markov operators earned their name by virtue of this connection) and, more recently, in consideration of the
connection between copulas and Markov operators. In the realm of pure mathematics, in particular functional analysis, Markov operators have proven a critical tool in ergodic theory and a useful
generalization of the notion of a conditional expectation. Considering the origin of Markov operators, and the diverse contexts in which they are introduced, it is perhaps unsurprising that, to the
uninitiated observer at least, the theory of Markov operators appears to lack an overall unity. In the literature there are many different definitions of Markov operators defined on L1(μ) spaces. See, for example, [13, 14, 26, 2], all of which manage to provide different definitions. Even at a casual glance, although they do retain the same overall flavour, it is apparent that
there are substantial differences in these definitions. The situation is not much better when it comes to the various discussions surrounding ergodic Markov operators: we again see a variety of
definitions for an ergodic operator (for example, see [14, 26, 32]), and again the connections between these definitions are not immediately apparent. In truth, the situation is not as haphazard as
it may at first appear. All the definitions provided for a Markov operator may be seen as describing one or other subclass of a larger class of operators known as the positive contractions. Indeed, the theory of Markov operators is concerned with either establishing results for the positive contractions in general, or specifically for one of the aforementioned subclasses. The confusion concerning the definition of an ergodic operator can also be rectified in a fairly natural way, by simply viewing the various definitions as different possible generalizations of the central notion of an ergodic
point-set transformation (such a transformation representing one of the most fundamental concepts in ergodic theory). The first, and indeed chief, aim of this dissertation is to provide a coherent
and reasonably comprehensive literature study of the theory of Markov operators. This theory appears to be uniquely in need of such an effort. To this end, we shall present a wealth of material,
ranging from the classical theory of positive contractions; to a variety of interesting results arising from the study of Markov operators in relation to densities and point-set transformations; to
more recent material concerning the connection between copulas, a breed of bivariate function from statistics, and Markov operators. Our goals here are two-fold: to weave various sources into an
integrated whole and, where necessary, render opaque material readable to the non-specialist. Indeed, all that is required to access this dissertation is a rudimentary knowledge of the fundamentals
of measure theory, functional analysis and Riesz space theory. A command of measure and integration theory will be assumed. For those unfamiliar with the basic tenets of Riesz space theory and
functional analysis, we have included an introductory overview in the appendix. The second of our overall aims is to give a suitable definition of a Markov operator on Banach lattices and provide a
survey of some results achieved in the Banach lattice setting, in particular those due to [5, 44]. The advantage of this approach is that the theory is order theoretic rather than measure theoretic.
As we proceed through the dissertation, definitions will be provided for a Markov operator, a conservative operator and an ergodic operator on a Banach lattice. Our guide in this matter will chiefly
be [44], where a number of interesting results concerning the spectral theory of conservative, ergodic, so-called "stochastic" operators are studied in the Banach lattice setting. We will also, and to
a lesser extent, tentatively suggest a possible definition for a Markov operator on a Riesz space. In fact, we shall suggest, as a topic for further research, two possible approaches to the study of
such objects in the Riesz space setting. We now offer a more detailed breakdown of each chapter. In Chapter 2 we will settle on a definition for a Markov operator on an L1 space, prove some
elementary properties and introduce several other important concepts. We will also put forward a definition for a Markov operator on a Banach lattice. In Chapter 3 we will examine the notion of a
conservative positive contraction. Conservative operators will be shown to demonstrate a number of interesting properties, not least of all the fact that a conservative positive contraction is
automatically a Markov operator. The notion of a conservative operator will follow from the Hopf decomposition, a fundamental result in the classical theory of positive contractions and one we will
prove via [13]. We will conclude the chapter with a Banach lattice/Riesz space definition for a conservative operator, and a generalization of an important property of such operators in the L1 case.
In Chapter 4 we will discuss another well-known result from the classical theory of positive contractions: the Chacon-Ornstein Theorem. Not only is this a powerful convergence result, but it also
provides a connection between Markov operators and conditional expectations (the latter, in fact, being a subclass of the Markov operators). To be precise, we will prove the result for conservative
operators, following [32]. In Chapter 5 we will tie the study of Markov operators into classical ergodic theory, with the introduction of the Frobenius-Perron operator, a specific type of Markov
operator which is generated from a given nonsingular point-set transformation. The Frobenius-Perron operator will provide a bridge to the general notion of an ergodic operator, as the definition of
an ergodic Frobenius-Perron operator follows naturally from that of an ergodic transformation. In Chapter 6 we will discuss two approaches to defining an ergodic operator, and establish some connections
between the various definitions of ergodicity. The second definition, a generalization of the ergodic Frobenius-Perron operator, will prove particularly useful, and we will be able to tie it,
following [26], to several interesting results concerning the asymptotic properties of Markov operators, including the asymptotic periodicity result of [26, 27]. We will then suggest a definition of
ergodicity in the Banach lattice setting and conclude the chapter with a version, due to [5], of the aforementioned asymptotic periodicity result, in this case for positive contractions on a Banach
lattice. In Chapter 7 we will move into more modern territory with the introduction of the copulas of [39, 40, 41, 42, 16]. After surveying the basic theory of copulas, including introducing a
multiplication on the set of copulas, we will establish a one-to-one correspondence between the set of copulas and a subclass of Markov operators. In Chapter 8 we will carry our study of copulas
further by identifying them as a Markov algebra under their aforementioned multiplication. We will establish several interesting properties of this Markov algebra, in parallel to a second Markov
algebra, the set of doubly stochastic matrices. This chapter is chiefly for the sake of interest and, as such, diverges slightly from our main investigation of Markov operators. In Chapter 9, we will
present the results of [44], in slightly more detail than the original source. As has been mentioned previously, these concern the spectral properties of ergodic, conservative, stochastic operators
on a Banach lattice, a subclass of the Markov operators on a Banach lattice. Finally, as a conclusion to the dissertation, we present in Chapter 10 two possible routes to the study of Markov
operators in a Riesz space setting. The first definition will be directly analogous to the Banach lattice case; the second will act as an analogue to the submarkovian operators to be introduced in
Chapter 2. We will not attempt to develop any results from these definitions: we consider them a possible starting point for further research on this topic. In the interests of both completeness, and
in order to aid those in need of more background theory, the reader may find at the back of this dissertation an appendix which catalogues all relevant results from Riesz space theory and operator theory.
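A finite illustration of the central object may help here (a sketch, not drawn from the dissertation): on {0,...,n-1} with counting measure, a Markov operator on L1 is simply a column-stochastic matrix P, which is positive and preserves the integral, and hence maps densities to densities.

```python
import random

# Build a random column-stochastic matrix P (columns sum to 1).
random.seed(0)
n = 4
P = [[random.random() for _ in range(n)] for _ in range(n)]
for j in range(n):
    s = sum(P[i][j] for i in range(n))
    for i in range(n):
        P[i][j] /= s

# A density on {0,...,n-1}: nonnegative entries summing to 1.
f = [random.random() for _ in range(n)]
total = sum(f)
f = [x / total for x in f]

g = [sum(P[i][j] * f[j] for j in range(n)) for i in range(n)]  # g = P f
assert all(x >= 0 for x in g)          # positivity preserved
assert abs(sum(g) - 1.0) < 1e-12       # integral (total mass) preserved
print(g)
```

The doubly stochastic matrices mentioned in Chapter 8 are the special case where rows also sum to 1.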
MIMO broadcast channels with finite-rate feedback
Results 1 - 10 of 40
- IEEE J. SEL. AREAS COMMUN , 2008
Cited by 43 (8 self)
It is now well known that employing channel adaptive signaling in wireless communication systems can yield large improvements in almost any performance metric. Unfortunately, many kinds of channel
adaptive techniques have been deemed impractical in the past because of the problem of obtaining channel knowledge at the transmitter. The transmitter in many systems (such as those using frequency
division duplexing) can not leverage techniques such as training to obtain channel state information. Over the last few years, research has repeatedly shown that allowing the receiver to send a small
number of information bits about the channel conditions to the transmitter can allow near optimal channel adaptation. These practical systems, which are commonly referred to as limited or finite-rate
feedback systems, supply benefits nearly identical to unrealizable perfect transmitter channel knowledge systems when they are judiciously designed. In this tutorial, we provide a broad look at the
field of limited feedback wireless communications. We review work in systems using various combinations of single antenna, multiple antenna, narrowband, broadband, single-user, and multiuser
technology. We also provide a synopsis of the role of limited feedback in the standardization of next generation wireless systems.
- IEEE J. Select. Areas Commun , 2007
Cited by 36 (2 self)
Abstract — We analyze the sum-rate performance of a multiantenna downlink system carrying more users than transmit antennas, with partial channel knowledge at the transmitter due to finite rate
feedback. In order to exploit multiuser diversity, we show that the transmitter must have, in addition to directional information, information regarding the quality of each channel. Such information
should reflect both the channel magnitude and the quantization error. Expressions for the SINR distribution and the sum-rate are derived, and tradeoffs between the number of feedback bits, the number
of users, and the SNR are observed. In particular, for a target performance, having more users reduces feedback load. Index Terms — MIMO, quantized feedback, limited feedback, zero-forcing
beamforming, multiuser diversity, broadcast channel,
- IEEE J. Sel. Areas Commun , 2007
Cited by 16 (1 self)
Abstract — In this paper, we study the design of the transmitter in the downlink of a multiuser and multiantenna wireless communications system, considering the realistic scenario where only an
imperfect estimate of the actual channel is available at both communication ends. Precisely, the actual channel is assumed to be inside an uncertainty region around the channel estimate, which models
the imperfections of the channel knowledge that may arise from, e.g., estimation Gaussian errors, quantization effects, or combinations of both sources of errors. In this context, our objective is to
design a robust power allocation among the information symbols that are to be sent to the users such that the total transmitted power is minimized, while maintaining the necessary quality of service
to obtain reliable communication links between the base station and the users for any possible realization of the actual channel inside the uncertainty region. This robust power allocation is
obtained as the solution to a convex optimization problem, which, in general, can be numerically solved in a very efficient way, and even for a particular case of the uncertainty region, a
quasi-closed form solution can be found. Finally, the goodness of the robust proposed transmission scheme is presented through numerical results. Index Terms — Robust designs, imperfect CSI,
multiantenna systems, broadcast channel, convex optimization.
- IEEE Trans. Inf. Theory, 2009
Cited by 11 (6 self)
channel, feedback from the receiver can be used to specify a transmit precoding matrix, which selectively activates the strongest channel modes. Here we analyze the performance of Random Vector
Quantization (RVQ), in which the precoding matrix is selected from a random codebook containing independent, isotropically distributed entries. We assume that channel elements are i.i.d. and known to
the receiver, which relays the optimal (rate-maximizing) precoder codebook index to the transmitter using B bits. We first derive the large system capacity of beamforming (rank-one precoding matrix)
as a function of B, where large system refers to the limit as B and the number of transmit and receive antennas all go to infinity with fixed ratios. RVQ for beamforming is asymptotically optimal,
i.e., no other quantization scheme can achieve a larger asymptotic rate. We subsequently consider a precoding matrix with arbitrary rank, and approximate the asymptotic RVQ performance with optimal
and linear receivers (matched filter and Minimum Mean Squared Error (MMSE)). Numerical examples show that these approximations accurately predict the performance of finite-size systems of interest.
Given a target spectral efficiency, numerical examples show that the amount of feedback required by the linear MMSE receiver is only slightly more than that required by the optimal receiver, whereas
the matched filter can require significantly more feedback. Index Terms—Beamforming, large system analysis, limited feedback, Multi-Input Multi-Output (MIMO), precoding, vector quantization. I.
- IEEE Trans. Vehicular Technology, 2007
Cited by 10 (2 self)
On the multi-antenna broadcast channel, the spatial degrees of freedom support simultaneous transmission to multiple users. Optimal multi-user transmission, known as dirty paper coding, requires
non-causal channel state information (CSI) and extreme complexity and is hence not directly realizable. A more practical design, named per user unitary and rate control (PU2RC), has been proposed for
emerging cellular standards. PU2RC supports multi-user simultaneous transmission, enables limited feedback, and is capable of exploiting multi-user diversity. Its key feature is an orthogonal
beamforming (or precoding) constraint, where each user selects a beamformer (or precoder) from a codebook of multiple orthonormal bases. In this paper, the asymptotic throughput scaling laws for
PU2RC with a large user pool are derived for different regimes. In the interference-limited regime, the throughput of PU2RC is shown to scale logarithmically with the number of users. In the normal
and noise-limited regimes, the throughput is found to scale double logarithmically with the number of users and also linearly with the number of antennas at the base station. In addition, numerical
results show that PU2RC achieves higher throughput and is more robust against CSI quantization errors than the popular alternative of zero-forcing beamforming if the number of users is sufficiently
, 2009
Cited by 6 (3 self)
In this work we study the capacity of multi-user multiple-input multiple-output (MU-MIMO) downlink channels with codebook-based limited feedback using real measurement data. Several aspects of
MU-MIMO channels are evaluated. Firstly, we compare the sum rate of different MU-MIMO precoding schemes in various channel conditions. Secondly, we study the effect of different codebooks on the
performance of limited feedback MU-MIMO. Thirdly, we relate the required feedback rate with the achievable rate on the downlink channel. Real multi-user channel measurement data acquired with the
Eurecom MIMO OpenAir Sounder (EMOS) is used. To the best of our knowledge, these are the first measurement results giving evidence of how MU-MIMO precoding schemes depend on the precoding scheme,
channel characteristics, user separation, and codebook. For example, we show that having a large user separation as well as codebooks adapted to the second order statistics of the channel gives a sum
rate close to the theoretical limit. A small user separation due to bad scheduling or a poorly adapted codebook on the other hand can impair the gain brought by MU-MIMO. The tools and the analysis
presented in this paper allow the system designer to trade-off downlink rate with feedback rate by carefully choosing the codebook.
- IEEE J. Select. Areas Commun., 2008
Cited by 5 (3 self)
Abstract—This paper considers broadcast channels with L antennas at the base station and m single-antenna users, where L and m are typically of the same order. We assume that only partial channel
state information is available at the base station through a finite rate feedback. Our key observation is that the optimal number of on-users (users turned on), say s, is a function of
signal-to-noise ratio (SNR) and feedback rate. In support of this, an asymptotic analysis is employed where L, m and the feedback rate approach infinity linearly. We derive the asymptotic optimal
feedback strategy as well as a realistic criterion to decide which users should be turned on. The corresponding asymptotic throughput per antenna, which we define as the spatial efficiency, turns out
to be a function of the number of on-users s, and therefore s must be chosen appropriately. Based on the asymptotics, a scheme is developed for systems with finitely many antennas and users. Compared
with other studies in which s is presumed constant, our scheme achieves a significant gain. Furthermore, our analysis and scheme are valid for heterogeneous systems where different users may have
different path loss coefficients and feedback rates. Index Terms—Broadcast channels, feedback, MIMO systems, throughput. I.
- Proc. IEEE International Conference on Communications (ICC), Cape Town, South Africa, 2010
Cited by 4 (4 self)
Abstract—This paper studies the structure of the channel quantization codebook for multiuser MISO systems with limited channel state information at the base-station. The problem is cast in the form
of minimizing the sum power subject to the worst-case SINR constraints over spherical channel uncertainty regions. This paper adopts a zero-forcing approach for beamforming vectors design, and uses a
robust optimization technique via semidefinite programming (SDP) for power control as the benchmark performance measure. We then present an alternative less complex and practically feasible method
for computing the power values and present sufficient conditions on the uncertainty radius so that the resulting sum power remains close to the SDP solution. The proposed conditions guarantee that
the interference caused by the channel uncertainties can be effectively controlled. Based on these conditions, we study the structure of the channel quantization codebooks and show that the
quantization codebook has a product form that involves spatially uniform quantization of the channel direction, and independent channel magnitude quantization which is uniform in dB scale. The
structural insight obtained by our analysis also gives a bit-sharing law for dividing the quantization bits between the two codebooks. We finally show that the total number of quantization bits
should increase as log(SINRtarget) as the target SINR increases. I.
Cited by 4 (3 self)
Abstract—This paper proposes an efficient two-stage beamforming and scheduling algorithm for the limited-feedback cooperative multi-point (CoMP) systems. The system includes multiple base-stations
cooperatively transmitting data to a pool of users, which share a rate-limited feedback channel for sending back the channel state information (CSI). The feedback mechanism is divided into two stages
that are used separately for scheduling and beamforming. In the first stage, the users report their best channel gain from all the base-station antennas and the base-stations schedule the best user
for each of their antennas. The scheduled users are then polled in the second stage to feedback their quantized channel vectors. The paper proposes an analytical framework to derive the bit
allocation between the two feedback stages and the bit allocation for quantizing each user’s CSI. For a total number of feedback bits B, it is shown that the number of bits assigned to the second
feedback stage should scale as log B. Furthermore, in quantizing channel vectors from different base-stations, each user should allocate its feedback budget in proportion to the logarithm of the
corresponding channel gains. These bit allocations are then used to show that the overall system performance scales double-logarithmically with B and logarithmically with the transmit SNR. The paper
further presents several numerical results to show that, in comparison with other beamforming-scheduling algorithms in the literature, the proposed scheme provides a consistent improvement in
downlink sum rate and network utility. Such improvements, in particular, are achieved in spite of a significant reduction in the beamforming-scheduling computational complexity, which makes the
proposed scheme an attractive solution for practical system implementations. I. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=260184","timestamp":"2014-04-16T23:16:02Z","content_type":null,"content_length":"43687","record_id":"<urn:uuid:ae1cfc43-6259-40c6-b65c-f5aca60f455f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00552-ip-10-147-4-33.ec2.internal.warc.gz"} |
when to assume equal variances?
Q: Explain in your own words how you determine whether you assume equal variances or not. Why is it important to do this? I know how/when to apply which test when I have prior knowledge about the population variances when testing a difference of means using a t-test. But if we have to determine whether the variances are equal or not, I can only think that if the samples are drawn from the same population they will have equal population variance; I can't think of anything else. Can you help me explain this question? Thanks, Sadie
Do I have to mention the "test for variance assumption," where we decide by the p-value?
Hi!! First of all, I am not a mathematician or a person with a deep theoretical background, so take my comment with a grain of salt. I think I am not going to answer your question. In fact, I am wondering whether it makes sense to talk of cases in which one has to assume equal variance. As far as I understand it, I would lean toward thinking that equal variance has to be tested, not assumed. This is why testing for equal/unequal variance is a precondition for many hypothesis tests. Regards, Gm
True, but when we are applying a t-test for the difference between the means of populations, either we already know that the populations from which the samples are drawn have equal variances or we don't. Let's say we do not know the population variances and we are not told whether they are equal; what do we do then? Determining the equality of variances is as important as the equality of means. What I could get from Google is that we have to apply an F test for equal variances, and then by the p-value we can determine whether the population variances are equal or unequal; then we apply Welch's test or the standard t-test according to the findings. But what is the importance of this whole procedure? I mean, going through so much hassle, and then Welch's test isn't commonly used, as you have to round off the degrees of freedom, and one could think a bias occurred due to the rounding. God! I can't figure out why?
Well, practically we can hardly assume equal variances. The choice here is mostly a modelling one, i.e. whether we can assume without much "hurt" that the variances are equal. I mostly do not assume that and go with the unequal-variance test, but this choice affects the modelling in other stages of the analysis too and can get things complicated (i.e.
Thank you people, it has helped me a lot. Cheers. | {"url":"http://www.talkstats.com/showthread.php/12865-when-to-assume-equal-variances","timestamp":"2014-04-19T14:30:13Z","content_type":null,"content_length":"83131","record_id":"<urn:uuid:cea5ef1e-a911-467b-86df-eaab4faf0787>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Update on Anagram Trees
Posted by Nick Johnson | Filed under coding, tech, damn-cool-algorithms
One nice thing about working at Google is that you are surrounded by very smart people. I told one of my coworkers about the anagram tree idea, and he immediately pointed out that reordering the
alphabet so that the least frequently used letters come first would reduce the branching factor early in the tree, which has the effect of reducing the overall size of the tree substantially. While
this seems obvious in retrospect, it's kind of unintuitive - usually we try to _increase_ the branching factor of n-ary trees to make them shallower and require fewer operations, rather than trying
to reduce it.
Trying it out with an ordering determined by looking at the branching factor for each letter produces results that bear this out: Memory is reduced by about a third, and the number of internal nodes
is reduced to 858,858 from 1,874,748, a reduction of more than 50%! Though I haven't benchmarked it, difficult lookups are substantially faster, too.
The next logical development to try is to re-evaluate the order of the alphabet on a branch-by-branch basis. While I doubt this will have a substantial impact, it seems worth a try, so I'll give it a
go and update with results.
Edit: Re-evaluating the symbol to choose on a branch-by-branch basis had a bigger impact than I anticipated: The tree created with my sample dictionary now has a mere 661,659 internal nodes. Here's
the procedure for creating a tree using this method:
Assuming you have:
• A dictionary
• A set of symbols that have not yet been used (initially set to the alphabet)
1. If the symbol set is empty, this is a leaf node - store the dictionary in the node and return.
2. Find the symbol from the set that, if used, will result in the smallest number of branches (that is, the symbol that has the least variation in number of occurrences).
3. Mark the current node with the chosen symbol
4. Partition the dictionary into sub-dictionaries based on how many occurrences of the chosen symbol they have
5. For each sub-dictionary, recurse with the sub-dictionary and the set less the symbol you selected.
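The procedure above can be sketched in Python. This is a fresh illustration following the steps as described, not the author's linked implementation:

```python
from string import ascii_lowercase

def build_tree(words, symbols):
    """Build an anagram tree: at each node, branch on the remaining
    symbol whose occurrence counts split the dictionary into the
    fewest branches (steps 1-5 above)."""
    if not symbols:
        return sorted(words)                      # leaf: store the words
    # Step 2: symbol with the least variation in number of occurrences.
    best = min(symbols, key=lambda s: len({w.count(s) for w in words}))
    # Step 4: partition by occurrence count of the chosen symbol.
    parts = {}
    for w in words:
        parts.setdefault(w.count(best), []).append(w)
    # Step 5: recurse with the chosen symbol removed from the set.
    rest = symbols - {best}
    return (best, {n: build_tree(ws, rest) for n, ws in parts.items()})

def lookup(tree, letters):
    """Follow the letter counts of `letters` down to a leaf of anagrams."""
    if isinstance(tree, list):
        return tree
    symbol, branches = tree
    n = letters.count(symbol)
    return lookup(branches[n], letters) if n in branches else []

tree = build_tree(["listen", "silent", "enlist", "google"], set(ascii_lowercase))
print(lookup(tree, "tinsel"))                     # ['enlist', 'listen', 'silent']
```

Here `build_tree` returns either a sorted word list (a leaf) or a `(symbol, branches)` pair; following any root-to-leaf path consumes each symbol exactly once, matching the note below about every path being a valid permutation of the alphabet.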
Implemented in Python, this is actually substantially larger in memory and on disk than the previous approach, likely due to overhead with using classes instead of tuples as the nodes. In
statically-typed languages, however, the overhead should be substantially outweighed by the benefit of the reduction in node count.
Note that the result of this alternate method is that while the letter to branch on is different for every node, following nodes from any leaf to the root of the tree always results in a valid
permutation of the alphabet used.
Edit 2: The code for a Python implementation incorporating these ideas can be found
| {"url":"http://blog.notdot.net/2007/10/Update-on-Anagram-Trees","timestamp":"2014-04-17T18:23:15Z","content_type":null,"content_length":"8645","record_id":"<urn:uuid:e115cc3f-f961-43c4-aae2-e1c725ca8484>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
SAS-L archives -- November 2008, week 3 (#176) | LISTSERV at the University of Georgia
Date: Mon, 17 Nov 2008 16:41:09 -0600
Reply-To: Robin R High <rhigh@UNMC.EDU>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Robin R High <rhigh@UNMC.EDU>
Subject: Re: Proc Mixed help
Comments: To: Brad Heins <hein0106@UMN.EDU>
In-Reply-To: <200811172007.mAHGhL20025477@malibu.cc.uga.edu>
Content-Type: text/plain; charset="US-ASCII"
Working through this example dataset, with each MODEL statement run in turn, may help demonstrate how you can dummy code the continuous variable:
DATA tst;
cls=1; do x = 1 to 15; x1=x; x2=0; x3=0; y = 5 + 2.5*x + 1.2*rannor(929);
OUTPUT; END;
cls=2; do x = 1 to 15; x1=0; x2=x; x3=0; y = 7 + .10*x + 1.2*rannor(0);
OUTPUT; END;
cls=3; do x = 1 to 15; x1=0; x2=0; x3=x; y = 8 + .15*x + 1.2*rannor(0);
OUTPUT; END;
ods select solutionF;
CLASS cls;
MODEL Y = cls x(cls) / solution ;
* MODEL y = cls x1 x2 x3/ solution ;
* MODEL y = cls x1 / solution ;
Robin High
Brad Heins <hein0106@UMN.EDU>
Sent by: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
11/17/2008 02:10 PM
Please respond to
Brad Heins <hein0106@UMN.EDU>
Proc Mixed help
I have a question about proc mixed and whether there is a trick I can do with my model statement or not.
I have a variable aocm that is continuous and is nested within a class variable (LNR) that has 3 levels. From the output below you can see that for LNR 2 and 3 the regression coefficient is not significant, but it is for LNR 1.
Is there a way that I can tell proc mixed to adjust only for LNR 1 and not for LNR 2 or 3, since they are not significant? I want to leave LNR 1 in the model and adjust for those records, but not for LNR 2 or 3, because the regression coefficients make no biological sense for the data that I have.
Any help would be appreciated.
proc mixed statement:
class group lnr hy;
model y= group lnr hy aocm(lnr) ;
Effect    LNR   Estimate   Standard Error    DF   t Value   Pr > |t|
aocm(LNR) 1 2.1697 0.2378 9660 4.92 <.0001
aocm(LNR) 2 -1.0259 0.2432 9660 8.33 <.6538
aocm(LNR) 3 -1.4974 0.2572 9660 5.82 <.8520 | {"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0811c&L=sas-l&D=0&P=19656","timestamp":"2014-04-18T13:19:09Z","content_type":null,"content_length":"10957","record_id":"<urn:uuid:7faee857-2908-4d63-86dc-6b3a765cb490>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00078-ip-10-147-4-33.ec2.internal.warc.gz"} |
0^0 equals 1 or undefined?
Re: 0^0 equals 1 or undefined?
Not exactly.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: 0^0 equals 1 or undefined?
Hmmmm. Them is fighting words where I come from.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: 0^0 equals 1 or undefined?
hi Stefy,
Been getting tools ready to do some real work this afternoon.
Stefy wrote:
But what are the privileges of defining 0.999... to "exist" as a mathematical concept and be equal to 1?
That's my whole point. Anyone can have their own mathematics just by defining it. If it is consistent and useful, others may adopt it too.
If it isn't useful, others will probably just ignore it.
If it's inconsistent, then lots of folk will tell you so.
Whatever, nobody has to use it if they think it is odd, poorly defined or just they don't understand it.
We've had a few of those on the forum recently.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: 0^0 equals 1 or undefined?
Hi Bob
True, true.
What kind of tools?
Hi bobbym
Look the phrase up.
Re: 0^0 equals 1 or undefined?
Saws, powered screwdriver, tape measure, sandpaper .... I'm repairing boxes that act as general purpose stage extensions. Roughly 800 (H) x 1200 x 1200. The side panels of some have developed holes.
Re: 0^0 equals 1 or undefined?
Sorry, I gave up looking things up. I could not understand it anyway.
Re: 0^0 equals 1 or undefined?
Hi Bob
What do you need those for?
Hi bobbym
You should really look things up from time to time...
Re: 0^0 equals 1 or undefined?
I already know what it means because I have been wrongfully accused of doing that. If it were not you saying .99999999... ≠ 1, I would not be involved in the debate.
Re: 0^0 equals 1 or undefined?
I'm not actually saying that 0.999!=1, just that 0.999... should be barred.
Re: 0^0 equals 1 or undefined?
The boxes are quite big and strong enough to stand on. They are the same height as the school stage so they make the stage bigger. There are 6 boxes plus 4 staircases built to the same sizes. So you
can arrange them in a variety of ways for shows to have centre stairs or side stairs or whatever. Once we arranged them out in a straight line from the stage and made a cat-walk for a fashion show.
But kids sit on them and swing their feet. Over time, some side panels have developed holes and once a small hole starts, some ***&&!! will manage to kick it bigger. gggrrrr!
One got large enough for a kid to climb inside the box. ?????
So I'm about to take off the damaged bits and saw up new panels to replaced them. Hopefully one afternoon's work. Then someone else is going to paint them black.
Re: 0^0 equals 1 or undefined?
Yes, I will bar it immediately.
To me the algebraic idea, the geometric series and the fraction idea are enough. We numerical people do not have problems with .99999999... = 1.
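The geometric-series argument mentioned here can be written out in one line (a standard identity, not quoted from the thread):

```latex
0.\overline{9} \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
\;=\; \frac{9}{10}\cdot\frac{1}{1-\tfrac{1}{10}} \;=\; 1 .
```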
Re: 0^0 equals 1 or undefined?
Stefy wrote:
I'm not actually saying that 0.999!=1, just that 0.999... should be barred.
You can bar it from your universe if you like but it's in mine and you cannot take it away!
Re: 0^0 equals 1 or undefined?
Hope you get it finished today!
Re: 0^0 equals 1 or undefined?
Should do; if I get started. Sometimes I arrive to discover that a teacher wants to use the hall for some frivolous purpose like teaching children. Then I have to come back home and try again another day.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=252371","timestamp":"2014-04-20T16:45:15Z","content_type":null,"content_length":"25861","record_id":"<urn:uuid:120896ff-f028-4cd7-8b62-f7b2be1b97b0>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00643-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Compile-time arithmetics needed
preston@dawn.cs.rice.edu (Preston Briggs)
Wed, 5 Jan 1994 20:54:39 GMT
From comp.compilers
Newsgroups: comp.compilers
From: preston@dawn.cs.rice.edu (Preston Briggs)
Keywords: arithmetic
Organization: Rice University, Houston
References: 94-01-014
Date: Wed, 5 Jan 1994 20:54:39 GMT
sepp@cs.tu-berlin.de (Maximilian Spring) writes:
>I am looking for algorithms for efficient basic arithmetics (+, -, *, /
>for real & integers and modulus for integers) for constant folding, where
>range overflows are detectable.
First off, you want to be careful about folding real arithmetic; this is
an easy way to get into precision trouble. I limit myself to cases where
I can prove that the operands (or operand) is equal to some integer. That
is, I won't fold
3.14 * 1.23
but I will fold
float y = (float) 1;
float x = y * z;
There's actually a whole class of these possiblities (at least for
Fortran) that might not ordinarily come to mind: sin(0), sign(0, x),
sqrt(0), sqrt(1), log(0), log10(1), log10(10), ..., plus the various
methods of rounding and truncating. Basically, you look for special cases
where an integer-valued input gives an integer-valued output.
For integer arithmetic, the tedious cases are addition and subtraction.
Multiplication, division, and modulus are much simpler and can be handled
by checking the result. For example, if we're trying to compute X * Y and
we know they're both constants, we can compute
temp1 = X * Y; -- assuming there's no overflow checking here!
temp2 = temp1 / X;
if (temp2 != Y)
then we've got an overflow
My approach, when an overflow is detected, is to simply leave the
operation unfolded. That way, it can be detected at runtime _if_ it
actually is executed. And if the target machine has a different integer
precision or doesn't bother to detect integer overflow, then the optimizer
won't have changed the result.
To handle addition and subtraction, you need to check the signs of the
operands and the results. For example, if you add two positive operands
and get a negative result, you've had an overflow.
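Both checks can be sketched by simulating 32-bit two's-complement arithmetic in Python (an illustration of the methods described, not the poster's actual code; Python integers never overflow, so the wraparound is emulated):

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def wrap32(v):
    """Simulate 32-bit two's-complement arithmetic with no overflow trap."""
    v &= 0xFFFFFFFF
    return v - 2**32 if v >= 0x80000000 else v

def try_fold_add(x, y):
    """Sign check: same-sign operands with an opposite-sign result mean
    the addition overflowed; return None to leave the op unfolded."""
    s = wrap32(x + y)
    if (x < 0) == (y < 0) and (s < 0) != (x < 0):
        return None
    return s

def try_fold_mul(x, y):
    """Divide-back check: multiply, then verify that dividing the result
    by one operand recovers the other (truncating division).  As the
    post notes, there is a further tricky case; this is just a sketch."""
    t = wrap32(x * y)
    if x != 0 and int(t / x) != y:
        return None
    return t

print(try_fold_add(2, 3))           # 5
print(try_fold_add(INT_MAX, 1))     # None -- left unfolded for runtime
print(try_fold_mul(46341, 46341))   # None -- 46341^2 exceeds INT_MAX
```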
I seem to recall that there's a further tricky case, but I can't get at my
code to check. So, be careful and check those boundary conditions!
Preston Briggs
Search the comp.compilers archives again. | {"url":"http://compilers.iecc.com/comparch/article/94-01-018","timestamp":"2014-04-21T09:53:32Z","content_type":null,"content_length":"5963","record_id":"<urn:uuid:94ae8433-8745-4747-a6ed-5e0c041934f0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00103-ip-10-147-4-33.ec2.internal.warc.gz"} |
Introduction to Integration - The Exercise Bicycle Problem, Part 2
An aerobics instructor has decided that the best 1-hour stationary bicycle workout occurs when the speed of the bicycle follows the gray curve shown above. If the speed of the bicycle follows the
curve, what will be the total (virtual) distance traveled? We will estimate the distance.
1. Check the "Show divisions" checkbox, and set the number of divisions to 6. Imagine that the instructor does the workout, following the speed indicated by the gray curve. Every 10 minutes she
notes her speed, and figures that she was probably going approximately that fast during the previous 10 minutes.
1. Do you see how her method for estimating rate relates to the horizontal green lines?
2. If she estimates her distance this way, what is her estimate for total distance after 60 minutes? You can just look at the graph to estimate `y`-values.
2. The instructor does the workout again, following the speed indicated by the gray curve. This time, she checks her speed every 5 minutes, and figures that she was probably going approximately that
fast during the previous 5 minutes. What will be her estimate for total distance traveled during the 1-hour workout?
3. Justify: Estimating the total distance by using shorter time intervals (more divisions) produces more accurate estimates.
4. Suppose the gray speed curve is given by the function `y = f(t)` where `t` is in hours (careful!). Consider the arithmetic needed to find an estimate for total distance if you checked your
bicycle speed every minute (`1/60` of an hour). Your estimate would look like a sum with 60 terms. Using the symbolic notation `f` and not using the graph to estimate `y`-values, write out the
first three and the final three terms of your estimate.
5. Similar to the previous problem, imagine we check the speed every second, and use that to estimate total distance.
1. How many terms will be in the estimate?
2. Use `f` to write out the first three and final three terms of the estimate. | {"url":"http://webspace.ship.edu/msrenault/GeoGebraCalculus/integration_intro_bicycle2.html","timestamp":"2014-04-20T18:24:33Z","content_type":null,"content_length":"8209","record_id":"<urn:uuid:f1307c84-a21d-4825-babb-cb85b5cbc0e2>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
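The instructor's method in the exercise above, sampling the speed at the end of each interval and assuming it held throughout, is a right-endpoint Riemann sum. A short Python sketch with a made-up speed curve, since the applet's gray curve is not given here:

```python
def right_riemann(f, a, b, n):
    """Estimate total distance by sampling the speed at the end of each
    of n equal subintervals and assuming it held over that interval."""
    dt = (b - a) / n
    return sum(f(a + i * dt) * dt for i in range(1, n + 1))

# Hypothetical speed curve (mph) over a 1-hour workout, t in hours;
# it peaks at 20 mph halfway through.  Exact distance = 80/6 miles.
speed = lambda t: 80 * t * (1 - t)

for n in (6, 12, 60, 3600):      # every 10 min, 5 min, 1 min, 1 second
    print(n, round(right_riemann(speed, 0, 1, n), 4))
```

The estimates rise toward the exact distance of 80/6 ≈ 13.33 (virtual) miles as the intervals shrink, which is the point of problem 3 above.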
TR13-071 | 8th May 2013 17:36
Inapproximability of Minimum Vertex Cover on $k$-uniform $k$-partite Hypergraphs
We study the problem of computing the minimum vertex cover on $k$-uniform $k$-partite hypergraphs when the $k$-partition is given. On bipartite graphs ($k=2$), the minimum vertex cover can be
computed in polynomial time. For $k \ge 3$, this problem is known to be NP-hard. For general $k$, the problem was studied by Lov\'{a}sz (1975), who gave a $\frac{k}{2}$-approximation based on the
standard LP relaxation. Subsequent work by Aharoni, Holzman, and Krivelevich (1996) showed a tight integrality gap of $\left(\frac{k}{2} - o(1)\right)$ for the LP relaxation.
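As context for the tractable $k=2$ case noted above: by Kőnig's theorem, bipartite minimum vertex cover equals maximum matching, so it is computable via augmenting paths. A small self-contained Python sketch (the example graph and names are mine, not from the paper) checks the matching size against a brute-force minimum cover:

```python
from itertools import combinations

def max_matching(left, edges):
    """Kuhn's augmenting-path algorithm for bipartite maximum matching."""
    adj = {u: [v for (a, v) in edges if a == u] for u in left}
    match = {}                                # right vertex -> its partner
    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    return sum(augment(u, set()) for u in left)

def min_vertex_cover_size(verts, edges):
    """Brute force: smallest vertex set touching every edge."""
    for size in range(len(verts) + 1):
        for cover in combinations(verts, size):
            if all(u in cover or v in cover for (u, v) in edges):
                return size

edges = [("a", 1), ("a", 2), ("b", 1), ("c", 3)]
print(max_matching(["a", "b", "c"], edges),
      min_vertex_cover_size(["a", "b", "c", 1, 2, 3], edges))  # sizes agree
```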
We further investigate the inapproximability of minimum vertex cover on $k$-uniform $k$-partite hypergraphs and present the following results (here $\epsilon > 0$ is an arbitrarily small constant):
• NP-hardness of obtaining an approximation factor of $\left(\frac{k}{4} - \epsilon \right)$ for even $k$, and $\left(\frac{k}{4} - \frac{1}{4k} - \epsilon\right)$ for odd $k$,
• NP-hardness of obtaining a nearly-optimal approximation factor of $\left(\frac{k}{2}-1+\frac{1}{2k}-\epsilon\right)$, and,
• An optimal Unique Games-hardness for approximation within factor $\left(\frac{k}{2} - \epsilon\right)$, showing the optimality of Lov\'{a}sz's algorithm if one assumes the Unique Games conjecture.
The first hardness result is based on a reduction from minimum vertex cover in $r$-uniform hypergraphs, for which NP-hardness of approximating within $r - 1 -\epsilon$ was shown by Dinur, Guruswami,
Khot, and Regev (2005). We include it for its simplicity, despite it being subsumed by the second hardness result. The Unique Games-hardness result is obtained by applying the results of Kumar,
Manokaran, Tulsiani, and Vishnoi (2011), with a slight modification, to the LP integrality gap due to Aharoni et al. The modification ensures that the reduction preserves the desired structural
properties of the hypergraph.
The reduction for the nearly optimal NP-hardness result relies on the Multi-Layered PCP of [DGKR05], and uses a gadget based on biased Long Codes, which is adapted from the LP integrality gap for the
problem. The nature of our reduction requires the analysis of several Long Codes with different biases, for which we prove structural properties of cross-intersecting collections of set families,
variants of which have been studied in extremal set theory. | {"url":"http://eccc.hpi-web.de/report/2013/071/","timestamp":"2014-04-17T06:55:50Z","content_type":null,"content_length":"21710","record_id":"<urn:uuid:55996afd-fc2a-492d-a268-889a19063bc4>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00146-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spanning Trees in the Complete Bipartite Graph
March 26th 2011, 03:33 AM #1
I got a question that states,
It is known that the number of spanning trees in Km,n is given by the function f(m,n) where
f(m,n)= n^(m-1)*m^(n-1)
Prove that the formula gives the correct number of spanning trees for K2,n.
First of all I found that f(2,n) = n*2^(n-1).
Then I tried to come up with some sort of reasoning about the number of spanning trees, and would then show that it is equivalent to the above, but I'm going round in circles really. I may be going the wrong way about it completely, but I'm not sure; any help would be much appreciated.
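One way to sanity-check the formula (though not to prove it) is the Matrix-Tree (Kirchhoff) theorem: the number of spanning trees equals any cofactor of the graph's Laplacian. A rough sketch:

```python
# Hedged sketch: check f(m,n) = n^(m-1) * m^(n-1) against the Matrix-Tree
# (Kirchhoff) theorem, which says the spanning-tree count of a graph equals
# any cofactor of its Laplacian. This verifies, not proves, the formula.
from fractions import Fraction

def spanning_trees_kmn(m, n):
    size = m + n
    # Laplacian of K_{m,n}: degrees on the diagonal, -1 across the two parts.
    L = [[(n if i < m else m) if i == j else
          (-1 if (i < m) != (j < m) else 0) for j in range(size)]
         for i in range(size)]
    # Cofactor: determinant of L with the last row and column deleted,
    # computed exactly with Gaussian elimination over the rationals.
    A = [[Fraction(L[i][j]) for j in range(size - 1)] for i in range(size - 1)]
    det = Fraction(1)
    for c in range(size - 1):
        p = next(r for r in range(c, size - 1) if A[r][c] != 0)
        if p != c:
            A[c], A[p] = A[p], A[c]
            det = -det
        det *= A[c][c]
        for r in range(c + 1, size - 1):
            f = A[r][c] / A[c][c]
            for k in range(c, size - 1):
                A[r][k] -= f * A[c][k]
    return int(det)

# Matches the claimed formula, including the K_{2,n} case f(2,n) = n*2^(n-1):
for m, n in [(2, 2), (2, 3), (2, 5), (3, 3)]:
    assert spanning_trees_kmn(m, n) == n**(m - 1) * m**(n - 1)
```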
| {"url":"http://mathhelpforum.com/discrete-math/175867-spanning-tree-s-complete-bipartite-graph.html","timestamp":"2014-04-19T05:08:15Z","content_type":null,"content_length":"29480","record_id":"<urn:uuid:5f002528-2be9-4e2f-bb05-52d51c73963e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Applications and libraries/Mathematics
From HaskellWiki
Revision as of 11:47, 16 June 2013
1 Applications
1.1 Physics
Meep (or MEEP) is a free finite-difference time-domain (FDTD) simulation software package developed at MIT to model electromagnetic systems.
Jan Skibinski's Numeric Quest library provides modules that are useful for Quantum Mechanics, among other things.
2 Libraries
2.1 Linear algebra
A library that implements Matrix operations in pure Haskell using mutable arrays and the ST Monad. bed-and-breakfast does not need any additional software to be installed and can perform basic
matrix operations like multiplication, finding the inverse, and calculating determinants efficiently.
Patrick Perry's linear algebra library, built on BLAS. hs-cblas seems to be a more up-to-date fork.
Modules for matrix manipulation, Fourier transform, interpolation, spectral estimation, and frequency estimation.
Frederik Eaton's library for statically checked matrix manipulation in Haskell
Jan Skibinski's Numeric Quest library provides several modules that are useful for linear algebra in general, among other things.
The vector-space package defines classes and generic operations for vector spaces and affine spaces. It also defines a type of infinite towers of generalized derivatives (linear transformations).
By Alberto Ruiz. From the project website:
A purely functional interface to linear algebra and other numerical algorithms, internally implemented using LAPACK, BLAS, and GSL.
This package includes standard matrix decompositions (eigensystems, singular values, Cholesky, QR, etc.), linear systems, numeric integration, root finding, etc.
By Scott E. Dillard. Static dimension checking:
Vectors are represented by lists with type-encoded lengths. The constructor is :., which acts like a cons both at the value and type levels, with () taking the place of nil. So x:.y:.z:.() is
a 3d vector. The library provides a set of common list-like functions (map, fold, etc) for working with vectors. Built up from these functions are a small but useful set of linear algebra
operations: matrix multiplication, determinants, solving linear systems, inverting matrices.
3 See also
See also: Design discussions
Working with physical units like second, meter and so on in a type-safe manner.
Numeric values with dynamically checked units.
This is not simply a library providing a new type of class, but a stand-alone calculation tool that supports user-defined functions and units (basic and derived), so it can provide dimension-safe calculation (not embedded, but via a shell). Calculations can be modified/saved via the shell. It uses rational numbers to avoid rounding errors where possible.
Library providing data types for performing arithmetic with physical quantities and units. Information about the physical dimensions of the quantities/units is embedded in their types and the
validity of operations is verified by the type checker at compile time. The boxing and unboxing of numerical values as quantities is done by multiplication and division of units.
3.2 Number representations
3.2.1 Decimal numbers
An implementation of real decimal arithmetic, for cases where the binary floating point is not acceptable (for example, money).
3.2.2 Real and rational numbers
There are several levels of handling real numbers and corresponding libraries.
3.2.2.1 Arbitrary precision
• Numbers have fixed precision
• Rounding errors accumulate
• Sharing is easy, i.e. in `sqrt pi + sin pi`, `pi` is computed only once
• Fast, because the routines can make use of the fast implementation of operations
Jan Skibinski's Numeric Quest library provides, among other things, a type for arbitrary precision rational numbers with transcendental functions.
part of NumericPrelude project
AERN-Basics AERN-Real AERN-Real-Interval AERN-Real-Double
contains type classes that form a foundation for rounded arithmetic and interval arithmetic with explicit control of rounding and the possibility to increase the rounding precision arbitrarily
for types that support it. At the moment there are instances for Double floating point numbers where one can control the direction of rounding but cannot increase the rounding precision. In the
near future instances for MPFR arbitrary precision numbers will be provided. Intervals can use as endpoints any type that supports directed rounding in the numerical order (such as Double or
MPFR) and operations on intervals are rounded either outwards or inwards. Outwards rounding allows to safely approximate exact real arithmetic while a combination of both outwards and inwards
rounding allows one to safely approximate exact interval arithmetic. Inverted intervals with Kaucher arithmetic are also supported.
contains arithmetic of piecewise polynomial function intervals that approximate multi-dimensional (almost everywhere) continuous real functions to arbitrary precision
hmpfr is a purely functional haskell interface to the MPFR library
provides an up-to-date, easy-to-use BigFloat implementation that builds with a modern GHC, among other things.
3.2.2.2 Dynamic precision
• You tell the precision an expression shall be computed to, and the computer finds out how precisely to compute the input values
• Rounding errors do not accumulate
• Sharing of temporary results is difficult, that is, in `sqrt pi + sin pi`, `pi` will be computed twice, each time with the required precision.
• Almost as fast as arbitrary precision computation
ERA is an implementation (in Haskell 1.2) by David Lester.
It is quite fast, possibly the fastest Haskell implementation. At 220 lines it is also the shortest. Probably the shortest implementation of exact real arithmetic in any language.
The provided number type is an instance of the Haskell 98 numeric type classes and thus can be used wherever you used Float or Double before and encountered some numerical difficulties.
Here is a mirror: http://darcs.augustsson.net/Darcs/CReal/
IC-Reals is an implementation by Abbas Edalat, Marko Krznarić and Peter J. Potts.
This implementation uses linear fractional transformations.
Few Digits by Russell O'Connor.
This is a prototype of the implementation he intends to write in Coq. Once the Coq implementation is complete, the Haskell code could be extracted, producing an implementation that would be proved correct.
COMP is an implementation by Yann Kieffer.
The work is in beta and relies on new primitive operations on Integers which will be implemented in GHC. The library isn't available yet.
Hera is an implementation by Aleš Bizjak.
It uses the MPFR library to implement dyadic rationals, on top of which are implemented intervals and real numbers. A real number is represented as a function `Int -> Interval` which represents a sequence of intervals converging to the real.
3.2.2.3 Dynamic precision by lazy evaluation
The real numbers are represented by an infinite data structure, which allows you to increase precision successively by evaluating the data structure further. All of the implementations below use some kind of digit stream as the number representation. Sharing of results is simple. The implementations are either fast on simple expressions, because they use large blocks/bases, or fast on complex expressions, because they consume as few input digits as possible in order to emit the required output digits.
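As a loose illustration of the idea (using Python generators in place of Haskell's laziness; the numbers and interfaces here are made up, not taken from any of the libraries below):

```python
from fractions import Fraction

# A real number as an infinite stream of nested rational intervals; consumers
# pull only as much precision as they need (generators stand in for laziness).
def sqrt_stream(n):
    """Yield ever-tighter intervals [lo, hi] containing sqrt(n), by bisection."""
    lo, hi = Fraction(0), Fraction(max(n, 1))
    while True:
        yield (lo, hi)
        mid = (lo + hi) / 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid

def to_precision(stream, eps):
    """Evaluate the lazy number just far enough for the requested precision."""
    for lo, hi in stream:
        if hi - lo < eps:
            return lo, hi

lo, hi = to_precision(sqrt_stream(2), Fraction(1, 10**6))
print(float(lo), float(hi))   # both ≈ 1.4142135
```

Bisection is of course a crude refinement rule; the digit-stream libraries listed below are far more sophisticated, but the consumption pattern is the same.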
BigFloat is an implementation by Martin Guy.
It works with streams of decimal digits (strictly in the range from 0 to 9) and a separate sign. The produced digits are always correct. Output is postponed until the code is certain what the
next digit is. This sometimes means that no more data is output.
In "The Most Unreliable Technique in the World to compute pi" Jerzy Karczmarczuk develops some functions for computing pi lazily.
Represents a real number as pair , where the digits are s in the open range . There is no need for an extra sign item in the number data structure. The can range from to . (Binary representations
can be derived from the hexadecimal representation.) Showing the numbers in traditional format (non-negative digits) fails for fractions ending with a run of zeros. However the internal
representation with negative digits can always be shown and is probably more useful for further processing. An interface for the numeric type hierarchy of the NumericPrelude project is provided.
It features
□ basis conversion
□ basic arithmetic: addition, subtraction, multiplication, division
□ algebraic arithmetic: square root, other roots (no general polynomial roots)
□ transcendental arithmetic: pi, exponential, logarithm, trigonometric and inverse trigonometric functions
3.3 Type class hierarchies
There are several approaches to improve the numeric type class hierarchy.
Dylan Thurston and Henning Thielemann's Numeric Prelude
Experimental revised framework for numeric type classes. Needs hiding of Prelude, overriding hidden functions like fromInteger and multi-parameter type classes. Probably restricted to GHC.
Jerzy Karczmarczuk's approach
Serge D. Mechveliani's Basic Algebra proposal
Andrew Frank's approach
The proposal: ftp://ftp.geoinfo.tuwien.ac.at/frank/numbersPrelude_v1.pdf
Haskell Prime
3.4 Discrete mathematics
Andrew Bromage's Haskell number theory library, providing operations on primes, Fibonacci sequences and combinatorics.
A Haskell implementation of Brendan McKay's algorithm for graph canonical labeling and automorphism groups (aka Nauty).
is a book by Jürgen G. Bokowski, where he develops Haskell code for Matroid computations.
See also Libraries and tools/Cryptography
3.5 Computer Algebra
DoCon - Algebraic Domain Constructor
A library for Algebra, turns GHCi into a kind of Computer Algebra System
Some interesting uses of Haskell in mathematics, including functional differentiation, power series, continued fractions.
HCAS by Rob Tougher.
3.6 Statistics
Statistical Computing with Haskell
A binding to the statistics portion of GSL. Works with hmatrix
A library for doing statistics. Works with hmatrix
3.7 Plotting
Simple and easy wrapper to gnuplot.
Simple wrapper to gnuplot
gnuplot wrapper as part of GSL Haskell package
A library for generating 2D Charts and Plots, based upon the cairo graphics library.
A library for generating figures, based upon the cairo graphics library, with a simple, monadic interface.
the module Numeric.Probability.Visualize contains a wrapper to R
3.8 Miscellaneous libraries
The HaskellMath library is a sandbox for experimenting with mathematics algorithms. So far I've implemented a few quantitative finance models (Black Scholes, Binomial Trees, etc) and basic linear
algebra functions. Next I might work on either computer algebra or linear programming. All comments welcome!
David Amos' library for combinatorics, group theory, commutative algebra and non-commutative algebra, which is described in an accompanying blog.
This is some unsorted mathematical stuff including: gnuplot wrapper (now maintained as separate package), portable grey map (PGM) image reader and writer, simplest numerical integration,
differentiation, zero finding, interpolation, solution of differential equations, combinatorics, some solutions of math riddles, computation of fractal dimensions of iterated function systems
Jan Skibinski wrote a collection of Haskell modules that are useful for Mathematics in general, and Quantum Mechanics in particular.
Some of the modules are hosted on haskell.org. They include modules for:
□ Rational numbers with transcendental functions
□ Roots of polynomials
□ Eigensystems
□ Tensors
□ Dirac quantum mechanics
Other modules in Numeric Quest are currently only available via the Internet Archive. They include, among many other things:
See the Numeric Quest page for more information.
A small Haskell library, containing algorithms for two-dimensional convex hulls, triangulations of polygons, Voronoi-diagrams and Delaunay-triangulations, the QEDS data structure, kd-trees and
A Haskell interface to Lester Ingber's adaptive simulated annealing code.
Hmm is a small Haskell library to parse and verify Metamath databases.
The PFP library is a collection of modules for Haskell that facilitates probabilistic functional programming, that is, programming with stochastic values. The probabilistic functional programming approach is based on a data type for representing distributions. A distribution represents the outcome of a probabilistic event as a collection of all possible values, tagged with their likelihood. A nice aspect of this system is that simulations can be specified independently from their method of execution. That is, we can either fully simulate or randomize any simulation without altering the code which defines it.
A general boolean algebra class and some instances for Haskell.
HODE is a binding to the Open Dynamics Engine. ODE is an open source, high performance library for simulating rigid body dynamics.
A ranged set is a list of non-overlapping ranges. The ranges have upper and lower boundaries, and a boundary divides the base type into values above and below. No value can ever sit on a
boundary. So you can have the set $(2.0, 3.0] \cup (5.3, 6)$.
Hhydra is a tool to compute Goodstein successions and hydra puzzles described by Bernard Hodgson in his article 'Herculean or Sisyphean tasks?' published in No 51 March 2004 of the Newsletter of
the European Mathematical Society.
This page contains a list of libraries and tools in a certain category. For a comprehensive list of such pages, see Libraries and tools. | {"url":"http://www.haskell.org/haskellwiki/index.php?title=Applications_and_libraries/Mathematics&diff=56277&oldid=37174","timestamp":"2014-04-18T07:10:41Z","content_type":null,"content_length":"78828","record_id":"<urn:uuid:0a4d94a0-ddc0-4760-bb46-619e7a2b1b2a>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00236-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elementary Geometry and Topology
Take a pair of points on the arc. Open your compass wide enough that two (equal) circles drawn with these points as centers will intersect. Draw the two circles (or at least part of them) to determine their two points of intersection. Draw a line through the two points of intersection.
Take another pair of points on the arc and do the above construction again to draw another line the same way.
The point where the two lines you've drawn intersect is the center of the circle your original arc belongs to. (And, of course, the distance between that center and any point on the arc is the
radius you were after.)
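Each constructed line is the perpendicular bisector of a chord, so numerically the same recipe is the classic circumcenter formula. A sketch (the three sample points are assumptions):

```python
# The intersection of the perpendicular bisectors of two chords is the center;
# A, B, C below are three sample points assumed to lie on the arc.
def circumcenter(A, B, C):
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
          + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
          + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ux, uy)

center = circumcenter((6, 2), (1, 7), (-4, 2))   # points on a circle about (1, 2)
print(center)   # ≈ (1.0, 2.0); the radius is then the distance to any point
```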
Let's start with simple planar surfaces:
□ A circle of radius R has area πR^2 (π = 3.14159265...).
□ The area of a trapezoid is the arithmetic average (i.e., the half-sum) of its two parallel bases multiplied by its height (the height is the distance between the bases). The area of a
rectangle (width multiplied by height) may be seen as a special case of this...
□ A triangle may be also considered a special type of trapezoid (with one base of zero length) and its area is [therefore] half the product of a side by the corresponding height.
Let's proceed with the simplest curved surfaces:
□ The surface area of a sphere of radius R is 4πR^2 (its volume is 4πR^3/3).
□ More generally, we may consider the surface (sometimes called a spherical frustum or "frustrum") which consists of the part of the surface of a sphere between two parallel planes that intersect it. The surface area of such a frustum is 2πRH, if H is the distance between the two planes. When H=2R, the frustum consists of the entire sphere and the above formula does give an area of 4πR^2, as expected.
□ An ordinary right cylinder of height H is the surface generated by a straight segment of length H perpendicular to a plane containing the trajectory of one of its extremities (a simple curve of length L). The surface area of such a cylinder is simply LH. In particular, if the above "trajectory" is a circle, we have a circular cylinder of radius R, whose surface area is 2πRH.
(Note that a spherical frustum has the same area as the right cylinder circumscribed to it, a remarkable fact first discovered by Archimedes of Syracuse.)
□ We may also consider the lateral surface area of the conical surface generated by a segment of length R with one fixed extremity, when the other extremity has a trajectory of length L (this spherical trajectory is not planar unless it happens to be a circle, which is true only for an ordinary circular cone). The area of such a surface is simply RL/2. In particular, the lateral surface area of an ordinary circular cone is πRr, if r is the radius of its base and R is the distance from the circumference of the base to the apex. If you're given the height H of the cone instead of R, the Pythagorean theorem (R^2 = H^2 + r^2) comes in handy to give you the lateral area of the conical surface as πr√(H^2 + r^2). Note that the case R=r (or H=0) corresponds to a "flat" circular cone, which is simply a circle of area πR^2... Back to our first formula!
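The last step can be sketched directly:

```python
import math

# Lateral area of a circular cone from base radius r and height H, via the
# Pythagorean step described above (R is the slant distance rim-to-apex).
def cone_lateral_area(r, H):
    R = math.hypot(r, H)     # R^2 = H^2 + r^2
    return math.pi * r * R

print(cone_lateral_area(3.0, 4.0))   # pi * 3 * 5, since the slant R is 5
# H = 0 degenerates to a flat cone: area pi * r^2, matching the circle formula.
```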
For any triangle, the centroid G, the orthocenter H and the circumcenter O are collinear. The straight line on which they stand is often called Euler's line (it's undefined for an equilateral triangle).
The centroid G lies between the orthocenter H and the circumcenter O, and the distance HG is twice the distance GO.
The incenter of a scalene triangle is not on its Euler line, unlike another remarkable point E which is located exactly halfway between H and O: That point E is the center of the so-called Euler circle (or 9-point circle) which goes through 9 special points of the triangle: the 3 midpoints of the sides, the feet of the 3 altitudes, and the 3 midpoints of the segments joining the orthocenter H to the vertices. The 9-point circle has half the radius of the circumcircle.
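These collinearity facts are easy to check numerically. The sketch below uses the standard vector identity H = A + B + C − 2O (with O the circumcenter), which is consistent with, though not stated in, the text; the sample triangle is an arbitrary assumption:

```python
import math

# Verify HG = 2*GO (and collinearity) on a sample triangle, using the
# standard identity H = A + B + C - 2O with O the circumcenter.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)

def circumcenter(A, B, C):
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
          + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
          + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ux, uy)

O = circumcenter(A, B, C)
G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
H = (A[0] + B[0] + C[0] - 2*O[0], A[1] + B[1] + C[1] - 2*O[1])
dist = lambda P, Q: math.hypot(P[0] - Q[0], P[1] - Q[1])
print(abs(dist(H, G) - 2 * dist(G, O)) < 1e-12)   # True: HG = 2*GO
```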
Euler's Circle and Feuerbach's Theorem :
Feuerbach's theorem (1822) states that the 9-point circle is tangent externally to the three excircles and internally to the incircle (at a point called Feuerbach's point).
The existence of the 9-point circle was unknown to Euler. The basic fact that the midpoints of the sides and the feet of the altitudes all belong to the same circle was discovered independently by Charles Brianchon (1783-1864; X1803), Jean-Victor Poncelet (1788-1867; X1807) and Karl Wilhelm von Feuerbach (1800-1834). The remark that the same circle also goes through the midpoints between the orthocenter and the vertices was first made by Olry Terquem (1782-1862; X1801), who coined the term 9-point circle, which stuck...
That might be unfortunate in view of the fact that there are many more than 9 special points on Euler's circle : In 1996, Jingchen Tong & Sidney Kung named 24 of those!
The Poncelet point of a quadrilateral ABCD is the point of intersection of the four respective Euler circles of ABC, ABD, ACD and BCD.
Twelve New Points on the Nine-Point Circle by Jingchen Tong & Sidney Kung (1996)
Consider a point P at a distance d from the center of a circle of radius r.
If a line through P intersects that circle at points A and B (A = B if the line is tangent to the circle) then the following quantity is called the power of P with respect to the circle. It
doesn't depend on the intersecting line.
d^2 - r^2 = PA·PB
MN denotes the linear abscissa whose magnitude is the Euclidean distance between M and N. Its sign depends on the orientation of the line.
UV + VW = UW (Chasles relation)
With respect to a circle, a point has negative power if it's inside the circle, positive power if it's outside and zero power if it's on the circle itself.
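Numerically, the independence from the chosen secant falls out of Vieta's formulas: the signed distances from P to the two intersection points are the roots of a quadratic whose constant term is d^2 − r^2. A sketch (the circle, point, and directions are assumptions):

```python
import math

# For any secant through P, the signed product PA*PB equals the power
# d^2 - r^2. The circle, the point P, and the two directions are assumptions.
O, r = (0.0, 0.0), 2.0
P = (3.0, 1.0)
power = (P[0] - O[0])**2 + (P[1] - O[1])**2 - r*r     # d^2 - r^2 = 6

def secant_product(ux, uy):
    """Product of the signed distances from P to the circle along unit (ux, uy).
    The distances are the roots of t^2 + b t + c = 0, so the product is c."""
    b = 2 * ((P[0] - O[0])*ux + (P[1] - O[1])*uy)
    disc = b*b - 4*power          # assumes the chosen line meets the circle
    t1 = (-b - math.sqrt(disc)) / 2
    t2 = (-b + math.sqrt(disc)) / 2
    return t1 * t2

# Two different secants through P give the same product, the power of P:
print(secant_product(-1.0, 0.0))                               # ≈ 6.0
print(secant_product(-3 / math.sqrt(10), -1 / math.sqrt(10)))  # ≈ 6.0
```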
Cut-the-Knot by Alexander Bogomolny : Power of a Point | Intersecting Chords Theorem
Trilinear coordinates (trilinears) and barycentric coordinates are examples of homogeneous coordinates. This is to say that, in either system, two proportional triplets represent the same point
of the Euclidean plane.
In both systems, by convention, the triplet (0,0,0) represents the point at infinity (spanning the entire horizon of the plane, in all directions).
A finite point M of barycentric coordinates (x,y,z) is defined in terms of the three base points A,B,C by the following relation:
(x+y+z) M = x A + y B + z C
The trilinear coordinates of a point of barycentric coordinates (x,y,z) are (x/a, y/b, z/c). Under the usual geometric interpretation, a, b and c are the pairwise distances between the three base
points and, therefore, must satisfy the triangular inequality. However, the above correspondence can be investigated abstractly without that requirement...
Actually, barycentric coordinates describe a general vector space without resorting to any metric concept, whereas the mapping from barycentric to trilinear coordinates is a way to endow the
plane with a definite metric (as is a linear mapping from the plane to its dual). Barycentric coordinates are to contravariant cartesian coordinates what trilinears are to covariant coordinates.
When the triangular inequality fails for a, b and c, the metric so defined is Lorentzian, not Euclidean.
Trilinear Coordinates | Barycentric Coordinates
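The defining relation (x+y+z) M = x A + y B + z C and the barycentric-to-trilinear map can be checked numerically; this is an illustrative sketch with hypothetical helper names:

```python
import math

def bary_point(A, B, C, x, y, z):
    """Cartesian point M of barycentric coordinates (x, y, z) with
    respect to base points A, B, C: (x+y+z) M = x A + y B + z C.
    (Finite point, so x+y+z must be nonzero.)"""
    s = x + y + z
    return ((x * A[0] + y * B[0] + z * C[0]) / s,
            (x * A[1] + y * B[1] + z * C[1]) / s)

def bary_to_trilinear(x, y, z, a, b, c):
    """Trilinears (x/a, y/b, z/c) of the point of barycentrics (x, y, z),
    where a, b, c are the side lengths opposite A, B, C."""
    return (x / a, y / b, z / c)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
a, b, c = math.dist(B, C), math.dist(A, C), math.dist(A, B)

M1 = bary_point(A, B, C, 1, 2, 3)
M2 = bary_point(A, B, C, 2, 4, 6)   # proportional triplet: same point
```

As a consistency check, the incenter (barycentrics proportional to the side lengths a, b, c) gets trilinears (1, 1, 1) under this map.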
For the length (perimeter) of the entire circumference, see our (unabridged) answer to the next question.
A parametric equation for an ellipse of cartesian equation x^2/a^2 + y^2/b^2 = 1 is: x = a sin(θ) and y = b cos(θ). We assume a>b and define e^2 = 1 - b^2/a^2. The above figure shows how θ may
be determined using an auxiliary circle whose radius is the ellipse's major radius a.
It suffices to calculate the elliptic arc (shown as a red line in the picture) from the flat apex (at θ=0) to an arbitrary point, conventionally, no more than a quarter of a perimeter away. The
length of the arc between two points is obtained by adding or subtracting two such quantities (possibly adding a multiple of a quarter of the perimeter).
The above parameterization is used conventionally, because it turns out to be numerically superior to the complementary one which would make θ small near the sharper apex. That's especially
so in the case of very elongated ellipses.
The length of an elementary arc is obtained as the square root of (dx)^2+(dy)^2, which boils down to a √(1 - e^2 sin^2 θ) dθ. [This is simply an infinitesimal expression of the Pythagorean theorem: At
infinitesimal scales, every ordinary curve looks straight, and a small piece of it appears as the hypotenuse of a tiny right triangle of sides dx and dy.] The length of the elliptic arc
corresponding to the angle θ may thus be expressed as a simple integral (an old-fashioned quadrature) known as the incomplete elliptic integral of the second kind:
This function (E) was introduced because the integral has no expression in terms of more elementary functions. (The function E also comes in a single-argument version known as the complete
elliptic integral of the second kind, namely E(e) = E(π/2, e), which is a quarter of the perimeter of an ellipse of eccentricity e and unit major radius.) To compute the integral when e is not too
close to 1 and/or θ is not too close to π/2 [in which case other efficient approaches exist, see elsewhere on this site], we may expand the square root in the integrand as a sum of infinitely
many terms of the form (-1)^n C(½,n) e^(2n) sin^(2n)(α), for n = 0, 1, 2, 3... Each such term may then be integrated individually using the formula:
∫(0 to θ) sin^(2n)(α) dα  =  (1/4^n) [ C(2n,n) θ + Σ(k=1..n) (-1)^k C(2n,n-k) sin(2kθ)/k ]
When θ = π/2, all the sines vanish and only the first term remains. This translates into the simple series given in the next article. Otherwise, what we are left with is 2θ/π times that complete
integral, plus the Fourier series of some odd periodic function of θ (whose period is π)...
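The quadrature definition of E(θ, e) is easy to check numerically against a direct polygonal measurement of the arc, without relying on the series. A rough sketch (Simpson's rule; the helper names are my own):

```python
import math

def E_inc(theta, e, n=10000):
    """Incomplete elliptic integral of the second kind,
    E(theta, e) = integral of sqrt(1 - e^2 sin^2 t) dt from 0 to theta,
    computed by Simpson's rule."""
    if n % 2:
        n += 1
    h = theta / n
    f = lambda t: math.sqrt(1 - (e * math.sin(t)) ** 2)
    s = f(0) + f(theta)
    s += 4 * sum(f(i * h) for i in range(1, n, 2))
    s += 2 * sum(f(i * h) for i in range(2, n, 2))
    return s * h / 3

def arc_polygonal(a, b, theta, n=20000):
    """Length of the arc x = a sin t, y = b cos t for t in [0, theta],
    measured by summing tiny chords (no calculus involved)."""
    pts = [(a * math.sin(i * theta / n), b * math.cos(i * theta / n))
           for i in range(n + 1)]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

a, b = 2.0, 1.0
e = math.sqrt(1 - (b / a) ** 2)
```

The arc length from the flat apex to parameter θ is a·E(θ, e), and the two computations agree to many digits.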
On 2001-11-29, Muz Zviman wrote:
Thank you for the quick answer. The website is great.
Best regards, Muz
On 2002-12-31, David W. Cantrell proposed:
A 0.56% approximation to the above (first posted to the sci.math newsgroup).
The following is a summary. For more details, see our unabridged discussion.
There is no simple exact formula: There are simple formulas but they are not exact and there are exact formulas but they are not simple.
If the ellipse is of equation x^2/a^2 + y^2/b^2 = 1 with a>b, a is called the major radius, and b is the minor radius. The quantity e = √(1-b^2/a^2) is the eccentricity of the ellipse.
An exact expression for the ellipse perimeter P involves the sum of infinitely many terms of the form [(2n)!/(2^n n!)^2]^2 e^(2n)/(1-2n). The first such term (for n=0) is equal to 1, whereas
all the others are negative correction terms:
P/2πa = 1 - (1/4)e^2 - (3/64)e^4 - (5/256)e^6 - (175/16384)e^8 - (441/65536)e^10 - ...
Note that for a circle (e=0) of radius a, the above does give the circumference as 2π times the radius.
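The series can be summed term by term; here is a short sketch (the only subtlety is the recurrence for the squared central-binomial coefficient, and the function name is mine):

```python
import math

def ellipse_perimeter(a, b, terms=60):
    """Perimeter from the exact series
    P = 2*pi*a * sum over n >= 0 of [ (2n)! / (2^n n!)^2 ]^2 * e^(2n) / (1-2n)."""
    e2 = 1 - (b / a) ** 2            # squared eccentricity
    total, coef = 0.0, 1.0           # coef = [(2n)!/(2^n n!)^2]^2 at index n
    for n in range(terms):
        total += coef * e2 ** n / (1 - 2 * n)
        r = (2 * n + 1) / (2 * n + 2)  # ratio stepping (2n)!/(2^n n!)^2 to n+1
        coef *= r * r
    return 2 * math.pi * a * total
```

For a circle the series collapses to its first term, and P scales linearly when the ellipse is scaled, as it must.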
Among the many approximate formulas for the perimeter of an ellipse, we have:
P ≈ π √( 2(a^2+b^2) - (a-b)^2/2 )
A 1914 formula due to Srinivasa Ramanujan (1887-1920) is
P ≈ π [ 3(a+b) - √( (3a+b)(a+3b) ) ]
A second 1914 formula, also due to Ramanujan, is expressed in terms of the quantity h = (a-b)^2/(a+b)^2 :
P ≈ π (a+b) [ 1 + 3h / ( 10 + √(4-3h) ) ]
The relative error of this formula for ellipses of low eccentricities is fabulous:
(-3/2^37) e^20 [ 1 + 5e^2 + (11107/768)e^4 + (4067/128)e^6 + (3860169/65536)e^8 + ... ]
In 1917, Hudson came up with a formula without square roots, which is traditionally expressed in terms of the quantity L = h/4 = (a-b)^2/[2(a+b)]^2 :
P ≈ π (a+b)/4 [ 3(1+L) + 1/(1-L) ]
In 2000, Roger Maertens proposed the following so-called "YNOT formula":
P ≈ 4 (a^y + b^y)^(1/y)   or, equivalently,   P ≈ 4a (1 + (1-e^2)^(y/2))^(1/y) ,   with y = ln(2)/ln(π/2)
The special value of y (the "YNOT constant") makes the formula exact for circles, whereas it is clearly also exact for flat ellipses (b=0 and P = 4a). The relative error of the YNOT formula never
exceeds 0.3619%. It is highest for the perimeter of an ellipse whose eccentricity is about 0.979811 [ pictured at right ] with an aspect ratio a/b slightly above 5.
A popular upper bound formula is due to Euler (1773):  P ≈ π √( 2(a^2+b^2) )
The following simple lower bound formula is due to Johannes Kepler (1609):  P ≈ 2π √(ab)
The precision of all of the above formulas is summarized in the table below. The last column shows the absolute error (in meters) of each formula when it is used to compute the circumference of
an ellipse with the same eccentricity and the same size as the Earth Meridian. Note that even the humble #1 formula is accurate to 15 μm, or about one tenth of the width of a human hair! (For
Ramanujan's first formula, this would be one sixtieth of the diameter of a hydrogen atom. We lack a physical yardstick for the more precise formulas...)
Except for Maertens' YNOT formula, the modest precision shown for the "worst case" corresponds to a completely flat ellipse (of perimeter 4a).
│ Perimeter Formula  │ Worst rel. error (%) │ Low-eccentricity rel. error  │ Δ for Earth Meridian (m) │
│ (7) Kepler 1609    │ -100                 │ -3e^4/64 [1+e^2 + ...]       │ -84.61                   │
│ (6) Euler 1773     │ +11.072              │ e^4/64 [1+e^2 + ...]         │ +28.20                   │
│ (5) Maertens 2000  │ +0.3619              │ (2y-3)e^4/64 [1+e^2 + ...]   │ +1.97                    │
│ (1)                │ -3.809               │ -3e^8/2^14 [1+2e^2 + ...]    │ -1.49×10^-5              │
│ (2) Ramanujan I    │ -0.416               │ -e^12/2^21 [1+3e^2 + ...]    │ -1.75×10^-12             │
│ (4) Hudson 1917    │ -0.189               │ -9e^16/2^30 [1+4e^2 + ...]   │ -1.39×10^-18             │
│ (3) Ramanujan II   │ -0.0402              │ -3e^20/2^37 [1+5e^2 + ...]   │ -1.63×10^-25             │
For more information, see our unabridged discussion.
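A few of the formulas above are easy to compare against a brute-force quadrature of the arc-length integral; a sketch (only Ramanujan II and the YNOT formula are implemented here, under my own function names):

```python
import math

def exact_perimeter(a, b, n=100000):
    """Reference perimeter by numerically integrating the arc-length
    element a*sqrt(1 - e^2 sin^2 t) dt over a quarter turn (trapezoid rule)."""
    e2 = 1 - (b / a) ** 2
    h = (math.pi / 2) / n
    f = lambda t: math.sqrt(1 - e2 * math.sin(t) ** 2)
    return 4 * a * h * (f(0) / 2 + f(math.pi / 2) / 2
                        + sum(f(i * h) for i in range(1, n)))

def ramanujan2(a, b):
    """Ramanujan's second (1914) approximation, in terms of h."""
    h = (a - b) ** 2 / (a + b) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

def ynot(a, b, y=math.log(2) / math.log(math.pi / 2)):
    """Maertens' YNOT formula (2000)."""
    return 4 * (a ** y + b ** y) ** (1 / y)

a, b = 5.0, 1.0          # aspect ratio 5, near the YNOT worst case
P = exact_perimeter(a, b)
```

The observed relative errors stay within the bounds quoted in the table (about 0.04% for Ramanujan II, 0.36% for YNOT), and both formulas are exact for circles.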
If your "oval" is an ellipse of major radius a and minor radius b, its cartesian equation (with the proper choice of coordinates) is:
x^2/a^2 + y^2/b^2 = 1
The area of such an ellipse is simply S = πab.
The volume of an ellipsoid of equation x^2/a^2 + y^2/b^2 + z^2/c^2 = 1 is
V = (4π/3) a b c
This is a good approximation for other egg-shaped ovals which are nearly elliptical: 2a is the diameter (i.e. the largest width), 2b is the largest width for a direction perpendicular to the
diameter and 2c is the width in the direction perpendicular to both previous directions. Each such width is measured between two parallel planes perpendicular to the direction being considered.
The (lateral) surface area of a circular cylinder of radius R and height H is 2πRH.
The surface area S of an oblate ellipsoid (generated by an ellipse rotating around its minor axis) of equatorial radius a and eccentricity e is given by:
S = 2πa^2 [ 1 + (1-e^2) atanh(e)/e ] , or
S = 2πa^2 [ 1 + (b/a)^2 atanh(e)/e ] [ See proof. ]
In this, e is √(1-b^2/a^2), where b<a is the "polar radius" (the distance from either pole to the center) and atanh(e) is ½ ln((1+e)/(1-e)) [also denoted argth(e)].
The surface area S of a prolate ellipsoid ("cigar-like") generated by an ellipse rotating around its major axis (so that the equatorial radius b is smaller than the polar radius a) is given by:
S = 2πb^2 [ 1 + (a/b) arcsin(e)/e ]
This shows that a very elongated ellipsoid has an area of π^2 ab (e is close to 1 and b is much less than a), which is about 21.46% less than the lateral area of the circumscribed cylinder (4πab),
whereas these two areas are equal in the case of a sphere, as noted by Archimedes of Syracuse (c.287-212BC).
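Both closed forms can be checked against their limiting cases (a near-sphere gives 4πa^2, and a very elongated prolate spheroid approaches π^2 ab); a sketch with hypothetical function names:

```python
import math

def oblate_area(a, b):
    """Surface area of an oblate spheroid: equatorial radius a,
    polar radius b < a (rotation around the minor axis)."""
    e = math.sqrt(1 - (b / a) ** 2)
    return 2 * math.pi * a ** 2 * (1 + (1 - e ** 2) * math.atanh(e) / e)

def prolate_area(a, b):
    """Surface area of a prolate spheroid: polar radius a,
    equatorial radius b < a (rotation around the major axis)."""
    e = math.sqrt(1 - (b / a) ** 2)
    return 2 * math.pi * b ** 2 * (1 + (a / b) * math.asin(e) / e)
```

Note that both formulas have a removable singularity at e = 0 (the sphere), so a perfectly round input would divide by zero; the tests below approach that limit instead of hitting it.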
Now, it's not nearly as easy to work out the surface area of a general ellipsoid of cartesian equation (x/a)^2+(y/b)^2+(z/c)^2=1. No elementary formula for this one! The general formula involves
elliptic functions, which "disappear" only for solids of revolution.
Background: The above general quadratic equation describes planar curves known as conic sections (because they can be obtained as the intersection of a plane and a full cone, defined as
the surface generated by a straight line rotating around an axis that intersects it). A conic section may be an ellipse (possibly a circle), a parabola, or a hyperbola. It may also be a
so-called degenerate conic, which consists of a pair of lines (intersecting, parallel or equal) in the case of a quadratic polynomial that is a product of two first-degree polynomials.
It's also possible for such an equation to describe what's called an imaginary ellipse, which is an empty set in the real plane (but would not be if imaginary coordinates were allowed). For
example, the equation of an imaginary circle could be something like: x^2+y^2+4=0. For completeness, a general quadratic equation with real coefficients could also describe a pair of
imaginary lines. Such lines correspond to a single solution point if they intersect [example: x^2+y^2 = 0, which is to say (x+iy)(x-iy) = 0], or an empty set in the real plane when they don't
[example: (x+y)^2+1 = 0, which is to say (x+y+i)(x+y-i) = 0].
As the question is only about real ellipses, so is the following discussion:
First, we notice that we may get rid of any existing cross term ("xy" with a nonzero C coefficient) by tilting the coordinate axes. If we do so by an angle θ (see figure), the new coordinates X
and Y (note capitalization) are best obtained as the scalar products with the unit vectors of the new tilted axes. These vectors are (cos θ, sin θ) and (-sin θ, cos θ):
X = x cos θ + y sin θ        conversely        x = X cos θ - Y sin θ
Y = -x sin θ + y cos θ     (change θ to -θ)    y = X sin θ + Y cos θ
The above expressions of x and y in terms of X and Y give us the curve's equation in the tilted frame. Equating to zero the coefficient of XY gives:
(B-A) sin 2θ + C cos 2θ = 0
We could thus obtain θ within an integral multiple of π/2 as half an arctangent, but let's not rush things! What we really want is the inclination of the major axis, which is determined within an
integral multiple of π... When the above relation is satisfied, the rest of the equation reads:
[A cos^2(θ) + B sin^2(θ) + C cos θ sin θ] X^2 + [A sin^2(θ) + B cos^2(θ) - C cos θ sin θ] Y^2 + [D cos θ + E sin θ] X + [-D sin θ + E cos θ] Y + F = 0
Using the previous relation, we may reduce the above coefficients of X^2 and Y^2. We find the former equal to (1/2)[A(1 + 1/cos 2θ) + B(1 - 1/cos 2θ)], whereas the latter equals
(1/2)[A(1 - 1/cos 2θ) + B(1 + 1/cos 2θ)]. We are only interested in the elliptic case, so these two are of the same sign, which is also the sign of their sum A+B. As stated in our preliminary note,
we shall assume that sum to be positive (without loss of generality, since an equivalent equation is clearly obtained by changing the sign of all coefficients). Now, if we want the X-axis to be
the major one, the coefficient of X^2 is inversely proportional to the square of the major radius and is thus smaller than the coefficient of Y^2 (which is inversely proportional to the square
of the minor radius). Therefore, (A-B)/cos 2θ is negative. With this in mind, we can fully specify the inclination of the major axis (within a multiple of π, of course) by giving the sine and
cosine of the angle 2θ (we're assuming A+B > 0):
cos 2θ = (B-A)/Q   and   sin 2θ = -C/Q ,   where   Q = √( (A-B)^2 + C^2 )
This determination of the inclination isn't valid when Q=0. (Q=0 implies A=B and C=0, which corresponds to the trivial case where the ellipse is, in fact, a circle for which any direction may be
considered "major".)
The above coefficients of X^2 and Y^2 respectively boil down to (A+B-Q)/2 and (A+B+Q)/2. We shall need these simple expressions below.
We may also remark that the curve described is indeed an ellipse --real or imaginary-- when (A+B)^2 > Q^2, so these two coefficients do have the same sign. This relation translates into
4AB > C^2.
The coordinates of the ellipse center are fairly easy to compute directly in the original frame of reference: We are simply looking for (xo,yo) such that the transforms x=xo+u and y=yo+v yield an
equation where the coefficients of u and v are zero (so that the origin will be a center of symmetry)... This translates into the two simultaneous equations:
0 = 2 A xo + C yo + D
0 = C xo + 2 B yo + E
Therefore: xo = (CE-2BD)/(4AB-C^2) and yo = (CD-2AE)/(4AB-C^2).
This argument may be used to show that any conic section has a center, except in the case of the parabola, when 4AB = C^2.
To determine the principal radii of the ellipse, we first need the value of the equation's constant term (call it K) in a frame of reference centered at the above point (xo,yo). Knowing that the
tilt of the axes is irrelevant to this constant K, we may as well compute it at zero tilt, which yields:
K = F + (CDE - AE^2 - BD^2) / (4AB-C^2)
In the properly tilted frame centered at (xo,yo), the equation of the ellipse is thus: (A+B-Q) X^2 + (A+B+Q) Y^2 + 2K = 0 , which we just need to identify with the standardized equation
X^2/a^2 + Y^2/b^2 = 1 in order to obtain the values of the principal radii, and/or their squares:
a^2 = -2K / (A+B-Q)
b^2 = -2K / (A+B+Q)
Thus, a real ellipse is described only when (A+B)K < 0 and 4AB > C^2.
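The whole reduction above (center, constant K, the quantity Q, principal radii and tilt) fits in a few lines; here is a sketch following the formulas of this section (the function name is mine):

```python
import math

def ellipse_params(A, B, C, D, E, F):
    """Center, principal radii and major-axis tilt of the real ellipse
    A x^2 + B y^2 + C xy + D x + E y + F = 0 (assumes A+B > 0)."""
    det = 4 * A * B - C * C
    if det <= 0:
        raise ValueError("not an ellipse")
    # Center (xo, yo): zero out the linear terms.
    x0 = (C * E - 2 * B * D) / det
    y0 = (C * D - 2 * A * E) / det
    # Constant term in the centered frame.
    K = F + (C * D * E - A * E * E - B * D * D) / det
    Q = math.hypot(A - B, C)
    a2 = -2 * K / (A + B - Q)        # square of the major radius
    b2 = -2 * K / (A + B + Q)        # square of the minor radius
    if a2 <= 0 or b2 <= 0:
        raise ValueError("imaginary ellipse")
    theta = 0.5 * math.atan2(-C, B - A)  # tilt of the major axis
    return (x0, y0), math.sqrt(a2), math.sqrt(b2), theta
```

For example, (x-1)^2 + 9(y-2)^2 = 9 (i.e. A=1, B=9, C=0, D=-2, E=-36, F=28) yields center (1, 2), radii 3 and 1, and zero tilt, while the same ellipse rotated by 45° is recovered with tilt π/4.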
By definition, a parabola is the set of points, in the Euclidean plane, that are equally distant from a given point (the parabola's focus F) and a prescribed straight line (the parabola's
directrix). The axis of a parabola is the perpendicular to the directrix through the focus. The point at the intersection of the parabola and its axis is the parabola's apex (O).
More generally, we call apex of a planar curve any point where its curvature is extremal. There are four such points in a proper ellipse. A hyperbola has two apices (plural of apex). A
parabola has only one apex, which can be characterized as above.
If the apex O of a parabola is between two of its points A and B, we want a construction of the focal point F based on A, O and B.
Let's first determine the locus of the foci of all the parabolas through point A whose apex is at O.
In a parabola of equation y=x^2/(2p), the "parameter" p is twice the distance from the focal point to the apex (both points being on the parabola's axis of symmetry).
In the parabola y=x^2/50, the parameter is 25 and the focal distance is 12.5. Since the apex is at x=0 and y=0, the focal point is at x=0 and y=12.5.
This particular property is true of any optical system: The optical length from the object to the image is a constant regardless of the path taken (the optical length is proportional to the time
it takes light to travel in a given medium, so you have to take into account the index of refraction in the case of lenses, where glass is involved).
There's no glass in a reflector so the optical length and actual length are the same thing, hence the result. The only complication is that when the object is at infinity, you should count
distances from a plane perpendicular to the rays (that's what the "chord" in the question is all about) instead of dealing with infinite distances: The reasoning is that all points of such a
plane are "at the same distance" from the object; a small portion of such a plane can be seen as a portion of the sphere which is centered on the object at a great distance.
If you prefer a purely geometrical approach, you may consider that a parabola is what an ellipse becomes when you send one of its foci "to infinity". The fact that the sum of the distances to the
foci is constant on the ellipse translates into the property you are asked to prove for the parabola.
If neither of the above convinces you (or your teacher), you may use a more elementary approach, starting with the equation of the parabola y = x^2/(4f) (where f is the focal distance). The square
of the distance from a point (x,y) on the parabola to the focal point (0,f) is x^2 + (y-f)^2 = 4fy + (y-f)^2 = (y+f)^2. In other words, the distance d2 is (y+f). On the other hand, d1 is equal to
A-y (where A is some constant which depends on how far you drew the "chord" described in the question). Therefore, d1 + d2 = f + A = constant.
This, by the way, is one way to actually prove that a parabolic mirror is an optical system which correctly "focuses" a point at infinity.
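The computation d2 = y+f, d1 = A-y can be verified numerically for several points of the same parabola; a minimal sketch (the function name is hypothetical):

```python
def parabola_distances(x, f, A):
    """For a point (x, y) on y = x^2/(4f): distance to the focus (0, f)
    plus distance up to the horizontal chord y = A.  The sum should be
    the constant f + A, independent of x."""
    y = x * x / (4 * f)
    d2 = (x * x + (y - f) ** 2) ** 0.5   # to the focus; equals y + f
    d1 = A - y                           # up to the chord
    return d1 + d2
```

Every point of the parabola gives the same total, which is exactly the optical-length property of the reflector.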
Use Guldin's theorem (named after Paul Guldin, 1577-1643), which is also called Pappus' theorem in the English-speaking world. The theorem states that the area of a surface of revolution is equal
to the product of the length of the meridian by the length of the circular trajectory of the meridian's centroid. (The volume of a solid of revolution is likewise obtained as the product of the
area of the meridian surface by the length of the circular trajectory of the centroid of that surface.)
Make the segment rotate around the diameter of the circle which is parallel to the segment's chord and apply the theorem: Your meridian is the circular segment of radius R, length L and chord
H = 2R sin(L/2R). The surface is a spherical segment of area 2πRH. If D is the distance of the centroid to the center of the circle, its trajectory has a length 2πD and Guldin's theorem tells us
that 2πRH = 2πDL. Therefore: D = RH/L, and that gives you the position of the centroid.
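The resulting centroid formula D = RH/L is easy to test against the classical value 2R/π for a semicircular arc; a one-function sketch (name is mine):

```python
import math

def arc_centroid_distance(R, L):
    """Distance from the circle's center to the centroid of a circular
    arc of radius R and length L, via Guldin: D = R*H/L with the
    chord H = 2R sin(L/2R)."""
    H = 2 * R * math.sin(L / (2 * R))
    return R * H / L
```

A semicircle (L = πR) gives the textbook centroid distance 2R/π, and a very short arc gives a centroid essentially on the circle itself (D → R).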
Viewed in the direction of that axis, the cube appears as a regular hexagon. If the side of the cube is 1, the side of this hexagon is √6/3 (approximately 0.8165).
Now, in a regular hexagon of side A, we may inscribe a square of side (3-√3)A, or about 1.268A (one of the sides of the square is parallel to one of the sides of the hexagon). When A is √6/3, this
means that a square of side √6-√2 will fit.
Well, √6-√2 is about 1.03527618... so we may cut in a cube a square hole with a side 3.5% larger than the side of the cube. A cube "just slightly larger" will easily go through such a hole.
Thanks for the kind words, Adrian...
In a regular octagon of side a, the diameter d is the hypotenuse of a right triangle whose sides are a and a+2b (see figure), where b is the side of a square of diagonal a, so that we have
2b = a√2 and d^2 = a^2 [1 + (1+√2)^2], or d^2 = a^2 [4 + 2√2]. Take the square root of that, and you have the desired relation between the side a and the diameter d, which boils down numerically
to d/a = 2.61312592975... or, if you prefer, a/d = 0.382683432365..., which is half the square root of (2-√2).
The same result can be obtained with standard trigonometric functions: The ratio a/d is the sine of a π/8 angle (22.5°; a full turn divided by 16), which does equal 0.382683432365... according to
my trusty scientific calculator.
All told, your 6' diameter display should have a side almost exactly equal to 2.2961' (within 0.18 μm, or about 1/700 of the width of a human hair) which is roughly 2' 3 9/16". Hope the display
will look good!
In the previous article, we could have noticed that 8 times the side of the octagon is, of course, its perimeter. For an n-sided polygon, the ratio P/d of the perimeter P to the diameter d is
n sin(π/n), which tends to π as n tends to infinity. Listed below are the first values of this ratio which may be expressed by radicals. Gauss showed that this is the case if [and only if] n is
the product of a power of 2 by (zero or more) distinct Fermat primes (A003401).
Fermat primes are prime numbers of the form 2^(2^n) + 1. There are probably only five of these, namely: 3, 5, 17, 257 and 65537.
An explicit construction of a 65537^th root of unity with straightedge and compass was given in 1894 by Johann Gustav Hermes (1846-1912) after spending 12 years on the project... His 200-page
manuscript is now preserved in Göttingen.
│ n  │ n-gon        │ n sin(π/n)      │ Exact value                        │
│ 2  │ digon        │ 2               │ 2                                  │
│ 3  │ triangle     │ 2.598 076 211+  │ (3/2)√3                            │
│ 4  │ square       │ 2.828 427 125-  │ 2√2                                │
│ 5  │ pentagon     │ 2.938 926 261+  │ (5/2)√((5-√5)/2)                   │
│ 6  │ hexagon      │ 3               │ 3                                  │
│ 8  │ octagon      │ 3.061 467 459-  │ 4√(2-√2)                           │
│ 10 │ decagon      │ 3.090 169 944-  │ 5(√5-1)/2                          │
│ 12 │ dodecagon    │ 3.105 828 541+  │ 3(√3-1)√2                          │
│ 15 │ pentadecagon │ 3.118 675 363-  │ (15/8)[√(10+2√5) - √3(√5-1)]       │
│ 16 │ hexadecagon  │ 3.121 445 152+  │ 8√(2 - √(2+√2))                    │
│ 17 │ heptadecagon │ 3.123 741 803-  │ 17√((1-c)/2)  [c = cos(2π/17)]     │
│    c = { 2√[ 17+3√17-√(2(17-√17))-2√(2(17+√17)) ] + √(2(17-√17)) - 1 + √17 } / 16 │
│ 20 │ icosagon     │ 3.128 689 301-  │ 5√(8 - 2√(10+2√5))                 │
│ 24 │ tetracosagon │ 3.132 628 613+  │ 6√(8 - 2√2 - 2√6)                  │
│ 30 │ triacontagon │ 3.135 853 898+  │ (15/4)[√(30-6√5) - √5 - 1]         │
│ ∞  │ circle       │ 3.141 592 654-  │ π                                  │
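The numeric column can be regenerated from n sin(π/n) and checked against the radical expressions; a tiny sketch:

```python
import math

def perimeter_diameter_ratio(n):
    """P/d = n sin(pi/n) for a regular n-gon inscribed in a circle
    of diameter d; tends to pi as n grows."""
    return n * math.sin(math.pi / n)
```

The radical forms in the table match the trigonometric values to machine precision, and a polygon with a million sides already agrees with π to about eleven decimals.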
If a is the side of an n-gon of diameter d, the side b of the 2n-gon of the same diameter may be obtained simply with the Pythagorean theorem, as the hypotenuse of a right triangle whose sides are
a/2 and d/2-c, where c is the third side of a right triangle with hypotenuse d/2 and side a/2. All told, for a unit diameter, we have b^2 = (1/2)[1 - √(1-a^2)]. In other words, if x is the
square of the side of the n-gon of unit diameter, the square y of the side of the 2n-gon of unit diameter is given by y = (1/2)[1 - √(1-x)] (there's just one caveat --which is not a problem
with hand computation-- and that's about the difference of nearly equal quantities in the square bracket, which may cause a crippling loss of precision when fixed-precision computations are used
blindly with the formula "as is"). Starting with the trivial case of the hexagon, Archimedes of Syracuse (c.287-212BC) iterated this 4 times to compute the ratio of the circumference to the
diameter in a 96-sided polygon (namely 3.141 031 95..., which is about 178.5 ppm below the value of π). Using a complementary estimate based on the circumscribed polygon, Archimedes could then
produce the first rigorous bracketing of what we now call "π". Until better methods were found at the dawn of calculus, this was essentially the basic method used to compute more and more decimals
of π... The last person in history who used Archimedes' method to compute π with record precision was the Dutchman Ludolph van Ceulen (1539-1610): A professor of mathematics at the University of
Leyden, he published 20 decimals in 1596 and 32 decimals in a posthumous 1615 paper. It is said that, at the end of his life, he worked out 3 more decimals, which were engraved on his tombstone in
St Peter's Church at Leyden. To this day, π is still sometimes called Ludolph's Number or the Ludolphine Number, especially by the Germans ("die Ludolphsche Zahl").
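Archimedes' doubling step can be replayed with the numerically stable rearrangement y = x/(2(1+√(1-x))), which is algebraically identical to (1/2)[1-√(1-x)] but avoids the cancellation mentioned above; a sketch (the function name is mine):

```python
import math

def archimedes_pi(doublings):
    """Perimeter/diameter ratio of the inscribed polygon obtained from
    the hexagon by repeated side-doubling.  x is the squared side of
    the current n-gon of unit diameter; multiplying numerator and
    denominator by 1 + sqrt(1-x) turns (1 - sqrt(1-x))/2 into the
    cancellation-free update x / (2*(1 + sqrt(1-x)))."""
    n, x = 6, 0.25          # hexagon of unit diameter has side 1/2
    for _ in range(doublings):
        x = x / (2 * (1 + math.sqrt(1 - x)))
        n *= 2
    return n * math.sqrt(x)
```

Four doublings reproduce Archimedes' 96-gon value 3.141 031 95..., and (thanks to the stable update) twenty doublings agree with π to better than ten decimals instead of drowning in round-off.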
A regular polygon with n sides of length 1 consists of n congruent triangles of base 1 and height (1/2)/tan(π/n). Its area is therefore equal to:
(n/4) / tan(π/n)
This happens to be equal to √3/4 for a triangle, 1 for a square, √(25+10√5)/4 for a regular pentagon, 3√3/2 for a hexagon, 2+2√2 for an octagon, etc.
The surface area of the regular heptagon of unit side cannot be expressed using just square roots, sorry! In general, you can express the area of an n-gon with just square roots only when the
n-gon is constructible with straightedge and compass. A beautiful result of Gauss (1796) says that an n-gon is so constructible if and only if n is equal to a power of two (1, 2, 4, 8, 16, ...)
possibly multiplied by a product of distinct so-called Fermat primes. Only 5 such primes are known (3, 5, 17, 257 and 65537) and there are most probably no unknown ones... Ruling out n=1 and n=2,
the only acceptable values of n are therefore 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, ... (A003401). For other values of n (namely 7, 9, 11, 13, 14, 18, 19, 21, 22, 23,
25, 26, 27, 28, 29, 31, 33, 35, 36, 37, 38, 39, ... A004169), you will have to be satisfied with the simple trigonometric formula given above.
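The area formula (n/4)/tan(π/n) is straightforward to check against the listed radical values; a minimal sketch (function name is hypothetical):

```python
import math

def regular_polygon_area(n, side=1.0):
    """Area of a regular n-gon: n congruent triangles of base `side`
    and height (side/2)/tan(pi/n)."""
    return n * side * side / (4 * math.tan(math.pi / n))
```

For the heptagon (n = 7) the same call works numerically, of course; it is only the closed radical form that does not exist.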
The above result is the first entry (dated March 30, 1796) in the mathematical diary of Carl Friedrich Gauss (1777-1855). It was the solution to a problem that had been open for nearly 2000
years, and Gauss had solved it as a teenager! This discovery was decisive in helping Gauss choose a career in mathematics (he was also considering philology at the time). We should all be glad he did!
Among triangles of a given perimeter, the equilateral triangle is the one with the largest area. Similarly, among pentagons of a given perimeter, the regular pentagon is the one with the largest area.
If a regular pentagon and an equilateral triangle have the same perimeter, the pentagon has a larger area than the triangle (see below for the exact expressions of those areas).
On the other hand, for a given perimeter, you can build scalene triangles or irregular pentagons with as small an area as you wish (including a zero area for flat or "degenerate" polygons).
Therefore, for irregular polygons, there is no definite answer to your question.
For the record, a regular n-sided polygon of diameter D has a perimeter P = nD sin(π/n) and an area S = n D^2/4 sin(π/n) cos(π/n), which boils down to S = P^2/[4n tan(π/n)]. If you know that
tan(x)/x is an increasing function of x when x is between 0 and π/2, you can easily deduce that S is an increasing function of n when P is held constant...
The more sides in a regular polygon of given perimeter, the larger the area. For example, an equilateral triangle with a perimeter of 10 has a surface of 25/[3 tan(π/3)], or about 4.811252,
whereas the regular pentagon of the same perimeter has a surface of 5/tan(π/5), which is about 6.8819096.
The limiting case is, of course, the circle: As n tends to infinity, S tends to P^2/4π (or πR^2 with P = 2πR, if you prefer). A circle with a perimeter of 10 has an area of 25/π, which is about
7.9577.
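The isoperimetric comparison S = P^2/[4n tan(π/n)] can be tabulated directly; a sketch (the function name is hypothetical):

```python
import math

def area_for_perimeter(n, P):
    """Area of the regular n-gon of perimeter P: S = P^2 / (4 n tan(pi/n)),
    an increasing function of n for fixed P."""
    return P * P / (4 * n * math.tan(math.pi / n))
```

With P = 10, the triangle, the pentagon and the circle (the n → ∞ limit, P^2/4π) come out in strictly increasing order, as claimed.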
There are plenty of examples. The simplest is the so-called Reuleaux triangle, pictured at right and named after the German engineer Franz Reuleaux (1829-1905):
Just take the three vertices of an equilateral triangle and connect each pair of vertices with an arc of a circle centered on the third vertex. An interesting theorem due to Joseph Emile Barbier
(1839-1889) states that the perimeter of any curve of constant width is π times its width.
On 2000-10-09, Mark Barnes (UK) wrote:
You can do the same thing with any regular polygon [having an odd number of sides]. An example of a shape of "constant diameter" [constant width] in England is the fifty pence coin (also the
20p which is the same shape but smaller). You can form this shape by drawing a regular heptagon, then using a compass to construct arcs between adjacent corners, the centre for the arc being
the corner three corners along.
The shape was chosen when the fifty pence coin was introduced at the time of decimalisation in 1971. It was chosen because it was easily identified by feel - all other coins were circles -
but the "constant diameter" [constant width] allows it to roll like circular coins, meaning it can be used in slot machines.
Note that with any shape of constant width you can construct infinitely many new ones: The [convex hull of the] envelope of the circles of radius R centered on a curve of constant width is also a
curve of constant width. (If R is small enough, the envelope includes another shape inside the original one which may only be a scaled-down version of it.) The rounded shapes are also constructed
with arcs of circles centered on the vertices of the original polygon. The radius of each such arc is either R or R+D, where D is the (constant) diameter of the original shape with sharp corners.
Why not coin a word for these shapes, à la Martin Gardner? Any curve of constant width might be called a roller and those based on polygons could be dubbed polygroller : triangroller,
pentagroller, heptagroller, etc.
You may have noticed that using a light-duty handheld drill on a thin piece of metal often results in a hole which is not round, but instead in the shape of a rounded "triangroller" (I am
sure that other shapes of constant width do occur, but they may be less frequent and/or less noticeable). This is because, in 2 dimensions, a drill bit cuts at two points a fixed distance
apart, but the axis of the drill bit may vibrate...
Such weird holes do not occur either with a thick piece of metal or when using a drill press (unless the bit happens to be very flexible). It is fairly easy to figure out why... Think about it!
Sure. There are plenty of such irregular curves of constant width...
Call them irregrollers!
Note: It's probably better to use the accepted term "width" in this context. Although little confusion is possible with "diameter" here, the standard meaning is different and you may have a
need to mention the diameter --universally understood as the largest width-- in related discussions.
You may build such a shape around a scalene triangle ABC as follows. In this description, we assume AC is the longest side and BC the shortest (AC > AB > BC).
1. Draw the arc of the circle (of radius AC) centered on C going from A to the intersection B' with the line BC.
2. Draw the arc of the circle (of radius AC-BC) centered on B going from B' to the intersection B" with the line AB.
3. Draw the arc of the circle (of radius AB+AC-BC) centered on A from B" to the intersection C" with the line AC.
4. Draw the arc of the circle (of radius AB-BC) centered on C from C" to the intersection C' with the line BC.
5. Finally draw the arc of the circle (of radius AB) centered on B from C' back to A.
The five arcs you've drawn make up the perimeter of a shape of constant width (W = AB+AC-BC). It has at least one sharp corner (2 in the case of an isosceles triangle with a base AC larger than
the other sides, and 3 sharp corners in the case of an equilateral triangle).
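The five radii of the construction can be generated from the three sides, and the claimed constant width W = AB+AC-BC shows up as pairs of arcs centered on the same vertex whose radii sum to W; an illustrative sketch (function name is mine):

```python
def constant_width_radii(AB, BC, AC):
    """Radii of the five arcs in the constant-width construction around
    a scalene triangle with AC > AB > BC.  Opposite boundary arcs are
    centered on the same vertex, and their radii sum to the width
    W = AB + AC - BC (the sharp corner at A plays the role of an arc
    of radius 0 opposite the big arc of radius W)."""
    W = AB + AC - BC
    arcs = [
        ("C", AC),            # step 1: from A to B'
        ("B", AC - BC),       # step 2: from B' to B"
        ("A", AB + AC - BC),  # step 3: from B" to C"
        ("C", AB - BC),       # step 4: from C" to C'
        ("B", AB),            # step 5: from C' back to A
    ]
    return W, arcs
```

For the 3-4-5 triangle (BC=3, AB=4, AC=5) the width is 6, and each vertex's pair of arc radii (5+1 for C, 2+4 for B, 6+0 for A) indeed sums to 6.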
To get a smooth curve, you may increase all of the above radii by the same quantity R. The construction is trivially modified by introducing only two points A' and A" at a distance R from A on AB
and AC respectively. The construction starts with A" and ends with an arc of radius R from A' to A", to close the curve. Alternatively, you may describe the new "rounded" shape as the set of all
points at a distance R from the (inside of) the previous shape...
You may want to notice that these curves need not involve any circular arcs at all... What you want is to have conjugate arcs (not necessarily circular) on opposite sides of your shape of
constant width W so that if the radius of curvature at one point of an arc is R, the radius of curvature at the corresponding point on the other arc is W-R. This means the two arcs have the
same evolute.
More precisely, if you roll a segment of length W on any curve you care to choose (without inflexion points) you obtain a pair of conjugate arcs as the trajectories of the segment's
endpoints! (Conjugate circular arcs correspond to the degenerate case, where the above "base" curve is reduced to a single point, so the segment just rotates instead of rolling.)
When using such building blocks to make an actual shape of constant width, you only have to make sure the perimeter closes up into a convex shape (you could easily end up with some kind of
double spiral).
Remarkably, a few symmetry remarks allow you to find immediately entire families of curves with constant width. Take the deltoid, for example (it does not have to be an exact deltoid; any
curve with the same general features and symmetries will do): All its (closed) convex involutes are curves of constant width!
They look very much like the rounded equilateral triangles you mentioned in your question, but without any circular arcs on the perimeters...
Surprisingly enough, an obvious three-dimensional generalization of the Reuleaux triangle doesn't work: Consider the Reuleaux tetrahedron pictured at right (image courtesy of FastGeometry). This
3D solid is obtained as the intersection of the four balls of radius R centered on the vertices of a regular tetrahedron of side R. If the solid is on an horizontal table, its highest point will
indeed be at a height R over the surface of the table, provided the point of contact [with the table] is either one of the 4 vertices or is somewhere in the midst of one of the spherical faces.
So far so good. However, the point of contact could also be on one of the edges, in which case the highest point does move on the opposite edge if we rotate the solid around the tangent to the
edge at the point of contact (as we may). This opposite edge is an arc of a circle whose axis of symmetry goes through the two extremities of the edge of contact. As [part of] this arc rotates
around a different axis, the height of its highest point varies, which shows that this solid does not have constant width; in fact, its width varies between R and (√3 - ½√2) R [≈ 1.024944 R].
In 1911, Ernst Meissner and Friedrich Schilling turned the above idea into an actual solution by "rounding" three of the six edges of the above solid. The resulting solid of constant width is now
called a Meissner tetrahedron (there are two distinct types, as the unrounded edges may either form a triangle or meet at a vertex). The original Meissner tetrahedra do not possess tetrahedral
symmetry, but there's a unique way to round all edges the same way to preserve that symmetry.
A simpler way to generate a 3D solid of constant width is to rotate any 2D shape of constant width around an axis of symmetry, if it has one (the Reuleaux triangle has 3). This works because any
rotation of such a solid is a combination of three independent rotations which all preserve the width between two given parallel planes, namely: a rotation around the solid's axis of symmetry
(obviously), a rotation around an axis perpendicular to the two planes (think about it) and, finally, a rotation around an axis parallel to the planes and perpendicular to the axis of symmetry
(which is seen "sideways" as a 2D rotation of a cross-section of constant width). [Notice that the first two rotations may coincide, but only when there are 2 independent rotations of the last
type, so we always have 3 independent width-preserving elementary rotations.]
Once you have a solid of constant width, you may build infinitely many others, since, for any D>0, the set of all points within a distance D of some given solid of constant width is also a solid
of constant width...
Actual Meissner tetrahedra in action captured on video by Brady Haran (2013-11-11)
Matt Parker & Steve Mould (the man in the previous video) sell classical Meissner tetrahedra.
The more modern version (featuring tetrahedral symmetry) doesn't seem to be commercially available.
The construction(s) outlined at the end of the previous article seem to remain valid to obtain a symmetrical shape of constant width in N+1 dimensions from one in N dimensions.
Yes, absolutely!
That's what happens in a Euclidean space with 4 dimensions or more.
It may be difficult (or impossible) to visualize a space with more than 3 dimensions, but there's no great difficulty in considering the set of all quadruplets of real numbers (x,y,z,t), which is
what 4D space really is.
If R is the radius of a 4-D hypersphere, its hyper-volume is simply π^2 R^4 / 2.
More generally, in n dimensions, a sphere of radius R has a volume equal to:
V = R^n π^(n/2) / Γ(1+n/2)
Using the definition of the Gamma function (Γ) in terms of factorials (the notation being k! = 1×2×3× ... ×k), the coefficient of R^n in the above is:
• π^k/k! with k = n/2 when n is even, or
• 2^n π^k k!/n! with k = (n-1)/2 when n is odd.
A formula valid in both cases (using the double-factorial notation) is given below. In other words, the "hypervolume" of an n-dimensional sphere of unit radius is:
• 1 for n=0 (the "0-volume" of a point must be so defined for consistency!),
• 2 for n=1 (length of a segment of "radius" 1),
• π for n=2 (area of a unit disc), and 4π/3 for n=3 (volume of a sphere),
• π^2/2 for n=4 (the original question), and 8π^2/15 for n=5,
• π^3/6 for n=6, and 16π^3/105 for n=7,
• π^4/24 for n=8, and 32π^4/945 for n=9,
• π^5/120 for n=10, and 64π^5/10395 for n=11,
• π^6/720 for n=12, and 128π^6/135135 for n=13,
• ...
• π^k/k! for n=2k, and 2^(k+1) π^k/n!! for n=2k+1.
In the above, we used the (standard) double-factorial notation n!! as a shorthand for n(n-2)(n-4)... which is the product of all positive integers up to n which have the same parity as n:
0!!=1, 1!!=1, 2!!=2, 3!!=3, 4!!=8, 5!!=15, 6!!=48, 7!!=105 ...
To retain the relation n!! = (n-2)!! n for all positive integers, including n = 1, the convention is made that (-1)!! = 1. For completeness, a less useful extension is available for all odd
negative integers:
(-1)!! = 1, (-3)!! = -1, (-5)!! = 1/3, (-7)!! = -1/15, ...
Using the double-factorial notation, it's possible to give a cute formula valid in n dimensions, whether n is even (n=2k) or odd (n=2k+1), namely:
V = (π/2)^k (2R)^n / n!!
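Both expressions (the Γ form and the double-factorial form) are easy to check against each other numerically; the listed unit-radius values drop out as special cases. A quick sketch:

```python
import math

def ball_volume_gamma(n, R=1.0):
    # V = R^n * pi^(n/2) / Gamma(1 + n/2)
    return R**n * math.pi**(n / 2) / math.gamma(1 + n / 2)

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

def ball_volume_dfact(n, R=1.0):
    # V = (pi/2)^k * (2R)^n / n!!  with k = n // 2 (works for even and odd n)
    k = n // 2
    return (math.pi / 2)**k * (2 * R)**n / double_factorial(n)

for n in range(13):
    assert abs(ball_volume_gamma(n) - ball_volume_dfact(n)) < 1e-12

print(ball_volume_gamma(4))   # pi^2/2, about 4.9348 (the original question)
```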
The above hypervolume could be obtained by integrating the hyperarea of a shell from 0 to R. Conversely, if aR^n is the hypervolume of an n-dimensional ball of radius R, then naR^(n-1) must be its
"hypersurface area" (S). (Except for n=0, which we rule out as meaningless.) For a hypersphere of unit radius in n dimensions, this means that the hypersurface "area" [i.e., the measure in (n-1)
dimensions] has the following values: 2 for n=1, 2π for n=2, 4π for n=3, 2π^2 for n=4, 8π^2/3 for n=5, π^3 for n=6... We may replace n/n!! by 1/(n-2)!! in the following formula [retaining the
case n=1 with the convention (-1)!! = 1]:
S = 2 R^(n-1) π^(n/2) / Γ(n/2) = 2^(n-k) π^k R^(n-1) n/n!! [where n is 2k or 2k+1]
Of particular interest is the so-called Einstein-Eddington universe, which is defined as the 3-dimensional boundary of the 4-dimensional hypersphere of radius R. The above shows that the volume
of the Einstein-Eddington universe is 2π^2 R^3. If this is meant to be a model of the Universe we live in [capital "U"], the distance R to the "center" of the 4-D sphere is quite literally out of
this world and it may be better to consider the maximal possible distance D between two points in the Universe. As D is simply πR, the volume of the Universe is 2D^3/π.
A polyhedron with 6 faces is called a hexahedron. The cube is an hexahedron, but that's certainly not the only one:
The so-called triangular dipyramid is another possibility (with 5 vertices and 9 edges, this solid may be obtained by "adding" one vertex to a tetrahedron to make it look like two tetrahedra
"glued" on a common face).
A third hexahedron is the pentagonal pyramid (6 vertices, 10 edges; a pyramid whose base is a pentagon). The above three are the only hexahedra which exist in a version where all 6 faces are
regular polygons.
The least symmetrical of all hexahedra is the tetragonal antiwedge (it has only one possible symmetry, a 180° rotation). This skewed hexahedron has the same number of edges and vertices as the
pentagonal pyramid. Its faces consist of 4 triangles and 2 quadrilaterals. Such a solid may be obtained by considering two quadrilaterals that share an edge but do not form a triangular prism.
First, join with an edge the two pairs of vertices closest to the edge shared by the quadrilaterals. To complete the polyhedron, you must join two opposite vertices of the remaining nonplanar
quadrilateral. Depending on which diagonal is chosen, there are two types of tetragonal antiwedges which are mirror images of each other; each is called an enantiomer, or enantiomorph, of the
other. The tetragonal antiwedge is thus the simplest example of a
chiral polyhedron (in particular, any other hexahedron can be distorted into a shape which is its own mirror image). Because of this unique property among hexahedra, the tetragonal antiwedge may
also be referred to as the chiral hexahedron.
Another hexahedron is obtained by cutting in half an elongated square pyramid (the technical name for an obelisk) along a bisecting plane through the apex of the pyramid and the diagonal of the base prism, as pictured at left. For lack of a better
term, we may therefore call this hexahedron an hemiobelisk.
Yet another hexahedron is the hemicube (or square hemiprism), obtained by cutting a cube in half using a plane going through two opposite corners and the midpoints of two edges. Its 6 faces include 2 triangles and 4 quadrilaterals.
With 8 vertices and 12 edges, the cube (possibly distorted into some kind of irregular prism or truncated tetragonal pyramid) is not the only solution: Consider a tetrahedron, truncate two of its
corners and you have a pentagonal wedge. It has as many vertices, edges and faces as a cube, but its faces consist of 2 triangles, 2 quadrilaterals and 2 pentagons.
The above 7 types (8 if you count both chiralities of the tetragonal antiwedge) include all possible hexahedra. By contrast, there's only one tetrahedron. There are two types of pentahedra
(exemplified by the square pyramid and the triangular prism). There are 7 types of hexahedra, as we've just seen. 34 heptahedra, 257 octahedra, 2606 enneahedra, 32300 decahedra, 440564
hendecahedra, etc. (see our detailed table of the enumeration, elsewhere on this site). For an unabridged discussion of hexahedra and more general information about polyhedra, see our dedicated
Polyhedra Page...
How many edges does a cylinder have? We're talking about a finite cylinder; the "ordinary kind" with two parallel bases, which are usually circular (as opposed, say, to an infinite cylinder with an infinite lateral surface and no bases at all).
The answer is, of course, that there are two edges; the two circles.
At first glance, this may look like a "counterexample" to the Descartes-Euler formula, which states that "in a polyhedron" the numbers of faces (F), edges (E) and vertices (V) obey the following relation:
F - E + V = 2
Our cylinder has 3 faces (top, bottom, lateral), 2 edges (top and bottom circles) and no vertices, so that F-E+V is 1, not 2! What could be wrong?
Nothing is wrong if things are precisely stated. Edges and faces are allowed to be curved, but the Descartes-Euler formula has 3 restrictions, namely:
1. It only applies to a (polyhedral) surface which is topologically "like" a sphere (imagine making the polyhedron out of flexible plastic and blowing air into it, and you'll see what I mean).
Your cylinder does qualify (a torus would not).
2. It only applies if all faces are "like" an open disk. The top and bottom faces of your cylinder do qualify, but the lateral face does not.
3. It only applies if all edges are "like" an open line segment. Neither of your circular edges qualifies.
There are two ways to fix the situation. The first one is to introduce new edges and vertices artificially to meet the above 3 conditions. For example, put a new vertex on the top edge and on the
bottom edge. This satisfies condition (3), since a circle minus a point is "like" an open line segment. The remaining problem is condition (2); the lateral face is not "like" an open disk (or
square, same thing). To make it so, "cut" it by introducing a regular edge between the two new vertices. Now that all 3 conditions are met, what do we have? 3 faces, 3 edges and 2 vertices. Since
3-3+2 is indeed 2, the Descartes-Euler formula does hold.
The better way to fix the formula does not involve introducing unnecessary edges or vertices. It involves the so-called Euler characteristic, often denoted χ (chi):
The Euler Characteristic χ (chi)
The fundamental properties of χ (chi) may be summarized as follows:
A. Any set with a single element has a χ of 1: ∀x, χ({x}) = 1
B. χ is additive: for two disjoint sets E and F, χ(E∪F) = χ(E) + χ(F)
C. If E is homeomorphic to F, then χ(E) = χ(F)
("Homeomorphic" is the precise term for topologically "like".)
Using those three properties as axioms, we could show by induction that, if it's defined at all, the χ of n-dimensional space can only be equal to (-1)^n. (HINT: A plane divides space into 3
disjoint parts; itself and 2 others...)
The χ of shapes dissected into parts of known χ can then be derived... For example, a circle has zero χ because it's formed by gluing to a single point (χ = 1) both extremities of an open line
segment (whose χ is -1 because it's homeomorphic to an infinite straight line).
In particular, the ordinary Descartes-Euler formula is valid because the χ of a sphere's surface is 2 and it's "made from" disjoint faces, edges and vertices, each respectively with a χ of 1, -1
and 1.
In the "natural" breakdown of our cylinder (whose χ is also 2), you have no vertices, two ordinary faces (whose χ is 1) and one face whose χ is 0 (the lateral face), whereas the χ of both edges
is 0. The total count does match.
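The bookkeeping can be spelled out explicitly, using the χ values listed below (+1 for a disk-like face or a vertex, 0 for the tube-like lateral face or a circular edge, -1 for a segment-like edge). A small sketch:

```python
def chi_total(disk_faces, tube_faces, segment_edges, circle_edges, vertices):
    # Additivity: chi of the whole is the sum of the chi of the disjoint
    # pieces; tube-like faces and circular edges contribute 0 each.
    return disk_faces - segment_edges + vertices

# "Natural" breakdown of the finite cylinder: 2 disk faces, 1 lateral face,
# 2 circular edges, no ordinary edges, no vertices.
print(chi_total(2, 1, 0, 2, 0))   # 2

# After adding 2 vertices and a lateral edge: 3 disk-like faces,
# 3 segment-like edges, 2 vertices.
print(chi_total(3, 0, 3, 0, 2))   # 2

# A cube (all faces disks, all edges segments): 6 - 12 + 8 = 2.
print(chi_total(6, 0, 12, 0, 8))  # 2
```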
• χ (point) = 1
• χ (entire straight line, or open segment) = -1
• χ (plane or open disc) = 1
• χ (space or open ball) = -1
• χ (space with n dimensions) = (-1)^n
• χ (circle, or semi-open segment) = 0
• χ (surface of a sphere) = 2
• χ (surface of an infinite cylinder) = 0
• χ (surface of a torus) = 0
• etc.
Note (2000-11-19): The orthodox definition of the Euler-Poincaré characteristic does not use the above 3 fundamental properties as "axioms" but is instead closer to the historical origins of the
concept (generalized polyhedral surfaces). It would seem natural to extend the definition of χ to as many objects as the axioms would allow. This question does not seem to have been tackled by
anyone yet...
Consider, for example, the union A of all the intervals [2n, 2n+1[ from an even integer (included) to the next integer (excluded). The union of two disjoint sets homeomorphic to A can be arranged
to be either the whole number line or another set homeomorphic to A. So, if χ(A) was defined to be x, we would simultaneously have x = x+x and -1 = x+x. Thus, x cannot possibly be any ordinary
number, and the latter equation says x is nothing like a signed infinity either [as (+∞)+(+∞) ≠ -1]. At best, x could be defined as an unsigned infinity (∞) like the "infinite circle" at the
horizon of the complex plane (∞+∞ is undetermined). This could be a hint that a proper extension of χ would have complex values...
Frequently Asked Questions
How do you construct regular polygons?
To construct a regular 3-gon (equilateral triangle), begin with segment AB, and construct two circles AB and BA.
The intersection of the two circles at C, will produce an equilateral triangle ABC.
To construct a square, construct a perpendicular bisector of AB and let the point of intersection of the bisector with AB be a new point O; then construct the circle OA.
The intersection of the circle with the perpendicular bisector produces points C and D and it should be possible to see that quadrilateral ACBD is square.
The construction of a pentagon is a little more difficult. Begin, as before, bisecting segment AB to locate point O and drawing circle OA, producing points C and D.
Then bisect segment OB to locate point E. By drawing circle ED locate point F on segment AO. The length DF is the required length of the side of an inscribed pentagon.
Please note that a proof is not provided here, as it requires showing that cos 72° = (√5 - 1)/4, and this would detract too much from polygonal constructions.
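Even without the trigonometric proof, the construction is easy to check numerically. Taking AB = 2 so the circle OA has unit radius, the constructed length DF should equal 2·sin 36°, the side of a regular pentagon inscribed in that circle (the coordinates below follow the construction; this is an illustration, not a proof):

```python
import math

A, B = (-1.0, 0.0), (1.0, 0.0)              # AB = 2
O = (0.0, 0.0)                              # midpoint of AB
C, D = (0.0, 1.0), (0.0, -1.0)              # bisector meets circle OA (radius 1)
E = (0.5, 0.0)                              # midpoint of OB
ED = math.hypot(D[0] - E[0], D[1] - E[1])   # radius of circle ED
F = (E[0] - ED, 0.0)                        # circle ED meets segment AO at F
DF = math.hypot(F[0] - D[0], F[1] - D[1])

pentagon_side = 2 * math.sin(math.pi / 5)   # side inscribed in the unit circle
print(DF, pentagon_side)                    # both about 1.17557
```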
The construction of a regular hexagon is relatively simple. We construct a circle OA and length OA is the length of the side of the inscribed hexagon.
This is easily demonstrated to be true.
As OA is the radius, r, each base length will be r and so we form a series of equilateral triangles. As the centre angle is 60 degrees we have successfully constructed a regular hexagon.
It may be noted that this construction method allows an inscribed equilateral triangle to be drawn, by connecting alternating points.
By combining these constructions with a perpendicular bisector and projecting a ray onto the circle we can double the number of sides.
For example, we can construct a regular octagon from a square.
And we can continue this method as often as we wish to produce a 16-gon, 32-gon, etc. Similarly a pentagon allows the construction of the decagon (10-gon), 20-gon, etc. and the hexagon allows us to
construct a dodecagon (12-gon), 24-gon, and so on.
The Greeks, naturally, asked the question of other constructions. They knew a useful construction would be the heptagon (7-gon). In addition they realised that trisecting an angle would be very
useful. Instead of doubling sides, this would provide a construction method for tripling the number of sides; for example, an equilateral triangle would give way to a nonagon (9-gon).
linear shooting method
May 23rd 2010, 09:27 AM #1
Does anyone know how to get y(a) and y(b) from the boundary conditions? I substituted the boundary conditions in 5.7 and 5.8 into the combined equation under those, but I don't know how you get those values.
How do p(x) and q(x) vanish?
For reference, this is equation 5.6.
Also, does anyone know how to construct 5.7 and 5.8 from 5.6? I don't see any reason for splitting into 2 IVPs and solving.
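For what it's worth, the standard linear-shooting setup (the book's equations 5.6-5.8 aren't reproduced here, so the exact form is an assumption) takes the BVP y'' = p(x)y' + q(x)y + r(x) with y(a) = alpha, y(b) = beta, and splits it into two IVPs precisely because the equation is linear: y1 carries the inhomogeneous term and the left boundary value, y2 solves the homogeneous problem, and y = y1 + c·y2 with c chosen to hit the right boundary value. A self-contained sketch with a hand-rolled RK4:

```python
import math

def rk4(f, y0, a, b, n=2000):
    """Integrate the first-order system y' = f(x, y) from x=a to x=b."""
    h = (b - a) / n
    x, y = a, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (c1 + 2 * c2 + 2 * c3 + c4)
             for yi, c1, c2, c3, c4 in zip(y, k1, k2, k3, k4)]
        x += h
    return y

def linear_shoot(p, q, r, a, b, alpha, beta, x_eval):
    # IVP 1 (inhomogeneous): y1'' = p y1' + q y1 + r, y1(a)=alpha, y1'(a)=0
    f1 = lambda x, y: [y[1], p(x) * y[1] + q(x) * y[0] + r(x)]
    # IVP 2 (homogeneous):   y2'' = p y2' + q y2,     y2(a)=0, y2'(a)=1
    f2 = lambda x, y: [y[1], p(x) * y[1] + q(x) * y[0]]
    y1b = rk4(f1, [alpha, 0.0], a, b)[0]
    y2b = rk4(f2, [0.0, 1.0], a, b)[0]
    c = (beta - y1b) / y2b            # weight that hits the right boundary
    y1 = rk4(f1, [alpha, 0.0], a, x_eval)[0]
    y2 = rk4(f2, [0.0, 1.0], a, x_eval)[0]
    return y1 + c * y2

# Example: y'' = -y, y(0) = 0, y(pi/2) = 1, whose solution is y = sin(x).
val = linear_shoot(lambda x: 0.0, lambda x: -1.0, lambda x: 0.0,
                   0.0, math.pi / 2, 0.0, 1.0, math.pi / 4)
print(val, math.sin(math.pi / 4))     # both about 0.70711
```

The two IVPs are needed because y1 alone satisfies the equation but not the right boundary condition, while y2 spans the freedom left by the unspecified initial slope; linearity guarantees that y1 + c·y2 still satisfies the equation.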
Spreadsheets for Allowable Stress Design of Beams
Where to Find an Excel Spreadsheet for Allowable Stress Design of Beams
For an Excel spreadsheet for allowable stress design of beams, click here to visit our spreadsheet store. Obtain a convenient, easy to use spreadsheet for allowable stress design of beams at a
reasonable price. Read on for information about the use of deflection limits and serviceability requirements for simply supported beam design.
Background for Allowable Stress Design of Beams
Design of a simply supported beam with uniform distributed load can be carried out as follows. Based on inputs of span length, elastic modulus, live load, dead load, allowable bending stress,
deflection limit for live load and deflection limit for live load and dead load acting simultaneously, the equations in the next section can be used to calculate maximum moment, maximum shear,
elastic section modulus, and minimum moments of inertia required to satisfy the constraints on deflection. The equations can also be used to check on whether a known design satisfies strength and
deflection requirements.
Equations for Allowable Stress Design of Beams
Equations for the first step in allowable stress design of beams calculations are as follows for a simply supported beam subject to a uniform distributed load:
M[max] = wL^2/8, where
• M[max] = maximum moment in the beam
• w = distributed load on the beam
• L = length of span
V[max] = wL/2, where
• V[max ] = maximum shear in the beam
• w and L are as defined above
M[allow] = SF[b], where
• M[allow] = the allowable moment in the beam
• S = elastic section modulus of the beam
• F[b] = maximum allowable stress in the beam
y[max] = 5wL^4/(384EI), where
• y[max] = the maximum deflection in the beam
• E = elastic modulus of the beam
• I = moment of inertia of the cross section of the beam
y[max] < L/L[d], where
• L[d] is a dimensionless number specified by code, depending on structural application and load type (typically L[d] = 120, 180, 240, 360, or 600)
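The calculation sequence above is simple enough to script directly. A minimal sketch in SI units (the input numbers are illustrative only, not taken from the spreadsheet):

```python
def beam_design(w, L, E, F_b, L_d):
    """Simply supported beam under uniform load: w (N/m), span L (m),
    elastic modulus E (Pa), allowable stress F_b (Pa), deflection limit L/L_d."""
    M_max = w * L**2 / 8                   # maximum moment, N*m
    V_max = w * L / 2                      # maximum shear, N
    S_req = M_max / F_b                    # required section modulus, m^3
    # 5 w L^4 / (384 E I) <= L / L_d  =>  I >= 5 w L^3 L_d / (384 E)
    I_req = 5 * w * L**3 * L_d / (384 * E)
    return M_max, V_max, S_req, I_req

M, V, S, I = beam_design(w=10_000.0, L=6.0, E=200e9, F_b=165e6, L_d=360)
print(M, V, S, I)   # 45000 N*m, 30000 N, about 2.73e-4 m^3, about 5.06e-5 m^4
```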
A Spreadsheet for Allowable Stress Design of Beams
The screenshot below shows an Excel spreadsheet for allowable stress design of beams. Based on inputs of span length, elastic modulus, live load, dead load, allowable bending stress, deflection
limit for live load and deflection limit for dead load, the spreadsheet can be used to calculate maximum moment, maximum shear, elastic section modulus, and minimum moments of inertia required to
satisfy the constraints on deflection.
For low cost, easy to use spreadsheets to make these calculations in S.I. or U.S. units, as well as checking with a known design to see if strength and deflection requirements are met, click here to
visit our spreadsheet store.
Braingle: 'Conversions' Brain Teaser
Science brain teasers require understanding of the physical or biological world and the laws that govern it.
Puzzle ID: #47847
Category: Science
Submitted By: Mogmatt16
When you convert from an English measurement to a Metric measurement, you multiply by a number. For example, to find out how many kilograms are in 10 pounds, you multiply 10 by .454. To find out how
many meters are in 10 yards, you multiply 10 by .914. This is the procedure for every conversion except for one. When you convert from Fahrenheit to Celsius, you subtract 32 and multiply by 1/9.
Why does the temperature conversion require a subtraction and a multiplication, while all the other conversions require just a multiplication?
For distance, weight, and every other measure, a result of zero is the same no matter what the units are. The temperature scales, however, have arbitrary zeros. This means that 0 degrees Celsius and
0 degrees Fahrenheit are not the same temperature.
(As an interesting side note, the Kelvin and Rankine scales both have their 0 degrees set to Absolute Zero, so you can convert from one to the other with a simple multiplication.)
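Put differently, every unit conversion is an affine map y = m·x + b, and the offset b vanishes exactly when the two scales share their zero point, as Kelvin and Rankine do:

```python
def f_to_c(f):
    return (f - 32) * 5 / 9        # affine: an offset is needed (zeros differ)

def kelvin_to_rankine(k):
    return k * 9 / 5               # pure scaling: both zeros at absolute zero

print(f_to_c(212))                 # 100.0 (boiling point of water)
print(kelvin_to_rankine(273.15))   # about 491.67 (freezing point of water)
```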
unklemyke Well, actually, you subtract because Celsius sets the freezing point of water at 0° and Fahrenheit sets it at 32°. Likewise, the boiling point of water is 100° C, 212° F.
Oct 11, 2010 The 180-point difference in F as opposed to the 100-point difference in C yields the constant - and it's 5/9 from F to C, not 1/9. Conversely, you multiply by 9/5 going from C to F.
Nerine Huh, it's quite obvious when you know the answer
Jan 06, 2011
AndrewWalker I didn't get the exact answer, but I do know, if you graph the resulting conversions as linear functions, each line has a unique slope. This means there is a non-linear relationship
Jan 31, 2012 between the two conversions. Also the two lines intersect at the point (-40,-40) so -40 degrees F= -40 degrees C
eighsse If you have a decent understanding of the temperature scales, this isn't much of a teaser, just asking an everyday question. But hey, not everyone does have a good understanding of it,
Jul 28, 2013 so I'm sure some people learned something here
Five More Golden Rules: Knots, Codes, Chaos and Other Great Theories of 20th-Century Mathematics
This book is a follow-up to John Casti's Five Golden Rules. In this installment, the author talks about the development of five (more) mathematical theories and their applications.
Chapter 1 The Alexander Polynomial: Knot Theory
In the first chapter, we are introduced to the theory of knots: when (and how) a knot can be unraveled and the idea of knot invariants. Some of the invariants include knot colorings, linking numbers,
twisting numbers, writhing numbers, and the Alexander polynomial. Casti manages to relate the theory (at least in part) to the works of Albrecht Durer, to minimal energy configurations, and to the
raveling and unraveling of DNA. This last topic yields one of the most interesting observations I've seen in a while:
...the amino acids making up the proteins in living organisms as well as the nucleic acids forming the cellular DNA all come in both left- and right-handed forms. These two forms, although
chemically identical in the sense of being formed from exactly the same atoms, have entirely different chemical actions as a result of their "twisting" in opposite directions in space.
Interestingly, in observations of galactic clouds in space, as well as in experiments in earthly chemical laboratories, it seems that both forms arise naturally in more or less equal proportions.
Yet all life forms on Earth use exclusively left-handed amino acids to form proteins and right-handed nucleic acids to form the genetic material. As a consequence of this puzzling fact, you would
starve to death on a world where the steaks were all made out of right-handed proteins because your body chemistry would be unable to break these proteins down to extract their energy.
Chapter 2 The Hopf Bifurcation Theorem: Dynamical System Theory
In this chapter, we learn of the theory of dynamical systems, and the idea of stability and attractors. Casti provides many examples of such systems and uses them to roll out many of the major
results from the theory: the Linear Stability Theorem, the Hartman-Grobman Theorem (on linear approximation of a dynamical system near the origin), the Center Manifold Theorem (on equilibrium
solutions), and the Hopf Bifurcation Theorem (on the stability of a system's equilibrium). This leads to discussions on randomness and deterministic dynamical systems, the motion of planets,
fractals, and the music of Bach.
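The Linear Stability Theorem has a particularly simple planar instance worth spelling out: for x' = Ax with a 2×2 matrix A, the origin is asymptotically stable exactly when trace(A) < 0 and det(A) > 0, since both eigenvalues then have negative real part. A minimal sketch (the damping value is an arbitrary choice):

```python
def is_stable_2x2(a, b, c, d):
    """Asymptotic stability of the origin for x' = Ax, A = [[a, b], [c, d]]:
    both eigenvalues have negative real part iff trace < 0 and det > 0."""
    trace, det = a + d, a * d - b * c
    return trace < 0 and det > 0

# Damped oscillator x'' + 0.5 x' + x = 0 written as (x, v)' = (v, -x - 0.5 v):
print(is_stable_2x2(0.0, 1.0, -1.0, -0.5))   # True  (spiral sink)
# Undamped oscillator: purely imaginary eigenvalues, not asymptotically stable:
print(is_stable_2x2(0.0, 1.0, -1.0, 0.0))    # False (center)
```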
Chapter 3 The Kalman filter: Control Theory
This chapter starts with the idea of trying to determine if, given a set of rules governing the motion of an object (control inputs), we can reach a given position. This problem of reachability is
solved for linear dynamical systems. Casti then turns to the problem of observation: observation of things (like concentration of a drug in a patient's blood stream) which cannot be measured
directly. The problem of complete observability is solved. The solutions to the problems of reachability and observability look remarkably similar, which is no accident: the problems turn out to be
duals of each other. Next, Bernoulli's brachistochrone problem is used to introduce the calculus of variations and the theory of optimal control. In studying optimal control processes, we are lead to
the Pontryagin Minimum Principle and we find that the solutions to the optimal control problems have the form of an "open-loop" control law. On the other hand, in studying the stability of a control
system, we are led to the idea of feedback control and dynamic programming. It turns out that the calculus of variations and dynamic programming are duals of each other. The Kalman filter is then
introduced as a way of observing a system in which each observation is corrupted by some background noise. This piece of mathematics leads to the following application:
The Kalman filter is used in just about every inertial navigation system in existence. For example, Hexad, the gyroscopic system on the Boeing 777 aircraft uses a Kalman filter to estimate the
errors in each of the six gyros with respect to the others.
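The idea can be conveyed by a scalar sketch: each noisy reading is blended with the running estimate, weighted by their relative uncertainties. This is a generic textbook illustration (estimating a constant with no process noise), not the avionics implementation:

```python
import random

def kalman_1d(measurements, meas_var, init_est=0.0, init_var=1000.0):
    """Estimate a constant scalar from noisy readings (1-D Kalman filter)."""
    est, var = init_est, init_var
    for z in measurements:
        gain = var / (var + meas_var)   # how much to trust the new reading
        est = est + gain * (z - est)    # blend estimate and measurement
        var = (1 - gain) * var          # uncertainty shrinks after each update
    return est, var

random.seed(0)
readings = [5.0 + random.gauss(0, 1.0) for _ in range(500)]
est, var = kalman_1d(readings, meas_var=1.0)
print(est, var)   # estimate near 5.0; variance near 1/500
```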
Chapter 4 The Hahn-Banach Theorem: Functional Analysis
This chapter provides a nice history of the development of some of the early topics of functional analysis. Casti once again exploits the idea of duality, relating functions to functionals. The study
of functionals leads us to the standard results: the Riesz Representation Theorem and the Hahn-Banach Theorem. The author then turns to the idea of operators and linear transformations, giving the
Contraction Mapping Theorem and the Spectral Theorem. These ideas are used to describe John von Neumann's attempt to apply functional analysis to quantum mechanics. The remainder of the chapter is
reserved for what happens with nonlinear operators.
Chapter 5 The Shannon Coding Theorem: Information Theory
This final chapter gives some of the history of information theory, which leads to the study of making codes. McMillan's Theorem on uniquely decipherable codes and Kraft's Theorem on instantaneous
codes are mentioned, and a nice description of Huffman's instantaneous coding scheme is given. The topic of optimal codes (codes with minimal average code-word length) culminates in Shannon's
Noiseless Coding Theorem. Casti then turns to the matter of signals and error-correcting codes, which leads to a description of Hamming's error-correcting codes. We return to the topic of DNA, which
is itself a coding scheme--a code which is both instantaneous and optimal. The chapter ends with an interesting discussion on the relation between word rank and frequency of use (Zipf's Law).
Casti's presentation of these topics is very readable. Many of the topics start as simple ideas, which he uses to lead us to some very rich mathematics. He sprinkles the topics with occasional brief
quotes and stories from historical characters. And he provides a nice set of references for each of the five themes, in which he not only lists, but also gives a short description of, each book or article.
With regards to the nonmathematical reader, this book walks a fine line: giving enough of the general view of each theory and its applications to keep the reader interested, while mathematically
justifying the steps (even if this reader does end up skipping over the equations and formulas). Mathematicians, however, should enjoy seeing how these abstract theories play themselves out in the
"real world". Perhaps a word of caution is in order, though: the mathematics (at least in the second, third, and fourth chapters) is heavily skewed toward analysis. So, if you really don't care for
integration and differential equations, then you may want to restrict yourself to the first and last chapters. But any professional mathematician should appreciate the beauty of these ideas.
Donald L. Vestal is an Assistant Professor of Mathematics at Missouri Western State College. His interests include number theory, combinatorics, reading, and listening to the music of Rush. He can be
reached at vestal@griffon.mwsc.edu.
Braingle: 'Why Skid Mark Why? Part 2' Brain Teaser
Why Skid Mark Why? Part 2
Logic puzzles require you to think. You will have to be logical in your reasoning.
Puzzle ID: #24703
Category: Logic
Submitted By: Question_Mark
When Question Mark unlocked the door, he thought that he would see his wallet straight away. But Skid Mark (Question's brother) decided to put the wallet in a safe. The combination is three 2-digit
numbers which can be expressed like this:
You are given the following clues to work out the combination:
The total of the three numbers is 39.
The second number is half of the third number.
The first number is the third number minus 1.
Can you find Question's wallet in time? It's all up to you.
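For reference, the clues reduce to a single linear equation in the third number (my own working, not the site's hidden answer; note the middle number is only two digits if written with a leading zero):

```latex
% Let t be the third number; the second is t/2 and the first is t - 1.
(t - 1) + \tfrac{t}{2} + t = 39
\quad\Longrightarrow\quad
\tfrac{5t}{2} = 40
\quad\Longrightarrow\quad
t = 16,
% so the combination would be 15, 08, 16.
```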
Reductionism Is Not Fundamentalism
Ashutosh Jogalekar has a response to my post from yesterday complaining about his earlier post on whether multiverses represent a philosophical crisis for physics. I suspect we actually disagree less
than that back-and-forth makes it seem– he acknowledges my main point, which was that fundamental theoretical physics is a small subset of physics as a whole, and I don’t disagree with his point that
physics as a discipline has long been characterized by a reductionist sort of approach– always trying to get to smaller numbers of fundamental principles.
Our real point of disagreement, I think, is more subtle, and has to do with the implications of that reductionist approach. He seems to take the view that this necessarily entails always striving for
greater simplicity, while I’m happy to stop at a somewhat higher level. Not to put words in his mouth, but a kind of extreme version of this line of thought is seen in the comment from RM, putting
words in the mouth of a certain class of other physicists, who would claim that the sort of physics I do isn’t really physics, but stamp collecting.
Jogalekar isn’t that crass, but goes for more high-class antecedents, telling a story about Oppenheimer, who felt thatfinding fundamental equations was the only real physics, and “the study of
particular solutions of the equations would be a routine exercise for second-rate physicists or graduate students.” My feeling is that this probably says more about Oppenheimer and his character
flaws than it does about physicists in general, but I will also admit that I don’t know all that much about Oppenheimer.
(It’s kind of weird, really– he’s an important figure in 20th century physics, and plays a peripheral role in the stories of a lot of people I find more interesting, but somehow, Oppenheimer himself
has never seemed all that compelling to me. I’d like to read a good biography of Pauli at some point, but despite the ready availability of a new and fairly well regarded Oppenheimer bio, I’m just
not that interested.)
This does raise an interesting question, though, about what exactly characterizes the mindset of a physicist, as distinct from those other, stamp-collecting disciplines. Obviously, I have an interest
in avoiding any definition that has those of us who study whole atoms (let alone large collections thereof) classed as chemists with delusions of grandeur, but if I’m not willing to go down the
rabbit hole of endless searching for more fundamental laws, what is it that defines physicists?
I spent a bunch of time Wednesday thinking about this, and ended up offering a few comments on it to the general bafflement of my students in introductory E&M, who almost certainly don’t read this
blog. If you want a bumper-sticker version, though, you can probably go with a line often found pasted over photographs of Einstein in various social-media things, namely that everything should be
made as simple as possible, but no simpler.
I’ve gone on about this before, but when I think about what it means to think like a physicist, I keep circling around the idea of doing the best that you can with the smallest number of inputs.
Physicists will treat cows as spherical because you can capture a surprising amount of the essential elements of cow behavior while operating under that approximation. A more complicated model will
pick up some additional details, but the spherical model allows a more universal and elegant approach, and gets at the things that cows have in common with horses, sheep, and donkeys (all of which
can also be approximated as spheres, to lowest order).
But that commitment to simple rules and general principles does not imply a need to follow that all the way down, or any kind of spiritual connection to people who do. I’m not bothered by the
inability of string theorists to come up with an equation for everything that fits on a cocktail napkin, because I can write the Schrödinger equation on the back of a business card, and it’s as much
of a theory of everything as I need to fill an entire career.
You can take a reductionist approach without reducing things all the way to the Standard Model– even when they’re spherical, after all, cows are composite particles. Working out the consequences of
lots of simple particles interacting via simple rules is a fascinating field in itself, and does not entail the abandonment of an essentially reductionist approach. Reducing the rules and particle
properties to their essential core is reductionist already.
An anecdote from class Wednesday that might help to illustrate what I’m talking about. We’re in the third week of intro E&M, talking about the electric fields of various charge distributions. I
pointed out that, really, all you need to know to understand electrostatics is how to calculate the electric field of a point charge; the rest is just calculus. Then we worked through a few examples
of how to find the field of some simple symmetric shapes by adding up lots of point charges.
When we were done going over the field from a ring of charge, one of the students, a mechanical engineering major, asked “Why don’t you just give us the formula for calculating this for a charge
density distribution? Why are we talking about these shapes? I mean, if there’s a tiny bump on one side, that formula isn’t any good any more.”
I tried to explain that the point isn’t just to have a process that lets you turn the crank and get an answer, but to get some insight into the essential behavior. It’s perfectly true that the
formula for the field from a ring of charge doesn’t work for a ring of charge with a bump on one side. But if you know how to find the field of a ring, then you can approximate the ring-with-a-bump
as a ring with a point charge next to it. And knowing the simple cases gives you a sense of what you ought to expect from the complicated scenario when you turn the crank on the more detailed method.
And yeah, the exact details of the final answer might depend on the exact shape of the bump on the side of the ring, but the simple approximation will get you a basic idea, and provides an essential
sanity check.
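The ring-of-charge point from the anecdote is easy to check numerically. A minimal sketch (constants, values, and function names are mine, not from the post): sum the fields of many point charges placed around the ring and compare with the closed-form on-axis result E_z = kQz/(z^2 + R^2)^(3/2).

```python
import math

K = 8.9875517923e9  # Coulomb constant, N m^2 / C^2

def ring_field_numeric(Q, R, z, n=1000):
    """z-component of E on the axis of a uniformly charged ring,
    built by superposing n point charges."""
    dq = Q / n
    Ez = 0.0
    for i in range(n):
        theta = 2 * math.pi * i / n
        x, y = R * math.cos(theta), R * math.sin(theta)
        r2 = x * x + y * y + z * z       # squared distance to the field point
        Ez += K * dq * z / r2 ** 1.5     # transverse components cancel by symmetry
    return Ez

def ring_field_exact(Q, R, z):
    """Closed-form on-axis field of the same ring."""
    return K * Q * z / (z * z + R * R) ** 1.5
```

By symmetry the transverse components cancel, so on the axis even a modest number of point charges reproduces the analytic answer essentially exactly; off-axis, the same superposition loop still works where no tidy closed form exists.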
I’m not sure that really convinced him that this was worthwhile, but then, that’s probably the difference between the worldview of a physicist and an engineer right there. As a physicist, I want to
model cows as spheres and sticky tapes as point charges, and leave the fiddly details to the engineers.
(Meanwhile, the chemistry majors in the class want a comprehensive and memorizable list of formulae covering all the possible shapes we might ask about…)
As to the question of whether multiverses pose a fundamental philosophical problem for physics, I just don’t see it. I find the whole thing a little silly, particularly in its more extravagant forms,
but I don’t think it represents any kind of existential crisis. It’s a phase that will shake itself out eventually– the more precisely defined multiverse models will eventually make predictions, and
clever observers and experimentalists will figure out something to test those, and everybody will move on. The less precisely defined and the essentially undefinable forms will fade away, or become
Even if I’m wrong, though, if the fundamentalist subfield remains forever as murky as pot smoke in a dorm room, that has essentially zero impact on what people in my field, or the many other thriving
subfields of physics, do with our lives. There’s a common worldview of sorts, but we’re really not drawing philosophical inspiration from quantum gravity theorists. We’re too busy doing physics for that.
1. #1 Peter Morgan January 23, 2014
It’s not so much that new theories have “essentially zero impact on what people in my field … do with our lives”, as that the effects of *successful* new theories will have a big impact in 40
years time, as QM did from, say, the 60s on. Before lasers and semiconductors, the impact was relatively small. The same was more-or-less true for Maxwell’s EM.
There’s plenty of room for some people to find it more interesting to try to make successful simpler theories, for others to find it more interesting to develop the consequences of successful
theories that have already been found, and for others still to find it more interesting to try to find really good ways to teach our successful theories to the next generation, with a constant
discussion of how much resource emphasis should be given to each at any given time.
2. #2 John Novak January 23, 2014
As a physicist, I want to model cows as spheres and sticky tapes as point charges, and leave the fiddly details to the engineers.
As an engineer, we want models accurate enough, and easy enough to compute, that we can use them to design and build something.
Your young engineer’s instinct was correct for the profession, but lacks a certain seasoning in the sense of figuring out the right level of accuracy and the right level of ease of use for a
given job.
3. #3 G January 23, 2014
As an engineer, I’m still ferociously fascinated with the physical sciences, and with the social sciences, and certain branches of the humanities as well. The curiosity to know and the desire to
build are two faces of the same coin, as with the sense of meaning and the sense of purpose.
The contrast is interesting, between your use of the term “fundamentalism” and its more common meaning as a strain of religion that is excessively literal and incapable of understanding metaphor
and abstraction. In any case, the desire to dominate others by asserting “more (whatever) than thou,” is a feature of both that no longer has adaptive value for our species.
In this vast and awesome universe of ours, there’s more than enough room for everyone to seek knowledge wherever they choose, without having to stir up competition or other dramatic distractions.
4. #4 Bee
January 23, 2014
Thanks for the link :) “I find the whole thing a little silly…” I think this echoes the opinion of most physicists, just that the public doesn’t hear from most of them. That you’re here to voice
your opinion is one of the main reasons I think blogs are necessary to complement science journalism.
5. #5 Jacob Stewart January 23, 2014
Hey, not *all* chemistry majors are like that. There are those of us who end up in Physical Chemistry/Chemical Physics. :) I think those in my field (molecular spectroscopy) tend to approach
problems more like a physicist than a typical chemist would.
6. #6 David Brown January 23, 2014
“… the idea of doing the best you can with the smallest number of inputs.” Google “space roar dark energy”.
7. #7 Rationally Speaking: Is information physical? And what does that mean? | SelfAwarePatterns January 24, 2014
[…] Reductionism Is Not Fundamentalism (scienceblogs.com) […]
[postgis-users] TIN from point coverage
Stephen Woodbridge woodbri at swoodbridge.com
Thu Mar 24 16:23:49 PDT 2011
We have a use case for this in pgRouting.
Basically we compute the driving distance of a network and get a
collection of points that can be represented as x,y,z where z is the
cost to get to that point.
You can visualize this data like a cereal bowl where the start node is
in the center of the bowl with a cost of zero and then as you move
outwards the cost increases as you move up the edges of the bowl.
So we take these points and we need them triangulated.
Then we intersect the triangles with parallel planes increasing in z
to get isochronal contour edge pieces, that we could then feed to
st_BuildArea to create polygons for each isochrone.
But all this starts with Delaunay triangulation.
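The slicing step described above can be sketched in a few lines of plain Python (an illustration only, not pgRouting/PostGIS code; the function name is mine). Given a triangulation as vertex and triangle lists, intersect each triangle edge with the plane z = c:

```python
def isochrone_segments(vertices, triangles, c):
    """vertices: list of (x, y, cost) tuples; triangles: index triples
    into that list. Returns the 2-D line segments where the TIN
    crosses cost level c."""
    segments = []
    for tri in triangles:
        crossings = []
        for i in range(3):
            xa, ya, za = vertices[tri[i]]
            xb, yb, zb = vertices[tri[(i + 1) % 3]]
            if (za - c) * (zb - c) < 0:      # edge straddles the plane
                t = (c - za) / (zb - za)     # linear interpolation parameter
                crossings.append((xa + t * (xb - xa), ya + t * (yb - ya)))
        if len(crossings) == 2:
            segments.append(tuple(crossings))
    return segments
```

Each triangle that straddles the plane contributes one segment; the collected segments are the isochrone edge pieces that would then be assembled into polygons (e.g. with st_BuildArea). Vertices lying exactly on the plane are ignored in this sketch.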
On 3/24/2011 6:17 PM, Olivier Courtin wrote:
> On Mar 24, 2011, at 10:52 PM, Pierre Racine wrote:
> Pierre,
>> My goal is to define the way leading from a point coverage to a
>> raster. Something like ST_Interpolate("pointgeomtable",
>> "rastercolumn") -> raster. What would be the step leading to that?
>> I guess we have everything now to store a Delaunay Triangulation as
>> polyhedral surfaces right?
> Hum yes and no, in fact it depends :)
> As i said the TIN model is designed for a feature model storage (CIM use
> case),
> so in a single cell, a reasonable small TIN (i.e thousands of points is
> fine
> or even more, but not a whole LIDAR Coverage triangulation)
> If we want to handle this use case, i think
> we have to add a way to split and aggregate a big TIN into several small
> ones
> (as i recall it should be the way Oracle Spatial handle it)
>> Then from this polyhedral surface we can derive a raster...
>> Maybe a first ST_AsDelaunayTriangulation(geom) could be an aggregate
>> SET function returning a set of triangle...
> Yes, if you produce a set of Triangles rather than a single huge TIN,
> it could work with the current TIN implementation.
> Question: is it something liblas could already handle?
>> and maybe ST_Interpolate(polyhedral surfaces) could be an aggregate
>> function that would aggregate those triangle into a raster?
> --
> Olivier
> _______________________________________________
> postgis-users mailing list
> postgis-users at postgis.refractions.net
> http://postgis.refractions.net/mailman/listinfo/postgis-users
More information about the postgis-users mailing list
Mathematics Quotes
Mathematics Quotes, Quotations, and Sayings
31 Mathematics Quotes
With me everything turns into mathematics. [Lat., Omnia apud me mathematica fiunt.]
René Descartes Quotes
As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.
Albert Einstein Quotes
Do not worry about your difficulties in mathematics, I assure you that mine are greater.
Albert Einstein Quotes
How can it be that mathematics, being after all a product of human thought independent of experience, is so admirably adapted to the objects of reality?
Albert Einstein Quotes
Definition of a Statistician: A man who believes figures don't lie, but admits than under analysis some of them won't stand up either.
Evan Esar Quotes
Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.
John Von Neumann Quotes
Orthogonal Matrix proof
January 5th 2011, 02:34 PM #1
Junior Member
Oct 2010
Orthogonal Matrix proof
An n x n matrix Q is said to be an orthogonal matrix if the column vectors of Q form an orthonormal minimal spanning set of R^n. Prove the following theorem.
An n x n matrix Q is orthogonal if and only if Q^TQ = In (where Q^T is the transpose of Q).
I'm struggling with this.
Q = (Q1, Q2, ..., Qn) so Q^T = Row vector of Q.
So Q^TQ = Is an n x n matrix where the ij components { 1 if i=j or 0 if i =/ j }
So QTQ = In
If you can understand what I put, I kind of proved one way. Could someone help with the full proof?
I think you have it.
For column vectors $v_1, v_2, ...$,
Just mention that each entry of the product is $(Q^TQ)_{ij} = v_i \cdot v_j$, so { 1 if i=j and 0 if i =/ j } by orthogonality.
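Spelling out both directions (a sketch in my own wording, building on the dot-product observation):

```latex
% Write q_1, \dots, q_n for the columns of Q.
% (=>) If the columns are orthonormal, then for all i, j
(Q^T Q)_{ij} = q_i^T q_j =
  \begin{cases} 1 & i = j,\\ 0 & i \neq j, \end{cases}
\qquad\text{so } Q^T Q = I_n.
% (<=) If Q^T Q = I_n, the same entrywise computation gives
% q_i^T q_j = \delta_{ij}, so the columns are orthonormal. Any n
% orthonormal vectors in R^n are linearly independent, hence a basis
% of R^n, i.e. a minimal spanning set, so Q is orthogonal as defined.
```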
I am almost positive I have this proven in the Sticky in this forum.
I could be wrong but I think it is there.
Is anyone able to give me the full proof?
Iff. two ways.
1 direction
$P\Rightarrow Q$
Sticky PDF #23
Frozen Core Options
These options specify which inner orbitals are frozen in post-SCF calculations. Gaussian 09 adds some additional options to the ones already available in the program [Austin02]. See also BD for
BD-specific choices.
This specifies a “frozen core” calculation, and it implies that inner-shells are excluded from the correlation calculation. This is the default calculation mode. Note that FC, Full, RW and Window are
mutually exclusive. It is equivalent to FreezeG2 for the 6-31G and 6-311G basis sets and to FreezeNobleGasCore for all other basis sets, except that the outer s and p core orbitals of 3rd row and
later alkali and alkaline earth atoms are not frozen (in accord with the G2/G3/G4 conventions).
In post-SCF calculations the largest noble gas core is frozen. FrzNGC is a synonym for this option.
In post-SCF calculations, the next to largest noble gas core is frozen. That is, the outermost core orbitals are retained. FrzINGC and FC1 are synonyms for this option.
Freeze orbitals according to the G2 convention: d orbitals of main group elements are frozen, but the outer sp core of 3rd row and later alkali and alkaline earth elements are kept in the valence.
Freeze orbitals according to the G3 convention.
Freeze orbitals according to the G4 convention.
This specifies that all electrons be included in a correlation calculation.
The “read window” option means that specific information about which orbitals are retained in the post-SCF calculation will be given in the input file. ReadWindow is a synonym for RW.
The required input section consists of a line specifying the starting and ending orbitals to be retained, followed by a blank line. A value of zero indicates the first or last orbital, depending on
where it is used. If the value for the first orbital is negative (-m), then the highest m orbitals are retained; if the value for the last orbital is negative (-n), then the highest n orbitals are
frozen. If m is positive and n is omitted, n defaults to 0. If m is negative and n is omitted, then the highest |m| occupied and lowest |m| virtual orbitals are retained.
Here are some examples for a calculation on C4H4:
0,0 Equivalent to Full.
5,0 Freezes the 4 core orbitals and keeps all virtual orbitals (equivalent to FC if the basis has a single zeta core).
5,-4 Freezes the four core orbitals and the highest four virtual orbitals. This is the appropriate frozen core for a basis with a double-zeta core.
6,22 Retains orbitals 6 through 22 in the post-SCF phase. For example, since C4H4 has 28 electrons, if this is a closed shell calculation, there will be 14 occupied orbitals, 5 of which will be frozen, so the post-SCF calculation will involve 9 occupied orbitals (orbitals 6-14) and 8 virtual orbitals (orbitals 15-22).
-6 Retains orbitals 9 through 20.
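The window rules above can be sketched as a small helper (the function name, signature, and the 1-based inclusive return value are my own; this illustrates the documented semantics only, not Gaussian's implementation, and it omits the negative-first-value-with-explicit-n case):

```python
def resolve_window(m, n=None, n_orbitals=None, n_occupied=None):
    """Return the 1-based (first, last) orbitals retained for an
    m,n window specification, per the rules described above."""
    if m < 0 and n is None:
        # highest |m| occupied and lowest |m| virtual orbitals retained
        return (n_occupied + m + 1, n_occupied - m)
    if n is None:
        n = 0                       # positive m with n omitted defaults to 0
    first = 1 if m == 0 else m      # zero means the first orbital
    if n == 0:
        last = n_orbitals           # zero means the last orbital
    elif n < 0:
        last = n_orbitals + n       # freeze the highest |n| orbitals
    else:
        last = n
    return (first, last)
```

The C4H4 examples check out against this sketch: resolve_window(5, -4, n_orbitals=22) gives (5, 18), and resolve_window(-6, n_occupied=14) gives (9, 20).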
Performs the same function as the ReadWindow option, but takes its input as parameters in the route section rather than from the input stream.
The window read in during a previous job is recovered from the checkpoint file.
Reads a list of orbitals to freeze from the input stream, terminated by a blank line. Two lists are read for unrestricted calculations. A range of orbitals can be specified, e.g.: 2 7-10 14
Last update: 23 April 2013
Module::Hash - a tied hash that requires modules for you
use strict;
use Test::More tests => 1;
use Module::Hash;
tie my %MOD, "Module::Hash";
my $number = $MOD{"Math::BigInt"}->new(42);
ok( $number->isa("Math::BigInt") );
Module::Hash provides a tied hash that can be used to load and quote module names.
tie my %MOD, "Module::Hash", %options;
The hash is tied to Module::Hash. Every time you fetch a hash key, such as $MOD{"Math::BigInt"} that module is loaded, and the module name is returned as a string. Thus the following works without
you needing to load Math::BigInt in advance.
You may wonder what the advantage is of this hash, rather that using good old:
require Math::BigInt;
Well, the latter is actually ambiguous. Try defining a sub called BigInt in the Math package!
You can provide an optional minimum version number for the module. The module will be checked against the required version number, but the version number will not be included in the returned string.
Thus the following works:
$MOD{"Math::BigInt 1.00"}->new(...)
The following options are supported:
• prefix - an optional prefix for modules
tie my %MATH, "Module::Hash", prefix => "Math";
my $number = $MATH{BigInt}->new(42);
• optimistic - a boolean. If the hash is optimistic, then it doesn't croak when modules are missing; it silently returns the module name anyway. Hashes are optimistic by default; you need to
explicitly pessimize them:
tie my %MOD, "Module::Hash", optimistic => 0;
Attempting to modify the hash will croak.
If you just want to use the default options, you can supply a reference to the hash in the import statement:
my %MOD;
use Module::Hash \%MOD;
my $MOD;
use Module::Hash $MOD;
Little known fact: Perl has a built-in global hash called %\. Unlike %+ and %- and some other built-in global hashes, the Perl core doesn't use it for anything. And I don't think anybody else uses it
either. The following makes for some cute code...
use Module::Hash \%\;
... or an unmaintainable nightmare depending on your perspective.
This module also provides an object-oriented interface, intended for subclassing, etc, etc.
Please report any bugs to http://rt.cpan.org/Dist/Display.html?Queue=Module-Hash.
Most of the tricky stuff is handled by Module::Runtime.
Module::Quote is similar to this, but more insane. If this module isn't insane enough for you, try that.
Toby Inkster <tobyink@cpan.org>.
This software is copyright (c) 2012 by Toby Inkster.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
Efficient Algorithm ?
January 9th, 2011, 04:35 PM #1
Junior Member
Join Date
Apr 2009
Efficient Algorithm ?
I am trying to find an O(n) algorithm for this problem but have been unable to do so even after spending 3-4 hours. The brute-force method times out (O(n^2)). I am confused as to how to do it. Does the solution require dynamic programming?
In short the problem is this:
There are some students sitting in a circle, and each one of them has his own choice as to when he wants to be asked a question by the teacher. The teacher will ask the questions in clockwise order only. For example:
This means that there are 5 students and :
1st student wants to go third
2nd student wants to go third
3rd student wants to go first
4th student wants to go fifth
5th student wants to go fifth.
The question is where the teacher should start asking questions so that the maximum number of students will get the turn they want. For this particular example, the answer is 5 because
You can see that by starting at the fifth student as 1st, 2 students (3 and 5) are getting the choices they wanted. For this example the answer is the 12th student:
four students get their choices fulfilled.
Re: Efficient Algorithm ?
Don't worry guys! I found out the O(n) algorithm.
Re: Efficient Algorithm ?
Don't worry guys! I found out the O(n) algorithm.
Let's compare solutions.
If there are N students there are N possible starting positions.
You need an array of N counters each representing a starting position.
By knowing a student's position at the table you also know which starting position satisfies this student's preference.
So you only have to visit each student once and increment the counter associated with his/her preference. Afterwards the starting position with the largest count is the one that satisfies the most students.
Visiting each student is an O(N) process. Updating a counter is a table look-up, so it's an O(1) process. This means the overall complexity will be O(N).
One can note that compared with the brute force O(N^2) algorithm this one has lower time complexity but at the cost of more memory (the N counters). In the original problem formulation N can be
10^5 at the most so this O(N) algorithm is still feasible.
Since the students don't have to be visited in any particular order, they can be visited at the same time. It means the algorithm can be run in parallel, and if there are N processors it will be O(1).
Last edited by nuzzle; January 14th, 2011 at 10:14 AM.
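The counting scheme nuzzle describes can be sketched as follows (seats and preferred turns are 1-based as in the original example; the function name is mine):

```python
def best_start(prefs):
    """prefs[p] is the turn (1-based) that the student in seat p+1
    wants. Returns (best 1-based starting seat, students satisfied)."""
    n = len(prefs)
    counts = [0] * n
    for p, want in enumerate(prefs):
        # If the teacher starts at 0-based seat s, seat p is asked as
        # the ((p - s) mod n) + 1-th student, so this student is
        # satisfied exactly when s = (p - (want - 1)) mod n.
        counts[(p - (want - 1)) % n] += 1
    s = max(range(n), key=counts.__getitem__)
    return s + 1, counts[s]
```

For the thread's first example, best_start([3, 3, 1, 5, 5]) returns seat 5 with 2 satisfied students, matching the stated answer. Each student is visited once and each counter update is a table look-up, so the whole thing is O(N).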
Re: Efficient Algorithm ?
Thanks nuzzle ! The O(n) algorithm I used is exactly same as yours. Thanks for your time.
Volume 28, Number 3
May/Jun 2012
The Algebra Problem
How to elicit algebraic thinking in students before eighth grade
The Algebra Problem, continued
It’s Crazy Hair Day at Marshall Elementary School in Boston’s Dorchester neighborhood—which is perfect, because Tufts University researcher Bárbara Brizuela has brought a hat.
In the stovepipe style and made from oaktag paper, the hat is one foot tall. Brizuela then asks, “If I’m five and a half feet tall, how tall will I be with the hat on?” Second-grader Jasmine, smiley
in a pink sweatsuit, answers, “Six and a half feet.” Rather than say, “Right!” Brizuela offers another question: “How do you know?”
Thus begins a math conversation that researchers like Brizuela believe may hold the key to tackling one of our biggest school bugaboos: algebra. As they talk, Jasmine uses words, bar graphs, and a
table to describe how tall each person they discuss will be if they put on the hat. Jasmine creates a rule—“add one foot to the number you already had”—and applies it to an imaginary person 100 feet
Brizuela even throws out a variable. “So, to show someone whose height I don’t know, I will use z feet,” she says, adding a z to Jasmine’s table. “What should I do now?” Jasmine pauses. “This is kind
of hard,” she says. Brizuela, whose pilot study explores mathematical thinking among children in grades K–2, understands. “Would you like to use a different letter?” she asks, erasing the z and
replacing it with a y. Jasmine smiles. She picks up her pencil and easily jots down the rule: y + 1 = z feet.
A Dreaded, Scary Subject
It may seem adorable that young children are stumped if asked to add 1 to z but not if asked to add 1 to y, but to Brizuela, director of the Mathematics, Science, Technology, and Engineering
Education Program in Tufts' education department, it reveals the reasoning capacity of young minds and the need to engage them in algebraic thinking long before it becomes a dreaded and scary subject.
To many, algebra is about the first or last three letters of the alphabet, and it provokes groaning, trash talk (think Forever 21’s “Allergic to Algebra” T-shirt), and heated debate. Should it be
mandated? At what grade? Algebra’s status as a “gatekeeper course” has made it a touchstone on matters of access and equity. As a result, in many places it’s become a graduation requirement.
Back in the early 1980s, one-quarter of high school graduates never even took algebra, says Daniel Chazen, director of the Center for Mathematics Education at the University of Maryland. Today,
educators are pushing students to take algebra even before high school. According to the National Assessment of Educational Progress (NAEP), the number of students taking Algebra I in eighth grade
more than doubled between 1986 and 2011, from 16 to 34 percent. Strikingly, eighth-grade NAEP math test scores have edged up too, with 43 percent scoring advanced or proficient in 2011, compared with
27 percent in 1996.
But amid the good news is a troubling reality: Many kids are failing algebra. In California, where standards call for Algebra I in grade 8, a 2011 EdSource report shows that nearly one-third of those
who took the course—or 80,000 students—scored “below basic” or “far below basic.” In districts across the country, failure rates for Algebra I vary but run as high as 40 or 50 percent, raising
questions about how students are prepared—and how the subject is taught.
Starting Algebra Early
Why is algebra so hard? For many students, math experts say, it is a dramatic leap to go from the concrete world of computation-focused grade school math to the abstract world of algebra, which
requires work with variables and changing quantitative relationships. It is not just the shock of seeing letters where numbers have been but also the type of thinking those letters represent.
“In arithmetic, you are dealing with explicit numbers,” says Hung-Hsi Wu, a professor emeritus of mathematics at the University of California, Berkeley. “Algebra says, ‘I have a number; I don’t know
what it is, but three times it and subtract three is 15.’ You have a number floating out there, and you have to catch it. It is the thinking behind catching the number that baffles students.”
While some argue that children must be developmentally ready to learn algebra—around ages 11–13, when they can grasp abstract thought—Brizuela and others say it’s critical to introduce it earlier.
“Kids need to develop some comfort with these tools,” she says. “Babies are exposed to written and spoken language, and after six years we expect them to become somewhat fluent with that. In math, we
just drop it on them like a bomb.”
Brizuela’s research spans more than a dozen years and seeks to find out if explicitly teaching algebraic thinking, including a comfort with letter variables and the ability to express mathematical
values in multiple forms (Jasmine’s words, table, and bar graph), might be helpful later on.
In a study to be published in October in Recherches en Didactique des Mathématiques, a French math education journal, Brizuela and her colleagues tracked 19 students in Boston Public Schools in
grades 3, 4, and 5 who received weekly algebra lessons plus homework, as compared with a control group, and followed them through middle school. Results showed that those students outperformed their
peers on algebra assessments given in grades 5, 7, and 8 and drawn from NAEP, Massachusetts state tests, and the Trends in International Mathematics and Science Study, or TIMSS.
Building Math Minds
Central to Brizuela’s work is a striking idea: Rather than pushing eighth-grade or high school algebra down to elementary school, she begins with what children already tend to do, such as
generalizing. For example, when children hear the word “hundred,” they know to add two zeros. Brizuela uses that natural ability to lure children into thinking about quantitative relationships that
then become algebraic rules. This exercises their natural mathematical reasoning, which is often pushed aside in favor of getting the “right” answer or learning to memorize or compute (see sidebar
“Laying the Groundwork for Algebra”).
Similarly, Barbara J. Dougherty, Richard G. Miller Chair of Mathematics Education at the University of Missouri, observes that first-graders naturally compare, often to be sure they have the same
amount (of whatever is in question) as somebody else.
“In starting with children at six, rather than starting with numbers, we ask, ‘How do you know if you have more than somebody else or less?’” says Dougherty. She and her colleagues use measurement as
a vehicle for discussing comparisons of, say, the height of a cereal box to the length of a pencil. Then, instead of writing down “the height of the cereal box” and “the length of the pencil,” she
says, “we’ll say, ‘Let b represent the height of the cereal box and l be the length of the pencil.’ It sounds pretty simple, but it is actually pretty powerful.” Dougherty, who has been following a
cohort of students at the University Laboratory School in Honolulu, Hawaii, since 2001, says that by the time the students reach high school, they consistently outperform peers in their understanding
of algebraic concepts like variables and quantitative relationships.
In the Lab School, whose student population reflects the state’s socioeconomic and racial composition, first-grade teacher Maria DaSilva says that rather than presenting the students with, say, a
number line right off, she lets the class puzzle through a problem—sometimes over the course of days—until they realize that having a number line will help them in their work (see sidebar “Algebra in
First Grade?”).
The Teaching Challenge
The drive to improve U.S. math performance among students has focused on two main worries: (1) Are students well enough prepared, and (2) are teachers prepared enough to teach math well?
William Schmidt, professor and codirector of the Education Policy Center at Michigan State University, says the new Common Core standards, likely to be adopted by most states for 2013–2014, “capture the logic of mathematics”—an upgrade from the seemingly unrelated lessons that have made learning math “like reading the phone book.”
But he wonders: Will teachers be able to teach it? In a 2010 study, Breaking the Cycle: An International Comparison of U.S. Mathematics Teacher Preparation, comparing U.S. primary and middle school
teachers with peers in 16 countries, Schmidt and his colleagues found that American teachers had “weak training mathematically” and less math coursework than teachers in high-performing nations. “We
have this new demanding curriculum in the middle grades and teachers who are ill prepared to teach it,” he warns.
Meanwhile, excitement over raised standards has been met with a worry: What about the kids who are struggling now? Math researchers, like James J. Lynn at the University of Illinois at Chicago, with
colleagues in New York and Seattle, are in the third year of a four-year National Science Foundation–funded project to study 17,000 high school students who struggle with algebra. Their approach is
to promote sense-making, which they say has been lacking in many students’ earlier algebra experiences.
Along with work aimed at bolstering students’ sense of how quantities relate—including filling deficits as they go rather than undertaking long periods of “re-teaching”—the project also seeks to
change the mindset around algebra. Instead of viewing algebra as insurmountable, students learn that applying effort and wrestling with problems can grow brain connections and make them smarter and
better at math. “We try to shape their attitudes of themselves as capable learners,” says Lynn. The program is showing some gain, with about half the students scoring “high mastery” after the course
(most students scored “low mastery” prior to the course).
Given such difficulty, one has to wonder: Why even learn algebra?
According to Jon R. Star, associate professor at the Harvard Graduate School of Education, that’s like asking: “Why are they reading Wuthering Heights?” Star says the answer is that—like
literature—algebra tells us something about human nature and understanding. Algebra, he says, “is our students’ first exposure to what mathematics is.” It offers students the sort of critical
thinking about mathematical ideas that simply doesn’t come with the computation skills of early school math. Instead, he argues, we should simply point out that, when we get to algebra, “we are here
to learn some mathematics.” Not computation. Not calculation. But real math.
Freelance education writer and author Laura Pappano is a frequent contributor to the Harvard Education Letter.
Mental Floss
Can You Really Tell How Close Lightning is by Listening for Thunder?
Lightning image via Shutterstock
It sounds a bit like an old wives' tale, but you actually can use the speeds of light and sound to get a rough estimate of how far away a storm is.
Scientists have come up with various devices and methods for determining the distance of lightning, but you can estimate it right in your head with just a little counting and a little math, using
what’s called the "Flash to Bang" method.
Sound travels through air at, well, the speed of sound. Officially, that’s 1,087 feet per second in dry air at 0 degrees Celsius/32 degrees Fahrenheit. Depending on the local temperature and
humidity, that mileage will vary. For a quick calculation in your head, though, the experts at the National Severe Storms Laboratory say you can use 1 mile per five seconds as a good approximation in
most conditions.
The speed of light is just a wee bit faster than sound, 186,282.397 miles per second. It’s fast enough that you see the lightning almost the instant it flashes. When that happens, start counting
until you hear the thunder, which is caused by a sonic shockwave created from air rapidly expanding in the presence of lightning’s extreme heat and pressure. Divide the number of seconds between the
flash of lightning and the bang of thunder by five to account for the sound’s slower speed, and you have a rough idea of how many miles away the lightning struck. If it takes 10 seconds for the
thunder to roll in after the flash, the lightning struck about 2 miles away.
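As a sketch of the whole method in one line of Python (the function name is invented for illustration):

```python
def lightning_distance_miles(seconds_to_thunder):
    # Flash-to-bang: sound covers roughly one mile every five seconds,
    # so divide the delay between the flash and the thunder by five.
    return seconds_to_thunder / 5.0

print(lightning_distance_miles(10))  # 2.0 miles, matching the example above
```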
Of course, getting struck by lightning is not something most of us want to experience, no matter how cool the scars, so if your distance is dwindling while you're out there crunching numbers, seek shelter.
Rates in Geology
Sample problems
Practice calculating rates (and rearranging the rate equation) below using the "rules" that you have just learned. Answers are provided (but try doing them on your own before peeking!)
Calculating rates
Problem 1:
You wake up at 6 am (EARLY!) and the temperature is 55°F. By the time you head off to your picnic lunch at noon, the temperature has risen to 91°F. What is the rate of temperature change in °F per hour?
1. Determine which changing variable is ΔX and which is Δt.
In the above, you need to calculate a change in temperature, so this will be ΔX and the amount of time that has elapsed, Δt.
2. Calculate ΔX and Δt to determine the change in the variable(s).
Lets begin by calculating the change in temperature.
ΔX=91°F - 55°F = 36°F
Next we need to calculate the amount of time that elapsed between getting up and heading off to your picnic. In other words, how many hours between 6 am and noon?
Δt=12 - 6 = 6 hours
3. Calculate the rate using ΔX and Δt.
rate = ΔX / Δt = 36°F / 6 hours = 6°F per hour
4. Check to see what the units on your final number should be before continuing.
In this case, we are asked to calculate a rate in °F per hour. Do we have units of °F and hours? Yes!
5. Evaluate your answer. Does your answer seem reasonable?
6°F per hour is a reasonable number since over a couple of hours the temperature would change about a dozen degrees.
Problem 2.
The Hawaiian hot spot sits below the Pacific plate. As the plate moves over the hot spot, a chain of volcanoes is formed. The Waianae volcano on Oahu is 3.7 million years old and about 375 km from
the current location of the Hawaiian hot spot. Assuming that the hot spot is in a fixed location, how fast (at what rate) is the Pacific plate moving?
1. Determine which changing variable is ΔX and which is Δt.
In the above, you need to calculate a change in location which is distance, so this will be ΔX and the amount of time that has elapsed, Δt.
2. Calculate ΔX and Δt to determine the change in the variable(s).
In this case ΔX and Δt are given - no calculations are necessary. ΔX is given as the distance (375 km) and Δt the time (3.7 million years).
3. Calculate the rate using ΔX and Δt.
rate = ΔX / Δt = 375 km / 3.7 million years ≈ 101 km per million years
4. Check to see what the units on your final number should be before continuing.
In this case, no specific units are requested, so no need to worry about this!
5. Evaluate your answer. Does your answer seem reasonable?
Plates typically move 10-150 km/my, so this seems reasonable.
Problem 3:
The Hawaiian hot spot has produced about 775,000,000 km^3 of magma in the past 70 million years. What is the average rate of magma production per year?
Remember to follow the steps for calculating a rate:
1. Determine which changing variable is ΔX and which is Δt.
In this case the two values are volume for ΔX and, as is typical, time for Δt.
2. Calculate ΔX and Δt to determine the change in the variable(s).
In this case since the volume has gone from 0 km^3 to 775,000,000 km^3, so ΔX = 775,000,000 km^3. Δt = 70,000,000 years because that is the amount of time that has elapsed.
3. Calculate the rate
rate = ΔX / Δt = 775,000,000 km^3 / 70,000,000 years ≈ 11.1 km^3 per year
4. Check to see what the units on your final number should be before continuing.
The question asks for the average rate of magma production per year, so the final units should be km^3/yr, and it is!
5. Evaluate your answer
This is a bit difficult in this problem, since it is hard to know what a reasonable amount of magma production in a year is. However, if you had gotten a very small number (like .001 km^3/yr),
you might realize that there is no way that a giant set of volcanic islands could be made with that amount of magma.
Another way to evaluate your answer is to make a quick estimation. If the volcano erupts 10 km^3 per year (close to the answer you got) for 70 million years, that would be get 700 million km^3 of
magma. This is about what the question says is the amount, so it seems this is a reasonable answer.
Problem 4:
Rivers often form sinuous paths as they flow downstream. The river bends are called meanders and move over time as the river erodes its banks. In 2010, you purchased a house that was 150 meters from
the outside of a meander of the Rio Grande river. Looking at maps from 1955, you find that the meander has moved 230 meters toward you. How fast is the meander migrating?
1. Determine which changing variable is ΔX and which is Δt.
In this case the distance is ΔX and as always the time elapsed is Δt.
2. Calculate ΔX and Δt to determine the change in the variable(s).
The distance (ΔX) is clear - 230 meters - but the time may not be so clear. The river moved this distance between 1955 and 2010, so the time (Δt) is the difference between these, found by
Δt= 2010-1955 = 55 years.
So the river moved 230 meters in 55 years.
3. Calculate the rate
rate = ΔX / Δt = 230 meters / 55 years ≈ 4.18 meters per year
4. Check to see what the units on your final number should be before continuing.
In this case, no specific units are requested, so no need to worry about this!
5. Evaluate your answer
4.18 m per year is about 13 feet per year. While this is a lot of erosion, it is certainly reasonable.
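All four problems above reduce to the same one-line calculation. A small Python sketch (the `rate` function is just an illustrative name) reproduces each answer:

```python
def rate(delta_x, delta_t):
    # rate = (change in quantity) / (elapsed time)
    return delta_x / delta_t

print(rate(36, 6))                    # Problem 1: 6.0 (degrees F per hour)
print(rate(375, 3.7))                 # Problem 2: ~101 (km per million years)
print(rate(775_000_000, 70_000_000))  # Problem 3: ~11.1 (km^3 per year)
print(rate(230, 55))                  # Problem 4: ~4.18 (m per year)
```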
Determining a rate from a graph
Problem 5: Examine the graph of the age and distance of the New England Seamounts. This chain of seamounts is thought to be created by a hotspot that underlies the oceanic plate that they sit on.
As the plate moves, it carries the seamounts with it. What is the rate of movement that they show?
To solve this problem, first pick any two points on the line and then determine the slope of the line, which is the rate.
You can use any two points that you would like. The farther apart they are, however, the less likely you are to make mistakes that will matter in the end. Two points are marked in blue on the
graph to the right. The upper right point is at about 104 million years and 1030 km, and the lower left point is at 81 million years and 0 km.
Slope is rise divided by run. The rise is the difference between the vertical values (the distances), and the run is the difference between the horizontal values (the ages).
The run then is
104 my - 81 my = 23 my
(my is million years)
The rise will be
1030 km - 0 km = 1030 km
Dividing the two (rise divided by run):
1030 km / 23 my = 44.8 km/my
The slope is equal to the rate as long as the horizontal axis is time. So in this case the rate is the slope!
The plate's velocity is 44.8 km/my.
Age and location of the New England Seamounts
Figure modified from Duncan 1984. J. Geophys. Res. 89 (B12):9980-9990 and from http://oceanexplorer.noaa.gov/explorations/03mountains/background/geology/geology.html
I think I've mastered these rate problems! Let me try the quiz!
(If this is not how you feel see the links below for more practice!)
Still need more practice?
There are many web sites and books that walk you through the rates problems, although most will be distance, velocity and time problems. However, the mathematics is identical.
El Cerrito ACT Tutor
Find an El Cerrito ACT Tutor
...I'm good at zeroing in on precisely what's giving you trouble, and will break each problem into pieces you can understand and learn. While a grad student at UC Berkeley, I received the
University teaching award for statistics, and my undergraduate students consistently gave me the top rating in ...
14 Subjects: including ACT Math, statistics, geometry, GRE
...Because English spelling is far less regular than, for example, Spanish, students of English need to not only learn spelling rules, but also need to spend time memorizing and practicing
vocabulary words. Reviewing and learning 5, 10, or 20 words a week, when combined with reading a few minutes e...
42 Subjects: including ACT Math, English, reading, writing
...I have a lot of experience helping students of all levels! In addition to teaching the material, I also like to emphasize study strategies and skills. I can't wait to work together! :)I tutored
a Cal undergrad in introductory Statistics last Spring.
27 Subjects: including ACT Math, chemistry, physics, calculus
...My goal for you isn't to just memorize formulas and regurgitate facts, but to fall in love with the subject the same way I did. I'll bring excitement to the subject that previously made you
fall asleep in class. Hopefully, I'll hear from you soon and we can start tackling those challenging problems together!
19 Subjects: including ACT Math, physics, calculus, writing
...Specifically, I can tutor in all math up to calculus (pre-algebra, algebra, geometry, pre-calculus, trigonometry, calculus, etc.) and science (physics and chemistry, including science
projects). Whatever I am tutoring, I use positive reinforcement to encourage students and let them know when the...
14 Subjects: including ACT Math, chemistry, calculus, physics
Jun 25, 2012, 7:00 PM
Views: 6757
Registered: Sep 15, 2010
Posts: 64
Re: [BetaRock] Theory about forces in a 3-legged cordelette
A large part of my work includes structural design for cable-stayed architectural cladding systems. Perhaps I can shed some light on the mechanics of the problem we're looking at:
For a two-anchor system, you should be able to resolve the forces using basic free-body principles from physics or engineering statics --- IF the segments were perfectly unstretchable cables in a
known geometric configuration. Depending on this configuration (i.e., the angles between the bolts) - the resultants at each anchor bolt would be easily calculable - but already not necessarily
But things are even more complicated because you need to evaluate everything based on the geometry of the system after loading. The slings, webbing, rope, etc. stretch under load (according to a
stress-strain relationship for the material) and that changes this geometry. And, what's more, the longer the sling is, the more it stretches under load.
So, to determine that final geometry under your design load, you would need to know (a) the elastic modulus of the rope or webbing in each segment, (b) its cross-sectional area, (c) the EXACT
unstretched length of each segment, and (d) the exact magnitude of the loading at the masterpoint. It starts to get a little hairy when you see that the geometry, tension, and amount of stretch are
all inter-related.
This is all just for a system with two anchor bolts.
For a three-point system, you're introducing additional angles, additional internal deflection, and more uncertainty (for a human approximating everything in the real world) in the final
[tensioned] geometry.
We still haven't accounted for the fact that even the best "self-equalizing" knots still deal with some amount of internal friction, the error in measurement of sling length, the assumption of linear
behavior in what are typically anisotropic nonlinear materials, error in measurement of applied loads, et cetera et cetera.
It starts to become a fairly difficult and complicated problem, and it's not surprising to me at all that we should see such a large differential in the loads at each anchor point.
(This post was edited by Express on Jun 25, 2012, 7:03 PM)
[Numpy-discussion] new numpy docs, missing function and parse error - dirichlet distribution
joep josef.pktd@gmail....
Thu May 22 10:11:10 CDT 2008
I was just looking around at the new numpy documentation and got a
xhtml parsing error on the page (with Firefox):
The offending line contains
$X pprox prod_{i=1}^{k}{x^{lpha_i-1}_i}$<
in the docstring of the dirichlet distribution
the corresponding line in the source at
.. math:: X \\approx \\prod_{i=1}^{k}{x^{\\alpha_i-1}_i}
(I have no idea, why it seems not to parse \\a correctly).
When looking for this, I found that the Dirichlet distribution is
missing from the new Docstring Wiki, http://sd-2116.dedibox.fr/doc/Docstrings/numpy/random
Then I saw that Dirichlet is also missing in __all__ in
As a consequence numpy.lookfor does not find Dirichlet
>>> numpy.lookfor('dirichlet')
Search results for 'dirichlet'
>>> import numpy.random
>>> dir(numpy.random)
contains dirichlet
>>> numpy.random.__all__
does not contain dirichlet.
To me this seems to be a documentation bug.
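For what it's worth, the function itself works fine even while missing from __all__; a quick sanity check, assuming a numpy build where numpy.random.dirichlet is present:

```python
import numpy as np

# Draw a few samples; every Dirichlet draw is a vector of k non-negative
# components that sum to one.
sample = np.random.dirichlet((1.0, 2.0, 3.0), size=5)
print(sample.shape)                          # (5, 3)
print(np.allclose(sample.sum(axis=1), 1.0))  # True
```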
More information about the Numpy-discussion mailing list
Conversion from Fahrenheit to Celsius problem
01-22-2008 #1
Registered User
Join Date
Jan 2008
Conversion from Fahrenheit to Celsius problem
I'm doing a simple code which converts an entered amount of degrees Fahrenheit to Celsius. I'm running sample test data and there appears to be a converting problem.
//This program will convert a temperature of Fahrenheit to Celsius
#include <iostream>
using namespace std;
//Function Prototypes
int Celsius(int);
int main()
int F, // # of degrees Fahrenheit
C; // # of degrees Celsius
cout <<"This program will convert a temperature of Fahrenheit to Celsius." << endl;
cout <<"Please enter the amount of degrees (integer) Fahrenheit." << endl;
cin >> F;
C = Celsius(F);
cout << F << " degrees Fahrenheit converted to Celsius is " << C << endl;
return 0;
//Definition of function Celsius. *
//This function converts Fahrenheit to Celsius. *
int Celsius(int F)
return ((5 / 9) * (F - 32)); // Formula used to convert Fahrenheit to Celsius
My first test data was 32 and it came out correctly with a conversion of 0 degrees Celsius. When I tried one (1), it gave me an output of zero (0), as with other numbers I have entered. Can you help me identify the problem?
I know some answers will come out with a decimal, but I'm only concerned with "whole" numbers.
Last edited by -Syntax; 01-22-2008 at 12:46 PM.
5/9 is always 0
5.0/9.0 is better.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
Yes, probably.
The thing with math with integers only is that the results can only be integers.
I might be wrong.
Thank you, anon. You sure know how to recognize different types of trees from quite a long way away.
Quoted more than 1000 times (I hope).
If you're only worried about outputting whole numbers, it's ok to leave the result as an int, as long as all of the calculations are done as doubles. Changing to 5.0/9.0 accomplishes that
already, so I don't think it's necessary to change the function to use doubles unless you want the output to include decimals.
If you write (F - 32) * 5 / 9, that will give the correct result, and avoid unnecessary conversion to and from double. The key is that multiplication is done first.
The only issue that arises is that both my suggestion, and converting to and from double, will always round the result down. This can be adjusted by adding the following modifier: (int)( (F - 32) * 5 % 9 > 4 ).
Alternatively, you can have the function take and return doubles, and use 32.0, 5.0, and 9.0 as constants
It is too clear and so it is hard to see.
A dunce once searched for fire with a lighted lantern.
Had he known what fire was,
He could have cooked his rice much sooner.
World of Warcraft - Beating the RNG Boss
By Cal Henderson, March 4th 2010.
One of the hardest bosses to fight is the RNG, or Random Number Generator. While the tactics are fairly straightforward (try, try, try again), defeating this boss can take a lot of patience. Lots of players don't understand quite how this boss fight works, so I'll attempt to explain.
A few months back, I created a World of Warcraft addon called BunnyHunter, to help you track your progress when farming for rare pets.
The rarest pet of all is the Hyacinth Macaw which drops from Bloodsail Pirates off the south coast of Stranglethorn Vale. Each pirate has a 1 in 10,000 chance of dropping this rare pet. I built the
addon especially to help me in my quest to find the macaw.
Quite a few people have misunderstood how BunnyHunter calculates its percentage score. For instance, a BunnyHunter user posted this message to one of my favorite WoW blogs:
1. The app lets you know that there is a 1/1000 chance of an Azure Whelp dropping per kill when you are farming specific mobs in Aszhara.
2. The app lets you track "progress" toward getting a better chance of your drop because it lets you know how many whelp carriers you have killed. If you kill 150 mobs, it lets you know you have
approximately a 15% chance "so far" of getting the whelp.
3. It has a tracking bar that shows you your progress - so after 1000 kills, you have a 100% chance so far of getting your drop.
This shows a real fallacy in the RNG analysis which is that successive kills actually increase your chance of getting a rare drop. Despite the multiple attempts, you have the same chance of
getting the drop on your first kill as you do on your 999th kill which is 1/1000.
Of course, BunnyHunter doesn't work this way at all. This is an all-too-common misconception, however. The thing which governs this is called Probabilistic Independence - the fact that whether one
mob dropped the loot or not, this has no bearing on whether a second mob will drop the loot. By extension, having looted 1000 consecutive mobs which did not drop the loot has no effect on the next
mob you loot. If the drop chance is 1 in 100, there will still be a 1 in 100 chance that the next mob you loot will drop the item.
But if you use BunnyHunter and loot 1000 mobs that can drop the Azure Whelp, it won't say 100%; it'll say 63.2%. The reason we can come up with any number at all is that we can derive the probability that a piece of loot will drop at least once in a given sequence of lootings.
Say that we have 3 mobs and each has a 50% chance to drop the thing we're looking for. There's an 87.5% chance we'll get the item (7/8ths) after looting all 3. To see why, we'll construct all of the
possible outcomes, which are all equally likely (we're cheating by using a 50% drop chance, to make the example easier):
first mob / second mob / third mob
1) no drop / no drop / no drop
2) no drop / no drop / drop
3) no drop / drop / no drop
4) no drop / drop / drop
5) drop / no drop / no drop
6) drop / no drop / drop
7) drop / drop / no drop
8) drop / drop / drop
In only one of these 8 possible outcomes did we fail to get the item, so the chance we'll get it is 7/8 = 87.5%. The trick here is that we care about the chance to have gotten the drop one or more
times, from 3 mobs. Another way to look at it is that the chance of getting the drop is 100% minus the chance of none of the 3 mobs dropping it.
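This enumeration can be brute-forced in a few lines (a Python sketch rather than the author's PHP, purely illustrative):

```python
from itertools import product

# Enumerate all 2**3 equally likely outcomes for three 50% mobs and
# count how many contain at least one drop.
outcomes = list(product([False, True], repeat=3))
hits = sum(1 for outcome in outcomes if any(outcome))
print(hits, "of", len(outcomes))  # 7 of 8, i.e. 87.5%
```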
This last part is the important bit. The chance of having gotten the Azure Whelp after 1000 loots is the same as one minus the chance to have not seen it. We can calculate the chance to have not
gotten it, 1000 times in a row, by taking the chance to not get it once (1 - 0.001 = 0.999) and multiplying it by itself 1000 times (for the 1000 mobs we looted). This gives us a 36.8% chance of not
seeing it drop once, so the inverse is that there's a 63.2% chance (100 - 36.8) that we will have seen it drop. Easy!
The generalized formula is:
chance to have dropped at least once = 1 - ((1 - per-loot drop chance) ^ loots)
That little ^ symbol is the power (exponentiation) operation.
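Under the article's assumptions (independent loots, fixed per-loot chance), the formula is a one-liner. A quick Python sketch reproduces both the 63.2% figure and the 7/8 coin-style example from earlier:

```python
def chance_of_at_least_one_drop(per_loot_chance, loots):
    """Chance of seeing the drop at least once across `loots` independent loots."""
    return 1 - (1 - per_loot_chance) ** loots

# 1-in-1000 drop, 1000 loots: roughly 63.2%, not 100%
print(round(chance_of_at_least_one_drop(0.001, 1000) * 100, 1))

# Three mobs at 50% each: 7/8 = 87.5%
print(chance_of_at_least_one_drop(0.5, 3))
```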
BunnyHunter uses this formula to give you a percentage score of how likely you are to have gotten the drop so far. It won't tell you when that drop will happen - it could be the next mob or 1000 mobs
from now - but it will tell you how (un)lucky you have been.
A quick diversion
Not all drop rates in World of Warcraft actually exhibit probabilistic independence. Since WotLK, quest item drops actually use a progressive drop rate. This is to avoid the situation where you grind
and grind but are the unlucky person who kills 50 mobs and doesn't get a single quest item.
Blizzard probably did this for two reasons. Firstly, unlike vanity drops (such as the mythical macaw), you get stuck if you can't get the item. This is bad game design, since you need the items to
continue the quest. Secondly, and probably more importantly, until you start getting drops you are not sure you're doing the right thing. Having a quest item only drop after 50 loots gives
you 49 chances to think you're doing the wrong thing and give up.
But we're farming pets here, so it doesn't apply to us!
World of Dropcraft
The problem with probability calculations is that not only are they confusing, but people often use the wrong calculation and then stand behind their results. Rather than try and convince you with
math, I'll use good old brute force.
I created a simple PHP script which runs a number of 'drop trials', which simulate looting mobs in WoW. The script 'loots' a mob by creating a random number between 1 and the inverted drop chance -
for the Hyacinth Macaw, this is 1 in 10,000, so the script will generate a random number between 1 and 10,000 (inclusive). If the random number is 10,000, then we count it as a drop - the mob dropped
our parrot. If the number is anything else, we count it as the mob dropping something else (or nothing at all). In this case, we roll another number and start over. We keep rolling numbers until we
get a drop, and then we note down how many times we rolled a random number. This is how many loots it took to get our drop. So far, simple.
Because computers are nice and fast, we'll take this trial and perform it 10,000 times, noting down each time how many loots it took before we saw a drop. This will give us a big picture of how many
loots it generally takes to get our drop. So let's run the script!
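The author's script is PHP (its source is linked at the end of the article); a minimal Python re-implementation of the same trial logic might look like this, with function names of my own invention (the demo run uses smaller parameters than the article's 1/10,000 and 10,000 trials, purely for speed):

```python
import random
import statistics

def loots_until_drop(inverse_chance):
    """Roll 1..inverse_chance until we hit the top value; return how many rolls it took."""
    loots = 0
    while True:
        loots += 1
        if random.randint(1, inverse_chance) == inverse_chance:
            return loots

def run_trials(inverse_chance, trials):
    """Repeat the drop trial and summarize, in the same format as the article's output."""
    results = [loots_until_drop(inverse_chance) for _ in range(trials)]
    print("Mean loots per drop:", statistics.mean(results))
    print("Median loots per drop:", statistics.median(results))
    print("Luckiest loots:", min(results))
    print("Unluckiest loots:", max(results))
    return results

# The article runs run_trials(10_000, 10_000); smaller numbers here keep this quick.
run_trials(1000, 1000)
```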
World of dropcraft
Drop chance is 1/10000, running 10000 trials
Mean loots per drop: 9958.6041
Median loots per drop: 6888
Luckiest loots: 2
Unluckiest loots: 106335
After running 10,000 trials (it takes about 30 seconds on my PC), we're left with a list of 10,000 numbers - each of them how many loots it took on that trial to find our item. So, some analysis!
We can take the biggest and the smallest numbers and call them the luckiest and the unluckiest. Of our 10,000 trials (the equivalent of farming 10,000 Hyacinth Macaws in the game, and noting down how
many loots it took you each time), the quickest we ever saw the item drop was after 2 loots. Wow! I'd be pretty damn happy if that happened to me in game. I'd be very very lucky. At the other end of
the scale, one of the trials took 106,335 loots (over one hundred thousand!) before they got the drop. That would really suck. At my farming speed, that would take almost 2 solid weeks of farming.
We can see the average (mean) loots taken by adding together all of the numbers and dividing by the number of trials. When we do this, we get 9958.6, which is pretty close to 10,000. We would
expect this, since the drop rate is 1 in 10,000. If we ran more trials, we would expect this number to get closer to 10,000 (this is called convergence).
We can also calculate the median, which is the number of loots at which 50% of players would have gotten the drop. This is different to the mean, but can be a little confusing, so I'll explain with
an example.
Say you had a drop that was very common - most people got it on the first loot. If 9 players got it after the first loot, but 1 poor player took 10 loots, the average would be 1.9
([1+1+1+1+1+1+1+1+1+10]/10). However, we can easily see that more than half of the players would have gotten the drop after the first loot. That's because the median is 1. We calculate the median by
sorting the list into order and picking the middle value. The mode is another kind of similar average, but we'll ignore that here because it gets hard to calculate once we use big numbers and can be
an odd measure for data like this.
So in this trial, the median is 6888. Why do 50% of trials get the drop within 6888 loots, while the average is close to 10,000? The answer lies in the way drops work. The fewest number of loots you can get a
drop in is 1; you have to loot at least one corpse to get a drop. But the highest number of loots is infinite. You could loot one million pirates and never get the parrot. It's unlikely, but it's
possible. So while there's a chance it will take one million loots, which throws off the mean, very few players will take this long, so the median stays lower.
Calculating the median
So what is the median good for? Well, it can tell us at what point we go from being lucky to unlucky. Someone who is lucky gets the drop in fewer loots than the majority of other players. Someone who
is unlucky gets the drop after more loots. While drop probabilities can't tell us when we will get a drop, they can tell us how unlucky we have been to not get it so far, or how lucky we would have
to be to get it by a certain number of loots.
So how do we calculate the median? Let's run another 10,000 trials against my favorite pet, the Hyacinth Macaw.
World of dropcraft
Drop chance is 1/10000, running 10000 trials
Mean loots per drop: 10082.6048
Median loots per drop: 6996
Luckiest loots: 2
Unluckiest loots: 91637
So after 10,000 trial sequences, it took an average of 10,082 loots to get the item. If we ran a lot more trials, this number would approach 10,000 (since it's a 1 in 10,000 drop chance).
The median, however, is around 7000, as before. This is the number of loots at which 50% of trials had found the drop. If you took less than 7k, you were lucky. If you took more than 7k, you were
unlucky. But where does this number come from?
Take our formula for drop chance in a sequence of n loots:
dropped_at_least_once_chance = 1 - ((1 - drop_chance) ^ loots)
We can rebalance this formula using logarithms into this:
loots = log(1 - dropped_at_least_once_chance) / log(1 - drop_chance)
If we plug in our numbers, 50% chance to have dropped at least once and a 1/10,000 drop chance, we get 6931, which is pretty close to the 6888 and 6996 results we saw in our tests.
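The rearrangement follows from taking logarithms of both sides (any log base works, since the bases cancel in the ratio). A short Python sketch of the rearranged formula reproduces the numbers:

```python
import math

def loots_for_chance(target_chance, drop_chance):
    """Loots needed for `target_chance` probability of having seen at least one drop."""
    return math.log(1 - target_chance) / math.log(1 - drop_chance)

# 50% chance at a 1/10,000 drop rate: about 6931 loots
print(round(loots_for_chance(0.5, 1 / 10_000)))
```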
We can confirm this by using higher drop rates, since the trials will converge faster. Using the formula, we can make the following predictions:
1/100 -> 50% after 69 loots
1/1000 -> 50% after 692 loots
So let's run the trials again with these drop rates and see what we get...
World of dropcraft
Drop chance is 1/1000, running 10000 trials
Mean loots per drop: 994.0178
Median loots per drop: 689
Luckiest loots: 1
Unluckiest loots: 11067
World of dropcraft
Drop chance is 1/100, running 10000 trials
Mean loots per drop: 100.116
Median loots per drop: 70
Luckiest loots: 1
Unluckiest loots: 915
Yup, that pretty much seals it. If you take fewer than 6931 loots to get the Hyacinth Macaw, you're lucky. If you take more than 6931 loots, you're unlucky. If you take exactly 6931 loots, well,
you're Mr Average.
An Odd Side-effect
So, earlier I said that the mode is an odd measure of 'averageness' for drop chances, but it can be used to point out something pretty strange. We can use some simple math to calculate the
probability that a drop will happen on a given loot. We'll generalize a little here - let's say that p is the chance to drop the loot we want.
drops on 1st loot = p
drops on 2nd loot = p * (1-p)
drops on 3rd loot = p * (1-p) * (1-p)
drops on 4th loot = p * (1-p) * (1-p) * (1-p)
A strange thing happens when we plug in the numbers. Let's use the good old macaw's 1/10000 drop rate.
drops on 1st loot = 0.0001 = 0.00010000
drops on 2nd loot = 0.0001 * 0.9999 = 0.00009999
drops on 3rd loot = 0.0001 * 0.9999 * 0.9999 = 0.00009998
drops on 4th loot = 0.0001 * 0.9999 * 0.9999 * 0.9999 = 0.00009997
Yes, that's right - the chance goes down over time. Remember, this is not the chance that we will have gotten the drop by now (the cumulative chance), but the chance we'll get it exactly now. This
means that the loot in which you are the most likely to see a drop, regardless of the drop chance, is always the first loot. More people will get the macaw on their first loot than on their second.
The mode is always 1. Yeah, that's weird.
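The per-loot numbers above are the probability mass of a geometric distribution; a tiny sketch confirms that the chance of the first drop landing on exactly the n-th loot strictly decreases, so the mode is always the first loot:

```python
def drops_on_exactly_nth_loot(p, n):
    """Probability the first drop happens on exactly the n-th loot (geometric pmf)."""
    return p * (1 - p) ** (n - 1)

p = 1 / 10_000
probs = [drops_on_exactly_nth_loot(p, n) for n in range(1, 5)]
print(probs)

# Each value is slightly smaller than the one before it.
assert probs == sorted(probs, reverse=True)
```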
The loot count follows a geometric distribution; the cumulative chance above is one minus the binomial probability of zero successes (k = 0).
So we know that 50% of players will have gotten the Hyacinth Macaw after looting 6931 mobs, and the average number of loots is 10,000.
BunnyHunter tells me how long I've spent farming a particular mob. It does this by noting down each time I loot the relevant kind of mob. If the time between loots was less than a limit (5 minutes),
then the time between them is added to my total time. If the time between them was greater than this limit, my total is left as-is. This means that it measures the time spent running between mobs and
fighting, but not the 10 minute break I took to get coffee or check the auction house. There's a little bit of underestimation, since it doesn't include the time spent to find and kill the first mob
in each "session", but so long as you farm a few mobs at a time, and not one every now and then, it's fairly accurate.
After I killed my 3600th pirate, I saw that my timer had just ticked past 11 hours. Wow. I'm wasting my life. However, this also tells me that I loot roughly 327 mobs for every hour I spend farming (3600
/ 11). I can use this to calculate how long it should take me on average to get the macaw and after how much farming I'll start to be unlucky for not having seen it.
3600 loots / 11hrs = 327 loots/hr
median 6931 loots -> 21.2 hrs
mean 10,000 loots -> 30.6 hrs
Ok, so I probably have quite a bit of farming ahead of me.
I might be a bit of a math nerd, but I make mistakes too. If you spot any glaring mistakes or omissions, drop me an email and teach me: cal [at] iamcal.com
BunnyHunter can be downloaded here, or installed through the Curse Client.
The source code for World of Dropcraft can be found here.
I play a Night Elf Hunter called Bees, who currently has a collection of 118 vanity pets. I am also the creator of Hunter Loot.
Copyright © 2010 Cal Henderson.
The text of this article is all rights reserved. No part of these publications shall be reproduced, stored in a retrieval system, or transmitted by any means - electronic, mechanical, photocopying,
recording or otherwise - without written permission from the publisher, except for the inclusion of brief quotations in a review or academic work.
All source code in this article is licensed under a Creative Commons Attribution-ShareAlike 3.0 License. That means you can copy it and use it (even commercially), but you can't sell it and you must
use attribution.
17 people have commented
Well, you've missed some cases in your first little example, the two that begin drop / no drop. (2^3 is 8, not 6.) Rest looks okay.
Wow, not sure how I missed that. I'm blind to it after reading it so many times. Thanks :)
Silly Cal. If it can be equipped, it's hunter loot.
So, essentially it's like asking: what's the probability of getting "heads" on the next coin flip in a series, versus the probability of generating a "heads" after 20 coin flips, then 21, then 22,
etc? And BunnyHunter continually recalculates this probability with each loot.
Very impressive stuff, I'd never considered game drop rates like this (if I did indeed grasp it haha). Thanks for this little brain stretch, I don't even play WoW I just saw this on Reddit
If you could sell it for 30k you would have made 2500g/hour (which is pretty darn high) non-auction house arbitrage methods max out around 500g/hour from what I've seen. If it took you the mean time,
every time, you'd make 1000g/hour. If you sold your gold you could earn a nice salary of $4-$10 / hour!
But farming is fun...right?
Jason: You could do that in theory, yes. However, if you do it too much, the price will come down. Demand isn't *that* high at the 30k price point. If you don't do it too much, then you face the risk
of hitting the long tail too often and wasting your time.
If you're pet farming for money, the raptor in Northrend probably makes more sense because you get a *lot* more gold from the farming aspect, especially if you have skinning. And since each cycle is
shorter, you should converge to the average a lot quicker.
Geometric probability! If you're bored and have a TI-84 or some similar graphing calculator you can do this as well.
2nd Vars (Distribution), GeometCDF, and then enter in P(probability), X(number of trials) and hit enter
Ex. The 1/10000 epic companion pet, say you kill 500 pirates.
GeometCDF(.0001,500) = .0488 or 4.88%
You can use geometric models to find the probability of something occurring on an exact trial. You can add the probabilities of geometric sequences to find the probability of something occurring either
on or before a certain trial.
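For readers without a graphing calculator, the same cumulative figure drops out of the article's formula directly (a sketch; a library routine such as `scipy.stats.geom.cdf` should agree with it):

```python
# 1-in-10,000 drop, 500 loots: cumulative chance of at least one drop so far.
p, loots = 1 / 10_000, 500
cdf = 1 - (1 - p) ** loots
print(round(cdf * 100, 2))  # ~4.88, matching GeometCDF(.0001, 500)
```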
So the best way is to get someone who is lucky to farm for you.
Or failing that, someone who is gullible. ;)
Good article - just a note to readers: Look up something called Gambler's Fallacy and the Perch strategy in Roulette. In short, trying to assign a current trial probability using this doesn't work.
(current trials will always have 1/10000 probability for the macaw, irrespective of the number of trials before that. Think of it this way. In three tosses, HHH is as probable as HHT. So you can't
really bet after the first two H that you will get a tail - because the first two instances have already happened.)
IOW, all that is predicted here is how lucky/unlucky you are - it doesn't influence the outcome of a given trial.
Subu, the author devoted quite a bit of time to explaining what you just did — he called it probabilistic independence.
"What's predicted here" is not how likely you are to get certain loot on the next drop, but about how much time you'll need to farm before you find the loot you want. The author was very clear about
Interesting article, I enjoyed. Gaming + math fascinates. Thanks!
Now that cata is out and all will u be updating the addon?
It was updated for cata in January - you'll see the Fox Kit is now tracked.
Nice explanation and add-on. Just a quick question: where did you get the drop rate for the Kit? It looks closer to 1/1000 (0.09%) based on the wowhead client.
Besides that, thanks for creating this lovely little add on that shows just how much of my life I am wasting trying to get a stupid little fox as a virtual pet :)
I commented on Curse under BH, but I'm not sure if you've seen that. I'm wondering how would I add a new "pet"? It's not really a pet, it's an enchanting formula, Formula: Enchant Weapon - Crusader
(http://www.wowhead.com/item=16252) and it only drops from Scarlet Archmages (http://www.wowhead.com/npc=9451) and only about 1 in 200 loots. It would seem pretty easy to extend this Addon to do
this, if only I had a little more of an idea on just how to do so! Thanks!
Are you going to include mounts from vortex pinnacle, stonecore, tempest keep, u. pinnacle ???
Oops, I see Ashes of Alar and Blue Proto-Drake ... so just missing Vortex Pinnacle and Stonecore and Throne of the Four Winds and Alysrazor and the 3 in Dragon Soul
@ An Odd Side-effect
One thing that is important to note here, is that this shows the chance that you get your FIRST drop is higher on the first kill than on any other. Independently, you are as likely to get a drop on
your millionth kill as on your first kill. It could just be your 400th bird :)
(Wanted to post this originally but thought of something better to write)
It's not that odd if you think of it with a crazy example. Let's compare the chance you get it on the first loot vs. the chance that you get your first drop on the millionth loot. To see it on the
millionth, you must first fail 999,999 times in a row, so it is easy to see why the first loot is the more likely of the two.
Triangulation using sphere intersects
"Kus" wrote in message <k0s7dj$6b0$1@newscl01ah.mathworks.com>...
> Roger,
> Your explanation has helped me a lot. Thank you so much!
- - - - - - -
I'm afraid I went about demonstrating those three equations the hard way, Kus. It's not necessary to convert u1 to that second form.
The three equations in question are:
dot(u1,cross(p21,p31)) = 0
dot(u1,p21) = a
dot(u1,p31) = b
where a = (sum(p21.^2)+r1^2-r2^2)/2 and
b = (sum(p31.^2)+r1^2-r3^2)/2
and where u1 can then be expressed in terms of a and b as
u1 = cross(a*p31-b*p21,cross(p21,p31)) / dot(cross(p21,p31),cross(p21,p31))
Applying the "permutation" identity of vector analysis for scalar triple products,
dot(cross(A,B),C) = dot(cross(B,C),A) = dot(cross(C,A),B) ,
to the numerator of the first equation, the simpler procedure would be
dot(cross(a*p31-b*p21,cross(p21,p31)),cross(p21,p31)) =
dot(cross(cross(p21,p31),cross(p21,p31)),a*p31-b*p21) =
dot(0,a*p31-b*p21) = 0
which implies the first equation.
The numerator of dot(u1,p21) in the second equation can be expressed
dot(cross(a*p31-b*p21,cross(p21,p31)),p21) =
dot(cross(p21,a*p31-b*p21),cross(p21,p31)) =
dot(a*cross(p21,p31)-b*cross(p21,p21),cross(p21,p31) ) =
dot(a*cross(p21,p31)-0,cross(p21,p31)) =
a*dot(cross(p21,p31),cross(p21,p31))
When this is divided by the denominator, the two dot products cancel leaving just 'a' and we get the second equation
dot(u1,p21) = a.
A similar reasoning will demonstrate the third equation, dot(u1,p31) = b.
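The derivation above is in MATLAB notation; as a numeric sanity check, a pure-Python sketch with arbitrary example values for p21, p31, a, and b (not taken from the original thread) confirms all three equations hold for the expression for u1:

```python
def cross(u, v):
    """3-vector cross product."""
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

p21, p31 = [1.0, 2.0, 0.5], [-0.3, 1.0, 2.0]  # arbitrary non-parallel vectors
a, b = 0.7, -1.2                               # stand-ins for the r-dependent scalars

n = cross(p21, p31)
u1 = [c / dot(n, n) for c in cross([a*q - b*p for q, p in zip(p31, p21)], n)]

assert abs(dot(u1, n)) < 1e-12        # dot(u1, cross(p21,p31)) = 0
assert abs(dot(u1, p21) - a) < 1e-12  # dot(u1, p21) = a
assert abs(dot(u1, p31) - b) < 1e-12  # dot(u1, p31) = b
print("all three equations hold")
```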
Roger Stafford
Help understanding filtered probability spaces
March 4th 2011, 05:33 AM #1
I am currently studying a course in financial economics for a professional actuarial qualification and I'm having trouble with some of the probability theory.
I'm having trouble understanding probability spaces and filtrations. Can anyone help? I figure that this level of mathematical theory won't appear on the exam but I'd feel more comfortable if I
understood it.
From what I've read, a probability space is a triple (S, F, P)
□ S is the space of all possible outcomes
□ F is a collection of subsets of S, a sigma-algebra (i.e. closed under complement in S, and under countable (possibly infinite) unions and hence under intersection)
□ P is a measure of the elements of F such that P:F -> [0,1] on the reals
For a discrete case, each s in S can be thought of as an event, a single outcome of running through an experiment or observing a share price move. Each element in F is a subset of S, a collection
of events (possibly satisfying some condition, like every outcome in which the price increases by a certain amount). The probability measure P assigns a value between 0 and 1 to each element of
F. Both S and the null-set are elements of any sigma-algebra over S, and have probabilities P(S) = 1 and P(null-set) = 0. Intuitively, the probability of anything at all happening is 1 and the probability
of nothing happening is 0.
I'm comfortable with everything above (though maybe I just think I understand it). My trouble is with filtrations.
A filtration {F_t}t>=0 is a collection of ordered sub-sigma algebras such that F_s is a subset of (or equal to) F_t if s <= t
□ Does this mean that each F_t is also a subset of F? Hence that each F_t is also a sigma-algebra on S?
If t is thought of as the time, then each F_t is the history of the process up to t... This I don't get at all.
I'll use an example of a three-step binomial tree to illustrate my problem (this is exactly equivalent to tossing a coin three times or a one dimensional random walk).
□ At each step, a value can randomly move up (u) or down (d).
□ Thus, the state space S = {uuu, uud, udu, udd, duu, dud, ddu, ddd} - all possible outcomes of three steps.
□ F could then be a collection of subsets of S. I think that F in a discrete state space is taken to be the power set of S: the set of all subsets of S, but I'm not sure.
How can the filtration {F_t} be understood as ths "history" of the process? My idea of what this means is outlined below, but even as I type it I don't think it makes sense.
Is F_2 a sigma algebra over a different state space, say S_2 = {uu, ud, du, dd}? In this case the state space after three steps would have to be reconstructed to include the elements of S_2 (and,
by extension, S_1) in order to allow F_2 to be a sub-sigma algebra of F (defined over S).
Thus, rewrite S = {u, d, uu, ud, du, dd, uuu, uud, udu, udd, duu, dud, ddu, ddd}
and construct F = pow(S).
Say the first step is up, and the second set is down. So do we construct F_2 as the smallest sigma-algebra over S which contains subset {ud}? This doesn't seem to make sense as such a collection
would be (S, null, {ud}, S/{ud}), where S/{ud} is the complement.
I think that I'm rambling now, so I'll stop. Can anybody explain this to me, or tell me if I'm heading in completely the wrong direction?
Any help would be appreciated.
Many thanks
Last edited by tensorproduct; March 4th 2011 at 06:06 AM. Reason: inconsistent notation
Yes, $\mathcal{F}_t$ are increasing subsets of $\mathcal{F}$ such that each of them is a sigma algebra.
If t is thought of as the time, then each F_t is the history of the process up to t... This I don't get at all.
This is not true in general. This is the case when you take $\mathcal{F}_t:=\sigma(X_s:0\leq s\leq t)$ which is exactly the information of the paths up to time t (in case the notion is not clear,
it is the smallest sigma algebra generated by $X_s^{-1}(B)$ where B is a Borel set).
You are getting confused between the value of the process and the state space. To define the process you want, you need to consider $S=\{f: f:\{1,2,3\}\rightarrow \{u,d\}\}$.
I recommend reading Rogers & Williams if you need a reference for this stuff.
Hi Focus, thanks for the quick response.
You are definitely right in saying that I'm confused. I don't really know how to interpret that expression.
Is $f:\{1,2,3\}\rightarrow \{u,d\}$ a defined function?
Is there any way of listing explicitly the elements contained in $S$? How does it differ from a set of all possible outcomes?
Is that this Rogers and Williams?
I mean the set of functions that map 1,2,3 to u,d.
This is essentially all the things your process could be. This set will be the same as your set S, with one added bonus: you can define the process X_n to be $X_n(f)=f(n)$.
A better example would be a simple random walk. Think about the space of functions $f:\{0,1,2\}\rightarrow \mathbb{Z}$ and X_n defined as before. What is F_1? Well, X_1 is either 1 or -1, so $\mathcal{F}_1=\{\{f:f(0)=0, f(1)=1\},\{f: f(0)=0, f(1)=-1\},\{f:f(0)=0, f(1)= \pm 1\}\}$.
Yes, except you need volume 1 (not 2).
Okay, but the elements of $S$ will be functions analogous to the elements I listed before: ${uuu, uud, ...}$
Where, for example, $uuu$ corresponds to a set of functions $\{f:f(1)=u,f(2)=u,f(3)=u\}$
Right? With $\mathcal{F}$ an algebra defined over these.
So, $\mathcal{F}_1$ would be the subset of $\mathcal{F}$ for which the outcome of the first move is known:
$\mathcal{F}_1=\{\{f:f(1)=u\},\{f:f(1)=d\},\{f:f(1) = u\text{ or }d\}\}$
Alternatively, in my previous notation:
$\mathcal{F}_1=\{\{uuu,uud,udu,udd\},\{duu,dud,ddu, ddd\},\emptyset,S\}$
(With the null-set there in order to make this an algebra.)
$\mathcal{F}_2=\{\{f:f(1)=u,f(2)=u\},\{f:f(1)=u,f(2 )=d\},...\}$
I'm not sure how $\mathcal{F}_2$ has "more" information than $\mathcal{F}_1$...
Yes, except you need volume 1 (not 2).
Well, I've tracked that down. I may need to refresh a lot of the basic analysis in my head before i get to the meat of it.
The sigma algebra F_2 strictly contains F_1. This is why you have more information. The bigger the sigma algebra, the more questions you can ask. Think of a sigma algebra as the set of questions
you can ask.
For example let $\Omega=\{1,2,3,4,5,6\}$ (roll of a dice) and let F be the discrete sigma algebra and $\mathcal{G}:=\{\emptyset,\{1,3,5\},\{2,4,6\},\Omega\}$. Now which sigma algebra gives you
more information? With G, you can only know if you rolled an even or an odd number, whereas with F you know exactly which number you rolled.
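Representing events as frozensets makes the dice example concrete. This sketch (illustrative only) checks that G is closed under complement and pairwise union, which suffices in the finite case, and that the question "did I roll a 1?" is answerable with the power set but not with G:

```python
from itertools import combinations

omega = frozenset({1, 2, 3, 4, 5, 6})
G = {frozenset(), frozenset({1, 3, 5}), frozenset({2, 4, 6}), omega}

# Closure under complement and pairwise union (enough in the finite case).
assert all(omega - A in G for A in G)
assert all(A | B in G for A, B in combinations(G, 2))

# With G you can only ask "odd or even?"; the event {1} is not G-measurable.
assert frozenset({1}) not in G
print("G is a sigma algebra, but coarser than the power set")
```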
Aha, now I get it - or at least I think I do. I'm still a long way from understanding the fully continuous case, but this is a good start.
Thanks, Focus, you've been a great help.
Thanks Focus! I, too, have been struggling with F algebras, you just confirmed my intuitive understanding.
Interestingly, these concepts remind me of 'information sets' in game theory (extensive games with imperfect information), where the player sometimes knows and sometimes does not know his exact
position in the game. The more 'partitioned' his information sets are, the better he is informed of his place in the game (and the consequences of his future moves). I wonder if you know what I am
talking about, and if there are similarities with the above sigma algebra concepts.
Surface Logic
Architectural Investigations into Equation-Based Surface Geometries
Building on the preexisting deployment of equation-based surface geometries in architecture, surface logic explores the dialogue between twentieth-century pioneers of reinforced concrete and the
contemporary possibilities made accessible by the instrumentation of computation. Computational modeling of equation-based surfaces opens designers to unprecedented access and design sensibilities
driven by parametric variation, differential topological relationship, fabrication techniques, material analysis, and physical performance.
Surface Logic
Architecture has long been dominated by orthogonal Cartesian principles of design preferring two-dimensional planning and composition. Traditionally, three-dimensional surface principles, such as
domes and vaults, were implemented at positions predetermined by planimetrics. Although it is possible to produce complex three-dimensional space from such principles, the guiding parameters were
usually generated by orthographic projections: plans, sections, and elevations. Advancements in computation such as calculus-based non-uniform rational B-spline (NURBS) surfaces and the accessibility
of three-dimensional modeling interfaces have liberated architects from two-dimensional orthogonal logics. Surface logic attempts to describe a new way of thinking for architects guided by the
principles inherent to working with equation-based surfaces.
Surface Logic Precedent
It is important to first preface this argument with the fact that surface logics are not entirely new to architecture. One can trace the roots from complex three-dimensional principles of
double-helix stairs to extreme vaulting of high Gothic cathedrals. Even though these elements are extremely complex in themselves, it was not until the work of Antoni Gaudi that surface logic truly
manifested itself three-dimensionally at all levels of architectural design. Gaudi’s vaults at the Guell crypt marked the first use of the hyperbolic paraboloid [1]. Because this surface is ruled,
generated by families of straight lines, it naturally integrates with the linear brickwork of the masonry construction of the crypt. Unlike traditional vaulting, the logic of the surfaces was the
primary guiding principle of architectural space. The plan of the crypt can only be read as the result of such principles, not vice versa. Gaudi continued his investigation of equation-based surfaces
by exploring the principles of the catenary curve. The word “catenary” is derived from the Latin word for “chain”; it is the curve a hanging flexible wire or chain assumes when supported at its ends
and acted upon by a uniform gravitational force [2]. In order to access these principles, Gaudi constructed stereostatic models using weighted chains. A hanging chain acts purely in tension;
inverted, the catenary yields a form that acts purely in compression, a logic perfectly suited to load-bearing structures. These surface logics not only provided a structural solution to the Sagrada Familia, but continue to
register at every scale of the design even to surface articulation and composition.
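The catenary has a simple closed form: a chain with horizontal tension H and weight w per unit length hangs along y = a·cosh(x/a), with a = H/w. The short Python sketch below (illustrative only; Gaudi of course worked with physical weighted-chain models, not code) makes the inverted-catenary idea concrete:

```python
import math

def catenary_y(x, a):
    """Height of a hanging chain at horizontal position x.

    a = H / w is the ratio of horizontal tension to weight per unit
    length; a larger a gives a flatter curve.
    """
    return a * math.cosh(x / a)

# Sample the curve over a 10 m span with a = 5.
a = 5.0
xs = [i - 5.0 for i in range(11)]        # -5 m .. 5 m
ys = [catenary_y(x, a) for x in xs]

# The chain hangs in pure tension; flipped upside down, the same
# profile acts in pure compression, which is Gaudi's arch.
sag = max(ys) - min(ys)                  # depth of the hanging curve
print(round(sag, 3))                     # 2.715
```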
Figure 1. Typical geometry deployed in reinforced concrete.
Figure 2. Antoni Gaudi’s Guell crypt.
The evolution of these principles that Gaudi employed translated into the work of early twentieth-century reinforced concrete pioneers. Although reinforced concrete was a radical material departure,
able to act in compression and tension simultaneously, the equation-based developable surfaces were equally integral to the new material. The geometry of the surface performed structurally as in the
work of Gaudi and also provided logic for the construction of the wood formwork necessary to house the new fluid material. These principles were first deployed as singular spanning solutions to the
infrastructure of bridges and viaducts, but eventually made their way into architectural projects such as the Hippodrome of Eduardo Torroja, the Orly hangars of Eugène Freyssinet, and the concrete
exhibition hall of Robert Maillart. A direct relationship between the logic of the equation-based surfaces and the structural performance, constructability, material deployment, and spatial
organization informs all of these works.
Figure 3. Eugène Freyssinet Orly hangars.
Figure 4. Robert Maillart concrete hall and bridges.
Figure 5. Eduardo Torroja hippodrome.
Figure 6. Pier Luigi Nervi sports palace.
Figure 7. Work by Félix Candela.
Figure 8. Eero Saarinen TWA terminal.
Later, other concrete masters such as Pier Luigi Nervi, Félix Candela, Eero Saarinen, and Eladio Dieste exhausted such principles in countless variations. Because of the surface logic integration, it
is very hard to say whether the work of such builders is that of an engineer or an architect. At the same time it is important to note that other architects, such as Erich Mendelsohn, Rudolph
Steiner, and Frederick Kiesler, were making use of reinforced concrete. Unlike previous builders, their work relies on the ability of the surface aesthetics to convey notions of dynamism, religion,
or sculptural space. This difference is not an attempt to dismiss these works, but for the purpose of this article it is critical to understand that they were not working within the discipline of
equation-based surface logic. As a result, this work relied on reductive secondary logics for constructability, material performance, and organization.
With the invention and standardization of prestressed concrete, the surface relation to geometry became internalized. The one-to-one relationship of geometry and structure of thin-shell concrete
again transitioned to the new materials of membrane structures and pneumatics. Although the guiding principles are similar due to the lightweight nature of the material of such surfaces, these
buildings rarely had the holistic organizational impact of their concrete and masonry predecessors. Conventional implementations usually followed tent typologies, allowing only roofing capabilities,
returning to elementary deployment similar to that of planimetric installations of the dome or vault.
New Surface: New Accessibility
In the last 10 years, the availability of personal computers and the advance in processing power have enabled architects everywhere to generate and manipulate complex surfaces with ease in the
digital realm. At first, architects integrated three-dimensional software with advanced rendering and dynamic packages from the movie industry. The resultant surfaces were usually smooth,
semi-transparent, and seductively rendered products. Initially, there was an attempt to legitimate such surface generation through postmodern processes of semiotics or collage. Typical projects in
this realm attempted to use and form surfaces by embedding external indexes or traditional manual art and sculpture techniques. Regardless of whether the metaphoric import was that of stock market
trends, animated site flows, or expressionistic dynamism, the surface technique rarely varied. In order to create a continuous blending of these external logics, the technique defaulted to freeform
lofting. Furthermore, the digital translation of such external logics usually resulted in a numerical set that allowed little more than a variety in narrative for the generation of similar forms in
which global syntactical principles of the index rarely provided any internal local logics to build upon. If the projects advanced further into the physical material realm, reductive orthogonal
conventions such as Cartesian sectioning were needed to provide a clear formal understanding of the logic outside of the creation of the surface. This just continued to typify the predictability of
such projects. If the formal translation of surfaces generated by external logics will eventually default into Cartesian bread-slicing, then there are only two options to pursue for the surfaces to
exist materially. The first is to accept the Cartesian slicing and start with it as a generator. The second is to generate surfaces by logic inherent to their formation.
Surfaces in Mathematics
In order to truly understand surface principles, it is important to learn from other disciplines that also work with surfaces. Two adjacent mathematical fields that are particularly relevant are
differential topology and differential geometry. As in architecture, surface logic in mathematics developed before the use of computation. Born on the bridges of Koenigsberg in 1735, topology emerged
out of the lack of an adequate language for describing forms [3]. The new field created a number of principles and tools for evaluating complex surfaces. Early twentieth-century plaster models by
Ludwig Brill and later Martin Schilling can still be found exhibited for the teaching of mathematical surfaces. Recently there has been a revived interest in equation-based surfaces [4]. This is due
largely to the new accessibility to complex surfaces made possible by computational programs such as Mathematica. Speed of computation, ease of representation, and computer-based manufacturing have
allowed radical advances in both the accessibility to traditional complex surfaces as well as the development of entirely new ones. Although minimal surfaces exist in natural forms such as soap
bubbles, now topologists are actually engineering new ones with the potential for applications in molecular and material design.
The following work represents a series of architectural investigations into the logics of equation-based surfaces.
Reinventing the George Washington Bridge Bus Station
Architecture Studio: Andrew Saunders
We first looked at the possibility of reinventing the George Washington Bridge Bus Station in New York City, originally designed by Pier Luigi Nervi. The studio started with common surfaces
documented in Modern Differential Geometry of Curves and Surfaces with Mathematica by Alfred Gray. Enabled by Mathematica, the students were able to gain quick access to parametrically plotted
surfaces. With the help of Mathematica notebooks from Matthias Weber of Indiana University, the students progressed into more advanced minimal surfaces. Students conducted a series of compositional
diagrams and stereolithography models to understand the complex symmetries and bipolar spatial relationship brought about by the kaleidoscopic patching composition of minimal surfaces. Given these
new advances in equation-based surfaces coupled with advances in manufacturing and fabrication, the studio speculated on how to evolve the vocabulary of Nervi and the other masters of reinforced concrete.
Figure 9. Sections: Ashley Hanrahan.
Figure 10. Renderings: Ashley Hanrahan.
Figure 11. Compositional analysis of minimal surface from Matthias Weber: Kerstin Kraft, Lexi Sanford, Ashley Hanrahan, Douglas Samuel, Emaan Farhoud, Joe Morin, John Davi, Justin Bosy, Monzoor
Tokhi, Adam LoGiudice.
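The studio's basic move in Mathematica, plotting a surface directly from its parametric equations, can be sketched in Python for readers without Mathematica. The catenoid parametrization below is standard; the function names and grid sizes are illustrative choices, not the studio's code:

```python
import math

def catenoid(u, v, c=1.0):
    """Point on a catenoid, the minimal surface obtained by rotating
    a catenary about its directrix."""
    x = c * math.cosh(v / c) * math.cos(u)
    y = c * math.cosh(v / c) * math.sin(u)
    z = v
    return (x, y, z)

def sample_surface(f, nu=36, nv=20, u_range=(0.0, 2 * math.pi), v_range=(-1.0, 1.0)):
    """Evaluate f on a regular (u, v) grid, the discrete analogue of
    Mathematica's ParametricPlot3D[f[u, v], {u, ...}, {v, ...}]."""
    (u0, u1), (v0, v1) = u_range, v_range
    grid = []
    for i in range(nu + 1):
        u = u0 + (u1 - u0) * i / nu
        grid.append([f(u, v0 + (v1 - v0) * j / nv) for j in range(nv + 1)])
    return grid

grid = sample_surface(catenoid)
print(len(grid), len(grid[0]))           # 37 21
```

Handing a grid like this to any 3D plotting or meshing library reproduces the stereolithography-ready geometry the students worked with.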
Fabricating Differential
Seminar/Workshop: Andrew Saunders and David Riebe
The second project investigated the possibilities of material fabrication through differential geometry. The seminar again began with common surfaces in differential geometry from Modern Differential
Geometry of Curves and Surfaces with Mathematica. The students started by parametrically plotting 40 variations of a common surface such as the Enneper, catenoid, helicoid, and monkey saddle. At
first the manipulations of the formulas were random. Once the students analyzed the variants, certain characteristics of each of the original surfaces began to emerge. The students returned to the
original and started to guide the modification in pursuit of certain formal topological signatures that could inform fabrication techniques. When the students authored parametrically a variation of
the original surface, the investigation turned to the potential physical properties of the new surfaces. Students first physically modeled the surfaces with stereolithography. It is important to the
studio to advance beyond the representation of the surface by using the identified topological signatures to inform material organization.
Figure 12. Fabrication models enabled by the Math Plug-In for Rhino (by Jess Maertterer): Ryan Salvas, Eric Smith, Alex Lagula, Brent Hanson.
Figure 13. Mathematica differential hybrids: Brent Hanson, Alex Lagula, Ben Waserman.
Parachute Pavilion
Architecture: Andrew Saunders and Ted Ngai
The final project was a proposal for the Parachute Pavilion on Coney Island and the analysis of the structural performance of minimal surfaces. In this project for Coney Island, the brief asked for a
pavilion to be sited at the base of the historic parachute ride. The program is composed of public viewing, dining, and retail, as well as private rental and dining facilities. The project
incorporated the surface logic of the Riemann minimal surface. Topologically the surface acts as a spatial knot of circulation that negotiates the public and private programs vertically across three levels
through the boardwalk. This knot creates an ambiguous relationship between the iconic and figural singularities of the popular rides of the theme park and the infrastructural ground condition of the
boardwalk. The kaleidoscopic patching of the minimal surface is directly translated into prefabricated local modules that mirror and rotate to form the trunk for the cantilevered pavilion.
Figure 14.
Figure 15.
Figure 16.
Figure 17.
Parachute Pavilion Structure
Structural Performance Analysis: Amie Nulman, Structural Engineer at Ove Arup & Partners, Ltd.
Complex curved geometrical surfaces must follow the same laws of gravity, physics, and material behavior as simple linear geometries. The primary difference for structural analysis between simple
linear surfaces and complex curved surfaces is the way the surface is supported and the subsequent analysis method. Complex curved surfaces can be constructed by either treating the surface as a
facade supported by a rationalized framing system or by requiring the surface itself to be a load-bearing structure.
Traditionally, curved surfaces were supported by a secondary rational framing system except for specific simple geometries, such as arches, vaults, and domes, where linear analysis approaches could
be used to determine and solve the element forces, stresses, and support reactions. Continuing advancements in computer-aided analysis permit increasingly complex geometries to be analyzed and
designed as independent load-bearing elements.
If a complex curved surface is supported on a secondary framing system, conventional structural analysis methods can be employed to solve stress calculations based on member orientation, spans,
loading, and support conditions. For instance, a complex curved roof system supported by beams and columns would be analyzed using a single structural member for each framing element. Initial sizing
for scheme design can sometimes be done using proven rules of thumb, and the final force calculations can occasionally be done by hand, without the aid of computer analysis. Even today, complex
curved geometries that must span significant unsupported distances, as in sports arenas and concert halls, are typically achieved by introducing a secondary system.
If a complex curved surface is treated as a self-supported load-bearing structure, engineers use the finite element method to solve element stress calculations, as that allows them to correctly
capture the complicated geometry and the effects of loading and support conditions. Finite element analysis essentially transforms an unintuitive, complex form into a system of piecewise-continuous
uncomplicated objects by reducing compound geometries into a series of simple shapes with deflection compatibility parameters. Once the internal forces and stresses are determined by analysis,
material-specific design to determine final constructible parameters is a parallel process for structural elements, regardless of their shape.
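The reduction described above can be shown at toy scale with the simplest finite element there is: a straight bar under axial load, split into 2-node elements. The sketch below is purely pedagogical and has nothing to do with the Oasys GSA shell analysis discussed later; it only exhibits the assemble-and-solve pattern that all finite element programs share.

```python
# Minimal 1D finite element sketch: a bar of length L fixed at one end
# and pulled by an axial force P at the other, split into n elements.
# Each element contributes stiffness k = EA/h to the global matrix;
# enforcing deflection compatibility at shared nodes and solving
# K u = F gives the nodal displacements.

def bar_fem(E, A, L, P, n):
    h = L / n
    k = E * A / h
    # Assemble the (n+1) x (n+1) global stiffness matrix.
    K = [[0.0] * (n + 1) for _ in range(n + 1)]
    for e in range(n):
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    # Boundary condition: node 0 is fixed, so drop its row and column.
    Kr = [row[1:] for row in K[1:]]
    F = [0.0] * n
    F[-1] = P
    # The reduced system is tridiagonal; solve by forward elimination
    # followed by back substitution.
    for i in range(1, n):
        m = Kr[i][i - 1] / Kr[i - 1][i - 1]
        for j in range(n):
            Kr[i][j] -= m * Kr[i - 1][j]
        F[i] -= m * F[i - 1]
    u = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = F[i] - sum(Kr[i][j] * u[j] for j in range(i + 1, n))
        u[i] = s / Kr[i][i]
    return u[-1]  # displacement at the loaded tip

# For a uniform axial bar the nodal FEM answer matches the exact
# solution u = PL / (EA) regardless of the mesh density.
tip = bar_fem(E=200e9, A=0.01, L=2.0, P=1e5, n=8)
print(abs(tip - 1e5 * 2.0 / (200e9 * 0.01)) < 1e-12)
```

Shell analyses replace the scalar stiffness with element matrices derived from plate theory, but the assembly, boundary conditions, and solve are structurally the same.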
The finite element method employed by computer programs to perform structural analysis is not a new discovery. It has been a feasible analysis alternative for a plethora of mathematical and
engineering tasks for over four decades, following significant development of digital computing in the 1960s.
Recent technological advances that enable the design of complex curved surfaces in architecture include capabilities to create surfaces digitally, transfer the geometries between development and
analysis programs, properly discretize the surfaces to ensure reliable analysis, and increase computer analysis capacities to analyze the resulting models. Now the decision on how to support complex
curved surfaces is less predetermined and is a result of collaboration between the architect and the structural engineer.
Analysis Approach
The central translational corridor of the proposed Parachute Pavilion evolved from symmetrical manipulations of a basic module of the mathematically derived Riemann minimal surface. Based on the
mathematical logic and the scale of the resulting minimal surface, the Pavilion surface was analyzed as an independent load-bearing structure. The remainder of the Pavilion structure would be
resolved following investigation of the central surface.
A finite element analysis computer program developed by Arup, Oasys GSA, was used to analyze the surface. The GSA processor solves for surface mesh node displacements based on specified material and
loading conditions and interpolates the results to compute element strains and stresses.
To create the finite element model, the Pavilion surface model is discretized into a mesh of finite elements using a preprocess program. The finite element mesh is the primary source of analysis
accuracy and an efficient finite element model will balance structurally accurate results with sensible computational demands. The mesh must be refined enough to correctly capture the complex
geometries of the surface but compact enough not to inhibit analysis.
Figure 18. Finite element mesh.
Figure 19. Finite element mesh.
Figure 20. Finite element mesh.
The finite element method is not an exact analysis, but the results of an appropriately constructed finite element model are accurate enough for engineering purposes, especially given the safety
factors and tolerances associated with structural engineering and building construction.
Behavior Assessment
The structural behavior of the Pavilion surface is unintuitive, so an initial run of the finite element model was performed to assess general characteristics of the shape as well as the level of mesh
refinement. The Pavilion surface mesh was imported into GSA, the elements were assigned material properties (concrete shell elements, 8″ uniform thickness), and the mesh was given boundary conditions
to represent the proposed structure beneath the surface in the full Parachute Pavilion design.
Based on the proposed level beneath the boardwalk of the Parachute Pavilion, the first two levels of the surface have vertical and horizontal supports around the edges and are therefore relatively
stiff areas with low displacements and stresses. The two levels above the boardwalk are modeled with no additional vertical or horizontal supports, and the load is thereby transferred down the
structure via the continuous surface. The interior central warped segment (where the opposing surface geometries meet) is the most rigid area above ground and therefore attracts load and resists
vertical and horizontal deflections.
Figure 21. Applied support conditions.
The vertical deflections of the surface are largest at the extreme unsupported ends (top surface) and are symmetric about the center. The horizontal deflections of the surface are largest at the
points farthest from the central rigid element and indicate that the structure is rotating—almost an identical amount at both ends—about the central stiff element.
Figure 22. Vertical deflection.
Figure 23. Rotation.
The deflections in all three directions are shown in Figures 24, 25, and 26.
Figure 24. Horizontal deflection in long direction.
Figure 25. Horizontal deflection in short direction.
Figure 26. Vertical deflection.
Large stress concentrations formed at both the exterior and interior vertical surface faces in the central region. The stresses along the exterior vertical face result from the structural
discontinuity, and therefore change in stiffness, at the point adjoining two vertical levels of the surface. The stresses along the interior vertical face are a result of the opposing cantilever
structures that cause large tensile forces across the warped, stiff central support.
Figure 27. Axial stress.
Figure 28. Uniform and nonuniform loading deflection diagram.
Figure 29. Uniform and nonuniform loading bending moment diagram.
In order to achieve a more detailed and accurate finite element analysis, the mesh was further refined to replace the original quadrilateral mesh elements with triangle mesh elements (of half the
size) in the regions of high stress. This alleviates large stress gradients across elements and stress discontinuities between elements.
The deflections and stress patterns of the initial uniform surface analysis are analogous to a central vertically rigid element (column) with equivalent cantilevered horizontal elements (beams). This
structural system yields maximum deflections at the ends of the cantilevers and large tensile stresses in the horizontal members across the top of the vertically rigid element (column). Deflection
limits characteristically govern the required depth of the horizontal (beam) structure, and the system is particularly sensitive to unbalanced loading conditions.
Unbalanced loads result in an unbalanced moment at the column element. This unbalanced moment must be resolved by the column element because the moment is not counterbalanced with an equal and
opposite moment in the opposing beam element. This condition, unbalanced loading resulting in unbalanced moment transfer into the column, requires a stiffer (larger) column element than a balanced loading condition would.
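The unbalanced-moment condition is simple statics. With two opposing cantilevers of span L framing into a central column, the tip-load moments bend the column in opposite senses and cancel only when the loads match. The numbers below are illustrative, not values from the Pavilion analysis:

```python
def column_moment(p_left, p_right, span):
    """Net moment (kN*m) transferred into the central column by two
    opposing cantilevers of equal span (m) with tip loads (kN).
    Opposing cantilevers bend the column in opposite senses, so the
    moments subtract."""
    return (p_left - p_right) * span

# Balanced live load: the cantilever moments cancel at the column.
print(column_moment(50.0, 50.0, 6.0))    # 0.0
# Live load on one span only: the column must resist the difference,
# so it has to be stiffer (larger) than in the balanced case.
print(column_moment(50.0, 20.0, 6.0))    # 180.0
```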
Structural Analogies
Following the initial assessment of the structural behavior of the Pavilion surface, a further study was done to determine the structural efficiency of the surface. To accurately model the behavior
of the repeated Riemann surface module, comparisons to the entire Pavilion surface were complex and unnecessary.
The symmetry of the construction of the Pavilion surface (based on rotations and translations of the basic Riemann surface module) simplified the process of determining a smaller module to experiment
with. Using analogies to deflection behavior of the Pavilion surface model, the boundary conditions for modeling a double surface element (one rotation of the Riemann surface) were developed, and
this element was compared to numerous alternate structures under similar loading conditions.
The surface was compared to two different surface models, a cantilevered frame and a strict cantilever, as well as stick models of those shapes. Because of geometric differences, such as longer beam
elements and columns unrestrained along the vertical face as in the Riemann surface, somewhat larger deflections were expected in the comparison shapes. However, the resulting
displacements of the comparison shapes were four to five times greater, an unexpected result that demonstrates the Riemann surface is clearly the more efficient structure.
Figure 30. Deflections of analogous surface and stick models.
Figure 31. Deflections from unbalanced loads.
Structural Analysis
The structural investigation of the surface was initially focused on understanding how the shape responds to a nonzero gravity environment. Under uniform loading, the symmetry of the shape is
accentuated by symmetrical deflections and element stresses. The surface behavioral characteristics can be compared to a double cantilever beam and column system, but it is evident that the geometry
of the surface renders a more efficient structural form.
In order to complete the structural analysis of the Pavilion surface, a final model of the surface was developed with what had been assessed about the surface behavior combined with material
properties and building code requirements (such as deflections and loadings).
Deflection criteria are determined based on a combination of building code requirements and interface of structural elements with other elements of the building. Building codes specify maximum
deflection criteria, which are typically set to ensure stability of the structural design. Where structural interfaces are sensitive, for example the connection of the perimeter horizontal structure
to the building facade, vertical deflection criteria will typically be imposed that are more onerous than the code requirements for stability.
In combination with the weight of the structure and any applied material loads (facade, floor finishes), building codes specify a minimum live load be applied to structures based on occupancies. The
live loads are applied across the entire horizontal surface as well as to alternating spans in order to fully represent the nature of an inhabited space with respect to people flow and congregation.
Finite element analysis of the structure under the various load combinations proved that alternately locating the live loads across the surface was the most onerous analysis case, as it caused
unsymmetrical deflection and stresses in both the vertical and horizontal elements.
Initial surface thicknesses are classically estimated using rules of thumb based on span and boundary conditions. The initial surface thicknesses are then specified in the finite element model and
adjusted, as necessary, to resolve member deflections and forces, following the first analysis run of the fully loaded model. For a strict cantilever, the initial thickness in the middle of the
surface would be estimated at almost twice the thickness the model actually requires, further emphasizing the positive structural contributions of the surface shape.
A post-processor was run directly following the finite element analysis to determine reinforcing ratios for specified concrete shell thicknesses and again to check that all internal stresses and
forces were within allowable ranges.
Figure 32. Concrete reinforcement quantities.
Additional Analysis
Although we were specifically interested in understanding the behavior of the mathematically derived Parachute Pavilion minimal surface, the next step was to explore structural rationalizations of
the Pavilion to make a more efficient structure.
As the structure acts roughly like a double cantilever, the forces in the surface are largest in the central region and thus the maximum thickness of the surface is required in this area. The more
load applied to the surface at locations away from the central region (at the ends of the cantilevers), the higher the resultant deflections and forces will be at the center, and subsequently, the
thicker the surface must be. Therefore, one approach to rationalization can be thinning out the surface structure as it progresses from the central region, in order to minimize excessive thickness
and avoid additional material weight. This approach can be seen as a departure from the pure mathematically derived surface, which would still apply if a uniformly thick surface was used. One surface
of the structure, top or bottom, can maintain the characteristics of the mathematically derived surface, while the curvature of the other surface adjusts to permit the changes in thickness.
Figure 33. Varied thickness elements.
The single point at the intersection of the vertically stacked surfaces needs to be thickened to allow a vertical load path down the sides of the surface. This alternate load path would minimize the
load required to transfer down the surface, which would minimize deflections and stresses in the surface. This rationalization can hardly be considered a departure from the original
mathematically derived surface, as it follows naturally from the inherent structural thickness of the surface.
Figure 34. Vertical thickened elements.
Figure 35. Edge trusses.
Figure 36. Edge trusses.
Figure 37. Edge trusses.
Figure 38.
Further consideration of materials and construction techniques may also provide structural rationalization without the need to compromise the original Pavilion surface. Structural engineers are
currently exploring advanced analysis and new construction materials and techniques in order to promote the increasing architectural explorations of complex geometric surfaces.
Although each investigation emphasizes different deployments of surface logic, five new characteristics of design thinking consistently emerge.
Thinking Parametrically
All projects begin by looking at preconceived equation-based surfaces ranging from common surfaces found in differential geometry to newer minimal surfaces. Because these surfaces are determined by
separate equations, Mathematica allows unprecedented access to the internal logic of the surface equation. The slightest manipulation of code returns instant formal consequences for the
replotted surface. This is a radical shift in the way architects have traditionally mastered geometry. Initial frustration from not being able to guide the form by manually sculpting the geometry
turns into a revelation of inconceivable possibilities brought about from harnessing the true power of computation to inform surfaces.
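That manipulate-and-replot loop can be sketched outside Mathematica as well. The Enneper parametrization below is the classical one; the coefficient a on the height term is an illustrative deformation parameter of the kind the students swept, not part of the canonical surface:

```python
def enneper(u, v, a=1.0):
    """Enneper surface with an extra coefficient a on the height term.
    a = 1 recovers the classical minimal surface; other values are
    deformations of the kind produced by editing the plotting code."""
    x = u - u**3 / 3 + u * v**2
    y = v - v**3 / 3 + v * u**2
    z = a * (u**2 - v**2)
    return (x, y, z)

# Sweep the coefficient and record one formal consequence per variant:
# the height range of the patch over [-1, 1] x [-1, 1].
steps = [i / 10 for i in range(-10, 11)]
for a in (0.5, 1.0, 2.0):
    zs = [enneper(u, v, a)[2] for u in steps for v in steps]
    print(a, max(zs) - min(zs))          # the height range grows with a
```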
Thinking Iteratively
One of the major advantages of working computationally is the extreme ease of generating and processing huge amounts of information at such a rapid pace. Traditionally, design is an iterative
process. For architects, one of the most critical parts of the design process is to learn through doing. Although the design process may be presented linearly, the actual process itself is a constant
reevaluation of certain premises through iterative investigation. The current speed of computation enables the iterative process to expand exponentially. It is only through these iterations that the
designer can start to gain a new intuition about certain signatures and predictability within equation-based surfaces. This ability to be on the one hand prolific and on the other specific and
precise is a whole new sensibility for design thinking.
Thinking Topologically
By studying the properties of geometric figures or solids that are not changed by homeomorphisms, topology puts preference on flexible formal relationships. Thinking topologically comes out of a
desire to focus on the possibilities of three-dimensional relationships that exist outside of Cartesian-defined geometry. It is important that the flexibility does not substitute for precision in
form making, but rather enables the exploration of the precise intricate relationships inherent in mathematical surfaces. The surfaces are not seen as the means to an end, but rather the motivating
diagram for complex connections.
Thinking Beyond the Representational
Modeling is a critical part of the architectural design process. One of the major practical uses of computers in the field of architecture is for three-dimensional modeling. As in Mathematica,
architectural modeling software proves useful in its ability to quickly generate convincing representations of complex three-dimensional forms. This form of three-dimensional representation even
extends into the physical realm, enabled by digital fabrication processes. Like the earlier mathematical models of Ludwig Brill and Martin Schilling, these models are very helpful in understanding
and teaching the three-dimensional consequences of surfaces derived from computation. Although these models are physical, they are not material. It is critical to move beyond a physical model that
only represents the precise geometry of the surface and into the possibilities of material organization. The surface logic of equation-based surfaces provides an opportunity for both material effects
and performance. These new surfaces are not reduced to flexible conduits representing foreign indexical systems or metaphoric import, but instead rely on internal logics of surface formation. These
principles have the ability to provide material organization that is coherent with the surface itself. Reductive logic is no longer needed in order to fabricate the physical.
Thinking Bottom Up
Surface characteristics can be divided into two types: local and global. Local characteristics can be described by examining local neighborhoods of points. These micro relationships are not dictated
by a macro organization, but in turn genetically compose the larger global characteristics, such as embeddedness, orientability, symmetry, and periodicity. As Stephen Wolfram states in A New Kind of
Science, simple rules combine to form complex behaviors. Although these behaviors can be observed in urban conditions, architects have never consciously deployed simple local rules in the attempt to
create an entirely different characteristic of global organization. The mosque hypostyle and the mat typologies do deploy algebraic relations that result in indeterminate field conditions, but they
do not exhibit entirely different characteristics of organization as a whole. Understanding the global design implications from the local conditions could have many architectural consequences at
every scale.
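Wolfram's claim is easy to demonstrate in miniature. In the elementary cellular automaton below (rule 30, one of the canonical examples in A New Kind of Science), each cell's next state depends only on its three-cell neighborhood, yet the global pattern that emerges is complex. The sketch is a toy illustration of local-to-global behavior, not an architectural tool:

```python
def step(cells, rule=30):
    """Advance a ring of binary cells one generation: each cell's new
    state is the rule-table bit for its three-cell neighborhood."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1                              # a single seed cell
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```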
The results of these investigations into surface logic may appear motivated by purely formal or sculptural desires. Although this is a beneficial byproduct of the research that cannot be undervalued,
the use of Mathematica truly allows architects to access the logics of equation-based surfaces in pursuit of not only new form but also new performance.
[1] M. Burry, Expiatory Church of the Sagrada Familia: Antoni Gaudi, London: Phaidon Press Limited, 1993, pp. 12–14.
[2] E. W. Weisstein, “Catenary” from Wolfram MathWorld—A Wolfram Web Resource. mathworld.wolfram.com/Catenary.html.
[3] J-M. Kantor, “A Tale of Bridges: Topology and Architecture,” Nexus Network Journal, 7(2), 2005, pp. 13–21.
[4] A. Gray, Modern Differential Geometry of Curves and Surfaces with Mathematica, 2nd ed., Boca Raton, FL: CRC Press, 1997. library.wolfram.com/infocenter/Books/3759.
Andrew Saunders, and Amie Nulman, “Surface Logic,” The Mathematica Journal, 2010. dx.doi.org/doi:10.3888/tmj.11.3-7.
Image References:
[1] J. Joedicke, Shell Architecture, New York: Reinhold Publishing Corporation, 1963.
[2] “Gaudiclub.” (May 31, 2009) www.gaudiclub.com.
[3] J. A. F. Ordonez, Eugene Freyssinet, Barcelona: 2c Ediciones, 1978.
[4] D. P. Billington, Robert Maillart’s Bridges: The Art of Engineering, Princeton, NJ: Princeton University Press, 1989.
[5] F. Levi, M. A. Chiorino, and C. B. Cestari, Eduardo Torroja. From the Philosophy of Structures to the Art and Science of Building, Milan: FrancoAngeli, 2003.
[6] P. L. Nervi, Aesthetics and Technology in Building, Boston: Harvard University Press, 1965.
[7] C. Faber, Candela: The Shell Builder, Princeton, NJ: Princeton University Press, 1988.
[8] E. Stoller, The TWA Terminal, Princeton, NJ: Princeton Architectural Press, 1999.
About the Authors
Andrew Saunders is an assistant professor of architecture at Rensselaer Polytechnic Institute in New York. He received his master’s degree in architecture from the Harvard Graduate School of Design.
He has significant professional experience as project designer for Eisenman Architects, Leeser Architecture, and Preston Scott Cohen, Inc. He has taught and guest lectured at a variety of
institutions, including the Cooper Union and the Cranbrook Academy of Art. In 2004 Saunders was awarded the SOM Research and Travel Fellowship to pursue his research on the relationship of
equation-based geometries to early twentieth-century pioneers in reinforced concrete. His current practice and research interests lie in computational geometry as it relates to emerging technology,
fabrication, and performance. He is currently working on a book using parametric modeling as an analysis tool of seventeenth-century Italian Baroque architecture.
Amie Nulman is a structural engineer in California. She is an associate at Arup in Los Angeles and has also worked in the Boston and London offices on a variety of building types, including arts and
culture, education, residential, and sports venues. Recently Nulman completed the design and construction administration for Kroon Hall, the new Yale School of Forestry & Environmental Studies
building designed by Hopkins Architects.
Andrew Saunders
Rensselaer School of Architecture, Troy, New York
Amie Nulman
Ove Arup & Partners
Re: Re: st: RE: Econometrically sound to use Mills ratio after mprobit?
From "Pavlopoulos.D. " <D.Pavlopoulos@uvt.nl>
To statalist@hsphsun2.harvard.edu
Subject Re: Re: st: RE: Econometrically sound to use Mills ratio after mprobit?
Date Tue, 11 Apr 2006 11:48:55 +0200
Dear all,
since I am also trying to do a two-step estimation using a multinomial logit in the first step, I had a look at the paper suggested by Rafa. Please correct me if I am wrong, but what I understood is that the approach of Lee (1983) should be avoided in such a case, except if we have small samples.
Instead, they suggest the Dubin and McFadden (1984) model, or even better their own modification of it, which does not use the strong assumptions made by Lee. In this case, the authors suggest that the multinomial logit performs very well even when the IIA assumption is violated.
They also have a Stata command, -selmlog-, that can do such an estimation (I haven't tried it yet).
I would like to address a more difficult question to the list: how can I apply a correction for endogeneity (with the endogenous variable having more than 2 categories) in a panel context? Would it make sense to run an mlogit as the first step and then an xtreg fixed effects as the second step?
Dimitris Pavlopoulos
-----Original Message-----
From: "R.E. De Hoyos" <redeho@hotmail.com>
To: <statalist@hsphsun2.harvard.edu>
Date: Mon, 10 Apr 2006 17:55:33 +0100
Subject: Re: st: RE: Econometrically sound to use Mills ratio after mprobit?
You are assuming that the selection problem in a multinomial context can be
accounted for by the same technique as in the binary problem (Heckman 1979).
Using the multinomial logit as the first-step estimation, Bourguignon et al.
(2004) have shown that this is not the right way to approach the problem.
Generally speaking, the selection problem in a multinomial context can be
defined as:
y1 = xb + u1
y^*_m = zl + u_m, m = 1...M (outcomes)
Where y^* is a latent function and E(u1 | x,z)=0. Define p1...pM as the
conditional probabilities of observing each of the M outcomes. Then, the
selectivity-adjusted y1 can be estimated as:
y1 = xb + mu(p1...pM) + e1
You would need to take into account the conditional probabilities of
observing NOT ONLY outcome (1) but all other outcomes as well. The problem
is how to parameterize function mu(.)?
PS. I have a working paper version of the reference, if you are interested I
can send it to you off-list.
F. Bourguignon, M. Fournier, and M. Gurgand, "Selection bias corrections based on the multinomial logit model: Monte-Carlo comparisons," Journal of Economic Surveys, forthcoming.
----- Original Message -----
From: "Stephen Johnston" <sjohns21@umbc.edu>
To: <statalist@hsphsun2.harvard.edu>
Sent: Monday, April 10, 2006 5:01 PM
Subject: Re: st: RE: Econometrically sound to use Mills ratio after mprobit?
> Hello Miet,
> Thanks for your advice. Here is how I understand the procedure to work;
> I read this on an archived Statalist post and I have tested it.
> mprobit y x1 x2 x3
> capture drop phat
> capture drop mills
> predict phat if e(sample), xb outcome(1)
> gen mills = normden(phat)/norm(phat)
> reg y2 x1 x2 mills
> For the "outcome" command in the predict line, you have to specify which
> of the choice outcomes (in your dependent variable) you are predicting a
> probability for. In this case it is the predicted probability for the choice
> that is set equal to 1. You can then generate an IMR for each choice in
> y.
> You can check that this works by generating the inverse mills ratio from
> a probit and then including it in an OLS equation - then use the twostep
> command for the heckman procedure to make sure the results match. For
> this you will not need to specify an outcome since it will be generated
> from a probit. I hope this helps. Let me know if you have any trouble.
> Thanks,
> Stephen
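The -mills- variable built above is just the standard normal density over the standard normal cdf, evaluated at the linear prediction. A minimal sketch of that ratio in Python (illustrative only, not from the thread; the function name is ours):

```python
from math import erf, exp, pi, sqrt

def inverse_mills(z: float) -> float:
    """Inverse Mills ratio lambda(z) = phi(z) / Phi(z)."""
    phi = exp(-z * z / 2.0) / sqrt(2.0 * pi)   # standard normal density
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))     # standard normal cdf
    return phi / Phi
```

The ratio is large for very negative z (heavily truncated observations) and falls toward zero as z grows, which is what makes it useful as a selection-correction regressor.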
> On Apr 10, 2006, at 4:35 AM, Maertens, Miet wrote:
>> Dear Stephen,
>> Maybe the following two articles by Wooldridge and by Lechner can help
>> you further:
>> http://www.msu.edu/~ec/faculty/wooldridge/current%20research/ape1r5.pdf
>> http://ideas.repec.org/p/iza/izadps/dp91.html
>> I'm also trying to perform a similar estimation with stata but I'm
>> struggling with calculating the Inverse Mills ratio's. Could you let me
>> know how exactly you are implementing the procedure?
>> Thanks,
>> Miet
>> -----Original Message-----
>> From: owner-statalist@hsphsun2.harvard.edu
>> [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Stephen
>> Johnston
>> Sent: 07 April 2006 18:35
>> To: statalist@hsphsun2.harvard.edu
>> Subject: st: Econometrically sound to use Mills ratio after mprobit?
>> Hello,
>> I am estimating a multinomial probit for a selection equation with 3
>> choices and I am interested in using the inverse mills ratio
>> generated from the MNP in a second step equation. I know how to
>> implement this procedure, however, I have not been able to find any
>> literature that proves that the Heckman two-step estimation procedure
>> can appropriately and directly extend from a probit selection
>> equation to a multinomial probit selection equation. Does anyone
>> know of any papers that address this issue?
>> Thanks,
>> Stephen
>> *
>> * For searches and help try:
>> * http://www.stata.com/support/faqs/res/findit.html
>> * http://www.stata.com/support/statalist/faq
>> * http://www.ats.ucla.edu/stat/stata/
Dimitris Pavlopoulos
PhD student
Tilburg University
Faculty of Social Sciences
Warandelaan 2, Postbus 90153
5037 AB, TILBURG
Tel. ++31 13 466 3001/ Room S-173
email: D.Pavlopoulos@uvt.nl
In current negative-index optical metamaterials, the damping is too large for real-world applications. It is presently unclear how much this can be improved by modified designs and other choices of
constitutive materials. This project will work to understand and reduce losses in metamaterials. (C. M. Soukoulis, Th. Koschny)
• (Left) Jiangfeng Zhou, Thomas Koschny, and Costas M. Soukoulis "An efficient way to reduce losses of left-handed metamaterials" Optics Express, Vol. 16, Issue 15, pp. 11147-11152 (2008)
• (Right) Phys. Rev. B, in print (2009)
Math Forum Discussions
Topic: PN sequence from ASCII
Replies: 3 Last Post: Feb 16, 2013 5:43 PM
Re: PN sequence from ASCII
Posted: Feb 16, 2013 5:43 PM
Thank you everyone
I know that students who need to solve their assignments usually post questions here... but I'm not solving an assignment.
I asked if this is possible:
All I need is to generate a watermark sequence from a secret key. I found through googling that someone suggested converting the string into its ASCII codes and generating a random sequence from them, but that doesn't generate the same sequence every time...
I'm just searching for how to make a watermark sequence with a secret key and a fixed length that I can regenerate every time; usually a pseudo-random noise sequence is used in this case, but I don't know how to make it.
So if anyone can help, suggest something, or explain what I don't seem to understand, I'll appreciate the help.
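A common way to get exactly this property is to hash the secret key into a seed and drive a seeded generator with it, so the same key and length reproduce the same sequence every time. A sketch in Python (the function name and the ±1 alphabet are illustrative choices):

```python
import hashlib
import random

def watermark_sequence(key: str, length: int) -> list:
    """Reproducible +/-1 pseudo-noise sequence derived from a secret key."""
    # Hash the key so any string (not just its raw ASCII codes) yields a
    # well-mixed, deterministic integer seed.
    seed = int.from_bytes(hashlib.sha256(key.encode("utf-8")).digest()[:8], "big")
    rng = random.Random(seed)  # local generator; global random state untouched
    return [1 if rng.random() < 0.5 else -1 for _ in range(length)]
```

Because the seed is a pure function of the key, the same key and length regenerate the identical sequence at detection time, which is the property the raw-ASCII approach was missing.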
Date Subject Author
2/15/13 Stir frying matlab
2/16/13 Re: PN sequence from ASCII Derek Goring
2/16/13 Re: PN sequence from ASCII Stir frying matlab
Wheel and forces
November 29th 2010, 08:32 PM #1
I am not sure about this question. Can anyone help?
It states this way:
A wheel of radius 0.5m rests on level ground at a point C and makes contact with the edge E of a kerb of height 0.2m. A horizontal force of 240N, applied through the axle of the wheel at X, is
required just to move the wheel over the kerb. Show that the weight of the wheel is 180N.
I was thinking of taking moments but not even sure how to begin.
take moments around the point E
$W(0.1\sqrt{5^2-3^2})=240(0.5-0.2)$
Thanks for your response. Can you kindly expand on the LHS of the equation? I don't fully understand how you obtained the perpendicular distance of the weight.
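For reference, both distances in that equation come from the circle geometry, writing $r = 0.5$ m for the radius and $h = 0.2$ m for the kerb height. The axle X sits at height $r$ above the ground and the edge E at height $h$, so the moment arm of the 240 N horizontal force about E is $r - h = 0.5 - 0.2 = 0.3$ m. Since $|XE| = r$, the horizontal distance from E to the vertical line of the weight is $\sqrt{r^2 - (r-h)^2} = \sqrt{0.5^2 - 0.3^2} = 0.4$ m, which is the same number as $0.1\sqrt{5^2-3^2}$. Taking moments about E then gives $W(0.4) = 240(0.3)$, so $W = 180$ N.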
Poincaré Conjecture
December 22nd 2006, 09:11 PM #1
Hi everyone,
I need to research/give a presentation on the Poincaré Conjecture and I was wondering if anyone knew of some really good/thorough websites that explain the conjecture in the simplest way possible
as well as the recent goings on of Perelman and his contribution. Any current news as well as some history would be greatly appreciated also.
Hi everyone,
I need to research/give a presentation on the Poincaré Conjecture and I was wondering if anyone knew of some really good/thorough websites that explain the conjecture in the simplest way
possible as well as the recent goings on of Perelman and his contribution. Any current news as well as some history would be greatly appreciated also.
I would start by looking at the Wikipedia article on the Poincaré Conjecture, then following up the references it gives.
Also see this from the Clay Institute, and the other references from the Mathworld page.
Here are some videos.
This one is in Russian, I can translate it if necessary.
Here is another one.
WOW...a reply to a topic from 8 months ago. A new record..
The conjecture took a full century to be resolved, so be quiet
Complex numbers: the fundamental theorem of algebra
As remarked before, in the 16th century Cardano noted that the sum of the three solutions to a cubic equation
x^3 + bx^2 + cx + d = 0
is -b, the negation of the coefficient of x^2. By the 17th century the theory of equations had developed so far as to allow Girard (1595–1632) to state a principle of algebra, what we call now "the
fundamental theorem of algebra.” His formulation, which he didn’t prove, also gives a general relation between the n solutions to an n^th degree equation and its n coefficients.
An n^th degree equation can be written in modern notation as
x^n + a[1]x^(n-1) + ... + a[n-2]x^2 + a[n-1]x + a[n] = 0
where the coefficients a[1], ..., a[n-2], a[n-1], and a[n] are all constants. Girard said that an n^th degree equation admits of n solutions, if you allow all roots and count roots with multiplicity.
So, for example, the equation x^2 + 1 = 0 has the two solutions √-1 and -√-1, and the equation x^2 - 2x + 1 = 0 has the two solutions 1 and 1. Girard wasn't particularly clear what form his solutions
were to have, just that there be n of them: x[1], x[2], ..., x[n-1], and x[n].
Girard gave the relation between the n roots x[1], x[2], ..., x[n-1], and x[n] and the n coefficients a[1], ..., a[n-2], a[n-1], and a[n] that extends Cardano's remark. First, the sum of the roots x[1] + x[2] + ... + x[n] is -a[1], the negation of the coefficient of x^(n-1) (Cardano's remark). Next, the sum of all products of pairs of solutions is a[2]. Next, the sum of all products of triples of solutions is -a[3]. And so on until the product of all n solutions is either a[n] (when n is even) or -a[n] (when n is odd).
Here’s an example. The 4^th degree equation
x^4 - 6x^3 + 3x^2 + 26x - 24 = 0
has the four solutions -2, 1, 3, and 4. The sum of the solutions equals 6, that is, -2 + 1 + 3 + 4 = 6. The sum of all products of pairs (six of them) is
(-2)(1) + (-2)(3) + (-2)(4) + (1)(3) + (1)(4) + (3)(4)
which is 3. The sum of all products of triples (four of them) is
(-2)(1)(3) + (-2)(1)(4) + (-2)(3)(4) + (1)(3)(4)
which is -26. And the product of all four solutions is -24.
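These sums are easy to check by machine. A short Python sketch (not part of the original exposition; the variable names are ours) recomputes the elementary symmetric sums of the roots of the quartic above:

```python
from itertools import combinations
from math import prod

roots = [-2, 1, 3, 4]          # the four solutions of the quartic
a = [-6, 3, 26, -24]           # its coefficients a1..a4

# e_k: the sum over all products of k distinct roots.
e = [sum(prod(c) for c in combinations(roots, k)) for k in range(1, 5)]

# Girard's relations for x^4 + a1 x^3 + a2 x^2 + a3 x + a4:
# e1 = -a1, e2 = a2, e3 = -a3, e4 = a4 (n = 4 is even).
assert e == [-a[0], a[1], -a[2], a[3]]
```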
Descartes (1596–1650) also studied this relation between solutions and coefficients, and showed more explicitly why the relationship holds. Descartes called negative solutions "false" and treated
other solutions (that is, complex numbers) as "imaginary".
Over the remainder of the 17th century, negative numbers rose in status to be full fledged numbers. But complex numbers remained in limbo through most of the 18th century. They weren’t considered to
be real numbers, but they were useful in the theory of equations. It wasn't even clear what form the solutions to equations might take. Certainly complex numbers of the form a + b√-1
to solve quadratic equations, but it wasn’t clear they were enough to solve cubic and higher-degree equations. Also, the part of the Fundamental Theorem of Algebra which stated there actually are n
solutions of an n^th degree equation was yet to be proved, pending, of course, some description of the possible forms that the solutions might take.
Welcome to the HCPSS Trigonometry Wikispace!
This wikispace will house resource materials for the Trigonometry Course. Before examining the pages for each unit, review the Essential Curriculum below:
Unit 0: Triangle Trigonometry
Goal: The student will demonstrate the ability to define trigonometric ratios and apply trigonometry to solve real-world problems.
Objectives – The student will be able to:
a. Define and evaluate the six trigonometric ratios.
b. Solve triangles using trigonometric ratios.
c. Use the Law of Sines and Law of Cosines to solve triangles (AAS, ASA, or SSA).
d. Use the Law of Sines and Law of Cosines to model and solve real-world problems.
e. Use triangle trigonometry to model and solve real-world problems, including angles of elevation and depression, indirect measurement, and areas of triangles.

Unit 1: Geometric Vectors
Goal: The student will demonstrate the ability to use a problem-solving approach in exploring the properties of vectors and applications of parametric equations.
Objectives – The student will be able to:
a. Define a geometric vector.
b. Find the norm (or magnitude) and direction of a geometric vector.
c. Use vectors to model and solve real-world problems, including velocity, force, and air navigation.

Unit 2: Circular and Trigonometric Functions
Goal: The student will demonstrate the ability to define trigonometric ratios and apply trigonometry to solve real-world problems.
Objectives – The student will be able to:
a. Define radian measure and convert angle measures between degrees and radians, including revolutions.
b. Find the measures of coterminal angles.
c. Find and state the six trigonometric functions of special and quadrantal angles.
d. Find and state the six circular and trigonometric functions.
e. Identify and distinguish between circular and trigonometric functions.
f. Develop basic trigonometric identities.
g. Use trigonometric functions to model and solve real-world problems, including right triangle relations, arc length, speed, and uniform circular motion.

Unit 3: Trigonometric Graphs
Goal: The student will demonstrate the ability to sketch and analyze trigonometric graphs and apply trigonometry to solve real-world problems.
Objectives – The student will be able to:
a. Graph the sine, cosine, and tangent functions.
b. Identify the domain and range of a basic trigonometric function.
c. Sketch transformations of the sine, cosine, and tangent graphs.
d. Sketch the cosecant, secant, and cotangent functions and their transformations.
e. Identify and sketch the period, amplitude (if any), phase shift, zeroes, and vertical asymptotes (if any) of the six trigonometric functions.
f. Use trigonometric graphs to model and solve real-world problems.

Unit 4: Inverse Circular and Trigonometric Functions
Goal: The student will demonstrate the ability to investigate and apply inverse circular and inverse trigonometric functions in order to prove basic identities.
Objectives – The student will be able to:
a. Define the domain and range of the inverse circular functions.
b. Evaluate the inverse circular functions.
c. Define the domain and range of the inverse trigonometric functions and sketch the graph.
d. Evaluate the inverse trigonometric functions.
e. Use inverse functions to model and solve real-world problems.

Unit 5: Trigonometric Equations and Identities
Goal: The student will demonstrate the ability to solve trigonometric equations, and prove and apply trigonometric identities.
Objectives – The student will be able to:
a. Apply strategies to prove identities, including Pythagorean, and even and odd identities.
b. Verify trigonometric identities graphically.
c. Use the addition and subtraction identities for sine, cosine, and tangent functions.
d. Use the double-angle and half-angle identities.
e. Use identities to solve trigonometric equations.
f. Solve trigonometric equations graphically and algebraically.

Unit 6: Analytic Geometry
Goal: The student will demonstrate the ability to explore conic sections algebraically and graphically.
Objectives – The student will be able to:
a. Define a circle and write its equation.
b. Analyze and sketch the graph of a circle.
c. Define an ellipse and write its equation.
d. Analyze and sketch the graph of an ellipse.
e. Define a hyperbola and write its equation.
f. Analyze and sketch the graph of a hyperbola.
g. Define a parabola and write its equation.
h. Analyze and sketch the graph of a parabola.
i. Write the equation of and graph a translated conic section.
j. Use conic sections to model and solve real-world problems.

Unit 7: Complex Numbers and Polar Equations
Goal: The student will demonstrate the ability to use a problem-solving approach in exploring the relationships between the complex plane, the Cartesian plane, and the polar coordinate system.
Objectives – The student will be able to:
a. Graph complex numbers on the complex plane.
b. Find the trigonometric form of complex numbers.
c. Apply DeMoivre's Theorem to complex numbers in trigonometric form.
d. Change Cartesian coordinates to polar coordinates and vice versa.
e. Plot points using polar coordinates and graph polar equations.
f. Change equations from rectangular form to polar form and vice versa.
Devault Precalculus Tutor
Find a Devault Precalculus Tutor
...I promote using some imagination when looking at these topics, especially in physics. When someone can understand how a concept is working then they can apply it to solve a whole range of
problems and most memorization will be unnecessary. This approach will help aid students to achieve a higher understanding of these subjects and it will promote critical thinking.
16 Subjects: including precalculus, Spanish, physics, calculus
...I graduated summa cum laude, with a BS in mathematics, a BA in humanities, and a BAH in honors. I also minored in classics, philosophy, history, and theology. During my undergraduate career, I
tutored mathematics at the Villanova Mathematics Learning and Resource Center (MLRC), primarily in Calculus I, II, and III, Differential Equations, and Linear Algebra.
26 Subjects: including precalculus, English, writing, reading
...I look forward to working with you and your child!I am a certified and current teacher in the public schools. My New Jersey certification is k-12. In PA, I am certified k-6.
12 Subjects: including precalculus, geometry, trigonometry, algebra 2
...Roberts High School in Pottstown * Spring-Ford Senior High School in Royersford * Upper Merion High School in King of Prussia * St. Pius X High School in Pottstown * Radnor High School in
Radnor. Under my tutelage, all students showed improvements in their grades at the end of the year. I have ...
13 Subjects: including precalculus, calculus, geometry, GRE
...I received an A. I used these topics in many chemical engineering courses after that. I received a Bachelor's in chemical engineering at Rensselaer Polytechnic Institute in 2010.
25 Subjects: including precalculus, chemistry, writing, physics
Segmentation of Juxtapleural Pulmonary Nodules Using a Robust Surface Estimate
International Journal of Biomedical Imaging
Volume 2011 (2011), Article ID 632195, 14 pages
Research Article
^1School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA
^2Department of Radiology, Mount Sinai School of Medicine, 1 Gustave L. Levy Place, New York, NY 10029, USA
Received 4 November 2010; Revised 3 June 2011; Accepted 4 June 2011
Academic Editor: Tiange Zhuang
Copyright © 2011 Artit C. Jirapatnakul et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
An algorithm was developed to segment solid pulmonary nodules attached to the chest wall in computed tomography scans. The pleural surface was estimated and used to segment the nodule from the chest
wall. To estimate the surface, a robust approach was used to identify points that lie on the pleural surface but not on the nodule. A 3D surface was estimated from the identified surface points. The
segmentation performance of the algorithm was evaluated on a database of 150 solid juxtapleural pulmonary nodules. Segmented images were rated on a scale of 1 to 4 based on visual inspection, with 3
and 4 considered acceptable. This algorithm offers a large improvement in the success rate of juxtapleural nodule segmentation, successfully segmenting 98.0% of nodules compared to 81.3% for a
previously published plane-fitting algorithm, which will provide for the development of more robust automated nodule measurement methods.
1. Introduction
One of the most reliable indicators of the malignancy of a pulmonary nodule is its growth rate [1, 2]. To accurately measure the growth rate of a nodule, automated methods need to repeatably and
robustly measure the volume of a nodule on several scans. Juxtapleural pulmonary nodules are attached to the chest wall and pleural surface; these nodules present a challenge to many automated
measurement algorithms due to the need to decide on a boundary between the nodule and chest wall without the presence of any difference in intensity between the two structures. In contrast, isolated
nodules, which do not abut any other structures such as airways or blood vessels, are substantially easier to segment. We developed an automated method to segment juxtapleural nodules using robust
surface-fitting techniques.
The problem of isolated nodule segmentation has been well studied; isolated nodules can often be segmented via intensity and shape-based methods, as in one method proposed by Zhao et al. [3].
Juxtapleural nodules are more difficult to segment accurately than isolated nodules due to the challenge in determining the location of the invisible boundary between the nodule and the lung wall. An
example of a juxtapleural nodule is shown in Figure 1. Note that the nodule is the same intensity as the lung wall, rendering intensity-based segmentation methods ineffective with juxtapleural
nodules—instead, a decision for locating the nodule boundary can only be based on properties of adjacent sections of the pleural surface. Previous works in this area have segmented juxtapleural
nodules using morphological filtering or various surface-fitting algorithms. Kostis et al. identified the thoracic wall and segmented nodules using morphological filtering with ellipsoid kernels [4].
Reeves et al. modeled the pleural surface with a plane, using an iterative procedure to find the optimal parameters for a plane that separates the nodule from the surface [5]. Way et al. used an
active contour approach to segment 23 isolated and attached nodules from the LIDC database [6]. Another approach by Okada et al. [7] used robust anisotropic Gaussian fitting followed by a
morphological opening operation; their method was able to achieve 94.8% correct segmentation on a dataset of 1312 nodules, both attached and isolated. Other research on this topic includes a study by
Shen et al. [8] which used a surface smoothing algorithm to segment nodules on the chest wall as well as a study by Kuhnigk et al. [9] which used a convex hull operation to perform segmentation.
Similar to juxtapleural segmentation, lung nodule detection systems often require segmentation of the lung parenchyma from the chest wall while ensuring that juxtapleural nodules are included in the
segmentation of the lung. Armato et al. proposed the use of a rolling ball filter to recover juxtapleural nodules removed from the lung segmentation [10], while Gurcan et al. used an indentation
detection method based on a ratio of distances computed from tracing the contour of the lung [11]. Ko and Betke relied on a rapid change in curvature of the lung border to indicate structures, such
as lung nodules or vessels, that were removed from the lung segmentation; these were recovered by the insertion of a border segment [12]. While these techniques show some preliminary success in
segmenting juxtapleural nodules, many of these methods have difficulty in segmenting nodules in regions with moderate to high curvature or nodules whose shape is moderately complex, and all of these
methods were evaluated on datasets with few juxtapleural nodules.
We propose a method to segment a nodule from the thoracic wall by fitting a polynomial function to the pleural surface. Surface-fitting methods have been used to solve problems in several different
areas, including range [13, 14] and medical image data [15]. The approach explored in this paper relies on identifying pleural surface points and using these points to fit a polynomial surface
function to the pleural surface. A statistically robust algorithm based on linear regression is used to identify relevant points and estimate surface parameters. This application is unique because it
requires an accurate representation of a missing section of the pleural surface (the section due to the nodule) in the presence of an irregularity (nodule). Performance of the algorithm is measured
by the number of “successful” segmentations, as assessed by visual observation, and the segmentation performance is compared to a previously published method on the same dataset of attached nodules.
2. Methods
The algorithm relies on several assumptions, the principal one being that the chest wall is a large surface with a curvature lower than the nodule surface. These assumptions are used in the
development of a robust algorithm for juxtapleural nodule segmentation.
2.1. Juxtapleural Pulmonary Nodule Model
We divide the juxtapleural nodule model into several different regions, illustrated in Figure 2. These regions are the lung parenchyma (LP), the segmented thoracic wall (TW), the segmented nodule (N), and
the modeled pleural surface (MPS). It is nearly impossible to determine whether the nodule has invaded the thoracic wall or is merely adjacent to it, so in our model, we consider only that portion of
the nodule that is inside the lung volume to be part of our nodule region. In general, the pleural surface is a closed surface around the lung with many features. However, the scale of these features
is much larger than that of the nodules. We expect the nodule to be the largest complete feature on the pleural surface in the region of interest. The pleural surface section within the region of
interest should be smooth with a curvature much lower than that of the nodule surface; therefore, we can define a smooth function that is a global model for the pleural surface in the region of
interest (MPS). In particular, the MPS accurately describes the location of the pleural surface inside the high voxel intensity region of the thoracic wall (TW) based on the cumulative pleural
surface evidence provided by the visible surface. Therefore, we can consider the segmented nodule region to be defined by its boundary with the lung parenchyma on one side and the modeled pleural
surface (MPS) on the other. The boundary is modeled as a cubic polynomial surface in the algorithm.
2.2. Algorithm
The pleural surface is modeled as a 3D cubic polynomial function, and the goal of the surface-fitting algorithm is to determine the polynomial function that best fits the pleural surface. To
accomplish this, the points belonging to the pleural surface are identified and used to estimate the parameters of the polynomial function. The algorithm is divided into the six stages shown in the
flowchart in Figure 3.
The surface points in a region of interest are identified by first segmenting the nodule from the lung parenchyma and other attached soft tissue structures. After the preliminary segmentation of the nodule, the next task is to compute a coordinate transformation to assist in later stages of the algorithm that require computing the residuals from the estimated polynomial function. To ensure that the estimated polynomial function is computed from just the pleural surface points (excluding points belonging to the surface of the nodule), the next three steps form an iterative process that selects a subset of the surface points in the region, estimates a polynomial function from that subset, and uses the change in residuals to determine whether the estimated polynomial surface includes all the pleural surface points but none of the nodule surface points. Finally, in the last stage, the estimated surface function is used to segment the nodule from the pleural surface.
2.2.1. Initial Setup
The surface-fitting algorithm is an extension of previous work on pulmonary nodule segmentation by Reeves et al. [5] that is designed to specifically address the task of separating nodules from the
pleural surface. This algorithm relies on results of previous steps of the work by Reeves et al. which are summarized here.
The nodule segmentation system by Reeves et al. [5] can be divided into four main steps: (1) image preprocessing to select a region of interest based on a user-specified seed point within the nodule, (2) segmenting the nodule from the lung parenchyma by thresholding, (3) removing vessels, and (4) eliminating the pleural surface using a clipping plane. This fourth step is replaced with the algorithm described in this paper.
In the image preprocessing step, a small region of interest (ROI), sized at more than twice the nodule diameter, is extracted from the entire CT scan centered at the seed point. From this ROI, an estimate of the nodule size and location is computed using an iterative template matching technique. This information is used to further reduce the size of the ROI. The small ROI is resampled into isotropic space (0.25 mm voxel size).
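As a concrete illustration of the resampling step, the sketch below uses `scipy.ndimage.zoom` to bring an anisotropic ROI to 0.25 mm isotropic voxels. The function name, example spacings, and choice of linear interpolation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(roi, spacing, target=0.25):
    """Resample a volumetric ROI to isotropic voxels.

    roi     -- 3D array of CT intensities, axes (z, y, x)
    spacing -- per-axis voxel size in mm, e.g. (1.0, 0.5, 0.5)
    target  -- desired isotropic voxel size in mm
    """
    factors = [s / target for s in spacing]
    # Linear interpolation (order=1) is a reasonable choice for CT intensities.
    return zoom(roi, factors, order=1)

roi = np.zeros((10, 20, 20))            # e.g. 1.0 mm slices, 0.5 mm pixels
iso = resample_isotropic(roi, (1.0, 0.5, 0.5))
print(iso.shape)                        # (40, 40, 40)
```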
Next, the soft tissue in the resampled ROI is separated from the lung parenchyma by applying a threshold of −400 HU to the ROI. This results in a binary image with the lung parenchyma assigned a value of 0 and the soft tissue assigned a value of 1. To separate the nodule from other small attached soft tissue structures, an iterative morphological filter is applied to the binary image; this removes attached structures such as blood vessels but does not remove larger attached structures such as the pleural surface.
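The thresholding and vessel-removal steps can be sketched as follows. A simple morphological opening stands in here for the iterative filter in the original system; only the −400 HU threshold is taken from the text, and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_opening, generate_binary_structure

def segment_soft_tissue(roi_hu, threshold=-400, opening_iters=1):
    """Threshold a CT ROI (in HU) and suppress thin attached structures.

    Voxels above the threshold (soft tissue) become 1, lung parenchyma 0.
    An opening removes thin, vessel-like attachments while leaving large
    structures such as the thoracic wall intact.
    """
    binary = roi_hu > threshold
    struct = generate_binary_structure(3, 1)   # 6-connected neighbourhood
    return binary_opening(binary, structure=struct, iterations=opening_iters)

roi = np.full((10, 10, 10), -1000.0)   # air
roi[2:8, 2:8, 2:8] = 50.0              # solid block (nodule / wall)
roi[0, 5, :] = 50.0                    # thin vessel-like line
mask = segment_soft_tissue(roi)
print(mask[5, 5, 5], mask[0].any())    # True False: block kept, line removed
```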
After these steps, we have the following information: (1) the volumetric region of interest resampled into isotropic space, (2) an approximate nodule radius, (3) an approximate nodule center, and (4) the isotropic binary thresholded volume with the vessels removed. These are all used by the algorithm in subsequent steps.
2.2.2. Coordinate System Selection and Transformation
The nodule is separated from the pleural surface by creating a boundary based on local shape information. The boundary is modeled as an explicit polynomial surface of the form $z = f(x, y)$, where one parameter (the observable variable), $z$, is described as a function of the other two parameters (the explanatory variables), $x$ and $y$. The error measure of the estimated surface from the actual data points is defined as the discrepancy in the observable variable $z$.
This explicit function requires finding the parametrization of the surface using techniques similar to those described by Quek et al. [16]. The first step of the method is the detection of the
surface of the nodule and thoracic wall. From the binary image of the region of interest of the nodule obtained in Section 2.2.1, the surface is detected via an erosion operation with a spherical
kernel 3 voxels (0.75mm) in diameter followed by a logical XOR operation. This yields a binary image in which the voxels on the surface are 1 while all other voxels are 0. This binary image is used
as a mask to select the regions of the grayscale image corresponding to the surface. The next step involves finding the surface function that has the least error compared to the actual surface.
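A minimal sketch of the erosion-and-XOR surface detection, assuming a 0.25 mm isotropic binary volume; the radius-1 spherical kernel corresponds to the 3-voxel (0.75 mm) diameter mentioned above, and the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def detect_surface(binary_vol, radius=1):
    """Extract the one-voxel-thick surface of a binary volume.

    The volume is eroded with a small spherical kernel and XORed with the
    original, leaving only voxels on the object boundary set to 1.
    """
    r = radius
    zz, yy, xx = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = (zz**2 + yy**2 + xx**2) <= r**2     # 3-voxel-diameter kernel
    eroded = binary_erosion(binary_vol, structure=ball)
    return np.logical_xor(binary_vol, eroded)

vol = np.zeros((10, 10, 10), dtype=bool)
vol[2:8, 2:8, 2:8] = True                      # 6x6x6 solid cube
surf = detect_surface(vol)
print(surf.sum())                              # 152 boundary voxels
```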
For an explicit surface function, we can improve the error computation by choosing a coordinate system that is parallel to the pleural surface. As shown in Figure 4(b), if the coordinate system is parallel to the pleural surface, estimating the error using the observable variable $z$ exhibits much less error than in a different coordinate system, such as that in Figure 4(a). An overview of the procedure for finding such a coordinate system is shown in Figure 5. First, the surface normals are estimated at all of the detected surface points using a 3D gradient operator. Second, the normal estimates are averaged to obtain an average normal estimate, $\bar{n}$. The average normal is used as the $z$-direction basis vector of the surface model, and the other two basis vectors, corresponding to $x$ and $y$, are selected to be orthogonal to the average normal and to each other.
In addition to the basis vectors of the coordinate system, computing a coordinate transformation requires a point for the origin of the coordinate system. In this method, a reference point on the
nodule surface is used as the origin for the coordinate system. This reference point is identified by starting from a point within the nodule and searching in the direction of the average normal
until the edge of the segmented region is detected, as shown in Figure 6. The point within the nodule is defined to be the nodule center of mass, which is found by computing the center of mass within
the largest spherical region within the ROI centered at the approximate nodule center. The center of mass is used to ensure the point is consistent and because the estimate of the nodule center may
have been affected by the presence of attached vessels, which have been removed prior to this step. Now all the surface points can be transformed into a coordinate system that has the nodule surface
point at the origin and one of the basis vectors pointing in the direction of the normal.
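The normal averaging and coordinate transformation might be sketched as below; `build_frame`, its gradient-based normals, and the Gram-Schmidt construction of the tangent basis are illustrative assumptions, not the authors' code.

```python
import numpy as np

def build_frame(volume, surface_idx, origin):
    """Estimate an average surface normal and transform surface points.

    volume      -- 3D scalar volume (binary or grayscale)
    surface_idx -- (N, 3) integer voxel indices of detected surface points
    origin      -- reference point on the nodule surface (3-vector)

    Returns the surface points in a frame whose third axis is the average
    surface normal and whose origin is the reference point.
    """
    gz, gy, gx = np.gradient(volume.astype(float))   # 3D gradient operator
    normals = np.stack([g[tuple(surface_idx.T)] for g in (gz, gy, gx)], axis=1)
    n = normals.mean(axis=0)
    n /= np.linalg.norm(n)                   # average normal = z-direction
    # Pick any vector not parallel to n, then Gram-Schmidt two tangents.
    a = np.array([1.0, 0.0, 0.0])
    if abs(n @ a) > 0.9:
        a = np.array([0.0, 1.0, 0.0])
    u = a - (a @ n) * n
    u /= np.linalg.norm(u)                   # x-direction basis
    v = np.cross(n, u)                       # y-direction basis
    basis = np.stack([u, v, n])              # rows: x, y, z directions
    return (surface_idx - origin) @ basis.T

# Toy volume whose gradient points along axis 0 everywhere:
vol = np.broadcast_to(np.arange(5.0)[:, None, None], (5, 5, 5)).copy()
idx = np.array([[2, 1, 1], [2, 3, 3]])
out = build_frame(vol, idx, np.array([2, 2, 2]))
print(out[:, 2])                             # both ~0: points lie in the tangent plane
```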
2.2.3. Pleural Point Subset Selection: Modeling the Region of Interest
At this point, we have a coordinate system and points along the boundary of the thoracic wall/nodule region and the lung parenchyma. To prevent bias in the surface estimate, the surface points
belonging to the nodule must be excluded. We use the following model as a basis for deciding which points belong to the nodule.
We consider the region of interest containing the nodule to consist of two sections: the pleural surface and the nodule surface. These sections can be separated by partitioning the region of interest into two subregions. One such partition is illustrated in Figure 7(a). In this illustration, $R_n$, the region inside the dotted line, is the region containing the nodule, and $R_p$, the region outside the dotted line, is the region containing only the pleural surface. We define the following parameters that can be found without knowledge of the exact partition. A point that is known to be on the nodule surface is the reference point, $p_0$, found in the previous step. The minimum distance from $p_0$ to a surface point adjacent to the edge of the region of interest is labeled $r_e$. In addition, based on the region partition, we define $r_n$ to be the maximum distance from $p_0$ to another point in $R_n$. In order to achieve a good segmentation, the nodule must be completely contained inside the region of interest. Although there are significant variations in nodule shapes, a generic partitioning region $S(r)$ can be defined as the region located outside of a sphere of radius $r$ centered at the nodule surface point $p_0$, as shown by the region in white in Figure 7(b). Thus, $S(r)$ contains only pleural surface points when $r \geq r_n$. In particular, $S(r_e)$ is one set that can be easily generated and provides a good initial surface estimate, but with few surface points. A more robust solution can be found by increasing the number of points in the subset used for parameter estimation. The success of the algorithm depends on the strategy for picking the order of points to be added and finding a good stopping condition by detecting the presence of outliers.
Outliers or surface irregularities can be identified based on their inconsistency with the surface fit. A distance-based measure, such as mean squared error, can be used to determine the consistency
of points with a surface model. To ensure that the mean squared error can be used to detect outliers, a surface model must be selected that can represent clean sections of the pleural surface but not
ones containing the nodule. We make the following assumptions about the nodule surface: the nodule surface contains multiple inflection points, all the nodule surface points are on the lung side of
the pleural surface, and the points that are farthest from the origin are adjacent to the pleural surface.
Inflection points are defined as points where the convexity of the surface changes. The presence of multiple inflection points differentiates a surface containing the nodule from the normal pleural surface. Several inflection points are visible in the two-dimensional nodule representation shown in Figure 8, and even more can be found in the 3D image. The second condition ensures that the nodule is one connected region located exclusively inside the lung parenchyma. The third condition indicates that a nodule is a compact structure, in which the distance from the origin to the nodule periphery is larger than the distance to the farthest peak, as labeled in Figure 8.
In contrast to the nodule, the pleural surface has low curvature with few inflection points. Figure 9 shows a typical slice of a whole lung scan and an outline of the pleural surface from this slice
with inflection points marked by dots. In most cases, these inflection points are far apart from each other. In fact, no inflection points at all are apparent in the exterior lung region. On the
other hand, the nodule attached to the pleural surface can be seen to have two inflection points in one slice with more present in the 3D image. Thus, in a small region of interest, the pleural
surface should have very few inflection points compared to the nodule surface, which indicates that a family of second or third degree polynomial functions can be used to represent the pleural
surface but not the nodule.
2.2.4. Pleural Point Subset Selection and Parameter Estimation
Once we have a method to select which surface points to include in our surface estimation algorithm, our next step is to find the optimal polynomial function that fits those points. An overview of
the algorithm is shown in Figure 10.
The algorithm uses a forward search, starting with a small subset of points that is known to contain no or few outliers. The radius of the initial clean subset is initialized to a value 10% higher than that of the known clean subset, to ensure that several diagnostic values for clean subsets are generated by the algorithm. At each step, the subset radius is decreased by a fixed increment; the algorithm uses an increment of one voxel (0.25 mm). Given the points in the subset, the parameters of the best-fitting polynomial function are calculated using least squares regression. The diagnostic value at each step is defined as the average difference from the polynomial function of the pleural surface points added at that step, as suggested by Atkinson and Riani [17]. New points are added by reducing the radius of the clean subset. From this algorithm, we obtain a sequence of diagnostic values and corresponding polynomial function estimates.
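A sketch of the forward search with a full bivariate cubic fit, assuming the surface points are already expressed in the nodule-centred frame with $z$ along the average normal. The function names, radius schedule, and use of the mean absolute residual as the diagnostic are simplifying assumptions.

```python
import numpy as np

def cubic_design(pts):
    """Design matrix for a full bivariate cubic polynomial z = f(x, y)."""
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2,
                     x**2 * y, x * y**2, x**3, y**3], axis=1)

def forward_search(points, r0, step=0.25):
    """Forward search over shrinking exclusion radii.

    points -- (N, 3) surface points in the nodule-centred frame, with the
              exclusion sphere centred at the origin (the reference point)
    r0     -- initial clean-subset radius (already inflated by 10%)
    step   -- radius decrement per iteration (one 0.25 mm voxel)

    Yields (radius, coefficients, diagnostic) per step; the diagnostic is
    the mean absolute residual of the points added at the current step.
    """
    d = np.linalg.norm(points, axis=1)       # distance from reference point
    r, prev = r0, d >= r0
    while r - step > 0:
        r -= step
        subset = d >= r                      # points outside the sphere
        coef, *_ = np.linalg.lstsq(cubic_design(points[subset]),
                                   points[subset, 2], rcond=None)
        new = subset & ~prev                 # points added at this step
        if new.any():
            resid = points[new, 2] - cubic_design(points[new]) @ coef
            yield r, coef, np.abs(resid).mean()
        prev = subset
```

On a clean (nodule-free) surface the diagnostics stay near zero; once the shrinking sphere starts admitting nodule points, they rise sharply, which is the behaviour exploited by the change-detection step of the algorithm.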
As the algorithm iterates and the subset radius decreases toward the nodule radius, we expect the diagnostic value to increase slowly while the quality of the surface estimate improves. Once the radius shrinks below the nodule radius, the diagnostic values rise much faster and the quality of the fit deteriorates. Several iterations of the algorithm are shown in Figure 11, with the top row of images showing several subsets of points (the points on the thoracic wall/nodule and lung parenchyma boundary outside of the area enclosed by the circle) and the bottom row showing the resulting segmentation images. Note that the best radius includes as many points on the pleural surface as possible without including points on the nodule surface; subsets that include a greater or smaller number of points give worse segmentation results. The appropriate subset can be found automatically by detecting a change in the behavior of the sequence of diagnostic values.
To detect the change in behavior of the diagnostic sequence, we used a likelihood-based approach for constant-level signals with noise described by Gustafsson [18], the details of which are given below. Given a discrete-time signal $s_1, \ldots, s_N$ and a model with parameters $\theta$, the likelihood of the sequence is denoted $p(s_{1:N} \mid \theta)$. Because the measurements are assumed independent, the likelihood decomposes into a product of the likelihoods of the individual points:
$$p(s_{1:N} \mid \theta) = \prod_{t=1}^{N} p(s_t \mid \theta).$$
For change detection, we are interested in the time $k$ at which the signal changes. We can write the parameters before and after the change, $\theta_1$ and $\theta_2$, in terms of $k$ by using the most likely parameters based on the corresponding subsequences, so that $p(s_{1:N} \mid k)$ represents the likelihood of the measurements given change time $k$. For each $k$, this likelihood decomposes into a product of two parts:
$$p(s_{1:N} \mid k) = p(s_{1:k} \mid \hat{\theta}_1(k)) \, p(s_{k+1:N} \mid \hat{\theta}_2(k)),$$
where $\hat{\theta}_1(k)$ and $\hat{\theta}_2(k)$ are the maximum likelihood parameter estimates for the two subsequences. Finally, the most probable change time is found by the maximum likelihood principle:
$$\hat{k} = \arg\max_{k} \, p(s_{1:k} \mid \hat{\theta}_1(k)) \, p(s_{k+1:N} \mid \hat{\theta}_2(k)).$$
To find the change time for the sequence of diagnostic values, we formulate the problem using the framework just described. We treat the sequence of diagnostic values $d_1, \ldots, d_N$ as a discrete-time function and make the assumption that the function increases linearly, with a change in its slope at the nodule radius. We model the sequence on the interval ending at its maximum element with the following expression:
$$d_i = \begin{cases} a_1 i + e_1(i), & i \leq k, \\ a_2 i + e_2(i), & i > k, \end{cases}$$
where the rate of increase $a_1$ before the change is much lower than the rate of increase $a_2$ after it, when nodule surface points are included in the subset. The error terms $e_1$ and $e_2$ are included in the expressions to represent the discrepancies between the model and the actual residuals. Before the change, the error $e_1$ is due to small features present on the pleural surface and is expected to be smaller than the error $e_2$ after the change, when the subset of surface points contains outliers that depend on the nodule shape.
To identify the time where the change in slope occurs, we can define the difference sequence $\Delta_i = d_i - d_{i-1}$ of successive diagnostic values $d_i$, which results in a piecewise-constant model for the resulting sequence:
$$\Delta_i = \begin{cases} a_1 + e_1(i), & i \leq k, \\ a_2 + e_2(i), & i > k, \end{cases}$$
where $a_1$ and $a_2$ are the slopes before and after the change. The change time can now be determined using the likelihood-based signal change detection method described by Gustafsson. The polynomial function associated with the change time best fits the subset of the pleural surface points that excludes points on the nodule surface.
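Under a Gaussian noise assumption, the maximum-likelihood change time for a piecewise-constant sequence reduces to minimising the summed squared deviation of the two segments from their own means. The sketch below implements that reduction; it is a simplification of the Gustafsson formulation, not the authors' code.

```python
import numpy as np

def change_time(seq):
    """Maximum-likelihood change time for a piecewise-constant signal.

    Models the sequence as one constant level plus Gaussian noise before
    the change and another after it; maximising the likelihood over the
    change time is then equivalent to minimising the total squared
    deviation of each segment from its own mean.
    """
    s = np.asarray(seq, dtype=float)
    best_k, best_cost = 1, np.inf
    for k in range(1, len(s)):               # change after index k-1
        a, b = s[:k], s[k:]
        cost = ((a - a.mean())**2).sum() + ((b - b.mean())**2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Difference sequence with a clear jump in level at the fifth sample:
diffs = [0.1, 0.12, 0.09, 0.11, 0.5, 0.55, 0.52]
print(change_time(diffs))                    # prints 4
```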
2.2.5. Nodule Separation
Once the surface parameters have been estimated, there is enough information to segment the nodule from the thoracic wall. We start with a binary image containing the thoracic wall and the nodule and
eliminate voxels that are below the estimated pleural surface, where “below” is defined as the opposite direction of the average surface normal. Figure 12(a) illustrates a 3D light-shaded model of a
region of interest, with the surface estimate shown in Figure 12(b). Voxels below the surface are removed, leaving the nodule and some pixels due to small surface features and an imperfect
representation of the surface. These can be removed by performing morphological opening followed by connected component analysis, and selecting the largest connected component. The results of
segmentation are shown in Figure 12(c).
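The clipping and cleanup stage might look like the following sketch, which assumes for simplicity that the fitted surface height is given per in-plane position with the average normal aligned to axis 0; the helper name and test shapes are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_opening, label

def clip_below_surface(binary_vol, surface_z):
    """Remove voxels at or below a fitted surface, then keep the largest
    connected component.

    binary_vol -- 3D binary volume (axis 0 assumed aligned with the
                  average surface normal for this sketch)
    surface_z  -- 2D array giving the surface height f(x, y) at every
                  in-plane position
    """
    z = np.arange(binary_vol.shape[0])[:, None, None]
    above = binary_vol & (z > surface_z[None, :, :])
    # Opening removes stray voxels left by small surface features.
    opened = binary_opening(above)
    labels, n = label(opened)
    if n == 0:
        return opened
    sizes = np.bincount(labels.ravel())[1:]   # component sizes, skip background
    return labels == (1 + sizes.argmax())     # largest connected component

vol = np.zeros((12, 12, 12), dtype=bool)
vol[:5] = True                                # thoracic wall below the surface
vol[4:10, 4:9, 4:9] = True                    # attached nodule
out = clip_below_surface(vol, np.full((12, 12), 4.5))
```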
2.3. Materials
This study used a dataset of 150 solid attached nodules with one primary attachment from 114 patients selected from the Weill Cornell Medical Center database. The solid consistency of the nodules was
confirmed by a radiologist; juxtapleural nodules were noted by a radiologist and confirmed by visual inspection. Of the 150 nodules, six nodules were on whole-lung scans while the remainder were on
targeted scans. All nodules were imaged on thin-slice scans, with 129 nodules on 1.00mm scans and 21 nodules on 1.25mm scans. All of the scans were acquired using the scanners and parameters shown
in Table 1. The nodules ranged in size from 1.5mm to 22.6mm, as determined by a semiautomated volume measurement method, with a mean size of 5.2mm.
2.4. Experiment
The “true” segmentation of a juxtapleural nodule is difficult to accurately determine, even for radiologists. Studies have shown that there is high interobserver variability in nodule measurements [
19, 20]. Thus, instead of directly comparing the overlap of the segmented regions, the segmentations for each nodule and method were visually inspected and ranked on a scale of 1 to 4, with 1
representing completely unacceptable segmentation and 4 representing very good segmentation for the purpose of volumetric measurement. Segmentations with a rating of 3 or 4 were considered to be
acceptable for volumetric evaluation. Three raters (A. C. Jirapatnakul, A. P. Reeves, and D. F. Yankelevitz) reviewed the segmentations and arrived at a consensus. D. F. Yankelevitz is a
board-certified radiologist. Examples of segmentations and their associated ratings are shown in Figure 13. The raters were presented with all slices of the region of interest around the nodule with
the segmented regions overlaid on top of the original CT image region in a translucent color. To prevent bias, the raters were not aware of which segmentation method was being presented nor the order
of presentation of the methods. All nodules were read in a single session. For the intersection of the set of nodules that were judged to have acceptable segmentations by both methods, the volumes
were compared to determine if there was a significant difference between the methods. The initial seed point, region of interest, and other parameters were consistent across methods, with the initial
seed point manually specified.
3. Results
The new surface-fitting method was compared to the latest published method from Reeves et al. [5]. Of the 150 nodules in the database, the surface-fitting algorithm acceptably segmented 147 nodules
(98.0%), while the plane-cutting method by Reeves et al. acceptably segmented 122 nodules (81.3%). These results are summarized in Table 2.
The average rating of the surface-fitting algorithm was 3.28 over all the nodules, while the average rating for the algorithm by Reeves et al. was 2.95, with the distributions shown in Figure 14. The volumes measured by both methods were compared using a paired t-test, which indicated statistically significant differences between the methods. This result did not change when the analysis was limited to the 122 nodules successfully segmented by both methods (rating of 3 or 4). The median volume difference for these 122 nodules was 5.7%, with only 17 nodules differing by more than 20%. There was no clear relationship between the size of the nodule and the success of the algorithm.
The runtimes of both the methods were measured on a dual-processor Intel Xeon 3.0GHz computer. Both methods were implemented in unoptimized research software. For most nodules, the runtimes of both
methods were only a few seconds. The surface-fitting algorithm was slower than the plane-cutting method, with a range of runtimes of approximately 200ms to 14 seconds. The plane-cutting method had
runtimes which ranged from approximately 100ms to 10 seconds. The runtimes of both methods were higher for larger nodules.
4. Discussion
A new algorithm for juxtapleural nodule segmentation was developed which combined robust surface estimation methods with knowledge of the characteristics of juxtapleural nodules to improve upon
previous segmentation algorithms without requiring any additional user intervention. Although the majority of the nodules in this study were on targeted CT scans, the algorithm should work
effectively on whole-lung CT scans as well.
Unlike previous studies which reported their segmentation results on both isolated and attached nodules, the dataset used in this study consisted of only the more challenging juxtapleural nodules. On
this set of juxtapleural nodules, this new algorithm performed better than a previously published method [5], which used a plane to represent the pleural surface, due to its ability to better model
curved sections of the pleural surface. In our testing, the surface-fitting algorithm successfully segmented 98.0% of the attached nodules in our database, as compared to the previous algorithm using
an iterative plane-cutting approach which only succeeded on 81.3% of nodules in the database. Most of the improvement came from nodules located on highly curved regions of the pleural surface where a
plane would not be able to accurately represent the pleural surface. An example of a case where the surface-fitting algorithm is markedly better than the plane-cutting algorithm is shown in Figure 15
. The surface-fitting algorithm is able to accurately separate the nodule from the wall, while the plane-cutting method segments a large portion of the wall. This is due to the fact that the pleural
surface in this region is curved while the nodule is small, so the plane immediately includes portions of the wall.
Although there was an improvement in the success rate of nodule segmentation with the surface-fitting algorithm, there were still several cases where both algorithms failed. Many of these failures
were due to either respiratory motion or an apparent shift of the nodule attachment point along the pleural surface. In one case, the nodule was located near the diaphragm. Several CT scan slices of
the region of interest for this nodule are shown in Figure 16. Due to the high curvature of the diaphragm, the juxtapleural surface appears to move by a large amount between frames 9 and 11. The
surface-fitting approach does not fully segment the nodule, due to not being able to accommodate the rapid change in position of the diaphragm. The plane-cutting method was able to get more of the
nodule, but it included portions of wall. In another case, shown in Figure 17, respiratory motion caused the movement of the pleural surface in several slices; on these slices (frames 13–18), the
surface-fitting algorithm incorrectly segmented portions of the wall. Aside from those frames, the segmentation is acceptable. While both algorithms were affected by this motion, the surface-fitting
algorithm failed because it was not able to compensate for the shift, resulting in the segmentation of a large part of the pleural surface, whereas the plane-cutting algorithm was not able to get
very close to the pleural surface but was able to avoid segmenting the pleural surface. In a third case, the nodule was located near the diaphragm with a vessel-like attachment. The surface-fitting algorithm included portions of the attachment in the segmentation, whereas the plane-cutting algorithm did not, as shown in Figure 18.
The runtime of the surface-fitting method was slightly longer than the runtime required for the plane-cutting method. Much of the runtime occurred in the iterative process of selecting a set of
pleural surface points and using these points to estimate the parameters of a polynomial function. The runtime could be dramatically reduced by additional program optimizations, possibly by a factor
of 10 or more for the larger nodules.
In this study, the segmentation results were subjectively evaluated by three raters. Having a radiologist manually contour every nodule on each slice is time-consuming, and though the Lung Image Database Consortium (LIDC) [21] provides contours for nodules in the database greater than 3 mm, previous studies have shown large interobserver variation between radiologists [20, 22]. Additionally, the dataset used for evaluation in this study contained many more juxtapleural nodules than the LIDC database. Given these considerations, consensus ratings from visual inspection were used for this study.
5. Conclusion
We have presented a robust surface estimation approach to accurately segment solid juxtapleural nodules. In contrast to previous approaches using morphological filtering, plane-cutting, or convex
hull operations, this approach fits a polynomial function to a robust set of pleural surface points. We evaluated the performance of this algorithm on a database of 150 solid juxtapleural nodules and
compared its performance to a previously published method using an iterative plane-cutting algorithm. Our method performs much better than the plane-cutting approach, correctly segmenting 98% of the
nodules compared to 81% with the previous method. The surface estimation approach especially excels with nodules attached to pleural surfaces with high curvature. The algorithm is still affected by image artifacts such as respiratory motion, but these should become less of a problem with improvements in CT scanner technology. This approach improves the success rate of juxtapleural nodule segmentation and will allow for more accurate volumetric measurement of juxtapleural pulmonary nodules.
Conflict of Interests
D. Yankelevitz is a named inventor on a number of patents and patent applications relating to the evaluation of diseases of the chest including measurement of nodules. Some of these, which are owned
by Cornell Research Foundation (CRF), are nonexclusively licensed to General Electric. As an inventor of these patents, D. Yankelevitz is entitled to a share of any compensation which CRF may receive
from its commercialization of these patents. C. I. Henschke is a named inventor on a number of patents and patent applications relating to the evaluation of disease of the chest including the
measurement of nodules. Some of these patents, which are owned by the Cornell Research Foundation (CRF), are nonexclusively licensed to GE Healthcare. As an inventor, C. I. Henschke is entitled to a
share of any compensation that CRF may receive from the commercialization of these patents but has renounced any compensation since April 2009. A. P. Reeves is a coinventor on patents and pending
patents owned by Cornell Research Foundation, which are nonexclusively licensed to GE and related to technology involving computer-aided diagnostic methods, including measurement of pulmonary nodules
in CT images.
1. V. P. Collins, R. K. Loeffler, and H. Tivey, “Observations on growth rates of human tumors,” The American Journal Of Roentgenology, Radium Therapy, and Nuclear Medicine, vol. 76, no. 5, pp.
988–1000, 1956. View at Scopus
2. M. H. Nathan, V. P. Collins, and R. A. Adams, “Differentiation of benign and malignant pulmonary nodules by growth rate,” Radiology, vol. 79, pp. 221–232, 1962. View at Scopus
3. B. Zhao, D. Yankelevitz, A. Reeves, and C. Henschke, “Two-dimensional multi-criterion segmentation of pulmonary nodules on helical CT images,” Medical Physics, vol. 26, no. 6, pp. 889–895, 1999.
View at Publisher · View at Google Scholar · View at Scopus
4. W. J. Kostis, A. P. Reeves, D. F. Yankelevitz, and C. I. Henschke, “Three-dimensional segmentation and growth-rate estimation of small pulmonary nodules in helical CT images,” IEEE Transactions
on Medical Imaging, vol. 22, no. 10, pp. 1259–1274, 2003. View at Publisher · View at Google Scholar · View at PubMed · View at Scopus
5. A. P. Reeves, A. B. Chan, D. F. Yankelevitz, C. I. Henschke, B. Kressler, and W. J. Kostis, “On measuring the change in size of pulmonary nodules,” IEEE Transactions on Medical Imaging, vol. 25,
no. 4, pp. 435–450, 2006. View at Publisher · View at Google Scholar · View at PubMed · View at Scopus
6. T. W. Way, L. M. Hadjiiski, B. Sahiner et al., “Computer-aided diagnosis of pulmonary nodules on CT scans: segmentation and classification using 3D active contours,” Medical Physics, vol. 33, no.
7, pp. 2323–2337, 2006. View at Publisher · View at Google Scholar · View at Scopus
7. K. Okada, V. Ramesh, A. Krishnan, M. Singh, and U. Akdemir, “Robust pulmonary nodule segmentation in CT: improving performance for juxtapleural cases,” in 8th International Conference on Medical
Image Computing and Computer-Assisted Intervention (MICCAI '05), vol. 3750 of Lecture Notes in Computer Science, pp. 781–789, October 2005. View at Publisher · View at Google Scholar
8. H. Shen, B. Goebel, and B. Odry, “A new algorithm for local surface smoothing with application to chest wall nodule segmentation in lung CT data,” in Medical Imaging 2004: Imaging Processing,
Proceedings of SPIE, pp. 1519–1526, San Diego, Calif, USA, February 2004. View at Publisher · View at Google Scholar
9. J.-M. Kuhnigk, V. Dicken, L. Bornemann, D. Wormanns, S. Krass, and H.-O. Peitgen, “Fast automated segmentation and reproducible volumetry of pulmonary metastases in CT-scans for therapy
monitoring,” in the 7th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '04), vol. 3217 of Lecture Notes in Computer Science, pp. 933–941, September
10. S. G. Armato, M. L. Giger, C. J. Moran, J. T. Blackburn, K. Doi, and H. MacMahon, “Computerized detection of pulmonary nodules on CT scans,” Radiographics, vol. 19, no. 5, pp. 1303–1311, 1999.
View at Scopus
11. M. N. Gurcan, B. Sahiner, N. Petrick et al., “Lung nodule detection on thoracic computed tomography images: preliminary evaluation of a computer-aided diagnosis system,” Medical Physics, vol. 29,
no. 11, pp. 2552–2558, 2002. View at Publisher · View at Google Scholar · View at Scopus
12. J. P. Ko and M. Betke, “Chest CT: automated nodule detection and assessment of change over time—preliminary experience,” Radiology, vol. 218, no. 1, pp. 267–273, 2001. View at Scopus
13. P. J. Besl and R. C. Jain, “Segmentation through variable-order surface fitting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 2, pp. 167–192, 1988.
14. E. R. van Dop and P. P. L. Regtien, “Fitting undeformed superquadrics to range data: improving model recovery and classification,” in Proceedings of the IEEE Computer Society Conference on
Computer Vision and Pattern Recognition, pp. 396–401, June 1998.
15. E. Bardinet, L. D. Cohen, and N. Ayache, “Fitting of iso-surfaces using superquadrics and free-form deformations,” in Proceedings of the IEEE Workshop on Biomedical Image Analysis, pp. 184–193,
16. F. K. H. Quek, R. W. I. Yarger, and C. Kirbas, “Surface parameterization in volumetric images for curvature-based feature classification,” IEEE Transactions on Systems, Man, and Cybernetics Part
B, vol. 33, no. 5, pp. 758–765, 2003.
17. A. Atkinson and M. Riani, Robust Diagnostic Regression Analysis, Springer, Berlin, Germany, 2000.
18. F. Gustafsson, Adaptive Filtering and Change Detection, John Wiley & Sons, New York, NY, USA, 2000.
19. J. J. Erasmus, G. W. Gladish, L. Broemeling et al., “Interobserver and intraobserver variability in measurement of non-small-cell carcinoma lung lesions: implications for assessment of tumor
response,” Journal of Clinical Oncology, vol. 21, no. 13, pp. 2574–2582, 2003.
20. A. P. Reeves, A. M. Biancardi, T. V. Apanasovich et al., “The lung image database consortium (LIDC): a comparison of different size metrics for pulmonary nodule measurements,” Academic Radiology,
vol. 14, no. 12, pp. 1475–1485, 2007.
21. M. F. McNitt-Gray, S. G. Armato, C. R. Meyer et al., “The lung image database consortium (LIDC) data collection process for nodule detection and annotation,” Academic Radiology, vol. 14, no. 12,
pp. 1464–1474, 2007.
22. C. R. Meyer, T. D. Johnson, G. McLennan et al., “Evaluation of lung MDCT nodule annotation across radiologists and methods,” Academic Radiology, vol. 13, no. 10, pp. 1254–1265, 2006. | {"url":"http://www.hindawi.com/journals/ijbi/2011/632195/","timestamp":"2014-04-16T19:25:26Z","content_type":null,"content_length":"178199","record_id":"<urn:uuid:1d985a6a-d02e-466c-bad5-23dfd6b392f1>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Definition of Correlation Coefficient | Chegg.com
A researcher wonders if there is a difference in perfectionism across individuals who receive different amounts of parental criticism. She samples individuals and divides them into three groups: no criticism,
weekly criticism, daily criticism. She administers a perfectionism scale where higher scores indicate higher levels of perfectionism.
a. What is the research hypothesis (null and alternative)?
b. Based on the following data, what can this researcher conclude? (Write 2 sentences explaining the results.)
no criticism: 8, 8, 10, 9, 10
weekly criticism: 10, 12, 8, 9, 11
daily criticism: 13, 14, 12, 15, 16
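For part (b), assuming the intended analysis is a one-way ANOVA across the three groups, here is an illustrative sketch (not part of the original question) that computes the F statistic by hand:

```python
# One-way ANOVA on the three criticism groups (illustrative sketch).
groups = {
    "none":   [8, 8, 10, 9, 10],
    "weekly": [10, 12, 8, 9, 11],
    "daily":  [13, 14, 12, 15, 16],
}

def mean(xs):
    return sum(xs) / len(xs)

scores = [x for xs in groups.values() for x in xs]
grand_mean = mean(scores)

# Sum of squares between groups and within groups.
ss_between = sum(len(xs) * (mean(xs) - grand_mean) ** 2 for xs in groups.values())
ss_within = sum((x - mean(xs)) ** 2 for xs in groups.values() for x in xs)

df_between = len(groups) - 1            # 2
df_within = len(scores) - len(groups)   # 12
F = (ss_between / df_between) / (ss_within / df_within)
print(round(F, 2))  # 17.5
```

With F(2, 12) = 17.5 far above the 5% critical value of roughly 3.89, the researcher would reject the null hypothesis of equal means; the daily-criticism group's mean (14) is visibly higher than the other two (9 and 10).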
• Show less | {"url":"http://www.chegg.com/homework-help/definitions/correlation-coefficient-31","timestamp":"2014-04-19T08:53:45Z","content_type":null,"content_length":"46607","record_id":"<urn:uuid:153f343e-bed6-4d2d-9f15-9e1f0c46c8c9>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00117-ip-10-147-4-33.ec2.internal.warc.gz"} |
Woodside, CA Algebra Tutor
Find a Woodside, CA Algebra Tutor
...I generally work with my students weekly in order to keep up with current materials and be able to prepare for any upcoming tests or quizzes. I am a cheerful and energetic person. I like to
have variety in the materials that I work with.
15 Subjects: including algebra 1, algebra 2, English, geometry
...I'm currently working at a company as an accountant. I have tutored accounting and finance for the past 6 years. I was born and raised in Hong Kong, where I lived for over 16 years.
7 Subjects: including algebra 1, accounting, Chinese, Microsoft Excel
...I could explain the main points precisely and concisely (Algebra is my PhD area). All have improved in their understanding and in various degree their grades. I provide extra practice and
drill problems. I have substantial experience tutoring Calculus (including AP, AB/BC) and Multivariate Calculus.
15 Subjects: including algebra 2, algebra 1, calculus, GRE
...Not only can I teach every detail behind solving each question, but I can also present useful shortcuts and strategies that will help drastically improve efficiency while taking the test.
Part of my approach includes introducing concepts and practice materials developed by various SAT prep g...
58 Subjects: including algebra 1, algebra 2, reading, English
...I have an engineering background, so I recognize that it is critical to break down problems and concepts into steps, the size of which depends directly on the needs of the student. I got my BS
from one of the premier IITs in India. I worked as a TA and RA there, while attending classes.
13 Subjects: including algebra 1, algebra 2, chemistry, calculus
Related Woodside, CA Tutors
Woodside, CA Accounting Tutors
Woodside, CA ACT Tutors
Woodside, CA Algebra Tutors
Woodside, CA Algebra 2 Tutors
Woodside, CA Calculus Tutors
Woodside, CA Geometry Tutors
Woodside, CA Math Tutors
Woodside, CA Prealgebra Tutors
Woodside, CA Precalculus Tutors
Woodside, CA SAT Tutors
Woodside, CA SAT Math Tutors
Woodside, CA Science Tutors
Woodside, CA Statistics Tutors
Woodside, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/Woodside_CA_Algebra_tutors.php","timestamp":"2014-04-18T16:12:51Z","content_type":null,"content_length":"23906","record_id":"<urn:uuid:ad6a92ab-3624-4fe3-99d7-40d5e16815d2>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00404-ip-10-147-4-33.ec2.internal.warc.gz"} |
Long Island City Algebra 1 Tutor
...I'm a scientist, and I love science! I've been tutoring for the ACT for ten years. Unfortunately, even AP level science classes often will not prepare a student for the ACT science.
17 Subjects: including algebra 1, calculus, geometry, biology
...Since I am currently in graduate school, I have a flexible schedule and can work around you or your child's schedule. I also have a car, so travel is not an issue. I'm excited to work with new
students and pass along some of the lessons I have learned! I have been a straight-A student my entire life, thanks in large part to dedicated study skills.
12 Subjects: including algebra 1, reading, writing, English
...Most of my experience in tutoring is for students at a junior high level. I have done a range of subjects and have especially had success with bilingual students (Serbo/Croatian and English). I
have always found great enjoyment in school and hope to pass on my positive energy to potential studen...
26 Subjects: including algebra 1, reading, English, anatomy
...When I was in my country, I used to tutor elementary, middle and high school students in Math and Science. When I arrived in the United States, I could not do this job because I did not have
the language to teach students since my first language is French. Now I am ready for that because I have increased my knowledge of English greatly.
15 Subjects: including algebra 1, chemistry, French, calculus
Hello parents and students. I am a NY State licensed math teacher with one year classroom experience and six years tutoring experience. I also have a Masters degree in math education.
14 Subjects: including algebra 1, reading, geometry, SAT math | {"url":"http://www.purplemath.com/long_island_city_algebra_1_tutors.php","timestamp":"2014-04-19T17:48:35Z","content_type":null,"content_length":"24156","record_id":"<urn:uuid:33f6cd31-33e0-4ec8-bd78-e9f4e511f85c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00513-ip-10-147-4-33.ec2.internal.warc.gz"} |
urgent please.needed for tomorrow
October 28th 2007, 03:24 AM #1
3. The entropy change, ΔS, associated with the isothermal (i.e.
constant temperature) change in volume of n moles of an ideal
gas from volume V1 to volume V2 is given by the equation
ΔS = nR ln(V2/V1)
where R is the gas constant.
(a) If a compression occurs (V2 < V1) use Equation 1 to deduce
whether there will be an increase (ΔS is positive) or
decrease (ΔS is negative) in the entropy of the gas.
(b) If an expansion occurs (V2 > V1) use Equation 1 to deduce
whether there will be an increase or decrease in the
entropy of the gas.
(c) The ideal gas law states that pV = nRT, where p is the
pressure of the gas. Use this expression and Equation 1 to
obtain an equation relating the entropy change to the initial
and final pressures (p1 and p2). (Note that n,R and T will
be constant at the two different pressures).
For part (a): entropy will decrease, because if V2 is smaller than V1 the quantity inside the brackets is a fraction less than 1; this makes ln(V2/V1) negative, and therefore the whole right-hand side negative.
For part (b): entropy will increase, because if V2 is larger than V1 the quantity inside the brackets is greater than 1; this makes ln(V2/V1) positive, and therefore the whole right-hand side positive.
Another way to think about it logically is through the definition of entropy: a measure of disorder, or of the number of ways of arranging the molecules of the gas (in this case). If you compress the
gas (part a), there are fewer ways to arrange the molecules. If you expand it, there are more.
For part (c): make V the subject of the ideal gas equation, V = nRT/p. Since n, R and T are the same at both pressures, substitute this into the entropy equation to get ΔS =
nR ln[(nRT/p2) / (nRT/p1)]. The nRT factors cancel, leaving (1/p2) / (1/p1); flipping (1/p1) gives (1/p2)*p1, which equals p1/p2. Therefore the full result is
ΔS = nR ln(p1/p2).
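The part (c) substitution, restated compactly in standard notation:

```latex
\Delta S \;=\; nR\,\ln\frac{V_{2}}{V_{1}}
         \;=\; nR\,\ln\frac{nRT/p_{2}}{nRT/p_{1}}
         \;=\; nR\,\ln\frac{p_{1}}{p_{2}}.
```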
July 24th 2008, 05:55 AM #2
Jul 2008 | {"url":"http://mathhelpforum.com/math-topics/21487-urgent-please-needed-tomorrow.html","timestamp":"2014-04-17T08:15:50Z","content_type":null,"content_length":"34038","record_id":"<urn:uuid:68f3d4dd-2b0c-4c11-8a89-03789f167638>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Haskell-cafe] Newbie: generating a truth table
Dougal Stanton ithika at gmail.com
Tue Feb 6 04:20:14 EST 2007
Quoth phiroc at free.fr, nevermore,
> loop = do
> and True True
> and True False
> and False True
> and False False
For this construct to work you'd have to have 'and' as a monadic
function - let's say, for argument's sake, in IO. And because there are
no results being carried around we can surmise that the type signature
> and :: Bool -> Bool -> IO ()
This will help you do the putStr statements too, although I think it can
be done better. For example, you want to create a truth table, and you
want to print the values of this table. These should really be done
separately, so you don't mix IO with pure computation.
You might want to write a function of the type:
> and :: Bool -> Bool -> (Bool, Bool, Bool)
instead, where the two arguments passed in are stored alongside the
> Is there a better way to repeatedly call "and"?
> Furthermore, is there a way in Haskell to loop through the Boolean values (True
> and False) and call "and" each time?
These can be answered in one sweep with list comprehensions. An
expression of the form
[ f x y | x <- xs, y <- ys ]
will form a list where every value is 'f x y' for all values of x and y
in the two source lists xs and ys. I hope this is of some help. I've
been purposefully vague in case this is a homework question. If not, let
us know and I'm sure people will be more than happy to provide fuller
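The comprehension idea from the reply generalizes directly; here is a sketch of the full truth table the original poster wants, shown in Python rather than the thread's Haskell purely to illustrate the "all pairs from two source lists" shape:

```python
# All (x, y, x AND y) rows, built by iterating over both boolean lists.
rows = [(x, y, x and y) for x in (True, False) for y in (True, False)]
for x, y, r in rows:
    print(x, y, r)
```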
More information about the Haskell-Cafe mailing list | {"url":"http://www.haskell.org/pipermail/haskell-cafe/2007-February/022221.html","timestamp":"2014-04-17T05:05:04Z","content_type":null,"content_length":"4243","record_id":"<urn:uuid:f31be387-349f-4dcc-a098-2577b9186a5a>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
Minimal Polynomials
February 3rd 2010, 07:39 AM #1
Jan 2010
I am wondering how to find the minimal polynomial of 2+\sqrt{i} over \mathbb{Q}.
I have got it to x^{4}-2x^{2}+9 but I don't know how to show that this is irreducible!
I don't know where you got that polynomial. I got:
$x=2+\sqrt{i}\Longrightarrow (x-2)^2=i\Longrightarrow (x-2)^4=-1\Longrightarrow x^4-8x^3+24x^2-32x+17=0$, and this polynomial is irreducible by Eisenstein's criterion with $p=2$ after carrying out
the transformation $x \rightarrow x+1$.
I actually meant \sqrt{2}+i.
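For the corrected element $\sqrt{2}+i$, here is a quick sketch (not from the thread) of where $x^{4}-2x^{2}+9$ comes from:

```latex
x=\sqrt{2}+i \;\Longrightarrow\; x^{2} = 1 + 2\sqrt{2}\,i
  \;\Longrightarrow\; (x^{2}-1)^{2} = -8
  \;\Longrightarrow\; x^{4}-2x^{2}+9 = 0.
```

Since $\sqrt{2}+i$ generates $\mathbb{Q}(\sqrt{2},i)$, which has degree 4 over $\mathbb{Q}$, this monic quartic must already be the minimal polynomial, and is therefore irreducible.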
February 3rd 2010, 09:39 AM #2
Oct 2009
February 3rd 2010, 10:11 AM #3
Jan 2010 | {"url":"http://mathhelpforum.com/advanced-algebra/126970-minimal-polynomials.html","timestamp":"2014-04-18T17:41:32Z","content_type":null,"content_length":"36139","record_id":"<urn:uuid:2630ce0e-ee45-477d-82af-f72de14d53e5>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00145-ip-10-147-4-33.ec2.internal.warc.gz"} |
Haskell Code by HsColour
{-# LANGUAGE TypeOperators, TypeFamilies #-}
-- | A small collection of specialized 'Int'-indexed priority queues dealing with both untagged 'Int's and association pairs with 'Int' keys. The implementation is a simple bootstrap from 'IntMap'. (Note: Duplicate keys /will/ be counted separately. No guarantees are made on the order in which values associated with equal keys are returned.)
module Data.Queue.IntQueue (IntQueue, IntAssocQueue) where
import Data.Queue.Class
import Data.Maybe
import qualified Data.IntMap as IM
-- | A 'Queuelike' type with @QueueKey IntQueue ~ Int@.
newtype IntQueue = IQ (IM.IntMap Int)
-- | A 'Queuelike' type with @QueueKey (IntAssocQueue e) ~ e :-> Int@.
newtype IntAssocQueue e = IAQ (IM.IntMap [e])
instance Queuelike IntQueue where
type QueueKey IntQueue = Int
x `insert` IQ m = IQ (IM.alter (Just . maybe 1 (+1)) x m)
xs `insertAll` q = q `merge` fromList xs
extract (IQ m) = fmap (\ ((k, ct), m') -> (k, IQ (if ct > 1 then IM.insert k (ct - 1) m' else m'))) (IM.minViewWithKey m)
isEmpty (IQ m) = IM.null m
empty = IQ IM.empty
singleton x = IQ (IM.singleton x 1)
fromList xs = IQ (IM.fromListWith (+) [(x, 1) | x <- xs])
IQ q1 `merge` IQ q2 = IQ (IM.unionWith (+) q1 q2)
mergeAll qs = IQ (IM.unionsWith (+) [m | IQ m <- qs])
instance Queuelike (IntAssocQueue e) where
type QueueKey (IntAssocQueue e) = e :-> Int
(v :-> k) `insert` IAQ m = IAQ (IM.alter (Just . maybe [v] (v:)) k m)
xs `insertAll` q = q `merge` fromList xs
extract (IAQ m) = fmap (\ ((k, v:vs), m') -> (v :-> k, IAQ (if null vs then m' else IM.insert k vs m'))) (IM.minViewWithKey m)
isEmpty (IAQ m) = IM.null m
empty = IAQ IM.empty
singleton (v :-> k) = IAQ (IM.singleton k [v])
fromList xs = IAQ (IM.fromListWith (++) [(k, [v]) | (v :-> k) <- xs])
IAQ q1 `merge` IAQ q2 = IAQ (IM.unionWith (++) q1 q2)
mergeAll qs = IAQ (IM.unionsWith (++) [m | IAQ m <- qs]) | {"url":"http://hackage.haskell.org/package/pqueue-mtl-1.0.6/docs/src/Data-Queue-IntQueue.html","timestamp":"2014-04-16T16:38:19Z","content_type":null,"content_length":"15432","record_id":"<urn:uuid:cc225740-cb65-4889-9db3-3b7bf8e285c9>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00622-ip-10-147-4-33.ec2.internal.warc.gz"} |
I want to find x = f(a,b,c,d); in other words, how does x change with a, b, c, d?
1. March 13th 2012, 04:11 AM #1
Last edited by zizodev; March 13th 2012 at 04:22 AM.
2. March 19th 2012, 12:57 AM #2
Re: I want to find the x=f(a,b,c,d) in other (how does x change with a,b,c,d).
Search Tags | {"url":"http://mathhelpforum.com/algebra/195902-i-want-find-x-f-b-c-d-other-how-does-x-change-b-c-d.html","timestamp":"2014-04-17T13:14:22Z","content_type":null,"content_length":"33216","record_id":"<urn:uuid:eaeec984-ed64-474b-bca5-ee36243afc01>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
At this moment, I am finishing my PhD on the topic of Boltzmann Machines under the guidance of Prof. Ferran Mazzanti from UPC. A Boltzmann Machine is a neural network with the ability to learn and
extrapolate probability distributions. The original model was born as a parallel implementation of the Simulated Annealing optimization algorithm, but it was later shown that a learning algorithm
could be applied to it, turning it into a Hopfield-like model. Learning in BMs is often carried out by a gradient descent process that needs Monte Carlo simulations over the neural network to compute
the quantities needed to update its connections; this makes the learning process slow. We are currently working on mathematical methods that can be used to speed up this process for any BM.
The PhD started with the analysis of a process known as decimation, which could be used to analytically compute the quantities needed to carry out the learning process on a BM; this work is due to L. Saul
and M. Jordan. However, this method could only be applied to certain topologies of BM, referred to as Boltzmann Trees, thus limiting its applicability.
We have proposed an extension to this method, which consists of applying a Walsh-Hadamard transform over the neural network, thus allowing any topology to be decimated. This method can be used on High
Order Boltzmann Machines (that is, BMs whose weights connect more than two units) and is able to analytically compute any order of BM to find the exact values needed to carry out the learning | {"url":"http://grsi.salle.url.edu/efarguell/research/","timestamp":"2014-04-19T14:29:29Z","content_type":null,"content_length":"6851","record_id":"<urn:uuid:6b9fe0f4-139b-41f9-8887-6fde248437ce>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
Help accumilative variables
03-28-2009 #1
Registered User
Join Date
Feb 2009
Help accumilative variables
OK, I'm trying to add numbers together, but keeping separate totals: one for the negative numbers and one for the positive numbers.
value = -2.3;
while (value <= 2.9) {
cout << value << " ";
value = value + .4;
}
Now after this I'm trying to add just the negatives and then just the positives, but I am not sure how. I will have the negative total in a variable neg and the positive total in a variable pos. I am not sure how to make it
only read the neg/pos numbers. I know about greater-than and less-than comparisons, but I just don't know how to set it up.
I know that every time through the loop I add the next value to the accumulator variable, switching to the other accumulator variable after crossing 0, but I'm not sure how to write this.
could someone help me?
This would seem to be what if-statements were born to do (if positive do this otherwise do that).
And also I am very very new to this.
Well, you have the logic of the code down and you know what you need to do. Simply write it out on paper in pseudocode first, post what you come up with, and we'll take it from there. A little
effort on your part goes a long way.
Warning: Opinions subject to change without notice
The C Library Reference Guide
Understand the fundamentals
Then have some more fun
the thing I don't know is if I have to create a new int.
You need to break it down into logical chunks of what needs to happen then put those chunks together using if statements.
You've assigned value as -2.3, so that will need to be looped until it hits 0; then you need to do the other loop to it.
Refer to your source on using if statements. Your best bet is to write as much as you can and then post it, you'll get more help that way
if (0 < value)
cout << I am not sure how to add it right here
I am not even sure if this is right. I would just think you do this.
The cout will happen if the statement in parentheses is true, i.e., if 0 < value. I leave it to you to determine (a) if 0 < value, whether value is positive or negative and then (b) how to use =
and + in some order to make the addition happen.
well I know this:
if (0 < value)
cout << not sure what to put here.
cout << not sure what to put here
Try putting your original code in there.
value = -2.3;
while (value <= 2.9) {
cout << value << " ";
value = value + .4;
if (0 < value)
cout << not sure what to do
cout << not sure what to do
I am not sure if above is right however
If you can add to value (as evidenced by the line "value = value + .4") why can't you add to pos and neg?
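For reference, the split-accumulator idea the replies are hinting at can be completed as follows. This is an illustrative sketch in Python rather than the thread's C++ (the thread deliberately leaves the exercise open), and it adds a small epsilon to the loop bound so floating-point drift doesn't skip the final 2.9:

```python
pos = 0.0  # running total of the positive values
neg = 0.0  # running total of the negative values

value = -2.3
while value <= 2.9 + 1e-9:  # epsilon guards against float round-off at the endpoint
    if value > 0:
        pos += value
    else:
        neg += value  # an exact zero would land here too, contributing nothing
    value += 0.4

# pos ends near 12.0 and neg near -7.8
```

For the sequence -2.3, -1.9, ..., 2.9 this leaves pos near 12.0 (sum of the positive values) and neg near -7.8 (sum of the negatives).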
03-28-2009 #15 | {"url":"http://cboard.cprogramming.com/cplusplus-programming/114030-help-accumilative-variables.html","timestamp":"2014-04-19T16:10:07Z","content_type":null,"content_length":"96231","record_id":"<urn:uuid:c1f9603b-c235-4e1a-af3e-3dfb1d654b53>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00286-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post a reply
ok, if we start with triangle number 1, a right triangle with two equal short sides of 2m, then the long side (the hypotenuse) is about 2.8m. That then becomes one of triangle number 2's short
sides, so triangle number 2's long side is 4m. That then becomes a side for triangle number 3, etc. This creates a spiral made of triangles that increase in size relative to the previous triangle (I've rounded the math up a bit).
using what you put above, the closest I can get right now is (using a,b,and c )
but that only looks to be part of it. | {"url":"http://www.mathisfunforum.com/post.php?tid=19870&qid=280968","timestamp":"2014-04-21T12:36:08Z","content_type":null,"content_length":"23416","record_id":"<urn:uuid:0a2e9fea-fe03-4230-83c3-97b137e28741>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
HowStuffWorks "Brakes: Leverage and Hydraulics"
Leverage and Hydraulics
In the figure below, a force F is being applied to the left end of the lever. The left end of the lever is twice as long (2X) as the right end (X). Therefore, on the right end of the lever a force
of 2F is available, but it acts through half of the distance (Y) that the left end moves (2Y). Changing the relative lengths of the left and right ends of the lever changes the multipliers.
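The lever trade-off described above can be restated as a balance of moments and of work:

```latex
\underbrace{F \cdot 2X}_{\text{input moment}} \;=\; \underbrace{2F \cdot X}_{\text{output moment}},
\qquad
\underbrace{F \cdot 2Y}_{\text{input work}} \;=\; \underbrace{2F \cdot Y}_{\text{output work}}.
```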
The basic idea behind any hydraulic system is very simple: Force applied at one point is transmitted to another point using an incompressible fluid, almost always an oil of some sort. Most brake
systems also multiply the force in the process. Here you can see the simplest possible hydraulic system:
Simple hydraulic system
In the figure above, two pistons (shown in red) are fit into two glass cylinders filled with oil (shown in light blue) and connected to one another with an oil-filled pipe. If you apply a downward
force to one piston (the left one, in this drawing), then the force is transmitted to the second piston through the oil in the pipe. Since oil is incompressible, the efficiency is very good -- almost
all of the applied force appears at the second piston. The great thing about hydraulic systems is that the pipe connecting the two cylinders can be any length and shape, allowing it to snake through
all sorts of things separating the two pistons. The pipe can also fork, so that one master cylinder can drive more than one slave cylinder if desired, as shown in here:
Master cylinder with two slaves
The other neat thing about a hydraulic system is that it makes force multiplication (or division) fairly easy. If you have read How a Block and Tackle Works or How Gear Ratios Work, then you know
that trading force for distance is very common in mechanical systems. In a hydraulic system, all you have to do is change the size of one piston and cylinder relative to the other, as shown here:
Hydraulic multiplication
To determine the multiplication factor in the figure above, start by looking at the size of the pistons. Assume that the piston on the left is 2 inches (5.08 cm) in diameter (1-inch / 2.54 cm
radius), while the piston on the right is 6 inches (15.24 cm) in diameter (3-inch / 7.62 cm radius). The area of a piston is Pi * r^2. The area of the left piston is therefore about 3.14 square inches, while the
area of the piston on the right is about 28.26 square inches. The piston on the right is nine times larger than the piston on the left. This means that any force applied to the left-hand piston will come out nine times
greater on the right-hand piston. So, if you apply a 100-pound downward force to the left piston, a 900-pound upward force will appear on the right. The only catch is that you will have to depress
the left piston 9 inches (22.86 cm) to raise the right piston 1 inch (2.54 cm).
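The arithmetic above is easy to verify in a few lines; this sketch (not from the article) uses the example's 100-pound input force:

```python
import math

# Piston radii from the example: 1 inch on the left, 3 inches on the right.
r_left, r_right = 1.0, 3.0

area_left = math.pi * r_left ** 2    # about 3.14 square inches
area_right = math.pi * r_right ** 2  # about 28.27 (the article rounds pi to 3.14)

factor = area_right / area_left      # the pi cancels, leaving 9.0
force_out = 100 * factor             # a 100-pound push becomes 900 pounds
stroke = 9 * 1                       # at the cost of 9 inches of travel per 1 inch out
```

Because the pi terms cancel, the multiplication factor depends only on the squared ratio of the radii, (3/1)^2 = 9.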
Next, we'll look at the role that friction plays in brake systems. | {"url":"http://auto.howstuffworks.com/auto-parts/brakes/brake-types/brake1.htm","timestamp":"2014-04-17T18:33:56Z","content_type":null,"content_length":"120754","record_id":"<urn:uuid:704a065f-988d-4048-b1d2-3aa64f365d60>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Class of Subexponential Distributions
The Annals of Probability
The class $\mathscr{J}$ of subexponential distributions is characterized by $F(0) = 0, 1 - F^{(2)} (x) \sim 2\{1 - F(x)\}$ as $x \rightarrow \infty$. New properties of the class $\mathscr{J}$ are
derived as well as for the more general case where $1 - F^{(2)} (x) \sim \beta\{1 - F(x)\}$. An application to transient renewal theory illustrates these results as does an adaptation of a result of
Greenwood on randomly stopped sums of subexponentially distributed random variables. | {"url":"http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.aop/1176996225&page=record","timestamp":"2014-04-24T14:05:54Z","content_type":null,"content_length":"29281","record_id":"<urn:uuid:0aa89feb-d7cb-4efe-abcd-6add3f762619>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00443-ip-10-147-4-33.ec2.internal.warc.gz"} |
Local data for elliptic curves over number fields
Let \(E\) be an elliptic curve over a number field \(K\) (including \(\QQ\)). There are several local invariants at a finite place \(v\) that can be computed via Tate’s algorithm (see [Sil2] IV.9.4
or [Ta]).
These include the type of reduction (good, additive, multiplicative), a minimal equation of \(E\) over \(K_v\), the Tamagawa number \(c_v\), defined to be the index \([E(K_v):E^0(K_v)]\) of the
points with good reduction among the local points, and the exponent of the conductor \(f_v\).
The functions in this file will typically be called by using local_data.
sage: K.<i> = NumberField(x^2+1)
sage: E = EllipticCurve([(2+i)^2,(2+i)^7])
sage: pp = K.fractional_ideal(2+i)
sage: da = E.local_data(pp)
sage: da.has_bad_reduction()
sage: da.has_multiplicative_reduction()
sage: da.kodaira_symbol()
sage: da.tamagawa_number()
sage: da.minimal_model()
Elliptic Curve defined by y^2 = x^3 + (4*i+3)*x + (-29*i-278) over Number Field in i with defining polynomial x^2 + 1
An example to show how the Neron model can change as one extends the field:
sage: E = EllipticCurve([0,-1])
sage: E.local_data(2)
Local data at Principal ideal (2) of Integer Ring:
Reduction type: bad additive
Local minimal model: Elliptic Curve defined by y^2 = x^3 - 1 over Rational Field
Minimal discriminant valuation: 4
Conductor exponent: 4
Kodaira Symbol: II
Tamagawa Number: 1
sage: EK = E.base_extend(K)
sage: EK.local_data(1+i)
Local data at Fractional ideal (i + 1):
Reduction type: bad additive
Local minimal model: Elliptic Curve defined by y^2 = x^3 + (-1) over Number Field in i with defining polynomial x^2 + 1
Minimal discriminant valuation: 8
Conductor exponent: 2
Kodaira Symbol: IV*
Tamagawa Number: 3
Or how the minimal equation changes:
sage: E = EllipticCurve([0,8])
sage: E.is_minimal()
sage: EK = E.base_extend(K)
sage: da = EK.local_data(1+i)
sage: da.minimal_model()
Elliptic Curve defined by y^2 = x^3 + (-i) over Number Field in i with defining polynomial x^2 + 1
• [Sil2] Silverman, Joseph H., Advanced topics in the arithmetic of elliptic curves. Graduate Texts in Mathematics, 151. Springer-Verlag, New York, 1994.
• [Ta] Tate, John, Algorithm for determining the type of a singular fiber in an elliptic pencil. Modular functions of one variable, IV, pp. 33–52. Lecture Notes in Math., Vol. 476, Springer,
Berlin, 1975.
• John Cremona: First version 2008-09-21 (refactoring code from ell_number_field.py and ell_rational_field.py)
• Chris Wuthrich: more documentation 2010-01
class sage.schemes.elliptic_curves.ell_local_data.EllipticCurveLocalData(E, P, proof=None, algorithm='pari', globally=False)
Bases: sage.structure.sage_object.SageObject
The class for the local reduction data of an elliptic curve.
Currently supported are elliptic curves defined over \(\QQ\), and elliptic curves defined over a number field, at an arbitrary prime or prime ideal.
□ E – an elliptic curve defined over a number field, or \(\QQ\).
□ P – a prime ideal of the field, or a prime integer if the field is \(\QQ\).
□ proof (bool)– if True, only use provably correct methods (default controlled by global proof module). Note that the proof module is number_field, not elliptic_curves, since the functions that
actually need the flag are in number fields.
□ algorithm (string, default: “pari”) – Ignored unless the base field is \(\QQ\). If “pari”, use the PARI C-library ellglobalred implementation of Tate’s algorithm over \(\QQ\). If “generic”,
use the general number field implementation.
This function is not normally called directly by users, who may access the data via methods of the EllipticCurve classes.
sage: from sage.schemes.elliptic_curves.ell_local_data import EllipticCurveLocalData
sage: E = EllipticCurve('14a1')
sage: EllipticCurveLocalData(E,2)
Local data at Principal ideal (2) of Integer Ring:
Reduction type: bad non-split multiplicative
Local minimal model: Elliptic Curve defined by y^2 + x*y + y = x^3 + 4*x - 6 over Rational Field
Minimal discriminant valuation: 6
Conductor exponent: 1
Kodaira Symbol: I6
Tamagawa Number: 2
sage.schemes.elliptic_curves.ell_local_data.check_prime(K, P)
Function to check that \(P\) determines a prime of \(K\), and return that ideal.
□ K – a number field (including \(\QQ\)).
□ P – an element of K or a (fractional) ideal of K.
□ If K is \(\QQ\): the prime integer equal to or which generates \(P\).
□ If K is not \(\QQ\): the prime ideal equal to or generated by \(P\).
If \(P\) is not a prime and does not generate a prime, a TypeError is raised.
sage: from sage.schemes.elliptic_curves.ell_local_data import check_prime
sage: check_prime(QQ,3)
sage: check_prime(QQ,ZZ.ideal(31))
sage: K.<a>=NumberField(x^2-5)
sage: check_prime(K,a)
Fractional ideal (a)
sage: check_prime(K,a+1)
Fractional ideal (a + 1)
sage: [check_prime(K,P) for P in K.primes_above(31)]
[Fractional ideal (5/2*a + 1/2), Fractional ideal (5/2*a - 1/2)] | {"url":"http://sagemath.org/doc/reference/plane_curves/sage/schemes/elliptic_curves/ell_local_data.html","timestamp":"2014-04-20T23:41:45Z","content_type":null,"content_length":"70613","record_id":"<urn:uuid:0db2c383-eb04-419f-8402-5fd58aa06d8d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
Method of image charges: any extension for oscillating fields?
Sure, take a look at image currents. Many antennas, i.e. whip and monopole, use a ground plane and image theory in their operation.
Of course, alternating fields can produce alternating images.
However, this does not imply that this would lead to a useful image method of calculation.
For static fields, the method is obvious and simple, when applicable.
However, for alternating fields, delays have to be taken into account.
For my application (pedagogical), I need to keep a good account of these causal delays.
Would you have some reference for your suggestion? | {"url":"http://www.physicsforums.com/showthread.php?t=325357","timestamp":"2014-04-18T03:07:59Z","content_type":null,"content_length":"26650","record_id":"<urn:uuid:0488bac4-1fc0-4a13-bf02-5c1afd6635f9>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |