After my brief intro, it's time I got this problem off my chest.
I'm into online mapping at the moment and have successfully been using the following formula to find 'nearest' whatevers, based on someone's IP address mapped to a guesstimated but, at region level,
fairly accurate latitude/longitude pair.
D = 6,370,997 * arccos( sin(LAT1)*sin(LAT2) + cos(LAT1)*cos(LAT2)*cos(LONG1-LONG2) )
D = distance in meters
LAT1 = latitude of point1 (in radians)
LONG1 = longitude of point1 (in radians)
LAT2 = latitude of point2(in radians)
LONG2 = longitude of point2 (in radians)
arccos = arc cosine
cos = cosine
sin = sine
and 6,370,997 is the radius of the sphere in meters.
Latitude and longitude can be converted from degrees to radians by dividing by 57.29577951
But now I have a different need for which I think I can re-use this formula by re-arranging the variables, but for the life of me I can't work it out. Grammar school was 6 years ago and it seems
almost all gone!
Here's what I'd like to achieve and I'm hoping you could give me some directions.
The above gives me the distance between two lat/lon pairs. But now I want to provide the distance, say 250 kilometers, and one lat/lon pair. What I need back is just four lat/lon pairs so I can draw
a box around the base lat/lon, or 'centroid', if you like.
The 4 points can just be 12 o'clock, 3 o'clock, 6 o'clock and 9 o'clock if you think of the center of a clock as the given lat/lon pair. That should make it easy since that way you only need 3
different latitudes and 3 longitudes.
For 250 km the equation becomes:
250000 = 6370997 * arccos( sin($lat1) * sin($lat2) + cos($lat1) * cos($lat2) * cos(-1.7 - $long2) )
Or 250000 / 6370997 ≈ 0.04, i.e.
0.04 = arccos( sin($lat1) * sin($lat2) + cos($lat1) * cos($lat2) * cos(-1.7 - $long2) )
$lat1 and $lon1 will be given, that's the 'centroid' I was talking about. For instance, if I base the calculations on the centroid being located in Little Rock, Arkansas that lat/lon could be:
Latitude: 34.750971
Longitude: -92.345512
Put a cross hair over that with lines 250km long north, east, south and west of the centroid and I get 4 lat/lon pairs on those simple axes.
But how? I've been trying to reverse engineer this for days in order to rearrange the elements, but it doesn't help that I learned maths in Dutch, 6-12 years ago.
Can anyone guide me to the next step in solving this? Googling around didn't yield anything useful, so I was hoping someone here would know their inverse cosine from their sines etc.
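For anyone landing here with the same question: the box points can be computed directly, without inverting the arccos formula. Going due north or south, d metres corresponds to d/R radians of latitude; going due east or west, the same distance corresponds to d/(R*cos(lat)) radians of longitude. A rough Python sketch, treating the Earth as a sphere of the same radius used above (the function names are mine):

```python
import math

R = 6370997.0  # sphere radius in metres, same value as in the distance formula

def great_circle(lat1, lon1, lat2, lon2):
    """The distance formula from the post, with degrees converted to radians."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon1 - lon2)
    c = math.sin(p1) * math.sin(p2) + math.cos(p1) * math.cos(p2) * math.cos(dlon)
    return R * math.acos(max(-1.0, min(1.0, c)))  # clamp guards rounding error

def bounding_points(lat, lon, d):
    """Points d metres at 12, 3, 6 and 9 o'clock from (lat, lon), in degrees."""
    dlat = math.degrees(d / R)                                   # north-south offset
    dlon = math.degrees(d / (R * math.cos(math.radians(lat))))   # east-west offset
    return {"north": (lat + dlat, lon), "south": (lat - dlat, lon),
            "east": (lat, lon + dlon), "west": (lat, lon - dlon)}

box = bounding_points(34.750971, -92.345512, 250000)  # the Little Rock example
```

The north/south points come out exactly 250 km away; the east/west points are measured along the parallel, which differs from the great-circle distance by only a few metres at this latitude — more than close enough for drawing a box.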
trig equation... :(
Hey everyone,
I have to solve this equation below:
After too many simplifications and factorizations, I got to: (I hope it's right tho)
So yeah, I factorized everything pretty much, but what step should I take after that so I can solve this equation?
BIOL398-03/S13:Class Journal Week 6
From OpenWetWare
Revision as of 03:03, 22 February 2013
Laura Terada Week 6 Journal
1. What was the purpose of this assignment?
□ The purpose of this assignment was for us to think about what we want to do for our research project. We narrowed down a topic and identified two outside sources.
2. Which project is easiest? Why?
□ The third option might be the easiest because it's only factoring one other variable, the ammonia feed.
3. Which project is hardest? Why?
□ I think 4 is the hardest because it asks us to develop a model of an ammonia sensor. This requires us to think about 2 nitrogen sensors, which I think might be difficult to think of.
4. How might you tweak/revise/recreate the matlab codes we've developed to analyze your model?
□ We will probably have to introduce new variables and set of equations if we want to factor in another component of the system.
Laura Terada 00:59, 22 February 2013 (EST)
Kevin McKay week 6 Journal
• The purpose of this assignment was to use the skills we have been practicing over the last few weeks in modeling and analyzing biological systems.
• Personally, I find none of them to be easy; I am not great at math modeling.
• Number 4 seems to be the hardest one because that is the one I would find hardest to begin.
• I am going to have to tweak the differential equations so I get more accurate graph results.
Kevin Matthew McKay 14:04, 21 February 2013 (EST)
Kasey O'Connor Week 6 Shared Journal
1. What was the purpose of this assignment?
□ This assignment allowed us to begin to think about our project topic. It also gave us the opportunity to find sources other than the original terSchure papers so we have more information to
be able to completely understand our topic.
2. Which project is easiest? Why?
□ I think they all seem challenging; however, I think project three is the easiest. It only requires looking at the ammonia feed, and just altering the current models we have from per
cell to per chemostat.
3. Which project is hardest? Why?
□ I think project two might be the hardest. From what we discussed in class, it would require having an in depth knowledge of anaerobic and aerobic reactions in yeast, as well as how to model
the reactions based on the amount of glucose.
4. How might you tweak/revise/recreate the matlab codes we've developed to analyze your model?
□ We will have to determine the constant to keep the ammonia feed rate at, and then run tests at different dilution rates to see what happens to the model. We can look at the changes made to
the model and graphs if the rate coming in is different than the rate coming out.
Kasey E. O'Connor 18:26, 21 February 2013 (EST)
James P. McDonald Week 6 Journal
1. What was the purpose of this assignment?
□ The purpose of this assignment was to obtain resources that will aid us in our modeling project. It presents a starting point for us and got us to choose which project we would like to work on.
2. Which project is easiest? Why?
□ All of the projects seem difficult as math modeling is difficult for me, but I would have to say 3 looks easiest at first glance, as you only have to account for one variable, the ammonia feed.
3. Which project is hardest? Why?
□ The fourth project seems the hardest because you have to account for a two-component sensing system and I am unsure how I would begin on that.
4. How might you tweak/revise/recreate the matlab codes we've developed to analyze your model?
□ We will have to add new equations to the chemostat model to account for the oxygen consumption and carbon dioxide production.
James P. McDonald 21:19, 21 February 2013 (EST)
Matthew E. Jurek Week 6
• What was the purpose of the assignment?
□ The purpose of this assignment was to get us thinking about our project. This was a three-step process. First we had to decide on which of the four projects we'd be working on. Not only that,
we had to think about what approach we would take regarding the specific problem we chose. Then we had to find a biologically relevant source that will hopefully help us fill in any missing
pieces of information needed to complete our system of equations. Lastly, we had to look for a mathematical reference that will help us in the technical aspect of modeling the scientific
process we've chosen.
• Which project is easiest? Why?
□ I don't think any of them are easy. The idea of modeling is somewhat overwhelming for me. After looking at the project prompt I find myself asking where to begin on all of them. However, in
class today we spent some time on the third project which helped me realize how/where to start on that one.
• Which project is hardest? Why?
□ I think the fourth project is rather difficult. When dealing with a two-component regulatory system, there are obviously two components. This means that something triggers a chain reaction.
There's a sensor that senses an outside factor, which then triggers some intracellular reaction. I think it would be hard to factor all of these things into a model since there are so many
moving parts.
• How might you tweak/revise/recreate the matlab codes we've developed to analyze your model?
□ As mentioned in my journal, a few additional things need to be considered to properly model the third project. We will be incorporating the influx within the chemostat. This will be added to
the dni/dt equation that was derived in class today. We also feel that the large amount of glutamate seen in the figures of the paper is due to something outside of the conversion reactions
discussed in class. To account for this, we will be including this additional glutamate source into the dx/dt equation derived in class. This will require outside research to fully achieve
what we hope to model.
• Matthew E. Jurek 23:44, 21 February 2013 (EST):
Anthony J. Wavrin Week 6
1. What was the purpose of this assignment?
□ The purpose of this assignment is to have us get comfortable looking at scientific literature as a source. Additionally, creating our model for our presentations will be a combination of math
skills and biological understanding. There will be information about yeast and their functioning that is necessary to know in order to create an accurate model, thus we will have to resort to
this literature to find this information.
2. Which project is easiest? Why?
□ I believe project 1 ("terSchure and coauthors (Microbiology, 1995, 141:1101-1108) considered other conditions, such as changing the dilution rate. Apply our chemostat model to these
conditions.") is the easiest because it is more of an exploratory project. There is already a model made; it is just changing the parameters of that model. It does not require a deep level of
thinking to type in the different values.
3. Which project is hardest? Why?
□ I believe project 4 ("The last line of the terSchure paper is as follows: 'If the ammonia concentration is the regulator, this may imply that S. cerevisiae has an ammonia sensor which could be
a two-component sensing system for nitrogen, as has been found in gram-negative bacteria (4).' Develop and analyze a model based on this concept.") is the hardest because we do not have any
foundation for this model. The other ideas we have played with in class and discussed. Project 4 is a project from scratch and requires a deep level of understanding on the biological side
(what is really happening in this two-component sensing system) and in math (creating a working model from nothing).
4. How might you tweak/revise/recreate the matlab codes we've developed to analyze your model?
□ Luckily we have quite a bit of code written in MATLAB. Simply swapping a constant parameter for a different set of values will be the easiest way to tweak the codes. Also,
using Michaelis-Menten type modeling will help in adding new components to the model.
Anthony J. Wavrin 23:56, 21 February 2013 (EST)
Elizabeth Polidan Week 6 Shared Journal
1. What was the purpose of this assignment?
□ This assignment got us thinking about our projects. It also pushed us to begin finding both biology/biochemistry and math resources to help as we develop and implement our model.
2. Which project is easiest? Why?
□ The first project seems easiest because all one must do is vary the ammonium concentrations.
3. Which project is hardest? Why?
□ The fourth project has the potential of being the most difficult. It is so open ended and one could get lost.
4. How might you tweak/revise/recreate the matlab codes we've developed to analyze your model?
□ I have to figure out how I am going to modify the set of equations to reflect a two-component sensing system. I have a ways to go before I can give more details.
Elizabeth Polidan 00:48, 22 February 2013 (EST)
Paul Magnano
1. What was the purpose of this assignment? The purpose of this assignment was to help us begin our project by forcing us to find and analyze two sources for the topic we picked.
2. Which project is easiest? Why? The third choice only has one additional variable, the ammonia feed, so I feel like that would be the easiest.
3. Which project is hardest? Why? The fourth choice asks you to consider 2 different nitrogen sensors, a subject we haven't talked about, so I feel like that would be the most difficult.
4. How might you tweak/revise/recreate the matlab codes we've developed to analyze your model? We might add a fourth equation to our script (incorporating the previous three for nitrogen, carbon, and
yeast) to account for oxygen consumption and carbon dioxide production.
Paul Magnano 01:00, 22 February 2013 (EST)
Salman Ahmad 6 Shared Journal
1. What was the purpose of this assignment?
□ The purpose of this assignment was for us to decide what project we wanted to work on. It was also useful to learn how to find papers to help us with specific topics in our projects.
2. Which project is easiest? Why?
□ All the projects seem like they will be difficult, but the third one might be a little bit easier because we went over parts of it in class today.
3. Which project is hardest? Why?
□ The fourth project seems like it could be the hardest. It involves developing and analyzing a new model for the concept. I don't think I could develop a brand new model and get it to work in
MATLAB yet.
4. How might you tweak/revise/recreate the matlab codes we've developed to analyze your model?
□ We will have to change the MATLAB variables to try out the different conditions that our project requires. The code should stay the same because all the formulas are the same, but the values
of the variables (the concentrations in particular) will have to be changed.
Salman Ahmad 01:41, 22 February 2013 (EST)
Helena M. Olivieri 6 Shared Journal
1. What was the purpose of this assignment?
□ The purpose of this assignment was to encourage us to delve further into scientific paper analysis and research. Through this assignment we were also encouraged to hone in and begin the
preparation process for our presentation.
2. Which project is easiest? Why?
□ I find all of the projects slightly daunting for various reasons. I'm not sure I could identify a single one as the easiest. I suppose any of the first three would be the easiest because we
have worked through various aspects of each in class.
3. Which project is hardest? Why?
□ The fourth project seems to be the most open-ended and likely the most difficult because it requires the development of a completely original model.
4. How might you tweak/revise/recreate the matlab codes we've developed to analyze your model?
□ We will be manipulating the current chemostat models and, thus, codes, in order to account for the effects of a possible fluctuating dilution rate.
Helena M. Olivieri 02:01, 22 February 2013 (EST)
commutative and additive rotation
September 19th 2009, 02:57 AM #1
Sep 2007
Prove mathematically that two successive rotations on the XY plane are
commutative and additive. IE:
Rz(THETA1) . Rz(THETA2) = Rz(THETA2) . Rz(THETA1) = Rz(THETA1 + THETA2)
WHERE: z, 1, 2 are subscripts
How would I solve that?
Last edited by taurus; September 19th 2009 at 03:08 AM.
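One standard route: write Rz(THETA) as the 2x2 matrix [[cos THETA, -sin THETA], [sin THETA, cos THETA]], multiply two of them out, and apply the angle-addition identities cos(a+b) = cos a cos b - sin a sin b and sin(a+b) = sin a cos b + cos a sin b; commutativity then follows because a+b = b+a. A quick numeric sanity check of the identity in Python (a sketch, not a proof):

```python
import math

def rz(theta):
    """2x2 rotation matrix about the z-axis (acting on the XY plane)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t1, t2 = 0.7, 1.9          # two arbitrary angles in radians
lhs = matmul(rz(t1), rz(t2))
rhs = matmul(rz(t2), rz(t1))
combined = rz(t1 + t2)
```

All three matrices agree entry by entry (up to floating-point error), which is exactly what the angle-addition formulas say symbolically.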
Lucerne, CO ACT Tutor
Find a Lucerne, CO ACT Tutor
...If you are looking for help with Calculus, I believe that I can help you. What I can offer is an understanding of Calculus with an ability to explain the concepts, problem solving techniques,
tricks and methods in ways that are easy to understand. I don't focus on jargon, but rather simple language that allows you to understand the material and actually learn it.
14 Subjects: including ACT Math, calculus, GRE, physics
...Looking up at the sky (at night) gives us a rare opportunity to see and understand the world beyond on normal experience. Astronomy gives us some of the basic tools to learn about our world
and our universe (from the viewing of comets, asteroids, and moons) to understanding why stars die and wh...
47 Subjects: including ACT Math, chemistry, physics, calculus
...As a teacher I have taught for 4 years, and tutor privately for 3 years prior to my teaching career. During my educational career I have worked with a wide variety of mathematics classes
ranging from 7th Grade Math to Pre-Calculus, Secondary English electives, and 6-12th grade physical education...
11 Subjects: including ACT Math, geometry, algebra 1, precalculus
...Since then, I have found that all areas of a student's interest can be used for learning algebra. Algebra should not be just memorization. It is fun!
19 Subjects: including ACT Math, accounting, algebra 1, algebra 2
...I would be quite comfortable and effective teaching beginning classical guitar I have been a professional software engineer since 1978. I am fluent in several computer languages, including
Java, Octave, Groovy and Python. I have worked in small startups and large corporations, including Apple Computer.
17 Subjects: including ACT Math, geometry, algebra 1, statistics
5.1 ChainedHashTable: Hashing with Chaining
A ChainedHashTable data structure uses hashing with chaining to store data as an array, t, of lists. An integer, n, keeps track of the total number of items in all lists (see Figure 5.1):
List<T>[] t;
int n;
The hash value of a data item x, denoted hash(x), is a value in the range {0,...,t.length-1}. All items with hash value i are stored in the list at t[i]. To ensure that lists don't get too long, we maintain the invariant
    n ≤ t.length
so that the average number of elements stored in one of these lists is n/t.length ≤ 1.
To add an element, x, to the hash table, we first check if the length of t needs to be increased and, if so, we grow t. With this out of the way we hash x to get an integer, i, in the range {0,...,t.length-1}, and we append x to
the list t[i]:
boolean add(T x) {
    if (find(x) != null) return false;
    if (n+1 > t.length) resize();
    t[hash(x)].add(x);
    n++;
    return true;
}
Growing the table, if necessary, involves doubling the length of t and reinserting all elements into the new table. This is exactly the same strategy used in the implementation of ArrayStack and the
same result applies: The cost of growing is only constant when amortized over a sequence of insertions (see Lemma 2.1).
Besides growing, the only other work done when adding x to a ChainedHashTable involves appending x to the list t[hash(x)]. For any of the list implementations described in Chapters 2 or 3, this takes only constant
time.
To remove an element, x, from the hash table, we iterate over the list t[hash(x)] until we find x so that we can remove it:
T remove(T x) {
    Iterator<T> it = t[hash(x)].iterator();
    while (it.hasNext()) {
        T y = it.next();
        if (y.equals(x)) {
            it.remove();
            n--;
            return y;
        }
    }
    return null;
}
This takes O(n_{hash(x)}) time, where n_i denotes the length of the list stored at t[i].
Searching for the element x in a hash table is similar. We perform a linear search on the list t[hash(x)]:
T find(Object x) {
    for (T y : t[hash(x)])
        if (y.equals(x))
            return y;
    return null;
}
Again, this takes time proportional to the length of the list t[hash(x)].
The performance of a hash table depends critically on the choice of the hash function. A good hash function will spread the elements evenly among the t.length lists, so that the expected size of the list t[hash(x)] is O(n/t.length) = O(1).
On the other hand, a bad hash function will hash all values (including x) to the same table location, in which case the size of the list t[hash(x)] will be n. In the next section we describe a good hash function.
5.1.1 Multiplicative Hashing
Multiplicative hashing is an efficient method of generating hash values based on modular arithmetic (discussed in Section 2.3) and integer division. It uses the div operator, which calculates the
integral part of a quotient, while discarding the remainder. Formally, for any integers a ≥ 0 and b ≥ 1, a div b = ⌊a/b⌋.
In multiplicative hashing, we use a hash table of size 2^d for some integer d (called the dimension). The formula for hashing an integer x ∈ {0,...,2^w − 1} is
    hash(x) = ((z·x) mod 2^w) div 2^(w−d) .
Here, z is a randomly chosen odd integer in {1,...,2^w − 1}. This hash function can be realized very efficiently by observing that, by default, operations on integers are already done modulo 2^w, where w is the number of
bits in an integer. (See Figure 5.2.) Furthermore, integer division by 2^(w−d) is equivalent to dropping the rightmost w − d bits in a binary representation (which is implemented by shifting the bits right by w − d).
In this way, the code that implements the above formula is simpler than the formula itself:
int hash(Object x) {
    return (z * x.hashCode()) >>> (w-d);
}
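The bit-level trick is easy to experiment with outside Java. Here is a small Python sketch (the parameter values w = 32 and d = 8 are arbitrary choices for illustration) showing that the shift version computes the same value as the literal mod/div formula:

```python
import random

w, d = 32, 8                       # word size and table dimension (2**d slots)
z = random.randrange(1, 2**w) | 1  # random *odd* multiplier in {1, ..., 2**w - 1}

def hash_formula(x):
    """((z*x) mod 2**w) div 2**(w-d), written out literally."""
    return ((z * x) % 2**w) // 2**(w - d)

def hash_shift(x):
    """Same value computed by masking off the low w bits and shifting,
    mirroring the Java code above."""
    return ((z * x) & (2**w - 1)) >> (w - d)
```

Every result lands in {0,...,2^d − 1}, so it can be used directly to index a table of 2^d lists.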
The following lemma, whose proof is deferred until later in this section, shows that multiplicative hashing does a good job of avoiding collisions:
Lemma 5.1 Let x and y be any two values in {0,...,2^w − 1} with x ≠ y. Then Pr{hash(x) = hash(y)} ≤ 2/2^d.
With Lemma 5.1, the performance of remove(x) and find(x) are easy to analyze:
Lemma 5.2 For any data value x, the expected length of the list t[hash(x)] is at most n_x + 2, where n_x is the number of occurrences of x in the hash table.
Proof. Let S be the (multi-)set of elements stored in the hash table that are not equal to x. For an element y ∈ S, define the indicator variable
    I_y = 1 if hash(x) = hash(y), and I_y = 0 otherwise,
and notice that, by Lemma 5.1, E[I_y] ≤ 2/2^d = 2/t.length. The expected length of the list t[hash(x)] is given by
    E[|t[hash(x)]|] = E[n_x + Σ_{y∈S} I_y]
                    = n_x + Σ_{y∈S} E[I_y]
                    ≤ n_x + (n − n_x)·(2/t.length)
                    ≤ n_x + 2 ,
where the last step uses the invariant n ≤ t.length,
as required.
Now, we want to prove Lemma 5.1, but first we need a result from number theory. In the following proof, we use the notation (b_r,...,b_0)_2 to denote Σ_{i=0}^{r} b_i·2^i, where each b_i is a bit, either 0 or 1. In other words, (b_r,...,b_0)_2 is the
integer whose binary representation is given by b_r,...,b_0. We use ⋆ to denote a bit of unknown value.
Lemma 5.3 Let S be the set of odd integers in {1,...,2^w − 1}; let q and i be any two elements in S. Then there is exactly one value z ∈ S such that zq mod 2^w = i.
Proof. Since the number of choices for z and i is the same, it is sufficient to prove that there is
at most
one value z ∈ S that satisfies zq mod 2^w = i.
Suppose, for the sake of contradiction, that there are two such values z and z', with z > z'. Then
    zq mod 2^w = z'q mod 2^w = i .
But this means that
    (z − z')q mod 2^w = 0 ,
so there is an integer k such that (z − z')q = k·2^w. Thinking in terms of binary numbers, we have
    (z − z')q = k·(1, 0,...,0)_2 , with w trailing 0's,
so that the w trailing bits in the binary representation of (z − z')q are all 0's.
Furthermore z − z' ≠ 0, since z ≠ z'. Since q is odd, it has no trailing 0's in its binary representation:
    q = (⋆,...,⋆, 1)_2 .
Since z − z' < 2^w, z − z' has fewer than w trailing 0's in its binary representation:
    z − z' = (⋆,...,⋆, 1, 0,...,0)_2 .
Therefore, the product (z − z')q has fewer than w trailing 0's in its binary representation:
    (z − z')q = (⋆,...,⋆, 1, 0,...,0)_2 .
Therefore (z − z')q cannot satisfy (z − z')q = k·2^w, yielding a contradiction and completing the proof.
The utility of Lemma 5.3 comes from the following observation: If z is chosen uniformly at random from S, then zq mod 2^w is uniformly distributed over S. In the following proof, it helps to think of the binary
representation of zq mod 2^w, which consists of w − 1 random bits followed by a 1.
Proof of Lemma 5.1. First we note that the condition hash(x) = hash(y) is equivalent to the statement ``the highest-order d bits of zx mod 2^w and the highest-order d bits of zy mod 2^w are the same.'' A necessary condition of that statement is that the
highest-order d bits in the binary representation of z(x − y) mod 2^w are either all 0's or all 1's. That is,
    z(x − y) mod 2^w = (0,...,0, ⋆,...,⋆)_2     (∗)
when zx mod 2^w > zy mod 2^w, or
    z(x − y) mod 2^w = (1,...,1, ⋆,...,⋆)_2     (∗∗)
when zx mod 2^w < zy mod 2^w. Therefore, we only have to bound the probability that z(x − y) mod 2^w looks like (∗) or (∗∗).
Let q be the unique odd integer such that (x − y) mod 2^w = q·2^r for some integer r ≥ 0. By Lemma 5.3, the binary representation of zq mod 2^w has w − 1 random bits, followed by a 1:
    zq mod 2^w = (b_{w−1},...,b_1, 1)_2 .
Therefore, the binary representation of z(x − y) mod 2^w = zq·2^r mod 2^w has w − r − 1 random bits, followed by a 1, followed by r 0's:
    z(x − y) mod 2^w = (b_{w−r−1},...,b_1, 1, 0,...,0)_2 .
We can now finish the proof: If r > w − d, then the d highest-order bits of z(x − y) mod 2^w contain both 0's and 1's, so the probability that z(x − y) mod 2^w looks like (∗) or (∗∗) is 0. If r = w − d, then the probability of looking like (∗)
is 0, but the probability of looking like (∗∗) is 1/2^(d−1) = 2/2^d (since we must have b_1,...,b_{d−1} = 1,...,1). If r < w − d, then we must have b_{w−r−1},...,b_{w−r−d} = 0,...,0 or b_{w−r−1},...,b_{w−r−d} = 1,...,1. The probability of each of these cases is 1/2^d and they are mutually exclusive, so the probability of either of these cases is 2/2^d. This completes the proof.
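The collision bound of Lemma 5.1 can also be checked empirically. The following Python sketch (a test harness of my own, with deliberately small parameters w = 16 and d = 4 so it runs quickly) estimates Pr[hash(x) = hash(y)] over random odd multipliers z for one fixed pair x ≠ y:

```python
import random

w, d = 16, 4  # small word size and dimension, chosen for a fast experiment

def mult_hash(z, x):
    """Multiplicative hash: ((z*x) mod 2**w) div 2**(w-d)."""
    return ((z * x) % 2**w) >> (w - d)

def collision_rate(x, y, trials=20000):
    """Fraction of random odd z in {1,...,2**w - 1} for which x and y collide."""
    hits = 0
    for _ in range(trials):
        z = random.randrange(1, 2**w) | 1
        if mult_hash(z, x) == mult_hash(z, y):
            hits += 1
    return hits / trials

rate = collision_rate(12345, 54321)
bound = 2 / 2**d  # the lemma's guarantee: at most 2/2**d
```

Across runs, the observed rate stays at or below bound = 0.125 here, as the lemma promises.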
The following theorem summarizes the performance of the ChainedHashTable data structure:
Theorem 5.1 A ChainedHashTable implements the USet interface. Ignoring the cost of calls to resize(), a ChainedHashTable supports the operations add(x), remove(x), and find(x) in O(1) expected time per operation.
Furthermore, beginning with an empty ChainedHashTable, any sequence of m add(x) and remove(x) operations results in a total of O(m) time spent during all calls to resize().
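For readers who want to experiment with the structure outside Java, here is a compact Python transliteration (my own sketch of the same design: a list of lists, multiplicative hashing with w = 32, and grow-only doubling):

```python
import random

class ChainedHashTable:
    """Hashing with chaining: an array t of lists plus an item count n."""
    W = 32  # word size for multiplicative hashing

    def __init__(self):
        self.d = 1
        self.t = [[] for _ in range(2**self.d)]
        self.n = 0
        self.z = random.randrange(1, 2**self.W) | 1  # random odd multiplier

    def _hash(self, x):
        return ((self.z * hash(x)) % 2**self.W) >> (self.W - self.d)

    def _resize(self):
        """Double the table and rehash every stored item."""
        self.d += 1
        old = self.t
        self.t = [[] for _ in range(2**self.d)]
        for lst in old:
            for x in lst:
                self.t[self._hash(x)].append(x)

    def add(self, x):
        if self.find(x) is not None:
            return False
        if self.n + 1 > len(self.t):
            self._resize()
        self.t[self._hash(x)].append(x)
        self.n += 1
        return True

    def remove(self, x):
        lst = self.t[self._hash(x)]
        for i, y in enumerate(lst):
            if y == x:
                del lst[i]
                self.n -= 1
                return y
        return None

    def find(self, x):
        for y in self.t[self._hash(x)]:
            if y == x:
                return y
        return None
```

add, remove and find mirror the Java methods above; only the resizing policy is simplified to grow-only.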
Kelly Betting & Unsure Advantage
October 22nd, 2011, 07:01 AM
Join Date: Feb 2007
Executive Member
Posts: 2,267
err umm
Originally Posted by
another game is in the mix as well, with higher EV & lower SD albeit weakly known SD.
errhh no not including bank growth, but from an original $3,000 bank, but come on bja, that (
) says
, no?
meh, bj ==>>
Play the other game!
You added a level of complexity. The higher EV, lower SD game will improve things the more you play it. Even if the EV is lower, it's a better game, given your ror?
Ok 3g bank, 21% ror
So you tripled it from 1g?
Either you were lucky or the game is better than you think, even though limited trials.
Still don't like the fixed bets with what, to me, is a high ror. Also, borrowed money?
October 22nd, 2011, 11:38 AM
Join Date: Apr 2006
Executive Member
Posts: 5,141
Originally Posted by
blackjack avenger
Play the other game!
You added a level of complexity. The higher EV, lower SD game will improve things the more you play it. Even if the EV is lower, it's a better game, given your ror?
Ok 3g bank, 21% ror
So you tripled it from 1g?
Either you were lucky or the game is better than you think, even though limited trials.
Still don't like the fixed bets with what, to me, is a high ror. Also, borrowed money?
it's dead, i tell yah, dead, dead, dead dead.
thanks fer yer help
October 22nd, 2011, 08:32 PM
Join Date: Feb 2007
Executive Member
Posts: 2,267
sorry couldn't be of more help
You won, very good
You Mr. Frog, Definitely one of the more colorful & likable personalities. I really do try. My friends think I am one of the funniest people they know
October 23rd, 2011, 01:58 AM
Join Date: Apr 2006
Executive Member
Posts: 5,141
oh, we a'int done yet
Originally Posted by
blackjack avenger
You won, very good
You Mr. Frog, Definitely one of the more colorful & likable personalities. I really do try. My friends think I am one of the funniest people they know
ty, ditto back at yah.
this here thang may go a loooong loong long time, (hopefully) and so probably gonna have lotsa data, definitely have more questions...........
circa 0.21% ROR
..... ca'pice?
edit: more on that ROR from Qfit's calculator, ehhmm tried it in excel using the formula
ROR =((1-(F29/E31))/(1+(F29/E31)))^(3000/E31)
in the image, agreed with Qfit's calculator pretty close.....
edit: got the wifey on the other game.
Last edited by sagefr0g; October 23rd, 2011 at 09:27 PM.
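For anyone decoding the spreadsheet cell: with EV and SD taken per play and a starting bank B, the formula reads ROR = ((1 - EV/SD)/(1 + EV/SD))^(B/SD). A quick sketch in Python (the EV/SD/bank numbers below are placeholders, not frog's actual figures):

```python
def risk_of_ruin(ev, sd, bank):
    """Discrete risk-of-ruin approximation matching the Excel formula:
    ((1 - ev/sd) / (1 + ev/sd)) ** (bank / sd), with ev and sd per play."""
    w = ev / sd
    return ((1 - w) / (1 + w)) ** (bank / sd)

r = risk_of_ruin(ev=10.0, sd=100.0, bank=3000.0)
```

A bigger bank or a smaller SD drives the result toward zero, which is why tripling the bank drops the quoted ROR so sharply.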
October 23rd, 2011, 04:05 PM
Join Date: Apr 2006
Executive Member
Posts: 5,141
worst case & data
i'm surprised there haven't been any comments on consideration of a known value, i.e. the worst case known value. but i haven't a clue for making a comment on that either, lol.
wouldn't worst case be a 'cap' on standard deviation? errhhh, but i guess the problem is one doesn't know how often the worst case might happen?
so but, worst case occurrences would be at the far, far, far, far.... bad side of the bell curve, no?
so can't one surmise a frequency of worst case (at least qualitatively) from that?
edit: guess maybe it would help to be able to enumerate all the other possible cases as well, work from there?
Last edited by sagefr0g; October 23rd, 2011 at 06:06 PM.
November 8th, 2011, 01:11 PM
Join Date: Apr 2006
Executive Member
Posts: 5,141
what sort of conjectures can one make
so one has some data for two games (albeit severely limited)
the EV is relatively certain, while the SD is derived from the limited data
graphs of the data are depicted below.
what sort of conjectures about the data, SD, earnings and graphs, is reasonable (albeit steeped in uncertainty)?
errhh, i'm not knowledgeable about stuff like normal distributions and all.....
but i think maybe some distributions can be skewed, maybe? ie. perhaps the bell curve representing the distribution may be not a perfect bell curve but fatter on one side than the other, sorta thing?
well anyway, far as the speculation, conjecture..... errhh (and i realize it's just that, speculation offa small sample size, uncertain SD, etc.) ok, but what i'd speculate follows:
so far it looks as if maybe the bell curve 'could' be fatter on the 'sweet' side?
errrhhh, i mean, looking at the graphs, fluctuation is such that only one low 2sd has occurred, while high sd's are more numerous.
so is it reasonable to feel hopeful or lucky? or maybe both? or just happy to have a plus ev situation? or all three? lol
November 8th, 2011, 01:46 PM
Join Date: Feb 2007
Executive Member
Posts: 2,267
Let's say someone is going to play a coin toss game, they don't know it but the game is biased in their favor.
First flip they are favored to win.
As flips increase they are likely to win. The bell curve moves to the right.
However, the potential for big losses increases with flips. As an example, one cannot lose 3 in a row until at least 3 flips have occurred. Of course, as flips continue, variance becomes background
noise compared to EV. It's a bit of an errr ummm situation.
So a big loss can occur but hopefully one is up enough that the EV overwhelms the variance
If you are winning & wins are increasing, it's a good sign.
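The coin-toss picture described in this post can be sketched numerically; the sketch below is an illustrative model (hypothetical edge p = 0.55, unit bets), not anyone's actual game:

```python
import math

# Biased coin toss: win +1 with probability p, lose -1 otherwise.
p = 0.55                      # hypothetical edge, for illustration only
q = 1 - p                     # chance of losing any single flip
mu = 2 * p - 1                # per-flip EV
sigma = math.sqrt(1 - mu**2)  # per-flip SD of a +/-1 outcome

def ev(n):
    return mu * n                 # the bell curve's center moves right linearly

def sd(n):
    return sigma * math.sqrt(n)   # ...while the spread only grows like sqrt(n)

# So EV/SD grows like sqrt(n): variance becomes "background noise" vs EV.
assert abs(ev(400) / sd(400) - 2 * ev(100) / sd(100)) < 1e-9

def p_losing_run(n, r=3):
    """Probability of at least one run of r straight losses within n flips."""
    states = {0: 1.0}         # current losing streak -> probability mass
    hit = 0.0                 # mass that has ever reached a streak of length r
    for _ in range(n):
        nxt = {}
        for s, pr in states.items():
            nxt[0] = nxt.get(0, 0.0) + pr * p        # a win resets the streak
            if s + 1 == r:
                hit += pr * q                        # streak reaches length r
            else:
                nxt[s + 1] = nxt.get(s + 1, 0.0) + pr * q
        states = nxt
    return hit

# One cannot lose 3 in a row until at least 3 flips have occurred, and the
# chance of such a stretch keeps growing as flips accumulate.
assert p_losing_run(2) == 0.0
assert abs(p_losing_run(3) - q**3) < 1e-12
assert p_losing_run(3) < p_losing_run(10) < p_losing_run(50)
```

The two effects pull in opposite directions, exactly as described: longer play makes a bad stretch somewhere more likely, while EV steadily outgrows the spread.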
November 9th, 2011, 04:32 PM
Join Date: Apr 2006
Executive Member
Posts: 5,141
just what if stuff, i.e. what if the data were giving good info.
so from limited data
figured a not so certain SD
but EV is relatively certain...
went from there to let excel whip up some 'what if' values for plays made and plays into the future, not yet made.
blue line is expected value versus number of plays
green line is one standard deviation versus number of plays
red line is for low three standard deviations versus number of plays.
what can one theorize from the graphs?
oh, the highlighted-in-yellow numbers are where 23 plays passed and one standard deviation was equal to the expected value for 23 plays.
is that like N0 ? yes? lol
ok, also the low three standard deviation line: it crosses zero dollars at about the 188 plays point. got a kind of neat curve to it coming up out of negative territory. seems to be heading up, up & away into positive territory.....
good sign, N0?
any thoughts, anyone?
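For what it's worth, the quantities being read off those graphs have simple closed forms in an idealized fixed-EV, fixed-SD model; mu and sigma below are placeholders, not the actual game's numbers:

```python
import math

def n0(mu, sigma):
    """Plays needed for cumulative EV to equal one cumulative SD:
    mu*n = sigma*sqrt(n)  =>  n = (sigma/mu)**2."""
    return (sigma / mu) ** 2

def low_3sd_zero_cross(mu, sigma):
    """Plays at which EV(n) - 3*SD(n) climbs through zero:
    mu*n = 3*sigma*sqrt(n)  =>  n = 9*(sigma/mu)**2 = 9*N0."""
    return 9 * (sigma / mu) ** 2

# Example: a per-play SD of sqrt(23) units per unit of EV gives N0 = 23
# plays, and the low 3-SD line then crosses zero at 9 * 23 = 207 plays.
mu, sigma = 1.0, math.sqrt(23.0)
assert abs(n0(mu, sigma) - 23.0) < 1e-9
assert abs(low_3sd_zero_cross(mu, sigma) - 207.0) < 1e-9
```

In this idealized picture the low 3-SD line always crosses zero at nine times N0, so an N0 of 23 plays would put the crossing near 207 plays; a spreadsheet reading of about 188 suggests slightly different per-play numbers feeding the two curves.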
November 9th, 2011, 06:01 PM
Join Date: Feb 2007
Executive Member
Posts: 2,267
23 plays may be the N0 for this game; looking with BJ goggles, it seems short? However, if the game is as well defined as this, the variance seems very low.
I can't see any of the numbers from the graph on my phone, but no random walk? Virtually no variance on the graph itself? From what I can see the variance is lower than BJ?
If winning, continuing to win & % of wins increasing, you're looking pretty good for an angry froggie.
Your basic question seems to be:
I think I am playing a winning game
I am winning
Should I continue?
The basic answer would seem to be yes.
As bank grows its confirmation of a winning game & ror goes down.
Last edited by blackjack avenger; November 9th, 2011 at 06:15 PM.
November 9th, 2011, 06:13 PM
Join Date: Feb 2007
Executive Member
Posts: 2,267
ok, symmetry of sessions
Each play is a projected session? Each session is made of multiple plays? The sessions are all so consistent that it seems that is what you should expect moving forward, provided your actual play results reflect your projections.
If you map out sessions like this for BJ, the results would be all over the place, showing many more trials are needed to have a clear handle on what's going on.
Oh, if I read correctly, the penalty for being wrong on variance is higher than being wrong on EV, from a Kelly perspective. I'm sorry if that is not comforting. However your ror is .21% (1/3 fixed Kelly, cuz can't really resize bank?) and dropping with wins? That should be conservative enough?
Moving forward, obviously, if you take a loss bigger than you have experienced, that changes everything.
Last edited by blackjack avenger; November 9th, 2011 at 06:48 PM.
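For reference, the risk-of-ruin figure quoted in this post is in the ballpark of a standard diffusion-approximation formula; the sketch below uses placeholder per-play numbers, and the exact discrete-game ROR will differ slightly:

```python
import math

def ror_fixed_bank(mu, sigma, bank):
    """Approximate risk of ruin when flat-betting a fixed, never-resized
    bank with per-play EV mu > 0 and per-play SD sigma (diffusion limit)."""
    return math.exp(-2.0 * mu * bank / sigma**2)

def ror_at_kelly_fraction(k):
    """Same quantity expressed through Kelly: flat-betting at k times the
    Kelly bet (so the bank holds 1/k full-Kelly banks) gives exp(-2/k)."""
    return math.exp(-2.0 / k)

# One-third Kelly: exp(-6) ~ 0.25%, in the ballpark of the 0.21% quoted.
assert abs(ror_at_kelly_fraction(1 / 3) - math.exp(-6)) < 1e-15

# Consistency of the two forms: with mu = 0.1 and sigma = 1 a full-Kelly
# bank is sigma**2/mu = 10 units, so a 30-unit bank is one-third Kelly.
assert abs(ror_fixed_bank(0.1, 1.0, 30.0) - ror_at_kelly_fraction(1 / 3)) < 1e-12
```

The formula also shows why ROR "drops with wins": a bigger bank at the same bet size raises `bank` in the exponent.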
Yarrow Point, WA Algebra 2 Tutor
Find a Yarrow Point, WA Algebra 2 Tutor
...This is where the concepts for calculus are truly laid, and any shakiness in understanding the topics here leads to hesitation in moving on to calculus. This is where I show students how the graphical representation and the equation come together, so that the visualization carries on with them when they...
16 Subjects: including algebra 2, geometry, algebra 1, precalculus
...I love math and empowering others to learn math too. If you're looking for a fun, creative, and EFFECTIVE way to improve your math skills, contact me for a tutoring session and you won't be disappointed. To give you an example of my creative teaching methods: I once taught math in an inner-city New York 2nd grade classroom.
17 Subjects: including algebra 2, calculus, special needs, college counseling
...I have been published multiple times and feel very confident in my abilities in writing. I have always sought avenues that allowed me to work with students and peers in teaching settings. As
an undergrad I was a peer mentor for an intro to engineering class, where I guided students across vario...
14 Subjects: including algebra 2, writing, geometry, biology
...I also won soccer skills contests sponsored through Coca Cola in my age group (first place local and third place in the state). Now, I love to coach. I encourage the players while giving them new opportunities to practice both teamwork and individual soccer skills. I've played clarinet si...
46 Subjects: including algebra 2, reading, English, algebra 1
...Once they know how to approach a problem, then they can tackle the calculations and methods for going through the problems step by step. Beyond tutoring children how to approach and think
through challenge topics, I aim to instill what I feel is the most important part of succeeding in any educa...
25 Subjects: including algebra 2, chemistry, physics, geometry
A Singular Parabolic Anderson Model
Carl E Mueller (University of Rochester) Roger Tribe (University of Warwick)
We consider the heat equation with a singular random potential term. The potential is Gaussian with mean 0 and covariance given by a small constant times the inverse square of the distance. Solutions
exist as singular measures, under suitable assumptions on the initial conditions and for sufficiently small noise. We investigate various properties of the solutions using such tools as scaling,
self-duality and moment formulae. This model lies on the boundary between nonexistence and smooth solutions. It gives a new model, other than the superprocess, which has measure-valued solutions.
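In symbols, the equation studied can be sketched as follows; this is a schematic reading of the abstract, and the precise normalization of the noise term and of the constant κ is an assumption rather than the paper's exact statement:

```latex
\partial_t u(t,x) = \tfrac{1}{2}\,\Delta u(t,x) + u(t,x)\,\dot F(t,x),
\qquad
\mathbb{E}\!\left[\dot F(t,x)\,\dot F(s,y)\right]
  = \frac{\kappa}{|x-y|^{2}}\,\delta(t-s),
```

with κ > 0 playing the role of the "small constant" above: for sufficiently small κ, solutions exist only as singular measures rather than as functions.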
Pages: 98-144
Publication Date: February 25, 2004
DOI: 10.1214/EJP.v9-189
This work is licensed under a Creative Commons Attribution 3.0 License.
Re: st: Collapsing Over Limited Set
From Nick Winter <nw53@cornell.edu>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Collapsing Over Limited Set
Date Thu, 15 Sep 2005 15:47:58 -0400
I believe this will do it:
bysort product: egen n_p = total( !mi(markup) & !mi(company) )
by product: egen sum_p = sum( markup * !mi(company) )
bysort company product : gen n_cp = _N
by company product: egen sum_cp = sum(markup)
gen averagemarkup = (sum_p-sum_cp) / (n_p-n_cp)
bysort product: gen n_p = _N
by product: egen sum_p = sum(markup)
bysort company product : gen n_cp = _N
by company product: egen sum_cp = sum(markup)
gen averagemarkup = (sum_p-sum_cp) / (n_p-n_cp)
That is, get the sum and number of observations for each product overall, then get the sum and number of observations for each company-product pair...then subtract off these latter quantities from
the former to calculate the average you want.
This will trip over missing values on company, I believe, because then the n_p and sum_p will be wrong.
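The subtract-off identity can be sanity-checked outside Stata; here is a small Python sketch over the sample data from the question (the helper names are hypothetical, not Stata's):

```python
from collections import defaultdict

# Sample data from the question: (company, product, markup).
rows = [(100, 31, 0.3), (100, 55, 0.2), (111, 55, 0.1),
        (120, 31, 0.1), (120, 55, 0.1)]

# Per-product totals and per company-product totals, as in the Stata code.
sum_p, n_p = defaultdict(float), defaultdict(int)
sum_cp, n_cp = defaultdict(float), defaultdict(int)
for comp, prod, m in rows:
    sum_p[prod] += m
    n_p[prod] += 1
    sum_cp[comp, prod] += m
    n_cp[comp, prod] += 1

def loo_mean(comp, prod):
    """Average markup of prod over all companies except comp."""
    n = n_p[prod] - n_cp[comp, prod]
    return (sum_p[prod] - sum_cp[comp, prod]) / n if n else None

# Cross-check the identity against a direct leave-one-out computation.
for comp, prod, _ in rows:
    direct = [m for c, pr, m in rows if pr == prod and c != comp]
    assert abs(loo_mean(comp, prod) - sum(direct) / len(direct)) < 1e-12
```

Note that this reproduces the strict leave-one-out mean for pairs the company itself sells; producing rows for pairs a company does not sell (such as company 111 with product 31) would additionally require expanding to all company-by-product combinations first.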
--Nick Winter
At 03:11 PM 9/15/2005, you wrote:
Hi there,
I would really appreciate someone's help with this question.
I'm trying to generate a dataset of statistics by collapsing another
dataset, but for each of the ids I'm collapsing by, I want to use every
observation in the dataset except those for the id under consideration.
Would there be a way to do this?
For example, I have data of the form:
company product markup
100 31 .3
100 55 .2
111 55 .1
120 31 .1
120 55 .1
Now I want to ask the question: for each company, calculate the average
markup of each product it produces, where the average is taken over all
companies that sell the product except the company itself. So I want to
end up with
company product averagemarkup
100 31 .05
100 55 .1
111 31 .2
111 55 .15
120 31 .15
120 55 .15
Obviously collapsing the data the standard way is not going to do this. I
need to do this for hundreds of thousands of observations (hundreds of
companies and thousands of products) so am looking for a way to do this
that would be relatively quick. I would be grateful for any suggestions.
Thanks very much.
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Nicholas J. G. Winter 607.255.8819 t
Assistant Professor 607.255.4530 f
Department of Government nw53@cornell.edu e
Cornell University falcon.arts.cornell.edu/nw53 w
308 White Hall
Ithaca, NY 14853-4601
Monomeric Bistability and the Role of Autoloops in Gene Regulation
Genetic toggle switches are widespread in gene regulatory networks (GRNs). Bistability, namely the ability to choose between two different stable states, is an essential feature of switching and memory devices. Cells have many regulatory circuits able to provide bistability, endowing a cell with efficient and reliable switching between different physiological modes of operation. It is often assumed that negative feedbacks with cooperative binding (i.e. the formation of dimers or multimers) are a prerequisite for bistability. Here we analyze the relation between bistability in GRNs under monomeric regulation and the role of autoloops, in a deterministic setting. Using a simple geometric argument, we show analytically that bistability can also emerge without multimeric regulation, provided that at least one regulatory autoloop is present.
Citation: Widder S, Macía J, Solé R (2009) Monomeric Bistability and the Role of Autoloops in Gene Regulation. PLoS ONE 4(4): e5399. doi:10.1371/journal.pone.0005399
Editor: Mark Isalan, Center for Genomic Regulation, Spain
Received: January 22, 2009; Accepted: March 23, 2009; Published: April 30, 2009
Copyright: © 2009 Widder et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Funding: This work was supported by the EU grant CELLCOMPUT, the James McDonnell Foundation and by the Santa Fe Institute. The funders had no role in study design, data collection and analysis,
decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Bistability is known to pervade key biological phenomena [1]. Many relevant examples can be found, including the determination of cell fate in multicellular organisms. This occurs with
Xenopus oocytes, which convert a continuously variable concentration of the maturation-inducing hormone progesterone into an all-or-none biological maturation response [2]. Stem cells, on the other hand, present a switch in which the expression of the involved transcription factors (OCT4, SOX2, and NANOG) is stabilized by a bistable mechanism. When they are expressed and the switch is on, the self-renewal genes are on and the differentiation genes are off. The opposite holds when the switch is off [3]. A third example is the cell-cycle regulation, which exhibits a temporally abrupt
response of Cdc2 to non-degradable cyclin B [4]. This capacity of achieving multiple internal states is at the core of a plethora of regulatory mechanisms, often associated with small genetic circuits, including both switches [5], [6], [7], [8], [9] and oscillators [10], [11]. Understanding their logic and how it changes under parameter tuning are two important goals of systems biology.
A general consensus indicates that such switches are based on a mutual regulation of two transcription factors (figure 1), e.g. mutual inhibition: protein A inhibits the synthesis of protein B and
vice versa [12]. Depending on the type of regulation they can be in two different stable states and may change from one to the other spontaneously or due to an external signal [12], [13], [14], [15].
For example, during the embryonic development of Drosophila melanogaster, the expression of the hb gene, responsible for Hunchback formation, is activated by the Bicoid (Bcd) protein. In early
embryogenesis, the diffusion of Bcd, translated from the mRNA located at the anterior end of the egg, forms an exponential concentration gradient, establishing the anterior–posterior axis. Upon this
signal, a bistable mechanism allows for large changes in hb promoter occupancy under small changes in Bcd concentration across some threshold, generating an on–off expression pattern. This bistable mechanism explains the sharpness of the Hb expression, with the transition from highest to lowest values taking place on a spatial scale spanning just 10% of the egg length [16]. In other natural scenarios bistability
can be implemented by non-transcription factors. However, even in these cases mutual regulation is required. An example of bistable systems based on mutual inhibition of non-transcription factors can
be found in signalling pathways. In Saccharomyces cerevisiae, signal transduction pathways involved in sensing external stimuli often share the same or homologous proteins, e.g. the high-osmolarity pathway and the pheromone pathway. Despite potential cross-wiring, cells show specificity of response. This specificity can be achieved, among other mechanisms, by mutual inhibition of the shared proteins. When a single cell is exposed to osmostress and pheromone induction simultaneously, only one of the two pathways is activated, inhibiting the activation of the other pathway. In this case,
the activated pathway corresponds to one of the two possible stable states of the bistable system (see [17] and references therein).
Figure 1. Schematic representation of a general genetic circuit with two components.
In (a), a genetic circuit with monomeric autoloops and cross-regulation involving two genes (G_A, G_B) coding for two proteins (A, B) acting as transcription factors. Under certain conditions, this type of genetic circuit can show bistability. Here all possible regulatory modes are shown (+/−). (b) Simplified diagram summarizing the logic of this system.
Focusing on two-component genetic circuits, their regulatory proteins are known to form homodimers (or multimers) to be effective transcription factors, allowing them to turn the state of target genes ON or OFF [12], [18], [19]. Multiple examples can be found in natural systems, e.g. in the lambda phage, where the change from lysogenic to lytic behaviour in response to environmental changes is regulated
by a switching two-component circuit. In this case, the two transcription factors involved, CI and Cro, must form homodimers to be effective [20], [21]. The same requirements allow for bistability in
synthetic designs. This is the case of the genetic toggle switch in Escherichia coli, where bistability of the toggle arises from the mutually inhibitory arrangement of the repressor genes. The
regulatory transcription factors TetR and LacI form homodimers and homotetramers respectively. The transition from one to the other stable state is triggered by external inducers (aTc and IPTG) [6].
For general systems without any specific assumptions, multimeric regulation was assumed to be essential to obtain bistable behaviour [22], [23]. The inability to exhibit bistability in monomeric circuits without autoloops was previously demonstrated [24]. These results indicate that linear or Michaelis-Menten kinetics cannot provide bistability and that a higher degree of non-linearity in the genetic regulation is required. Different mechanisms can introduce this non-linearity. Positive cooperativity of binding is one such mechanism. It can result from non-independent binding at two adjacent
operator sites. A similar effect results if a repressor is effective only as a dimer (or multimer) and the monomer-monomer affinity is weak [24]. Several models of bistable systems involving only
positive regulation also require cooperativity of binding [25], [26]. Despite the above, monomeric bistability has been found in particular bimolecular systems with Michaelis-Menten kinetics, under the indispensable key assumption of constancy of the total amount of proteins [27]. Also, some kind of multistability is possible in a stochastic scenario without cooperative binding [13], [28], but under fully symmetric interactions. However, the flips between the two states are also stochastic, and the observed alternative states cannot be stabilized (as occurs in real biological switches) due to the effectively monostable character of the system without noise.
In deterministic dynamics, bistability requires the existence of three fixed points. In this paper we demonstrate, to our knowledge for the first time, that deterministic bistability can emerge in two-component gene circuits by considering solely auto-regulatory loops. This is unlike the cases briefly mentioned above [13], [27], where bistability is not generated by the intrinsic topology of the circuits. In other words, we demonstrate that bistability in monomeric two-component circuits can be implemented exclusively by the topology of interactions, with no additional constraints, a single autoloop being enough to obtain it. Our analysis is based on simple geometrical features associated with the system's nullclines and their crossings. As shown below, the presence of an autoloop introduces essential geometrical constraints responsible for the existence of three fixed points. Our results can help in understanding the essential role of autoloops in small natural circuits and their synthetic counterparts.
Geometrical features
In order to perform a general analysis of the nullclines, as introduced in Materials and Methods, we study the single components (numerator and denominator) of the expressions independently; see figure 2(a). The numerator is a parabolic function having two analytically well-defined crossing points (ξ+ > 0 and ξ− < 0) with the horizontal axis, given by (4):

$$\xi_{\pm} = \frac{\left(\gamma_A \alpha_l^A \omega_l^A - d_A\right) \pm \sqrt{\left(\gamma_A \alpha_l^A \omega_l^A - d_A\right)^2 + 4\, d_A \gamma_A \omega_l^A}}{2\, d_A \omega_l^A} \qquad (4)$$
Figure 2. Qualitative shapes of the nullclines.
(a) Graphical representation of the nullclines' components. The numerator is the parabolic curve and the denominator the straight line. Two feasible scenarios are shown: the solid line denotes ξ+ > φ, the dashed line corresponds to ξ+ < φ. (b),(c) Qualitative behaviour of the nullclines under the two possible conditions.
The denominator is a linear function crossing the horizontal axis at φ = γ_A α_c^B / d_A. The points φ and ξ+ are the upper and lower bounds of the protein concentrations of the system within the biologically meaningful region. Combining the two components, two different scenarios are feasible, φ < ξ+ or φ > ξ+, comprising different geometrical features. In both cases we find two crossing points with the horizontal axis at ξ±, no inflection points, and nullclines tending towards their oblique asymptotes with an identical slope m = −ω_l^A / ω_c^B for both settings as A → ±∞. From this expression we see that the autoloop is related to certain geometrical features. Systems without auto-regulatory loops (ω_l^A = 0) do not exhibit oblique asymptotes, but horizontal ones. As shown later, the existence of oblique asymptotes is closely related to the number of possible fixed points and to bistability.
In the first case, ξ+ > φ, we obtain a vertical asymptote at φ, with its lateral behaviour given by lim_{A→φ±} B|_{dA/dt=0} = ±∞. For the second case, ξ+ < φ, we find similar asymptotes with the opposite lateral behaviour, lim_{A→φ±} B|_{dA/dt=0} = ∓∞. In order to determine possible extrema of the nullcline (dB/dA = 0), we find, after some algebra, that the inequality (5)

$$\gamma_A\,\omega_l^A\,\alpha_c^B\left(\alpha_c^B - \alpha_l^A\right) + d_A\left(\alpha_c^B - 1\right) \;\geq\; 0 \qquad (5)$$

must be met to provide valid solutions, hence extrema. Rewriting the conditions φ > ξ+ and φ < ξ+ using the previous expressions for φ and ξ+, we conclude that only φ > ξ+ satisfies condition (5) and hence provides extrema. However, according to the vertical asymptotic behaviour and the existence of only one crossing point (ξ+) within the positive domain, we conclude that the extrema are located within B < 0. Hence, no extrema can be obtained within the biologically meaningful domains, i.e. imposing the biological constraint that the levels of proteins must be positive (A > 0, B > 0), for either scenario. In figure 2(b),(c) the two types of possible behaviour are shown. Furthermore, a similar analysis has been performed for a system without basal transcription, and the geometrical features are not affected qualitatively.
Fixed point analysis
Using the previous geometrical approach, we are in a position to reassemble both nullclines within the biologically meaningful region and determine how many crossing points between the two nullclines can arise under different regulatory conditions. The crossings between nullclines define the so-called fixed points, i.e. the levels of proteins A and B such that dA/dt = 0 and dB/dt = 0 simultaneously, so that no changes in protein concentration take place. Four possible cases are obtained, based on the symmetry of the expressions for the nullclines dA/dt = 0 and dB/dt = 0. They are shown in figure 3. For the cases [ξ+ > φ]_{dA/dt=0} ∧ [ξ+ < φ]_{dB/dt=0} and [ξ+ < φ]_{dA/dt=0} ∧ [ξ+ > φ]_{dB/dt=0} (figures 3(a) and 3(b), respectively), equal geometrical arguments apply. In both cases the nullclines exhibit opposite monotonies and opposite curvatures within the entire domain, due to the absence of extrema and inflection points. These conditions allow only for a single crossing, hence monostability. In the case [ξ+ < φ]_{dA/dt=0} ∧ [ξ+ < φ]_{dB/dt=0}, depicted in figure 3(c), the nullclines exhibit opposite curvatures but equal monotonies. Again, the absence of extrema and inflection points does not allow for three crossings; however, under the special condition [ξ+]_{dA/dt=0} = [ξ+]_{dB/dt=0} = 0, two crossing points arise. In accordance with expression (4), these conditions can be satisfied if 4 d_i γ_i ω_l^i = 0, with i = {A, B}. Since γ_i > 0 and d_i > 0, only ω_l^i can be zero, and in this case (for a system without autoloop regulation) the nullclines' expressions now read as in (6), where the fixed points can be solved analytically. The solutions are determined by the roots of a polynomial of second degree, allowing for at most two fixed points. However, the polynomial crosses the vertical axis at −γ_A γ_B ω_c^A α_c^A, forcing one of the roots to be located within the negative domain. Hence, without autoloops only monostability is possible in monomeric gene circuits. This result is consistent with the analysis previously reported in [24]. For the setting [ξ+ > φ]_{dA/dt=0} ∧ [ξ+ > φ]_{dB/dt=0}, both nullclines show the same type of curvature and monotony. Due to the oblique asymptote introduced by the autoloop, no analytical constraints prevent the existence of three crossing points. In figure 3(d) we show an example of bistability with monomeric regulation.
Figure 3. The four possible scenarios of nullcline combinations.
Dashed line corresponds to the nullcline dA/dt = 0, solid line to dB/dt = 0; φ^A and φ^B denote the location of the asymptote for dA/dt = 0 and dB/dt = 0, respectively. Due to the symmetry of the nullclines' expressions, the vertical asymptote of dB/dt = 0 corresponds to the horizontal one of dA/dt = 0. Analogously, ξ+^A and ξ+^B are the crossing points with the axes. (d) The geometrical features of the nullclines allow for two possible cases: three crossing points (depicted) or a single crossing (not depicted).
In order to determine the impact of the number of autoloops on bistability, we have numerically analyzed the effect of downsizing the system from two autoloops to one (ω_l^i ≠ 0, ω_l^j = 0). As figure 4 shows, only one autoloop is required to allow bistability. In figure 4(a) the nullclines of a circuit with two autoloops are depicted, and three fixed points appear for a given set of parameters. The stability analysis reveals two stable fixed points separated by an unstable one, resulting in the corresponding basins of attraction. Figure 4(b) shows a system with a single autoloop. These numerical examples demonstrate that genetic circuits with monomeric regulation are able to exhibit deterministic bistability, whereby only a single autoloop is required to satisfy the necessary geometrical constraints.
Figure 4. Numerical simulations and stability analysis.
In (a) a circuit with two autoloops and in (b) a circuit with one autoloop are shown. A circle denotes a stable, a square an unstable fixed point. The basins of attraction are shown in grey and white. The following sets of parameters have been used: (a) γ_A = 1, d_A = 1, α_l^A = 10, ω_l^A = 1, ω_c^B = 1, α_c^B = 0, γ_B = 1.1, d_B = 0.1, α_l^B = 2.1, ω_l^B = 0.1, ω_c^A = 1.1, α_c^A = 0; and (b) γ_A = 5, d_A = 8, α_l^A = 9, ω_l^A = 1, ω_c^B = 1, α_c^B = 0, γ_B = 8.5, d_B = 1, α_l^B = 0, ω_l^B = 0, ω_c^A = 1, α_c^A = 0.
Impact of regulation type on monomeric bistability
In the previous sections the type of regulatory interactions, given by α_l^i and α_c^i, was handled generally. However, the individual regulatory interactions, i.e. activation or inhibition, introduce additional constraints for the emergence of bistability. Applying some algebra to the condition φ < ξ+ (bistability), we obtain an expression equivalent to (5) with the opposite inequality. Focusing on the type of regulation, it can be rewritten as (7):

$$\gamma_A\,\omega_l^A\,\alpha_c^B\left(\alpha_c^B - \alpha_l^A\right) + d_A\left(\alpha_c^B - 1\right) \;<\; 0 \qquad (7)$$

This leads us to two different instances: (a) if α_c^B > 1, then α_l^A > α_c^B; and (b) if α_c^B < 1, then either α_l^A > α_c^B or α_l^A < α_c^B is possible. As a consequence, systems with inhibitory regulation in the autoloop and activatory cross-regulation cannot exhibit bistability. In all the other cases no geometric impediments are present. Figure 5 shows all possible regulatory topologies which cannot exhibit bistability, irrespective of the specific set of parameters used.
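These sign conditions can be checked numerically. The closed forms below are a reconstruction consistent with the quantities quoted in the text (φ = γ_A α_c^B/d_A, the roots ξ±, and the two cases (a)/(b)); the exact arrangement used in the paper may differ:

```python
import math

# Assumed (reconstructed) nullcline components for gene A, written with
# shorthand g = gamma_A, d = d_A, al = alpha_l^A, wl = omega_l^A,
# ac = alpha_c^B. The claim checked: phi < xi_plus (the case permitting
# bistability) holds exactly when h = g*wl*ac*(ac - al) + d*(ac - 1) < 0.

def xi_plus(g, d, al, wl):
    u = g * al * wl - d
    return (u + math.sqrt(u * u + 4 * d * g * wl)) / (2 * d * wl)

def phi(g, d, ac):
    return g * ac / d

def h(g, d, al, wl, ac):
    return g * wl * ac * (ac - al) + d * (ac - 1)

# Spot-check the equivalence over a few parameter sets.
for (g, d, al, wl, ac) in [(1, 1, 10, 1, 0.0), (1, 1, 1, 1, 5.0),
                           (2, 0.5, 3, 0.2, 0.5), (1, 2, 0.5, 1, 2.0)]:
    assert (phi(g, d, ac) < xi_plus(g, d, al, wl)) == (h(g, d, al, wl, ac) < 0)
```

With an inhibitory autoloop (al < 1) and activatory cross-regulation (ac > 1), h is forced positive, which matches the statement that such topologies cannot be bistable.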
Figure 5. All possible regulatory combinations preventing bistability.
For other circuit topologies the emergence of bistability is possible, but conditioned on the specific parameters of the system.
To summarize, a general, analytic set of conditions for bistability in simple two-element genetic circuits has been derived for monomeric regulation. Although previous work suggested that such a mechanism would be unlikely to be observed, here a simple geometric argument reveals that wide parameter spaces allow monomeric regulation to generate multiple stable states. These results make it possible to predict the scenarios in which a reliable switch can be obtained. Current efforts in engineering cellular systems [29], [30], [31] would benefit from our general analysis. In this context, although dimerization seems to be a widespread mechanism in GRNs, our study indicates that scenarios with monomeric regulation could easily be achieved. The current state of the art in synthetic biology allows for customized engineering of monomeric transcription factors; e.g. zinc-finger TFs can readily be designed to bind different DNA sequences [32]. By building these monomeric transcription factors into a properly designed network [33], the experimental implementation of monomeric bistable circuits thus seems feasible.
Finally, further work should explore how noise acts on these types of dynamical systems. In eukaryotic cells, dimerization has been shown to provide a source of noise reduction, at least at the level of simple GRNs [34]. Future studies should examine how our monomeric circuits are affected by noise and what types of limitations and advantages can be obtained.
Materials and Methods
Genetic circuit
We focus our analysis on the most general system formed by two genes. Gene A is expressed under the constraints of two different monomeric regulatory modes: protein A exhibits an auto-regulatory loop by binding to its own promoter, as well as a cross-regulation mediated by protein B. Gene B expression is analogously regulated (see figure 1). We consider the general case without any specific assumptions about the type of regulatory interactions, i.e. activation or inhibition, but introduce them as tunable parameters α. The basic dynamical properties of the circuit can be described by the following set of ODEs obtained from the set of biochemical reactions: (1)
We assume basal transcription, the standard rapid-equilibrium approximation (binding and unbinding processes are faster than synthesis and degradation), and constancy of the total number of promoter sites. Furthermore, the concentrations of the other biochemical elements involved remain constant over time and can be subsumed in the kinetic constant γ_i. The binding equilibria of the autoloop and the cross-regulator are denoted by ω_l^i and ω_c^i, respectively, while α_l^i and α_c^i denote the regulatory rates with respect to basal transcription for the autoloop and the cross-regulation, respectively. Values < 1 correspond to inhibitory regulation, whereas values > 1 account for activation. Finally, d_i is the degradation rate of protein i. For a detailed description of this type of calculus, see [21].
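The displayed system (1) is not reproduced in this extract; under the rapid-equilibrium assumptions just described, rate equations of this kind typically take the following quasi-equilibrium form. This is our hedged reconstruction, not necessarily the authors' exact expression, and the superscript convention on the cross-regulation terms is inferred from the figure captions:

```latex
\frac{dX_i}{dt} \;=\; \gamma_i\,
\frac{1 + \alpha_l^i\,\omega_l^i\,X_i + \alpha_c^j\,\omega_c^j\,X_j}
     {1 + \omega_l^i\,X_i + \omega_c^j\,X_j}
\;-\; d_i\,X_i,
\qquad (i,j) \in \{(A,B),\,(B,A)\}.
```

In this form each gene's transcription rate is a weighted average over the basal, autoloop-bound and cross-regulator-bound promoter states, with α < 1 giving inhibition and α > 1 activation, consistent with the parameter descriptions above.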
Nullcline analysis
In order to analyze the system's dynamics, we obtain the following expressions for the nullclines by imposing dA/dt = 0 and dB/dt = 0 under monomeric regulation: (2)
The number of crossing points between (2) and (3) defines the number of fixed points of the system. Both nullclines have mathematically symmetric expressions, tunable by the set of parameters. This symmetry facilitates their analysis, since their characteristic features are interchangeable; hence the problem can be evaluated by reducing the analysis to one expression. Here (2) is analyzed.
We thank the members of the CSL for useful discussions. We also thank an anonymous referee for valuable comments, particularly the potential implementation of our proposed mechanism.
Author Contributions
Conceived and designed the experiments: SW JM. Performed the experiments: SW JM. Analyzed the data: SW JM. Wrote the paper: SW JM RVS.
750 - Q46 V48

Raffie (Manager; Joined: 20 Jun 2007; Posts: 158; Kudos: 10):

After putting it off for months I sat the GMAT yesterday. I was very pleased with the outcome (even if it's slightly biased towards Verbal Reasoning).

A big thank you to everybody on the forum - I've posted when I've had something to say, but I've mainly lurked and learned from the more prolific GMAT Clubbers.

As for the exam itself. It certainly seemed less gruelling in practice than I'd anticipated. The two AWA half-hours flew. From that into Quant, where I was immediately stumped by several weird exponent questions (fractions to the power of -1/2, -1/4 etc) - I simply hadn't a clue how to deal with these. Very little on probability, nothing on Permutations / Combinations, quite a bit of Geometry. By the time VR started I was tired and got bored with the subject matter really easily. I pulled myself together, tried to concentrate and finished up with a good bit of time to spare - I think the VR time allowance is quite luxurious.

Anyway that's just my two cents' worth on the exam - not worth getting too intimidated about. Thanks again to everybody for all the help.
Senior Manager (Joined: 25 Jul; Posts: 378; Location: Times Square / Baruch; Kudos: 36):

congrats dude, nice score!
buffdaddy (Joined: 14 Oct 2007; Posts: 759; Location: Oxford; Kudos: 169):

Raffie wrote:
After putting it off for months I sat the GMAT yesterday. [...]

nice one champ!

just a couple of Qs,
- How did u fair in the various CATS?
- What is your time management like? In quant do u guess straight away if you know a question will take you a long time? or do you fight it for a couple of minutes?
Raffie (Joined: 20 Jun 2007; Posts: 158; Kudos: 10):

I got 740 and 780 on the GMAT Preps, but didn't think anything like that was realistic because I'd already seen so many of the questions on this forum.

In terms of time management, I was worried about not having enough time to finish quant so I probably rushed a few of the earlier questions - I was trying to stay within two minutes per question all the time. Looking back, I'd say I was too conscious of staying ahead of the clock - you can probably afford to go three or four minutes behind in the middle of the exam and still complete all questions in the 75 minutes.
(Joined: 20 Aug 2007; Posts: 853; Location: Chicago; Schools: Chicago Booth; Kudos: 82):

Hi Raffie,
Can you give some tips on your verbal study technique/strategy?
I'm particularly curious about your RC and CR techniques. What materials did you use?
thanks
Raffie (Joined: 20 Jun 2007; Posts: 158; Kudos: 10):

Hi Soni

To be honest, I think most of CR and RC is common sense. Whenever I saw "strategies" for dealing with these, my eyes glazed over! I guess the most important thing is to be familiar with the format and know what to expect - the easy way to do that is just to chip away at the forum here. If you're pretty good with the questions posted here, I'd say keep practicing and don't overelaborate. If you're having trouble, then it might be time to try identifying your weaknesses and think about an approach.

Sorry I can't be of much more help!
dominion (Manager; Joined: 15 Nov; Posts: 135; Kudos: 23):

Raffie wrote:
After putting it off for months I sat the GMAT yesterday. [...]

Raffie, PLEASE talk about the exponents and geometry problems you encountered.
(Joined: 21 Oct; Posts: 204; GRE 1: 1250 Q780 V540; Kudos: 45):

Great score Raffie.

Gud luck for the applications.

can you post your experiences in preparation
_________________
If you like my post, please press Kudos+1
Raffie (Manager; Joined: 20 Jun 2007; Posts: 158; Kudos: 10):

dominion wrote:
Raffie wrote:
After putting it off for months I sat the GMAT yesterday. [...]

Raffie, PLEASE talk about the exponents and geometry problems you encountered.

I would hate for Raffie to lose his marvellous score, so I request that you not ask him questions about actual questions
dominion (Manager; Joined: 15 Nov; Posts: 135; Kudos: 23):

Raffie wrote:
dominion wrote:
Raffie wrote:
After putting it off for months I sat the GMAT yesterday. [...]

Raffie, PLEASE talk about the exponents and geometry problems you encountered.

I would hate for Raffie to lose his marvellous score, so I request that you not ask him questions about actual questions

Apologies, did not want him to talk about actual problems -- just surprising concepts in problems like the exponent ones he encountered/mentioned.
(Joined: 22 Oct 2006; Posts: 1443; Schools: Chicago Booth; Kudos: 141):

Good job! I, like you, seem to just "get" RC and CR, however what was your approach to SC?
Lithia Springs ACT Tutor
Find a Lithia Springs ACT Tutor
...The grade levels include grades 4 thru 12. I have developed special individualized programs for each of these special needs students. These classes were self contained with no more than 8
students in the class.
47 Subjects: including ACT Math, chemistry, English, physics
...Awarded a full tuition scholarship for graduate work in Business Administration at the College of William and Mary in Virginia. At the end of the first year my peers gave me the Mason Gold
Standard Award which is a recognition of one who unselfishly contributes to the academic achievement of others through mentoring. Graduated with a focus in Finance and passed the CFA Level 1
28 Subjects: including ACT Math, calculus, GRE, finance
...I enjoy writing papers, reading books and analyzing different material! I received a scholarship my Freshman year of college for a paper I wrote. The fundamentals of English can be established
as early as elementary school.
38 Subjects: including ACT Math, English, reading, writing
...It is crucial that a student knows how to work with fractions; both constants and variables, indices, linear equations, quadratics, polynomials, rational expressions and radical expressions as
well as basic probability and data analysis. These topics will prepare student for further advancement and understanding. I have tutored Algebra 2 for more than two years now.
36 Subjects: including ACT Math, calculus, web design, discrete math
...I am currently looking for a part time tutoring job to help other students with various subjects. I have several years of experience tutoring. I have tutored individuals from elementary school
up to college level.
18 Subjects: including ACT Math, reading, geometry, algebra 1
Nearby Cities With ACT Tutor
Aragon, GA ACT Tutors
Atlanta Ndc, GA ACT Tutors
Austell ACT Tutors
Braswell, GA ACT Tutors
Chattahoochee Hills, GA ACT Tutors
Lebanon, GA ACT Tutors
Lovejoy, GA ACT Tutors
Mableton ACT Tutors
Pine Lake ACT Tutors
Powder Springs, GA ACT Tutors
Red Oak, GA ACT Tutors
Roopville ACT Tutors
Taylorsville, GA ACT Tutors
Whitesburg, GA ACT Tutors
Winston, GA ACT Tutors
From Löb's Theorem to Spreadsheet Evaluation
> import Control.Monad.Reader
I've run 3 miles, the Thanksgiving turkey's in the oven, now I just need to do something impossible before I can have breakfast.
As I've mentioned in the past, sometimes you can write useful Haskell code merely by writing something that type checks successfully. Often there's only one way to write the code to have the correct
type. Going one step further: the Curry-Howard isomorphism says that logical propositions correspond to types. So here's a way to write code: pick a theorem, find the corresponding type, and find a
function of that type.
One area that seems fruitful for this approach is modal logic. The axioms of various
modal logics
correspond to the types of familiar objects in Haskell. For example the distribution axiom
□(a → b) → (□a → □b)
looks just like the type of
ap :: (Monad m) => m (a -> b) -> m a -> m b
So I'm looking at the books on my shelf and there's
The Logic of Provability
by Boolos. It's about a kind of modal logic called
provability logic
in which □a roughly means "a is provable". One of the axioms of this logic is a theorem known as Löb's theorem.
Before getting onto Löb's Theorem, I should mention
Curry's Paradox
. (Before today I didn't know Haskell Curry had a paradox associated with his name - though I've met the paradox itself before as it got me into trouble at (high) school once...) It goes like this:
Let S be the proposition "If S is true, Santa Claus exists".
S is true.
If S is true, Santa Claus exists.
So, still assuming our hypothesis, we have
S is true and if S is true, Santa Claus exists.
And hence
Santa Claus exists.
In other words, assuming S is true, it follows that Santa Claus
exists. In other words, we have proved
If S is true then Santa Claus exists
regardless of the hypothesis.
But that's just a restatement of S so we have proved
S is true
and hence that
Santa Claus exists.
Fortunately we can't turn this into a rigorous mathematical proof though we can try, and see where it fails. In order to talk about whether or not a proposition is true we have to use some kind of
Gödel numbering scheme to turn propositions into numbers and then if a proposition has number g, we need a function True so that True(g)=1 if g is the Gödel number of something true and 0 otherwise.
But because of Tarski's proof of the
indefinability of truth
, we can't do this (to be honest, the argument above should be enough to convince you of this, unless you believe in Santa). On the other hand, we can replace True with Provable, just like in Gödel's
incompleteness theorems, because provability is just a statement about deriving strings from strings using rewrite rules. If we do this, the above argument (after some work) turns into a valid proof
- in fact, a proof of
Löb's theorem
. Informally it says that if it is provable that "P is provable implies P" then P is provable. We did something similar above with P = "Santa Claus exists". In other words:
□(□P → P) → □P
So I'm going to take that as my theorem from which I'll derive a type. But what should □ become in Haskell? Let's take the easy option, we'll defer that decision until later and assume as little as
possible. Let's represent □ by a type that is a Functor. The defining property of a functor corresponds to the theorem □(a→b)→□a→□b.
> loeb :: Functor a => a (a x -> x) -> a x
So now to actually find an implementation of this.
Suppose a is some kind of container. The argument of loeb is a container of functions. They are in fact functions that act on the return type of loeb. So we have a convenient object for these functions to act on: we feed the return value of loeb back into each of the elements of the argument in turn. Haskell, being a lazy language, doesn't mind that sort of thing. So here's a possible implementation:
> loeb x = fmap (\a -> a (loeb x)) x
Informally you can think of it like this: the parts are all functions of the whole, and loeb resolves the circularity. Anyway, when I wrote that, I had no idea what loeb actually did. So here's one of the first examples I wrote to find out:
> test1 = [length,\x -> x!!0]
loeb test1
is [2,2]. We have set the second element to equal the first one and the first one is the length of the list. Even though element 1 depends on element 0 which in turn depends on the size of the entire
list containing both of them, this evaluates fine. Note the neat way that the second element refers to something outside of itself, the previous element in the list. To me this suggests the way cells
in a spreadsheet refer to other cells. So with that in mind, here is a definition I found on the web. (I'm sorry, I want to credit the author but I can't find the web site again):
> instance Show (x -> a)
> instance Eq (x -> a)
> instance (Num a,Eq a) => Num (x -> a) where
> fromInteger = const . fromInteger
> f + g = \x -> f x + g x
> f * g = \x -> f x * g x
> negate = (negate .)
> abs = (abs .)
> signum = (signum .)
With these definitions we can add, multiply and negate Num valued
functions. For example:
> f x = x*x
> g x = 2*x+1
> test2 = (f+g) 3  -- f 3 + g 3 = 9 + 7 = 16
Armed with that we can define something ambitious like the following:
> test3 = [ (!!5), 3, (!!0)+(!!1), (!!2)*2, sum . take 3, 17]
Think of it as a spreadsheet, with (!!n) being a reference to cell number n. Note the way it has forward and backward references. And what kind of spreadsheet would it be without the sum function? To evaluate the spreadsheet we just use loeb test3. So loeb is the spreadsheet evaluation function.
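For readers who want to run the spreadsheet example outside the literate source above, here is a self-contained version repeating the definitions (the Show/Eq stub instances are unnecessary on modern GHC, whose Num class has no such superclasses):

```haskell
loeb :: Functor f => f (f x -> x) -> f x
loeb x = fmap (\a -> a (loeb x)) x

-- Num for functions, pointwise, so that literals like 3 and 17
-- denote constant cells via fromInteger = const . fromInteger.
instance Num a => Num (x -> a) where
  fromInteger = const . fromInteger
  f + g       = \x -> f x + g x
  f * g       = \x -> f x * g x
  negate      = (negate .)
  abs         = (abs .)
  signum      = (signum .)

-- The spreadsheet from the post: cell 0 copies cell 5, cell 2
-- adds cells 0 and 1, cell 3 doubles cell 2, cell 4 sums cells 0-2.
sheet :: [[Double] -> Double]
sheet = [(!!5), 3, (!!0) + (!!1), (!!2) * 2, sum . take 3, 17]

result :: [Double]
result = loeb sheet   -- [17.0,3.0,20.0,40.0,40.0,17.0]
```

Each cell is a function of the whole evaluated sheet, and laziness resolves the mutual references, including the forward reference from cell 0 to cell 5.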
Now don't get the wrong idea. I'm not claiming that there's a deep connection between Löb's theorem and spreadsheet evaluation (though there is a vague conceptual similarity as both rely on a notion
of borrowing against the future). Provability logic (as defined by Boolos) is classical, not intuitionistic. But I definitely found it interesting the way I was led directly to this function by some
abstract nonsense.
Anyway, happy Thanksgiving!
PS There are other uses for loeb too. Check out this implementation of factorial which shows loeb to be usable as a curious monadic variant of the Y combinator:
> fact n = loeb fact' n where
> fact' 0 = return 1
> fact' n = do
> f <- ask
> return $ n*f (n-1)
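As a quick stand-alone check of the factorial claim: in the function monad, ask = id and return = const (the MonadReader instance for ((->) r) comes from the Control.Monad.Reader import at the top of the post), and unfolding loeb at the reader functor gives fact n = fact' n fact, the usual fixed-point equation.

```haskell
import Control.Monad.Reader

loeb :: Functor f => f (f x -> x) -> f x
loeb x = fmap (\a -> a (loeb x)) x

-- fact' n is a computation in the function monad whose environment
-- is the factorial function itself; loeb ties the recursive knot.
fact :: Integer -> Integer
fact = loeb fact'
  where
    fact' :: Integer -> (Integer -> Integer) -> Integer
    fact' 0 = return 1
    fact' n = do
      f <- ask
      return (n * f (n - 1))
-- e.g. fact 5 evaluates to 120
```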
24 comments:
Modal logics usually have these rules:
f (a -> b) -> f a -> f b
(This corresponds to the Applicative class, which may one day be made a superclass of Monad.)
f a -> a
Monads do not in general satisfy this. Perhaps comonads or something?
Eeep! The defining property of a functor is this:
fmap :: (a -> b) -> f a -> f b
and not
ap :: f (a -> b) -> f a -> f b
fmap is definitely not provable in any modal logic.
As I say, I'm not sure there's a deep connection with Loeb's theorem! (But there is surely some link.)
OK, I have some spare moments now. I've spent quite a bit of time tinkering round with various axioms of modal logic but as you notice, things have a habit of not quite working out the way you
want. There *is* a nice interpretation of □ as something like C/C++'s 'const'. There's a bunch of papers about it and I said something about it here. I've also seen some papers that use the
axiom GL (i.e. Löb's theorem) as a basis for describing recursive types.
I've been playing around with the axioms to see if either □ or ◊ behave like some well-known Haskell class.
Firstly, neither of them are Functors, since you cannot prove either of these even in S5:
The second is less obvious than the first, but consider temporal logic: just because 'a' implies 'b' now, and 'a' is sometimes true, it doesn't mean that 'b' is ever true.
This is unfortunate, since axioms T and 4 are suggestive of Monads for ◊:
T: a → ◊a
4: ◊◊a → ◊a
K on the other hand doesn't look like anything I've seen in Haskell:
K: (◊a → ◊b) → ◊(a → b)
Interesting application for a compiler. The other approach would be to type a constant t as "t" and code that calculates t as "Code t". Less like modal logic, though.
Is your way of proving Löb's theorem anything like the regular proof? Your code uses recursion in a way that looks dangerously like a circular proof.
In the special case where f is the identity functor, loeb is precisely the usual Y combinator, so it's about as circular as you can get. I just used the type corresponding to Löb's theorem as
inspiration for a function definition of that type. However, I must read this paper which makes proper use of a complete provability logic.
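To make the Identity-functor observation concrete, here is a small stand-alone sketch (not from the original comment) showing how the ordinary fixed-point combinator falls out of loeb:

```haskell
import Data.Functor.Identity (Identity (..))

loeb :: Functor f => f (f x -> x) -> f x
loeb x = fmap (\a -> a (loeb x)) x

-- At f = Identity, an element of f (f x -> x) is just a function
-- x -> x in disguise, and loeb computes its fixed point.
fixViaLoeb :: (a -> a) -> a
fixViaLoeb f = runIdentity (loeb (Identity (f . runIdentity)))

-- Factorial via the recovered fixed-point combinator:
factorial :: Integer -> Integer
factorial = fixViaLoeb (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```

Unwrapping the definition gives fixViaLoeb f = f (fixViaLoeb f), which is exactly the defining equation of fix.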
Do you know Benton et al.'s paper?
Just had a look at that paper, which I didn't know. It looks like it addresses stuff I've been tossing about in my head for a little while. Not being in academia, I've no idea what are the good
papers to read unless they get posted on Lambda the Ultimate. :-)
I realise this is now something of an old post but hopefully you're still picking up the comments. Just a quick question.
Your test3 doesn't seem to be a monomorphic list (and therefore ill-typed in Haskell), is this deliberate? Or were the 3 and 17 meant to be const 3 and const 17 respectively?
Great mind-boggling stuff, by the way :)
Disregard my last comment. After loading up the post in GHCi, I see why it works. The Num definition for functions allows 3 and 17 to be functions which are basically const 3 and const 17.
Num's sneaky!
Modal logics usually have these rules:
f (a -> b) -> f a -> f b
(This corresponds to the Applicative class, which may one day be made a superclass of Monad.)
f a -> a
Monads do not in general satisfy this. Perhaps comonads or something?
Yes, they are connected to comonads.
Two functions
extract :: w a -> a
extend :: (w a -> b) -> w a -> w b
which follow these laws:
extend extract == id
extract . extend f == f
extend f . extend g == extend (f . extend g)
define a comonad.
You have a neat derivation of "cfix" there!
I bumped into this once myself when attempting to improve on the Essence of Dataflow semantics. I eventually found this post by David Menendez.
cfix :: Comonad d => d (d a -> a) -> a
which is dual to
mfix :: Monad t => (a -> t a) -> t a
We know mfix needs a special Haskell definition for each monad, but cfix does not, it's applicable to all comonads.
Usually, modal box matches a comonads an modal diamond matches a monad. S4's theorems support the functions:
a -> Dia a
Dia (Dia a) -> Dia a
Box a -> a
Box a -> Box (Box a)
which are (co)unit and (co)join of the (co)monad.
Along the lines of your spreadsheet analogy, you're not alone! I've written up a "non-empty graph" data structure as a comonad with the intent of making a cool spreadsheet semantics in Haskell. I
need to dust that off someday...
In regards to fmap not being provable in modal logic, there might be a nuance here. In haskell, the type A matches Box A from a corresponding modal logic: values in Haskell are immutable and thus
"necessary" (i.e. Haskell always assumes the trivial Identity comonad). So the type of fmap (for a monad) actually corresponds to:
fmap :: Box (a -> b) -> Dia a -> Dia b
which is a modal logic theorem (called Dia-K).
Satoshi Kobayashi's "Monad as Modality" is good reference (but sometimes hard to obtain). In regards to Yakeley's Code modality, monads seem to play a role in staged-computing, don't remember a
reference though. Template Haskell maybe?
- Nick Frisby
Well spotted! I looked hard on the web for something with the same signature as loeb but couldn't find anything. I haven't yet fully proved to myself that loeb and cfix are the same thing. They
look like they do the same thing but it seems curious that I'm able to define loeb for any Functor, not just for a Comonad. When I have some time I'll investigate further - comonads being a bit
of a hobby of mine these days.
If you're still reading, I just worked out a tiny bit more detail here.
Do you know my opinion of a Num instance for functions?
In both the 'humour' and 'proposal' categories!
For the interested, Eliezer Yudkowsky has a
'Cartoon Guide' to Löb's theorem
If the box is represented in Haskell as a monad, what will be the representation of the diamond?
I would like to know how the diamond operator should be represented in Haskell :)
Noting for posterity, as this is where folks go when they seek the definition for this combinator:
This version improves the sharing slightly.
loeb x = xs where xs = fmap ($xs) x
When is a homogeneous space a variety?
Let $G$ be a Lie group and let $H$ be a closed subgroup of $G$. Then $G/H$ may not be a group, but it will be a homogeneous space for $G$ with stabilizers conjugate to $H$. Sometimes, this is a
variety, for instance, when $G$ is a complex reductive group and $H$ is a complex subgroup, and it will even be projective when $H$ is parabolic (by definition).
However, when we take real Lie groups, the situation seems more subtle. For instance, if $G=Sp(g,\mathbb{R})$ and $H=U(g)$, then $G/H$ is the Siegel upper half space, which is an (analytic) open set
in a variety, namely, the variety is the space of symmetric $g\times g$ matrices and the open set is given by the ones with positive imaginary part. Similarly, many constructions in Hodge theory,
particularly that of period domains, end up coming from real Lie groups, and so may be varieties, open subsets of varieties, or not varieties at all a priori. Clearly, the quotient being even
dimensional is a necessary condition, but I'd be surprised if it were sufficient.
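For concreteness, the Siegel description can be made explicit (these are standard facts, added here for reference rather than taken from the question; the notation $Sp(g,\mathbb{R})$ follows the question's convention):

```latex
\mathfrak{h}_g \;=\; \{\, Z \in M_g(\mathbb{C}) \;:\; Z^{\mathsf{T}} = Z,\ \operatorname{Im} Z > 0 \,\},
\qquad
\begin{pmatrix} A & B \\ C & D \end{pmatrix}\cdot Z \;=\; (AZ+B)(CZ+D)^{-1},
```

and the stabilizer of the point $iI_g$ under this $Sp(g,\mathbb{R})$-action is isomorphic to $U(g)$, which exhibits $\mathfrak{h}_g \cong Sp(g,\mathbb{R})/U(g)$.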
So the first part of the question is
When is a homogeneous space (an open subset of) a variety?
Now, additionally, these period domains often can be quotiented by a discrete (I believe Griffiths says arithmetic) subgroup of the original group to actually get a variety. For instance (if I'm
understanding right) if we take $\mathfrak{h}_g=Sp(g)/U(g)$ above, we can quotient further by $Sp(g,\mathbb{Z})$ and this gives us $\mathcal{A}_g$, the moduli space of abelian varieties.
When is there a discrete subgroup that we can take a further quotient by to get a variety?
ag.algebraic-geometry lie-groups dg.differential-geometry hodge-theory
2 Answers
I'll try to answer both questions, though I will change the first question somewhat. Let's work in the setting of a real reductive algebraic group $G$ and a closed subgroup $H \subset G$.
Your first question asks when $G/H$ is an open subset of some (presumably complex) variety. I think that this question should be modified in a few ways.
You can't really say that $G/H$ "is a subset" of a variety, since $G/H$ is not a priori endowed with a complex structure. So you need a bit more data to go with the question -- a complex
structure on the homogeneous space $G/H$. Such a complex structure can be given by an embedding of the circle group $U(1)$ as a subgroup of the center of $H$. Let $\phi: U(1) \rightarrow
G$ be such an embedding, and let $\iota = \phi(i)$ be the image of $e^{\pi i} \in U(1)$ under this map. Such an embedding yields an integrable complex structure on the real manifold $G/
H$, I believe (though I haven't seen this stated in this degree of generality).
So now one can ask if $G/H$, endowed with such a complex structure, is an open subset of a complex algebraic variety. But again, I have some objection to this question -- it's not really
the right one to ask. Indeed, it's very interesting when one finds that some quotients $\Gamma \backslash G /H$ are (quasiprojective) varieties -- but such quotients are not obtained as
quotients in a category of varieties, from $G/H$ to $\Gamma \backslash G / H$. They are complex analytic quotients, but not quotient varieties in any sense that I know.
So what's the point of knowing whether $G/H$ is an open subset of a variety? Really, one needs to know properties of $G/H$ as a Riemannian manifold and complex analytic space (e.g.
curvature, whether it's a Stein space). That's the most important thing!
As Kevin Buzzard and his commentators note, under the assumption that $G$ comes from a reductive group over $Q$, and under the assumption that $H$ is a maximal compact subgroup of $G$, and under the assumption that there is a "Shimura datum" giving the quotient $G/H$ a complex structure, the quotient $G/H$ is a period domain for Hodge structures, and the quotients $\Gamma \backslash G / H$ are quasiprojective varieties when $\Gamma$ is an arithmetic subgroup of $G$.
But these are quite strong conditions, on $G$ and on $H$! I have also wondered about other situations when $X = \Gamma \backslash G / H$ might have a natural structure of a
quasiprojective variety. A general technique to prove such a thing is to use a differential-geometric argument. A great theorem along this line is due to Mok-Zhong (Compactifying
complete Kähler-Einstein manifolds of finite topological type and bounded curvature, Ann. of Math 1989). The theorem, as quoted from MathSciNet, reads:
"Let $X$ be a complex manifold of finite topological type. Let $g$ be a complete Kähler metric on $X$ of finite volume and negative Ricci curvature. Suppose furthermore that the
sectional curvatures are bounded. Then $X$ is biholomorphic to a Zariski-open subset $X'$ of a projective algebraic variety $M$."
Such results can be applied to prove quasiprojectivity of Shimura varieties of Hodge type. I believe I first learned this by reading J. Milne's notes on Shimura varieties.
I tried once to apply this to an arithmetic quotient of $G/H$, where $H$ was a bit smaller than a maximal compact (when $G/H$ was the twistor covering of a quaternionic symmetric space)
-- I couldn't prove Mok-Zhong's conditions for quasiprojectivity, and I still don't know whether such quotients are quasiprojective.
Part 2: Baily-Borel! At least if $G$ is the real points of, say, a reductive algebraic group, and $H$ is a maximal compact subgroup, and EDIT furthermore if $G/H$ admits a complex structure
(I think this is equivalent to saying that $G/H$ is a bounded symmetric domain; note that this rules out e.g. $G=SL_n(\mathbf{R})$ for $n>2$). I guess admitting a complex structure is a
necessary condition for being a variety though!
In their seminal Annals paper, Baily and Borel construct sufficiently many theta functions on the quotient of a bounded symmetric domain by an arithmetic subgroup (i.e. "$G(\mathbf{Z})$"
(this makes sense up to some finite error if $G$ is a reductive algebraic group over the rationals)) that they can embed the quotient into a big projective space, giving the quotient the
structure of a quasi-projective variety over the complexes. This is general enough to explain the symplectic group example you give in the question, for example.
Deligne then went on, axiomatising work of Shimura, to show that if furthermore $G$ satisfied certain axioms, then all of this would go through over a number field. See Deligne's "Travaux de Shimura" and his article on Shimura varieties in the Corvallis proceedings. This explains why the moduli space of principally polarized abelian varieties is a variety over the rationals, for example.
I know someone will fix the currently broken wikipedia link, but can they also tell me how they did it? – Kevin Buzzard Mar 4 '10 at 16:43
There's a button to create a link. I just linked it to the words "Baily-Borel" at the beginning and pasted in the url – Charles Siegel Mar 4 '10 at 16:47
here's something I don't understand in this answer: you need some hypothesis on G for the quotient G(R)/K to be a complex manifold. Ah, I think I see; you are assuming that this is complex, and then discussing the second part of the question. – Emerton Mar 4 '10 at 16:53
@Emerton: I'm sure you understand this stuff better than I do. How will it work? You're absolutely right: I (or rather Baily-Borel) need to assume that G(R)/K is a bounded symmetric
domain, which is a condition on G. Then Gamma\G(R)/K is a variety. I'll edit. – Kevin Buzzard Mar 4 '10 at 17:07
@Kevin Buzzard. Another method is to pass the special characters through URL encoding and then put them in the link. For example: w3schools.com/TAGS/ref_urlencode.asp – Regenbogen Mar 4
'10 at 17:31
High School Entrance & Exit Exams
Texas high school students are required to pass a number of subject-specific exit-level exams, which they can begin taking during the eleventh grade, in order to receive their high school diplomas.
Together these tests are called the Texas Assessment of Knowledge and Skills, or TAKS.
The science exam consists of 55 questions, most of which will be four-part multiple-choice questions. The multiple-choice answer choices are alternately labeled A, B, C, D, and F, G, H, J. "Not Here"
may be used as the fourth answer choice in some multiple-choice questions. A few questions will be open-ended gridable questions. For this question type, you will be given an eight-column grid to
record and bubble in your answer.
You will have a few tools at your disposal to help you along with this test. First, you will be given a science chart that contains a chart of formulas on one side and the periodic table of elements
on the other side. Make sure that you're familiar with the formulas and understand the constants/conversions table at the bottom of the page. The formulas page will also have a 20cm metric ruler
along the edge.
In addition to the science chart, you are allowed to use a calculator. Some problems may involve multiple steps and calculations from data given. Even though no question will require the use of a
calculator — that is, each question can be answered without a calculator — using a calculator on some problems may save you valuable time.
• Before doing an operation, check the number that you keyed in to make sure that you keyed in the right number. You may wish to check each number as you key it in.
• Make sure that you clear the calculator after each problem. You may need to clear your calculator as you work a problem to go on to the next part. If this is the case, be sure to write down your
answer before you go on to the next step.
Take advantage of using a calculator on the test. You can use just about any calculator on the test, from a simple four-function calculator to a scientific or graphing calculator — as long as it
doesn't have a typewriter-style (QWERTY) keyboard on it. Just make sure you know how to use the calculator efficiently before you begin the test.
As you approach a problem, first focus on how to solve that problem and then decide if the calculator will be helpful. Remember, a calculator can save you time on some problems, but also remember
that each problem can be solved without a calculator. Also remember that a calculator will not solve a problem for you by itself. You must understand the problem first.
Although the Science Test has no time limit, the suggested time is 2 hours.
Math Forum Discussions
Topic: My final formal answer as to what classes are and what class
membership is!
Replies: 7 Last Post: Apr 7, 2013 4:34 AM
Re: My final formal answer as to what classes are and what class
membership is!
Posted: Apr 2, 2013 6:25 PM
On Mar 28, 3:02 pm, Zuhair <zaljo...@gmail.com> wrote:
> See:http://zaljohar.tripod.com/sets.txt
> Below is the full quote from the above link.
> *********************************************************************************
> What Are Classes!
I swear to Allah I was going to ask the same question. Not because I
figured out the answer or wondered, but because the answer hit me in
the face in my refutations of faulty proofs that NST is inconsistent.
The umpteen axioms etc. below are shit - worthless. That misses the
whole point.
1. If you throw out a bunch of axioms, then you are making arbitrary
decisions and there is no way you can say that a class means that and
only that.
2. The idea is to move AWAY from a bunch of arbitrary decisions and
ask what is really wanted/needed.
3. Class is a primitive and so anything complex - having millions of
smaller subsets - would define millions of things even more
primitive. That is counterintuitive and contrary to the intent.
Why are there classes?
Because people are inconsistent.
1. Everything is a set.
2. x ~e x is not a set.
So what is x ~e x? A formula.
So you have the set of values that make a formula true, including
something that is not a set . . .
Alls I know is that in Frege Logic we have concepts that are total
Boolean functions, and that x ~e X is not total so we have a partial function.
Sets are total functions from everything to {TRUE,FALSE}.
Classes are partial functions from everything to {TRUE,FALSE}.
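That total-versus-partial picture can be made concrete in a toy way. The sketch below is my own illustration, with membership crudely modeled as function application; Russell's predicate R(x) = "not x(x)" is partial because evaluating R on itself never returns.

```python
def R(x):
    # Russell's predicate "x is not a member of itself", with membership
    # modeled (loosely) as function application.
    return not x(x)

def is_total_on(f, arg):
    # Returns False when evaluating f(arg) fails to terminate
    # (detected here via Python's recursion limit).
    try:
        f(arg)
        return True
    except RecursionError:
        return False

print(is_total_on(R, R))   # False: R is only a partial "Boolean function"
```

A predicate that answers on every input plays the role of a set; R, which has no answer at R itself, behaves like a proper class.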
> This account supplies THE final answer as to what classes are,
> and what is class membership relation, those are defined in
> a rigorous system with highly appealing well understood primitive
> notions that are fairly natural and easy to grasp. It is aimed to be
> the most convincing answer to this question. The formulations are
> carried out in first order logic with Identity, Part-hood and Naming
> binary relations. Identity theory axioms are assumed and they are
> part of the background logical language of this theory. The
> mereological
> axioms are those of GEM (Generalized Extensional Mereology), they are
> the standard ones. The two axioms of naming are very trivial.
> The definitions of classes and their membership are coined with the
> utmost care to require the least possible assumptions so they don't
> require grounds of Atomic Mereology or unique naming or the alike..,
> so they can work under more general situations. Also utmost care was
> taken to ensure that those definitions are nearer to the reality of
> the
> issue and not just a technical fix. I simply think that what is given
> here
> do supply the TRUE and FINAL answer to what classes are and to
> what is their membership!
> The General approach is due to David Lewis. Slight modifications are
> adopted here to assure more general and nearer to truth grounds.
> Language: FOL(=,P,name)
> Axioms: ID axioms +
> 1.Reflexive: x P x
> 2.Transitive: x P y & y P z -> x P z
> 3.Antisymmetric: x P y & y P x -> x=y
> Define: x O y iff Exist z. z P y & z P x
> 4.Supplementation: ~y P x -> Exist z. z P y & ~ z O x
> 5.Composition: if phi is a formula, then ((Exist k. phi) ->
> Exist x (for all y. y O x iff Exist z. phi(z) & y O z)) is an axiom.
> Definition: x is a collection of phi-ers iff
> for all y. y O x iff Exist z. phi(z) & y O z
> 6.Naming: n name of y & n name of x -> y=x
> Definition: n is a name iff Exist x. n name of x
> 7.Discreteness: n,m are names & ~n=m -> ~n O m
> /
> Definitions of "Class" and "Class membership":
> Define: x E y iff Class(y) & Exist n. n P y & n name of x.
> 1. Class(x) iff x is a collection of names.
> 2. Class(x) iff x is a collection of names Or x never overlap with a
> name.
> when x never overlaps with a name then it is to be called an inert
> object.
> Definition: x is inert iff ~Exist n. n is a name & x O n
> 3. Class(x) iff
> x is a sum of an inert object and (an inert object or a collection of
> names)
> Sum defined as:
> Sum(x,y) = z iff for all q. q O z iff q O x Or q O y.
> 1 is incompatible with the empty class.
> 2 is incompatible with the subclass principle that is :
> "Every subclass of x is a part of x".
> 3 does the job but it encourages gross violation of Extensionality
> over classes
> since having multiple names for an object is the natural expectation!
> If we assume the subclass principle and use definition 3 then full
> Extensionality
> over classes is in place and it follows that the empty Class is an
> atom.
> Although attractive on the face of it (since the empty set is just a
> technical fix),
> however it is not that convincing since there is no real
> justification
> for such atom-hood.
> If we strengthen the subclass principle into the principle that:
> "For all classes X,Y (Y subclass of X iff Y P X)", then only
> definition 1
> can survive such a harsh condition, and this would force all names to
> be atoms and shuns the existence of empty classes altogether! such
> a demanding commitment that despite the clear aesthetic gain of
> having internally pure classes in the sense that all classes are only
> composed of parts that are classes, yet still this is a very
> demanding
> commitment that do not seem to agree with basic natural expectations
> about naming.
> So a definition of classes that proves Extensionality over them
> without
> restricting multiple naming per object is what is demanded.
> Define: x is an equivalence collection of names iff
> there exist y such that x is the collection of all names of y.
> Define: y is a fusion of equivalence collections of names iff
> y is a collection of names & for all a,b,c (a P y & a name of b & c
> name of b -> c P y)
> Define V' as the collection of ALL inert objects.
> 4. Definition: Class(x) iff
> x is a sum of V' and (V' or a fusion of equivalence collections of
> names)
> As far as the concept of class is concerned Extensionality is at the
> core of it,
> so 4. is the right definition of classes.
> It is nice to see that the *Empty Class* is just the collection of all
> inert objects.
> For the sake of completion of this approach, we may say that
> Definition 4.
> is an Equivalence rendering of Definition 3. Similarly we can
> introduce two
> further definitions that are Equivalence renderings of Definition 1
> and Definition 2.
> But those are rarely applicable in class\set theories.
> Now one can easily define a set as a class that is an element of a
> class.
> An Ur-element is defined as an element of a class that is not a
> class.
> Or alternatively a non-class object. All kinds of circular membership
> can be explained;
> paradoxes can be easily understood. Also non definability of some
> classes
> can be understood.
> This account explains membership and classes in a rigorous manner.
> And actually supplies the FINAL answer!
> Somehow those definitions might be helpful in orienting thought about
> some
> philosophical questions about mathematics founded in set theory. For
> example
> identity and part-hood are expected natural relations and they can be
> reasoned
> about as being human independent, but Naming might present some
> challenge,
> definitely it favors human dependency but still it can be human
> independent!
> Philosophical debate about the nature of sets would become a debate
> about the
> nature of naming procedures.
> Zuhair Al-Johar
> March 21 2013
> ******************************************************************
> Zuhair
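The part-hood axioms quoted above (reflexivity, transitivity, antisymmetry, supplementation) can be spot-checked mechanically on a finite model. This is my own illustration, not part of the post: take the nonempty subsets of a three-element set as the objects and subset inclusion as part-hood.

```python
from itertools import combinations

# Objects: nonempty subsets of {0, 1, 2}; part-hood P is subset inclusion.
objs = [frozenset(c) for r in (1, 2, 3) for c in combinations({0, 1, 2}, r)]

def P(x, y):  # x is a part of y
    return x <= y

def O(x, y):  # x overlaps y: some object is a part of both
    return any(P(z, x) and P(z, y) for z in objs)

# Axioms 1-3: part-hood is a partial order.
assert all(P(x, x) for x in objs)
assert all(P(x, z) for x in objs for y in objs for z in objs
           if P(x, y) and P(y, z))
assert all(x == y for x in objs for y in objs if P(x, y) and P(y, x))

# Axiom 4 (supplementation): if y is not a part of x,
# then some part of y does not overlap x at all.
assert all(any(P(z, y) and not O(z, x) for z in objs)
           for x in objs for y in objs if not P(y, x))
print("axioms 1-4 hold on this 7-object model")
```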
Braingle: 'Bouncy Ball' Brain Teaser
Bouncy Ball
Math brain teasers require computations to solve.
Puzzle ID: #230
Category: Math
Submitted By: Phyllis
Corrected By: cnmne
If you drop a bouncy ball from a distance of 9 feet from the floor, the ball will continue to bounce. Assume that each time it bounces back two thirds of the distance of the previous bounce. How far
will the ball travel before it stops?
45 feet.
The distance can be calculated as 9 + (9)*(2/3)*(2) + (9)*(2/3)*(2/3)*(2)+..., with the 2 accounting for the upward and downward travel distance. Adding 9 to this will result in the formation of an
infinite geometric series: 18*(2/3)^0 + 18*(2/3)^1 +... Factoring out the 18 will result in a series that is equal to (1/(1-(2/3))), which equals 3. Multiplying by the 18 and then subtracting the
nine used to create the series will result in 54 - 9 = 45.
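Numerically, the partial sums of this series converge to 45 almost immediately; a short check of my own:

```python
# 9 ft drop, then each bounce goes up and comes back down
# at 2/3 of the previous height.
total, height = 9.0, 9.0
for _ in range(200):
    height *= 2 / 3
    total += 2 * height          # up plus down
print(round(total, 6))           # 45.0, the limit of the geometric series
```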
Unidus if the ball continues to bounce, like stated in the question, then the ball will never stop, and it will travel an infinite distance
Mar 10, 2002
bobbrt I think that I can explain this mathematically: In the initial drop the ball will travel 9 ft. On the first bounce it will travel (9 ft) * (2/3) * 2 = 12 feet (the "2" is there
Jun 19, 2002 because the ball travels the distance up and then on the way down again). On the second bounce it will travel (9 ft) * (2/3) * (2/3) * 2 = 8 ft. And on and on. Therefore the
equation to be used is D = 9 + sum of [9* 2 * (2/3)^n], where D is the total distance traveled and n is the number of bounces. If you plug this into the engineer's best friend
"MathCad", you get the answer of 45 feet. The reason the ball doesn't travel an infinite distance is that this is an exponential decay - as the number of bounces gets higher, the
distance is so small that it doesn't increase the total distance traveled by a significant amount.
jimbo Answer is 45 feet. It is a geometric progression. If the ball takes 1 second to complete the first bounce and two thirds of the time for each successive bounce, how long will it
Dec 17, 2002 take before it stops bouncing and how many bounces will it have done?
Rowsdower Wow, bobbrt and Jimbo are the guys to talk to for help with math!
Dec 23, 2003
canu I wish jimbo had stated under what experimental conditions he would observe his bouncing ball.
Jul 13, 2004
It is not on the surface of the Earth, where the first bounce (6 feet up and 6 feet down) would take about 1.22 seconds.
It is not on any other planet, nor in any situation where the accelaration of gravity is constant, because there each successive bounce would take SQUARE ROOT OF 2/3 the time of the
previous one (thus on Earth, the 2nd bounce would take about 1 second).
It must have been in a rocket undergoing a constantly increasing acceleration. I will let anybody else describe the specifications of such a rocket.
I don't have MathCad but I remember something of the freshman high school physics of 60 years ago.
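canu's figures check out numerically. The sketch below is my own, assuming constant gravity and no air resistance: each airtime scales with the square root of the bounce height, so successive airtimes shrink by a factor of sqrt(2/3) and sum to a finite total even though the ball bounces infinitely many times.

```python
from math import sqrt

g, h0 = 32.17, 9.0                              # ft/s^2, ft
drop = sqrt(2 * h0 / g)                         # initial 9 ft fall
first_bounce = 2 * sqrt(2 * (2 / 3) * h0 / g)   # 6 ft up and back: ~1.22 s
r = sqrt(2 / 3)                                 # ratio between airtimes
total = drop + first_bounce / (1 - r)           # geometric series of airtimes
print(round(first_bounce, 2), round(total, 1))  # 1.22 7.4
```

So the ball stops bouncing after roughly 7.4 seconds despite the infinite bounce count, a Zeno-style convergent sum.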
Atropus If it is exactly 2/3 decay then it keeps going infinitely doesn't it? or until the movement is so infinitesimal that it gets molecular ^_^ bwahah
Feb 09, 2005
Sane Troppy, it keeps thirding though, so no matter how many more bounces you make it do at such a small height, it technically would never get to 51.
Mar 23, 2005
Like 1/1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 ...
...never gets to 2/1.
darthforman Who cares.
May 27, 2005
musicmaker21113 I agree with Sane. Technically, the way this teaser is phrased, this ball will never stop bouncing....as it always bounces back up 2/3rds the distance it dropped. It will keep
Jan 09, 2006 getting closer and closer to 45 feet, but will never actually get that far.
mr_brainiac I need to reboot my brain. I'm suffering mental brownouts again. I had my choo choo on the right track but somehow I got derailed.
Jan 10, 2006
SPUTNIK2 very clever!
Jan 13, 2006
keveffect1 Nice teaser to remind me of how use geometrics
Feb 21, 2006
jntrcs I think the answer is infinite. think of it this way: It gets 2/3 size so it is growing continually and even though it would get so small of a bounce nothing could determine it.
Mar 18, 2006 (Although odly like pi) but it keeps going and getting larger. I'm no scientist but that is how i see it.
albuquer i agree that the ball will never stop bouncing because of the wording of the question, it travels an infinite distance, but yes if you talking exponential decay a limits, there
May 12, 2006 would be a mathematical answer you can come up with, but the way the answer was worded it would never stop "traveling" if it travels one millionth of a foot one million times, thats
another foot
Smudge It would never stop, as is the wording of the question.
May 25, 2006 Nor would it ever reach the stated answer of 45 feet.
When you all say that it would go an infinite distance, you're wrong.
Yes, it would bounce an infinite number of times, but the distance for each bounce is so small that it would just keep adding another decimal place to the distance travelled, such
that it would be 44.99999999999999999999... for a hell of a long time (which is infinity, i guess).
Smudge it's like if you add 1 + 1/10 + 1/100 + 1/1000 + 1/10000 + 1/100000 + 1/1000000 + ...
May 25, 2006
after one iteration - 1
after two iterations - 1.1
after three iterations - 1.11
after four iterations - 1.111
after five iterations - 1.1111
after six iterations - 1.11111
babygirl8195 what?
Jun 08, 2006
udoboy Dear "Sane"
Jun 19, 2006
1/1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 ...
Is already greater than 2/1 when you add the 1/4 term.
It's the summation of (1/2ⁿ); n ≥ 0 that doesn't reach 2.
xdbtcp used excel - quit at 44.99972....
Jun 21, 2006
GebbieRose Close enough! Nice teaser.
Aug 11, 2006
flowerz2010 Wouldn't the ball never get to the floor, but very close because 2/3 of 1 is 2/3 and to get 2/3 of that you would have to multiply it by 2/3. Therefore, the ball may get very close
Jan 30, 2007 to landing on the floor, but would still be bouncing.
Jerrythellama ah, good old calculus
Feb 04, 2007
bgil7604 I didn't think the instructions were written well enough. I thought it was a trick question, making it 0, but still really cool.
Feb 21, 2007
musa_karolia the answer will tend to 45 without reaching it .... move on. ty
Apr 11, 2007
javaguru After 198 bounces, the distance the ball would travel on the 199th bounce is less than the Planck length (1.616 x 10^-35 meters = 5.30 x 10^-35 feet). At that point, no smaller unit
Jan 28, 2009 of movement is possible and the ball is either at rest, or more likely, is oscillating between the two positions. Either way, the precision of the measurement of the position of
the ball is far beyond the limits of what the Heisenberg uncertainty principle allows.
Disregarding the effect of quantized space, which would mean that the distance the ball travels is limited to multiples of the length of the quanta, the answer is then
45 +/- 5.3 x 10^-35 feet
I just rounded this to 45 feet given that the effects of quantized space make the error unmeasurable.
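javaguru's cutoff can be recomputed; this is my own check, and the exact bounce count shifts by a few depending on unit conventions and on what counts as one bounce's travel distance.

```python
from math import ceil, log

planck_ft = 1.616e-35 / 0.3048      # Planck length converted to feet
# Travel distance on bounce n (up plus down): 18 * (2/3)**n feet.
n = ceil(log(planck_ft / 18) / log(2 / 3))
print(n)                            # ~200 (javaguru quotes 198)
```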
Dimension of Range and Relations
February 20th 2011, 10:34 PM #1
Junior Member
Nov 2010
If we define the linear map T: Rn --> Rm and x --> Ax and the dimension of the nullspace of A is 2, what is the dimension of the range of A? What can you say about the relation between n and m?
I think the dimension of the range of A is m-2 because dim(nullspace(A)) + dim(range(A)) = dim(A) and that would mean m>n but I could be understanding this wrong.
I don't think you're quite applying the rank-nullity theorem correctly here. That theorem says that if you have a linear transformation $T:\mathbb{R}^{n}\to\mathbb{R}^{m},$ then
$\text{dim}(\text{range}(T))+\text{dim}(\text{kernel}(T))=n,$ not $m.$
How does this change your result?
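For concreteness, here is the corrected statement checked numerically on a small example of my own (not from the thread): a map $T:\mathbb{R}^4\to\mathbb{R}^3$ with a 2-dimensional kernel.

```python
import numpy as np

# The last two columns are zero, so e3 and e4 span the kernel.
A = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 0., 0.]])
U, s, Vt = np.linalg.svd(A)
rank = int((s > 1e-10).sum())        # dim(range(T))
kernel_basis = Vt[rank:]             # right singular vectors killed by A
nullity = kernel_basis.shape[0]      # dim(kernel(T))
n = A.shape[1]                       # dimension of the DOMAIN, R^4
print(rank, nullity, rank + nullity == n)   # 2 2 True
```

Note the sum is n = 4, the domain's dimension, not m = 3.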
Oh I didn't realize that the rank-nullity theorem would apply to Rn. And kernel is equivalent to nullspace, right?, so dim(range(T)) = n-2. So does this mean that m<n because if the dimension of
the range is less than n, then that means that the dimension of the codomain is less than the dimension of the domain?
Thank you for your help.
Kernel = nullspace. That is correct.
Your figure of n - 2 for the range is correct, as well.
The dimension of the codomain is, I believe, always less than or equal to the dimension of the domain. You can think of this as a "conservation of information" theorem, almost. A linear
transformation can't produce new information, it can only manipulate existing information, or, if the rank of the matrix is less than the number of rows, it can destroy information.
Just remember that if you have a linear transformation $T:V\to W,$ then
$\text{rank}(T)\le\dim(V),$ and $\text{rank}(T)\le\dim(W).$
Make sense?
Oh ok. So what I'm getting from this is that m<=n but also that n-2<=m since rank is less than the dimension of the domain and codomain so m is in between n-2 and n?
Well, I think you have to distinguish between the range and the co-domain. See here for an explanation. The fact that vectors go from Rn to Rm via T does not mean that T is onto Rm (that is,
every vector in Rm can be "hit" by a vector coming from Rn via T). How are you defining range and codomain? And what does the notation
$T:V\to W$
mean to you? Does it mean automatically that $T$ is onto $W?$
How to find the intercepts of y=x^3-3x
July 9th 2012, 09:04 AM
How to find the intercepts of y=x^3-3x
How to find the intercepts of y=x^3-3x. I keep coming up with x intercept= (sq.rt. 1, 0) and y intercept (0,0) but was told a part of that answer or the whole thing is wrong or it is incomplete.
July 9th 2012, 09:20 AM
Re: How to find the intercepts of y=x^3-3x
July 9th 2012, 09:29 AM
Re: How to find the intercepts of y=x^3-3x
(x^2-3) factors to (x-3) and (x-1). so...x intercepts are (3,0) and (-1,0)?
July 9th 2012, 09:34 AM
Re: How to find the intercepts of y=x^3-3x
July 9th 2012, 09:44 AM
Re: How to find the intercepts of y=x^3-3x
so... x intercepts are (sq.rt.3,0) and (-sq.rt.3, 0) and i have no idea about the third intercept. How do you enter symbols into the forum? Thanks a lot!!
July 9th 2012, 10:01 AM
Re: How to find the intercepts of y=x^3-3x
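For reference, my own check (not part of the thread): setting y = 0 gives x^3 - 3x = x(x^2 - 3) = 0, so the x-intercepts are (-sqrt(3), 0), (0, 0), and (sqrt(3), 0), and the y-intercept is (0, 0). A quick numerical confirmation:

```python
import numpy as np

# Coefficients of x^3 + 0x^2 - 3x + 0
roots = sorted(np.roots([1, 0, -3, 0]).real)
print(roots)   # approximately [-1.732..., 0.0, 1.732...]
```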
Stochastic integrals as honest martingales — exponential damping
We have a given positive martingale $\rho_t$, with the dynamics: $$\textrm{d}\rho_t = \lambda_t \rho_t \textrm{d}W_t$$ where $W_t$ is a standard Brownian motion. Now we have an "exponentially dampened" process $p_t$: $$p_t=\int_0^t \exp(-\int_0^s r_u \textrm{d} u)\textrm{d} \rho_s$$ where $r_u \geq 0$. If needed I could add stronger assumptions for $r_u$, e.g.:
(1) $r_u>0$
(2) $\exp(-\int_0^t r_u \textrm{d} u) \rightarrow 0$ as $t\rightarrow \infty$
I know (from the answer to my previous question Stochastic integrals as honest martingales -- comparison criterion) that in generality $p_t$ is not guaranteed to be a martingale, but is it the case
in one of these specific cases? If not I would appreciate a (simple) counterexample.
@Douglas, you're right, cheers! – Grzenio Sep 6 '11 at 7:12
@Grzenio : I think there is a little problem with your integrand because as written you have $p_t=\exp(-\int_0^t r_u\,du)\cdot\rho_t$, shouldn't it be $p_t=\int_0^t \exp(-\int_s^t r_u\,du)\,d\rho_s$ ? Regards – The Bridge Sep 6 '11 at 11:53
@Grzenio (and myself): By the way in both cases you don't necessarily have a martingale: in your case because $p_t=\exp(-\int_0^t r_u\,du)\cdot\rho_t$ obviously need not be a martingale in general, and in the case I suggested it is the same; if you have $p_t=\int_0^t \exp(-\int_s^t r_u\,du)\,d\rho_s$ then you are not assured that it is a martingale (or a local martingale) because of the dependence of the integrand on $t$ (apply Itô-Leibniz rule or Stochastic-Fubini to see that a non-null drift appears in the general case). – The Bridge Sep 6 '11 at 13:29
Hi @TheBridge, thanks for pointing me to the typo, of course there should not be $t$ dependence in the integrand! – Grzenio Sep 6 '11 at 14:37
@ Grzenio : May be you could have a look at Novikov's criterion it is usually used to get a true martingale from a local martingale but it might be applied here or be useful to impose a condition
over the $r$ process. Regards – The Bridge Sep 6 '11 at 14:42
1 Answer
Yes, in this case it is true that $p$ is a proper martingale! Note that your integrand $\exp\left(-\int_0^t r_u\,du\right)$ is an adapted, continuous, and decreasing process bounded by 1. So, the following statement implies that $p$ is a martingale.

Let $\rho$ be a cadlag martingale and $\xi$ be an adapted left-continuous and decreasing process with $0\le\xi\le1$. Then, $M=\int\xi\,d\rho$ is a martingale.
As stochastic integration with respect to a bounded predictable integrand preserves the local martingale property, $M$ must at least be a local martingale. Then, it is standard that a local martingale $M$ is a proper martingale if and only if the set $$\left\{M_{\tau\wedge t}\colon\tau{\rm\ is\ a\ stopping\ time}\right\}$$ is uniformly integrable, for each fixed $t\in\mathbb{R}^+$. Processes satisfying this uniform integrability property are sometimes said to be of class (DL), which is a restriction of the class (D) property to finite time intervals $[0,t]$.

Let's show that $M$ is of class (DL). Without loss of generality, by subtracting $\rho_0$ from $\rho$ if necessary, we can assume that $\rho_0=0$. As $\rho$ is a martingale, it is of class (DL), so the set $$S=\left\{\rho_{\tau\wedge t}\colon\tau{\rm\ is\ a\ stopping\ time}\right\}$$ is uniformly integrable, for fixed $t\in\mathbb{R}^+$. As $\xi$ is decreasing and left continuous, we can define random times $\tau_x=\inf\left\{t\in\mathbb{R}^+\colon\xi_t < x\right\}$ for $0\le x\le1$. These are stopping times, either by applying the debut theorem or using the fact that $\{\tau_x < s\}=\{\xi_s < x\}$ (strictly speaking, this requires right-continuity of the underlying filtration but, as the martingale property of adapted processes is unchanged by replacing the filtration by its right-continuous version, this is not important). For each positive integer $n$, the process $$\xi^{(n)}_s=\frac1n\sum_{k=1}^n 1_{[0,\tau_{k/n}]}$$ satisfies $\vert\xi^{(n)}-\xi\vert\le\frac1n$. So, for any stopping time $\tau$, bounded convergence gives the following limit in probability: $$M_{\tau\wedge t}=\lim_{n\to\infty}\int_0^{\tau\wedge t}\xi^{(n)}_s\,d\rho_s=\lim_{n\to\infty}\frac1n\sum_{k=1}^n\rho_{\tau_{k/n}\wedge\tau\wedge t}.$$ As $\tau_{k/n}\wedge\tau$ is a stopping time, this expresses $M_{\tau\wedge t}$ as a limit in probability of convex combinations of elements of $S$. As taking the convex hull and closure in probability of a set of random variables preserves the uniform integrability property, this means that the set of all $M_{\tau\wedge t}$ for stopping times $\tau$ is uniformly integrable. So, $M$ is of class (DL) and is a proper martingale.
uniform integrability property, this means that the set of all $M_{\tau\wedge t}$ for stopping times $\tau$ is uniformly integrable. So, $M$ is of class (DL) and is a proper martingale.
Thanks for your reply, it's really convenient that this process IS a martingale in my model! I am trying to understand the proof and everything seems straightforward apart from the last
paragraph, where you use the fact that the limit in the last equation preserves the uniform integrability property. Could you provide some (citeable) reference? Btw, is there a reference
for the whole proof that I could cite? – Grzenio Sep 12 '11 at 9:07
There's the following which is not really citeable and was written by myself, so not an independent statement either, but it has proofs. planetmath.org/encyclopedia/… – George Lowther Sep
12 '11 at 10:25
Just to double check, this construction works if $\xi_s$ doesn't go to zero at infinity? Then the stopping times $\tau_x$ can become infinite, but it shouldn't matter - the indicator
function would in these cases be always one. Is that correct? – Grzenio Sep 14 '11 at 16:12
That's correct. Also, the definition of a martingale only involves finite time intervals. If a process is a martingale over all finite time intervals then it is a martingale. You
shouldn't worry about what happens at infinity. – George Lowther Sep 14 '11 at 21:00
[SOLVED] Centre of mass
February 25th 2009, 04:43 PM #1
A uniform semicircular lamina has mass M. A is the mid-point of the diameter and B is on the circumference at the other end of the axis of symmetry. A particle of mass m is attached to the lamina
at B. The centre of mass of the loaded lamina is at the mid-point of AB. Find, in terms of pi, the ratio M : m.
The answer given is 3pi : (3pi-8)
using the theorem of Pappus ...
let $R$ = radius of the semicircle
$r$ = distance of the lamina's centroid from the axis of rotation (the diameter)
$\frac{\pi R^2}{2} \cdot 2\pi r = \frac{4}{3}\pi R^3$
$r = \frac{4R}{3\pi}$
let point A be 0 ...
$x_{cm} = \frac{M\frac{4R}{3\pi} + mR}{M+m}$
$\frac{R}{2} = \frac{M\frac{4R}{3\pi} + mR}{M+m}$
$\frac{1}{2} = \frac{M\frac{4}{3\pi} + m}{M+m}$
$2\left(M\frac{4}{3\pi} + m\right) = M+m$
$\frac{8M}{3\pi} + 2m = M+m$
$m = M\left(1 - \frac{8}{3\pi}\right)$
$\frac{1}{1 - \frac{8}{3\pi}} = \frac{M}{m}$
$\frac{3\pi}{3\pi - 8} = \frac{M}{m}$
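As a quick numerical sanity check of the result (a sketch; the function and variable names are mine, and I take $R=1$):

```python
import math

def x_cm(M, m, R=1.0):
    # Centre of mass, measured from A, of the semicircular lamina
    # (mass M, centroid at 4R/(3*pi) from A) plus the point mass m at B
    # (distance R from A).
    return (M * 4 * R / (3 * math.pi) + m * R) / (M + m)

# With M : m = 3*pi : (3*pi - 8), the combined centre of mass
# should sit at the midpoint of AB, i.e. at R/2.
ratio = 3 * math.pi / (3 * math.pi - 8)
```

Evaluating `x_cm(ratio, 1.0)` gives 0.5, the midpoint of AB, as required.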
February 25th 2009, 05:13 PM #2
We found the derivatives of sin and cos, and now that we have the quotient rule we can take derivatives of all those other trig functions we didn't discuss yet.
We used three general steps in the above problem.
• We rewrote the function in terms of sin and cos,
• we used the quotient rule, and
• we wrote the answer in terms of trig functions.
Find the derivative of tan x.
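Putting those three steps together for this one:

```latex
\frac{d}{dx}\tan x
  = \frac{d}{dx}\,\frac{\sin x}{\cos x}
  = \frac{\cos x\cdot\cos x-\sin x\cdot(-\sin x)}{\cos^2 x}
  = \frac{\cos^2 x+\sin^2 x}{\cos^2 x}
  = \frac{1}{\cos^2 x}
  = \sec^2 x.
```

The simplification in the middle uses the Pythagorean identity sin²x + cos²x = 1.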
Find the derivative of each function.
Re: st: -gmm- Heckman problem
From Nick Cox <njcoxstata@gmail.com>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject Re: st: -gmm- Heckman problem
Date Wed, 19 Sep 2012 01:30:35 +0100
Thanks for fixing the post. The nesting of macros implies that the listing of instruments here includes, in either case, an expression, whereas the syntax is that only a varlist is allowed in that
place. At a minimum, I guess all your instruments must already exist as variables at the time of calling -heckman-. Whether your (implied) instruments make sense I can't say.
Use -macro list- before your final command to see the consequences of your definitions.
On 18 Sep 2012, at 22:12, shetty sandeep <getsane@hotmail.com> wrote:
Hello Nick,
I am sorry about not using the initial capitals for proper nouns. I will keep that in mind.
The $mills in the previous email is a typo only in the email and not in the code. In the code it is $mil - this I define. I am pasting the full code again. When I run the code (pasted below) I
get some errors. I describe them below.
*Main code
global xb "{b1}*tage+{b2}*sqage+{b3}*child_less_18+{b4}*marry+{b5}*faminc+{b6}*efnp+{b7}*metro+{b8}*race+{b9}*firmsize+{b0}"
global phi "normalden($xb)"
global Phi "normal($xb)"
global mil "$phi/$Phi"    // calculating the inverse Mills ratio
global a1 "($phi/($Phi*(1-$Phi)))*(trad-$Phi)"    // derivative of the selection likelihood; trad is the dummy for the observed log wage
// eq 2
global xb2 "{beta1}*tage+{beta2}*sqage+{beta3}*marry+{beta4}*metro+{beta5}*race+{beta6}*firmsize+{beta0}"    // wage equation
// First gmm command
gmm (eq1:$a1)(eq2:lnwage-$xb2-{gamma}*($phi/$Phi)), ///
    instruments(eq1: tage sqage child_less_18 marry faminc efnp metro race firmsize) ///
    instruments(eq2: tage sqage marry metro race firmsize $mil) winitial(identity)
When I run the above gmm command where instruments for the second equation includes $mil, I get the following error
normalden({b1}*tage+{b2}*sqage+{b3}*rfoklt18+{b4}*marry+{b5}*faminc+ {b6}*efnp+{b7}*metro+{b8}*r
numlist in operator invalid
I have checked the other parts of the code and this error is mainly to do with the $mil as an instrument in the gmm command.
When I run the gmm code replacing $mil with $phi and $Phi as instruments as below
gmm (eq1:$a1)(eq2:lnwage-$xb2-{gamma}*($phi/$Phi)), ///
    instruments(eq1: tage sqage rfoklt18 marry faminc efnp metro race firmsize) ///
    instruments(eq2: tage sqage marry metro race firmsize $phi $Phi) winitial(identity)
I get the following error as "{ Invalid name"
I would like to know why is this the case and how should I specify the Inverse Mills ratio ($mil) as an instrument in the -gmm- command?
Thank you for any help.
From: njcoxstata@gmail.com
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: -gmm- heckman problem
Date: Tue, 18 Sep 2012 21:02:20 +0100
According to this, you define $mil, but don't use it, and try to use
$mills, but don't define it. Otherwise put, what you are showing us
appears to be incomplete at best, or even incorrect. Even if that's
typos as far as you are concerned, nevertheless it's hard to figure
out what you expect us to figure out if you don't show us code that
should be expected to work.
Pedantry corner: Heckman is a
major figure, and Mills was a minor figure; either way their names get
initial capitals.
On 18 Sep 2012, at 20:42, shetty sandeep <getsane@hotmail.com> wrote:
I am trying to estimate a heckman wage model using gmm. But I am
facing a problem with the stata -gmm- command. I consistently get an
error as "{ invalid name". The inverse mills ratio in the second
stage moment condition seems to be the problem. My code is pasted
below. It is likely that I may have some conceptual issues with gmm
heckman. Any help is greatly appreciated.
global xb "{b1}*tage+{b2}*sqage+{b3}*child18+{b4}*marry+{b5}*faminc+{b6}*famsize+{b7}*metro+{b8}*race+{b9}*firmsize+{b0}"
global phi "normalden($xb)"
global Phi "normal($xb)"
global mil "$phi/$Phi"
global a1 "($phi/($Phi*(1-$Phi)))*(trad-$Phi)"    // derivative of the selection likelihood
// eq 2
global xb2 "{beta1=0.02}*tage+{beta2=-0.0004}*sqage+{beta3=-0.007}*marry+{beta4}*metro+{beta5}*race+{beta6=1}*firmsize+{beta0=1}"    // wage equation
gmm (eq1: $a1)(eq2: lwr1-$xb2-{gamma}*($phi/$Phi)), ///
    instruments(eq1: tage sqage child18 marry faminc famsize metro race firmsize) ///
    instruments(eq2: tage sqage marry metro race firmsize $phi $Phi)
I have also tried using $mills as an instrument in which case I get
an error as "numlist in operator invalid". I figured out that the
division operator "/ " in instruments option of gmm is reserved for
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Defining Quotient Bundles
This is an extremely elementary question but I just can't seem to get things to work out. What I am looking for is a natural definition of the quotient bundle of a subbundle $E'\subset E$ of $\mathbb{R}$ (say) vector bundles over a fixed base space $B$. Every source I find on this essentially leaves the construction to the reader. I would like to glue together sets of the form $U\times E_x/E'_x$, where $x\in U$ is a locally trivial neighborhood, by some sort of transition function derived from those corresponding to $E$ and $E'$, but this doesn't actually make sense in any meaningful way. While I am tagging this as differential geometry, I would like a construction that works in the topological category (i.e., does not invoke Riemannian metrics) and avoids passing to the category of locally free sheaves.
Sorry if this is a repost (I'm sure it is, but I can't seem to find anything) and thanks in advance.
dg.differential-geometry gn.general-topology
This is a good challenge for someone learning the ins and outs of vector bundles. Just take it slow and struggle with it for a while. It is important to do things that are not canonical or
2 functorial when trivializing the quotient bundle. I suggest trying to work with local frames of sections of the three bundles in question and build trivializations to $U \times \mathbb{R}^k$ (with
different values of $k$ for each bundle of course). – Deane Yang Jun 8 '10 at 3:44
3 Answers
(I was going to leave this as a comment but decided that it's a bit long for that)
A couple of remarks:
1. You express an aversion to Riemannian metrics because you want to be able to apply this in the topological category. That's fine, except for two things: firstly, Riemannian metrics
would not be explicitly involved in this construction as it is a general construction that applies to all vector bundles, not just tangent bundles. Secondly, having inner products on
the fibres of a vector bundle is not something that is special to the smooth category. Using a partition of unity argument (assuming you're working over a sufficiently nice space, or
your vector bundle is a pull-back of a universal one - look up "numerable cover" for more on this - but note that all the answers to this question tacitly assume this), any finite
dimensional vector bundle admits a continuous choice of inner product on its fibres. So the standard argument: "choose an orthogonal structure and take the orthogonal complement" works
equally well in the continuous category as the smooth one.

2. What is really going on here is a reduction of structure group. The structure group of the big bundle is $Gl(n)$. The inclusion of the subbundle implies that it reduces to the subgroup
that preserves $\mathbb{R}^k \subseteq \mathbb{R}^n$. At this point, you should work out what this subgroup consists of - think in terms of matrices if you don't see it immediately.
General Nonsense (although for this case, the more junior Lieutenant Nonsense will do) implies that this subgroup is homotopy equivalent to $Gl(k) \times Gl(n-k)$. A reduction to this
defines an isomorphism $E \cong E' \oplus E''$, where $E''$ is the required quotient bundle. The two previous answers can be viewed as constructing this reduction. The "standard
orthogonality" argument rests on the observation that $Gl(m) \simeq O(m)$ (a homotopy equivalence) for any $m$, so we can reduce everything to the corresponding orthogonal group. Thus we start with $O(n)$, reduce
to the subgroup that preserves $\mathbb{R}^k$ and then ... but there is no "and then" because this subgroup is already $O(k) \times O(n-k)$. So either way, you are doing one "reduction
of structure group", the difference between the two methods is simply a choice of doing $Gl(n) \to O(n)$ at the start, or doing $Gl(n;k) \to Gl(k) \times Gl(n-k)$ in the middle.
Andrew, I agree. Isn't that what is going on in my construction, in guise? I construct $\bar \psi$ first as a vector bundle in which $E'$ sits in as a direct sum. – David Carchedi Jun 8
'10 at 11:23
Absolutely! As I tried to make clear, this is a comment on the question and the two answers. As this is (as the questioner admits) a fairly basic question, the educator in me could not
resist adding a little of the background story to the explicit constructions (and the phrase about "Riemannian metrics" needed clarifying!). – Andrew Stacey Jun 8 '10 at 11:55
I agree this isn't completely obvious. Here's a slightly different take on it. Our intended vector bundle is
$E/E' :=\coprod E_x/E'_x$,
the disjoint union of the quotient vector spaces of fibres. We just need to specify the topology on it. We do this by describing a family of maps which we intend to be continuous
local trivializations for the bundle.
So, take a point $p\in B$, and a nhd $U$ of $p$ on which we have a frame $(e_1, \cdots e_{n+k})$ for $E$. Choose $1\leq i_1<\ldots< i_k\leq n+k$ such that on the fibre $p$,
$E_p=E_p' + span(e_{i_1},\ldots e_{i_k})$.
By the continuity of the determinant function, in fact there's a neighbourhood of $p$ on which this is true; that is, there's a (perhaps smaller) nhd $V$ of $p$ such that for all $x\in V$,
$E_x=E_x' + span(e_{i_1},\ldots e_{i_k})$.
So at each $x\in V$, we have a basis
$(e_{i_1}+E_x',\ \ldots \ e_{i_k}+E_x')$
for $E_x/E'_x$. We demand that this collection of bases give a (continuous) frame for $E/E'$ over $V$. It's an easy check that the transition functions between two thus-constructed
local trivializations are continuous, as required.
Let $h:E' \hookrightarrow E$ denote the inclusion of vector bundles. Let $p:Coker(h) \to B$ be defined in the obvious way (fiber at $x$ is $E_x/h(E'_x)$). It suffices to construct local trivializations of it. Let $E'$ have rank $n$ and $E$ rank $n+k$. Let $U$ be an open subset of $B$ over which both bundles admit a trivialization. Consider the induced map $h:\pi'^{-1}(U) \to \pi^{-1}(U)$ where $\pi'$ and $\pi$ are the associated projections. Let $\phi:\pi'^{-1}(U) \to U \times \mathbb{R}^{n}$ and $\psi:\pi^{-1}(U) \to U \times \mathbb{R}^{n+k}$ be the associated trivializations. Let $\sigma=\psi \circ h \circ \phi^{-1}:U \times \mathbb{R}^{n} \to U \times \mathbb{R}^{n+k}$. Note, $\sigma$ must be fiber-wise linear since we have a vector bundle. So we have an identification $\mathbb{R}^{n+k} \cong Im(\sigma_x) \oplus Coker(\sigma_x)$ for each $x$. So we get an iso $U \times \mathbb{R}^{n+k} \cong U \times Im(\sigma) \oplus Coker(\sigma)$. Composing this with $\sigma$ yields a morphism $U \times \mathbb{R}^{n} \to U \times Im(\sigma) \oplus Coker(\sigma)$. Call this composition $g$. Let $f$ be the map $U \times \mathbb{R}^{n} \oplus \mathbb{R}^{k} \to U \times \mathbb{R}^{n+k}$ which is the identity on $U$ and $g \oplus I_k$ on $\mathbb{R}^{n} \oplus \mathbb{R}^{k}$, where $I_k$ is the $k \times k$ identity matrix. Then $\bar \psi:=f^{-1} \circ \psi:\pi^{-1}(U) \to U \times \mathbb{R}^{n} \oplus \mathbb{R}^{k}$ is a bundle chart. Note that $Ker(proj_2 \circ \bar \psi)=Im(h)$. So, $\bar \psi$ induces a diffeomorphism $p^{-1}(U) \to U \times \mathbb{R}^{k}$ via $(x,w+h(E'_x)) \mapsto (x,(proj_2 \circ \bar \psi)(w))$. This will serve for the trivialization.
Hi David, I'm a little worried about this. First, do you mean "coker" instead of "ker" throughout? Secondly and more importantly, I think as you've set this up the identification of $Im(\
sigma_x)\oplus Coker(\sigma_x)$ with $\mathbb{R}^{n+k}$ isn't canonical. This is a problem because it then isn't clear that the identification on each fibre can be chosen so as to "vary
continuously from fibre to fibre" (which is the key point for the local trivialization). – macbeth Jun 8 '10 at 2:23
(Reading Deane's comment, I notice my remarks might be ambiguous: by "canonical" I mean "uniquely specified from the data (including trivializations) so far given," not "independent of the
choice of trivializations.") – macbeth Jun 8 '10 at 8:00
No, no. Every short-exact sequence of vector spaces SPLITS, so I mean kernel, not cokernel. Now, to define the splitting in a natural way, you need to pick a basis this is true- but then
this boils down to the same comment you made about the continuity of the determinant function. Everything is fine.. – David Carchedi Jun 8 '10 at 10:23
Maybe this is just a misunderstanding of notation: as I've read your answer, $\sigma$ denotes the inclusion of fibres of $E'$ into fibres of $E$, and in particular has zero kernel. Sure,
the ses $0\to E'_x\to E_x\to E_x/E_x'=coker(\sigma)\to 0$ splits, and I'm happy to believe that essentially our answers say the same thing. – macbeth Jun 8 '10 at 11:43
(The style of the question is that of a beginner to DG so I'm being more pedantic than usual. I was sure you knew what was going on but couldn't be sure the questioner did.) You should be
careful to distinguish between a (local) section of the vector bundle and a (local) section of the associated principal bundle. The latter is, of course, equivalent to a trivialisation of
the vector bundle (by definition, if you set things up correctly) but the former is most definitely not. As originally phrased, your answer read as if you just needed a local section of the
vector bundle. – Andrew Stacey Jun 8 '10 at 12:26
convert 550 mm to inches
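The conversion itself is simple arithmetic: one international inch is defined as exactly 25.4 mm, so (a minimal sketch in Python):

```python
MM_PER_INCH = 25.4  # exact, by definition of the international inch

def mm_to_inches(mm):
    return mm / MM_PER_INCH

# 550 mm works out to about 21.65 inches.
result = mm_to_inches(550)
```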
Math Forum Discussions
Topic: Circle Construction
Replies: 3 Last Post: Jul 31, 2008 10:48 AM
Soroban Circle Construction
Posted: Oct 14, 2003 9:48 PM
Hello!
Draw the line segment AB.
Through the midpoint C of AB, construct line M parallel to line L.
From any point on M, drop a perpendicular to line L.
This is the radius R of the circle.
Using point B as center, swing an arc of radius R, cutting line M at O.

O is the center of the desired circle.
Particle Swarm Optimization Algorithm for Unrelated Parallel Machine Scheduling with Release Dates
Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 409486, 9 pages
Research Article
Department of Industrial Engineering and Systems Management, Feng Chia University, P.O. Box 25-097, Taichung 40724, Taiwan
Received 6 September 2012; Revised 11 December 2012; Accepted 25 December 2012
Academic Editor: Baozhen Yao
Copyright © 2013 Yang-Kuei Lin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
In this research, we consider the NP-hard problem of minimizing makespan for jobs on unrelated parallel machines with release dates. A heuristic and a very effective particle swarm optimization (PSO) algorithm are proposed to tackle the problem. Two lower bounds are proposed to serve as a basis for comparison for large problem instances. Computational results show that the proposed PSO is very accurate and that it outperforms the existing metaheuristic.
1. Introduction
This research considers the problem of scheduling jobs on unrelated parallel machines in the presence of release dates. The performance measure, makespan, is defined as $C_{\max}=\max_{j} C_j$, where $C_j$ is the completion time of job $j$. Minimizing makespan not only completes all jobs as quickly as possible but also is a surrogate for maximizing the utilization of machines. Following the three-field notation of Graham et al. [1], we refer to this problem as $R_m\,|\,r_j\,|\,C_{\max}$. This problem is NP-hard [2].
Chen and Vestjens [3] used the largest processing time (LPT) rule to minimize makespan for identical parallel machines with release dates. The release date of a job is not known in advance, and its processing time becomes known at its arrival. Kellerer [4] proposed algorithms for two variants of the parallel machine makespan problem. Koulamas and Kyparisis [5] considered uniform parallel machine scheduling problems; they proposed a heuristic and derived a tight worst-case ratio bound for it. Centeno and Armacost [6] showed that the LPT rule performed better than the least flexible job (LFJ) rule for the problem with machine eligibility restrictions. Lancia [7] applied a branch-and-bound (b&b) procedure to solve scheduling problems with release dates and tails on two unrelated parallel machines. Similarly, Gharbi and Haouari [8] also presented a b&b procedure to solve the problem. Carlier and Pinson [9] reported new results on the structures of Jackson's pseudopreemptive scheduling applied to the problem. Li et al. [10] used a polynomial time approximation scheme for scheduling identical parallel batch machines. Li and Wang [11] proposed an efficient algorithm for scheduling with inclusive processing set restrictions and job release times.
To the best of our knowledge, no research has yet been published that develops an efficient algorithm to minimize makespan for unrelated parallel machines with release dates. The rest of this paper
is organized as follows. Section 2 presents our proposed lower bounds. Section 3 presents the proposed PSO. In Section 4, the computational results are reported. Section 5 presents our conclusions
and suggestions for future research.
2. Lower Bounds to $R_m\,|\,r_j\,|\,C_{\max}$
We propose two straightforward and easily implementable lower bounds for the studied problem. The first bound, $LB_1$, is the maximum over all jobs of the job's release date plus its minimum processing time (across all machines). The second bound, $LB_2$, is the minimum release date (among all jobs) plus the sum of all jobs' minimum processing times (across machines) divided by the number of machines. We set the lower bound $LB$ equal to the maximum of $LB_1$ and $LB_2$:
$$LB_1=\max_{1\le j\le n}\Bigl(r_j+\min_{1\le i\le m}p_{ij}\Bigr),\qquad LB_2=\min_{1\le j\le n}r_j+\frac{1}{m}\sum_{j=1}^{n}\min_{1\le i\le m}p_{ij},\qquad LB=\max\{LB_1,LB_2\}.$$
To illustrate the proposed lower bounds, we consider example 1, which has 2 machines and 7 jobs. The matrix of processing times for example 1 is given in Table 1.
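A minimal sketch of the two bounds in Python (function and variable names are mine; since Table 1's entries are not reproduced here, the usage example below uses made-up data):

```python
def makespan_lower_bound(release, ptimes):
    """LB = max(LB1, LB2) for unrelated machines with release dates.

    release[j] : release date of job j
    ptimes[i][j]: processing time of job j on machine i
    """
    m, n = len(ptimes), len(release)
    # Minimum processing time of each job across all machines.
    pmin = [min(ptimes[i][j] for i in range(m)) for j in range(n)]
    lb1 = max(release[j] + pmin[j] for j in range(n))  # LB1
    lb2 = min(release) + sum(pmin) / m                 # LB2
    return max(lb1, lb2)
```

For instance, with release dates [0, 2, 4] and processing times [[3, 5, 2], [4, 1, 6]] on two machines, the per-job minima are [3, 1, 2], so LB1 = max(3, 3, 6) = 6, LB2 = 0 + 6/2 = 3, and LB = 6.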
3. The Proposed PSO Algorithm
PSO was first introduced by Kennedy and Eberhart [12] for solving continuous nonlinear function optimization problems. PSO is based on the metaphor of social interaction and communication in flocks
of birds or schools of fish. In these groups, there is a leader (the one with the best performance) who guides the movement of the whole swarm. In a PSO, each individual is called a “particle,” and
each particle flies around the search space with some velocity. In each iteration, a particle moves from its previous location to a new location at its newly updated velocity, which is calculated
based on the particle’s own experience and the experience of the whole swarm.
A population of $P$ particles is assumed to evolve in a $D$-dimensional search space such that each particle $k$ is assigned the position vector $X_k^t=(x_{k1}^t,\ldots,x_{kD}^t)$ and the velocity vector $V_k^t=(v_{k1}^t,\ldots,v_{kD}^t)$, where $x_{kd}^t$ represents the location and $v_{kd}^t$ represents the velocity of particle $k$ in the $d$th dimension of the search space at the $t$th iteration, $k=1,\ldots,P$, $d=1,\ldots,D$. Each particle knows its position and the corresponding objective function value. The local best position found so far by each particle is encoded in the variables $p_{kd}^t$, and the global best position among all particles is encoded in the variable $p_{gd}^t$. The standard PSO equations can be described as follows [12]:
$$v_{kd}^{t+1}=w\,v_{kd}^{t}+c_1 r_1\bigl(p_{kd}^{t}-x_{kd}^{t}\bigr)+c_2 r_2\bigl(p_{gd}^{t}-x_{kd}^{t}\bigr),$$
$$x_{kd}^{t+1}=x_{kd}^{t}+v_{kd}^{t+1},$$
where $w$ is the weight that controls the impact of the previous velocities on the current velocity, $c_1$ is the cognition learning factor, $c_2$ is the social learning factor, and $r_1$ and $r_2$ are random numbers uniformly distributed in $[0,1]$.
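For reference, one iteration of this continuous update can be sketched as follows (a minimal illustration only, not the discrete variant used later in this paper; names are mine):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rand=random.random):
    """Apply the standard PSO velocity and position update to one particle."""
    new_v = [w * v[d]
             + c1 * rand() * (pbest[d] - x[d])
             + c2 * rand() * (gbest[d] - x[d])
             for d in range(len(x))]
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```

Passing a deterministic `rand` makes the step reproducible, which is convenient for testing.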
PSO has been successfully applied to a variety of continuous nonlinear optimization problems. In recent years, considerable effort has been expended on solving scheduling problems with PSO algorithms. Articles [13, 14] used PSO algorithms to solve scheduling problems similar to the problem in this paper. Reference [13] provided a PSO algorithm for scheduling identical parallel machines to minimize makespan. Reference [14] presented a PSO algorithm for scheduling nonidentical parallel batch processing machines to minimize makespan. This research uses PSO for the $R_m\,|\,r_j\,|\,C_{\max}$ problem. The following five headings describe the PSO algorithm used in this research: particle representation, initial population generation, particle velocity and sequence metric operators, local search, and stopping criteria and parameter settings.
3.1. Particle Representation
A coding scheme developed in [15] is used to represent a solution to the problem at hand. The coding scheme uses a list of job symbols and partitioning symbols. A sequence of job symbols, denoted by
integers, represents a possible sequence of jobs. The partitioning symbol, an asterisk, designates the partition of jobs to machines. Generally, for an $m$-machine $n$-job problem, a solution contains $m-1$ partitioning symbols and $n$ job symbols, resulting in a total size of $n+m-1$. For example, for a schedule with 7 jobs and 2 machines, the particle can be represented as the list (7, 6, 1, 4, *, 3, 2, 5), as shown in Figure 1.
The completed schedule is thus jobs 7, 6, 1, and 4 on machine 1; jobs 3, 2, and 5 on machine 2. This coding scheme specifies not only which jobs are assigned to which machine but also the order of
the jobs on each machine. These pieces of information are important, since we are scheduling unrelated parallel machines with release dates.
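Decoding such a list into per-machine job sequences is straightforward; a sketch (names are mine):

```python
def decode_particle(symbols):
    """Split a job/asterisk list into per-machine job sequences.

    E.g. [7, 6, 1, 4, '*', 3, 2, 5] -> [[7, 6, 1, 4], [3, 2, 5]].
    """
    machines = [[]]
    for s in symbols:
        if s == '*':
            machines.append([])   # start the next machine's sequence
        else:
            machines[-1].append(s)
    return machines
```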
3.2. Initial Population Generation
In order to give the PSO algorithm good initial solutions and to increase the chances of getting closer to regions that yield good objective functions, we propose a heuristic, named SRD_Reassign. The
proposed heuristic SRD_Reassign is described as follows.
3.2.1. Heuristic SRD_Reassign
Step 1. Let $U$ be the set of unscheduled jobs, and let $T_i$ be the sum of the processing times of the jobs that have already been scheduled on machine $i$, $i=1,\ldots,m$. Initially, set $U=\{1,\ldots,n\}$ and $T_i=0$ for all $i$.

Step 2. Arrange the jobs in order of shortest release date (SRD) first, and assign each job $j$ to the machine $i^*$ with the minimum processing time, that is, $i^*=\arg\min_{i} p_{ij}$. Repeat until all jobs have been scheduled to generate a complete schedule.

Step 3. Let $S_i$ be the set of jobs scheduled on machine $i$, $i=1,\ldots,m$; let $C_{\max}$ denote the maximum completion time, and denote the set of candidate (job, machine) pairs for reassignment by $R$. Initially, set $R=\emptyset$.

Step 4. Identify the machine $i'$ whose completion time equals $C_{\max}$. For every job $j\in S_{i'}$, search for a machine $i''\neq i'$ such that, if job $j$ were reassigned to machine $i''$ and the jobs on each machine were sorted in SRD order, the newly calculated makespan $C'_{\max}$ would be smaller than $C_{\max}$. If such a job $j$ and machine $i''$ can be found, update the candidate set by setting $R=R\cup\{(j,i'')\}$. If $R=\emptyset$, go to Step 6.

Step 5. Select from $R$ the pair $(j,i'')$ whose reassignment yields the maximum makespan reduction $C_{\max}-C'_{\max}$. Reassign job $j$ to machine $i''$ by setting $S_{i''}=S_{i''}\cup\{j\}$, sort the jobs on machine $i''$ in SRD order, and update its completion time. Remove job $j$ from machine $i'$ by setting $S_{i'}=S_{i'}\setminus\{j\}$, sort the jobs on machine $i'$ in SRD order, and update its completion time. Update $C_{\max}$, set $R=\emptyset$, and go to Step 4.

Step 6. Terminate the procedure.
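The constructive phase (Steps 1 and 2) can be sketched as follows; the reassignment loop of Steps 3-5 is omitted for brevity, and all names are mine:

```python
def srd_schedule(release, ptimes):
    """Shortest-release-date order; each job goes to the machine on which
    its processing time is minimal. Returns the per-machine job lists and
    the resulting makespan."""
    m = len(ptimes)
    schedule = [[] for _ in range(m)]
    finish = [0.0] * m
    for j in sorted(range(len(release)), key=lambda j: release[j]):
        i = min(range(m), key=lambda i: ptimes[i][j])  # min processing time
        # A job cannot start before its release date.
        finish[i] = max(finish[i], release[j]) + ptimes[i][j]
        schedule[i].append(j)
    return schedule, max(finish)
```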
The first two particles are generated by the first-come, first-served (FCFS) rule and by SRD_Reassign. The remaining particles are generated by applying local search to the solution found by SRD_Reassign. For the FCFS rule, we consider all unscheduled jobs and schedule each one on the first available machine according to FCFS. Local search is done by randomly choosing two jobs $u$ and $v$ from the solution found by SRD_Reassign and then interchanging jobs $u$ and $v$ to generate a new solution. From the initial population pool, we identify the best current solution and update the global best position.
3.3. Particle Velocity and Sequence Metric Operators
Kashan and Karimi [13] worked on the classical PSO equations to provide a discrete PSO algorithm that maintained all major characteristics of the original continuous PSO equations when solving
parallel machine scheduling problems. In this research, we use the two equations proposed in [13] to update the particle velocity and position through sequence metric operators. In these equations, $V_k^t$ and $X_k^t$ represent the velocity and position arrays of particle $k$ at the $t$th iteration, respectively; $P_k^t$ and $P_g^t$ represent the local best position for each particle and the global best position among all particles visited so far. $B_1$ and $B_2$ are 1-by-$(n+m-1)$ arrays in which each digit is 0 or 1. These random arrays are generated from a Bernoulli distribution. $\ominus$, $\otimes$, and $\oplus$ are subtraction, multiplication, and addition operators, respectively. The definitions of the operators are as follows.
The subtraction operator defines the differences between the current position of the kth particle and a desired position (the local or global best). It first finds the elements that do not have the same content in the two arrays and schedules those elements according to their order in the desired position. Next, it finds the elements that have the same content in both arrays and gives those elements zero values. It sorts the zero-valued elements in SRD order and then schedules each one on whichever machine offers the earliest completion time (ECT). Figure 2 demonstrates how the operator works for example 1.
The multiplication operator enhances our PSO algorithm's exploration. It first generates a binary row vector for a solution vector and then performs a multiplication in which the asterisk positions of the solution are kept. These random binary arrays act as subroutines within PSO that use random numbers to strengthen its exploration ability. Figure 3 demonstrates how the operator works for example 1. The nonzero-valued elements are scheduled first; then all zero-valued elements are sorted in SRD order, and each is assigned to the machine that offers the ECT.
The addition operator is a crossover operator of the kind commonly used in genetic algorithms. Here, we used a crossover proposed in [15]. The crossover scheme has three main steps: it obtains the asterisk positions from the first parent; it obtains a randomly selected subschedule from the first parent; and it scans the second parent from left to right, filling the gaps in the child's schedule with jobs taken from the second parent. Figure 4 illustrates this crossover scheme.
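The three-step crossover can be sketched as follows (a position-preserving variant consistent with the description above; the "asterisk positions" are modelled as a random subset of kept positions, which is an assumption):

```python
import random

def position_crossover(p1, p2, seed=0):
    """Keep a random subset of positions from the first parent, then fill the
    remaining slots left-to-right with the missing jobs in the order they
    appear in the second parent."""
    rng = random.Random(seed)
    keep = set(rng.sample(range(len(p1)), len(p1) // 2))   # "asterisk" slots
    child = [p1[i] if i in keep else None for i in range(len(p1))]
    kept_jobs = {child[i] for i in keep}
    fill = iter(j for j in p2 if j not in kept_jobs)       # second-parent order
    return [c if c is not None else next(fill) for c in child]
```

Because every job appears exactly once in each parent, the child is always a valid permutation of the jobs.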
During the execution of the subtraction, multiplication, and addition operators, whenever a complete schedule yields a better solution than the current best, we update the best current solution and the global best location.
3.4. Local Search
It is well known that evolutionary algorithms can be improved by hybridization with local search (the memetic approach). For each particle, we apply the following local search procedure (LSP) to further improve the current solution.
3.4.1. Local Search Procedure (LSP)
Step 1. Initialize the move counter.
Step 2. Identify the machine with the maximum completion time. Randomly choose one job from that machine and one job from another machine, and insert the first job after the second on that other machine. If a better makespan is found, update the schedule and go to Step 2; otherwise, go to Step 3.
Step 3. Randomly choose two jobs from the schedule; they may be on the same machine or on two different machines. Interchange the two jobs. If a better makespan is found, update the schedule and go to Step 2; otherwise, go to Step 4.
Step 4. If the move limit is reached, stop; otherwise, increment the counter and go to Step 2.
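The LSP can be sketched as follows (a simplified load-only model: the random insertion off the bottleneck machine, the random interchange, acceptance of improving moves, and the move budget mirror Steps 1-4, while release dates and job order are abstracted away):

```python
import random

def machine_loads(machines, proc):
    """Completion time of each machine. machines[m] is the list of jobs on
    machine m; proc[j][m] is the processing time of job j on machine m."""
    return [sum(proc[j][m] for j in jobs) for m, jobs in enumerate(machines)]

def lsp(machines, proc, max_moves=50, seed=1):
    """Sketch of Steps 1-4: insertion moves off the bottleneck machine and
    random pairwise interchanges, within a bounded move budget."""
    rng = random.Random(seed)
    best = max(machine_loads(machines, proc))
    for _ in range(max_moves):                        # Step 4: move budget
        loads = machine_loads(machines, proc)
        bn = loads.index(max(loads))                  # bottleneck machine
        # Step 2: move a random job from the bottleneck to another machine
        if machines[bn]:
            j = rng.choice(machines[bn])
            m2 = rng.choice([m for m in range(len(machines)) if m != bn])
            machines[bn].remove(j)
            machines[m2].append(j)
            new = max(machine_loads(machines, proc))
            if new < best:
                best = new
                continue                              # improved: back to Step 2
            machines[m2].remove(j)                    # undo the move
            machines[bn].append(j)
        # Step 3: interchange two random jobs on two different machines
        m1, m2 = rng.sample(range(len(machines)), 2)
        if machines[m1] and machines[m2]:
            a, b = rng.choice(machines[m1]), rng.choice(machines[m2])
            machines[m1].remove(a); machines[m2].remove(b)
            machines[m1].append(b); machines[m2].append(a)
            new = max(machine_loads(machines, proc))
            if new < best:
                best = new
            else:                                     # undo the interchange
                machines[m1].remove(b); machines[m2].remove(a)
                machines[m1].append(a); machines[m2].append(b)
    return best
```

Rejected moves are undone, so the returned value always equals the makespan of the final (improved or unchanged) schedule.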
Again, during the execution of the LSP, whenever a complete schedule yields a better solution than the current best, we update the best current solution and the global best location.
3.5. Stopping Criteria and Parameter Settings
We studied the effects of five important parameters (the two velocity-update coefficients, the number of local search moves, the population size, and the number of iterations) on the performance of our proposed PSO. The model was tested and parameterized through a factorial study, from which the final PSO parameter values were selected. The appendix includes a detailed description of our parameterization study.
4. Computational Results
In this section, we present several computational results for the proposed PSO algorithm. We compare it with a mixed integer programming (MIP) model developed in our previous research [16] on the problem. The MIP model [16] was coded in AMPL and solved with CPLEX 11.2. The proposed heuristic, SRD_Reassign, and the proposed PSO algorithm were implemented in C. The MIP model, heuristic, and PSO algorithm were executed on a computer with a 2.5 GHz CPU and 2 GB of memory. Processing times were generated from a uniform distribution. Release dates were generated in a manner similar to that of Mönch et al. [17], also from a uniform distribution whose width is controlled by a release date range factor: high values of the factor tend to produce widely separated release dates, and its values were set at 0.1, 0.25, and 0.5. We used 4 machines with 18 jobs to represent small problem instances and 10 machines with 100 jobs to represent large problem instances. For each setting, 20 problem instances were randomly generated. The effectiveness of each algorithm was evaluated by its mean performance and its required computation time (labeled as "Avg. time"). For small problem instances, a ratio was calculated by dividing the algorithm's makespan by the optimal MIP makespan; for large problem instances, by dividing the algorithm's makespan by the lower-bound (LB) makespan. The mean performance of an algorithm for each setting was the average ratio over the 20 instances.
4.1. Comparison of Heuristics for the Studied Problem
We compared the proposed heuristic SRD_Reassign with the optimal solutions obtained from the MIP model [16] and with FCFS, a dispatching rule commonly used for practical problems with release dates. Computational results for small and large problem instances are given in Tables 2 and 3, respectively. The results show that SRD_Reassign outperformed FCFS in terms of makespan. For small problem instances, the average SRD_Reassign makespan was 1.08 times the optimum, while the average FCFS makespan was 1.41 times the optimum. Both heuristics outperformed the MIP model in terms of computation time. When the release date range factor was small, both heuristics had larger ratios to the optimal solutions than when it was large, and the MIP also took more computation time to find optimal solutions. This probably indicates that problems with small release date ranges are harder to solve than problems with large release date ranges. For large problem instances, the average SRD_Reassign makespan was 1.22 times the lower bound (LB), while the average FCFS makespan was 1.67 times the LB. Both heuristics ran very quickly (in less than 1 second), even for large problem instances.
4.2. Comparison of Metaheuristics for the Studied Problem
We compared the proposed PSO with an existing metaheuristic, namely, the version of simulated annealing (SA) described by Lee et al. [18]. This SA variant was originally designed for a related scheduling problem; since SA is a general-purpose metaheuristic that requires no problem-dependent knowledge, it can also be applied to the problem studied here. To provide a fair comparison, we used the same initial solution (SRD_Reassign) for both PSO and SA, and we adjusted the SA parameters so that both methods ran for similar computation times. The termination criterion for SA was set to 12 seconds for small problem instances and 83 seconds for large problem instances; if SA found a solution equal to the LB, the program terminated earlier. Computational results for small and large problem instances are given in Tables 4 and 5, respectively.
The computational results show that the proposed PSO outperformed the SA in terms of makespan. For small problem instances, the PSO found optimal solutions at all three release date settings, while the average SA makespan was 1.05 times the optimum. Both metaheuristics outperformed the MIP model in terms of computation time. The last column in Table 4 reports how many times each algorithm produced a better makespan than the other: a value of a/b in column PSO/SA means that, out of 20 problems, there were a problems for which PSO yielded a better solution than SA, b problems for which SA performed better, and 20 − a − b problems for which the two yielded the same makespan.
For large problem instances, the average PSO makespan was 1.08 times the LB, while the average SA makespan was 1.15 times the LB. The last column in Table 5 shows how many times out of 20 each of the two proposed lower bounds gave the better LB. When the release date range was small, one of the bounds was tighter; when the range was large, the other was tighter. This suggests that each bound performs better in a different regime: one for problems with narrow release date ranges and the other for problems with wide ranges.
4.3. The Effects of the Proposed PSO
Since the proposed PSO incorporates a number of ideas (heuristic initial solutions, SRD, ECT, and LSP), we next examine which parts are essential to its performance. We do so by disabling a single component at a time, running the proposed PSO without that component, and observing the resulting performance. We chose to study the large problem instances. These experiments are described as follows.
PSO-Initial Heuristics: instead of generating an initial population by heuristics, we randomly generated an initial set of solutions to make up the initial population.
PSO-SRD: instead of sorting elements by SRD and then scheduling them on whichever machine offered the ECT within the PSO operators, we randomly chose unscheduled jobs and then scheduled them on whichever machine offered the ECT.
PSO-ECT: instead of sorting elements by SRD and then scheduling them on whichever machine offered the ECT within the PSO operators, we sorted elements by SRD and then scheduled them on the first available machine.
PSO-LSP: the local search procedure was disabled.
PSO-LSP+LS [13]: the local search procedure was disabled and the local search algorithm used in [13] was applied instead. Since the formulation in Step 4 of [13] was not suitable for the unrelated parallel machine environment, we modified it to find two jobs whose exchange improved the current best makespan.
Table 6 lists the average makespan ratio of each PSO variant to the standard PSO (which includes the initial heuristics, SRD, ECT, and LSP by default). Table 6 shows that the PSO performed poorly when the initial heuristics were not applied and the initial population was randomly generated, and also when the ECT strategy was not applied within the PSO operators. The average ratio of PSO without initial heuristics to standard PSO was 1.019, that of PSO without SRD was 1.006, that of PSO without ECT was 1.020, and that of PSO without LSP was 1.007. In all, every version of the proposed PSO lacking one of these components (heuristic initial solutions, SRD, ECT, or LSP) performed worse than the PSO with all of them.
Moreover, we compared our proposed PSO with another existing PSO. The closest we were able to find was the hybridized discrete PSO (HDPSO) proposed in [13], which was designed to minimize makespan for identical parallel machines. The proposed PSO and the HDPSO are both designed to minimize makespan for parallel machines, and both use formulas (4)-(5) to update each particle's velocity and position. However, the HDPSO is still quite different from our PSO. The HDPSO considers problems without release dates; hence, its coding scheme does not consider the order of jobs on the same machine. Also, because HDPSO considers an identical parallel machine environment, it uses the LPT rule to assign jobs with zero-valued elements within the operators of formulas (4)-(5). It is well known that the LPT rule does not perform well in unrelated parallel machine environments, so if HDPSO were used to solve the problem without modifications, it would not perform very well. Table 7 shows a comparison between HDPSO and our PSO. Since the operators within formulas (4)-(5) differ substantially between the two PSOs, there is no point in comparing them directly; we focus our comparison on initial heuristics and local searches. The first column in Table 6 indicates that our PSO with an initial heuristic performs better than a version without one, and HDPSO might exhibit similar performance differences. The last column in Table 6 indicates that our proposed LSP performs better than the local search algorithm used in [13]: the standard PSO (with LSP embedded) versus PSO-LSP+LS [13] is 1.000 versus 1.009. Moreover, the proposed LSP outperforms the local search algorithm used in [13] in terms of average computation time. Therefore, we conclude that our proposed PSO provides better and more efficient strategies for parallel machine makespan minimization than HDPSO, and it is more likely to provide promising results.
5. Conclusions and Future Work
We studied the problem of scheduling jobs on unrelated parallel machines with release dates to minimize makespan. In this research, we proposed two lower bounds for the studied problem, together with a heuristic, SRD_Reassign, and a metaheuristic, PSO. Computational results showed that SRD_Reassign outperformed the commonly used FCFS heuristic in terms of makespan, and that the proposed PSO outperformed a comparable SA variant. Future work can extend our approach to other performance criteria or even to multiobjective parallel machine scheduling problems.
We studied the effects of five important parameters (the two velocity-update coefficients, the number of local search moves, the population size, and the number of iterations) on the performance of our proposed PSO. To test the significance of each parameter, a representative problem instance was chosen; the objective was to minimize makespan in an unrelated parallel machine environment with release dates. Because problems with small ranges of release dates are harder to solve than problems with large ranges, the release date range factor was set to 0.1. To obtain information about the importance of each factor, we conducted an initial screening experiment in which each parameter was set at a high or a low level, as shown in Table 8. The results of this screening experiment are shown in Table 9, where each value is the average of 20 problem instances. The half-normal plot provided by Design Expert indicates that the number of local search moves, the population size, and the number of iterations were significant to the response. The fitted model in terms of coded factors shows that increasing these three values decreases the makespan.
Next, we used the method of steepest descent (Myers et al. [19]) to explore the region of improved response. The path of steepest descent is shown in Table 10. The results show that no further reduction in makespan was obtained after Run 2. Although Run 2 improved the makespan by 2.4% compared with Run 0, its computation time was very long; Run 1 improved the makespan by 2.1% relative to Run 0 and used less computation time. We therefore chose the settings of Run 1 as our final values for the number of local search moves, the population size, and the number of iterations. Since the two velocity-update coefficients were not significant, they were kept at low levels to save computation time.
August 17th 2012, 09:38 AM #1
Show that any 2 groups with order 2 are isomorphic
So I guess we can start off with
Suppose $G=(\{e,a\}, \cdot)$ and $H=(\{i,b\}, *)$ are two groups with identity elements $e$ and $i$, respectively.
And, like, I am getting stuck here; not sure how I should proceed.
Re: Isomorphic
well, any isomorphism must be a bijection of sets. and isomorphisms must "preserve the operation" that is:
h(x.y) = h(x)*h(y).
note that this means:
h(x) = h(e.x) = h(e)*h(x) for all x in G, so h(e) must be the identity i of H.
for h to be bijective, h(a) must then be b.
but now we need to check that for all x,y in G, h(x.y) = h(x)*h(y). fortunately, G is small, so there are only 4 products to check: e.e, e.a, a.e, and a.a.
note that by the definition of an identity of a group e.a = a.e = a, and e.e = e. what is a.a? there's only two possibilities: a.a = a, or a.a = e.
i claim a.a cannot possibly be a. for a^-1 is one of {e,a}, so if a.a = a then:
a^-1.(a.a) = a^-1.a
(a^-1.a).a = e (associativity on the left, definition of inverse on the right)
e.a = e (definition of inverse on the left)
a = e (definition of identity on the left).
but this is a contradiction, a is different than e. thus a.a = e (which shows that a is its own inverse).
similar reasoning shows i*i = i, i*b = b*i = b, b*b = i. now we can prove h, defined by:
h(e) = i
h(a) = b is an isomorphism.
h(e.e) = h(e) = i = i*i = h(e)*h(e)
h(a.e) = h(a) = b = b*i = h(a)*h(e)
h(e.a) = h(a) = b = i*b = h(e)*h(a)
h(a.a) = h(e) = i = b*b = h(a)*h(a)
in other words, G and H "act the same" we just "re-named" a to b, e to i, and . to *.
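The final verification can also be checked mechanically, by brute force over all pairs (a hypothetical encoding of the two Cayley tables worked out above):

```python
from itertools import product

# Cayley tables of the two order-2 groups: G = ({e, a}, .) and H = ({i, b}, *)
G = {('e', 'e'): 'e', ('e', 'a'): 'a', ('a', 'e'): 'a', ('a', 'a'): 'e'}
H = {('i', 'i'): 'i', ('i', 'b'): 'b', ('b', 'i'): 'b', ('b', 'b'): 'i'}
h = {'e': 'i', 'a': 'b'}   # the map forced by h(identity) = identity

def is_isomorphism(h, G, H):
    """Check h(x.y) = h(x)*h(y) for all x, y (h is a bijection by construction)."""
    return all(h[G[x, y]] == H[h[x], h[y]]
               for x, y in product(h, repeat=2))
```

Running `is_isomorphism(h, G, H)` confirms the four equalities listed above, while the "wrong" map sending e to b fails immediately.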
August 17th 2012, 10:12 AM #2 (MHF Contributor, joined Mar 2011)
Missouri's Frameworks for Curriculum Development
Table of Contents
• Rationale for the Study of Mathematics
• Purpose of this Framework
• Major Organizing Strands
• Process Strands
• Content Strand
• Organizational "Roadmap"
VI. Geometric and Spatial Sense (MA 2)
VII. Data Analysis, Probability and Statistics (MA 3)
VIII. Patterns and Relationships (MA 4)
IX. Mathematical Systems and Number Theory (MA 5)
X. Discrete Mathematics (MA 6)
• Basic Knowledge of the Concept of Mathematics
• Use of Technology
• Connections and Active Listening
• Problem Solving
• Varied Instructional Methods
• Communication and Reasoning
• Classroom Climate
• Assessment
APPENDIX A
• Examples of Quality Student Work
How do you write Roman numeral 19?
Roman numerals, the numeric system used in ancient Rome, employ combinations of letters from the Latin alphabet to signify values. The numbers 1 to 10 are expressed in Roman numerals as I, II, III, IV, V, VI, VII, VIII, IX, and X, so 19 is written XIX (X for 10 followed by IX for 9).
The Roman numeral system is a cousin of Etruscan numerals. Use of Roman numerals continued after the decline of the Roman Empire. From the 14th century on, Roman numerals began to be replaced in most
contexts by more convenient Hindu-Arabic numerals; however this process was gradual, and the use of Roman numerals in some minor applications continues to this day.
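A short table-driven converter makes the construction explicit, with subtractive pairs such as IX = 9 handled directly in the value table:

```python
def to_roman(n):
    """Convert a positive integer (1-3999) to Roman numerals."""
    table = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
             (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
             (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]
    out = []
    for value, symbol in table:
        while n >= value:       # greedily take the largest remaining value
            out.append(symbol)
            n -= value
    return ''.join(out)
```

For the question above, `to_roman(19)` returns `'XIX'`.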
Numerical Simulation of Mixed Convection in a Rotating Cylindrical Cavity: Influence of Prandtl Number
Advances in Mechanical Engineering
Volume 2013 (2013), Article ID 950765, 8 pages
Research Article
Universidad Autónoma del Estado de Morelos, Centro de Investigación en Ingeniería y Ciencias Aplicadas, Avenida Universidad 1001, Colonia Chamilpa, 62209 Cuernavaca, MOR, Mexico
Received 25 January 2013; Accepted 23 March 2013
Academic Editor: Bo Yu
Copyright © 2013 Gustavo Urquiza et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
A numerical study of the flow and heat transfer in a rotating cylindrical cavity, solving the mass, momentum, and energy equations, is presented in this work. The study describes the influence of the Prandtl number on flow in the critical state in a cavity containing a cooling fluid. The cases studied cover several Prandtl numbers, aspect ratios, and Reynolds numbers. The differential equations were discretised using the finite difference method. The results show the trend followed by the heat transfer as the Reynolds number increases from 300 to 600; in addition, an examination of the critical Rayleigh numbers for small Prandtl numbers shows that thermal instability in mixed convection depends on the Prandtl number.
1. Introduction
Mixed convection flows frequently occur in engineering systems and natural phenomena. An important issue is the understanding of this phenomenon in cavities, which have applications in crystal growth, glass production, flows in nuclear reactors, and atmospheric prediction, among others. The development of new technologies for gas turbines, which operate at high temperatures, requires studies of the cooling systems, generators, and rotors; optimal design of these cooling systems can improve heat transfer in the rotor region. Due to the centrifugal acceleration and the temperature difference at the cavity walls, a flow induced by the action of the centrifugal and body forces occurs, which becomes unstable and oscillating in the critical regime.
The study of Laje et al. [1] shows the effect of the Prandtl and Rayleigh numbers on Bénard convection and concludes that the critical Rayleigh number Ra[c] increases substantially as the Prandtl number decreases to small values. Verhoeven [2] and Soberman [3] investigated the values of Ra[c] for low Prandtl number fluids (liquid metals such as mercury and sodium), while Bertin and Hiroyuki [4] studied the influence of the Prandtl number, in the range 0.001–1000, on the critical Rayleigh number. Chao et al. [5] studied the influence of the Prandtl number on Bénard convection, reaching the same conclusion as Laje et al. [1].
A numerical study by Gelfgat et al. [6] shows the transition from steady to oscillatory convective flow for small Prandtl numbers in laterally heated rectangular cavities. They showed that the oscillatory instabilities result from a supercritical bifurcation triggered by infinitesimal disturbances, which means that heat transfer has a strong effect on the instability at small Prandtl numbers. They also showed that both the critical oscillation frequencies and the critical Grashof number change with the aspect ratio.
In other studies, Zhang and Nguyen and Durand et al. [7, 8] investigated forced convection heat transfer between two short concentric cylinders, where the inner rotating cylinder drives the forced flow while the end plates and the outer cylinder are stationary. The annulus is filled with a heat-generating fluid in which the internal heat source may be due to viscous dissipation or to exothermic reactions. The normal modes (with an inlet flow along the end plates) were studied in terms of the Reynolds and Prandtl numbers for fixed radius and aspect ratios. It was found that the flow develops through 2-cell, 4-cell, and 2-cell regimes as Re increases from zero. These results are consistent with the general description given in the papers of Benjamin [9, 10] on bifurcation phenomena in viscous flows between short cylinders.
In spite of the works developed previously, very few have focused on the behavior of the flow in rotating cylindrical cavities. To obtain a better understanding of the flow and heat transfer, this work therefore studies the behavior of the flow within rotating cylindrical cavities, with emphasis on the effect of the Prandtl number on the instability of the flow in mixed convection.
2. Description of the Problem
The physical problem studied is shown schematically in Figure 1. The system consists of two vertical concentric cylinders, of inner and outer radii and finite length, closed by two end plates. The outer cylinder and the end plates are fixed, while the inner cylinder rotates with a constant angular velocity.
The annulus is filled with an incompressible Newtonian fluid of constant density, viscosity, thermal diffusivity, and thermal expansion coefficient. All thermophysical properties of the fluid may be considered constant, except the variation of density in the buoyancy force, according to the Boussinesq approximation. The two cylinders (rotating walls) are maintained at the same uniform temperature, and the end plates (static walls) are perfectly insulated.
For axisymmetric flows, the system may be described by a stream function, the azimuthal vorticity, the swirl velocity, and a temperature field.
Using characteristic scales for time, length, velocity, and temperature, the nondimensional governing equations may be written in terms of the Prandtl and Grashof numbers, Pr and Gr. The boundary conditions for the problem under consideration involve the Reynolds number Re.
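For orientation, the usual axisymmetric stream-function/vorticity formulation of such a problem under the Boussinesq approximation can be sketched as follows (a reconstruction of the standard form; the paper's exact nondimensionalisation, signs, and buoyancy direction may differ):

```latex
% u, w: radial and axial velocities; v: swirl velocity; \zeta: azimuthal
% vorticity; \psi: stream function; T: temperature (standard form, as a sketch)
\begin{align}
\frac{\partial \zeta}{\partial t} + u\frac{\partial \zeta}{\partial r}
  + w\frac{\partial \zeta}{\partial z} - \frac{u\zeta}{r}
  &= \Big(\nabla^2 - \frac{1}{r^2}\Big)\zeta
     + \frac{1}{r}\frac{\partial (v^2)}{\partial z}
     + \mathrm{Gr}\,\frac{\partial T}{\partial r}, \\
\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial r}
  + w\frac{\partial v}{\partial z} + \frac{uv}{r}
  &= \Big(\nabla^2 - \frac{1}{r^2}\Big)v, \\
\frac{\partial T}{\partial t} + u\frac{\partial T}{\partial r}
  + w\frac{\partial T}{\partial z}
  &= \frac{1}{\mathrm{Pr}}\,\nabla^2 T, \\
\frac{\partial^2 \psi}{\partial r^2} - \frac{1}{r}\frac{\partial \psi}{\partial r}
  + \frac{\partial^2 \psi}{\partial z^2} &= r\,\zeta,
\qquad u = \frac{1}{r}\frac{\partial \psi}{\partial z},
\qquad w = -\frac{1}{r}\frac{\partial \psi}{\partial r}.
\end{align}
```

The stream-function definition enforces mass conservation identically, which is why only the vorticity, swirl, and energy equations are advanced in time.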
The previous system of equations was solved by a finite difference method based on a control-volume formulation [7, 8]. The discretised equations for the flow field were derived using central differences for the spatial derivatives and a forward difference for the time derivative. The control-volume formulation, in conjunction with the power-law interpolation scheme, was used for high Prandtl, Reynolds, and/or Rayleigh numbers.
The numerical approach used in this study has been validated by comparison with the results obtained by Ball and Farouk [11] and Fasel and Booz [12]. The agreement was satisfactory within the
graphical precision.
A mesh was selected for which the maximum stream function value of the 2-cell mode changed by only 0.2% upon further refinement [8].
A relative-change condition between successive time steps was applied to all quantities at all mesh points as the convergence criterion for a steady solution.
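Such a steady-state test can be sketched as a relative-change criterion over all fields and mesh points (the tolerance value is an assumption):

```python
import numpy as np

def converged(old_fields, new_fields, tol=1e-6):
    """True when the largest relative change over all quantities and all
    mesh points between successive time steps is below tol."""
    for old, new in zip(old_fields, new_fields):
        scale = max(np.max(np.abs(new)), 1e-30)   # avoid division by zero
        if np.max(np.abs(new - old)) / scale > tol:
            return False
    return True
```

In a time-marching loop, `converged([psi_old, zeta_old, T_old], [psi, zeta, T])` would stop the iteration once every field has settled.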
3. Results and Discussion
The effect of the Prandtl number on flow instability attracted great scientific interest after experimental investigations showed that the oscillations caused by thermal-dynamic instability produce a change in the flow structure in numerous liquid-phase crystal growth processes (Hurle et al., 1974 [13]). Nevertheless, for very small Prandtl numbers, the dependence of the transition from the steady to the oscillatory state on the aspect ratio was not widely studied until the work of Gelfgat et al. [6] in 1997.
3.1. Effects of the Prandtl Number on the Heat Transfer
This problem was studied for a range of Prandtl numbers and aspect ratios, finding the critical Rayleigh numbers over the interval of Reynolds numbers from 300 to 600, with emphasis on the critical values for small Prandtl numbers, demonstrating that thermal instability in mixed convection depends on the Prandtl number. If no heating existed in the system, the flow would become unstable only through dynamic effects as the Reynolds number increases. The dynamic instability is strongly stabilized by an increase in the temperature difference, and the flow destabilizes again when the body forces are increased to their critical value, revealing two types of instability of thermal origin.
First, as shown in Figures 2(a), 2(b), 3(a), and 3(b), for small values of the Prandtl number there is a weak influence of the body forces relative to the centrifugal forces, the streamlines are predominantly counterclockwise, and the heat transfer has a conductive nature. Second, as Figures 2(c), 2(d), 3(c), and 3(d) show for high Prandtl numbers, the flow becomes unstable through the increase of the body-force effects that distort the velocity profiles; in this case the fluid is not a good heat conductor, the temperature disturbance is localized in one region, and the strong action of the body forces initiates the instability.
For small values of the Prandtl number, thermal effects have little influence on the flow instability; the temperature difference is small, so the centrifugal forces become the main influence on the flow. As shown in Figure 4, the critical Rayleigh number is seen to increase strongly, consistent with the viscous behavior of the fluid.
3.2. Effects of the Prandtl Number on the Reynolds Number
The smaller the Reynolds number, the more strongly the body forces affect the centrifugal forces, starting from small Prandtl numbers. This is shown by comparing Figure 6(a) with Figure 7(a) and the streamlines of Figure 2(a) with those of Figure 3(a). In Figure 2(a) it can be observed that the cell formed at the hot wall of the inner cylinder grows as the Prandtl number grows; this cell determines the magnitude of the body forces and rotates clockwise.
The previous examples explain why the critical Rayleigh number decreases from fluids with high Prandtl numbers to fluids with low Prandtl numbers at moderate Reynolds numbers; Figure 5 clearly shows this tendency. For an increase in the Reynolds number, the critical Rayleigh and Nusselt numbers increase and the body forces are damped by the centrifugal forces. On the other hand, an increase in the Prandtl number causes a strong increase in the heat transfer at the hot walls, the isotherms become highly distorted, and the convective terms become predominant. Figures 6, 7, 8, and 9 visualize these behaviors.
For a more detailed analysis, we studied the effect of the Prandtl number on the Nusselt number at the cavity walls for Reynolds numbers of 0–600 and three aspect ratios (including 0.5 and 0.25), with the Grashof number taken as 1000. This analysis did not reach the critical flow regime, showing that the heat transfer trend for flows below the critical Grashof number, at the different aspect ratios, is similar to that produced at and above the critical Grashof number.
Figures 10, 11, and 12 show the effect of the Prandtl and Reynolds numbers on the Nusselt number at the lower (heated) surface of the cavity and at the surface of the inner cylinder. The continuous lines represent natural convection within the cavity and show heat transfer increasing with the Prandtl number; this occurs because only the body forces act on the flow, producing a cell with clockwise rotation. As the Reynolds number increases, the strong action of the centrifugal forces on the body forces increases the heat transfer rate at the walls.
This effect begins in an intermediate Prandtl number interval, where the flow undergoes drastic changes in its dynamic patterns: for Prandtl numbers below 0.1 the flow is dominated by the body forces, and for values above 0.5 by the centrifugal forces. Comparing Figures 10, 11, and 12 shows that the change in flow pattern occurring between these Prandtl numbers has a smaller effect for smaller aspect ratios. On the other hand, the heat transfer at the lower wall increases as the aspect ratio decreases.
From the results of Figures 10, 11, and 12, some correlations for the Nusselt number at the lower wall can be deduced. For all the cases the Grashof number is 1000, so the flow is in the laminar regime.
The correlation for natural convection is expressed in (4); all the values for its variables are given in Tables 1, 2, and 3.
The correlation for mixed convection is expressed in (5); all the values for its variables are given in Tables 4, 5, and 6.
The values of the constants used in the correlation for mixed convection (5) are shown in Table 2.
4. Conclusions
The present work demonstrates that for values of , the convective terms of heat transfer are negligible. This means that body forces are constant and only hydrodynamic effects cause instability. Despite the weak influence of body forces on the flow for small Prandtl numbers, the study demonstrated that the thermal instability in mixed convection depends strongly on the Prandtl number.
The dynamic instability is strongly stabilized by an increase in the temperature potential, and the flow destabilizes again when the body forces are increased to their critical values.
Two instabilities of thermal origin occur: the first for very small values of the Prandtl number, where the heat transfer has a conductive character; the second for high Prandtl numbers, where the body effects that distort the velocity profiles give the heat transfer a convective character.
For small Reynolds numbers, the body forces dominate the centrifugal forces even at very small Prandtl numbers. The critical Rayleigh number, for moderate Reynolds numbers , then diminishes from convective fluids with high Prandtl numbers to conductive fluids with very low Prandtl numbers.
The heat-transfer trend for a flow below the critical Grashof number is, for different aspect ratios, very similar to that produced at and above the critical Grashof number. On the other hand, the heat transfer at the lower wall increases as the Prandtl number increases and the aspect ratio decreases.
Lorton, VA Statistics Tutor
Find a Lorton, VA Statistics Tutor
...Math can be a challenging subject, but there is a true sense of satisfaction when you see the student actually understand the material and feel a confidence in working the problems. I believe
Math is best learned by doing and working problems. My goal is to reduce frustration and create confidence.
11 Subjects: including statistics, calculus, physics, geometry
I have a master's in economics and a strong math background. I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and algebra problems. I enjoy teaching and working through problems with students since that is the best way ...
14 Subjects: including statistics, calculus, geometry, algebra 1
I have 11 years' experience teaching and mentoring college undergraduate and graduate students on quantitative research projects, including teaching applied statistics and research methods, and
using SPSS, STATA, and Excel. I also have over 20 years of research experience in the social sciences, mo...
6 Subjects: including statistics, SPSS, Microsoft Excel, Microsoft Word
...I use PowerPoint in business and university settings. I am very familiar with what makes a good presentation since I have given several hundred of them. I am very familiar with all of the
physical sciences.
10 Subjects: including statistics, algebra 1, algebra 2, Microsoft Excel
...I see this as a high calling. And I make a promise: 100 percent satisfaction or your money back! I have an educational background in economics, agricultural economics, management, public health, finance, and statistics, to name just a few.
64 Subjects: including statistics, reading, English, writing
Transitive closure relation.
March 6th 2009, 04:22 PM #1
Senior Member
Jul 2006
Transitive closure relation.
The question is as follows:
The transitive closure of a relation R on S x S is another relation on S x S called Tr(R) such that $(s,t) \in Tr(R)$ iff there exists a sequence $s=s_1, s_2, s_3, \ldots, s_n = t$ such that $(s_i, s_{i+1}) \in R$ for each $i$.
a) What is the transitive closure of the successor relation? (Defined previously on N x N as: $\lbrace (m, n) | m = n+1\rbrace$).
b) What is the transitive closure of the > relation?
The book introduced this concept right in this exercise question without showing any examples. I'm a bit stuck as to how I might interpret this somewhat complex definition [of a transitive
closure] and derive the solution.
Relations on NxN
Hello scorpion
The question is as follows:
The transitive closure of a relation R on S x S is another relation on S x S called Tr(R) such that $(s,t) \in Tr(R)$ iff there exists a sequence $s=s_1, s_2, s_3, \ldots, s_n = t$ such that $(s_i, s_{i+1}) \in R$ for each $i$.
a) What is the transitive closure of the successor relation? (Defined previously on N x N as: $\lbrace (m, n) | m = n+1\rbrace$).
b) What is the transitive closure of the > relation?
The book introduced this concept right in this exercise question without showing any examples. I'm a bit stuck as to how I might interpret this somewhat complex definition [of a transitive
closure] and derive the solution.
The successor relation on $\mathbb{N} \times\mathbb{N}$ contains all ordered pairs of the form $(n+1, n), n \in \mathbb{N}$. And we can form a sequence of such ordered pairs, starting at any $n$,
and finishing at any $m<n$. For example, starting at $7$ and finishing at $3$:
$(7,6), (6,5), (5,4),(4,3)$
So the sequence $7, 6, 5, 4, 3$ is an example of a sequence $s_1, s_2, s_3, \dots, s_n$ where $(s_i, s_{i+1}) \in R$, the successor relation. So $(7,3) \in Tr(R)$.
Do you want to see if you can take it from here?
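A quick way to check chains like this is to compute the closure of a finite restriction of the relation by brute force. This sketch is my own addition (not from the thread); it chains compatible pairs until a fixed point is reached:

```python
def transitive_closure(pairs):
    """Chain compatible pairs until no new ones appear (a fixed-point
    version of Warshall's algorithm on an explicit set of pairs)."""
    closure = set(pairs)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

# Successor relation {(n+1, n)} restricted to 0..7:
R = {(n + 1, n) for n in range(7)}
tr = transitive_closure(R)
print((7, 3) in tr)                 # True, via the chain 7, 6, 5, 4, 3
print(all(s > t for (s, t) in tr))  # True: here Tr(R) is {(s, t) : s > t}
```

With the pairs written the other way round, $(n, n+1)$, the same code yields $\{(s,t) : s < t\}$ instead.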
Relation terminology
Hello scorpion007
Just as a PS to my previous post, to use the correct terminology, the successor relation is defined on $\mathbb{N}$, of course, not $\mathbb{N} \times \mathbb{N}$. The relation is then a subset
of $\mathbb{N} \times \mathbb{N}$.
While I understand that a relation is a subset of the Cartesian product of two sets ( $R \subset S \times T$), my book states, for example, this exact quote, in the fifth question: "5. We define
the successor relation on $N \times N$ to be the set ...<the definition follows>".
Is it incorrect? (The book is Discrete Algorithmic Mathematics, 2nd Edition.)
Hello scorpion
The successor relation on $\mathbb{N} \times\mathbb{N}$ contains all ordered pairs of the form $(n+1, n), n \in \mathbb{N}$. And we can form a sequence of such ordered pairs, starting at any $n$,
and finishing at any $m<n$. For example, starting at $7$ and finishing at $3$:
$(7,6), (6,5), (5,4),(4,3)$
Ok, but if you started at n = 7, then wouldn't the first pair be (8, 7), since (7 + 1, 7)?
Ah, I think I see. When they say $s=s_1, s_2, s_3, \ldots, s_n = t$, they mean $s_n = t$, not the sequence $(s_1, s_2, \dots, s_n) = t$, correct?
Ok, so from what I understand, Tr(R) contains ordered pairs such as (7, 3), (7, 4), (100, 5). Generally, any ordered pair (s, t) such that s > t. Formally, $\lbrace (s, t) | s > t \rbrace$.
Looking at the answer in the book, they have, $\lbrace (s, t) | s < t \rbrace$. Why is that?
Relation on N
Hello scorpion007
Ok, so from what I understand, Tr(R) contains ordered pairs such as (7, 3), (7, 4), (100, 5). Generally, any ordered pair (s, t) such that s > t. Formally, $\lbrace (s, t) | s > t \rbrace$.
Looking at the answer in the book, they have, $\lbrace (s, t) | s < t \rbrace$. Why is that?
If you define the successor relation in the way that you have then the ordered pairs are $(n+1, n)$ and hence (I think!) the answer should be $s > t$. But if the successor relation is the other
way round, so that $n$ is related to its successor, $n+1$, the ordered pairs then being $(n, n+1)$, then the answer will have $s < t$.
Ah, thank you. Perhaps it is a mistake in the book, then. They probably meant to define the successor relation as all points (n, m) instead of (m, n).
So for question b), The answer would be "Itself", which is also what the book states. Correct?
Hello scorpion007
May I just give my thoughts and hopefully tie up one or two loose ends?
While I understand that a relation is a subset of the Cartesian product of two sets ( $R \subset S \times T$), my book states, for example, this exact quote, in the fifth question: "5. We define
the successor relation on $N \times N$ to be the set ...<the definition follows>".
Is it incorrect? (The book is Discrete Algorithmic Mathematics, 2nd Edition.)
I have always defined a binary relation (i.e. one in which two elements are involved) as follows: A binary relation from a set $X$ to a set $Y$ is a set of ordered pairs $(x, y)$ such that $x \in
X$ and $y \in Y$. It follows therefore that a relation defines a subset of $X \times Y$.
Now it may be that $X$ and $Y$ are one and the same set - the set $\mathbb{N}$, for instance. So in referring to relations like the ones in this question (> and the 'successor relation'), I would
talk about a relation 'from $\mathbb{N}$ to itself', or simply a relation 'on $\mathbb{N}$'.
I would not argue with anyone who wanted to call this a relation 'on $\mathbb{N} \times \mathbb{N}$', but there is a slight danger here that this might be interpreted as a relation that mapped an
ordered pair e.g. $(1,2)$ onto another order pair e.g. $(5, 3)$ (for instance, with the relation 'is in the same quadrant as'), rather than mapping a natural number onto another natural number.
Strictly speaking, yes, if your first ordered pair is $(n+1, n)$ and $n=7$. But you'll notice I didn't say $n=7$, you did. I simply said 'starting at 7'.
I think this is much more likely. After all, if the relation is a representation of a mapping of $n$ onto its successor $n+1$ then the natural order for the ordered pair is $(n, n+1)$, and an ' $
{s_i}$' sequence then goes in ascending order: $3, 4, 5, 6, 7$.
A confusion will arise if the successor relation is representation of the words '... is the successor of ...', in which case $n+1$ 'is a successor of' $n$, and the ordered pair then gets written
as $(n+1, n)$. It's easy to get it the wrong way round, isn't it!
Yes. This is much less ambiguous!
I hope we've dealt with this question now! And you might like to look at this: http://www.mathhelpforum.com/math-he...l-posters.html.
Not to nitpick, but, re:
But you'll notice I didn't say n=7, you did. I simply said 'starting at 7'.
You said, originally,
And we can form a sequence of such ordered pairs, starting at any n, and finishing at any m<n. For example, starting at 7 and finishing at 3:
How should I have interpreted this, if not n=7, m=3?
Thanks for the explanations, by the way. Very useful.
Copyright © University of Cambridge. All rights reserved.
'Always Perfect' printed from http://nrich.maths.org/
Have you tried some simple cases to see if there is a pattern?
If you have a number "$x$" the next number is "$x +1$". How could you write four consecutive numbers?
If you can express $x^4 + \ldots + 16$ as a perfect square, the two factors will be of the form $(x^2 +\ldots + 4)$.
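The full problem statement isn't reproduced above, but the hints are consistent with showing that the product of four numbers spaced two apart, plus 16, is always a perfect square. A numerical check of that guess (the interpretation is mine, not NRICH's):

```python
from math import isqrt

def is_perfect_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

# Guess suggested by the hint "(x^2 + ... + 4)": pair outer and inner factors,
# set u = x^2 + 6x, so x(x+2)(x+4)(x+6) + 16 = u(u+8) + 16 = (u+4)^2.
for x in range(-50, 51):
    value = x * (x + 2) * (x + 4) * (x + 6) + 16
    assert value == (x * x + 6 * x + 4) ** 2
    assert is_perfect_square(value)
print("holds for x in [-50, 50]")
```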
Physics Forums - View Single Post - Introduction To Loop Quantum Gravity
Could you please try to give an example of a calculation? I would like to get some feeling of how to handle all up to the mapping from a Lie algebra element to a Lie group element via parallel
transport along a loop. Additionally, I'd like to see what a projection from the manifold into the tangent space looks like in practice.
Just to make things more clear: what exactly is the nature of this manifold you are talking of? Is there any physical interpretation?
hello Cinquero, have you by any chance looked at the beginning treatment of LQG in Rovelli and Upadhya's paper? This was my introduction to the subject back in 2003. Several of us at Physics Forum
were reading that paper back then.
It is short (on the order of 10 pages) and shows how a number of things are calculated. If you are interested in learning LQG, then I could review the paper myself, and read some of it with you.
If you do not already have Rovelli/Upadhya and would like the link, please let me know. The date at arXiv is about 1998.
Math Trick -- Impress Your Friends
Think of a number. Any number at all... between 10 and one billion.
Now, since that number has at least two digits, add up all of the digits, and subtract that sum from your original number.
Next, add up all of the digits of the number you just got after that subtraction, to get another new number. And then do it again with THAT number (if you have only a one-digit number, you would just
get that same number again).
Finally, subtract 1 from that last number, and find the letter of the alphabet that corresponds to the result (1=A, 2=B, 3=C, etc.). Now think of one of the 50 United States that starts with that letter.
Aloha! You thought of Hawaii, didn't you?
How does this work? The main "trick" to this is knowing that when a number is divisible by 9, the sum of its digits is also divisible by 9. Therefore, once you have a number that is divisible by 9, you can keep summing up the digits, and eventually end up with 9 as the final result. For a number less than one billion, the sum of the digits can never exceed 81 (for 999,999,999)... therefore you'll only have to do the digit-summing twice (at most) before ending up with 9 as the result. Once you end up with 9, subtract 1 to get 8; this leads you to the letter H, and Hawaii is the only choice.
So, the only remaining question is how to force the person to a number that is divisible by 9 in the first place. That's where the first part comes in: subtracting the sum of the digits from the
original number. This works because our numbering system is based on multiples of ten, so subtracting one of each digit leaves you with multiples of nine! Here's an example:
The number 3198 is the same as the total value of 3*1000 + 1*100 + 9*10 + 8. Subtracting the sum of the digits (3 + 1 + 9 + 8) means that we can remove one 3, one 1, one 9, and one 8 from that total.
That leaves us with 3*999 + 1*99 + 9*9. Notice how each of these terms is now divisible by 9; therefore, the total will be as well!
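The whole procedure is short enough to verify directly. A sketch (the function names are mine):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def trick(n):
    """Run the steps above for any n between 10 and one billion."""
    n -= digit_sum(n)   # now guaranteed divisible by 9
    n = digit_sum(n)    # at most 81 for n < one billion
    n = digit_sum(n)    # a second pass always reaches 9
    n -= 1              # the final "subtract 1" step gives 8
    return chr(ord('A') + n - 1)   # 8 -> 'H'

print(trick(3198))                                            # H
print(all(trick(n) == 'H' for n in (10, 3198, 999_999_999)))  # True
```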
That's the whole trick. Force the person's randomly-chosen number to something that's divisible by 9, then use the digit-summing to reduce it down to the number 9. After that, the rest of the trick
is just for flair. Instead of subtracting one and asking for a state, you can subtract 4 and ask for a zoo animal (Elephant, though some may think of emu)... or subtract 5 and think of a country
(Denmark, though some may think of Dominican Republic). There are all sorts of variations on this trick, so make it your own and impress your friends!
PDL::Fit::Levmar PDL Levenberg-Marquardt fitting module
Debian packages for PDL modules
Some Debian packages of PDL modules:
These distributions are also available on CPAN. The packages are only built for one architecture. Some of the distributions already had a ./debian directory. Otherwise, I added the ./debian directory.
PDL::Fit::Levmar is available at CPAN.
I frequently need to do non-linear fitting of data, and have never been satisfied with the tools I found. I want to work in a high-level, general numerical analysis framework. All I ask for is a
powerful, convenient, robust, flexible, free, etc. implementation. The levmar library (which uses the lapack and blas libraries) is the only C library that I found that appears to be featureful (e.g., box and linear constraints), efficient (uses lapack and blas and has conditionals in the code to choose different methods depending on the size of matrices, etc.), well organized and easy to use,
and free. I wrote this PDL module to try to fill the remaining requirements.
The fit function can be supplied as a perl function, or as a string containing a C function which is transparently compiled and linked, in which case the entire fit procedure is done in compiled
C. (In some textbook optimization problems that have a small amount of data, but require a very large number of iterations to converge, this provides more than an order of magnitude increase in
speed.) In addition, the perl module includes a very simple pre-processor language that is as fast as C, but much more concise. Here is an example that fits data arrays ($x,$t) to a gaussian
    $result = levmar($params, $x, $t, FUNC =>
        ' function
          x = p0 * exp(-t*t * p1);
        ');
    print $result->{P};
The hash $result contains the optimized parameters (P) and a lot of other information, such as the covariance matrix, and the values of all the quantities relevant to the stopping criteria. If levmar is given additional arguments specifying linear or box constraints or an analytic jacobian, or single or double precision arguments, the appropriate C algorithms are chosen transparently. The implementation is designed to provide the simplest black box possible as well as maximum control if desired.
pylevmar python binding to levmar. It seems that this library is becoming popular ... | {"url":"http://www.johnlapeyre.com/pdl/index.html","timestamp":"2014-04-20T03:47:37Z","content_type":null,"content_length":"3791","record_id":"<urn:uuid:3b22096c-61a2-49b5-9701-baf398c0179b>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00547-ip-10-147-4-33.ec2.internal.warc.gz"} |
Use synthetic division to find P(–3) for P(x) = x^4 – 2x^3 – 4x + 4.
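By the Remainder Theorem, the remainder after dividing P(x) by (x + 3) equals P(–3), which is exactly what synthetic division computes. A small sketch of the bookkeeping (my illustration, not from the original page):

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial (coefficients listed from highest degree down)
    by (x - r).  Returns (quotient coefficients, remainder)."""
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(c + row[-1] * r)   # multiply by r, add the next coefficient
    return row[:-1], row[-1]

# P(x) = x^4 - 2x^3 + 0x^2 - 4x + 4, evaluated at x = -3:
quotient, remainder = synthetic_division([1, -2, 0, -4, 4], -3)
print(quotient)   # [1, -5, 15, -49]
print(remainder)  # 151, so P(-3) = 151
```

Note the zero placeholder for the missing x^2 term in the coefficient list.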
A basic question about the use of a metric tensor in general relativity
I have very little knowledge in general relativity, though I do have a decent understanding of
the theory of special relativity.
In special relativity, points in space-time can be represented in Minkowski space (or a hyperbolic space) so that the metric tensor (that is derived in order to preserve the invariant interval) has a
unique form corresponding to that space.
From what I understand (and I say this with caution as I could easily be mistaken), since in the theory of general relativity light bends under the influence of gravity (does it?), this requires a
more complicated set of coordinates hence the use of more complicated metric tensors to preserve the invariant interval in such curvilinear space.
Is that correct? How would such a metric tensor look?
In order to begin to understand GR, you need to first get an understanding of the concept of curved spacetime. Most people are able to do this by extrapolating from lower dimensional examples.
Consider the 2 dimensional surface of a sphere, and imagine that a race of beings is living within this 2D surface. They are unaware that there is a 3rd radial dimension present. They lay out
perpendicular lines of constant longitude and constant latitude on the sphere surface, and, at least locally, think that they are using a flat rectangular Cartesian coordinate system. However, when
they actually measure distances in their 2D world, and try to apply a Euclidean metric to their system, they run into trouble. This is because the sphere is curved, and not flat. The Euclidean metric
only works to first order. To describe the metrical characteristics correctly on the surface of a sphere, they need to use curvilinear coordinates, such as Spherical.
Analogous to this is 4D Lorentzian spacetime which, for the most part is "flat", and amenable to the Minkowski metric (with constant metrical coefficients). However, in the vicinity of massive
objects, 4D spacetime is curved, and cannot be described using a rectilinear orthogonal (Lorentzian) set of coordinates. It is as if the 4D spacetime had been deformed "out of plane" into the 5th
All accelerated frames of reference must be regarded as non-flat. Thus, the hyperbolic space you described above is curved "out of the plane" of Lorentzian space, and cannot be described using the
Minkowski metric. Similarly, the Schwarzschild metric applies to an accelerated frame of reference.
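To make the closing remark concrete (this block is my illustration, not part of the original post), compare the flat Minkowski line element with the Schwarzschild solution outside a spherically symmetric mass $M$:

```latex
% Flat spacetime (special relativity), Cartesian coordinates:
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2

% Curved spacetime outside a spherical mass M (Schwarzschild):
ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right)c^2\,dt^2
       + \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2
       + r^2\left(d\theta^2 + \sin^2\theta\,d\varphi^2\right)
```

As $r \to \infty$ (or $M \to 0$) the metric coefficients reduce to the constant Minkowski values, which is the sense in which the curvature near a mass is a local deformation of flat spacetime.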
Real Analysis: A First Course
“In a nutshell, this book presents the topics of a first-year calculus course, with all of the proofs and without the applications.” This is the one-sentence summary given by the author on p. viii,
and it sounds like Heaven — as it sounded like, and was, Heaven several decades ago when I took what was then called “Advanced Calculus” back in undergraduate school. Reading this book brought me
back to Heaven.
To me it has the perfect attitude. It’s written by somebody who obviously appreciates rigorous math, to the extent of sharing with his readers and students just what he appreciates. From the page 52
definition and explanation of what it means for a sequence to converge to a number — “The crucial point is that epsilon can be any positive number” and “It is worth pointing out that epsilon greater
than zero comes first, then an appropriate N is sought” — to later more advanced concepts, such as the passage on page 245 which motivates uniform convergence: “It appears that pointwise convergence
does not preserve any useful properties of functions. Since it is often necessary that the limit function inherit some properties, we will introduce a type of convergence that is stronger than
pointwise convergence in the next section”. And in between, on page 212, after introducing geometric and telescoping series: “Since the convergence of an infinite series depends on the convergence of
its sequence of partial sums and since sequences have already been studied in detail… it might seem as though there is little left to do. However, the two examples that have been given thus far have
been misleading. For both examples, it was possible to find an expression for s[n] that did not involve a sum, then use this expression to find the limit of the sequence. In most cases involving
infinite series, it is not possible to express s[n] in a form which makes the limit easy to find.” These are the kinds of things that students who are not yet very familiar with “pure math” need to
read and hear, and often don’t.
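For readers who want the definition being praised here written out in symbols (my addition, not a quotation from the book), the standard epsilon-N formulation is:

```latex
\lim_{n\to\infty} a_n = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\;
\forall n \ge N : \; |a_n - L| < \varepsilon
```

As Gordon stresses, the order of the quantifiers is the whole point: epsilon comes first, and N may depend on it.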
Also, I’m partial to authors and teachers who write or say, as Gordon does on page 257, “…a power series is an infinite degree polynomial” — to me, that’s the beauty of power series. I’m even more
partial to authors who continue, “such series possess some nice properties that the typical series of functions does not possess.” This helps students to understand some of the subtleties — e.g.,
that indeed power series are series of functions, not “merely” numbers.
My very favorites are the “Further Topics” and “Miscellaneous Results” sections, three of them appearing at the end of the chapters on “Differentiation”, “Sequences and Series of Functions”, and “Point-Set Topology”. E.g., it’s always exciting to find out, once again, that an everywhere continuous function does not need to have a derivative, anywhere.
Since the book is for math majors, it probably doesn’t vitally need to be writen all that clearly. But it is. The motivations, explanations, definitions, and proofs are all extremely beautifully
written. While being friendly and kind, he is definitely writing to math majors, and he gives them due respect. For example, he shares with his readers his perceptions concerning some of the proofs.
Page 279: “This proof of the Weierstrass Approximation Theorem does not involve any deep ideas, but it is not all that enlightening. The origin of the polynomials is not clear and the convergence of
the polynomials to the function is difficult to visualize…. Nevertheless, a sequence of polynomials that converges uniformly to a continuous function f on an interval [a, b] does exist.” (Actually, I
disagree with the first statement. I did my Master’s and part of my doctoral dissertations on Schwartz distribution theory, and if I remember correctly, parts of the proof which he refers to seem
very familiar; I think they relate to something called “delta-convergent sequences” and involve the notion of convolution products and how convergence sometimes preserves them. At any rate, armed
with this background, this proof does seem enlightening, and it is indeed possible to, in some sense, “visualize” the sequence of polynomials and why they do what we want them to do. — What does
“visualize” mean, anyway, in the context of math?)
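The everywhere-continuous, nowhere-differentiable function alluded to a few paragraphs back is presumably of the Weierstrass type (my gloss; the review does not name it). Weierstrass's classical example is

```latex
W(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x),
\qquad 0 < a < 1,\; b \text{ an odd integer},\; ab > 1 + \tfrac{3\pi}{2}
```

(Hardy later weakened the sufficient condition to $ab \ge 1$.)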
Another good thing: He continually relates this “advanced calculus” to “non-advanced calculus”. E.g., page 142: “… the next theorem illustrates one such application. It is known as a monotonicity
theorem since it gives conditions that guarantee a function is monotone. Its conclusion is probably familiar to you.” In fact, this book makes me wonder why “regular” calculus books are so often not
clearly written; I firmly believe that math can be clearly written up, even without giving rigorous concepts and proofs, and even for students who are not “math people”. When I teach calculus, I make
mention of clarifying ideas, and I give sketches of proofs, or sometimes simple statements that make things believable to “laypersons”. Probably other teachers do that also, and from my interaction
with them, I’d venture a guess that they’d feel that their jobs would be easier, on both them and the students, if calculus books were better written.
Also, the author keeps abreast of the polarity between pure and applied, and he does it in just the same way that I would! E.g., page 130: “Although the derivative concept has a number of practical applications (some of which should be familiar to the reader), the focus in this book will be on the mathematical aspects of the derivative. However, a few simple applications of the derivative will appear in the exercises.” (My own personal feeling is that, if teaching from this book, I would not specifically assign those exercises; I would just encourage the students to look at them — or
perhaps during class, I might spend two or three minutes pointing them out.)
He also keeps abreast of the polarity between intuition and rigor. E.g., page 135: concerning the fact that the inverse of a continuous function is continuous: “…this result is clear from a
geometrical perspective since the graph of the inverse function … is just the reflection of the graph of the function… through the line y = x. … as usual, this geometric reasoning does not constitute
a proof that inverse functions are differentiable; the proof must use the definition of the derivative.”
Finally, he communicates perspective. Page 144: “Although l’Hôpital’s Rule is sometimes useful, it is not very important as a theoretical tool in real analysis. For this reason, it will be covered
lightly here…”
He also communicates subtlety. In particular, his epsilon-delta stuff is a charm! Page 112: “…it makes no real difference in this case if the inequalities are strict or not.”
I would now like to share a pet peeve. As a writer of poetry and creative non-fiction, I probably underline too much! But if I were a writer of math books, I would probably underline even more too
much (as proven by the italicizing in this sentence…) In writing out this review, in fact, I was very very often tempted to underline certain words. It truly does seem to increase the clarity. My pet
peeve is that editors and authors shy away from underlining, as though it were some sort of sin, or perhaps “cheap” in some way. What, I ask, is wrong with underlining? Especially in a math book. It
would, again, make things clearer, and also serve to further emphasize the beauty of it all.
The author explains things in very much the same way that I would, and it was a challenge to try to think of ways in which I would change the book if it were mine. (I always say that when I review a
book which I love…) Here are two attempts:
Page 114: “Let f be a continuous function defined on an open interval (a, b). Then f is uniformly continuous on (a, b) if and only if f has one-sided limits at a and b.” I’d state that differently.
In accordance with the preceding motivation, I’d say “f has one-sided limits at a and b (and therefore can be extended to a continuous function on all of [a, b]) if and only if f is uniformly
continuous on (a, b).” That is, I’d make three changes: I’d change the order of the clauses, I’d throw in the parentheses, and I’d underline “uniformly ”. This, to me, would better convey the state
of affairs.
Page 296: “There is no simple characterization of a closed set in terms of closed intervals”. This made me pause briefly, and might make some students pause less briefly. One could, of course, take
the “dual” of the preceding theorem — “Every nonempty open set of real numbers can be expressed as a countable union of disjoint open intervals.” — and that would express any given closed set as a
countable intersection of closed intervals. I think that what he means to say is: “There is no simple characterization of a closed set as any kind of disjoint union of closed intervals.”
I also have my own pet ways of teaching the material on pages 143-4. I call what the Second Derivative Test accomplishes “figuring out which and whether”. (whether a critical point is actually an
extreme point and if so, which type it is, max or min). And I introduce L’Hôpital’s Rule by saying, “We’ve already seen derivatives via limits; now we’re going to see limits via derivatives.” And I
call that rule “diff-ing across the board”. Since it’s unlikely that I’ll ever write a calc text, anybody is welcome to use those little gimmicks…
These are pretty nit-picky, I admit. And my two bigger complaints are also meant to be minor, in the light of so much major. First, I think that Chapter 5 on “Integration”, in particular Sections
5.1 and 5.2, could be more clearly written. That’s the stuff that tends to trip students up, or at least make them feel saturated. In fact, the author admits that it’s hard. So I think that this is
the place for the author to show off, even more, his skills at motivating and making things visual. For example, page 166: The Norm of a partition is the length of the largest subinterval; he
should say that. Also, the norm of a tagged partition does not make use of the “tags”. Finally, on that same page, when he gives the example of a partition “tagged” with the midpoints of each
subinterval, he might mention that this corresponds to the familiar Midpoint Rule. However, I commend his statement on the next page (167), where Riemann integrability is defined: “Once these
subintervals have been chosen, the tag from each subinterval may also be chosen at random. As a result of all this variability, it is tedious and/or difficult to prove that a function is Riemann
integrable on an interval using the definition unless the function has a very simple form”. But I also think that some of the more complicated proofs might be better motivated and illustrated (even
as I agree that, at some point, math students should and will be able to grasp ideas with less motivation and illustration).
The only other “bigger” complaint — and this might be more of an opinion than a complaint — concerns Chapter 8 on “Point-Set Topology”. To me, topology isn’t topology without the core idea of open and
closed sets (or some other equivalent core idea) — that any set, along with a collection of subsets which are closed with respect to finite intersection and arbitrary union, is a topological space. I
read every single word in this chapter (and in this book) and I didn’t see anything to that effect. (Perhaps I missed it. He does, sometimes, refer to the “advantage” of proofs which use only the
notions of open and closed sets rather than the properties of real numbers, but does not, in my perception, quite make it clear why this is so.)
I agree that the emphasis in this book is on real numbers, that the topology chapter should be treated accordingly, and that very little attention needs to be paid at this point to the abstract
notions in topology. However, I believe that the author missed an opportunity. On page 294, he gives us “Theorem 8.4:
a. The union of any collection of open sets is open and the intersection of any finite collection of open sets is open.
b. The intersection of any collection of closed sets is closed and the union of any finite collection of closed sets is closed.”
This theorem refers to open and closed sets of reals, but the author might, after proving this theorem, briefly say something to the effect that (“for the record”, as he often aptly says) this is the
essence of rigorous point-set topology — that whenever a collection of subsets has property (a), they can be called “open” and the “universal” set a “topological space”, whether it consists of real
numbers or not. After all, students reading this book either will be taking or already have taken Abstract Algebra, so they’re not totally unfamiliar with abstraction. Again, this might be more of a
personal preference than anything even remotely serious.
All that said, this book most definitely gives us all an extremely illuminating picture of the spirit of real analysis, and of math. Students will work hard in the course and will be in Heaven.
Besides being a mathematician, Marion Cohen is a poet and writer, author of several books, the latest of which is Dirty Details: The Days and Nights of a Well Spouse (Temple University Press, PA).
Her forthcoming book, Crossing the Equal Sign (Plain View Press, TX), consists of poetry about the experience of mathematics. You might have seen some of these poems in The American Mathematical
Monthly. Marion can be emailed at mathwoman199436@aol.com and more of her writings can be seen on her website at http://www.marioncohen.com.
January 3rd, 2010, 01:42 PM
Need some general and specific advice.
So a few weeks ago I found out about the Stanford CS106a classes that you can find on YouTube, and since then I have been occasionally working on Karel programs when I get the chance. For the past few days, though, I've been working on programming actual Java. I've done a few things so far, mainly just math-related code to get me used to writing console programs. For the past two days, when I've gotten the chance, I've been working on simply writing the quadratic formula, which took roughly 3-4 hours to write, and another hour and a half to debug, mainly one bug that caused the math to not function properly.
So last night I got everything fixed, and this morning I thought, before I celebrate, I had better make it so that the program repeats, so I don't have to close it and run it again every time I run a problem. So, like I did in Karel programs, I used the standard "private void repeat() {" (without quotation marks), which ended up producing some errors and causing some others. So after about an hour and a half now messing with the code, googling, and trying to figure out what the error messages mean, I haven't gotten too far. At the moment I have "private @ repeat() {" (once again without quotes) and the only error message given is "Syntax error, insert 'enum Identifier' to complete enumHeader". All I'm aiming to do is create a method to call if the user presses "y" and wants to run the program again. Here's the code:
Code :
/* This program should allow the user to input variables, and solve the quadratic formula.
* Code that causes program to repeat may not be debugged fully.
* Currently working on code to make the program repeat itself.
* Could look neater in the future.
*/
import acm.program.ConsoleProgram;
public class quadraticFormula extends ConsoleProgram {
public void run() {
private @ repeat() {
println("Enter values a, b, and c to compute Quadratic Formula");
int a = readInt("Enter a: "); // Getting some input for the formula //
int b = readInt("Enter b: ");
int c = readInt("Enter c: ");
int discriminant = ((b*b) - (4*a*c)); { // Solves the inside of the radical without finding the square root
if (discriminant < 0) { // This checks for an imaginary number //
println("This is an imaginary number.");
else { // This executes the rest of the equation if it is not imaginary //
println("The discriminant (result inside radical) is " + (Math.sqrt(discriminant))); // Prints & squares the discriminate //
int dividend = 2*a; // Simply multiplies the 2 on the bottom of the equation times a //
double posativeEquation = ((-b) + Math.sqrt(discriminant))/dividend; // These do the basic math to finish up the problem
double negativeEquation = ((-b) - Math.sqrt(discriminant))/dividend;
println("Result of addittion equation is " + posativeEquation + ".");
println("Result of subtraction equation is " + negativeEquation + ".");
println("----Would you like to compute another formula?");
int r = readInt("Enter y or n"); // Everything below this will cause the program to run again if y is read
int y = repeat;
int n = ABORT;
if (r = y); {
if (r = n); {
I kinda realize this may be kind of painful to look at, which relates to another question I had. My Karel programs looked extremely neat, but these end up looking pretty rough. Could I get some advice about how to make this program look a little neater?
Like I was saying before, I'm really looking for something that I could just call on "y" that would cause the program to restart.
I know this may be pretty bad, but for my first real program I'm pretty proud of it! Thx in advance for any advice you can give that I could use to improve!
January 3rd, 2010, 07:33 PM
Re: Need some general and specific advice.
I've never seen a method inside a method used in Java...
To repeat until a condition isn't satisfied, use a while loop or a do-while loop. See Java while loops for more information on using while loops in Java (if you know C or C++, they operate in pretty much the same way and have the same syntax structure).
Code wise, it could use some formatting. If you have the Eclipse IDE, it has an auto-format option that can format your code following parameters you define, or you can use the default ones. It
can correctly tab statements/brackets and such, put spaces in the correct places, add missing import statements, and make other changes format-wise to your program. There is an option that will
allow Eclipse to arrange the order of code sections inside your program, but I would recommend that you keep it off because it can have unwanted consequences for program flow and do this kind of
formatting by hand.
January 3rd, 2010, 10:31 PM
Re: Need some general and specific advice.
A "while (true) {" worked perfectly! And I'll look into the formatting when I can get a chance.
The Good, the Bad and the Context-Adjusted
Charlie Saeger demonstrates the inner workings of Context-Adjusted Defense v3.0. Zero Clint Eastwood references.
Recently, I received an e-mail from someone wanting an explanation of how Context-Adjusted Defense works for the math-impaired. I figured it would make a good article, so I’m writing it as one.
The principle behind CAD is that a fielder’s contribution to his team’s fielding will, in some way, show up on the scoreboard. However, his context (mostly pitching staff, but also ballpark) will
alter just how we see his contribution in the final stat line. Therefore, we must use known principles of fielding as a machete to slash through the weeds that surround traditional defensive statistics.
Something important—I believe that, at their core, traditional fielding stats are mostly valid. All other things being equal, a shortstop who recorded 527 assists is a better shortstop than one who only recorded 436 assists. Some statistics are not valid, and we should know which ones they are to keep them from fooling us.
Above all, the first clause of the second sentence of the above paragraph—“all other things being equal”—is important in evaluating defensive statistics. It’s valid in evaluating other baseball
statistics, but it is paramount when evaluating fielders. The shortstop who recorded but 436 assists could well be a better shortstop than the one who recorded 527 assists. There are many important
cues for which one must watch so one can know when the numbers are lying.
First, some ground rules:
* I prefer innings estimates to a Claim Points system for individual fielders. Ultimately, any method of determining defensive innings will show itself in a Claim Points system, since the fellows who
recorded more outs played more innings. Estimating innings based on total chances works because Range Factor does not vary much at a position (remembering a fielding corollary of Voros’s Law: anyone
can do anything in 100 innings afield), and because those outs recorded themselves determine innings. An inning is three outs, after all.
So, when determining an individual player’s opportunities, prorate the adjusted team opportunities to the player’s innings. If Derek Jeter played 972 of the Yankees’ 1458 innings at shortstop, and
you determine the Yankees shortstops are responsible for 600 adjusted hits allowed, Jeter is responsible for two-thirds of those, or 400 adjusted hits allowed.
* I am comparing each rate to the league average, noting the outs/errors/whatever better or worse than the league rate, and setting it aside. Before deriving a value in runs, you’ll have anywhere
from three to five numbers, either positive or negative.
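To make the proration step concrete, here is a minimal Python sketch (the Jeter figures are the hypothetical ones above; the function name is my own):

```python
def prorate(player_innings, team_innings, team_opportunities):
    """Prorate team-level adjusted opportunities to one fielder by innings played."""
    return team_opportunities * player_innings / team_innings

# 972 of 1458 innings is two-thirds, so 600 adjusted hits prorate to 400.
jeter_hits = prorate(972, 1458, 600)
```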
The core calculation for CAD is a measure of range. I place the fielder’s outs in the context of his team’s hits allowed total. The reason for this is because defensive stats do a pretty good job at
telling us how often a fielder succeeded, but they are almost completely silent when we ask them how often a fielder failed. We do know how often the entire team failed, however, and that’s the
team’s hits allowed total.
Thus, the initial calculation is:
Player outs / (Player outs + team hits allowed)
“Team hits allowed” means different things for infielders and outfielders. Look at the groundball/flyball adjustment for more detailed info.
Since I have updated the system and haven’t given out the details to anyone but co-author Mike Emeigh, I’d like to redefine “Player Outs” by position:
│ Position │ Player Outs │
│ if │ A │
│ c │ PO - SO │
│ 1b │ PO - (A.2b + A.3b + A.ss) + A │
│ of │ PO │
I removed putouts from infielders for many reasons. For pitchers, they reflect two things, both of which are unrelated to skill: covering first base on 3-1 groundouts, and pop flies. For third
basemen, many putouts are related to skill, but again there are pop flies, and a team’s foul territory heavily influences the number of foul flies he catches. (First basemen are also affected by these phenomena, but we keep their putouts, as they primarily reflect unassisted groundouts, which is the most important measure of their fielding.) I handle middle infielders’ putouts separately later on, so as to weed out the forceouts (which are about 30-40% of a middle infielder’s putouts).
For catchers and first basemen, scale down these net putouts as a percentage of putouts, giving the same percentage to each fielder. The good fielders will make up for this when you scale down the
hits, for which you use defensive innings. If you do not have a defensive innings total, you’ll need to estimate it. I was using a weighted formula of team games, but now I stole Bill James’s since
it works much better. If you’re dealing with an outfielder who played in multiple fields, scale his stats by the percentage of games in each field, and give a 25% boost to putouts in center field.
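A minimal sketch of the out definitions and the core range rate (the keyword names are my own shorthand; `assisted_po` stands in for A.2b + A.3b + A.ss):

```python
def player_outs(pos, po=0, a=0, so=0, assisted_po=0):
    """Outs credited to a fielder, per the table above."""
    if pos == "c":
        return po - so                 # strip strikeout putouts
    if pos == "1b":
        return po - assisted_po + a    # unassisted putouts plus own assists
    if pos == "of":
        return po
    return a                           # infielders: assists only

def range_rate(outs, team_hits):
    """Core CAD measure: a fielder's outs in the context of team hits allowed."""
    return outs / (outs + team_hits)
```

For example, an infielder with 450 adjusted assists against 1400 relevant hits rates 450 / 1850, or roughly .243.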
The first context in which we place a fielder’s outs is his team’s hits allowed. I mentioned this above, but it bears repeating. Indeed, this is the most important adjustment of all, since a good
defensive team became such through its good individual fielders. Thus a fielder on a team like the 2001 Seattle Mariners, which was an outstanding defensive team, will be considered a good fielder
unless his traditional defensive statistics (Range Factor, Fielding Percentage, Double Play Rate) were very poor. A fielder on the 2001 Cleveland Indians, a poor defensive team, would be considered a bad fielder unless his traditional defensive statistics were very good. Those people who decided Roberto Alomar was a better defensive player than Bret Boone (in 2001) should take note.
The second context is his team’s groundball/flyball rate. Looking at the play-by-play data, we have found the following to be true:
* Team assist rates accurately predict the number of groundouts a team’s pitchers generated.
* Virtually all doubles and triples are the responsibility of the outfielders.
* Most singles are also the responsibility of the outfielders, and fall at a similar distribution to the outs.
I changed the adjustment based on this info. To figure it, first determine a team’s number of groundouts and flyouts:
│ Groundouts │ A - A.c - A.of - DP.1b │
│ Flyouts │ PO - SO - A │
Incidentally, I wrote four years ago that the ratio of groundouts to flyouts was low, but this accurately predicted ranking. I was wrong. This is the proper ratio. We have become accustomed to
looking at data from the Elias Bureau that has a normal ground-air ratio as 1.17. Elias figures this because it counts double plays as two groundouts. If you remove them, you find a team’s ground-air
ratio is closer to 0.80, just including the outs.
Then, subtract doubles and triples allowed, and multiply this number by GO/(GO+FO). This is the number of singles for which the infielders are responsible. The outfielders are responsible for all
remaining hits, which includes the doubles and triples, so remember to add them back into the total.
Thus, one figures infielders’ outs in relation to team groundball hits, and outfielders’ outs in relation to team flyball hits.
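Putting those steps together, a sketch of the team-level split (all figures are illustrative; `hits` here means in-play hits allowed):

```python
def split_hits(team_a, a_c, a_of, dp_1b, team_po, team_so, hits, doubles, triples):
    """Split team hits allowed between infielders and outfielders."""
    groundouts = team_a - a_c - a_of - dp_1b
    flyouts = team_po - team_so - team_a
    singles = hits - doubles - triples
    infield_hits = singles * groundouts / (groundouts + flyouts)
    # Outfielders answer for the remaining singles plus all doubles and triples.
    outfield_hits = (singles - infield_hits) + doubles + triples
    return infield_hits, outfield_hits
```

With a typical ground-air ratio near 0.80, a bit under half the singles land on the infielders, and the outfielders carry the rest plus every double and triple.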
Next, we come to the pitching staff. Really, we come to the batter at the plate. Left-handed batters are more likely to hit the ball to first base, second base or left field, and right-handed batters
are more likely to hit the ball to third base, shortstop or right field. In post-Casey Stengel times, a right-handed pitcher is more likely to face a left-handed batter, and a left-handed pitcher is
more likely to face a right-handed batter.
I was wrong on this adjustment before for four reasons:
* I made the rate too extreme. A team’s LHB/RHB rate is not as lock-in-step with the pitching staff as I assumed. Thus, I moved the exponent for the rate from 0.5 to 0.25.
* Batters do not pull the ball more often with the platoon advantage.
* Batters do not pull the ball to the outfield. In fact, they usually hit the ball to opposite field.
* I’m looking, but I’m not sure where the back boundary for platooning is. It may have occurred in some seasons, and not in others. For example, in 1941, it’s pretty clear that few, if any, managers
were platooning. However, in 1942, both leagues were platooning at a modern rate. It’s possible teams platooned more as the regular ballplayers went to war.
With all this in mind, I present two versions of the adjustment. The first is for seasons in which we do not have LHB/RHB data:
(((BFP.rhp - HR.rhp - HB.rhp - BB.rhp - SO.rhp) / (BFP.tm - HR.tm - HB.tm - BB.tm - SO.tm)) / lgAVG) ^ 0.25
This is for third basemen. Divide by 2 and add 0.50 for shortstops, and divide by 5 and add 0.80 for right fielders. For first basemen, divide the third base rate into one (or, more mathematically,
this is the inverse); for second basemen, divide the shortstop rate into one; and for left fielders, divide the right field rate into one.
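That first version can be sketched as follows (the 0.25 exponent and the per-position divisors are from the text; the argument names are my shorthand):

```python
def handedness_rate_no_splits(bip_rhp, bip_tm, lg_avg, pos):
    """Handedness adjustment without LHB/RHB data.

    bip_rhp is BFP - HR - HB - BB - SO for right-handed pitchers;
    bip_tm is the same quantity for the whole staff."""
    rate_3b = ((bip_rhp / bip_tm) / lg_avg) ** 0.25
    scaled = {
        "3b": rate_3b,
        "ss": rate_3b / 2 + 0.50,
        "rf": rate_3b / 5 + 0.80,
    }
    if pos in scaled:
        return scaled[pos]
    inverse_of = {"1b": "3b", "2b": "ss", "lf": "rf"}  # mirror positions
    return 1 / scaled[inverse_of[pos]]
```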
The second is for seasons for which we do have LHB/RHB data:
(((AB.rhb - HR.rhb - SO.rhb) * posRate) / (AB.tm - HR.tm - SO.tm)) / lgAVG
What the heck is the posRate? It is the “position rate,” a measure of how much more likely a right-handed batter is to hit a ball to that position. Each affected position has a different rate:
│ Position │ posRate │
│ 1b │ 0.25 │
│ 2b │ 0.50 │
│ 3b │ 4.00 │
│ ss │ 2.00 │
│ lf │ 0.70 │
│ rf │ 1.30 │
A right-handed batter is four times as likely to hit a ball to the third baseman than a left-handed batter, and twice as likely to hit a ball to the shortstop than a left-handed batter. This sounds
like a big difference, and it is, but teams don’t vary much on the number of right-handed batters they face.
On the flip side (pun intended), that same batter is 30% more likely to hit the ball to the right fielder. That’s not a huge difference, and I debated on whether to adjust for that. I chose to do so,
since it’s not entirely trivial. Do not make this adjustment when you are making estimates of left/center/right playing time.
Finally, I adjust for ballpark. I haven’t made any changes here, though I do make the assumption that a team’s ground/air rate remains the same from park to park, since I calculate groundball and
flyball singles for both home and road. It may well not, but since we’re primarily talking about historical data, we shall never know whether most teams do this or not. Still, a pitcher could throw
lower in the strike zone in Coors Field than in Dodger Stadium, and it would be worth knowing if this is true.
As some have noted in discussions, I make no adjustment for team strikeouts. This is partially true, actually; I make no overt adjustment for team strikeouts. A team with a high strikeout rate will
have fewer outs available to the fielders, but will also have fewer hits allowed. Strikeouts are Adam Smith’s invisible hand—they correct for themselves without adjustment.
Herein lies the ultimate difference between my system and Clay Davenport’s system. For a team, the two of us will create virtually identical ratings. However, when a team allows 100 fewer hits than
normal and strikes out 100 fewer batters than normal, Davenport’s methods will assume the two events negate each other. Davenport’s system will rate third baseman with 300 assists versus 300 assists
for league average as average in this context. CAD will assume the player is a bit better than average.
Next, I make an assessment of a player’s ability to remove runners who are already on base. The basic calculation for this is:
Runners Removed / Opportunities
Each position, naturally, has a different way of determining runners removed:
│ Position │ Runners Removed │
│ c │ A │
│ 1b │ DP - DP.2b - DP.p / 2 │
│ 1b (arm) │ A - PO.p │
│ if │ DP │
│ of │ A │
Notice this has not changed, aside from adding the extra measure of a first baseman’s arm. See below (“What I learned from Bill James, and what I already knew”) for details.
Similarly, each position has a different way of determining how many runners are available:
│ Position │ Opportunities │
│ c │ 1B + BB + HBP + (0.71 * Err) - DP.1b - A.of │
│ if │ 1B + BB + HBP + (0.71 * Err) - A.c - A.of │
│ of │ H + HR + BB + HBP + (0.71 * Err) - A.c - DP.1b │
Again, notice that nothing has changed. Actually, I’m experimenting with different weights for errors by position, but it isn’t a big deal. Here are the values for that:
│ Position │ Pct. │
│ p │ 50% │
│ c/of │ 25% │
│ 1b │ 80% │
│ if │ 85% │
That’s the percentage of time an error at that position puts a man on first base. I have no idea how well that holds up over time. After working with National Association data, I would assume it does
not hold up well in the 19th Century.
This figure is affected by a team’s pitching staff’s handedness in the same way the range figure is, except I do not make this adjustment for middle infielders. Apply adjustments to opportunities, by
the way, for both range and arm calculations. The number of outs a fielder made or the double plays he turned is certain. His opportunity to make those outs and turn those double plays is not.
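As a sketch, here is the arm calculation for an infielder, with the error weights from the table above (function and argument names are mine):

```python
ERR_ON_FIRST = {"p": 0.50, "c": 0.25, "of": 0.25, "1b": 0.80,
                "2b": 0.85, "3b": 0.85, "ss": 0.85}

def infield_arm_rate(dp, singles, bb, hbp, errors, a_c, a_of, pos="ss"):
    """Double plays per runner available to an infielder."""
    opportunities = (singles + bb + hbp
                     + ERR_ON_FIRST[pos] * errors  # errors that put a man on first
                     - a_c - a_of)
    return dp / opportunities
```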
For groundballs and flyballs, I took a different tack. I took the team’s groundball or flyball total, divided it by BFP, and multiplied the resulting figure by the team’s opportunities. This combines
the groundball/flyball adjustment with the old balls in play adjustment, and produces better results.
It would be interesting to see park data for these figures. Frankly, I hope someone who has more time on his hands than I do will tally up DP and Errors by park, at a minimum, in addition to my
long-standing wish to have H, 2B and 3B for every ballpark ever. It’s tedious but possible. However, with the data we have now, I make no park adjustments.
By the way, I should let you know that accuracy with this is good but not great for infielders and outfielders, and poor for catchers. Stolen bases allowed by each catcher varies more than opponents
caught stealing do, and thus a catcher’s arm rating may be very different than his real throwing capacity. Until opposition stolen bases become available for all teams (which will be a few years),
take single-season catcher arm ratings with a grain of salt. We are working on methods of determining dropped third strike assists from a team’s passed ball/wild pitch rate, but it only helps a
little, and only with the caught stealing side of the equation, which, as I just said, is the lesser side. That being the case, one can and should use stolen bases when they are available, which is
from 1978 to the present.
Now, we move on to error rates. I chose to treat these separately from range, largely because I can then use a different value for errors when it comes time to determining run-values. Determine error
rates as below:
│ Position │ Error Rate │
│ c │ (PO + A - SO) / (PO + A - SO + Err) │
│ 1b │ (PO + A - A.2b - A.3b - A.ss) / (PO + A - A.2b - A.3b - A.ss + Err) │
│ 2b/ss │ (PO + A - DP) / (PO + A - DP + Err) │
│ 3b/p/of │ (PO + A) / (PO + A + Err) │
For the most part, these results vary little from traditional fielding percentage, but it helps to remove the “dead plays” from a fielder’s error rate. It does make a large difference for catchers,
as Bill James has noted.
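Those rates as a sketch (`assisted_po` again stands in for A.2b + A.3b + A.ss; I read the catcher denominator as net plays plus errors, matching the pattern of the other rows):

```python
def error_rate(pos, po, a, err, so=0, dp=0, assisted_po=0):
    """Fielding rate with the dead plays stripped, per the table above."""
    if pos == "c":
        plays = po + a - so
    elif pos == "1b":
        plays = po + a - assisted_po
    elif pos in ("2b", "ss"):
        plays = po + a - dp
    else:                         # 3b, p, of
        plays = po + a
    return plays / (plays + err)
```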
For many positions, I calculate position-specific details:
│ Position │ Detail │
│ c │ PB/WP rate │
│ 1b │ 3b/ss Error rate │
│ 2b/ss │ Combined PO rate │
For catchers, this is easy. Same as before—PB and WP rate per runner on base. A low rate is good. Including Wild Pitches keeps the value from varying too much.
For first basemen, we figure the rate at which a team’s third basemen and shortstops handled their chances without an error. It’s an easy calculation:
(A.3b + A.ss) / (A.3b + A.ss + Err.3b + Err.ss)
Here, a high number is good. The theory is that a good first baseman will prevent throwing errors by his third basemen and shortstops. This is true, to a small extent, so I weight it appropriately.
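That calculation as a one-line sketch (the function name is mine):

```python
def left_side_clean_rate(a_3b, a_ss, err_3b, err_ss):
    """Share of 3b/ss chances handled without an error; a first baseman's
    scooping gets a small part of the credit or blame."""
    return (a_3b + a_ss) / (a_3b + a_ss + err_3b + err_ss)
```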
Finally, I figure net putouts for middle infielders. A middle infielder obtains a large portion of his putouts as cheap plays, plays a high school shortstop could make, namely popups and fielders’
choices. Furthermore, these plays are elective plays, either the shortstop or second baseman could make them. They occur a bit more often on bad teams, since a requirement for a fielder’s choice is a
runner on first, which occurs more often with a bad team. Bad teams also allow a few more balls in play. The calculation is:
(PO.2b + PO.ss - A.c / 2 - DP.1b) / (BFP - SO - 2B - 3B - HR - A.c / 2 - DP.1b)
Since flyball pitchers induce popups and groundball pitchers induce fielders’ choices, the two cancel each other out. I divide the result by two—half for the second basemen, half for the
shortstops—at the runs value stage. Obviously, substitute OCS for A.c / 2 when it is available.
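A sketch of that net-putout rate, using A.c / 2 as the stand-in for opposition caught stealing (all names mine):

```python
def net_po_rate(po_2b, po_ss, a_c, dp_1b, bfp, so, doubles, triples, hr):
    """Middle-infield net putout rate with the cheap plays stripped out."""
    ocs_est = a_c / 2                     # estimated opposition caught stealing
    net_po = po_2b + po_ss - ocs_est - dp_1b
    chances = bfp - so - doubles - triples - hr - ocs_est - dp_1b
    return net_po / chances
```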
A note on this. This particular formula should brand me as a hypocrite, as I use Balls in Play as the primary source of opportunities instead of hits allowed. There are reasons why I did it this way:
* A popup does not compete with a hit allowed. Infielders catch 99% of popups, and those they do not catch are errors. A team that allows many flyballs and strikes out few batters will induce more popups.
* Again, a fielder’s choice requires a runner on first, as well as a groundball, which cancels out the flyball effect, but is still affected by strikeouts.
* Balls in Play does include hits allowed. It is not like I ignored them entirely.
* Most importantly, the standard deviation as a percentage of the league rate is lowest when I compute infield putout rates this way. For a while, I was computing this as a percentage of runners on
base, and the standard deviation was large.
When applying this number to individual fielders, take it as a percentage of team putouts and apply it to the individual fielder’s putouts. It looks a little weird, having to apply 398 PO as a
percentage of 277 PO. As I said before, apply the net putouts for catchers and first basemen, and net assists and double plays for first basemen the same way.
I should note I am comparing totals to league averages to come up with an outs over/under number. Thus, the initial context in which I am working is a Pete Palmer-style context. I can switch to a
Bill James-style context, and I worked my butt off to do so. The nature of fielding statistics makes it easier to come up with a Pete Palmer-type over/under number. I can create a number like
Fielding Runs, like Defensive Winning Percentage, and am even working on Win Shares. With the right plus/minus value and the right base, you can turn a decent fielding metric into anything.
Finally, we come to create a table of values for statistics. I stole some values from XRuns, and some others I used a Tangotiger suggestion and adjusted values to a 27-out context. Basically, turning
a hit into an out not only puts that batter out, it prevents another batter from coming to the plate. I shan't run this table with this article; its values would change annually. Remember, if you come up with your own table, you need to adjust everything down a bit, because we know where the outs went, but not the hits (I used 1/3 the value for infielders and 2/3 the value for outfielders).
Or, to put that in plainer English, were you to evaluate an infield as a unit (add all infield assists together, as well as net putouts by first basemen and catchers, and figure its rating vis-a-vis
hits) versus adding up all the individual totals, the infield would be one third the sum of the individual totals. It is a mathematical phenomenon.
To compute these values:
│ Hits.if │ (0.59 + R / PA) / 3 │
│ Hits.of │ ((0.50 * 1B + 0.22 * 2B + 0.54 * 3B) / (H - HR) + 0.09 - R / PA) * 2 / 3 │
│ Caught Stealing │ 0.50 + R / PA │
│ Assists.c │ 0.32 │
│ Assists.1b │ 0.18 │
│ Double Plays.P │ (0.37 + R / PA) / 2 │
│ Double Plays.1b │ (0.37 + R / PA) / 4 │
│ Assists.of │ 0.50 + R / PA │
│ Passed Balls │ 0.09 │
│ Errors.p │ (0.59 + R / PA) / 2 + 0.09 │
│ Errors.c │ (0.59 + R / PA) / 4 + 0.135 │
│ Errors.1b │ (0.59 + R / PA) * 0.80 + 0.036 │
│ Errors.2b │ (0.59 + R / PA) * 0.85 + 0.027 │
│ Errors.3b/ss │ (0.59 + R / PA) * 0.765 + 0.0243 │
│ Errors.of │ (0.59 + R / PA) / 4 + 0.135 │
│ E throw │ (0.59 + R / PA ) * 0.085 + 0.0027 │
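Every entry in the table is a simple function of the league's runs per plate appearance. An illustrative sketch, evaluated at a hypothetical league R/PA of 0.115 (the function name and the subset chosen are mine):

```python
def table_values(r_per_pa):
    """A subset of the value table above, evaluated for a given league R/PA."""
    base = 0.59 + r_per_pa
    return {
        "hits_if": base / 3,                        # │ Hits.if │
        "caught_stealing": 0.50 + r_per_pa,         # │ Caught Stealing │
        "dp_pitcher": (0.37 + r_per_pa) / 2,        # │ Double Plays.P │
        "errors_c": base / 4 + 0.135,               # │ Errors.c │
    }

values = table_values(0.115)
```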
I derived most of this through the following values:
│ Single │ 0.50 Runs │
│ Double │ 0.72 Runs │
│ Triple │ 1.04 Runs │
│ Out │ -0.09 Runs │
│ GDP │ -0.37 Runs │
│ SB │ 0.18 Runs │
│ CS │ -0.32 Runs │
Some additional explanations are in order. For errors we’re making an assessment of whether or not the error put a man on base. As such, different positions have different error weights. The other
errors at the position are weighted as advancing a man. There are a few muffed foul flies, but not many, and even in those cases, a small penalty is in order. I also reduced the values ever so
slightly for third basemen and shortstops, since we are crediting first basemen for part of their rating (that’s the “E.throw”).
In the case of net assists for first basemen, we’re only interested in the advancement penalty. We already counted the hit prevention/out making ability of those assists in the range calculation.
Also, to avoid additional double counting, we halved the value of first basemen’s double plays, as well as pitchers (though, for pitchers, this is because they often end double plays).
For outfielders, I only calculated the amount an extra base hit has over a single. I know, there are exclamation points in your eyes, hear me out. For the most part, outfielders cannot prevent extra
base hits, but rather keep them from becoming extra base hits. This isn’t a pretty solution, as turning that double into a single lowers the rating, but the values do work.
Read that last sentence in Bill Jamesese—I tried it the other way, and Mike Emeigh and I agreed it didn't work. I tried it about three ways, and the play-by-play data did not support the variance between fielders that each of the other methods showed. I am certainly open to another solution.
For outfield assists, Mike Emeigh ran the correlation, and there is an inverse correlation between outfield assists and advances, but it is not large. Therefore, I chose to make it equal to one
advance plus one runner pegged, as well as the value of not having another man come to bat.
Finally, to come up with a Defensive Winning Percentage, we need to devise a baseline, a level of basic defensive responsibility. We first find the league (R - HR) / (PO - SO). Then, we come up with
an outs value for each position:
│ p │ A │
│ 1b │ PO - (A.2b + A.3b + A.ss) + A │
│ 2b/ss │ PO + A - DP - (PO - DP) / 2 │
│ 3b/of │ PO + A │
│ c │ PO - SO + A │
and multiply the number for the league by (R - XR(pitcher only)) / (PO - SO). For catchers, we need to add one-third the pitcher-only XRuns ... what are the pitcher-only runs?
(HR * 1.44) + (BB - IBB + HBP) * .34 + (IBB * .25) - (SO * X)
where X is:
((1B * 0.5) + (2B * 0.72) + (3B * 1.04) - (AB - H - SO) * 0.09) / (AB - HR - SO) + 0.098
This is a measure of the hit-prevention value of a strikeout, and will be around 0.20 for a league.
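Coded up (variable names are mine, and the league totals below are made up, though roughly era-typical), the strikeout value X and the pitcher-only XRuns look like this:

```python
def strikeout_value(singles, doubles, triples, hr, ab, h, so):
    """The 'X' term: the hit-prevention value of a strikeout.
    ((1B*0.5) + (2B*0.72) + (3B*1.04) - (AB-H-SO)*0.09) / (AB-HR-SO) + 0.098
    """
    hit_runs = 0.50 * singles + 0.72 * doubles + 1.04 * triples
    out_runs = (ab - h - so) * 0.09
    return (hit_runs - out_runs) / (ab - hr - so) + 0.098

def pitcher_only_xruns(hr, bb, ibb, hbp, so, x):
    """(HR * 1.44) + (BB - IBB + HBP) * 0.34 + (IBB * 0.25) - (SO * X)"""
    return hr * 1.44 + (bb - ibb + hbp) * 0.34 + ibb * 0.25 - so * x

# Hypothetical league line (H = 1B + 2B + 3B + HR = 39,900):
x = strikeout_value(28000, 7000, 900, 4000, 154000, 39900, 28000)
```

With these made-up totals, x lands near 0.20, matching the league-level claim in the text.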
Next, we make this a rate:
│ c │ divide by BFP │
│ 1b/3b/p │ divide by GB │
│ 2b │ divide by GB and subtract 0.025 │
│ ss │ divide by GB and add 0.025 │
│ of │ divide by FB │
Multiply this by the team BFP, GB or FB, and apply the left/right adjustment. This is the baseline. From here, you can use the Pythagorean Formula to derive a percentage.
Important note—to deal with individual players, you need to proportion opportunities by the percentage of innings the player played afield. If you do not have defensive innings, you need to estimate
them; I am currently using the innings estimator Bill James used in Win Shares. If you come up with a better innings estimate, by all means post it.
What I learned from Bill James, and what I already knew
I have heard (such as one can hear in cyberspace) a fair number of people remark about the similarities between my defensive system and Bill James's Win Shares fielding. Inevitably, someone posits the idea that James read CAD and stole it. I would be surprised if that were true, and were it true, I would be flattered.
The truth is, most of these concepts are obvious, if you think about them. Team assists represent groundballs—I learned that one from the 1985 Baseball Abstract, when someone recommended to James
putting an outfielder’s putouts in the context of PO-SO-A. Left-handed pitchers may cause a third baseman to record fewer plays—someone wrote into The Sporting News in 1991 to defend Carney
Lansford’s low Range Factor with this fact.
I claim two things as mine:
* The idea that, if you remove the assists by the other infielders, a first baseman’s putout rate is meaningful.
* Hits less home runs is the proper context for fielding stats.
And do you know what? Even after I came up with these, I learned that other people had already hinted at them. In response to Bill James’s article introducing Range Factor to Baseball Digest readers,
someone wrote a letter explaining why James should not have figured these for a first baseman, since his infielders could increase his putouts with their assists. The 1982 Baseball Abstract has Bill
James calling hits (less home runs, “and the doubles and triples off the wall”) defensive failures, and even running a comparison chart of several pitchers, beckoning shades of Voros McCracken. These
examples are only in the context of Bill James, and there doubtless are other examples.
That Bill James and I would come to similar conclusions about fielding data is not the surprise. The real surprise is that James, Clay Davenport and I are the only people who bothered to adjust for
what we already knew.
I first published this system to rec.sport.baseball in 1994. The system resembled James’s Range Index then, as Davenport’s system does now. James published Range Index in the 1984 Abstract, which I
had not read at that time. However, my estimate of first basemen’s double plays is consciously adapted from those early Abstracts.
Bill James’s Win Shares influenced me in three ways for this revision:
* I chose to use his net first baseman assists. While I do not regard it as important as James does, it is one of the few places where we can assign difficulty to plays.
* I chose not to use putouts by third basemen and pitchers.
* I chose to evaluate the putouts of second basemen and shortstops collectively, and in the context of runners on first base. I had been looking for a better way to handle middle infielders’ putouts
for some time, and James’s comments on Nap Lajoie influenced my thinking.
As an aside, James did not place many items in their proper context. In racking my brain to find a place for net first basemen’s assists, I realized every event James described occurred with a runner
on base. I gave them credit only for stopping the advancement, as I already gave credit for making an out. James instead chose to place them in context of the number of balls in play, and as one of
his DWP-style “factors.” As a result, I think he gives these plays too much weight.
Good works feed off each other. Clay Davenport’s work caused me to lower my exponent for left-handed pitching on two occasions. I sure as hell hope someone fixes places where I have flaws, like using
catchers’ assists.
Context-Adjusted Defense 4.0
The above is the 3.0 version (the rec.sport.baseball June 1994 version was 1.0, the Big Bad Baseball Annual 1999 was 2.0). Some things I would love to implement:
* A final-score table. How do I explain this? We know many events are more or less common depending on the relative score of the game. Assists by catchers and outfielders are more common in close
games (especially by a team that is one or two runs behind), just to pick on something. We shall probably never know the score at every point in every game ever, but we do know the final scores. I’m
hoping for a table like this:
│ Event │ +5+ │ +2-4 │ +1 │ +0 │ -1 │ -2-4 │ -5+ │
│ A.of │ -50% │ +50% │ +30% │ +20% │ +10% │ +0% │ -60% │
So, knowing the final score of the game, we can know how much more often an opposing baserunner would take a base and the outfielder would record the assist.
* Catchers’ assists. We know passed ball/wild pitch rates do have some correlation with dropped third strikes. We’d also like to see if there’s a way we can use the assist rate to figure both the
ability to throw out a baserunner and to see how well the catcher fields other plays, like bunts. Forays into the play-by-play data have not been successful for most of this—these elements vary
wildly from team to team.
* Range Bonus Plays. A great Bill James invention, and the one thing that is easier to represent in a Claim Point system. Why? In a Claim Point system, there is no need for penalties for the fielders
who didn’t earn such plays. For seasons for which we lack accurate innings data, we would need to find a way to divvy up the RBPs in reverse for the fielders who lacked a bonus. Also, what is the
weight of these? We know being a better fielder than your teammates is a sign of being a good fielder, but we already have given the player some credit for these plays.
For the record, I would want these for the infielders, and for catcher assists. For outfielders, this just causes problems when you do not have accurate innings data—Win Shares overrates center
fielders because of these.
Unraveling the Matrix
Among the most common tools in electrical engineering and computer science are rectangular grids of numbers known as matrices. The numbers in a matrix can represent data: The rows, for instance,
could represent temperature, air pressure and humidity, and the columns could represent different locations where those three measurements were taken. But matrices can also represent mathematical
equations. If the expressions t + 2p + 3h and 4t + 5p + 6h described two different mathematical operations involving temperature, pressure and humidity measurements, they could be represented as a
matrix with two rows, [1 2 3] and [4 5 6]. Multiplying the two matrices together means performing both mathematical operations on every column of the data matrix and entering the results in a new
matrix. In many time-sensitive engineering applications, multiplying matrices can give quick but good approximations of much more complicated calculations.
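The example in the paragraph is easy to reproduce with NumPy (the measurement values are illustrative only):

```python
import numpy as np

# The expressions t + 2p + 3h and 4t + 5p + 6h as rows of an operation matrix.
ops = np.array([[1, 2, 3],
                [4, 5, 6]])

# Data matrix: rows are temperature, pressure, humidity; columns are locations.
data = np.array([[20.0, 25.0],
                 [1.0, 1.2],
                 [0.4, 0.6]])

# Multiplying applies both expressions to every column of the data.
result = ops @ data
```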
In a paper published in the July 13 issue of Proceedings of the National Academy of Sciences, MIT math professor Gilbert Strang describes a new way to split certain types of matrices into simpler
matrices. The result could have implications for software that processes video or audio data, for compression software that squeezes down digital files so that they take up less space, or even for
systems that control mechanical devices.
Strang’s analysis applies to so-called "banded matrices." Most of the numbers in a banded matrix are zeroes; the only exceptions fall along diagonal bands, at or near the central diagonal of the
matrix. This may sound like an esoteric property, but it often has practical implications. Some applications that process video or audio signals, for instance, use banded matrices in which each band
represents a different time slice of the signal. By analyzing local properties of the signal, the application could, for instance, sharpen frames of video, or look for redundant information that can
be removed to save memory or bandwidth.
Since most of the entries in a banded matrix — maybe 99 percent, Strang says — are zero, multiplying it by another matrix is a very efficient procedure: You can ignore all the zero entries. After a
signal has been processed, however, it has to be converted back into its original form. That requires multiplying it by the “inverse” of the processing matrix: If multiplying matrix A by matrix B
yields matrix C, multiplying C by the inverse of B yields A.
But the fact that a matrix is banded doesn’t mean that its inverse is. In fact, Strang says, the inverse of a banded matrix is almost always “full,” meaning that almost all of its entries are
nonzero. In a signal-processing application, all the speed advantages offered by banded matrices would be lost if restoring the signal required multiplying it by a full matrix. So engineers are
interested in banded matrices with banded inverses, but which matrices those are is by no means obvious.
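A small NumPy experiment makes the point concrete: a tridiagonal (banded) matrix can have a completely full inverse. This is a sketch of the phenomenon, not an example from the paper:

```python
import numpy as np

n = 8
# Tridiagonal matrix: nonzero only on the main diagonal and its neighbors.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

A_inv = np.linalg.inv(A)

banded_nonzeros = np.count_nonzero(A)                        # 3n - 2 = 22 entries
inverse_nonzeros = np.count_nonzero(np.abs(A_inv) > 1e-12)   # all n*n = 64 entries
```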
In his PNAS paper, Strang describes a new technique for breaking a banded matrix up into simpler matrices — matrices with fewer bands. It’s easy to tell whether these simpler matrices have banded
inverses, and if they do, their combination will, too. Strang’s technique thus allows engineers to determine whether some promising new signal-processing techniques will, in fact, be practical.
One of the most common digital-signal-processing techniques is the discrete Fourier transform (DFT), which breaks a signal into its component frequencies and can be represented as a matrix. Although
the matrix for the Fourier transform is full, Strang says, “the great fact about the Fourier transform is that it happens to be possible, even though it’s full, to multiply fast and to invert it
fast. That’s part of what makes Fourier wonderful.” Nonetheless, for some signal-processing applications, banded matrices could prove more efficient than the Fourier transform. If only parts of the
signal are interesting, the bands provide a way to home in on them and ignore the rest. “Fourier transform looks at the whole signal at once,” Strang says. “And that’s not always great, because often
the signal is boring for 99 percent of the time.”
Richard Brualdi, the emeritus UWF Beckwith Bascom Professor of Mathematics at the University of Wisconsin-Madison, points out that a mathematical conjecture that Strang presents in the paper has
already been proven by three other groups of researchers. “It’s a very interesting theorem,” says Brualdi. “It’s already generated a couple of papers, and it’ll probably generate some more.” Brualdi
points out that large data sets, such as those generated by gene sequencing, medical imaging, or weather monitoring, often yield matrices with regular structures. Bandedness is one type of structure,
but there are others, and Brualdi expects other mathematicians to apply techniques like Strang’s to other types of structured matrices. “Whether or not those things will work, I really don’t know,”
Brualdi says. “But Gil’s already said that he’s going to look at a different structure in a future paper.”
— MIT News Office
First Connect Three
Copyright © University of Cambridge. All rights reserved.
'First Connect Three' printed from http://nrich.maths.org/
In this game the winner is the first to complete a row of three, either horizontally, vertically or diagonally.
Roll the dice, place each die in one of the squares and decide whether you want to add or subtract to produce a total shown on the board. Your total will then be covered with a counter.
You cannot cover a number which has already been covered.
If you are unable to find a total which has not been covered you must Pass.
You can use the interactive version below or print this board to play away from the computer.
Are there some numbers that we should be aiming for? Why?
Which number on the grid is the easiest to get? Why?
Which number is the most difficult to get? Why?
For a more challenging version of this game, you could look at
Connect Three
Frustum of a Regular Pyramid
Frustum of a regular pyramid is a portion of right regular pyramid included between the base and a section parallel to the base.
Properties of a Frustum of Regular Pyramid
• The slant height of a frustum of a regular pyramid is the altitude of the face.
• The lateral edges of a frustum of a regular pyramid are equal, and the faces are equal isosceles trapezoids.
• The bases of a frustum of a regular pyramid are similar regular polygons. If these polygons become equal, the frustum will become prism.
Elements of a Frustum of Regular Pyramid
a = upper base edge
b = lower base edge
e = lateral edge
h = altitude
L = slant height
A[1] = area of lower base
A[2] = area of upper base
n = number of lower base edges
Formulas for Frustum of a Regular Pyramid
Area of Bases, A[1] and A[2]
See the formulas of regular polygon for the formula of A[1] and A[2]
See the derivation of formula for volume of a frustum.
Lateral Area, A[L]
The lateral area of frustum of regular pyramid is equal to one-half the sum of the perimeters of the bases multiplied by the slant height.
The relationship between slant height L, lower base edge b, upper base edge a, and lateral edge e, of the frustum of regular pyramid is given by

L^2 = e^2 - ((b - a) / 2)^2
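The formulas above translate directly into code. A short sketch (function names are mine; the volume uses the standard frustum result V = (h/3)(A1 + A2 + sqrt(A1*A2)) referenced in the text):

```python
import math

def lateral_area(n, a, b, slant_height):
    """Half the sum of the base perimeters (n*a and n*b) times the slant height."""
    return 0.5 * (n * a + n * b) * slant_height

def volume(h, lower_area, upper_area):
    """V = (h/3) * (A1 + A2 + sqrt(A1 * A2))."""
    return (h / 3.0) * (lower_area + upper_area + math.sqrt(lower_area * upper_area))

# Square frustum (n = 4) with upper edge a = 2, lower edge b = 4, slant height 5:
A_L = lateral_area(4, 2, 4, 5)
```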
Weymouth Math Tutor
Find a Weymouth Math Tutor
...It would be my joy to help you feel the same way. I have taught Algebra 1 for more than 25 years. I have also served as a tutor of Algebra 1, Algebra 2, and Trigonometry for the same time
period. I am confident that you will find that my added experience in all these subjects give me excellent insight in how to best approach the teaching of Algebra 1.
6 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I am currently working in the accounting department at a local bank. Prior to my current position, I worked as an auditor at a CPA firm. I have been tutoring for approximately eight years for
students in elementary school through the college level.
9 Subjects: including algebra 1, prealgebra, geometry, algebra 2
...Math and home tutored homebound students in Math and English. I have taught H.S. Algebra and have tutored middle school math and H.S. math (Algebra I, II, Geometry and Pre-Calculus). I have
tutored college grads preparing for the math section of the MA teacher's test.
19 Subjects: including prealgebra, linear algebra, algebra 1, algebra 2
...I currently work as software developer at IBM. When it comes to tutoring, I prefer to help students with homework problems or review sheets that they have been assigned. I prefer to focus on
examples from each section, rather than each specific problem, to make sure they understand all of the concepts.
17 Subjects: including algebra 2, geometry, prealgebra, precalculus
...My tutoring style is pretty basic. I figure out where the student is struggling and try different approaches until I find one that gives him or her a better understanding of what needs to be
accomplished. I then use repetition for as long as needed in order to completely grasp a concept.
21 Subjects: including prealgebra, calculus, ACT Math, SPSS
MathGroup Archive: August 1998 [00079]
[Date Index] [Thread Index] [Author Index]
Re: operator overloading with UpValues (eg, for shifting graphics)
• To: mathgroup at smc.vnet.net
• Subject: [mg13642] Re: operator overloading with UpValues (eg, for shifting graphics)
• From: Paul Abbott <paul at physics.uwa.edu.au>
• Date: Fri, 07 Aug 1998 18:37:26 +0800
• Organization: University of Western Australia
• References: <6qbqkd$a67@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Daniel Reeves wrote:
> I thought it would be nice to shift some graphics by just adding an x-y
> pair. It seemed like this sort of thing should do that:
> Unprotect[Line];
> Line /: Line[pts_] + {x_,y_} := Line[pts+{x,y}]
> Protect[Line];
> But it doesn't. The UpValue for Graphics doesn't get used. This does
> work:
> Line /: Line[pts_] + shift[x_,y_] := Line[pts+{x,y}]
> but then I might as well just do this:
> shift[Line[pts_],{x_,y_}] := Line[pts+{x,y}]
> The idea is to be able to just say Line[...] + {x,y} and get a shifted
> Line.
> Does anyone know a way to do that?
> Also, if/why Mathematica is doing "the right thing" in defying my
> expectations in the first example. Ie, should the Listable-ness of
> Plus have precedence over my UpValue for Line?
I think you'll find that all these questions (and many more) are
answered in the excellent "Mathematica Graphics: Techniques and
Applications" by Tom Wickham-Jones. Incidentally, the packages for
this book are available from MathSource:
BTW, a nice simple example of Affine transformations on graphics was
given in The Mathematica Journal 4(3):38-39:
An arbitrary affine transformation (that is, a composition of
translation, rotation, and scaling) of a vector {x,y} in the plane can
be be written in matrix notation as m.{x,y} + c, where m is a 2x2
matrix (generating rotation and scaling) and c is a constant
(translation) vector.
Here is a simple procedure that applies affine transformations to
two-dimensional graphics:
Show[plot/.{x_?NumberQ,y_?NumberQ} -> m.{x,y}+c]
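For readers outside Mathematica, the same affine map m.{x,y} + c is easy to sketch in Python with NumPy (a hypothetical illustration, not part of the original post):

```python
import numpy as np

def affine(points, m, c):
    """Apply p -> m @ p + c to each row of an (N, 2) array of points."""
    return points @ np.asarray(m).T + np.asarray(c)

# Rotate 90 degrees about the origin, then translate right by 2.
rot90 = [[0.0, -1.0],
         [1.0, 0.0]]
shift = [2.0, 0.0]

pts = np.array([[1.0, 0.0],
                [0.0, 1.0]])
moved = affine(pts, rot90, shift)
```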
Commencing with a basic plot:
SinPlot=Plot[Sin[x],{x,0,6 Pi}];
one can rotate it about the z-axis by 90 degrees and then mirror it in
the y-axis:
or display a more complicated combination of rotation, scaling, and
> "This isn't right. This isn't even wrong."
> -- Wolfgang Pauli, on a paper submitted by a physicist colleague
Only a physicist would have such a sig ...
Paul Abbott Phone: +61-8-9380-2734
Department of Physics Fax: +61-8-9380-1014
The University of Western Australia Nedlands WA 6907
mailto:paul at physics.uwa.edu.au AUSTRALIA
God IS a weakly left-handed dice player
Washington, DC Precalculus Tutor
Find a Washington, DC Precalculus Tutor
...My tutoring style can adapt to individual students and will teach along with class material so that students can keep their knowledge grounded. I have a Master's degree in Chemistry and I am
extremely proficient in mathematics. I have taken many math classes and received a perfect score on my SAT Math test.
11 Subjects: including precalculus, chemistry, geometry, algebra 2
...I am a biological physics major at Georgetown University and so I have a lot of interdisciplinary science experience, most especially with mathematics (Geometry, Algebra, Precalculus,
Trigonometry, Calculus I and II). Additionally, I have tutored people in French and Chemistry, even though they a...
11 Subjects: including precalculus, chemistry, calculus, French
...I have a bachelor's and master's degree in Math Secondary Education. I have taught all subjects to grades 5 - 10 for over 12 years. I enjoy teaching and helping students understand math.
12 Subjects: including precalculus, calculus, geometry, algebra 1
...I can tutor in a variety of subjects; my specialty is in math and science. I am good at pre-algebra, algebra 1, algebra 2, physical science, biology, anatomy, pre-calculus, calculus, organic
and general chemistry, reading and other subjects you might need help in. I am a caring tutor who believes in thinking big and unleashing your potential for excellence.
17 Subjects: including precalculus, reading, English, chemistry
...I started tutoring in math and science about 8 years ago while I was a high school student. I was involved in a teaching assist program during this time. While in college, I tutored high school
students during my winter and summer breaks while maintaining engineering internships.
13 Subjects: including precalculus, calculus, geometry, algebra 1
more on structural models for clusters
Jim Lux James.P.Lux at jpl.nasa.gov
Thu Oct 2 20:39:33 EDT 2003
At 05:29 PM 10/2/2003 -0700, Bill Broadley wrote:
>On Wed, Oct 01, 2003 at 03:36:35PM -0700, Jim Lux wrote:
> > In regards to my recent post looking for cluster implementations for
> > structural dynamic models, I would like to add that I'm interested in
> > "highly distributed" solutions where the computational load for each
> > processor is very, very low, as opposed to fairly conventional (and widely
> > available) schemes for replacing the Cray with a N-node cluster.
> >
> > The number of processors would be comparable to the number of structural
> > nodes (to a first order of magnitude)
>Er, why bother? Is there some reason to distribute those things so
>thinly? Your average dell can do 1-4 Billion floating point ops/sec,
>why bother with so few per CPU? Am I missing something?
Your average Dell isn't suited to inclusion as an MCU core in an ASIC at
each node and would cost more than $10/node... I'm looking at Z80/6502/low
end DSP kinds of computational capability in a mesh containing, say,
100,000 nodes.
Sure, we'd do algorithm development on a bigger machine, but in the end
game, you're looking at zillions of fairly stupid nodes. The commodity
cluster aspect would only be in the development stages, and because it's
much more likely that someone has solved the problem for a Beowulf (which
is fairly loosely coupled and coarse grained) than for a big multiprocessor
with tight coupling like a Cray.
Haven't fully defined the required performance yet, but, as a starting
point, I'd need to "solve the system" in something like 100
microseconds. The key is that I need an algorithm for which the workload
scales roughly linearly as a function of the number of nodes, because the
computational power available also scales as the number of nodes.
Clearly, I'm not going to do a brute force inversion or LU decomposition of
a 100,000x100,000 matrix... However, inverting 100,000 matrices, each,
say, 10x10, is reasonable.
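To put rough numbers on that scaling argument (a back-of-the-envelope sketch, not from the original post): LU factorization of an n-by-n system costs on the order of (2/3)n^3 flops, while n/k independent k-by-k solves cost (n/k)(2/3)k^3, which grows only linearly in the number of nodes.

```python
# Rough flop-count comparison: one dense solve of an N x N system versus
# N/k independent solves of k x k blocks (one per processing node).
def dense_solve_flops(n):
    # LU decomposition of an n x n matrix costs on the order of (2/3) n^3 flops
    return (2 / 3) * n ** 3

def blockwise_flops(n, k):
    # n/k independent k x k solves -- the work per node stays constant,
    # so the total work scales linearly with the number of nodes
    return (n // k) * dense_solve_flops(k)

n, k = 100_000, 10
ratio = dense_solve_flops(n) / blockwise_flops(n, k)
print(f"dense: {dense_solve_flops(n):.3e} flops")
print(f"block: {blockwise_flops(n, k):.3e} flops")
print(f"ratio: {ratio:.1e}")
```

For 100,000 structural nodes the blockwise work is about eight orders of magnitude smaller, which is what makes the "zillions of fairly stupid nodes" scheme plausible at all.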
>Bill Broadley
>UC Davis
James Lux, P.E.
Spacecraft Telecommunications Section
Jet Propulsion Laboratory, Mail Stop 161-213
4800 Oak Grove Drive
Pasadena CA 91109
tel: (818)354-2075
fax: (818)393-6875
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
More information about the Beowulf mailing list | {"url":"http://www.clustermonkey.net/pipermail/beowulf/2003-October/032769.html","timestamp":"2014-04-16T19:00:33Z","content_type":null,"content_length":"5171","record_id":"<urn:uuid:8784dbbd-1536-4517-b116-e1dcef49c617>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
Symmetric spectra
Results 1 - 10 of 176
- Proc. London Math. Soc , 1998
"... In recent years the theory of structured ring spectra (formerly known as A∞- and E∞-ring spectra) has been significantly simplified by the discovery of categories of spectra with strictly associative and commutative smash products. Now a ring spectrum can simply be defined as a monoid with respect t ..."
Cited by 143 (27 self)
In recent years the theory of structured ring spectra (formerly known as A∞- and E∞-ring spectra) has been significantly simplified by the discovery of categories of spectra with strictly associative and commutative smash products. Now a ring spectrum can simply be defined as a monoid with respect to the smash product in one of these new categories of spectra. In order to make use of all of the standard tools from homotopy theory, it is important to have a Quillen model category structure [##] available here. In this paper we provide a general method for lifting model structures to categories of rings, algebras, and modules. This includes, but is not limited to, each of the new theories of ring spectra. One model for structured ring spectra is given by the S-algebras of [##]. This example has the special feature that every object is fibrant, which makes it easier to fo...
- Proc. London Math. Soc
"... 1. Preliminaries about topological model categories 5 2. Preliminaries about equivalences of model categories 9 3. The level model structure on D-spaces 10 4. Preliminaries about π∗-isomorphisms
of prespectra 14 ..."
Cited by 111 (35 self)
1. Preliminaries about topological model categories 5 2. Preliminaries about equivalences of model categories 9 3. The level model structure on D-spaces 10 4. Preliminaries about π∗-isomorphisms of
prespectra 14
- TOPOLOGY , 2003
"... A stable model category is a setting for homotopy theory where the suspension functor is invertible. The prototypical examples are the category of spectra in the sense of stable homotopy theory
and the category of unbounded chain complexes of modules over a ring. In this paper we develop methods for ..."
Cited by 78 (16 self)
A stable model category is a setting for homotopy theory where the suspension functor is invertible. The prototypical examples are the category of spectra in the sense of stable homotopy theory and
the category of unbounded chain complexes of modules over a ring. In this paper we develop methods for deciding when two stable model categories represent ‘the same homotopy theory’. We show that
stable model categories with a single compact generator are equivalent to modules over a ring spectrum. More generally stable model categories with a set of generators are characterized as modules
over a ‘ring spectrum with several objects’, i.e., as spectrum valued diagram categories. We also prove a Morita theorem which shows how equivalences between module categories over ring spectra can
be realized by smashing with a pair of bimodules. Finally, we characterize stable model categories which represent the derived category of a ring. This is a slight generalization of Rickard’s work on
derived equivalent rings. We also include a proof of the model category equivalence of modules over the Eilenberg-Mac Lane spectrum HR and (unbounded) chain complexes of R-modules for a ring R.
, 1998
"... The main theorem of [4] say that there is a proper closed simplicial model category ..."
- INTERNATIONAL CONGRESS OF MATHEMATICIANS. VOL. II , 2006
"... Differential graded categories enhance our understanding of triangulated categories appearing in algebra and geometry. In this survey, we review their foundations and report on recent work by
Drinfeld, Dugger-Shipley,..., Toën and Toën-Vaquié. ..."
Cited by 63 (3 self)
Differential graded categories enhance our understanding of triangulated categories appearing in algebra and geometry. In this survey, we review their foundations and report on recent work by
Drinfeld, Dugger-Shipley,..., Toën and Toën-Vaquié.
- MR1922205 (2003i:55012), Zbl 1025.55002
"... 1. Right exact functors on categories of diagram spaces 2 2. The proofs of the comparison theorems 5 3. The construction of the functor N ∗ 12 ..."
Cited by 61 (8 self)
1. Right exact functors on categories of diagram spaces 2 2. The proofs of the comparison theorems 5 3. The construction of the functor N ∗ 12
- J. Pure Appl. Algebra
"... Abstract. We give two general constructions for the passage from unstable to stable homotopy that apply to the known example of topological spaces, but also to new situations, such as the
A1-homotopy theory of Morel-Voevodsky [16, 23]. One is based on the standard notion of spectra originated by Boa ..."
Cited by 55 (0 self)
Abstract. We give two general constructions for the passage from unstable to stable homotopy that apply to the known example of topological spaces, but also to new situations, such as the A1-homotopy
theory of Morel-Voevodsky [16, 23]. One is based on the standard notion of spectra originated by Boardman [24]. Its input is a well-behaved model category C and an endofunctor
- Adv. in Math , 1998
"... ABSTRACT. We begin by showing that in a triangulated category, specifying a projective class is equivalent to specifying an ideal I of morphisms with certain properties, and that if I has these
properties, then so does each of its powers. We show how a projective class leads to an Adams spectral seq ..."
Cited by 41 (5 self)
ABSTRACT. We begin by showing that in a triangulated category, specifying a projective class is equivalent to specifying an ideal I of morphisms with certain properties, and that if I has these
properties, then so does each of its powers. We show how a projective class leads to an Adams spectral sequence and give some results on the convergence and collapsing of this spectral sequence. We
use this to study various ideals. In the stable homotopy category we examine phantom maps, skeletal phantom maps, superphantom maps, and ghosts. (A ghost is a map which induces the zero map of
homotopy groups.) We show that ghosts lead to a stable analogue of the Lusternik–Schnirelmann category of a space, and we calculate this stable analogue for low-dimensional real projective spaces. We
also give a relation between ghosts and the Hopf and Kervaire invariant problems. In the case of A ∞ modules over an A ∞ ring spectrum, the ghost spectral sequence is a universal coefficient spectral
sequence. From the phantom projective class we derive a generalized Milnor sequence for filtered diagrams of finite spectra, and from this it follows that the group of phantom maps from X to Y can
always be described as a lim1 ←− group. The last two sections focus
, 2003
"... We develop a new system of model structures on the modules, algebras and commutative algebras over symmetric spectra. In addition to the same properties as the standard stable model structures
defined in [HSS] and [MMSS], these model structures have better compatibility properties between commutati ..."
Cited by 39 (2 self)
We develop a new system of model structures on the modules, algebras and commutative algebras over symmetric spectra. In addition to the same properties as the standard stable model structures
defined in [HSS] and [MMSS], these model structures have better compatibility properties between commutative algebras and the underlying modules.
, 2002
"... This is the first of a series of papers devoted to lay the foundations of Algebraic Geometry in homotopical and higher categorical contexts. In this first part we investigate a notion of higher
topos. For this, we use S-categories (i.e. simplicially enriched categories) as models for certain kind of ..."
Cited by 32 (20 self)
This is the first of a series of papers devoted to lay the foundations of Algebraic Geometry in homotopical and higher categorical contexts. In this first part we investigate a notion of higher
topos. For this, we use S-categories (i.e. simplicially enriched categories) as models for certain kind of ∞-categories, and we develop the notions of S-topologies, S-sites and stacks over them. We
prove in particular, that for an S-category T endowed with an S-topology, there exists a model | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1563826","timestamp":"2014-04-18T07:17:49Z","content_type":null,"content_length":"34133","record_id":"<urn:uuid:1c7b1961-2fd6-4aaa-bccb-3c4e8fd680da>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00628-ip-10-147-4-33.ec2.internal.warc.gz"} |
2nd order homogeneous ODEs
July 7th 2012, 08:04 PM #1
Junior Member
Jul 2012
2nd order homogeneous ODEs
the behaviour of the solution of d^2x/dt^2+adx/dt+4x
depends on the value of a
Note a=alpha
a)find the ranges of a so that the solution displays
ii)critical damping
iii)underdamping (decaying oscillations)
iv)growing oscillations
v)constant oscillations
b)For each of these cases, choose a value of a within your stated
ranges, and find the general solution, displaying your solution
pictorially to show the behaviour of the solution
c) By carefully considering all the possible values of a, identify the ranges which do not lead to the behaviours listed in a), and describe the typical behaviour in this case
Re: 2nd order homogeneous ODEs
the behaviour of the solution of d^2x/dt^2+adx/dt+4x
depends on the value of a
Note a=alpha
a)find the ranges of a so that the solution displays
ii)critical damping
iii)underdamping (decaying oscillations)
iv)growing oscillations
v)constant oscillations
b)For each of these cases, choose a value of a within your stated
ranges, and find the general solution, displaying your solution
pictorially to show the behaviour of the solution
c) By carefully considering all the possible values of a, identify the ranges which do not lead to the behaviours listed in a), and describe the typical behaviour in this case
You have not given an equation... I expect you meant \displaystyle \begin{align*} \frac{d^2x}{dt^2} + \alpha\,\frac{dx}{dt} + 4x = 0 \end{align*}.
Anyway, I suggest you read this article.
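For what it's worth, a quick sketch (in Python, not part of the original thread) of how the ranges in part a) fall out of the characteristic equation r^2 + a r + 4 = 0, whose discriminant a^2 - 16 determines the regime:

```python
def classify(alpha):
    """Classify solutions of x'' + alpha x' + 4 x = 0 via the roots of
    the characteristic equation r^2 + alpha r + 4 = 0."""
    disc = alpha ** 2 - 16
    if disc < 0:                      # complex roots: oscillatory solutions
        re = -alpha / 2               # real part of both roots
        if re < 0:
            return "underdamped (decaying oscillations)"
        if re == 0:
            return "constant oscillations"
        return "growing oscillations"
    if disc == 0:
        return "critical damping"
    # disc > 0: two distinct real roots
    return "overdamped" if alpha > 0 else "non-oscillatory growth"

for a in (6, 4, 2, 0, -2, -6):
    print(a, "->", classify(a))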
Re: 2nd order homogeneous ODEs
thanks, how do you write the equation like that, without using / or a?
Re: 2nd order homogeneous ODEs
We use the built-in LaTeX compiler. You can go to the LaTeX subforum for more information.
July 7th 2012, 08:25 PM #2
July 7th 2012, 08:54 PM #3
Junior Member
Jul 2012
July 7th 2012, 08:57 PM #4 | {"url":"http://mathhelpforum.com/differential-equations/200743-2nd-ordr-homogenoues-odes.html","timestamp":"2014-04-20T16:59:13Z","content_type":null,"content_length":"40992","record_id":"<urn:uuid:4675c483-70ed-47ac-af2c-01475c6338e0>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math 640: Nonlinear Analysis
From MathWiki
Catalog Information
Nonlinear Analysis.
(Credit Hours:Lecture Hours:Lab Hours)
Differential calculus in normed spaces, fixed point theory, and abstract critical point theory.
Desired Learning Outcomes
This course is intended as a natural nonlinear sequel to Math 540. As in that course, the focus is on operators on abstract Banach spaces.
Students need a good understanding of basic linear analysis, whether this comes from taking Math 540 or from some other source.
Minimal learning outcomes
Students should obtain a thorough understanding of the topics listed below. In particular they should be able to define and use relevant terminology, compare and contrast closely-related concepts,
and state (and, where feasible, prove) major theorems.
1. Differential calculus on normed spaces
□ Fréchet derivatives
□ Gâteaux derivatives
□ Inverse Function theorem
□ Implicit Function theorem
□ Lyapunov-Schmidt reduction
2. Fixed point theory
□ Metric spaces
☆ Banach’s contraction mapping principle
☆ Parametrized contraction mapping principle
□ Finite-dimensional spaces
☆ Brouwer fixed point theorem
□ Normed spaces
☆ Schauder fixed point theorem
☆ Leray-Schauder alternative
□ Ordered Banach spaces
☆ Monotone iterative method
□ Monotone operators
3. Abstract critical point theory
□ Functional properties
☆ Convexity
☆ Coercivity
☆ Lower semi-continuity
□ Existence of global minimizers
□ Existence of constrained minimizers
□ Minimax results
☆ Ambrosetti-Rabinowitz mountain pass theorem
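As a concrete numerical illustration of Banach's contraction mapping principle from the outcomes above (a Python sketch, not part of the course outline): the iteration x_{n+1} = f(x_n) converges to the unique fixed point for any contraction f on a complete metric space.

```python
import math

def banach_fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = f(x_n); for a contraction on a complete metric
    space this converges to the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# cos is a contraction on [0, 1] (|cos'(x)| = |sin x| <= sin 1 < 1),
# so the iteration converges to the unique fixed point of cos.
p = banach_fixed_point(math.cos, 1.0)
print(p)  # approximately 0.739085
```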
Possible textbooks for this course include (but are not limited to):
Additional topics
In addition to the minimal learning outcomes above, instructors should give serious consideration to covering the following specific topics:
1. Differential calculus on normed spaces
2. Fixed point theory
□ Metric spaces
☆ Caristi fixed point theorem
□ Hilbert spaces
☆ Browder-Göhde-Kirk theorem
□ Ordered Banach spaces
☆ Krasnoselski’s fixed point theorem
☆ Krein-Rutman theorem
□ Monotone operators
☆ Hartman-Stampacchia theorem
3. Abstract critical point theory
□ Minimax results
☆ Ky Fan’s minimax inequality
☆ Ekeland’s variational principle
☆ Schechter’s bounded mountain pass theorem
☆ Rabinowitz saddle point theorem
☆ Rabinowitz linking theorem
Furthermore, it is anticipated that instructors will want to motivate the abstract theory by considering appropriate concrete examples.
Courses for which this course is prerequisite
It is proposed that this course be a prerequisite for Math 647. | {"url":"http://www.math.byu.edu/wiki/index.php/Math_640:_Nonlinear_Analysis","timestamp":"2014-04-20T10:46:59Z","content_type":null,"content_length":"17401","record_id":"<urn:uuid:4274c4c2-da3f-4703-97dd-73aab95cfd9b>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
Advisor Perspectives
Are REITs Now Undervalued?
Geoff Considine, Ph.D.
September 8, 2009
With such a diversity of REITS, differences in leverage are one significant factor (see here). Adding leverage to a REIT allows it to generate bigger returns—but also results in substantially
increased risk and a higher Beta. At a recent NAREIT (National Association of REITs) conference, a consensus emerged that less-leveraged REITs will emerge as the dominant player in coming years.
Not surprisingly, there is a positive correlation between leverage and market-to-book ratio: Highly-leveraged REITs tend to have high market-to-book ratios, while less leverage means lower ratios.
This measure can provide some basis for differentiating between REITs, but I have found that other statistics have considerable value as well.
One useful question, for example, is which variables, if any, could have helped flag the REITs and REIT indexes that were most vulnerable to the 2007 real estate crash. By analyzing the above REITs
through July 2006, I discovered which variables were the best predictors of loss. The half of these funds with the highest Betas at that time generated returns substantially better than the half
with lower Betas—a difference of about 15% per year in the subsequent three-year period. The high-Beta REITs and REIT funds had slightly higher trailing volatility than their low-Beta counterparts,
a result that seems somewhat paradoxical. When the broader market declines, high-Beta investments are expected to decline more than low-Beta investments. To investigate this further, I separated
the half of these REITs and REIT funds with the highest Betas from those with the lowest Betas (as of July 2006) and analyzed them as two separate portfolios in a portfolio Monte Carlo simulation
(Quantext Portfolio Planner).
Monte Carlo projections for low-Beta REITs (through July 06)
This portfolio, with equal weights to the low-Beta REITs, exhibited very high trailing returns as of 7/31/2006—the trailing return and volatility are very close to the results for the REIT funds cited earlier. Note that the Monte Carlo projected returns are considerably lower than the trailing returns (by about 5% per year: 18.25% vs. a trailing return of 23.15%) and the projected volatility is twice the trailing volatility. By March of 2007, my Monte Carlo projections indicated that REITs were vastly over-valued.
Monte Carlo projections for high-Beta REITs (through July 06)
The high-Beta REIT portfolio had a July 2006 Beta of 1.1 vs. 0.78 for the low-Beta case. The high-Beta REIT portfolio had a significantly higher yield and almost double the R-squared (relative to the S&P500) of the low-Beta portfolio. QPP also suggests that the low-Beta portfolio is more overvalued than the high-Beta portfolio.
Low-Beta Excess Returns = 23.15% - 18.25% = 4.90% per year
High-Beta Excess Returns = 22.01% - 19.05%= 2.96% per year
As of the end of July 2009, both of these portfolios had trailing three-year Betas of about 1.40 and R-squared values of 60% relative to the S&P500.
The difference between trailing returns and expected returns is an important tactical variable (see here). Asset classes that have returned considerably more than their expected returns are likely
to revert to the mean and deliver lower returns (and vice versa). Reversion to the mean was one of the big warning signals going into late 2007 and 2008. Even among REITs, I have noted that the
high-Beta REITs as a group were less over-valued on this basis than the low-Beta REITs. I sorted all of the REITs and REIT funds just based on the difference between trailing three-year return and
QPP expected returns using data through July 06, and I found that REITs that were under-valued (expected return greater than trailing three-year return) out-performed REITs that were over-valued
(expected return less than trailing return) by 9.3% in average annual return over the next three years.
The real estate bubble was obviously driven by dynamics that were at work for more than just the last three years, so I generated QPP projections using ten years of trailing data (through July 2009)
and compared the expected return to the trailing ten-year return for all of the REITs and REIT funds that had at least ten years of data available. The results were striking. The trailing ten-year
average return for all of the REITs in our sample was 14.1% (arithmetic average annual return). The expected average annual return for the REITs from the Monte Carlo simulation was 14.0%. In other
words, REITs as a group were more or less fairly valued at the end of July 2009. There is, however, quite a spread within the asset class—some REITs look under-valued and some look over-valued on
this statistical basis.
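The screen described here, sorting REITs by the gap between Monte Carlo expected return and trailing return, can be sketched as follows (all tickers and figures below are hypothetical, not the article's data):

```python
# Hypothetical illustration of the valuation screen described above:
# rank funds by (Monte Carlo expected return - trailing return).
funds = {
    "REIT_A": {"trailing": 0.23, "expected": 0.18},   # over-valued
    "REIT_B": {"trailing": 0.12, "expected": 0.15},   # under-valued
    "REIT_C": {"trailing": 0.14, "expected": 0.14},   # fairly valued
}

def valuation_gap(stats):
    # positive gap -> expected exceeds trailing -> under-valued
    return stats["expected"] - stats["trailing"]

ranked = sorted(funds, key=lambda k: valuation_gap(funds[k]), reverse=True)
for name in ranked:
    print(name, f"{valuation_gap(funds[name]):+.2%}")
```

Under the mean-reversion argument in the text, funds at the top of this ranking would be the candidates expected to outperform.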
Remember, if you have a question or comment, send it to . | {"url":"http://www.advisorperspectives.com/newsletters09/36-REITs3.php","timestamp":"2014-04-20T03:09:38Z","content_type":null,"content_length":"27183","record_id":"<urn:uuid:714d68a6-5dc1-40c5-8dca-94b1de15ecb7>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
finding isothermal coordinates uniformly
It's a well-known but difficult theorem that any $C^2$ surface in $R^3$ with parameter domain the unit disk can be put into isothermal parameters. The Wikipedia article on isothermal coordinates
references several proofs (none of which I can access immediately). Suppose given a family of surfaces $X^t$ such that the map $t \mapsto X^t$ is continuous from some interval of $t$ values to $C^n
(D,R^3)$. Then can we find surfaces $Y^t$ such that $Y^t$ gives isothermal parameters for $X^t$ and the map $t \mapsto Y^t$ is continuous into $C^n(D,R^3)$?
It looks like this might follow if we know (a) the isothermal parameters are solutions of some differential equation and (b) solutions of that differential equation depend continuously in the $C^n$
metric on parameters. That proof promises to involve checking a lot of details in two complicated proofs and will look like hand-waving. So perhaps this is stated somewhere in the literature?
Do you need some boundedness conditions on the "$C^2$ surface in $\mathbb{R}^3$ with parameter domain the unit disk" you permit? Otherwise, you might end up with surfaces like the plane, which
cannot be put into global isothermal co-ordinates with domain the disc (although of course it admits local isothermal co-ordinates). – macbeth Mar 22 '11 at 3:17
1 Answer
Choose $o^t\in X^t$ so that $t\mapsto o^t$ is smooth, and choose a family of unit vectors $u^t\in T_{o^t}X^t$ so that $t\mapsto u^t$ is smooth.
Parametrize $X^t$ isothermally by the unit disc $f^t:D\to X^t$ in such a way that $f^t(0)=o^t$ and $(d_0f^t)(1)$ is proportional to $u^t$. Such a parametrization is unique. [The latter follows since a conformal diffeomorphism $h:D\to D$ such that $h(0)=0$ and $h'(0)\in\mathbb R_+$ has to be the identity.]
It follows that $t\mapsto f^t$ is continuous; otherwise different partial limits would give different isothermal coordinates with the chosen origin and real direction.
With a bit more work, one can show that $t\mapsto f^t$ is smooth.
This seems like an excellent idea, and a big improvement over trying to check the detailed proofs. As soon as I can work out the details I will accept your answer. – Michael Beeson Mar 15
'11 at 19:57
I tried to work out the details. There is a gap, though. Suppose we have a sequence of isothermal parametrizations $Y^{t_n}$ that are bounded in $C^n$ norm away from $X^0$. Now we need to
argue by compactness that some subsequence converges. For that we need a uniform bound on the isothermal parametrizations $Y^t$. Where will we get a bound on the $C^n$ norms of the $Y^t$?
– Michael Beeson Mar 16 '11 at 20:26
The isothermal coordinate is a solution of some elliptic PDE with coefficients taken from the first fundamental form. Such solutions are known to be uniformly smooth in a compact
subdomain. From this you should get $C^\infty$ case. For $C^n$ you have to be more careful, the first fundamental form is in class $C^{n-1}$, but if I remember right solution of elliptic
equation with such coefficient should be $C^n$... (It was a while since I study this stuff.) – Anton Petrunin Mar 17 '11 at 16:36
Now I have another difficulty with your answer. Let's check the uniqueness. Suppose $W$ and $V$ are two different isothermal parametrizations of $X$ (taking 0 to the same point and having
the same x-derivative at 0). Then if I understand you, you want to define $\varphi:= W^{-1} V$ and say that $\varphi$ is a conformal mapping of the disk, fixing 0 and with $\varphi_x(0) =
1$, and therefore $\varphi$ is the identity. Nice, but, the domain of $W^{-1}$ is a geometric surface, so how do I define it if $W$ is not one-one? It seems this will only work to get
LOCAL isothermal coordinates. – Michael Beeson Mar 21 '11 at 23:38
1 @Michael Beeson: Since $W$ and $V$ are two different parametrizations of $X$, we can write $V=W\circ\varphi$, for some self-diffeomorphism $\varphi$ of the disc. Since $W$ and $V$ are
both conformal, we know that $\varphi$ must be locally conformal, hence conformal. – macbeth Mar 22 '11 at 3:02
Not the answer you're looking for? Browse other questions tagged dg.differential-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/58491/finding-isothermal-coordinates-uniformly","timestamp":"2014-04-21T07:41:10Z","content_type":null,"content_length":"58769","record_id":"<urn:uuid:2abb2156-13e2-4b17-907c-937d8a81b197>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00453-ip-10-147-4-33.ec2.internal.warc.gz"} |
Throwing Buffon’s Needle with Mathematica
It has long been known that Buffon’s needle experiments can be used to estimate π. We have written the Mathematica package BuffonNeedle to carry out the most common forms of Buffon’s needle experiments. In this article we review statistical aspects of the experiments, introduce the package BuffonNeedle, discuss the crossing probabilities and asymptotic variances of the estimators, and describe how to calculate them using Mathematica.
Buffon’s needle problem is one of the oldest problems in the theory of geometric probability. It was first introduced and solved by Buffon [1] in 1777. As is well known, it involves dropping a needle onto a plane ruled with equidistant parallel lines. Kendall and Moran [2] and Diaconis [3] examine several aspects of the problem with a long needle. Morton [4] and Solomon [5] provide the general extension of the problem. Perlman and Wichura [6] investigate a number of statistical estimation procedures for π. Wood and Robertson [7] introduce the concept of grid density and provide an alternative idea. They show that Buffon’s original single grid is actually the most efficient if the needle length is held constant (at the distance between lines on the single grid) and the grids are chosen to have equal grid density (i.e., equal length of grid material per unit area). In [8], Wood and Robertson investigate ways of maximizing the information in Buffon’s experiments.
We organize this article as follows. In the first three sections, we review Buffon’s experiments on single, double, and triple grids and their statistical issues. In the next section, we introduce
the features of the package BuffonNeedle. The functions in the package implement Monte Carlo experiments for the three types of grids. The results of each experiment are given in a table and in a
picture. When the number of the needles thrown on each grid is large, very nice pictures exhibit the interface between chance and necessity. In the last two sections, we describe how to calculate the
crossing probabilities in single- and double-grid experiments and the asymptotic variances of the estimators for each grid using Mathematica.
Single-Grid Experiment
The single-grid form is Buffon’s well-known original experiment. A plane (table or floor) has parallel lines on it at equal distances. The crossing probabilities are derived in [5, 9, 10]. They are also explained in the “Calculating the Crossing Probabilities in Single and Double Grids” section of this article.
Figure 1. Buffon’s needles on a single grid.
From equation (1), we can write
Let 2) as
The random variable 6]. The variance of
which is minimized by taking
In Buffon’s experiments, the parameter of main interest is 3) as
It can also be expressed in terms of
where 6) or (7) and Monte Carlo methods, we can obtain empirical estimates of 11] ensures that Buffon’s estimator is an asymptotically unbiased, 100% efficient estimator of
which is, as expected, minimized at
If it is evaluated at
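As an illustration, a minimal Monte Carlo sketch of the single-grid experiment in Python (this is not the BuffonNeedle package itself), assuming the standard crossing probability 2l/(πd) for a needle of length l ≤ d and Buffon's estimator 2ln/(dX), where X is the number of crossings among n throws:

```python
import math
import random

def buffon_single(n, needle_len=1.0, spacing=1.0, seed=0):
    """Monte Carlo single-grid experiment (needle_len <= spacing).
    Returns the crossing count and Buffon's estimator of pi."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n):
        # distance from the needle midpoint to the nearest line,
        # and the acute angle between needle and lines
        x = rng.uniform(0.0, spacing / 2)
        theta = rng.uniform(0.0, math.pi / 2)
        if x <= (needle_len / 2) * math.sin(theta):
            crossings += 1
    pi_hat = 2 * needle_len * n / (spacing * crossings)
    return crossings, pi_hat

crossings, pi_hat = buffon_single(200_000)
print(pi_hat)  # close to pi; the expected crossing frequency is 2*l/(pi*d)
```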
Double-Grid Experiment
In the double-grid experiment, also called the Laplace extension of Buffon’s problem, a plane is covered with two sets of parallel lines where one set is orthogonal to the other.
Figure 2. Buffon’s needles on a double grid.
In Figure 2, we see a double-grid plane and three needles of length
Let 6] showed that the random variable
is the UMVUE and has 100% asymptotic efficiency with the variance
As in the case of a single grid, the variance of 12), we have
Then Buffon’s estimator,
which can be used to obtain empirical estimates of
which is minimized at
When evaluated at
Compare the last equation with equation (10). Buffon’s estimator in the double-grid experiment is
Triple-Grid Experiment
In the triple-grid experiment, a plane is covered with equilateral triangles of altitude
Figure 3. Buffon’s needles on a triple grid.
Figure 3 shows a triple-grid plane and four needles of length 7], the crossing probabilities are given as
Let 6] investigated the random variable
By replacing
The variance of
As in the cases of the single- and double-grid experiments, the variance of
From equation (21), Buffon’s estimator can be written as
For this experiment, the asymptotic variance of Buffon’s estimator is
Comparing this with equations (10) and (18), we can infer how Buffon’s estimator in the triple-grid experiment compares in efficiency with the other two; Wood and Robertson [7] investigated this conclusion. They introduced the notion of grid density, which is the average length of grid in a unit area, and showed that when the experiments are standardized, Buffon’s estimator in a single grid is the most efficient. In their approach, the asymptotic variances follow from equations (8), (16), and (24), evaluating them at
Table 1. Asymptotic variances of Buffon’s estimator for three grids.
Table 2. Asymptotic variances of Buffon’s estimator for three standardized grids.
The Package
The BuffonNeedle package is designed to throw needles on single, double, and triple grids. Copy the file BuffonNeedle.m (see Additional Material) into the Mathematica
There are three functions in this package:
7). The function also gives a picture of the simulation results. In the picture, the midpoints of the needles crossing any line are colored red, while those of needles crossing no line are colored
green. The functions 15). As in the other two cases, in a triple-grid experiment carried out by the function 23).
For each grid, as
Calculating the Crossing Probabilities in Single and Double Grids
In this section, we show how to calculate the crossing probabilities in single- and double-grid experiments using Mathematica.
Single-Grid Probabilities
In the single-grid experiment, two independent random variables with uniform distribution are defined to determine the relative position of the needle to the lines: the distance
From Figure 4, it is clear that the needle crosses the line when
As there are two possible outcomes in the single-grid experiment, the probability that the needle does not cross any line is given by
which can alternatively be calculated by
These are the probabilities given in equation (1).
Figure 4. The random variables in the single-grid experiment.
Double-Grid Probabilities
In the double-grid experiment, three independent random variables with uniform distribution can be defined to determine the relative position of the needle to the lines: the distance
As in the case of the single-grid experiment, the joint density function of
In the double-grid experiment, there are four possible outcomes:
• The needle crosses a horizontal line while not crossing a vertical line.
• The needle crosses a vertical line while not crossing a horizontal line.
• The needle crosses both a vertical line and a horizontal line or, equivalently, the needle crosses two lines.
• The needle crosses neither a vertical line nor a horizontal line or, equivalently, the needle crosses no line.
The needle crosses a horizontal line but does not cross a vertical line when
The needle crosses a vertical line but does not cross a horizontal line when
Thus, the probability that the needle crosses exactly one line is
The needle crosses both the vertical line and the horizontal line when
Finally, the needle crosses neither the vertical line nor the horizontal line when
These are the probabilities given in equation (11).
Figure 5. The random variables in the double-grid experiment.
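A simulation check of the double-grid crossing probability (a Python sketch, not the package code), assuming Laplace's standard result p = l(4d − l)/(πd²) for a needle of length l ≤ d on a square grid of spacing d:

```python
import math
import random

def double_grid_crossing_prob(n, l=0.8, d=1.0, seed=1):
    """Estimate P(needle crosses at least one line) on a square d x d grid
    by simulation, for comparison with Laplace's formula l(4d - l)/(pi d^2)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(0.0, d)          # midpoint position within one cell
        y = rng.uniform(0.0, d)
        theta = rng.uniform(0.0, math.pi / 2)
        h = (l / 2) * math.cos(theta)    # horizontal half-extent of the needle
        v = (l / 2) * math.sin(theta)    # vertical half-extent
        crosses_vertical = x < h or x > d - h
        crosses_horizontal = y < v or y > d - v
        if crosses_vertical or crosses_horizontal:
            hits += 1
    return hits / n

p_mc = double_grid_crossing_prob(200_000)
p_exact = 0.8 * (4 * 1.0 - 0.8) / (math.pi * 1.0 ** 2)
print(p_mc, p_exact)
```

With l = 0.8 and d = 1, the simulated frequency should agree with the closed form to within Monte Carlo error.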
Delta Method and Asymptotic Variance
Let a random variable 12, 13] and is based on the Taylor expansion about the mean of
Taking the expectation of both sides, we obtain the approximate mean of
From the well-known identity of statistics
the approximate variance, also called asymptotic variance, of
Thus, we can say that the random variable
Buffon’s estimator is a nonlinear function of the random variable 29), substituting 5), (13), and (22), we can obtain the asymptotic variances of 8), (16), and (24) for each grid.
Alternatively, asymptotic variances of 7, 11]
where 1), (11), and (19) actually define a list for each grid as follows:
From equations (30) and (31), one can define the following functions to obtain the asymptotic variances:
For each grid, therefore, the asymptotic variances are
which were previously given in equations (8), (16), and (24), respectively. For
For the standardized experiments of Wood and Robertson [7], we can obtain the asymptotic variances given in Table 2 by substituting
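The delta-method approximation behind these asymptotic variances, Var g(X) ≈ g′(μ)² σ², can be checked numerically. The function g and the distribution below are generic illustrations of my own choosing, not Buffon's estimator:

```python
import random

random.seed(3)
mu, sigma = 10.0, 0.1            # mean and standard deviation of X

def g(x):                        # an illustrative nonlinear transformation
    return 1.0 / x

def dg(x):                       # its derivative
    return -1.0 / (x * x)

n = 100_000
values = [g(random.gauss(mu, sigma)) for _ in range(n)]
mean = sum(values) / n
sample_var = sum((v - mean) ** 2 for v in values) / (n - 1)
delta_var = dg(mu) ** 2 * sigma ** 2   # asymptotic variance: g'(mu)^2 * sigma^2
```

For small σ relative to μ the empirical variance of g(X) and the delta-method value agree closely.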
I would like to thank the reviewers whose comments led to a substantial improvement in this article. I would also like to thank my colleague, M. H. Satman, for creating some of the functions in the
package BuffonNeedle. Thanks to Dr. Aylin Aktukun for reading and commenting on numerous versions of this manuscript. This work is supported by the Scientific Research Projects Unit of Istanbul University.
[1] G. L. Buffon, “Essai d’arithmétique morale,” Histoire naturelle, générale, et particulière, Supplément 4, 1777 pp. 685-713.
[2] M. G. Kendall and P. A. P. Moran, Geometrical Probability, New York: Hafner, 1963.
[3] P. Diaconis, “Buffon’s Problem with a Long Needle,” Journal of Applied Probability, 13, 1976 pp. 614-618.
[4] R. A. Morton, “The Expected Number and Angle of Intersections Between Random Curves in a Plane,” Journal of Applied Probability, 3, 1966 pp. 559-562.
[5] H. Solomon, Geometric Probability, Philadelphia: Society for Industrial and Applied Mathematics, 1978.
[6] M. D. Perlman and M. J. Wichura, “Sharpening Buffon’s Needle,” American Statistician, 29(4), 1975 pp. 157-163.
[7] G. R. Wood and J. M. Robertson, “Buffon Got It Straight,” Statistics and Probability Letters, 37, 1998 pp. 415-421.
[8] G. R. Wood and J. M. Robertson, “Information in Buffon Experiments,” Journal of Statistical Planning and Inference, 66, 1998 pp. 21-37.
[9] M. R. Spiegel, Probability and Statistics, Schaum’s Outline of Probability and Statistics, New York: McGraw-Hill, 1975 pp. 67-68.
[10] J. V. Uspensky, Introduction to Mathematical Probability, New York: McGraw-Hill, 1937, p. 252.
[11] C. R. Rao, Linear Statistical Inference and Its Applications, 2nd ed., New York: Wiley & Sons, 1973.
[12] G. W. Oehlert, “A Note on the Delta Method,” American Statistician, 46, 1992 pp. 27-29.
[13] J. A. Rice, Mathematical Statistics and Data Analysis, 2nd ed., Belmont, CA: Duxbury Press, 1994 p. 149.
E. Siniksaran, “Throwing Buffon’s Needle with Mathematica,” The Mathematica Journal, 2011. dx.doi.org/doi:10.3888/tmj.11.1-4.
Additional Material
Available at www.mathematica-journal.com/data/uploads/2011/11/BuffonNeedle.m.
About the Author
Enis Siniksaran is an assistant professor at Istanbul University, Department of Econometrics. His research interests include geometric approaches to statistical methods, robust statistical methods,
and the methodology of econometrics. He has been working with Mathematica since 1998. For details on his recent work dealing with the geometry of classical test statistics, in which all symbolic and
numerical computations and graphical work are done using Mathematica, see library.wolfram.com/infocenter/Articles/5962.
Enis Siniksaran
Istanbul Universitesi, Iktisat Fak., Ekonometri Bolumu
Have you ever inserted batteries in a device only to find that it didn't work? You reverse the batteries and try again, but no luck. You can't find the polarity diagram to guide you and you're
dealing with 3 or 4 batteries and all the possible combinations! Well, that just happened to me as I was inserting 3 'C' batteries into a new emergency lantern I just purchased. There was no guide
that I could see. I knew there were 8 possibilities but it was late and my patience quickly ran out. I tried it again the following morning, shone my small LED light on it and saw the barely visible polarity diagram.
After seeing the lantern finally operate, I realized I should have used a methodical approach -- practice what I preach!! Then I thought that this might be a natural application of the Multiplication
Principle one could use in the classroom. Of course, it would work nicely if you happened to have the identical lantern but you might have some of these in the building or at home which take 2 or
more batteries. IMO, there's something very real and exciting about solving a math problem and seeing the solution confirmed by having "the light go on!" I'll avoid commenting on the obvious
symbolism of that quoted phrase...
Instructional/Pedagogical Considerations
(1) I would start with a small flashlight requiring only one battery to set up the problem. For this simplest case, students should be encouraged to describe the correct placement in their own words
and on paper.
(2) Would you have several flashlights/lanterns available, one for each group of 2-4 students or would you demonstrate the problem with one device and call on students to suggest a placement of the
batteries? Needless to say, if you allow students to work with their own flashlights, they will look for the polarity diagram so you will need to cover those somehow. That is problematic!
(3) Do you believe most middle school students (if the polarity diagram is not visible) will randomly dump in the batteries to get the light to go on and be the first to do so? Is it a good idea to
let them do it their way before developing a methodical approach? Again, if a student or group solves the problem, it is important to have them write their solution before describing it to the class.
If there is more than one battery compartment, students should realize the need to label the compartments such as A, B, C, ... Once they reach 3 or more batteries, they should recognize that
a more structured, methodical approach is needed so that one doesn't repeat the same battery placement or miss one. One would hope!
(4) Is it a drawback that the experiment will probably end (i.e., the light goes on) before exhausting all possible combinations? How would we motivate students to make an organized list or devise a
methodical approach if the light goes on after the first or second placement of the batteries?
(5) I usually model these kinds of problems using the so-called "slot" method. Label the compartments A, B, ... for example and make a "slot" for each. For two compartments we have
A B
_ _
Under each slot, I list the possibilities, e.g., (+) end UP or DOWN (depending on the device, other words may be more appropriate). Here I would only concern myself with labeling the (+) end, the one
with the small round protruding nub. For this problem I would write the number (2) on each slot since there are only TWO ways for each battery to be placed. Note the use of (..). In general, above
each slot I would write the number of possibilities. For two compartments (or two batteries), the students would therefore write (2) (2). They know the answer is 4 but some will think we are adding
rather than multiplying. Ask the class which operation they believe will always work. How would you express your questions or explanation to move students toward the multiplication model? The precise
language we use is of critical importance and we usually only learn this by experimentation. If one way of expressing it doesn't seem to click with some students, we try another until we refine it or
see the need for several ways of phrasing it. This is the true challenge of teaching IMO. We can plan all of this carefully ahead of time, but we don't know what the effect is until we go "live" (or
have experienced it many times!).
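The slot-method count above can be reproduced with a few lines, which might itself make a nice classroom check (the orientation labels are arbitrary):

```python
from itertools import product

# one "slot" per compartment; each battery can go (+) end UP or DOWN
orientations = ("UP", "DOWN")
placements = list(product(orientations, repeat=3))   # 3 'C' batteries
count = len(placements)                              # 2 * 2 * 2 = 8
```

Changing `repeat` to the number of compartments reproduces the general 2^n count of the Multiplication Principle.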
Perhaps you've already used a similar application in the classroom - please share with us how you implemented it. Circuit diagrams in electronics also lend themselves nicely to this approach.
Typically, I've used 2, 3 or more different coins to demonstrate the principle but the batteries seem to be a more natural example, although I see advantages and disadvantages to both. At least with
the batteries, students should not question the issue of whether "order counts!"
I could say much more about developing the Multiplication Principle in the classroom, but I would rather hear from my readers.
If you've used other models to demo this key principle, let us know...
MathNotations' Third Online Math Contest is tentatively scheduled for the week of Oct 12-16, a 5-day window to administer the 45-min contest and email the results. As with the previous contest, it
will be FREE, up to two teams from a school may register and the focus will be on Geometry, Algebra II and Precalculus. If any public, charter, prep, parochial or homeschool (including international
school) is interested, send me an email ASAP to receive registration materials: "dmarain 'at' gmail dot com."
Read Update (4) below!
Updates (Pls Read!!)
(1) The first draft of the contest is now complete.
(2) As with the previous two contests there will be one or two questions which require demonstration, that is, the students will have to derive, explain or prove a statement. This is best done
freehand and then scanned as a jpeg image which can be emailed as an attachment along with the official answer sheet. In fact, the entire answer sheet can be scanned but there is information on it
that I need to have.
(3) Some of the questions are multipart with the last part requiring more generalization.
(4) Even if you have previously indicated that you wish to participate, please send me another email using the title: THIRD MATHNOTATIONS CONTEST. Please copy and paste that into the title. Also,
when sending the email pls include your full name and title (advisor, teacher, supervisor, etc.), the name of your school (indicate if HS or Middle School) and the complete school address. I have
accumulated a database of most of the schools which have expressed interest or previously participated but searching through thousands of emails is much easier when the title is the same! If you have
already sent me an email this summer or previously participated, pls send me one more if interested in participating again.
(5) Finally, pls let your colleagues from other schools in your area know about this. Spread the word! If you have a blog, pls mention the contest. If you're connected to your local or state math
teachers association, pls let them know about this and ask them to post this info on their website if possible.
Note: Sending me the email is not a commitment! It simply means you are interested and will receive a registration form.
I have an equal number of pennies, nickels and dimes. I also have some quarters which have the same value as the pennies, nickels and dimes combined. If I have no other coins, what is the fewest
possible total number of coins I could have? What is the value of all the coins?
(1) An opening day problem?
(2) Would you have students working alone or in small groups?
(3) Would you allow the calculator?
(4) Appropriate for prealgebra students? Students below grade 6?
(5) Is zero a possible answer?
(6) Wording too confusing for most students? Is it ambiguous or clear?
(7) Do you feel there are important underlying concepts and ideas embedded here or is it just a fun puzzle to engage students?
(8) Do students have difficulty in separating number of coins from their value?
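A brute-force sketch of the coin problem (it assumes a nonzero answer, which question (5) above invites students to debate):

```python
def fewest_coins():
    # k pennies, k nickels, k dimes; the quarters must exactly match their value
    for k in range(1, 10_000):
        cents = k * (1 + 5 + 10)          # combined value of the smaller coins
        if cents % 25 == 0:               # quarters can only add up in steps of 25
            quarters = cents // 25
            return 3 * k + quarters, 2 * cents   # coin count, total value in cents

coins, total_cents = fewest_coins()
```

The divisibility condition 16k ≡ 0 (mod 25) forces k = 25, giving 75 small coins plus 16 quarters: 91 coins worth $8.00 in all.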
An aside...
I've been asking my kids questions every day to sharpen their minds for school which starts next week. I asked my son how he would spell arachnophobia, the fear of spiders. He was confident he knew
the first four letters: iraq....
With the school year starting for some and soon for others, here are a couple of ideas to set the tone in our math classes early on. Do not assume these are intended only for your advanced students.
Middle School
1) (No calculator!) What is the average of ninety-nine 1's and one 2?
2) (No calculator!) Find 5 different sets of 5 numbers each of which has a mean of 5.
Note: The wording will be problematic here since students often associate the adjective different with the numbers themselves. Basic grammar, cough, cough...
High School (or advanced middle schoolers)
(No calculator!)
Set S consists of 100 different numbers each of which is between 0 and 1.
Which of the following could be the mean of these 100 numbers?
I. 0.01
II. 0.5
III. 0.98
(A) I only (B) II only (C) I and II (D) I and III (E) I, II, and III
[Yes, there will always be some discussion of "between!"]
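For the record, a quick script can check the two middle-school warm-ups (one possible answer set for problem 2; hold it back until after students try):

```python
# Problem 1: average of ninety-nine 1's and one 2
avg = (99 * 1 + 2) / 100                     # 101/100 = 1.01

# Problem 2: five different sets of five numbers, each with mean 5
sets = [[5, 5, 5, 5, 5], [1, 3, 5, 7, 9], [3, 4, 5, 6, 7],
        [0, 0, 5, 10, 10], [2, 4, 5, 6, 8]]
```

Each of the last four sets sums to 25, so each mean is 5.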
A few comments...
(1) These problems are intended to be a springboard for your own creativity. You can do better!!
(2) Each of you probably has your own favorite resources of problems so that you don't have to reinvent the wheel. However, finding high-quality Problems of the Day which are matched to your
curriculum is not always easy despite the abundant ancillaries supplied by the publisher and resources on the web.
(3) From the previous comment you can guess that I feel strongly about giving more challenging warm-ups to our students - all of our students (adjusted for backgrounds, abilities, skills). Don't
worry that discussion of these will destroy your lesson. Students can work together for 5 minutes while you're taking attendance, checking homework, etc. I usually invited students who solved some or
all of these to go to the board and explain their methods. To encourage students to look these over, tell them you will include a variation of one of these questions on the next quiz or test. Start
by having it as an Extra Credit problem, then worth a couple of points, gradually increasing their value.
(4) Imagine if our students were exposed to these higher-order types of questions about 180 times a year from middle school on. By the time they take their college-entrance exams or other state
assessments (or tests like the ADP End of Course Exams), they will have a much higher degree of comfort and should perform better, although we know that there are so many other factors that go into
performance on high-stakes tests.
(5) Yes, the above high school problem is in SAT format. Why do you think I included these kinds on my daily warm-ups? By the way, I'm not promoting ETS but middle and high school teachers may well
want to invest in (or ask their supervisor to order) the College Board's book of
10 Real SATs. There is no better source for these kinds of problems and many questions are appropriate for middle schoolers.
After 4 tests, Barry's average score was 5 points higher than Michelle's. After the 5th test, Michelle's overall average was 5 points higher than Barry's. Michelle's score on the 5th test was how
many points higher than Barry's?
Can you find at least three methods for solving this?
Algebraic, "plug-in", conceptual, etc...
As teachers we need to have a deep understanding of these kinds of problems and familiarity with several approaches. Of course, our students will show us a variety of methods, both right and wrong,
when we open up the dialog!
Students from middle school on see many problems relating to means. However, they need to see a variety of problems of increasing difficulty. This question is certainly not a highly challenging math
contest problem but I believe it demonstrates some important principles of averages and can be used to review different problem-solving strategies. Middle schoolers would struggle with the algebraic
approach (a system of two equations), however they should be thoroughly comfortable with the underlying ideas.
Since the focus is on concept and method, I will give the answer: 45
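A numeric sanity check of the answer 45, using illustrative scores (any scores satisfying the first condition work):

```python
barry4 = [85, 85, 85, 85]        # Barry's 4-test average: 85
michelle4 = [80, 80, 80, 80]     # Michelle's: 80 (5 points lower)
b5 = 50                          # any 5th-test score for Barry
m5 = b5 + 45                     # the claimed answer
barry_avg = (sum(barry4) + b5) / 5
michelle_avg = (sum(michelle4) + m5) / 5
```

Algebraically: if B = M + 20 after four tests, then (M + m5) = (B + b5) + 25 forces m5 − b5 = 45 regardless of the particular scores.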
f(x) = t-2(x+4)^2 where t is a constant.
If f(-8.3) = f(a) and a > 0, what is the value of a?
This type of question is of the Grid-in type (or short constructed response) that now appears on standardized testing like the SAT-I and ADP Algebra 2.
I administered it to a group of strong SAT students recently and the students who completed Alg II struggled with it. As our president might say, this was a "teachable moment!"
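The symmetry argument can be checked in two lines: the parabola is symmetric about x = −4, so a is the mirror image of −8.3, i.e. a = 0.3, and t drops out (the value below is arbitrary):

```python
def f(x, t=7.0):                 # t is an arbitrary constant; it cancels
    return t - 2 * (x + 4) ** 2

a = 0.3                          # -8.3 and a are equidistant from x = -4
```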
A few thoughts...
Should textbooks include more questions of this type both as examples and regular homework exercises? As you might guess, I'm very much opposed to having questions labeled as Standardized Test
Practice in texts or appear in a separate section of the text or in ancillaries.
By the way, by including the label "SAT-type problems" in the title of this post I'm trying to engender both positive and negative response. Those of you who have followed this blog for 2- 1/2 years
know that what I'm really referring to are "conceptually-based questions." Some of you react adversely to the idea that standardized test questions should influence our curriculum or how we teach.
N'est-ce pas?
Your comments...
convergence test for integral
April 9th 2010, 08:26 AM #1
Nov 2009
convergence test for integral
hey there.. i'm kinda lost here..
i need to do convergence test for the equation below...
$y(x)=\frac{1}{\sqrt{2\pi}}\frac{1}{a}\sqrt{\frac{\pi}{a}}\int_{-\infty}^{\infty} e^{-a|x|}f(x-t)dt$
however.. i've got 2 values of f(x).. e^(x^2) and x^5
how am i going to test them?
should i include my f(x) inside the equation above?
like this...
$y(x)=\int_{-\infty}^{\infty} e^{-a|x|}f(x-t)^{5}dt$
$y(x)=\int_{-\infty}^{\infty} e^{-a|x|}f(e^{(x-t)^{2}})dt$
please guide me..
If f(x) = $x^5$, then the substitution gives
$y(x)=(constants)\int_{-\infty}^{\infty} e^{-a|x|}(x-t)^{5}dt$
If f(x) = $e^{x^2}$, then the substitution gives
$y(x)=(constants)\int_{-\infty}^{\infty} e^{-a|x|}e^{(x-t)^{2}}dt$
how am i going to test the equations?
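One way to probe convergence numerically, assuming the kernel was meant to be e^{-a|t|} inside the integral (i.e. a convolution; as literally written, e^{-a|x|} is constant in t and the x^5 integral diverges):

```python
import math

a, x = 1.0, 1.0

def integrand(t):
    # convolution reading: kernel exp(-a*|t|) times f(x - t) with f(u) = u**5
    return math.exp(-a * abs(t)) * (x - t) ** 5

# exp(-a*|t|) kills the polynomial, so a wide window captures the integral;
# crude composite trapezoid rule over [-50, 50]
lo, hi, n = -50.0, 50.0, 100_000
h = (hi - lo) / n
total = 0.5 * (integrand(lo) + integrand(hi))
total += sum(integrand(lo + i * h) for i in range(1, n))
total *= h
# closed form for a = x = 1: the integral of exp(-|t|)*(1-t)**5 dt equals 282
```

For f(u) = e^{u^2} the integrand grows like e^{t^2 - a|t|}, so that integral diverges by comparison and no numeric test is needed.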
April 9th 2010, 09:26 AM #2
Senior Member
Nov 2009
April 9th 2010, 02:59 PM #3
Nov 2009
My first little PID: please comment
I just finished my first PID class, written from scratch (from the theory) in order to fully understand it. Because I am also a noob in math, I thought it would be fun to let you guys comment on how I am doing.
I did study some other implementations of PID algorithms and each had small (and not so small) variation in how the math was done. That's why I decided to give my interpretation.
This class is part of my
Arduino Template Library
. It is written specifically to cover one responsibility only and work with the other (template) classes I have in the library. So for instance, the PID class does not do time tracking, I have other
classes in the library for that. The user of the class has to typedef its own class hierarchies for the situation the classes are used in.
BaseT is used as a base class and implements:
T getFeedback()
unsigned int getDeltaTime()
T getSmallestAcceptableError()
T is the data type that hold the values. Either float or double.
Error = SetPoint - Feedback
P = Error * gainP
I = Sum(previous-I's) + ((Error * deltaTime) * gainI)
D = ((previous-Error - Error) / deltaTime) * gainD
PI = P + I
PD = P + D
PID = P + I + D
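Before looking at the C++ itself, the update equations above can be transliterated into a short Python sketch (names are mine), which makes the math easy to poke at. Note that differentiating (previous-Error − Error) gives the negative of the conventional derivative term:

```python
class PidSketch:
    """Direct transliteration of the update equations in the post."""
    def __init__(self, gain_p, gain_i, gain_d):
        self.gain_p, self.gain_i, self.gain_d = gain_p, gain_i, gain_d
        self.integral = 0.0
        self.last_error = 0.0

    def update(self, set_point, feedback, dt):
        error = set_point - feedback
        p = error * self.gain_p
        self.integral += (error * dt) * self.gain_i      # I accumulates over time
        # NB: (last_error - error) is the negative of the usual derivative
        d = ((self.last_error - error) / dt) * self.gain_d
        self.last_error = error
        return p + self.integral + d

out = PidSketch(2.0, 0.5, 0.1).update(10.0, 8.0, 1.0)    # -> 4.8
```

With error 2: P = 4, I = 1, D = −0.2, so the first output is 4.8, matching the equations by hand.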
template<class BaseT, typename T>
class PID : public BaseT
{
public:
    T P(T setPoint, T gainP)
    {
        T input = BaseT::getFeedback();
        T error = CalcError(setPoint, input);
        return CalcP(error, gainP);
    }

    T P_D(T setPoint, T gainP, T gainD)
    {
        T input = BaseT::getFeedback();
        T error = CalcError(setPoint, input);
        unsigned int deltaTime = BaseT::getDeltaTime();
        return CalcP(error, gainP) + CalcD(error, deltaTime, gainD);
    }

    T P_I(T setPoint, T gainP, T gainI)
    {
        T input = BaseT::getFeedback();
        T error = CalcError(setPoint, input);
        unsigned int deltaTime = BaseT::getDeltaTime();
        return CalcP(error, gainP) + CalcI(error, deltaTime, gainI);
    }

    T P_I_D(T setPoint, T gainP, T gainI, T gainD)
    {
        T input = BaseT::getFeedback();
        T error = CalcError(setPoint, input);
        unsigned int deltaTime = BaseT::getDeltaTime();
        return CalcP(error, gainP) + CalcI(error, deltaTime, gainI) + CalcD(error, deltaTime, gainD);
    }

private:
    T _integralAcc;
    T _lastError;

    inline T CalcError(T setPoint, T input)
    {
        T error = setPoint - input;
        if ((error < BaseT::getSmallestAcceptableError() && error > 0) ||
            (error > -BaseT::getSmallestAcceptableError() && error < 0))
            error = 0;
        return error;
    }

    inline T CalcP(T error, T gain)
    {
        return error * gain;
    }

    inline T CalcI(T error, unsigned int deltaTime, T gain)
    {
        _integralAcc += (error * deltaTime) * gain;
        return _integralAcc;
    }

    inline T CalcD(T error, unsigned int deltaTime, T gain)
    {
        T value = ((_lastError - error) / deltaTime) * gain;
        _lastError = error;
        return value;
    }
};
So please comment on the correctness of the math especially.
Holiday Hills, IL Algebra 2 Tutor
Find a Holiday Hills, IL Algebra 2 Tutor
...I enjoy helping students understand the subject and realize that Math can be fun and not stressful. Algebra 1 is the basis of all other Math courses in the future and is used in many
professions. Topics include: simplifying expressions, algebraic notation, number systems, understanding and solvin...
11 Subjects: including algebra 2, calculus, geometry, algebra 1
...I can work with a wide range of ages, as I have tutored children as young as 6 years old in basic skills such as English, Reading, and Math. However, I have also tutored refugee students from
Peru in Reading and Math. At my school, I work as a peer tutor; I assist my classmates in the subjects listed above.
16 Subjects: including algebra 2, chemistry, calculus, statistics
...I am also extremely computer literate (i.e. I can show a computer who's boss) and very proficient with all Microsoft programs. Basketball is one of my greatest passions, and I have been playing
for over 15 years.
13 Subjects: including algebra 2, English, algebra 1, ACT Math
...I have recently completed an MBA from University of Chicago Booth School of Business. During my MBA, I took 20 graduate level MBA courses in economics, finance, accounting, statistics and
operations. In addition to MBA, I have a PhD in Engineering so I believe I am well qualified to teach mathe...
22 Subjects: including algebra 2, calculus, physics, geometry
...And I can do that. Mathematics and Physics are a part of our lives whether we understand them or not, and the way I look at it – it does not hurt to understand more. People often dislike what
they don’t understand, and parents and teachers often notice that in their children and students.
9 Subjects: including algebra 2, calculus, physics, geometry
A note on a P ≠ NP result for a restricted class of real machines
- Theoretical Computer Science, 1995
Cited by 73 (4 self)
We define a class of recursive functions on the reals analogous to the classical recursive functions on the natural numbers, corresponding to a conceptual analog computer that operates in continuous
time. This class turns out to be surprisingly large, and includes many functions which are uncomputable in the traditional sense.
, 1997
Cited by 26 (6 self)
We pursue the study of the computational power of Piecewise Constant Derivative (PCD) systems started in [5, 6]. PCD systems are dynamical systems defined by a piecewise constant differential equation and can be considered as computational machines working on a continuous space with a continuous time. We prove that the languages recognized by rational PCD systems in dimension d = 2k + 3 (respectively: d = 2k + 4), k ≥ 0, in finite continuous time are precisely the languages of the ω^k-th (resp. ω^(k+1)-th) level of the hyper-arithmetical hierarchy. Hence the reachability problem for rational PCD systems of dimension d = 2k + 3 (resp. d = 2k + 4), k ≥ 1, is hyper-arithmetical and is Σ_{ω^k}-complete (resp. Σ_{ω^(k+1)}-complete).
- Journal of Complexity , 1996
Cited by 12 (7 self)
Let K be an algebraically closed field of characteristic 0. We show that constants can be removed efficiently from any machine over K solving a problem which is definable without constants. This
gives a new proof of the transfer theorem of Blum, Cucker, Shub & Smale for the problem P ? = NP. We have similar results in positive characteristic for non-uniform complexity classes. We also
construct explicit and correct test sequences (in the sense of Heintz and Schnorr) for the class of polynomials which are easy to compute. An earlier version of this paper appeared as NeuroCOLT
Technical Report 96-43. The present paper contains in particular a new bound for the size of explicit correct test sequences.
- International Journal of Bifurcation and Chaos , 1995
Cited by 11 (0 self)
. Finding a natural meeting ground between the highly developed complexity theory of computer science ---with its historical roots in logic and the discrete mathematics of the integers--- and the
traditional domain of real computation, the more eclectic less foundational field of numerical analysis ---with its rich history and longstanding traditions in the continuous mathematics of
analysis--- presents a compelling challenge. Here we illustrate the issues and pose our perspective toward resolution. This article is essentially the introduction of a book with the same title (to
be published by Springer) to appear shortly. Webster: A public declaration of intentions, motives, or views. k Partially supported by NSF grants. y International Computer Science Institute, 1947
Center St., Berkeley, CA 94704, U.S.A., lblum@icsi.berkeley.edu. Partially supported by the Letts-Villard Chair at Mills College. z Universitat Pompeu Fabra, Balmes 132, Barcelona 08008, SPAIN,
cucker@upf.es. P...
, 1997
We show that proving lower bounds in algebraic models of computation may not be easier than in the standard Turing machine model. For instance, a superpolynomial lower bound on the size of an
algebraic circuit solving the real knapsack problem (or on the running time of a real Turing machine) would imply a separation of P from PSPACE. A more general result relates parallel complexity
classes in boolean and real models of computation. We also propose a few problems in algebraic complexity and topological complexity.
- In Proceeding of ICALP'97 , 1997
We study the computational power of Piecewise Constant Derivative (PCD) systems. PCD systems are dynamical systems defined by a piecewise constant differential equation and can be considered as
computational machines working on a continuous space with a continuous time. We show that the computation time of these machines can be measured either as a discrete value, called discrete time, or
as a continuous value, called continuous time. We relate the two notions of time for general PCD systems. We prove that general PCD systems are equivalent to Turing machines and linear machines in
finite discrete time. We prove that the languages recognized by purely rational PCD systems in dimension d in finite continuous time are precisely the languages of the (d − 2)th level of the
arithmetical hierarchy. Hence the reachability problem of purely rational PCD systems of dimension d in finite continuous time is Σ_{d−2}-complete. 1 Introduction There has been recently an
increasing in...
- In Proc. STACS 2000 , 2000
. This survey is devoted to some aspects of the "P = NP?" problem over the real numbers and more general algebraic structures. We argue that given a structure M, it is important to find out whether
NP_M problems can be solved by polynomial depth computation trees, and if so whether these trees can be efficiently simulated by circuits. Point location, a problem of computational geometry, comes into
play in the study of these questions for several structures of interest. 1 Introduction In algebraic complexity one measures the complexity of an algorithm by the number of basic operations performed
during a computation. The basic operations are usually arithmetic operations and comparisons, but sometimes transcendental functions are also allowed [21-23, 26]. Even when the set of basic
operations has been fixed, the complexity of a problem depends on the particular model of computation considered. The two main categories of interest for this paper are circuits and trees. In section 2
, 1996
Introduction 2 Keywords: algebraic decision trees -- sparse sets -- computational complexity. 1 Introduction In 1977 Berman and Hartmanis ([1]) conjectured that all NP-complete sets are polynomially
isomorphic. Should this conjecture be proved, we would have as a consequence that no "small" NP-complete set exists in a precise sense of the word "small". Denote by Σ the set {0, 1} and by Σ* the
set of all finite sequences of elements in Σ. A set S ⊆ Σ* is said to be sparse when there is a polynomial p such that for all n ∈ ℕ the subset S_n of all elements in S having
size n has cardinality at most p(n). If the Berman-Hartmanis conjecture is
, 1999
We prove that all NP problems over the reals with addition and order can be solved in polynomial time with the help of a boolean NP oracle. As a consequence, the "P = NP?" question over the reals
with addition and order is equivalent to the classical question. For the reals with addition and equality only, the situation is quite different since P is known to be different from NP.
Nevertheless, we prove similar transfer theorems for the polynomial hierarchy.
, 1995
Computing the maximum bichromatic discrepancy is an interesting theoretical problem with important applications in computational learning theory, computational geometry and computer graphics. In this
paper we give algorithms to compute the maximum bichromatic discrepancy for simple geometric ranges, including rectangles and halfspaces. In addition, we give extensions to other discrepancy
problems. 1 Introduction The main theme of this paper is to present efficient algorithms that solve the problem of computing the maximum bichromatic discrepancy for axis oriented rectangles. This
problem arises naturally in different areas of computer science, such as computational learning theory, computational geometry and computer graphics ([Ma], [DG]), and has applications in all these
areas. In computational learning theory, the problem of agnostic PAC-learning with simple geometric hypotheses can be reduced to the problem of computing the maximum bichromatic discrepancy for
simple geometric ra...
FindDivisions[{x[min], x[max]}, n]
finds a list of about n "nice" numbers that divide the interval around x[min] to x[max] into equally spaced parts.
FindDivisions[{x[min], x[max], dx}, n]
makes the parts always have lengths that are integer multiples of dx.
FindDivisions[{x[min], x[max]}, {n[1], n[2], ...}]
finds successive subdivisions into about n[1], n[2], ... parts.
FindDivisions[{x[min], x[max], {dx[1], dx[2], ...}}, {n[1], n[2], ...}]
uses spacings that are forced to be multiples of dx[1], dx[2], ....
FindDivisions[{x[min], x[max], {dx[1], dx[2], ...}}]
gives all numbers in the interval that are multiples of the dx[i].
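The "nice number" idea can be sketched outside the Wolfram Language. The Python below is only an illustration of the general tick-finding scheme, not Wolfram's actual algorithm: pick a step of the form 1, 2, or 5 times a power of ten near (x[max] − x[min])/n, then return its multiples covering the interval.

```python
import math

# Illustrative sketch of "nice" divisions: choose a step of the form
# 1, 2, or 5 times a power of ten that is at least (hi - lo) / n,
# then return its multiples covering [lo, hi].

def nice_step(raw):
    exp = math.floor(math.log10(raw))
    base = raw / 10 ** exp          # base lies in [1, 10)
    for m in (1, 2, 5, 10):
        if base <= m:
            return m * 10 ** exp
    return 10 * 10 ** exp

def nice_divisions(lo, hi, n):
    step = nice_step((hi - lo) / n)
    start = math.floor(lo / step)
    stop = math.ceil(hi / step)     # end points may land outside [lo, hi]
    return [round(k * step, 12) for k in range(start, stop + 1)]

print(nice_divisions(0, 1, 5))      # [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
```

As in the reference text above, the returned end points may fall outside the initial range, since they are snapped to multiples of the chosen step.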
Find five divisions of the interval [0,1]:
Division end points may be outside the initial range:
Generate multiple levels of divisions:
Find divisions that are aligned to multiples of
Find divisions that are short in a given base:
New in 7
(a) What is the magnitude of the force per meter of length on a straight wire carrying a 9.79 A current when perpendicular to a 0.80 T magnetic field?
[7.832 N/m]
(b) What if the angle between the wire and field is 60.0°?
[6.783 N/m]
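Both parts follow from F/L = I B sin θ, with θ = 90° in part (a). A quick numerical check (the function name here is just for illustration):

```python
import math

def force_per_meter(current_A, field_T, angle_deg):
    """Force per unit length on a straight current-carrying wire: F/L = I*B*sin(theta)."""
    return current_A * field_T * math.sin(math.radians(angle_deg))

print(round(force_per_meter(9.79, 0.80, 90.0), 3))  # 7.832
print(round(force_per_meter(9.79, 0.80, 60.0), 3))  # 6.783
```

At 60.0° this gives about 6.78 N/m.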
Decidability of Deterministic Context-Free Languages
Matthias-Christian Ott <ott@mirix.org>
Tue, 1 Jun 2010 23:47:04 +0200
From: Matthias-Christian Ott <ott@mirix.org>
Newsgroups: comp.compilers
Date: Tue, 1 Jun 2010 23:47:04 +0200
Organization: Compilers Central
Keywords: parse, theory, question
Posted-Date: 01 Jun 2010 18:47:21 EDT
it is generally not decidable whether a context-free language is
deterministic. That means it is not decidable whether a grammar is an
LR(k) grammar.
However, an LR(k) parser can indicate conflicts in the parsing table
and detect if a grammar is not an LR(k) grammar. So it can decide
whether a grammar is deterministic context-free.
This seems to be a contradiction. I see the following possibilities
that would resolve it:
a) It is decidable whether a context-free language is deterministic.
b) Not all deterministic context-free languages can be described by an
LR(k) grammar, in other words LR(k) languages are a subset of
deterministic context-free languages.
c) Not all context-free grammars which aren't deterministic produce a
non-deterministic context-free language.
a) seems unlikely to be true. I know too little to decide whether b)
and c) are true.
Can someone help me with this?
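One concrete way to see why (b) and (c) are the live options: being LR(k) is a property of a grammar, while determinism is a property of a language, and a grammar can fail to be LR(k) for every k while still generating a deterministic (indeed regular) language. The classic example is S -> aSa | a. The Python sketch below (an illustration, with made-up helper names) enumerates the grammar's strings and checks that they form the regular language a(aa)*:

```python
import re

# S -> a S a | a  generates { a^(2n+1) : n >= 0 }.
# The grammar is not LR(k) for any k (a bounded-lookahead parser cannot
# locate the middle 'a'), yet the language it generates is regular,
# hence deterministic context-free.

def derive(max_len):
    """Enumerate all terminal strings of S -> aSa | a up to max_len."""
    out, frontier = set(), {"S"}
    while frontier:
        nxt = set()
        for w in frontier:
            if "S" not in w:
                if len(w) <= max_len:
                    out.add(w)
                continue
            for rhs in ("aSa", "a"):
                v = w.replace("S", rhs, 1)
                if len(v) <= max_len + 1:   # sentential forms carry one 'S'
                    nxt.add(v)
        frontier = nxt
    return out

lang = derive(9)
assert lang == {"a" * n for n in range(1, 10, 2)}
# Every generated string matches the regular expression a(aa)*:
assert all(re.fullmatch(r"a(aa)*", w) for w in lang)
```

So a grammar-level LR(k) test says nothing conclusive about whether the underlying language is deterministic.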
the encyclopedic entry of Hyperreal_number
The system of hyperreal numbers represents a rigorous method of treating the ideas about infinite and infinitesimal numbers that had been used casually by mathematicians, scientists, and engineers
ever since the invention of calculus by Newton and Leibniz. The hyperreals, or nonstandard reals (usually denoted as *R), denote an ordered field which is a proper extension of the ordered field of
real numbers R and which satisfies the transfer principle. This principle allows true first order statements about R to be reinterpreted as true first order statements about *R.
An important property of *R is that it has infinitely large as well as infinitesimal numbers, where an infinitely large number is a number that is larger than all numbers representable in the form
$1 + 1 + \cdots + 1.$
The use of the definite article the in the phrase the hyperreal numbers is somewhat misleading in that there is not a unique ordered field that is referred to in most treatments. However, a 2003
paper by Kanovei and Shelah shows that there is a definable, countably saturated (that is, ω-saturated, but not of course countable) elementary extension of the reals, which therefore has a good
claim to the title of the hyperreal numbers.
The condition of being a hyperreal field is a stronger one than that of being a real closed field strictly containing R. It is also stronger than that of being a superreal field in the sense of Dales
and Woodin.
The application of hyperreal numbers and in particular the transfer principle to problems of analysis is called nonstandard analysis; some find it more intuitive than standard real analysis.
From Newton to Robinson
When Newton and (more explicitly) Leibniz introduced differentials, they used infinitesimals and these were still regarded as useful by later mathematicians such as Euler and Cauchy. Nonetheless
these concepts were from the beginning seen as suspect, notably by Berkeley, and when in the 1800s calculus was put on a firm footing through the development of the (ε, δ)-definition of limit by
Cauchy, Weierstrass, and others, they were largely abandoned.
However, in the 1960s Abraham Robinson showed how infinitely large and infinitesimal numbers can be rigorously defined and used to develop the field of nonstandard analysis. Robinson developed his
theory nonconstructively, using model theory; however it is possible to proceed using only algebra and topology, and proving the transfer principle as a consequence of the definitions. In other words
hyperreal numbers per se, aside from their use in nonstandard analysis, have no necessary relationship to model theory or first order logic.
The transfer principle
The development of the hyperreals turned out to be possible if every true
first-order logic
statement that uses basic arithmetic (the
natural numbers
, plus, times, comparison) and quantifies only over the real numbers was assumed to be true in a reinterpreted form if we presume that it quantifies over hyperreal numbers. For example, we can state
that for every real number there is another number greater than it:
$\forall x \in \mathbb{R} \quad \exists y \in \mathbb{R} \quad x < y$
The same will then also hold for hyperreals:
$\forall x \in {}^\star\mathbb{R} \quad \exists y \in {}^\star\mathbb{R} \quad x < y$
Another example is the statement that if you add 1 to a number you get a bigger number:
$\forall x \in \mathbb{R} \quad x < x+1$
which will also hold for hyperreals:
$\forall x \in {}^\star\mathbb{R} \quad x < x+1$
The correct general statement that formulates these equivalences is called the transfer principle. Note that in many formulas in analysis quantification is over higher order objects such as functions
and sets which makes the transfer principle somewhat more subtle than the above examples suggest.
The transfer principle however doesn't mean that R and *R have identical behavior. For instance, in *R there exists an element w such that $1 < w,\ 1+1 < w,\ 1+1+1 < w,\ \ldots$
but there is no such number in R. This is possible because the nonexistence of this number cannot be expressed as a first order statement of the above type. A hyperreal number like w
is called infinitely large; the reciprocals of the infinitely large numbers are the infinitesimals.
The hyperreals *R form an ordered field containing the reals R as a subfield. Unlike the reals, the hyperreals do not form a standard metric space, but by virtue of their order they carry an order topology.
The hyperreals can be developed either axiomatically or by more constructively oriented methods. The essence of the axiomatic approach is to assert (1) the existence of at least one infinitesimal
number, and (2) the validity of the transfer principle. In the following subsection we give a detailed outline of a more constructive approach. This method allows one to construct the hyperreals if
given a set-theoretic object called an ultrafilter, but the ultrafilter itself cannot be explicitly constructed. (Kanovei and Shelah have found a method that gives an explicit construction, at the
cost of a significantly more complicated treatment.)
The ultrapower construction
We are going to construct a hyperreal field via sequences of reals. In fact we can add and multiply sequences componentwise; for example,
$(a_0, a_1, a_2, \ldots) + (b_0, b_1, b_2, \ldots) = (a_0 + b_0, a_1 + b_1, a_2 + b_2, \ldots)$
and analogously for multiplication. This turns the set of such sequences into a commutative ring, which is in fact a real algebra A. We have a natural embedding of R in A by identifying the real
number r with the sequence (r, r, r, ...) and this identification preserves the corresponding algebraic operations of the reals. The intuitive motivation is, for example, to represent an
infinitesimal number using a sequence that approaches zero. The inverse of such a sequence would represent an infinite number. As we will see below, the difficulties arise because of the need to
define rules for comparing such sequences in a manner that, although inevitably somewhat arbitrary, must be self-consistent and well defined. For example, we may have two sequences that differ in
their first n members, but are equal after that; such sequences should clearly be considered as representing the same hyperreal number. Similarly, most sequences oscillate randomly forever, and we
must find some way of taking such a sequence and interpreting it as, say, 7 + ε, where ε is a certain infinitesimal number.
Comparing sequences is thus a delicate matter. We could, for example, try to define a relation between sequences in a componentwise fashion:
$(a_0, a_1, a_2, \ldots) \leq (b_0, b_1, b_2, \ldots) \iff a_0 \leq b_0 \wedge a_1 \leq b_1 \wedge a_2 \leq b_2 \ldots$
but here we run into trouble, since some entries of the first sequence may be bigger than the corresponding entries of the second sequence, and some others may be smaller. It follows that the
relation defined in this way is only a partial order. To get around this, we have to specify which positions matter. Since there are infinitely many indices, we don't want finite sets of indices to
matter. A consistent choice of index sets that matter is given by any free ultrafilter U on the natural numbers; these can be characterized as ultrafilters which do not contain any finite sets. (The
good news is that the axiom of choice guarantees the existence of many such ultrafilters, and it turns out that it doesn't matter which one we take; the bad news is that they cannot be explicitly
constructed.) We think of U as singling out those sets of indices that "matter": we write (a_0, a_1, a_2, ...) ≤ (b_0, b_1, b_2, ...) if and only if the set of natural numbers { n : a_n ≤ b_n } is in U.
This is a total preorder and it turns into a total order if we agree not to distinguish between two sequences a and b if a≤b and b≤a. With this identification, the ordered field *R of hyperreals is
constructed. From an algebraic point of view, U allows us to define a corresponding maximal ideal I in the commutative ring A, and then to define *R as A/I; as the quotient of a commutative ring by a
maximal ideal, *R is a field. This is also notated A/U, directly in terms of the free ultrafilter U; the two are equivalent.
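The componentwise operations, though not the ultrafilter itself (which cannot be exhibited explicitly), are easy to make concrete. The Python sketch below is purely illustrative: it models sequences as functions of the index, embeds a real as a constant sequence (so "7 + ε" becomes an actual sequence), multiplies an "infinitesimal" by an "infinite" sequence to get the constant 1, and shows why componentwise comparison is only a partial order.

```python
from fractions import Fraction

# Sequences of rationals modeled as functions n -> value (index from 0).
def add(a, b):
    return lambda n: a(n) + b(n)

def mul(a, b):
    return lambda n: a(n) * b(n)

def const(r):
    return lambda n: r                 # embed a real r as (r, r, r, ...)

eps = lambda n: Fraction(1, n + 1)     # "infinitesimal": (1, 1/2, 1/3, ...)
omega = lambda n: n + 1                # "infinite":      (1, 2, 3, ...)

# eps * omega is the constant sequence 1, so omega behaves like 1/eps.
prod = mul(eps, omega)
assert [prod(n) for n in range(6)] == [1] * 6

# "7 + eps": an ordinary real plus an infinitesimal.
seven_plus_eps = add(const(7), eps)
assert seven_plus_eps(0) == 8 and seven_plus_eps(1) == Fraction(15, 2)

# Componentwise comparison is only partial: neither a <= b nor b <= a holds.
a = lambda n: n % 2                    # (0, 1, 0, 1, ...)
b = lambda n: 1 - n % 2                # (1, 0, 1, 0, ...)
assert not all(a(n) <= b(n) for n in range(10))
assert not all(b(n) <= a(n) for n in range(10))
```

The ultrafilter's role is precisely to break ties like the last pair by deciding which index set "matters".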
The field A/U is an ultrapower of R. Since this field contains R it has cardinality at least the continuum. Since A has cardinality
$(2^{\aleph_0})^{\aleph_0} = 2^{\aleph_0^2} = 2^{\aleph_0},$
it is also no larger than $2^{\aleph_0}$, and hence has the same cardinality as
R.
One question we might ask is whether, if we had chosen a different free ultrafilter V, the quotient field A/U would be isomorphic as an ordered field to A/V. This question turns out to be equivalent
to the continuum hypothesis; in ZFC with the continuum hypothesis we can prove this field is unique up to order isomorphism, and in ZFC with the continuum hypothesis false we can prove that there are
non-order-isomorphic pairs of fields which are both countably indexed ultrapowers of the reals.
For more information about this method of construction, see ultraproduct.
An intuitive approach to the ultrapower construction
The following is an intuitive way of understanding the hyperreal numbers. The approach taken here is very close to the one in the book by Goldblatt. Recall that the sequences converging to zero are
sometimes called infinitely small. These are almost the infinitesimals, in a sense; the true infinitesimals are the classes of sequences that contain a sequence converging to zero. Let us see where
these classes come from. Consider first the sequences of real numbers. They form a ring, that is, one can multiply, add and subtract them, but not always divide by a non-zero element. The real
numbers are considered as the constant sequences; the sequence is zero if it is identically zero, that is, $a_n = 0$ for all $n$.
In our ring of sequences one can get $ab = 0$ with neither $a = 0$ nor $b = 0$. Thus, if for two sequences $a, b$ one has $ab = 0$, at least one of them should be declared zero.
Surprisingly enough, there is a consistent way to do it. As a result, the classes of sequences that differ by some sequence declared zero will form a field which is called a hyperreal field. It will
contain the infinitesimals in addition to the ordinary real numbers, as well as infinitely large numbers (the reciprocals of infinitesimals, represented by the sequences converging to
infinity). Also every hyperreal which is not infinitely large will be infinitely close to an ordinary real; in other words, it will be an ordinary real plus an infinitesimal.
This construction is parallel to the construction of the reals from the rationals given by Cantor. He started with the ring of the Cauchy sequences of rationals and declared all the sequences that
converge to zero to be zero. The result is the reals. To continue the construction of hyperreals, let us consider the zero sets of our sequences, that is, the sets $z(a) = \{i : a_i = 0\}$;
that is, $z(a)$ is the set of indexes $i$ for which $a_i = 0$. It is clear that if $ab = 0$, then the union of $z(a)$ and $z(b)$ is N (the
set of all natural numbers), so:
(i) one of the sequences that vanish on two complementary sets should be declared zero;
(ii) if $a$ is declared zero, $ab$ should be declared zero too, no matter what $b$ is;
(iii) if both $a$ and $b$ are declared zero, then $a^2 + b^2$ should also be declared zero.
Now the idea is to single out a bunch U of subsets of N and to declare that $a = 0$ if and only if $z(a)$ belongs to U. From the conditions (i), (ii) and (iii) one can see that:
(i) From two complementary sets, one belongs to U.
(ii) Any set containing a set that belongs to U also belongs to U.
(iii) An intersection of any two sets belonging to U belongs to U.
(iv) We don't want the empty set to belong to U,
because then everything would become zero, as every set contains the empty set.
Any family of sets that satisfies (ii)-(iv) is called a filter (an example: the complements to the finite sets, it is called the Fréchet filter and it is used in the usual limit theory). If (i) also
holds, U is called an ultrafilter (because you can add no more sets to it without breaking it). The only explicitly known example of an ultrafilter is the family of sets containing a given element
(in our case, say, the number 10). Such ultrafilters are called trivial, and if we use it in our construction, we come back to the ordinary real numbers (exercise). Any ultrafilter containing a
finite set is trivial (exercise). It is known that any filter can be extended to an ultrafilter, but the proof uses the axiom of choice. The existence of a nontrivial ultrafilter (the ultrafilter
lemma) can be added as an extra axiom, it's weaker than the axiom of choice (that says that for any bunch of nonempty sets there is a function f that picks an element from any of them, f(X) is an
element of X).
Now if we take a nontrivial ultrafilter (which is an extension of the Fréchet filter, exercise) and do our construction, we get the hyperreal numbers as a result. The infinitesimals can be
represented by the non-vanishing sequences converging to zero in the usual sense, that is with respect to the Fréchet filter (exercise).
If $f$ is a real function of a real variable $x$ then $f$ naturally extends to a hyperreal function of a hyperreal variable by composition: $f(\{a_n\}) = \{f(a_n)\}$, where $\{\dots\}$
means "the equivalence class of the sequence $a_n$ relative to our ultrafilter", two sequences being in the same class if and only if the zero set of their difference belongs to our ultrafilter.
All the arithmetical expressions and formulas make sense for hyperreals and hold true if they are true for the ordinary reals. One can prove that any finite (that is, such that $|x| < a$ for some
ordinary real $a$) hyperreal $x$ will be of the form $y + d$ where $y$ is an ordinary (called standard) real and $d$ is an infinitesimal.
It is parallel to the proof of the Bolzano-Weierstrass lemma that says that one can pick a convergent subsequence from any bounded sequence, done by bisection, the property (i) of the ultrafilters is
again crucial.
Now one can see that $f$ is continuous means that $f(a) - f(x)$ is infinitely small whenever $x - a$ is, and $f$ is differentiable means that
$\frac{f(x) - f(a)}{x - a} - f'(a)$
is infinitely small whenever $x - a$ is. Remarkably, if one allows $a$ to be hyperreal, the derivative will be automatically continuous (because, $f$ being differentiable at $x$,
$\frac{f(x) - f(a)}{x - a} - f'(x)$
is infinitely small when $x - a$ is, therefore $f'(x) - f'(a)$ is also infinitely small when $x - a$ is).
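The recipe just described, namely form the difference quotient at an infinitesimal displacement and keep the standard part, can be mechanized with dual numbers (pairs x + ε·d with ε² = 0). These are only a first-order shadow of the hyperreals, not the full ultrapower construction, but they reproduce exactly this computation; the class below is an illustrative sketch:

```python
class Dual:
    """x + eps*d with eps^2 = 0: a standard part plus a first-order infinitesimal."""
    def __init__(self, st, d=0.0):
        self.st, self.d = st, d

    def _coerce(self, o):
        return o if isinstance(o, Dual) else Dual(o)

    def __add__(self, o):
        o = self._coerce(o)
        return Dual(self.st + o.st, self.d + o.d)
    __radd__ = __add__

    def __sub__(self, o):
        o = self._coerce(o)
        return Dual(self.st - o.st, self.d - o.d)

    def __mul__(self, o):
        o = self._coerce(o)
        # (x + eps*a)(y + eps*b) = xy + eps*(xb + ay), since eps^2 = 0
        return Dual(self.st * o.st, self.st * o.d + self.d * o.st)
    __rmul__ = __mul__

def derivative(f, x):
    """st of (f(x + eps) - f(x)) / eps, read off as the eps-coefficient of f(x + eps)."""
    return f(Dual(x, 1.0)).d

assert derivative(lambda t: t * t * t, 2.0) == 12.0   # d/dt t^3 at t = 2 is 3*2^2
assert derivative(lambda t: 3 * t + 5, 7.0) == 3.0
```

Because ε² is truncated to zero, the infinitesimal remainder is discarded automatically, which is exactly the role played by the standard part map.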
Infinitesimal and infinite numbers
A hyperreal number r is called infinitesimal if it is less than every positive real number and greater than every negative real number. Zero is an infinitesimal, but non-zero infinitesimals also
exist: take for instance the class of the sequence (1, 1/2, 1/3, 1/4, 1/5, 1/6, ...) (this works because the ultrafilter U contains all index sets whose complement is finite).
A hyperreal number x is called finite (or limited by some authors) if there exists a natural number n such that -n < x < n; otherwise, x is called infinite (or illimited). Infinite numbers exist;
take for instance the class of the sequence (1, 2, 3, 4, 5, ...). A non-zero number x is infinite if and only if 1/x is infinitesimal.
The finite elements F of *R form a local ring, and in fact a valuation ring, with the unique maximal ideal S being the infinitesimals; the quotient F/S is isomorphic to the reals. Hence we have a
homomorphic mapping, st(x), from F to R whose kernel consists of the infinitesimals and which sends every element x of F to a unique real number whose difference from x is in S; which is to say, is
infinitesimal. Put another way, every finite nonstandard real number is "very close" to a unique real number, in the sense that if x is a finite nonstandard real, then there exists one and only one
real number st(x) such that x – st(x) is infinitesimal. This number st(x) is called the standard part of x, conceptually the same as x to the nearest real number. This operation is an
order-preserving homomorphism and hence is well-behaved both algebraically and order theoretically. It is order-preserving though not isotonic, i.e. $x \le y$ implies
$\operatorname{st}(x) \le \operatorname{st}(y)$, but $x < y$ does not imply $\operatorname{st}(x) < \operatorname{st}(y)$.
• We have, if both x and y are finite,
$\operatorname{st}(x + y) = \operatorname{st}(x) + \operatorname{st}(y)$
$\operatorname{st}(x y) = \operatorname{st}(x) \operatorname{st}(y)$
• If x is finite and not infinitesimal,
$\operatorname{st}(1/x) = 1 / \operatorname{st}(x)$
• If x is real,
$\operatorname{st}(x) = x$
The map st is continuous with respect to the order topology on the finite hyperreals; in fact it is locally constant.
Hyperreal fields
Suppose X is a Tychonoff space, also called a T_{3.5} space, and C(X) is the algebra of continuous real-valued functions on X. Suppose M is a maximal ideal in C(X). Then the factor algebra
A = C(X)/M is a totally ordered field F containing the reals. If F strictly contains R then M is called a hyperreal ideal and F a hyperreal field. Note that no assumption is being made that the
cardinality of F is greater than R; it can in fact have the same cardinality.
An important special case is where the topology on X is the discrete topology; in this case X can be identified with a cardinal number κ and C(X) with the real algebra $\mathbb{R}^\kappa$ of
functions from κ to R. The hyperreal fields we obtain in this case are called ultrapowers of R and are identical to the ultrapowers constructed via free ultrafilters in model theory.
Baryons – Testing Ground of QCD Dynamics
Physics Department Colloquium
Time: 4:00 PM
Place: 2241 Chamberlin Hall (coffee and cookies at 3:30 pm)
Speaker: Jose Goity, JLab/Hampton University
Abstract: Quantum Chromodynamics (QCD) has been established as the fundamental theory of the strong interaction for a long time, yet its non-perturbative dynamics remains incompletely understood.
Most properties of hadrons, such as those of the familiar proton and neutron, are determined by these non-perturbative dynamics, and through the experimental and theoretical study of hadrons
important aspects are being revealed. This talk will focus on the role played by the study of baryons, discussing in particular the spectrum of excited baryons. Two significant theoretical advances
will be highlighted: studies of the baryon spectrum by means of lattice simulations of QCD, and analysis of that spectrum by the implementation of the 1/Nc expansion, where Nc is the number of color
degrees of freedom in QCD.
Host: Ramsey-Musolf
Math Forum Discussions
Topic: POW response
Replies: 0
POW response
Posted: Jul 17, 1997 11:28 AM
This group response includes the ideas of Rasheed, Debbie, Dianne, Willie
Mae, and Catherine.
PROBLEM:"The Emperor's Banquet"
You have been invited to the emperor's banquet. The emperor is a really
strange host. Instead of sitting with his guests at his large round dining
table, he walks around the table pouring oats on the head of every other
person. He continues this process, pouring oats on the head of everyone who
has not had oats until there is only one person left. This last person is
then allowed to join the emperor in his grand feast. The question is, where
should you sit if you do not want oats poured on your head?
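The outcome depends entirely on how the emperor's walk is defined, which is exactly the ambiguity discussed below. Still, one concrete reading can be simulated in a few lines (Python for illustration; the seat numbering and the "pour on the first person, then skip one dry person between pours" convention are assumptions, not part of the problem statement):

```python
from collections import deque

def last_dry_seat(n):
    """Seat (numbered 1..n around the table) of the last person without
    oats, under ONE assumed reading of the rules: the emperor pours oats
    on seat 1 first, then always skips exactly one dry person before the
    next pour, circling until a single person remains."""
    seats = deque(range(1, n + 1))
    while len(seats) > 1:
        seats.popleft()   # this person gets oats and is out
        seats.rotate(-1)  # the next dry person is skipped
    return seats[0]

# Under this reading, at a table of 10 you would want seat 4.
```

Other readings (the emperor skips the first person, walks the other way, starts at a different seat, and so on) give different answers, which is the group's point.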
>What is your answer? How did you get it? Did you all agree?
We are not going to give any of our answers. We feel that the terms of the
problem could be defined in so many ways that the possibilities are too
numerous to list or show here.
We did work out a number of solutions. First, we tried to solve the
problem individually as a brain teaser (mentally). In checking our
solutions, we decided that none of us had considered all of the
possibilities, and that unless we made some decisions about the problem
(defining terms and the "King's process"), too many solutions were possible.
The biggest stumbling block was in defining the "King's process," and in a
sense the problem became insoluble, because if one was to determine the
best place to sit so that they might then dine with the king, it would be
possible to do this only if you read the king's mind or else had attended a
number of these banquets and saw that he always proceeded in the same manner.
>What are similar problems that you have used in your classroom?
A similar problem (in the sense that it is thought-provoking) is:
"A person has two coins, one in each hand. If added together, the sum of
the two coins is $0.55. One of the coins is not a nickel. What are the
two coins?"
>How would you use this problem in your classroom?
We think that we would proceed in class much as we did today: dividing the
children into groups and allowing them to define the problem and seek
various solutions.
>How would you change this problem if you were going to use it?
If we wanted to reduce the frustration level, we might give more definition
to the problem ourselves.
>How does this problem fit in with your current curricular focus or
>focuses? (patterns, functions, and problem solving, cooperative
>learning, or whatever)
It would fit all these foci.
>After you've done a bit of this writing, take a look at the solutions
>submitted by students and talk about them in your group. What do you
There seems to be a correct solution.
Do you hear anything surprising to you?
Yes, there seems to be a correct solution.
Do you see things that you
>talked about when you were solving the problem in your group?
> -Annie | {"url":"http://mathforum.org/kb/thread.jspa?threadID=350421","timestamp":"2014-04-16T13:41:51Z","content_type":null,"content_length":"16798","record_id":"<urn:uuid:378d6117-ed34-4f11-a9d4-f8f8ad17793d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00509-ip-10-147-4-33.ec2.internal.warc.gz"} |
Coding, GUIs and Statistical Rituals
I was recently inspired to comment on this blog post, asking if R is a cure for 'mindless statistics'. Anyone who's familiar with statistics used in applied fields like epidemiology, sociology, and
the social sciences generally will be familiar with the idea of a 'statistical ritual'. Rather than think about the proper statistical approach to every question, the researcher somewhat mindlessly
follows the formula/cookbook they learned in classes. They apply what they know mindlessly, which gives rise to phrases like “When all you have is a t-test, everything looks like a comparison of
normally distributed samples”. More after the jump.
Only a little bit better is allowing the software to do it for you. JMP or SPSS, for example, will "helpfully" decide what test to apply, based on what data you've told it to use. But it's a guess, and
it takes place in a black box. That's…bad.
The blogger above asked if R is the cure for this kind of thing. First, points to him for turning one of R’s greatest weaknesses, a steep learning curve, into a strength. The core of the idea is that
by specifying everything, you’ll have to think about what you’re asking the software to do. You’ll choose the proper task, because you must choose.
It’s a promising concept, but I think the answer is “no”. It might help, but it’s not a cure. I’ll talk about a language I use more, which could have the same claim: SAS.
Consider the following code:
proc phreg data=work.dataset;
model t*outcome(0)=treat covariate treat*covariate/risklimits ties=efron;
ods select modelinfo fitstatistics parameterestimates;
title2 "PH regression results interaction";
run;
Simple code for a Cox proportional hazards model with an outcome with censoring, treatment, a covariate, and an interaction between treatment and the covariate. Seems like I’ve done a great deal of
thinking – I have to specify how ties in the data are handled, where the output is going, etc. But really, as long as I have this snippet saved, I can plug new variables and datasets in whenever I
have a "new" problem I think is appropriate. And if you Googled this code up, you could too, without really knowing what "ties=efron" means, or how many of the options provided in PROC PHREG I left at
their defaults.
The same is true for R. You can Google a solution, without ever knowing all the arguments a function could take, and you’ll get results out. You just did what the nice man (or woman) on the website
told you to. I see it all the time with users of STATA, a program that sits somewhere in the middle between something like JMP and SAS or R (no offense STATA users, I also know some very good STATA
coders). It’s still mindless, still ritualized…just a little more work.
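The post's larger point, that canned procedures hide decisions, can be made concrete with a small example (Python here, purely for illustration; it is not from the original post). A two-sample t statistic can be computed with pooled variances (classic Student) or without that assumption (Welch); a GUI that "helpfully" picks one for you has made this choice silently:

```python
import math

def t_statistics(x, y):
    """Two-sample t statistic computed two ways, to make explicit a
    choice that point-and-click tools often make for you: whether to
    pool the variances (Student) or not (Welch)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    # Pooled: assumes both groups share one true variance.
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    t_pooled = (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))
    # Welch: drops the equal-variance assumption.
    t_welch = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return t_pooled, t_welch
```

With equal group sizes the two statistics coincide; with unequal sizes and unequal variances they can differ substantially, which is precisely the kind of decision worth making consciously rather than ritually.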
Filed under: Epidemiology, R, SAS, Soapbox
{"url":"http://confounding.net/2011/08/10/coding-guis-and-statistical-rituals/","timestamp":"2014-04-19T13:20:25Z","content_type":null,"content_length":"55095","record_id":"<urn:uuid:a610e393-a736-4905-88d6-b42c69b98115>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
Size of bitmap returned by camera via intent?
How do I get a bitmap with a certain (memory friendly) size from the camera?
I'm starting a camera intent with:
Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
cameraIntent.putExtra("return-data", true);
photoUri = Uri.fromFile(new File(Environment.getExternalStorageDirectory(), "mytmpimg.jpg"));
cameraIntent.putExtra(android.provider.MediaStore.EXTRA_OUTPUT, photoUri);
startActivityForResult(cameraIntent, REQUEST_CODE_CAMERA);
I handle the result here:
// Bitmap photo = (Bitmap) intent.getExtras().get("data");
Bitmap photo = getBitmap(photoUri);
Now if I use the commented line - get the bitmap directly, I get always a 160 x 120 bitmap, and that's too small. If I load it from the URI using some standard stuff I found (method getBitmap), it
loads a 2560 x 1920 bitmap (!) and that consumes almost 20 mb memory.
How do I load let's say 480 x 800 (the same size the camera preview shows me)?
Without having to load the 2560 x 1920 into memory and scaling down.
memory bitmap camera
Does this help? stackoverflow.com/questions/3331527/… – Ben Ruijl Jul 29 '12 at 13:42
Probably, but isn't there a way to just get what I see on the screen when taking the pic...? I don't need anything more. – Ixx Jul 29 '12 at 13:46
Ben Rujil's link points to the best answer I know. Your choice is basically either thumbnail in Intent, or native-resolution photo in File. Absent getting the camera app to save the photo at a
lower resolution, that is your choice. – Sparky Jul 29 '12 at 22:25
1 Answer
Here is what I came up with, based on a method called getBitmap() from a crop library which was removed from old Android version. I did some modifications:
private Bitmap getBitmap(Uri uri, int width, int height) {
    InputStream in = null;
    try {
        int IMAGE_MAX_SIZE = Math.max(width, height);
        in = getContentResolver().openInputStream(uri);

        // Decode image size only (no pixel data is loaded)
        BitmapFactory.Options o = new BitmapFactory.Options();
        o.inJustDecodeBounds = true;
        BitmapFactory.decodeStream(in, null, o);
        in.close();

        int scale = 1;
        if (o.outHeight > IMAGE_MAX_SIZE || o.outWidth > IMAGE_MAX_SIZE) {
            scale = (int) Math.pow(2, (int) Math.round(Math.log(IMAGE_MAX_SIZE / (double) Math.max(o.outHeight, o.outWidth)) / Math.log(0.5)));
            // adjust sample size such that the image is bigger than the result
            scale -= 1;
        }

        BitmapFactory.Options o2 = new BitmapFactory.Options();
        o2.inSampleSize = scale;
        in = getContentResolver().openInputStream(uri);
        Bitmap b = BitmapFactory.decodeStream(in, null, o2);
        in.close();

        // scale bitmap to desired size
        Bitmap scaledBitmap = Bitmap.createScaledBitmap(b, width, height, false);
        // free memory held by the intermediate bitmap
        b.recycle();
        return scaledBitmap;
    } catch (FileNotFoundException e) {
        return null;
    } catch (IOException e) {
        return null;
    }
}
What this does is load the bitmap using BitmapFactory.Options() + some sample size - this way the original image is not loaded into memory. The problem is that the sample size just
works in steps. I get the "min" sample size for my image using some maths I copied - and subtract 1 in order to get the sample size which will produce the min. bitmap bigger than the
size I need.
And then, in order to get the bitmap with exactly the size requested, do normal scaling with Bitmap.createScaledBitmap(b, width, height, false); and immediately after, recycle the
bigger bitmap. This is important, because, for example, in my case, in order to get a 480 x 800 bitmap, the bigger bitmap was 1280 x 960 and that occupies 4.6 MB of memory.
A more memory friendly way would be to not adjust scale, so a smaller bitmap will be scaled up to match the required size. But this will reduce the quality of the image.
{"url":"http://stackoverflow.com/questions/11709550/size-of-bitmap-returned-by-camera-via-intent","timestamp":"2014-04-20T19:01:16Z","content_type":null,"content_length":"69094","record_id":"<urn:uuid:5d5acf66-bd51-46fb-9a7d-55dd4549607d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
Application Guide for AFINCH (Analysis of Flows in Networks of Channels) Described by NHDPlus
Scientific Investigations Report 2009–5188
National Water Availability and Use Program--Great Lakes Basin Pilot
Application Guide for AFINCH (Analysis of Flows in Networks of Channels) Described by NHDPlus
By David J. Holtschlag
AFINCH (Analysis of Flows in Networks of CHannels) is a computer application that can be used to generate a time series of monthly flows at stream
segments (flowlines) and water yields for catchments defined in the National Hydrography Dataset Plus (NHDPlus) value-added attribute system. AFINCH
provides a basis for integrating monthly flow data from streamgages, water-use data, monthly climatic data, and land-cover characteristics to estimate • Report PDF (11.9 MB)
natural monthly water yields from catchments by user-defined regression equations. Images of monthly water yields for active streamgages are generated
in AFINCH and provide a basis for detecting anomalies in water yields, which may be associated with undocumented flow diversions or augmentations. Part or all of this report is presented in
Water yields are multiplied by the drainage areas of the corresponding catchments to estimate monthly flows. Flows from catchments are accumulated Portable Document Format (PDF); the latest
downstream through the streamflow network described by the stream segments. For stream segments where streamgages are active, ratios of measured to version of Adobe Reader or similar software is
accumulated flows are computed. These ratios are applied to upstream water yields to proportionally adjust estimated flows to match measured flows. required to view it. Download the latest version
Flow is conserved through the NHDPlus network. A time series of monthly flows can be generated for stream segments that average about 1-mile long, or of Adobe Reader, free of charge.
monthly water yields from catchments that average about 1 square mile. Estimated monthly flows can be displayed within AFINCH, examined for
nonstationarity, and tested for monotonic trends. Monthly flows also can be used to estimate flow-duration characteristics at stream segments. AFINCH
generates output files of monthly flows and water yields that are compatible with ArcMap, a geographical information system analysis and display
environment. Chloropleth maps of monthly water yield and flow can be generated and analyzed within ArcMap by joining NHDPlus data structures with
AFINCH output. Matlab code for the AFINCH application is presented.
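The accumulate-then-adjust scheme the abstract describes can be illustrated with a toy sketch (Python; the data structures and the simple "pin the accumulated flow to the gaged value" rule are simplifying assumptions, not the actual AFINCH Matlab implementation, which adjusts upstream yields proportionally):

```python
def accumulate_flows(order, upstream, local_flow, gaged=None):
    """Sum catchment flows downstream through a stream network.

    order      -- segment ids in topological order (upstream first)
    upstream   -- dict: segment id -> list of directly upstream segments
    local_flow -- dict: segment id -> flow generated in its own catchment
                  (water yield times drainage area)
    gaged      -- dict: segment id -> measured flow at a streamgage
    """
    gaged = gaged or {}
    acc = {}
    for seg in order:
        # own catchment's contribution plus everything arriving from upstream
        total = local_flow[seg] + sum(acc[u] for u in upstream.get(seg, []))
        if seg in gaged:
            total = gaged[seg]  # force agreement with the measurement
        acc[seg] = total
    return acc
```

On a three-segment chain a → b → c with local flows 1, 2, 3, the accumulated flow at c is 6; pinning a gage value of 5 at b makes it 8, so measurements downstream of a gage propagate, as in the report's ratio-adjustment step.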
Suggested citation:
Holtschlag, D.J., 2009, Application guide for AFINCH (analysis of flows in networks of channels) described by NHDPlus: U.S. Geological Survey Scientific Investigations Report 2009-5188, 106 p.
Methods and Data Used in AFINCH Modeling
AFINCH Code
Mapping Water Yields in Catchments and Streamflow at Flowlines
Limitations of AFINCH and Suggestions for Future Development
Literature Cited
Appendix 1. Starting AFINCH (AFinch)
Appendix 2. A Graphical User Interface for AFINCH (AFinchGUI)
Appendix 3. Initialize Common Variables in the Matlab Workspace (AFIniAFStruct)
Appendix 4. Setup Data for AFINCH (AFSetupData)
Appendix 5. Read in National Land Cover Data (AFReadNLCD)
Appendix 6. Read in PRISM Precipitation Data (AFReadPrismPrec)
Appendix 7. Associate NHDPlus Flowlines with Streamflow Gaging Stations (AFGenStrucData)
Appendix 8. Read in monthly Streamflow Data at Gaging Stations (AFReadInFlowWY)
Appendix 9. Develop and Display Annual Network Design Matrices (AFStaBasinGridComIDWY)
Appendix 10. Plot the Relation Between Drainage Areas and Flows at Streamgages (AFPlotAreasFlows)
Appendix 11. Plot Image of Monthly Water Yields by Streamgage (AFYieldImage)
Appendix 12. Read in PRISM Air Temperature Data (AFReadPrismTemp)
Appendix 13. Compute the Previous Month's Precipitation (AFGenLag1Precip)
Appendix 14. Create Boxplots Showing the Distribution of Explanatory Variables (AFBoxplotExplanVar)
Appendix 15. Calls Graphical User Interface for User-Specified Water Yield Regression Equation (AFCallRegCheckBox)
Appendix 16. Graphical User Interface for User-Specified Regression Equation (AFRegCheckBoxGUI)
Appendix 17. Estimate Parameters for User-Specified Regression Equation with Data for the Entire Period of Analysis (AFRegressPOA)
Appendix 18. Estimate Parameters for Regression Equation by Water Year (AFRegressByWY)
Appendix 19. Plot Annual Estimates of Regression Equation Parameters (AFPlotRegressCoeff)
Appendix 20. Compute Estimates of Adjusted Incremental Water Yields and Flows (AFQEstAdjInc)
Appendix 21. Compute Constrained Estimates of Adjusted Incremental Water Yields and Flows (AFQConAdjInc)
Appendix 22. Plot monthly Estimates of Flows for the Period of Analysis (AFPlotQmMeaEst)
Appendix 23. Write Estimates of Water Yields and Flows to Files (AFWrtQYEstCon)
Appendix 24. Accumulate Flows Throughout the NHDPlus Network (AFConFlowAccum)
Appendix 25. Plot Time Series of Monthly Flows and Display Monthly Flow Duration Curves (AFTrendDurations)
Appendix 26. Compute Kendall's tau Correlation Coefficient and Sen's Monotonic Trend Slope Statistic (AFKenSen)
Appendix 27. Graphical User Interface for Plotting Images of Monthly Water Yields (AFYieldAtGagesGUI)
Appendix 28. Plot Image of Water Yields at Historically Gaged Streams (AFImagePOAYield)
Appendix 29. Identify Streamgages and Gaging Activity from Image Plot(AFid) | {"url":"http://pubs.usgs.gov/sir/2009/5188/","timestamp":"2014-04-18T10:35:41Z","content_type":null,"content_length":"13573","record_id":"<urn:uuid:9e293e23-f047-4e21-8e3c-3d2435e82c68>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00324-ip-10-147-4-33.ec2.internal.warc.gz"} |
Predictive analysis on Web Analytics tool data
July 3, 2013
By Amar Gondaliya
In our previous webinar, we discussed predictive analytics and the basics of performing predictive analysis. We also discussed an eCommerce problem and how it can be solved using predictive
analysis. In this post, I will explain the R script that I used to perform predictive analysis during the webinar.
Before I explain the R script, let me recall the eCommerce problem that we discussed during the webinar, so you can get a better idea about the data and the script. For eCommerce retailers, product
returns are a headache, and higher return rates hurt the bottom line of their business. So even a small reduction in the return rate would improve total revenue. In order to reduce the return rate, we
need to identify transactions where the probability of a product return is high; if we can identify those transactions, then we can perform some actions before delivering the products and reduce the
return rate.
In the webinar, we discussed that we can solve this problem using predictive analytics with Google Analytics data. To perform predictive analysis, we need to go through the modeling process; the
following are its major steps.
1. Load input data
2. Introducing model variables
3. Create model
4. Check model performance
5. Apply model on test data
I have included these steps in the R script. So, let me explain the R script that we used in the webinar. The script is shown below.
# Step-1 : Read train dataset
train <- read.csv("train.csv")
# remove TransactionID from train dataset
train <- train[,-1]
# Step-3 : Create model
model <- glm(train$label~.,family=binomial(),data=train)
# Step-4 : Calculate accuracy of model
predicted <- round(predict(model,newdata=train,type="response"))
actual <- train$label
confusion_matrix <- ftable(actual,predicted)
accuracy <- sum(diag(confusion_matrix))*100/length(actual)
#Step-5 : Applying model on test data
#Load test dataset
test <- read.csv("test.csv")
#Predict for test data
test_predict <- predict(model,newdata=test,type="response")
#creating label for test dataset
label <- rep(0,nrow(test))
# set label equal to 1 where probabilty of return > 0.6
label[test_predict>0.6] <- 1
# attach label to test dataset
test$label <- label
# Identify transactionID where label is 1.
high_prob_transactionIds <- test$TransactionID[test$label==1]
As you can see, the first step is to load the input dataset. In our case the input data are the train data, which are loaded using the read.csv() function. The train data contain transaction-based
records, including a TransactionID. The TransactionID is not needed in the model, so it should be removed from the train data.
We also discussed the variables during the webinar. The train data include pre-purchase, in-purchase, and some general attributes. We can retrieve these data from Google Analytics.
Next, the model is created using the glm() function, which is given three arguments: formula, family, and data. In the formula, we specify the response variable and the predictor variables separated
by the ~ sign. For the second argument we set family equal to binomial, and last we set data equal to train. Once the model is created, its performance is checked by calculating the model's accuracy,
as shown in the script.
Finally, the model is applied to the test dataset to predict the probability of a product return for each transaction. In the script, you can see that I have performed several steps to
identify the transactionIDs from the test data with a higher probability of product return. Let me explain them. First, the test data are loaded. Second, the predict() function is used to generate the
probabilities of product return, which are stored in test_predict. Third, a new variable label is created, which initially contains 0 for all transactions; then, using the test_predict variable, 0 is
replaced with 1 where the probability of return is greater than 0.6 (60%). Now this label is attached to the test data. Finally, all the transactionIDs are retrieved where label is 1, meaning the
probability of product return is greater than 60% for these transactions.
So this is the script I used during the webinar to perform the predictive analysis. I have created dummy datasets which you can use to perform these steps yourself. You can download the data and
R script from here.
One thing I want to share: this is not an optimized model; it is a practice model. You can improve it by taking other variables from Google Analytics or performing some optimization
tasks to get better results. If you want to look at some other predictive models on web analytics tool data, click here.
To leave a comment for the author, please follow the link and comment on his blog: Tatvic Blog » R
{"url":"http://www.r-bloggers.com/predictive-analysis-on-web-analytics-tool-data/","timestamp":"2014-04-19T04:33:29Z","content_type":null,"content_length":"40536","record_id":"<urn:uuid:2c2583e2-e484-4ff8-8820-905a12905a78>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
Earlier this month we blogged about Harvard Professors Gary King and Stuart Shieber providing advice to graduate students about open access, dissertations, and journal publishing. We also mentioned
some of the great initiatives that facilitate open access publishing in the statistics community, like the Journal of Statistical Software (JSS), The R Journal and arxiv.org. The ...
wapply: A faster (but less functional) ‘rollapply’ for vector setups
For some cryptic reason I needed a function that calculates function values on sliding windows of a vector. Googling around soon brought me to ‘rollapply’, which when I tested it seems to be a very
versatile function. However, I wanted to code my own version just for vector purposes in the hope that it may
Review: Kölner R Meeting 12 April 2013
Our 5th Cologne R user group meeting was the best attended meeting so far, with 20 members finding their way to the Institute of Sociology for two talks by Diego de Castillo on shiny and Stephan
Holtmeier on cluster analysis, followed by beer and schnitzel at the Lux, a gastropub nearby. Shiny: Diego gave an overview of...
Installation of WRS package (Wilcox’ Robust Statistics)
Some users had trouble installing the WRS package from R-Forge. Here's a method that should work automatically and fail-safe:
# first: install dependent packages
install.packages(c("MASS", "akima", "robustbase"))
# second: install suggested packages
install.packages(c("cobs", "robust", "mgcv", "scatterplot3d", "quantreg", "rrcov", "lars", "pwr", "trimcluster", "parallel", "mc2d", "psych", "Rfit"))
# third: install WRS
install.packages("WRS", repos="http://R-Forge.R-project.org",
Scripts and Functions: Using R to Implement the Golden Section Search Method for Numerical Optimization
In an earlier post, I introduced the golden section search method – a modification of the bisection method for numerical optimization that saves computation time by using the golden ratio to set its
test points. This post contains the R function that implements this method, the R functions that contain the 3 functions that were
The Golden Section Search Method: Modifying the Bisection Method with the Golden Ratio for Numerical Optimization
Introduction The first algorithm that I learned for root-finding in my undergraduate numerical analysis class (MACM 316 at Simon Fraser University) was the bisection method. It’s very intuitive and
easy to implement in any programming language (I was using MATLAB at the time). The bisection method can be easily adapted for optimizing 1-dimensional functions with
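The search described in the two posts above can be sketched in a few lines (Python here rather than the posts' R and MATLAB; the stopping rule and tolerance are my assumptions, not taken from the posts). The key trick is that placing the two interior test points a fraction 1/phi into the interval lets each iteration reuse one of the previous function evaluations:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2           # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                           # minimum lies in [a, d]
            b, d, fd = d, c, fc               # old c becomes the new d: reused
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                 # minimum lies in [c, b]
            a, c, fc = c, d, fd               # old d becomes the new c: reused
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2
```

Because 1/phi squared equals 1 - 1/phi, the surviving interior point lands exactly where the next iteration needs it, so only one new evaluation of f is made per step, versus two for a naive trisection.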
Adding Percentiles to PDQ
Pretty Damn Quick (PDQ) performs a mean value analysis of queueing network models: mean values in; mean values out. By mean, I mean statistical mean or average. Mean input values include such
queueing metrics as service times and arrival rates. These could be sample means. Mean output values include such queueing metrics as waiting time and queue...
Upcoming GDAT Class May 6-10, 2013
Enrollments are still open for the Level III Guerrilla Data Analysis Techniques class to be held during the week May 6—10. Early-bird discounts are still available. Enquire when you register. As
usual, all classes are held at our lovely Larkspur...
Gridding data for multi-scale macroecological analyses
These are materials for the first practical lesson of the Spatial Scale in Ecology course. All of the data and codes are available here. The class covered a 1.5h session. R code for the session is
also at the end…Read more →
Time Varying Higher Moments with the racd package.
The Autoregressive Conditional Density (ACD) model of Hansen (1994) extended GARCH models to include time variation in the higher moment parameters. It was a somewhat natural extension to the premise
of time variation in the conditional mean and variance, though it probably raised more questions than it, or subsequent research have been able to answer. | {"url":"http://www.r-bloggers.com/2013/04/page/7/","timestamp":"2014-04-17T18:39:42Z","content_type":null,"content_length":"39546","record_id":"<urn:uuid:90086f2b-d40e-4029-bab7-c56cd3d94f3f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00252-ip-10-147-4-33.ec2.internal.warc.gz"} |
System and method for modeling affinity and cannibalization in customer buying decisions - SAP AG
The present patent application is related to copending U.S. Patent Application PCT/US05/19765, entitled “System and Method for Modeling Customer Response Using Data Observable from Customer Buying
Decisions” and filed concurrently herewith by Kenneth J. Ouimet et al.
The present invention relates in general to economic modeling and, more particularly, to a system and method for modeling affinity and cannibalization in customer buying decisions.
Economic and financial modeling and planning is commonly used to estimate or predict the performance and outcome of real systems, given specific sets of input data of interest. An economic-based
system will have many variables and influences which determine its behavior. A model is a mathematical expression or representation which predicts the outcome or behavior of the system under a
variety of conditions. In one sense, it is relatively easy, in the past tense, to review historical data, understand its past performance, and state with relative certainty that the system's past
behavior was indeed driven by the historical data. A much more difficult task, but one that is extremely valuable, is to generate a mathematical model of the system which predicts how the system will
behave, or would have behaved, with different sets of data and assumptions. While forecasting and backcasting using different sets of input data is inherently imprecise, i.e., no model can achieve
100% certainty, the field of probability and statistics has provided many tools which allow such predictions to be made with reasonable certainty and acceptable levels of confidence.
In its basic form, the economic model can be viewed as a predicted or anticipated outcome of a mathematical expression, as driven by a given set of input data and assumptions. The input data is
processed through the mathematical expression representing either the expected or current behavior of the real system. The mathematical expression is formulated or derived from principles of
probability and statistics, often by analyzing historical data and corresponding known outcomes, to achieve a best fit of the expected behavior of the system to other sets of data, both in terms of
forecasting and backcasting. In other words, the model should be able to predict the outcome or response of the system to a specific set of data being considered or proposed, within a level of
confidence, or an acceptable level of uncertainty. As a simple test of the quality of the model, if historical data is processed through the model and the outcome of the model, using the historical
data, is closely aligned with the known historical outcome, then the model is considered to have a high confidence level over the interval. The model should then do a good job of predicting outcomes
of the system to different sets of input data.
Economic modeling has many uses and applications. One emerging area in which modeling has exceptional promise is in the retail sales environment. Grocery stores, general merchandise stores, specialty
shops, and other retail outlets face stiff competition for limited customers and business. Most if not all retail stores make every effort to maximize sales, volume, revenue, and profit. Economic
modeling can be a very effective tool in helping the store owners and managers achieve these goals.
Retail stores engage in many different strategies to increase sales, volume, revenue, and profit. One common approach is to offer promotions on select merchandise. The store may offer one or more of
its products at temporary sale price, discounts for multiple item purchases, or reduced service charges. One or more items may be offered with a percentage off regular price, fixed reduced price, no
interest financing, no sales tax, or the well-known “buy two get one free” sale. The store may run advertisements, distribute flyers, and place promotional items on highly visible displays and
end-caps (end displays located on each isle). In general, promotional items are classified by product, time of promotion, store, price reduction, and type of promotion or offer.
The process by which retailers select and implement promotional programs varies by season, region, company philosophy, and prior experience. Some retailers follow the seasonal trends and place on
promotion those items which are popular or in demand during the season. Summertime is for outdoor activities; Thanksgiving and Christmas are for festive meals, home decorations, and gift giving;
back-to-school is new clothes and classroom supplies. Some retailers use flyers and advertisements in newspapers, television, radio, and other mass communication media for select merchandise on
promotion, without necessarily putting every item at a reduced price. Some retailers try to call attention to certain products with highly visible displays. Other retailers follow the competition and
try to out-do the other. Still other retailers utilize loss-leaders and sell common items at cost or below cost in an effort to get customers into the store to hopefully buy other merchandise. The
retailers may also focus on which other items will sell with the promotional items.
Promotional programs are costly and time consuming. Flyers and advertisements are expensive to run, base margins are lost on price reductions, precious floor-space and shelf-space are dedicated to
specific items, and significant time and energy are spent setting up and administering the various promotions implemented by the retailer. It is important for the retailer to get good results, i.e.
net profit gains, from the promotional investments. Yet, most if not all retailers make promotional decisions based on canned programs, gross historical perception, intuition, decision by committee,
and other non-scientific indicators. Many promotional plans are fundamentally based on the notion that if we did it in the past it must be good enough to do again. In most situations, retailers
simply do not understand, or have no objective scientific data to justify, what promotional tools are truly providing the best results on a time dependent per product basis.
Customers make their own buying decisions and do not necessarily follow trends. Retailers may have a false understanding of what factors have primarily driven previous buying decisions and
promotional successes. What was perceived as working in the past may not achieve the same results today, possibly because the basis for belief in the effectiveness of prior promotion programs is
flawed. Other unknown or non-obvious factors may be in play driving customer buying decisions which undermine or reveal the weakness in previous promotions. Economic, demographic, social, political,
or other unforeseen factors may have changed and thereby altered the basis for customer buying decisions. In spite of costly and elaborate promotions, retailers not infrequently end up with
disappointing sales, lower than expected profits, unsold inventory, and lost opportunities based on promotional guesswork. When a promotional program fails to achieve intended objectives, retailers,
distributors, manufacturers, and promotional support organizations all lose confidence and business opportunity.
The retail industry as a whole operates on a low-margin, high-volume business model. Many merchandising elements affect customer purchase decisions and consequently impact the volume of sales that
any given retail outlet experiences. Price and promotion are probably the two key drivers of sales volume, with promotion playing an increasingly important role as retail margins and prices measured
in inflation adjusted dollars drop. The margin pressure experienced by the retail industry is driven by many factors, most importantly the increase in large-box (e.g., Costco) and “every day low
price” (e.g., WalMart) players in the industry.
Since margins are low, promotions almost always represent a drop in profit generated by the promoted items, in spite of the increase in sales caused by the promotional activity. Clearly, then, the
motivation of retailers to participate in promotional activity is that the promoted items become “loss-leaders”, i.e., the promoted items will drive increased traffic to the store and/or drag other
items along with the promotion purchase. This effect is termed “affinity” in the retail industry. The inverse effect, in which sales of non-promotional items are lost because promoted items are
purchased in their stead, is termed “cannibalization”. Characterization and prediction of the affinity and cannibalization (AC) effect are the central concerns of the method presented here.
The AC effect is widely believed in the retail industry to be a significant driver of dollars to the enterprise bottom line. Consequently, there is significant economic value in a model which can
both characterize AC relationships and accurately predict fiscal impact of AC on promotional activity.
Past work in this area has centered on characterization of the AC effect between individual pairs of products. Consider two specific products P[i] and P[j] in store S1. Most schemes report variations
on four basic metrics, from which an analyst can infer the strength of the affinity relationship. These metrics are all simple aggregate probabilities based on the co-occurrence of the two products
in market baskets.
These values are sufficient to infer if the actual co-occurrence is significantly greater than the expectation of co-occurrence for uncorrelated products, and also if the relationship is asymmetric.
Asymmetric relationships are typical if one of the two products is the dominant seller, e.g., electric drills drive sales of drill bits but not the other way around, even though drill bits may be
higher unit sellers than electric drills.
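As a sketch of the co-occurrence approach described above, the aggregate probabilities can be computed directly from market baskets. The basket contents and product names below are invented for illustration; only the metric definitions follow from the text:

```python
# Toy market baskets; each basket is the set of products in one transaction.
# Product names are illustrative, not drawn from any real TLOG.
baskets = [
    {"drill", "bits"},
    {"drill", "bits", "glue"},
    {"bits"},
    {"drill"},
    {"glue"},
    {"drill", "bits"},
]

def cooccurrence_metrics(baskets, i, j):
    """Return aggregate probabilities used to characterize an
    affinity relationship between products i and j."""
    n = len(baskets)
    p_i = sum(i in b for b in baskets) / n              # P(i)
    p_j = sum(j in b for b in baskets) / n              # P(j)
    p_ij = sum(i in b and j in b for b in baskets) / n  # P(i and j)
    expected = p_i * p_j   # expected co-occurrence if the products were uncorrelated
    return p_i, p_j, p_ij, expected

p_i, p_j, p_ij, expected = cooccurrence_metrics(baskets, "drill", "bits")
print(p_ij > expected)     # actual co-occurrence exceeds the independence expectation
print(p_ij / p_i, p_ij / p_j)  # conditional probabilities; unequal values indicate asymmetry
```

An asymmetric relationship, such as the drill/drill-bit example, would show P(j|i) noticeably larger than P(i|j).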
An alternative approach to characterization of relationships is obtained by examining temporal correlation of product sales, looking for correlated increases and decreases in sales between the
hypothesized AC items. This approach may be statistically more sound, and might yield better characterizations than the previous method. However, it is also more computationally intense and therefore
less practical given the large quantities of data that need to be processed to find relationships in retail sales data.
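A minimal sketch of the temporal-correlation alternative, using Pearson correlation on weekly unit-sales series; all series values are invented for illustration:

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length unit-sales time series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly unit sales: a drill promotion in weeks 3-4 lifts
# drill bits while hand screwdrivers dip (a cannibalization signature).
drills       = [10, 12, 30, 28, 11, 10]
bits         = [20, 22, 45, 41, 21, 19]
hand_drivers = [15, 14,  6,  7, 15, 16]

print(pearson(drills, bits) > 0.9)           # strong positive: affinity candidate
print(pearson(drills, hand_drivers) < -0.9)  # strong negative: cannibalization candidate
```

The computational burden noted in the text arises because, in practice, such correlations must be evaluated over every candidate product pair across very large TLOG histories.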
In order to plan effectively, e.g., choosing the optimal set of products and promotional attributes, it is necessary to quantify the units, sales, and profit projections and baselines for the planned
promotion. Thus, any approach that serves only to characterize, but not predict, the effect of AC fails to meet the full requirements of a promotion planning system. Moreover, the former method fails
to capture cannibalization correlations because it ignores the correlation between the driven product and the probability of not finding the driver product in the basket. The latter method fails
because it does not examine actual co-occurrence data, and because causes of fluctuation which are co-linear cannot be well resolved by simple time series analysis.
A need exists for an economic model which helps retailers make effective and successful promotional decisions in view of customer responses.
In one embodiment, the present invention is a computer implemented method of modeling customer response comprising providing a linear relationship between functions related to first and second
products, wherein the linear relationship includes a constant of proportionality between the functions related to the first and second products, expressing the linear relationship using a Taylor
Series expansion, expressing the constant of proportionality using a first derivative term of the Taylor Series expansion, and solving for the constant of proportionality using data observable from
customer responses.
In another embodiment, the present invention is a method of providing a computer model of customer response comprising providing a linear relationship between functions related to first and second
products, wherein the linear relationship includes a constant of proportionality between the functions related to the first and second products, expressing the constant of proportionality in terms of
functions related to the first and second products, and solving for the constant of proportionality using data observable from customer responses, wherein the constant of proportionality represents
an affinity or cannibalization relationship between the first and second products.
In yet another embodiment, the present invention is a method of modeling a relationship between first and second products comprising providing a computer model relating first and second products,
wherein the computer model includes a constant of proportionality between functions related to the first and second products, and solving for the constant of proportionality using data observable
from customer responses, wherein the constant of proportionality represents affinity or cannibalization relationship between the first and second products.
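One way to read these embodiments is as a regression problem: given observed data for two products, estimate the constant of proportionality of a linear relationship. The sketch below assumes a simple model of the form driven ≈ k × driver and a least-squares estimator; the data and the estimator choice are illustrative assumptions, not the claimed method itself:

```python
def proportionality_constant(driver, driven):
    """Least-squares estimate of k in: driven ≈ k * driver.

    Fitted on sales deviations from baseline, a significantly positive k
    suggests affinity; a negative k suggests cannibalization.
    """
    num = sum(x * y for x, y in zip(driver, driven))
    den = sum(x * x for x in driver)
    return num / den

# Hypothetical weekly unit-sales lifts relative to baseline.
delta_drills = [2.0, 4.0, 6.0, 8.0]
delta_bits   = [3.1, 5.9, 9.2, 11.8]   # roughly 1.5x the drill lift

k = proportionality_constant(delta_drills, delta_bits)
print(k)
```

Here k comes out near 1.5, i.e. each incremental driver unit is associated with about one and a half incremental units of the driven product in this made-up data.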
FIG. 1 is a block diagram of retail business process using a promotional model;
FIG. 2 is a customer buying decision tree;
FIGS. 3a-3b are plots of unit sales of a specific item over time;
FIG. 4 is a computer system for storing customer transactional data and executing the promotional model;
FIG. 5 is a graph of expected value of unit sales of product P[i ]as a function of time;
FIG. 6 is a block diagram of creating the promotional model;
FIG. 7 is a block diagram of executing the promotional model;
FIG. 8 is a block diagram of an evaluation of a model baseline;
FIG. 9 illustrates the steps of providing a promotional model using observable data from customer purchase decisions;
FIG. 10 is a block diagram of retail business process using an affinity and cannibalization model; and
FIG. 11 illustrates the steps of providing affinity and cannibalization features for use with the promotional model.
The present invention is described in one or more embodiments in the following description with reference to the Figures, in which like numerals represent the same or similar elements. While the
invention is described in terms of the best mode for achieving the invention's objectives, it will be appreciated by those skilled in the art that it is intended to cover alternatives, modifications,
and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims and their equivalents as supported by the following disclosure and drawings.
Economic and financial modeling and planning is an important business tool which allows companies to conduct business planning, forecast demand, model revenue, and optimize price and profit. Economic
modeling is applicable to many businesses such as manufacturing, distribution, retail, medicine, chemicals, financial markets, investing, exchange rates, inflation rates, pricing of options, value of
risk, research and development, and the like. In the face of mounting competition and high expectations from investors, most if not all businesses must look for every advantage they can muster in
maximizing market share and profits. The ability to forecast demand, in view of pricing and promotional alternatives, and to consider other factors which materially affect overall revenue and
profitability is vital to the success of the bottom line, and the fundamental need to not only survive but to prosper and grow.
In particular, economic modeling is essential to businesses which face thin profit margins, such as general customer merchandise and other retail outlets. Clearly, many businesses are keenly
interested in economic modeling and forecasting, particularly when the model provides a high degree of accuracy or confidence. Such information is a powerful tool and highly valuable to the business.
The present discussion will consider economic modeling as applied to retail merchandising. In particular, understanding the cause and effect behind promotional offerings is important to increasing
the profitability of the retail stores. The present invention addresses effective modeling techniques for various promotions, in terms of forecasting and backcasting, and provides tools for a
successful, scientific approach to promotional programs with a high degree of confidence.
In FIG. 1, retail outlet (retailer) 10 has certain product lines or services available to customers as part of its business plan. The terms products and services are interchangeable in the present
application. Retailer 10 may be a food store chain, general customer product retailer, drug store, discount warehouse, department store, specialty store, service provider, etc. Retailer 10 has the
ability to set pricing, order inventory, run promotions, arrange its product displays, collect and maintain historical sales data, and adjust its strategic business plan. The management team of
retailer 10 is held accountable for market share, profits, and overall success and growth of the business. While the present discussion will center around retailer 10, it is understood that the
promotional modeling tools described herein are applicable to other industries and businesses having similar goals, constraints, and needs. The model works for any product/service which may be
promoted by the business. Moreover, the model can be used for many other decision processes in businesses other than retail such as described above.
Retailer 10 has business or operational plan 12. Business plan 12 includes many planning, analyzing, and decision-making steps and operations. Business plan 12 gives retailer 10 the ability to
evaluate performance and trends, make strategic decisions, set pricing, order inventory, formulate and run promotions, hire employees, expand stores, add and remove product lines, organize product
shelving and displays, select signage, and the like. Business plan 12 allows retailer 10 to analyze data, evaluate alternatives, run forecasts, and make operational decisions. Retailer 10 can change
business plan 12 as needed. In order to execute on business plan 12, the management team needs accurate economic models. In one application of the subject decision model, the methodology of the model
is applied to promotional programs to help retailer 10 make important operational decisions to increase the effectiveness of such programs.
From business plan 12, retailer 10 provides certain observable data and assumptions, and receives back specific forecasts and predictions from promotional model 14. The model performs a series of
complex calculations and mathematical operations to predict and forecast the business functions in which retailer 10 is most interested. The output of promotional model 14 is a report, chart, table,
or other analysis 16, which represents the model's forecasts and predictions based on the model parameters and the given set of data and assumptions. Report 16 is made available to business plan 12
so that retailer 10 can make promotional and operational decisions.
From time to time, retailer 10 may offer one or more of its products/services on promotion. In general, a promotion relates to any effort or enticement made by retailer 10 to call attention to its
product lines, increase the attractiveness of the product, or otherwise get the attention of its customer base, which may lead to, or increase the likelihood of, the customer making the purchasing
decision. For example, the product may be given a temporary sale price, a discount for multiple-item purchases, or reduced service charges. One or more items may be offered with a percentage off regular
price, fixed reduced price, coupon, rebate, no interest financing, no sales tax, preferred customer discounts, or “buy two get one free” sale. Retailer 10 may run advertisements and flyers in mass
communication media such as newspapers, television, and radio. Retailer 10 may place promotional items on highly visible displays and end-caps. Retailer 10 may sponsor public events, bring in
celebrities, solicit testimonials, create slogans, utilize props, and piggy-back community efforts and current events. Promotional campaigns use any and all tasteful and appropriate means of calling
attention to retailer 10 and its products/services. In general, promotional programs are classified by product/service, time of promotion, store, price reduction, and type of promotion or offer. A
store may be a single location, or a chain or logical group of stores.
The promotional programs can greatly influence the customer buying decision, in terms of selecting the store, selecting the product to buy, and choosing the number of items of selected products to
purchase. Promotional programs are designed to draw customers into the store to purchase the promoted product. Retailer 10 will hope that customer 24 also decides to buy additional merchandise,
including items which are regular price. Moreover, the customer may be motivated to purchase more quantity of a given product by the promotion than they would have normally wanted. If the promotion
is “buy two get one free”, then whereas the customer may have purchased only one without promotion, the logical choice becomes to buy at least two since, if they do, the third one is free. A similar
motivation exists with multiple item discounts, such as three for a dollar or 10% discount for buying 10 or more. The natural customer response and behavior is to seek the best overall value.
Customer 24 will likely place more than one item in his or her basket if the offer is formatted in terms of a multiple-item deal. The psychology behind properly formatted promotional
programs is to influence the customer buying decision and increase the probability that the customer not only selects retailer 10's store and buys the promoted product, along with other products, but
also purchases a greater quantity of products. Customer response model 14 provides the forecasts and predictions which achieve precisely this desired result for retailer 10. Customer 24 gets a
good deal and retailer 10 receives greater revenue, profits, and goodwill from the customer.
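The multiple-item arithmetic above can be made concrete. The small helper below is hypothetical (not part of the model); it computes the effective per-unit price of a "buy N get M free" offer, assuming the customer takes the full bundle:

```python
def effective_unit_price(regular_price, buy_qty, free_qty):
    """Effective per-unit price under a 'buy N get M free' offer,
    assuming the customer purchases the full bundle."""
    total_units = buy_qty + free_qty
    return regular_price * buy_qty / total_units

# "Buy two get one free" at a 1.50 regular price:
# the customer pays 3.00 for three units, i.e. 1.0 per unit.
print(effective_unit_price(1.50, 2, 1))
```

This is exactly why the "logical choice" shifts toward buying at least two: the marginal third unit is free, so the average unit cost drops from 1.50 to 1.00.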
For those items offered with promotion, report 16 gives retailer 10 the forecast information necessary to compare and contrast different promotional programs and combinations of promotional programs
over time. Retailer 10 can use the forecasting features of promotional model 14 to select promotional programs and products, set pricing, forecast demand, order inventory to meet such anticipated
demand, and adjust its strategic planning and thinking process, all with the purpose and intent of maximizing market share, revenue, and profits. While the business strategy is formulated at
enterprise level, the execution of that strategy usually occurs at the product or store level. Accordingly, promotional model 14 is one tool used by the management team of retailer 10 to execute
their strategy for the business.
Promotional model 14 is applied to the common problem of understanding the mental and physical process that customers practice in making the all-important buying decision. More specifically, model 14
helps retailer 10 predict and forecast the effect of different promotional efforts and programs on customer demand and the resulting levels of retail sales, revenue, volume, and profitability.
Promotional model 14 can also be used to backcast promotional programs, i.e. consider “what-if” scenarios of prior efforts with different driving factors.
In one embodiment, the customer buying decision or demand is based on three factors or components: selection of store, selection of product, and selection of quantity of product, which are illustrated
in purchase decision tree 20 of FIG. 2. In block 22, customer 24 decides where to shop, e.g. in store S1 or store S2. The decision to shop in store S1 or S2 depends in part on store history, growth,
seasonality, visual appeal, image, current promotions, price, product selection, product availability, prior experience, habit, quality, brand name, convenience, store layout, location, and customer
preference built up over a long period of time. Once in store S1, customer 24 decides in block 28 whether to buy an item, or not buy an item in block 30. The decision to buy product P1 depends in
part on price, promotion, seasonality, brand name, merchandising, shelf location, and the like. Other factors which influence the decision to buy include affinity, i.e. buying one item induces
customer 24 to buy another related item, and cannibalization, i.e. the decision to purchase one item disinclines customer 24 to purchase another item. If customer 24 decides to buy a cordless
screwdriver, he or she may also buy screws or an extra battery. The purchase of the cordless screwdriver may cause the customer to forego the purchase of a hand-operated screwdriver.
Once customer 24 decides to buy at least one product P1, then the customer makes a decision about how many items of product P1 to purchase, as shown in block 32. Accordingly, customer 24 places the
quantity of product P1 in the shopping cart which he or she decides to purchase.
The process repeats for other products P2 and P3 in store S1. Assuming customer 24 is still in store S1, the customer makes a decision whether to buy product P2 and, if in the affirmative, then he or
she decides how much product P2 to buy; and likewise for product P3.
When customer 24 finishes shopping in store S1, the customer may patronize store S2. The process described for purchase decision tree 20 in store S1 also applies to customer buying decisions in store
S2. In block 34, customer 24 decides whether to buy an item, or not buy an item in block 36. Once customer 24 decides to buy at least one product P4, then the customer makes a decision about how many
items of product P4 to purchase, as shown in block 38. Accordingly, customer 24 places the selected quantity of product P4 in the shopping cart.
When viewing customer buying decisions on a per product basis over time, the distribution of unit sales may look like the graph shown in FIG. 3a. Plot 40 illustrates a simplified baseline model for
product P[i ]for volume of sales over time. The baseline model represents unit sales of product P[i ]with its regular price, without any promotional offer, i.e. no sale price, discounts, incentives,
flyer, advertisement, or other customer enticements. Between times t[1 ]and t[3], the baseline model of product P[i ]shows unit sales at an elevated level because of seasonality, i.e. demand rises
during certain time(s) of the year. Unit sales returns to normal level at time t[3]. After time t[8], the baseline model shows unit sales of product P[i ]have fallen because of permanent increase in
regular price.
Plot 42 in FIG. 3b illustrates a simplified promotional model of product P[i ]for unit sales over time. Between times t[1 ]and t[2], the promotional model of product P[i ]shows unit sales increase,
over and above the increase associated with seasonality, because of promotional flyers or advertisement distributed at the beginning of the season. Between times t[2 ]and t[3], unit sales returns to
the level associated with the seasonal lift. Between times t[3 ]and t[4], unit sales returns to the baseline rate. The promotional model also shows unit sales of product P[i ]increase at time t[4 ]
because product P[i ]is placed on end-cap. At time t[5], the product P[i ]is given temporary sale price, causing unit sales to increase even further. The sale price terminates at time t[6 ]and the
end-cap display is taken down at time t[7], each causing a corresponding decrease in unit sales.
The purpose of promotional model 14 is to predict or forecast (or backcast), with reasonable certainty, the unit sales associated with or in response to one or more promotional programs, such as
shown in plot 42. Retailer 10 uses promotional model 14 to run report 16 from which the retailer makes decisions as to what promotions, and corresponding timing of such programs, will provide the
optimal and desired effect under business plan 12. Retailer 10 may project forward in time and predict volume of sales under one or more promotional efforts. The unit sales predictions for various
promotional models will help retailer 10 make well-reasoned business decisions. Alternatively, retailer 10 may analyze back in time to understand what may have happened if different decisions had
been made. The backcast analysis is particularly useful following a less than successful promotion campaign to understand how things may have been done differently. In any case, promotional model 14
gives retailer 10 far more information than previously available to make good business decisions.
In the normal course of business, retailer 10 collects a significant amount of data. Customer 24 patronizes a store, makes one or more product selections, places the items in a basket, and proceeds
to the checkout counter. The contents of the basket containing one or more products is a retail transaction. Most retail transactions are entered into a computer system.
A general purpose computer 50, as shown in FIG. 4, includes central processing unit or microprocessor 52, mass storage device or hard disk 54, electronic memory 56, and communication ports 58.
Computer 50 runs application software for managing retail sales. Each product carries a Universal Product Code (UPC) or barcode label. The barcode encodes a unique identification number for
the product. The product is scanned over a barcode reader at the store checkout counter to read the UPC identification number. Barcode reader 59 is connected to communication port 58 to transfer the
UPC data to computer 50. Computer 50 may be part of a computer network which connects multiple barcode readers in many stores to central computer system 64.
From the UPC data, a product database on hard disk 54 retrieves the price for the product and any promotional initiatives. As each product from the customer's basket is scanned, computer 50 builds up
a transaction in temporary file space on hard disk 54. Once the transaction is complete and customer 24 has paid, the transaction becomes a permanent record in the sales transaction log or database
on hard disk 54, or as part of central computer system 64.
Another product feature which can be used by retailer 10 is radio frequency identification tags (RFID). The RFID tag can be attached to products to track time dependent status such as date codes,
inventory, and shelf stock. The RFID contains product information such as individual product identification, time, and location. The RFID information is transmitted to a receiving unit which
interfaces with the store's computer system. Retailer 10 can track shelf life for perishable items and manage product rotation, ordering, inventory, and sales over time. If a quantity of perishable
product is nearing its end of shelf life, then that product is a prime candidate for promotion to move the about-to-expire items. It is much more efficient for retailer 10 to discount the product
rather than have to destroy the merchandise. Retailer 10 will also know when inventory is low due to the promotion and needs to be restocked or reordered. The location of the RFID tagged product
can be monitored to see how display location within the store affects product sales. The time of the sale, e.g. day, month, year, is important in determining the distribution of the unit sales over
time. The RFID information represents useful observable data.
The transaction log (TLOG) contains one or more line items for each retail transaction, such as shown in Table 1. Each line item includes information such as store number, product number, time of
transaction, transaction number, quantity, current price, profit, promotion number, and customer number. The store number identifies a specific store; the product number identifies a product; the time of
transaction includes date and time of day; quantity is the number of units of the product; current price (in US dollars) can be the regular price, reduced price, or higher price in some
circumstances; profit is the difference between current price and cost of selling the item; promotion number identifies any promotion for the product, e.g. flyer, ad, sale price, coupon, rebate,
end-cap, etc; customer number identifies the customer by type, class, region, or individual, e.g. discount card holder, government sponsored or under-privileged, volume purchaser, corporate entity,
preferred customer, or special member. The TLOG data is accurate, observable, and granular product information based on actual retail transactions within the store. TLOG data represents the known and
observable results from the customer buying decision or process. The TLOG data may contain thousands of transactions for retailer 10 per store S[i ]per day, or millions of transactions per chain of
stores per day.
TABLE 1
TLOG Data
Store  Product  Time  Trans  Qty  Price  Profit  Promotion  Customer
S1     P1       D1    T1     1    1.50   0.20    PROMO1     C1
S1     P2       D1    T1     2    0.80   0.05    PROMO2     C1
S1     P3       D1    T1     3    3.00   0.40    PROMO3     C1
S1     P4       D1    T2     4    1.80   0.50    0          C2
S1     P5       D1    T2     1    2.25   0.60    0          C2
S1     P6       D1    T3     10   2.65   0.55    PROMO4     C3
S1     P1       D2    T1     5    1.50   0.20    PROMO1     C4
S2     P7       D3    T1     1    5.00   1.10    PROMO5     C5
S2     P1       D3    T2     2    1.50   0.20    PROMO1     C6
S2     P8       D3    T2     1    3.30   0.65    0          C6
A simplified example of TLOG data is shown in Table 1. The first line item shows that on day/time D1 (date and time), store S1 had transaction T1 in which customer C1 purchased one product P1 at
1.50. The next two line items also refer to transaction T1 and day/time D1, in which customer C1 also purchased two products P2 at price 0.80 each and three products P3 at price 3.00 each. In
transaction T2 on day/time D1, customer C2 has four products P4 at price 1.80 each and one product P5 at price 2.25. In transaction T3 on day/time D1, customer C3 has ten products P6 at price 2.65
each in his or her basket. In transaction T1 on day/time D2 (different day and time) in store S1, customer C4 purchased five products P1 at price 1.50 each. In store S2, transaction T1 with customer
C5 on day/time D3 (different day and time) involved one product P7 at price 5.00. In store S2, transaction T2 with customer C6 on day/time D3 involved two products P1 at price 1.50 each and one
product P8 at price 3.30.
The TLOG data in Table 1 further shows that product P1 in transaction T1 had promotion PROMO1. For the present discussion, PROMO1 shall be a front-page featured item in a local flyer. Product P2 in
transaction T1 had promotion PROMO2, which is an end-cap display in store S1. Product P3 in transaction T1 had promotion PROMO3, which is a reduced sale price. Product P4 in transaction T2 on day/
time D1 had no promotional offering. Likewise, product P5 in transaction T2 had no promotional offering. Product P6 in transaction T3 on day/time D1 had promotion PROMO4, which is a volume discount
for 10 or more items. Product P7 in transaction T1 on day/time D3 had promotion PROMO5, which is a 0.50 rebate. Product P8 in transaction T2 had no promotional offering. A promotion may also be
classified as a combination of promotions, e.g. flyer with sale price or end-cap with rebate.
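The TLOG line items of Table 1 can be represented and aggregated in a few lines of code. This is an illustrative sketch; the field ordering follows Table 1, but the record shape and the aggregation granularity are assumptions:

```python
from collections import defaultdict

# A few TLOG line items in the shape of Table 1:
# (store, product, time, transaction, qty, price, profit, promotion, customer)
tlog = [
    ("S1", "P1", "D1", "T1", 1, 1.50, 0.20, "PROMO1", "C1"),
    ("S1", "P2", "D1", "T1", 2, 0.80, 0.05, "PROMO2", "C1"),
    ("S1", "P1", "D2", "T1", 5, 1.50, 0.20, "PROMO1", "C4"),
    ("S2", "P1", "D3", "T2", 2, 1.50, 0.20, "PROMO1", "C6"),
    ("S2", "P8", "D3", "T2", 1, 3.30, 0.65, None, "C6"),
]

# Aggregate unit sales by (product, promotion) -- one plausible
# granularity at which a promotional model consumes TLOG data.
units = defaultdict(int)
for store, product, time, trans, qty, price, profit, promo, cust in tlog:
    units[(product, promo)] += qty

print(units[("P1", "PROMO1")])  # 8 units of P1 sold under PROMO1
```

The same pass over the TLOG could just as easily aggregate revenue (qty × price) or profit per store, per day, or per promotion.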
Retailer 10 also provides additional information to the TLOG database or other logical storage area of hard disk 54 such as promotional calendar and events, store set-up, shelf location, end-cap
displays, flyers, and advertisements. For example, the information associated with a flyer distribution, e.g. publication medium, run dates, distribution, product location within flyer, and
advertised prices, is stored with TLOG data on hard disk 54. The store set-up, including location of products, special displays, in-store price specials, celebrity visitations, and physical
amenities, is made available to the TLOG database.
Communication port 58 may connect by a high-speed Ethernet link to communication network 60. Communication network 60 may have dedicated communication links between multiple computers, or an open
architecture system such as the World Wide Web, commonly known as the Internet. Retailer 10 can access computer 50 remotely through communication network 60. Retailer 10 stores TLOG data on computer
50 or central computer system 64. The TLOG data is used for a variety of functions such as financial reporting, inventory control, and planning. The TLOG data is also available to use with
promotional model 14, as described hereinafter.
In one embodiment, promotional model 14 is application software or computer program residing on computer 50, central computer system 64, or computer system 66. The software is originally provided on
computer readable media, such as compact disks (CDs), or downloaded from a vendor website, and installed on the desired computer. In one case, promotional model 14 can be executed directly on
computer 50, which may be located in the facilities of retailer 10. Retailer 10 interacts with computer 50 through user control panel 62, which may take the form of a local computer terminal, to run
promotional model 14 and generate report 16. Alternatively, retailer 10 uses computer system 66 to access promotional model 14 remotely, e.g. through a website contained on hard disk 54. Retailer 10
can make requests of a third party who in turn runs promotional model 14 and generates report 16 on behalf of the retailer. The requests to generate report 16 may be made to the third party through
the website or other communication medium.
Promotional model 14 operates under the premise that an expectation of unit sales of product P[i], over time, can be defined by statistical representations of certain component(s) or factor(s)
involved in customer purchasing decisions. The model may comprise a set of one or more factors. The customer buying decision involves the customer response or behavior to a number of conditions that
lead to a decision to purchase or not purchase a given product. In one embodiment, the expected value of unit sales of product P[i] is given in equation (1) as:
E[Units[i]]=E[Traffic]*E[Share[i]]*E[Count[i]] (1)
where:
E[Traffic] is the expected number of customers patronizing the store
E[Share[i]] is the expected fraction of customers who purchase product P[i]
E[Count[i]] is the expected number of units of P[i] purchased for customers who bought P[i]
The customer traffic, selecting a product, and quantity of selected products form the set of factors representing components of the customer buying decision. Each of the set of factors is defined in
part using a set of causal parameters related to the customer buying decision or process. The set of parameters are set forth in utility functions U(t), as described hereinafter, and contained within
the expected values of traffic, share, and count. One or more of the values of the set of parameters are functions of time. Thus, the expected values are one type of statistical relationship which
relates the set of factors to the customer buying decision process. The observable TLOG data, such as transaction, product, price, and promotion, is used to solve for the set of parameters contained
in the utility function U(t).
For the expectation of customer traffic, i.e. the total number of customers that will patronize store S[i ]in a specified time period, the expected value is defined in terms of an exponential
function of the utility function parameters, given in equation (2) as:
<TRAFFIC>=e^U[T](t) (2)
The traffic utility function U[T](t) is derived from certain attributes and parameters, as a function of time, related to customer decisions involved in selecting store S[i]. With respect to equation
(2), the utility function for the expectation of traffic is given in equation (3) as:
U[T](t)=Q[0]+αt+f[T](t) (3)
□ Q[0 ]is base transaction rate for store S[i ]
□ α is growth rate for store S[i ]
□ t is time period (e.g. hour, day, week)
□ f[T](t) is time dependent function of coefficients
f[T](t)=C[TQ1]θ[Q1](t)+C[TQ2]θ[Q2](t)+C[TQ3]θ[Q3](t)+C[TQ4]θ[Q4](t) (4)
□ θ[Q1](t) is indicator function for quarter Q1
□ θ[Q2](t) is indicator function for quarter Q2
□ θ[Q3](t) is indicator function for quarter Q3
□ θ[Q4](t) is indicator function for quarter Q4
□ C[TQ1 ]is traffic coefficient for quarter Q1
□ C[TQ2 ]is traffic coefficient for quarter Q2
□ C[TQ3 ]is traffic coefficient for quarter Q3
□ C[TQ4 ]is traffic coefficient for quarter Q4
Coefficients C[TQ1]-C[TQ4 ]are parameters which collectively define the driving forces behind store traffic for each time period. The indicator function θ[Q1](t) has value 1 during quarter Q1 and is
zero otherwise; indicator function θ[Q2](t) has value 1 during quarter Q2 and is zero otherwise; indicator function θ[Q3](t) has value 1 during quarter Q3 and is zero otherwise; indicator function θ
[Q4](t) has value 1 during quarter Q4 and is zero otherwise.
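As an illustrative sketch only (not part of the described embodiment), the traffic utility of equations (2)-(4) can be evaluated in code. The quarterly calendar below (52 weeks per year, 13 weeks per quarter) and all function names are assumptions:

```python
import math

def quarter_indicator(q, t):
    """Indicator function θ_Qq(t): 1 when week t falls in quarter q (1-4),
    0 otherwise. Assumes a 52-week year with 13-week quarters."""
    return 1 if (t % 52) // 13 + 1 == q else 0

def traffic_utility(t, Q0, alpha, C):
    """U_T(t) = Q0 + α·t + f_T(t), per equations (3)-(4), with C the four
    quarterly traffic coefficients C_TQ1..C_TQ4."""
    f_T = sum(C[q - 1] * quarter_indicator(q, t) for q in range(1, 5))
    return Q0 + alpha * t + f_T

def expected_traffic(t, Q0, alpha, C):
    """Expected customer traffic <TRAFFIC> = e^(U_T(t)), per equation (2)."""
    return math.exp(traffic_utility(t, Q0, alpha, C))

# Example: base rate 3.0, slow growth, a seasonal bump in quarter Q4
print(expected_traffic(45, 3.0, 0.001, [0.0, 0.0, 0.0, 0.2]))
```

Because U[T](t) is additive in its parameters, each coefficient's contribution to predicted traffic can be inspected directly once the vector V is calibrated.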
The indicator functions may represent any arbitrary set of time periods. The set represented may be periodic, e.g., day of week or month of year. The indicator functions may be recurring holidays, e.g., week before Christmas or Memorial Day, or may represent non-contiguous, non-recurring sets of time periods that have a relationship to the decision utility in the minds of the customers.
To evaluate U[T](t) as per equation (3), the observable TLOG data in Table 1 is used to determine the number of transactions (baskets of merchandise) per store, per time period. According to the
observable data in Table 1, the number of transactions (baskets) B for day/time D1 is B(D1)=3; for day/time D2 is B(D2)=1; for day/time D3 is B(D3)=2. Hence, there exists a set of observations from
the TLOG data of Table 1 given as observed baskets B[obs]: {B(D1), B(D2), B(D3)}, which provide an objective, observable realization of the historical transaction process over time.
Next, a likelihood function is assigned for the expectation of customer traffic in each store S[i]. In the present discussion, the Poisson distribution is used to define the probability that a given number of customers have decided to shop in store S[i], i.e. have entered the store with the intent to make purchase(s), as shown in equation (5):
P(n)=λ^n*e^−λ/n!, where λ=<TRAFFIC>=e^U[T](t) (5)
□ where: n is the number of baskets B for a given time period, i.e. P(n) is the expectation of seeing any given number of baskets B in store S[i ]on a given day D[i ]
Other likelihood functions such as Multinomial can be used, for example, when considering market share of the products. In the present case, the probability of the observed set of baskets P(B[obs]), is given in equation (6) as a multiplicative or product combination of the probability of each observation within the set of observations:
P(B[obs])=P(B(D1))*P(B(D2))*P(B(D3)) (6)
To evaluate P(B[obs]) in equation (6), each observation of the set of observations {B(D1), B(D2), B(D3)} is evaluated within the Poisson function P(n) of equation (5), i.e. P(B(D1)), P(B(D2)), and P(B(D3)), which are given in equations (7)-(9).
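A minimal sketch of evaluating the observed baskets within the Poisson likelihood of equations (5)-(9), working in log space for numerical stability (an implementation choice, not stated in the text); the rate values are illustrative:

```python
import math

def poisson_log_pmf(n, lam):
    """log P(n) for a Poisson with mean lam = <TRAFFIC> = e^(U_T(t)),
    per equation (5)."""
    return n * math.log(lam) - lam - math.lgamma(n + 1)

def traffic_log_likelihood(B_obs, lams):
    """log P(B_obs): the log of the product combination in equation (6),
    i.e. the sum of per-day log-probabilities (equations (7)-(9))."""
    return sum(poisson_log_pmf(n, lam) for n, lam in zip(B_obs, lams))

B_obs = [3, 1, 2]        # observed baskets B(D1), B(D2), B(D3) from Table 1
lams = [2.0, 2.0, 2.0]   # assumed expected traffic per day, for illustration
print(traffic_log_likelihood(B_obs, lams))
```

Maximizing this sum of logs is equivalent to maximizing the product in equation (6), but avoids underflow when many observations are multiplied.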
The likelihood function P(B[obs]) in equation (6) is maximized using the Maximum Likelihood Method to evaluate the observable TLOG data within the Poisson probability function, and thereby solve for
and calibrate the parameters of U[T](t), which are unknown at the present time. Recall that <TRAFFIC> is a function of U[T](t) as defined in equation (2). P(B[obs]) is thus a function of the parameters of U[T](t), i.e. Q[0], α,
C[TQ1], C[TQ2], C[TQ3], C[TQ4], as per equations (3)-(4). As described above, the parameters of U[T](t) are contained within the factors that are related to the customer buying decision.
To solve for the parameters of U[T](t), a parameter vector V: {Q[0], α, C[TQ1], C[TQ2], C[TQ3], C[TQ4]} is defined. The goal is to resolve the parameter vector V into a set of values which maximizes
the magnitude of P(B[obs]). The function P(B[obs]) can be visualized as a response surface in n-dimensional space which has a maximum or peak value. The Maximum Likelihood Method is an iterative
process in which the parameter vector V is updated each iteration until the maximum value for P(B[obs]) is found.
Accordingly, an initial set of values for each parameter of the vector V is estimated to create initial parameter vector V[0]. The estimation of initial values for V[0 ]may come from historical data,
or merely an educated guess. The function P(B(D1)) is evaluated using the observed value B(D1) from Table 1 and the initial parameter vector V[0 ]to obtain a first value for P(B(D1)). The function P
(B(D2)) is evaluated using the observed value B(D2) from Table 1 and the initial parameter vector V[0 ]to obtain a first value for P(B(D2)). The function P(B(D3)) is evaluated using the observed
value B(D3) from Table 1 and the initial parameter vector V[0 ]to obtain a first value for P(B(D3)). A first value of P(B[obs]) is calculated in equation (6) from the product combination of the first
values of P(B(D1))*P(B(D2))*P(B(D3)).
Another set of values for each parameter of V is estimated, again using historical data or an educated guess, to create parameter vector V[1]. The function P(B(D1)), given the observed value B(D1)
and the parameter vector V[1], is evaluated to obtain a second value for P(B(D1)). The function P(B(D2)) is evaluated using the observed value B(D2) and the parameter vector V[1 ]to obtain a second
value for P(B(D2)). The function P(B(D3)) is evaluated using the observed value B(D3) and the parameter vector V[1 ]to obtain a second value for P(B(D3)). A second value of P(B[obs]) is calculated in
equation (6) from the product combination of the second values of P(B(D1))*P(B(D2))*P(B(D3)).
If the second value of P(B[obs]) is greater than the first value of P(B[obs]), then the second value of P(B[obs]) is closer to the solution of finding the maximum value of P(B[obs]). The second value
of P(B[obs]) is a better solution than the first value, i.e. the second estimate is moving in the correct direction, up the response surface of P(B[obs]) toward the optimal maximum solution. If the
second value of P(B[obs]) is not greater than the first value of P(B[obs]), then another parameter vector V[1 ]is estimated. The values are re-calculated and, if necessary, re-estimated until the
second value of P(B[obs]) is greater than the first value of P(B[obs]).
A third set of values for each parameter of V is estimated, e.g. V[2]=V[1]+(V[1]−V[0]) or other educated guess, to create parameter vector V[2]. The process of choosing iterative sets of values for
the parameter vector V can take a variety of approaches. There are many standard and specialized algorithms that may be applied: steepest descent, conjugate-gradient method, Levenberg-Marquardt, and
Newton-Raphson, to name a few. The function P(B(D1)) is evaluated using the observed value B(D1) and the parameter vector V[2 ]to obtain a third value for P(B(D1)). The function P(B(D2)) is evaluated
using the observed value B(D2) and the parameter vector V[2 ]to obtain a third value for P(B(D2)). The function P(B(D3)) is evaluated using the observed value B(D3) and the parameter vector V[2 ]to
obtain a third value for P(B(D3)). A third value of P(B[obs]) is calculated in equation (6) from the product combination of the third values of P(B(D1))*P(B(D2))*P(B(D3)).
If the third value of P(B[obs]) is greater than the second value of P(B[obs]), then the third value of P(B[obs]) is closer to the solution of the maximum value of P(B[obs]). If the third value of P(B
[obs]) is not greater than the second value of P(B[obs]), then another parameter vector V[2 ]is estimated. The values are re-calculated and, if necessary, re-estimated until the third value of P(B
[obs]) is greater than the second value of P(B[obs]).
The Maximum Likelihood Method repeats until the difference between the j-th value of P(B[obs]) and the (j+1)-th value of P(B[obs]) is less than an error threshold ε, or until a stopping criterion has been reached, wherein iterative solutions of P(B[obs]) are no longer changing by an appreciable or predetermined amount. The error threshold ε or stopping criterion is selected according to desired
tolerance and accuracy of the solution.
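The iterative search described above can be sketched with a simple gradient-free hill climb; this stands in for the steepest descent, conjugate-gradient, Levenberg-Marquardt, or Newton-Raphson algorithms named in the text, and the step and threshold values are assumptions:

```python
def maximize(loglik, V0, step=0.1, eps=1e-9, max_iter=10000):
    """Iteratively update parameter vector V until the likelihood stops
    improving by more than eps (the error threshold / stopping criterion).
    Each pass perturbs every parameter up and down and keeps improving moves."""
    V = list(V0)
    best = loglik(V)
    for _ in range(max_iter):
        start = best
        for i in range(len(V)):
            for d in (step, -step):
                trial = list(V)
                trial[i] += d
                val = loglik(trial)
                if val > best:          # move up the response surface
                    V, best = trial, val
        if best - start < eps:
            step /= 2                   # refine the search before stopping
            if step < 1e-8:
                break
    return V, best

# Toy response surface with a peak at (1, -2)
V, peak = maximize(lambda v: -(v[0] - 1) ** 2 - (v[1] + 2) ** 2, [0.0, 0.0])
print(V, peak)
```

In practice the dedicated optimizers named in the text converge far faster, but the structure is the same: propose a new parameter vector, evaluate the likelihood, keep the better solution, and stop at the threshold ε.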
Assume that the parameter vector V has reached a solution of values for Q[0], α, C[TQ1], C[TQ2], C[TQ3], C[TQ4], using the Maximum Likelihood Method, which provides the maximum value for the
likelihood function using the Poisson probability distribution P(B[obs]). With the parameters of U[T](t) now known, the utility function U[T](t) is thus completely defined as per equations (3)-(4),
and the expected value of traffic is defined as per equation (2) for the purposes of the model. Each store S[i ]will have its own solution for U[T](t). Since U[T](t) is a function of time, the
expected value of traffic for store S[i ]can be modeled forward and backward in time. Equation (2) can be evaluated for any time t to predict the expected value of customer traffic in store S[i].
Turning now to the computation of the expected value of share in equation (1). In this case, the share is an expectation of a probability. The expected value of share is given in equation (10) as a function of exponential time dependent utility functions:
<SHARE>=e^U[S](t)/(1+e^U[S](t)) (10)
The utility function U[S](t) for the expected value of share is defined using a set of parameters or attributes, as a function of time, related to customer decisions involved in selecting at least
one product P[i]. The share utility function U[S](t) is defined at the product level, within store S[i], and has specific promotions associated with the various products. With respect to equation
(10), the utility function for the expectation of share is given in equations (11)-(13) as:
U[S](t)=V(t)+β(P−P[0])+f[S](t) (11)
□ V(t) is time dependent base rate of sales for product P[i ]
□ β is price response for product P[i ]
□ P is price from TLOG data
□ P[0 ]is reference price, e.g. baseline price
□ f[S](t) is time dependent function of share coefficients
V(t)=V[REG]θ[REG](t)+V[PROMO1]θ[PROMO1](t)+V[PROMO2]θ[PROMO2](t) (12)
f[S](t)=C[SQ1]θ[Q1](t)+C[SQ2]θ[Q2](t)+C[SQ3]θ[Q3](t)+C[SQ4]θ[Q4](t) (13)
□ θ[REG](t) is indicator function for regular price
□ θ[PROMO1](t) is indicator function for promotion 1
□ θ[PROMO2](t) is indicator function for promotion 2
□ V[REG ]is the rate of sales for regular product status, i.e. no promotion
□ V[PROMO1 ]is rate of sales for first promotion
□ V[PROMO2 ]is rate of sales for second promotion
□ C[SQ1 ]is share coefficient for quarter Q1
□ C[SQ2 ]is share coefficient for quarter Q2
□ C[SQ3 ]is share coefficient for quarter Q3
□ C[SQ4 ]is share coefficient for quarter Q4
Promotion coefficients β, V[REG], V[PROMO1], V[PROMO2], C[SQ1], C[SQ2], C[SQ3], C[SQ4 ]are parameters which collectively define the driving forces associated with rate of sales or promotional lift
for product P[i]. The indicator function θ[REG](t) has value 1 during a first time period associated with no promotion and is zero otherwise; indicator function θ[PROMO1](t) has value 1 during a
second time period associated with promotion 1 and is zero otherwise; indicator function θ[PROMO2](t) has value 1 during a third time period associated with promotion 2 and is zero otherwise. For
example, θ[REG](t) may be value 1 during any day, week, or month when there is no promotion in place, i.e. regular price. V[REG], rate of sales of product P[i ]under regular status (no promotion), is
in effect when θ[REG](t)=1. θ[PROMO1](t) may be value 1 during a time period when promotion 1 is in effect. The rate of sales of product P[i ]under promotion 1 (V[PROMO1]) factors into U[S](t) when θ
[PROMO1](t)=1. θ[PROMO2](t) may be value 1 during a time period when promotion 2 is in effect. The rate of sales of product P[i ]under promotion 2 (V[PROMO2]) factors into U[S](t) when θ[PROMO2](t)=
1. One or more indicator functions θ[REG](t), θ[PROMO1](t), and θ[PROMO2](t) may be enabled at any given time.
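A sketch of the share utility follows. Equation (11) is not reproduced in full in this excerpt, so the additive form U_S(t) = V(t) + β·(P − P0) + f_S(t) is assumed from the parameter list; the dictionary layout and all example values are illustrative only:

```python
def share_utility(t, price, p):
    """Assumed form U_S(t) = V(t) + β·(P - P0) + f_S(t), with V(t) per
    equation (12) and f_S(t) per equation (13)."""
    # V(t): base rate switched between regular and promotional status
    V = (p["V_REG"] * p["theta_REG"](t)
         + p["V_PROMO1"] * p["theta_PROMO1"](t)
         + p["V_PROMO2"] * p["theta_PROMO2"](t))
    # f_S(t): quarterly share coefficients gated by indicator functions
    f_S = sum(p["C_SQ"][q] * p["theta_Q"][q](t) for q in range(4))
    return V + p["beta"] * (price - p["P0"]) + f_S

# Hypothetical parameters: regular price in effect, quarter Q1 active
params = {
    "V_REG": 2.0, "V_PROMO1": 0.5, "V_PROMO2": 0.0,
    "beta": -0.5, "P0": 1.0,
    "theta_REG": lambda t: 1, "theta_PROMO1": lambda t: 0,
    "theta_PROMO2": lambda t: 0,
    "C_SQ": [0.1, 0.0, 0.0, 0.0],
    "theta_Q": [lambda t: 1, lambda t: 0, lambda t: 0, lambda t: 0],
}
print(share_utility(1, 1.0, params))
```

Setting θ[PROMO1](t)=1 during a promotion swaps V[PROMO1 ]into the base rate, which is how promotional lift enters the share factor.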
To evaluate U[S](t) as per equation (11), the observable data in Table 1 is used to determine the number of transactions b(t) (baskets of merchandise) containing at least one product P[i], per store,
per time period, and the total number of transactions B(t) in store S[i ]per time period. Equation (14) defines the relationship between b(t), B(t), and expected value of share.
Define a set of observable data from Table 1, b[OBS]: {b(D1), b(D2), b(D3)}, where b(D1) is number of transactions or baskets containing at least one product P1 for day/time D1, b(D2) is number of
transactions containing at least one product P1 for day/time D2, and b(D3) is number of transactions containing at least one product P1 for day/time D3. From Table 1, the observable TLOG data, with
respect to product P1, shows b(D1)=2, b(D2)=0, and b(D3)=1. The set of observable transaction data B[OBS]: {B(D1), B(D2), B(D3)} is defined in the above discussion of the traffic utility function.
Hence, there exists from the TLOG data of Table 1 an observable realization of the product selection process per product P[i ]over time.
Next, a likelihood function is assigned for the expectation of share for each product P[i], in each store S[i]. In the present discussion, the Binomial distribution is used to define the probability that customer 24 has decided to purchase at least one product P[i], i.e. has placed product P[i ]in the basket with the intent to make a purchase:
P(b|B)=(B!/(b!(B−b)!))<SHARE>^b(1−<SHARE>)^(B−b) (15)
Each of the set of observations {b(D1), b(D2), b(D3)} is evaluated within the Binomial function P(b|B) of equation (15), as the probability of b(t) given total number of transactions B(t), and set forth in equations (16)-(19):
P(b[OBS]|B[OBS])=P(b(D1)|B(D1))*P(b(D2)|B(D2))*P(b(D3)|B(D3)) (16)
The likelihood function P(b[OBS]|B[OBS]) in equation (16) is maximized using the Maximum Likelihood Method, in a similar manner as described above for equations (6)-(9), to solve for the parameters U
[S](t). Recall that <SHARE> is a function of U[S](t) as defined in equation (10). P(b(D1)|B(D1)) is thus a function of the parameters of U[S](t), i.e. β, V[REG], V[PROMO1], V[PROMO2], C[SQ1], C[SQ2],
C[SQ3], C[SQ4], as per equations (11)-(13). However, the parameters of U[S](t) are unknown at the present time.
To solve for the parameters of U[S](t), a parameter vector W: {β, V[REG], V[PROMO1], V[PROMO2], C[SQ1], C[SQ2], C[SQ3], C[SQ4]} is defined. The goal is to resolve the parameter vector W into a set of
values which maximizes the magnitude of P(b[OBS]|B[OBS]). The function P(b[OBS]|B[OBS]) can be visualized as a response surface in n-dimensional space which has a maximum or peak value. The Maximum
Likelihood Method is an iterative process in which the parameter vector W is updated each iteration until the maximum value for P(b[OBS]|B[OBS]) is found.
Accordingly, an initial set of values for each parameter of the vector W is estimated to create initial parameter vector W[0]. The estimation of initial values for W[0 ]may come from historical data,
or merely an educated guess. The function P(b(D1)|B(D1)) is evaluated using the observed values b(D1) and B(D1) from Table 1 and the initial parameter vector W[0 ]to obtain a first value for P(b(D1)|
B(D1)). The function P(b(D2)|B(D2)) is evaluated using the observed values b(D2) and B(D2) from Table 1 and the initial parameter vector W[0 ]to obtain a first value for P(b(D2)|B(D2)). The function
P(b(D3)|B(D3)) is evaluated using the observed values b(D3) and B(D3) from Table 1 and the initial parameter vector W[0 ]to obtain a first value for P(b(D3)|B(D3)). A first value of P(b[OBS]|B[OBS])
is calculated in equation (16) from the product combination of the first values of P(b(D1)|B(D1))*P(b(D2)|B(D2))*P(b(D3)|B(D3)).
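The Binomial evaluation of equations (15)-(19) can be sketched as follows, again in log space (an implementation choice); the share probability 0.5 is an arbitrary illustration:

```python
import math

def binom_log_pmf(b, B, s):
    """log P(b|B): probability that b of B baskets contain product P_i,
    with per-basket probability s = <SHARE>, per equation (15)."""
    log_choose = (math.lgamma(B + 1) - math.lgamma(b + 1)
                  - math.lgamma(B - b + 1))
    return log_choose + b * math.log(s) + (B - b) * math.log(1 - s)

def share_log_likelihood(b_obs, B_obs, s):
    """log P(b_OBS|B_OBS): the log of the product combination in
    equation (16), summing equations (17)-(19)."""
    return sum(binom_log_pmf(b, B, s) for b, B in zip(b_obs, B_obs))

# Observables from Table 1: b = {2, 0, 1} of B = {3, 1, 2} baskets
print(share_log_likelihood([2, 0, 1], [3, 1, 2], 0.5))
```

The same iterative search used for the traffic parameters can then be applied to this function of the parameter vector W.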
Another set of values for each parameter of W is estimated, again using historical data or an educated guess, to create parameter vector W[1]. The function P(b(D1)|B(D1)) is evaluated using the
observed values b(D1) and B(D1) and the parameter vector W[1 ]to obtain a second value for P(b(D1)|B(D1)). The function P(b(D2)|B(D2)) is evaluated using the observed values b(D2) and B(D2) and the
parameter vector W[1 ]to obtain a second value for P(b(D2)|B(D2)). The function P(b(D3)|B(D3)) is evaluated using the observed values b(D3) and B(D3) and the parameter vector W[1 ]to obtain a second
value for P(b(D3)|B(D3)). A second value of P(b[OBS]|B[OBS]) is calculated in equation (16) from the product combination of the second values of P(b(D1)|B(D1))*P(b(D2)|B(D2))*P(b(D3)|B(D3)).
If the second value of P(b[OBS]|B[OBS]) is greater than the first value of P(b[OBS]|B[OBS]), then the second value of P(b[OBS]|B[OBS]) is closer to the solution of finding the maximum value of P(b
[OBS]|B[OBS]). The second value of P(b[OBS]|B[OBS]) is a better solution than the first value, i.e. the second estimate is moving in the correct direction, up the response surface of P(b[OBS]|B
[OBS]) toward the optimal maximum solution. If the second value of P(b[OBS]|B[OBS]) is not greater than the first value of P(b[OBS]|B[OBS]), then another parameter vector W[1 ]is estimated. The
values are re-calculated and, if necessary, re-estimated until the second value of P(b[OBS]|B[OBS]) is greater than the first value of P(b[OBS]|B[OBS]).
A third set of values for each parameter of W is estimated, e.g. W[2]=W[1]+(W[1]−W[0]) or other educated guess, to create parameter vector W[2]. Again, the process of choosing iterative sets of
values for the parameter vector W can take a variety of approaches: steepest descent, conjugate-gradient method, Levenberg-Marquardt, and Newton-Raphson. The function P(b(D1)|B(D1)) is evaluated using the observed
values b(D1) and B(D1) and the parameter vector W[2 ]to obtain a third value for P(b(D1)|B(D1)). The function P(b(D2)|B(D2)) is evaluated using the observed values b(D2) and B(D2) and the parameter
vector W[2 ]to obtain a third value for P(b(D2)|B(D2)). The function P(b(D3)|B(D3)) is evaluated using the observed values b(D3) and B(D3) and the parameter vector W[2 ]to obtain a third value for P
(b(D3)|B(D3)). A third value of P(b[OBS]|B[OBS]) is calculated in equation (16) from the product combination of the third values of P(b(D1)|B(D1))*P(b(D2)|B(D2))*P(b(D3)|B(D3)).
If the third value of P(b[OBS]|B[OBS]) is greater than the second value of P(b[OBS]|B[OBS]), then the third value of P(b[OBS]|B[OBS]) is closer to the solution of the maximum value of P(b[OBS]|B
[OBS]). If the third value of P(b[OBS]|B[OBS]) is not greater than the second value of P(b[OBS]|B[OBS]), then another parameter vector V[2 ]is estimated. The values are re-calculated and, if
necessary, re-estimated until the third value of P(b[OBS]|B[OBS]) is greater than the second value of P(b[OBS]|B[OBS]).
The Maximum Likelihood Method repeats until the difference between the j-th value of P(b[OBS]|B[OBS]) and the (j+1)-th value of P(b[OBS]|B[OBS]) is less than an error threshold ε, or until a stopping criterion has been reached, wherein iterative solutions of P(b[OBS]|B[OBS]) are no longer changing by an appreciable or predetermined amount. The error threshold ε or stopping criterion is selected
according to desired tolerance and accuracy of the solution.
Assume that the parameter vector W has reached a solution of values for β, V[REG], V[PROMO1], V[PROMO2], C[SQ1], C[SQ2], C[SQ3], C[SQ4], using the Maximum Likelihood Method, which provides the
maximum value for the likelihood function P(b[OBS]|B[OBS]). With the parameters of U[S](t) now known, the utility function U[S](t) is thus defined as per equation (11), and the expected value of
share is defined as per equation (10). Each product P[i ]will have its own solution for U[S](t). Since U[S](t) is a function of time, the expected value of share for product P[i ]can be modeled forward
and backward in time. Equation (10) can be evaluated for any time t to predict the expected value of share for each product P[i].
Turning now to the computation of the expected value of count in equation (1). In one embodiment, the expected value of count is an average of the quantity of items of product P[i ]in those baskets containing at least one product P[i], given in equation (20) as:
Alternatively, the expected value of count is defined in terms of a probability distribution. In equation (21), P[a ]is the probability of accepting one more item of product P[i], given that customer 24 has already selected at least one product P[i]:
P[a]=e^U[C](t)/(1+e^U[C](t)) (21)
In equation (22), the expected value of count is given in terms of P[a]:
<COUNT>=1/(1−P[a]) (22)
The utility function U[C](t) for the expected value of count is defined using attributes and parameters, as a function of time, related to customer decisions involved in selecting a specific quantity
of product P[i], given that customer 24 has chosen to purchase at least one product P[i]. The count utility function U[C](t) is defined at the product level, within store S[i], and has specific
promotions associated with the various products. With respect to equation (21), the utility function for the expectation of count is given in equations (23)-(25) as:
□ where: f[C](t) is time dependent function of count coefficients
V(t)=V[REG]θ[REG](t)+V[PROMO1]θ[PROMO1](t)+V[PROMO2]θ[PROMO2](t) (24)
f[C](t)=C[CQ1]θ[Q1](t)+C[CQ2]θ[Q2](t)+C[CQ3]θ[Q3](t)+C[CQ4]θ[Q4](t) (25)
□ C[CQ1 ]is count coefficient for quarter Q1
□ C[CQ2 ]is count coefficient for quarter Q2
□ C[CQ3 ]is count coefficient for quarter Q3
□ C[CQ4 ]is count coefficient for quarter Q4
Promotion coefficients β, V[REG], V[PROMO1], V[PROMO2], C[CQ1], C[CQ2], C[CQ3], C[CQ4 ]are parameters which collectively define the driving forces associated with rate of sales or promotional lift
for product P[i]. To evaluate U[C](t) as per equation (23), the observable data in Table 1 is used to determine the expected value of the count of product P[i ]in a transaction containing at least
one product P[i ]per store, per time period.
Define a set of observable data from Table 1 for product P[i], C[OBS]: {C(D1,T1), C(D1,T2), C(D2,T1), C(D3,T2)}, where C(D[i],T[j]) is the count of items of product P[i ]in the basket for transaction T[j ]at time D[i]. From Table 1, the observable data, with respect to product P1, shows C(D1,T1)=1, C(D1,T2)=4, C(D2,T1)=5, and C(D3,T2)=2. P(c) for other products P[i ]and sets
of observable data are calculated in a similar manner. Hence, there exists from the TLOG data of Table 1 an observable realization of the count process for product P[i ]over time.
Next, a likelihood function is assigned for the expectation of count for each product P[i], in each store S[i]. The probability of having c items of product P[i ]in the basket, given that there is at
least one item of product P[i ]in the basket, is given in equation (26):
P(c)=P[a]^(c−1)(1−P[a]) (26)
The probability of seeing each of the set of observables C[OBS ]is evaluated within the P(c) in equations (27)-(31) as:
P(C[OBS])=P(C(D1,T1))*P(C(D1,T2))*P(C(D2,T1))*P(C(D3,T2)) (27)
P(C(D1,T1))=P[a]^(C(D1,T1)−1)(1−P[a]) (28)
P(C(D1,T2))=P[a]^(C(D1,T2)−1)(1−P[a]) (29)
P(C(D2,T1))=P[a]^(C(D2,T1)−1)(1−P[a]) (30)
P(C(D3,T2))=P[a]^(C(D3,T2)−1)(1−P[a]) (31)
The likelihood function P(C[OBS]) in equation (27) is maximized using the Maximum Likelihood Method, in a similar manner as described above for P(B[OBS]) in equation (6) and P(b[OBS]|B[OBS]) in
equation (16), to solve for the parameters of U[C](t). P(C[obs]) is a function of the parameters of U[C](t), i.e. β, V[REG], V[PROMO1], V[PROMO2], C[CQ1], C[CQ2], C[CQ3], C[CQ4], as per equation
(21). The Maximum Likelihood Method resolves the parameter vector X: {β, V[REG], V[PROMO1], V[PROMO2], C[CQ1], C[CQ2], C[CQ3], C[CQ4]} associated with the count utility function U[C](t) using a similar iterative process, until the difference between the j-th value of P(C[obs]) and the (j+1)-th value of P(C[obs]) is less than an error threshold ε, or a stopping criterion has been reached, wherein iterative solutions of P(C[obs]) are no longer changing by an appreciable amount. The error threshold ε or stopping criterion is selected according to desired tolerance and accuracy of the solution.
Assume that the parameter vector X has reached a solution of values for β, V[REG], V[PROMO1], V[PROMO2], C[CQ1], C[CQ2], C[CQ3], C[CQ4], using the Maximum Likelihood Method, which provides the
maximum value for the likelihood function P(C[obs]). With the parameters for U[C](t) now known, the utility function U[C](t) is thus defined as per equation (23), and the expected value of count is
defined as per equations (21) and (22). Each product P[i ]will have its own solution for U[C](t). Since U[C](t) is a function of time, the expected value of count for product P[i ]can be modeled forward
and backward in time. Equation (22) can be evaluated for any time t to predict the expected value of count for product P[i].
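Equations (26)-(31) describe a geometric distribution over the in-basket count. The sketch below evaluates the likelihood of the observed counts and the implied expected count; the mean 1/(1 − P[a]) is the standard mean of a geometric distribution, stated here as an assumption consistent with equation (26), and P[a]=0.5 is illustrative:

```python
def count_pmf(c, Pa):
    """P(c) = Pa^(c-1)·(1 - Pa), per equation (26), for counts c >= 1."""
    return Pa ** (c - 1) * (1 - Pa)

def expected_count(Pa):
    """Mean of the geometric distribution: expected items per qualifying
    basket is 1/(1 - Pa)."""
    return 1.0 / (1.0 - Pa)

def count_likelihood(C_obs, Pa):
    """P(C_OBS): product of P(c) over observed counts, per equation (27)."""
    out = 1.0
    for c in C_obs:
        out *= count_pmf(c, Pa)
    return out

# Observed counts of product P1 from Table 1; Pa = 0.5 is illustrative
print(count_likelihood([1, 4, 5, 2], 0.5))
print(expected_count(0.5))
```

Maximizing count_likelihood over the parameter vector X (through P[a ]as a function of U[C](t)) mirrors the procedure used for traffic and share.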
With the expected values of customer traffic, share, and count evaluated in terms of a plurality of parameters, the promotional model 14 is defined as the expected value of unit sales for product P[i ]as per equation (1). The promotional model 14 gives retailer 10 the ability to forecast or predict economic models, given proposed sets of input data or what-ifs, which have significant impact on
its business.
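Combining the three calibrated factors per equation (1) reduces to a product; the numbers below are purely illustrative:

```python
def expected_unit_sales(traffic, share, count):
    """Equation (1): expected unit sales of product P_i is the product of
    expected traffic, expected share, and expected count."""
    return traffic * share * count

# E.g. 1000 expected baskets, 5% containing P_i, 1.8 items per such basket
print(expected_unit_sales(1000.0, 0.05, 1.8))
```

Because each factor is a function of time through its utility function, the same product can be evaluated at any past or future time t.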
FIG. 5 illustrates a set of graphs of the time series of the expected value of unit sales for one product P[i]. Plot 72 represents actual unit sales of product P[i ]sold in store S[i ]over the time period. Plot 74 represents the forecast of product P[i ]under promotion using promotional model 14. Plot 76 represents the baseline (no promotion) of product P[i]. Plot 78 represents a moving average of unit sales of product P[i ]from plot 72. Report 16 may include FIG. 5 as part of its forecast information.
Promotional model 14 also provides additional information to retailer 10, in the form of a table or chart, including (1) baseline of product P[i], such as shown in plot 40, (2) observables B[OBS], b[OBS]|B[OBS], and C[OBS ]determined from Table 1, and (3) values of the drivers of U[T](t), U[S](t), and U[C](t). The additional information is used to resolve and
understand the promotional variation per unit time period seen in FIG. 5. For example, retailer 10 can analyze the difference between V[REG ]and V[PROMO ]at specific points in time to help explain
movement in the model.
In another embodiment, combinations of one or more of the expected values of customer traffic, share, and count can be modeled individually or collectively. For example, model 14 may consider the
expected value of customer traffic alone, or the expected value of share alone, the expected value of count alone, or the combination of expected values of customer traffic and share. The orientation
of the model will be determined in part by the forecast request made by retailer 10.
The above discussion has considered the TLOG data from Table 1 as one embodiment of the data observable from customer responses. The observable data can originate from other sources and with
different characteristics, format, and content. In general, the observable data can be categorized as customer response data, causal data, and other data which influences customer buying decisions.
Customer response data includes the TLOG data from the checkout counter of customer purchases in the basket, as discussed above in the solution of the expected values of traffic, share, and count.
Other sources of customer response data include direct and indirect observations or detection of the customer interest in one or more products. The detection of customer 24 interest in product P[i ]
may be made by observing the customer handle the product. If a particular display or shelf storage of product P[i ]is regularly found in disarray, even after housekeeping, then it is reasonable to
conclude that the customers have been showing sufficient interest to handle the product and read the label, ingredients, and benefits. The store may even use motion sensors to detect where and for
how long customers are spending their time. Individual products, especially high price items, may be given sensors to detect when each product has been handled by the customer for consideration of
purchase. The store camera system may detect congregations of customers around certain products or displays over time. The camera system can detect eye movement of the customers to see if they are
focusing on certain products. Even though the customer did not select the product for purchase, the interest in the product and probability of future purchases may be high.
Another source of customer response data is customer surveys and body counts. The store may entice customers with coupons, gifts, and prizes to participate in surveys which ask what product(s) they
have come to shop for, interest in specific products, and their opinions on the store and its products. The store may electronically count customers entering and leaving the store over time to get a
temporal distribution of store traffic. The store camera system or sensors may detect how many customers are in each aisle or part of the store over time.
Causal data is another form of observable data. Causal data relates to various attributes that cause or induce the customer buying decision. Product attributes include brand, size, color, and form
factor. Location attributes involve store, website, and catalog. Customer attributes include loyalty cards and demographics. Promotional attributes include advertisement, price, and incentives.
Competitor data considers relative market share, competing price and promotions, store density per geographic area, and product assortment. Calendar data may involve merchandising; e.g. store layout,
allocation of space, and display timing; inventory; promotion and price calendar; and demand planning, e.g. holidays, customer cycles, special events, weather, store density per geographic area.
Credit card history and loyalty cards provide significant information related to store and product selections. Priors are another type of data which provide information about the expected
distribution of parameter values within the model, which may be based on information from prior purchases of similar or different products, mean and standard deviation data from prior transactions,
and general trends in the relevant marketplace.
The different types of observable data, customer response data, causal data, and other data related to the customer buying decision, provide significant insight into determining the customer's
interests, preferences, likes, dislikes, quirks, and responses to store promotion, layout, product selection, presentation, and arrangement. The process of collecting and analyzing all types of
observable data is important to servicing the customer's needs and increasing store profitability.
The observable data can readily be applied to solving parameters which define promotional model 14. As previously discussed, the model comprises one or more factors which relate to the customer
buying decision. The factors contain one or more parameters which define the behavior of the model. All of the types of data observable from the customer buying decision, as discussed above, can be
used to solve the parameters which in turn completes the model.
FIG. 6 illustrates a summary of creating promotional model 14. Block 80 represents observable data, e.g. customer response data, causal data, priors. Block 82 performs a segmentation process on the
observable data to prepare the data for presentation to the modeling program. The segmentation process filters and separates the observable data by location, product, promotion, time, customers,
etc., based on a set of logical rules applied to one or more of the data elements included in each transaction. These rules may include restricting the store, product, customer, price, or promotion
values to a logical subset of the possible values. The filters may also restrict the customers to those belonging to a specific customer profile. The net result is that the models generated from the
segmented data will describe the customer behavior displayed by that particular market segment. Such information may be of great value to retailers seeking to enhance performance through marketing to
customer segments or identifying business strategies based on knowledge of customer information.
Block 84 performs an aggregation of the segmented data to prepare the data for modeling. The aggregation phase may transform input data at the rawest granularity to a higher granularity in accordance
with the segmentation groupings outlined above. The transformation may include simple summations over the groups, or a more complex functional or logical form. Throughout the discussion, “store” is
understood to refer to one of the following: a single sales outlet, location or channel, a specified set of outlets, locations or channels, or a logical grouping of outlets, locations or channels.
Similarly, “product” is understood to refer to a single item or service, a specific set of items and/or services, a logical grouping of items or services. The “product” may correspond to a single
scan item in a Point of Sale system, or it may correspond to multiple scan items. Customers and promotions may be similarly grouped and labeled for the purposes of data processing and/or modeling.
Time periods are arbitrary but presumed uniform, e.g., hour, day, week, etc. The aggregation process allows observable data to be grouped within stable and relatively constant causal indicators.
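The segmentation and aggregation phases of blocks 82 and 84 can be sketched in a few lines. The line-item fields, the filter rule, and the grouping keys below are hypothetical, chosen only to show raw TLOG rows being filtered and then rolled up to store/product/day totals:

```python
from collections import defaultdict

# Hypothetical TLOG line items: (store, product, day, transaction, qty, price, promo).
tlog = [
    ("S1", "P1", "D1", "T1", 1, 1.50, "PROMO1"),
    ("S1", "P1", "D1", "T2", 1, 1.50, "PROMO1"),
    ("S1", "P2", "D1", "T2", 1, 3.00, None),
    ("S2", "P1", "D1", "T3", 2, 1.50, None),
]

# Segmentation: filter the observable data by a logical rule (here, one store).
segment = [row for row in tlog if row[0] == "S1"]

# Aggregation: transform the rawest granularity up to (store, product, day) unit totals.
units = defaultdict(int)
for store, product, day, trans, qty, price, promo in segment:
    units[(store, product, day)] += qty

print(dict(units))  # {('S1', 'P1', 'D1'): 2, ('S1', 'P2', 'D1'): 1}
```

Models calibrated from `segment` then describe only the behavior of that market segment, as described for block 82.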
Block 86 calibrates the model by solving for the unknown parameters. Block 88 stores the parameters on hard disk 54 or other accessible database for later use by customer response model 14.
In FIG. 7, the proposed model data (proposed prices, promotions, time period, etc.) and solved model parameters from blocks 90 and 92 are executed by the model. Customer response model 14 runs the
forecast in block 94 using the proposed model data and generates report 16 in block 96.
In FIG. 8, a promotional evaluation builds a model baseline in block 100 using the solved parameters and historical data. The model baseline represents expected sales without promotion. Block 102
compares the model baseline to actuals to determine lift in unit sales attributed to the promotion. Block 104 generates report 16.
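The baseline-versus-actuals comparison in blocks 100-104 reduces to a per-period subtraction; the numbers below are invented purely for illustration:

```python
def promotional_lift(baseline_units, actual_units):
    # Block 102: lift in unit sales attributed to the promotion is the
    # actuals minus the model baseline (expected sales without promotion).
    return [a - b for a, b in zip(actual_units, baseline_units)]

baseline = [100, 102, 98]   # modeled units per day, no promotion
actuals = [130, 125, 110]   # observed units per day during the promotion
print(promotional_lift(baseline, actuals))  # [30, 23, 12]
```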
The process of forecasting a promotional model using observable data from customer purchase decisions is shown in FIG. 9. Step 120 provides data observable from customer responses. The observable
data includes transaction, product, price, and promotion. Step 122 provides an expected value of customer traffic within a store in terms of a first set of parameters. The expected value of customer
traffic is an exponential function of the first set of parameters. A likelihood function using the Poisson distribution is also defined for the customer traffic. Step 124 solves the first set of
parameters through an iterative estimation process using the likelihood function and the observable data. Step 126 provides an expected value of selecting a product in terms of a second set of
parameters. The expected value of selecting a product is comprised of exponential functions of the second set of parameters. A likelihood function using the Binomial distribution is also defined for
selecting a product. Step 128 solves the second set of parameters using the likelihood function and the observable data. Step 130 provides an expected value of quantity of selected product in terms
of a third set of parameters. A likelihood function is also defined for the quantity of selected product. Step 132 solves for the third set of parameters using the likelihood function and the
observable data. Step 134 provides a promotional model as a product combination of the expected value of customer traffic and the expected value of selecting a product and the expected value of
quantity of selected product. The promotional model is used to generate report 16 for retailer 10.
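A minimal sketch of the product combination in step 134 follows. The parameter values are illustrative, not calibrated from data, and the logistic form of the selection probability (built from exponentials of its parameters) is an assumption made for the sketch:

```python
import math

# Illustrative (not calibrated) parameter values for the three factors.
traffic_params = {"base": 5.0, "weekend_lift": 0.2}   # customer traffic
select_params = {"base": -2.0, "promo_lift": 0.8}     # product selection
qty_mean = 1.3  # expected units per basket that contains the product

def expected_traffic(weekend):
    # Expected store traffic: an exponential function of the first set
    # of parameters (steps 122-124).
    return math.exp(traffic_params["base"] + traffic_params["weekend_lift"] * weekend)

def selection_probability(promo):
    # Probability that a visiting customer's basket contains the product,
    # built from exponentials of the second set of parameters (steps 126-128).
    z = math.exp(select_params["base"] + select_params["promo_lift"] * promo)
    return z / (1.0 + z)

def forecast_units(weekend, promo):
    # Step 134: product combination of the three expected values.
    return expected_traffic(weekend) * selection_probability(promo) * qty_mean

print(round(forecast_units(weekend=0, promo=1), 1))  # ≈ 44.7 forecast units
```

Raising `promo` or `weekend` raises the forecast, which is the qualitative behavior the calibrated model would exhibit.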
The analysis of report 16, as generated by model 14, helps explain the effect of promotional programs on unit sales, revenue, and profitability. Retailer 10 can use promotional model 14 to forecast
unit sales per product over time, under a variety of promotional programs, and combinations of promotional programs. Understanding the cause and effect behind promotional offerings is important to
increasing the profitability of the retail stores. The promotional model addresses effective analysis techniques for various promotions, in terms of forecasting and backcasting, and provides tools
for a successful, scientific approach to promotional programs with a high degree of confidence. Although unit sales in the retail environment have been discussed in detail as the economic model,
promotional model 14 is applicable to many other economic, scientific, and commercial decision processes. The model can be used for business planning, assessing a variety of what-if scenarios,
business optimization, and evaluation of historical events to compare actuals to forecast and actuals to baseline data.
Another economic model which is useful in the retail business is described in FIG. 10. Components of FIG. 10 having a similar function are assigned the same reference number used in FIG. 1. Retailer
10 uses business plan 12 to collect and analyze transaction data, evaluate alternatives, run forecasts, and make operational decisions. From business plan 12, retailer 10 provides certain observable
data and assumptions, and receives back specific forecasts and predictions from affinity and cannibalization model 140. The model performs a series of complex calculations and mathematical operations to predict and forecast the effect on product P[i] attributed to product P[j] which is on promotion. Product P[i] may or may not be on promotion. The affinity and cannibalization model 140 may also interact with promotional model 14 to exchange, format, and evaluate data. The output of affinity and cannibalization model 140 is a report, chart, table, or other analysis 142, which represents the model's forecasts and predictions based on the given set of data and assumptions. Report 142 is made available to business plan 12 so that retailer 10 can make promotional and operational decisions.
The method presented herein attempts to accurately predict the fiscal impact of affinity and cannibalization (AC) with a practical and scalable solution applicable to the complexities and data
volumes found in the retail industry. The features of a practical system for estimating the fiscal impact of AC are: (1) model for cause and effect of changes in sales of individual products, (2)
model relating correlated changes in sales of pairs or groups of products, (3) observable data including transaction line-item detail to calibrate models, and (4) flexible processing system to
process data, generate, and evaluate the models.
In the overall process, a first function picks up the TLOG data and inputs it into a segmentation function described above. The segmentation function essentially filters the data according to logical
rules. After segmentation, the data may be aggregated to any logical groupings in the primary dimensions: store, product, time, customer, and promotion. Finally, a process extracts the relevant
co-occurrence data within the segment and logical groupings discussed above. Co-occurrence data describes the number and frequency of transaction events containing both products and each product
individually. The data is input into the modeling process, which both characterizes the discovered relationships and also calibrates the model. The modeling process may be run multiple times for
various market segments and aggregation levels. The output of each run is a set of parameter values which characterizes AC relationships, and is input to a forecasting process which generates
estimates (forecasts and/or backcasts) of the fiscal impact of the AC effect.
Two differences between the present method and the other methods discussed in the background are the ability to generate estimates of fiscal impact, and also to characterize and estimate
cannibalization relationships. In addition, the method captures the drivers of cause and effect at both the product level and the relationship level. These drivers include but are not limited to
seasonality, price, promotion, product availability (inventory, product introduction/discontinuation), product visibility (merchandising), and competitor strategy (price image). These drivers may be
captured through the modeling process, which can be based on powerful Maximum Likelihood methods for nonlinear parameter estimation, and provides a rigorous framework for estimating the impact of
2-body product interactions. The framework supports the incorporation of non-observable data such as prior information of parameter probability distributions. Finally, the method is flexible enough
to accept as input and leverage any methods for market segmentation of data and aggregation for strategic reporting, evaluation, and decision support.
It is possible for sales of product P[j] to induce sales of another product P[i], or for sales of product P[j] to hinder sales of another product P[i]. Product P[j] is considered the driver product and product P[i] is the product being driven. When a sale of product P[j] causes or induces customer 24 to also buy product P[i], the relationship is referred to as affinity. When a sale of product P[j] impedes or causes customer 24 to forgo purchase of product P[i], the relationship is referred to as cannibalization. Accordingly, the affinity or cannibalization relationship is a linkage between products P[i] and P[j]. In general, the relationship between products P[i] and P[j] will be either affinity or cannibalization, or there may be no linkage. The processes of affinity and cannibalization are important to understand when modeling unit sales of products, and for understanding the impact which a promotion assigned to one product may have on unit sales of another product.
Consider the example of the cordless drill. Customer 24 may make the decision to purchase the cordless drill to use in various home projects, or as part of his or her business. Cordless drills are
adapted to receive drill bits, screwdriver heads, and other attachments. Cordless drills have the advantage and convenience of drawing power from a battery source, which means there is no electrical
cord to plug into an electrical outlet. The purpose of the cordless drill is to drill holes and, with the proper attachments, to screw fasteners into building materials. When customer 24 makes the
decision to purchase the cordless drill, he or she will consider the variety of options also available for purchase. Customer 24 may need drill bits, screwdriver head, screwdriver bits, socket set,
screws, and extra batteries. The cordless drill can also operate attachments such as an alignment tool, grinder, wire brush, and buffer, which customer 24 may decide to buy. Retailer 10 may also offer a
tool chest to contain all the components, home improvement classes, and extended warranty service programs. Once customer 24 decides to buy the cordless drill, then the stage is set for projects
which make use of the tool, e.g. building shelves, installing ceiling fans, finishing a patio, replacing door hinges, just to name a few. Accordingly, the purchase of the cordless drill creates an
affinity for other products/services.
On the other hand, the cordless drill also has the opposite effect on other purchases. If customer 24 decides to purchase the cordless drill, he or she will likely forgo purchase of a drill having an
electrical cord. Moreover, customer 24 may decide to not purchase a hand-held screwdriver since the cordless drill will likely do the same job in less time and with greater ease. The purchase of the
cordless drill may create a cannibalization effect for other products/services.
The effects of affinity and cannibalization are even greater when product P[j] is placed on promotion. A promotion of product P[j] usually increases its unit sales. When product P[j] is placed on promotion and experiences an increase in unit sales as projected, then the lift in sales of product P[j] due to the promotion causes a corresponding increase in sales of product P[i] by the affinity linkage. The product P[i] having the affinity relationship with product P[j] should be modeled to understand the relationship and its effect on store operations. Retailer 10 should factor the increase in sales of product P[i], caused by the promotion of product P[j], into business plan 12 to plan promotions, accurately forecast revenue, and ensure that sufficient product P[i] is on-hand to meet customer demand.
Likewise, when product P[j] is placed on promotion and experiences an increase in unit sales as projected, then the increase in sales of product P[j] from the promotion may also cause a corresponding decrease in sales of product P[i] by any cannibalization linkage. The product P[i] having the cannibalization relationship with product P[j] should be modeled to understand the relationship and its effect on store operations. Retailer 10 should factor the decrease in sales of product P[i], caused by the promotion of product P[j], into business plan 12 to plan promotions, accurately forecast revenue, and ensure that product P[i] does not end up over-stocked. Thus, the promotion of product P[j] has a multi-faceted impact on products P[i] and P[j].
From the discussion of the share model, b[i](t) is the number of transactions (baskets of merchandise) containing at least one product P[i] per store, per time period. The basket share model b[j](t) is the number of transactions containing at least one product P[j] per store, per time period. The affinity and cannibalization model uses co-occurrence of products P[i] and P[j] in the same basket, as observed from the TLOG data, to estimate the relationship between the pair of products. If products P[i] and P[j] are uncorrelated, then they may be found together in the same transaction with a certain base probability. If the observed occurrence of products P[i] and P[j] being found together in the same transaction is greater than the uncorrelated base probability by a statistically significant amount, then an affinity relationship exists between the products. The purchase of product P[j] influences the buying decision for customer 24 to also purchase product P[i]. If the observed occurrence of products P[i] and P[j] being found together in the same transaction is less than the uncorrelated base probability by a statistically significant amount, then a cannibalization relationship exists between the products. The purchase of product P[j] influences the buying decision for customer 24 to forgo purchase of product P[i].
An observed increased (decreased) probability of co-occurrence is not sufficient to quantify the affinity (cannibalization) relationship between the products. Typically the relationship will not be
symmetric and one product will be the dominant, or driver, product.
The relationship between changes of basket share models b[i](t) and b[j](t) is shown in equation (32). We define the change in the basket share models, Δb[i](t) and Δb[j](t), of products P[i] and P[j] due to a change in the price or promotion state of the driver product P[j] as the model share values before the change minus the model share values after the change. The changes in basket share models Δb[i](t) and Δb[j](t) are functions of products P[i] and P[j].
Δ(b[i](t))=α[ij]Δ(b[j](t)) (32)
The linear relationship in equation (32) shows that the change in baskets of b[i](t) is proportional to the change in baskets of b[j](t) by the constant of proportionality (α[ij]) between products P[i] and P[j]. In other words, the linear relationship between b[i](t) and b[j](t) relates baskets of the first product to baskets of the second product by the constant of proportionality. Consider the example where the transaction count of product P[j] increases by 10 per store per day due to a promotional lift. If the constant of proportionality α[ij] between products P[i] and P[j] is 0.1, then retailer 10 can expect to have 1 more transaction containing product P[i] per store per day, due to the promotion of product P[j].
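The worked example above, restated as a one-line function; the lift of 10 baskets and α[ij] = 0.1 are the values from the text:

```python
def induced_change(alpha_ij, delta_bj):
    # Equation (32): change in baskets containing P_i induced by a change
    # in baskets containing the driver product P_j.
    return alpha_ij * delta_bj

# A promotional lift of 10 baskets/store/day on P_j with alpha_ij = 0.1
# yields 1 extra basket containing P_i per store per day.
print(induced_change(0.1, 10))  # 1.0
```

A negative α[ij] would make the induced change negative, i.e. cannibalization.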
The linear relationship between changes in basket share models can be defined in terms of the following conditional probability Γ(P[i]|P[j]), see equations (33)-(34), where ¬P[j] denotes the absence of product P[j] from the basket and b̄[j] is the complementary basket share:
b[i]=Γ(P[i]|P[j])b[j]+Γ(P[i]|¬P[j])b̄[j] (33)
b[i]=Γ(P[i]|P[j])b[j]+Γ(P[i]|¬P[j])(1−b[j]) (34)
The conditional probability Γ(P[i]|P[j]) is total baskets containing products P[i] and P[j] over total baskets containing product P[j]. It is interpreted as the probability of purchasing P[i] given that P[j] is already selected for purchase. The conditional probability Γ(P[i]|¬P[j]) is total baskets containing product P[i] and no product P[j] over total baskets containing no product P[j]. The complementary basket share b̄[j]=1−b[j] corresponds to the transactions (baskets of merchandise) containing no product P[j] per store, per time period.
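Equations (33)-(34) are an instance of the law of total probability: baskets containing P[i] are split by whether they also contain P[j]. A small numeric check, on invented basket counts:

```python
# Invented counts over N baskets, split by presence of the driver product P_j.
N = 10
both = 1       # baskets containing both P_i and P_j
i_only = 1     # baskets containing P_i but not P_j
j_total = 2    # baskets containing P_j

b_i = (both + i_only) / N                   # share of baskets containing P_i
b_j = j_total / N                           # share of baskets containing P_j
gamma_given_j = both / j_total              # conditional probability given P_j
gamma_given_not_j = i_only / (N - j_total)  # conditional probability given no P_j

# Equation (34): law of total probability over presence/absence of P_j.
reconstructed = gamma_given_j * b_j + gamma_given_not_j * (1 - b_j)
print(b_i, reconstructed)  # 0.2 0.2
```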
As an approximation toward the solution of the constant of proportionality α[ij], assume that the conditional probabilities Γ(P[i]|P[j]) and Γ(P[i]|¬P[j]) are independent of the basket share model b[j]. Equation (32) is then derived from a Taylor series expansion of the functional dependence of b[i] on b[j]. The Taylor series is evaluated at the non-promotional state of b[i]. Each term in the expansion represents a deviation from this baseline state. The early term(s) in the Taylor series can be used to estimate and define the constant of proportionality. For example, the first derivative term of the Taylor series represents the slope of the function at x[0]. The first derivative can be used to estimate α[ij], as shown in equations (36)-(38); in particular, equation (37) estimates α[ij] as the difference Γ(P[i]|P[j])−Γ(P[i]|¬P[j]). Higher order terms can be used as well.
The constant of proportionality α[ij] can now be determined from the TLOG data. Consider the following set of TLOG data in Table 2:
TABLE 2
TLOG Data
Store Product Time Trans Qty Price Profit Promotion Customer
S1 P1 D1 T1 1 1.50 0.20 PROMO1 C1
S1 P1 D1 T2 1 1.50 0.20 PROMO1 C2
S1 P2 D1 T2 1 3.00 0.40 0 C2
S1 P2 D1 T3 1 3.00 0.40 0 C3
S1 P3 D1 T4 1 2.25 0.60 0 C4
S1 P4 D1 T5 1 2.65 0.55 0 C5
S1 P5 D1 T6 1 1.55 0.20 0 C6
S1 P6 D1 T7 1 5.00 1.10 0 C7
S1 P7 D1 T8 1 1.90 0.50 0 C8
S1 P8 D1 T9 1 3.30 0.65 0 C9
S1 P9 D1 T10 1 2.30 0.55 0 C10
The simplified example of TLOG data is shown in Table 2 for the purpose of illustrating the solution of the constant of proportionality α[ij] in affinity and cannibalization model 140. The first line item shows that on day/time D1 (date and time) store S1 had transaction T1 in which customer C1 purchased one product P1 at price 1.50. In transaction T2 on day/time D1, customer C2 has one product P1 at
price 1.50 each and one product P2 at price 3.00. In transaction T3, customer C3 has one product P2 at 3.00. The remaining transactions T4-T10 on day/time D1 in store S1 involved other products P3-P9
as shown.
The TLOG data in Table 2 further shows that products P1 in transactions T1 and T2 had promotion PROMO1. Again, to simplify the explanation, no other product in Table 2 has any promotion.
The probability of having products P1 and P2 in the same transaction, assuming that the products are uncorrelated, is the number of transactions containing product P1 over the total number of transactions times the number of transactions containing product P2 over the total number of transactions. The uncorrelated base probability is (2/10)*(2/10)=0.04. The actual probability of observing the products together is the number of transactions containing both products divided by the total number of transactions, i.e. (1/10)=0.1>0.04. Accordingly, the presumption is that these products are positively correlated, i.e., that there is an affinity relationship between them.
The conditional probability Γ(P2|P1) is the number of baskets with products P1 and P2 over the total number of baskets, divided by the number of baskets with product P1 over the total number of baskets. From the TLOG data in Table 2, Γ(P2|P1)=(1/10)/(2/10)=0.5. Γ(P2|¬P1) is total baskets containing product P2 and no product P1 over total baskets, divided by total baskets containing no product P1 over the total number of baskets. From the TLOG data in Table 2, Γ(P2|¬P1)=(1/10)/(8/10)=0.125. From equation (37), α[P2P1] is Γ(P2|P1)−Γ(P2|¬P1)=0.5−0.125=0.375. Since α[P2P1] is greater than zero, the constant of proportionality α[P2P1] defines an affinity relationship between products P1 and P2. If α[ij] had been less than zero, then the products would have a cannibalization relationship. An offset threshold τ may be assigned to the constant of proportionality such that an affinity relationship exists if α[ij] is greater than τ, and a cannibalization relationship exists if α[ij] is less than −τ. In practice, the offsets for affinity and cannibalization may be asymmetric.
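The arithmetic in the preceding two paragraphs can be reproduced directly from the Table 2 transactions; only the transaction-to-product pairs are needed:

```python
from collections import defaultdict

# Table 2 transactions mapped to the products they contain.
pairs = [("T1", "P1"), ("T2", "P1"), ("T2", "P2"), ("T3", "P2"),
         ("T4", "P3"), ("T5", "P4"), ("T6", "P5"), ("T7", "P6"),
         ("T8", "P7"), ("T9", "P8"), ("T10", "P9")]
baskets = defaultdict(set)
for trans, product in pairs:
    baskets[trans].add(product)

n = len(baskets)                                                   # 10 baskets
with_p1 = sum(1 for b in baskets.values() if "P1" in b)            # 2
with_p2 = sum(1 for b in baskets.values() if "P2" in b)            # 2
with_both = sum(1 for b in baskets.values() if {"P1", "P2"} <= b)  # 1

base = (with_p1 / n) * (with_p2 / n)  # uncorrelated base probability
actual = with_both / n                # observed co-occurrence probability

gamma = with_both / with_p1                        # conditional prob. given P1
gamma_not = (with_p2 - with_both) / (n - with_p1)  # conditional prob. given no P1
alpha = gamma - gamma_not                          # constant of proportionality

print(round(base, 2), actual, alpha)  # 0.04 0.1 0.375 -> affinity (alpha > 0)
```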
Returning to equation (32), Δb[P1](D1) is the change in b[P1](D1), usually an increase in basket share, associated with promotion PROMO1 of product P1 over time period D1, i.e. the promotional lift. With the constant of proportionality α[P2P1] known from the TLOG data, the expectation of Δb[P2](D1) can be determined. If Δb[P1](D1) changes by 10 due to promotion PROMO1, then Δb[P2](D1) changes by 10*0.375=3.75 due to the promotion PROMO1 on product P1.
The process of forecasting an affinity and cannibalization model using observable data from customer purchase decisions is shown in FIG. 11. Step 150 provides a linear relationship between functions
related to first and second products. The linear relationship includes a constant of proportionality between the functions related to the first and second products. The linear relationship between
the functions related to the first and second products relates basket share of the first product to basket share of the second product by the constant of proportionality, i.e. the linear relationship
is given by Δb[i](t)=α[ij]Δb[j](t), where α[ij] is the constant of proportionality. Step 152 expresses the linear relationship using a Taylor series expansion. Step 154 expresses the constant of proportionality using a first derivative term of the Taylor series expansion. Step 156 solves for the constant of proportionality using data observable from customer transactions. The data observable
from customer transactions includes transaction, product, and promotion. In step 158, the constant of proportionality represents an affinity relationship between the first and second products if the
constant of proportionality is greater than zero. In step 160, the constant of proportionality represents a cannibalization relationship between the first and second products if the constant of
proportionality is less than zero.
The analysis of report 142, as generated by affinity and cannibalization model 140, helps explain the effect of affinity and cannibalization on customer response in terms of unit sales, revenue, and
profitability. Retailer 10 can use model 140 to forecast unit sales per product over time, under a variety of promotional programs, and combinations of promotional programs. More specifically,
retailer 10 can understand the effect on unit sales of one product as influenced by a promotional offering on another product, where there is an affinity or cannibalization relationship between the
two products. Understanding the cause and effect behind promotional offerings and inter-product relationships is important to increasing the profitability of the retail stores. The promotional model
addresses effective analysis techniques for various promotions, in terms of forecasting and backcasting, and provides tools for a successful, scientific approach to promotional programs with a high
degree of confidence. Although unit sales in the retail environment have been discussed in detail as the economic model, affinity and cannibalization model 140 is applicable to many other economic,
scientific, and commercial decision processes.
While one or more embodiments of the present invention have been illustrated in detail, the skilled artisan will appreciate that modifications and adaptations to those embodiments may be made without
departing from the scope of the present invention as set forth in the following claims.
A limiting product formula for the exponential function
By the way, does anyone know how to prove in an elementary way (i.e. expanding) that $\prod_1^n (1+a_i r)$ tends to $e^r=\sum \frac{r^k}{k!}$ as you let $\max|a_i|\to 0$ with $0\leq a_i \leq 1$ and $\sum a_i = 1$? An easy solution goes by writing the product with the exponential function so that you get the exponential of $\sum \log(1+a_i r) = \sum \int_0^1 \frac{a_i r}{1+s a_i r}\, ds$.
You can then integrate by parts (i.e. Taylor expand) to obtain $\sum a_i r - \sum \int_0^1 (1-s)\frac{(a_i r)^2}{(1+s a_i r)^2}ds$. Now, $\sum a_i r = r$ is the main term. After you take $\max|a_i|$ to be less than $.5/|r|$, the error term is bounded in absolute value by $C \sum |a_i r|^2 \leq C\max|a_i|\cdot \sum |a_i| |r|^2 = C |r|^2 \max |a_i|$.
I was hoping to find an elementary proof of this convergence by expanding the product $\prod_1^n (1+a_i r)$ and gathering terms with a common power of $r$. In particular, it would be nice to prove
the convergence of this limit without the exponential function, since then the limit could be considered a definition of $e^r$. The case when all of the $a_i$ are equal is done in Rudin's "Principles
of Mathematical Analysis".
The motivation for this problem comes from compound interest, which I described in a different thread here: Generalizing a problem to make it easier .
Apply the inequality between geometric and arithmetic mean to both the product and its reciprocal, that can be written in analogous form; then use the sandwich theorem and $(1+r/n)^n\to e^r$. –
Pietro Majer Jul 1 '11 at 17:25
Just from the fact that $e^x$ has slope $1$ at $(0,1)$, for any $\epsilon > 0$, if $a$ is sufficiently small then $e^{(1-\epsilon)a_ir} < 1 + a_ir < e^{(1+\epsilon)a_ir}$ whenever $0 < a_ir < a$. So multiply these together. – Michael Greenblatt Jul 1 '11 at 17:34
I think the two suggestions above don't quite meet the demands of the question (that you should just expand and look at coefficients of separate powers). – gowers Jul 1 '11 at 20:28
@gowers Well, showing $e^{x + y} = e^xe^y$ from the power series is not too bad, as are the inequalities used above. But I agree I did take a few liberties on his suggested path. – Michael
Greenblatt Jul 1 '11 at 20:50
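None of the following code is from the original thread; it is a quick numerical sanity check (in Python, with parameter choices of mine) that $\prod_i(1+a_ir)$ approaches $e^r$ as $\max_i a_i \to 0$, both for uniform and for non-uniform weights:

```python
import math

def product_limit(weights, r):
    """Compute prod_i (1 + a_i * r) directly, without exp/log."""
    prod = 1.0
    for a in weights:
        prod *= 1.0 + a * r
    return prod

n = 100_000
r = 1.5

# Uniform weights a_i = 1/n: the classical (1 + r/n)^n case done in Rudin.
uniform = [1.0 / n] * n

# Non-uniform weights a_i proportional to i, still summing to 1,
# with max a_i = 2/(n+1) -> 0.
total = n * (n + 1) / 2.0
skewed = [i / total for i in range(1, n + 1)]

err_uniform = abs(product_limit(uniform, r) - math.exp(r))
err_skewed = abs(product_limit(skewed, r) - math.exp(r))
```

Both errors are of order $\max_i a_i$, consistent with the $C|r|^2\max|a_i|$ bound sketched in the question, and for $r>0$ the products stay below $e^r$ since $1+x \le e^x$.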
4 Answers
$\prod_{i=1}^n (1+a_ir)=1+\sum_{k=1}^n r^k\sum_{i_1 < \ldots < i_k}a_{i_1}\ldots a_{i_k}$.
Notice that $1^k=\left(\sum_{i=1}^n a_i\right)^k =k!\sum_{i_1 < \ldots < i_k}a_{i_1}\ldots a_{i_k}+\text{other terms}$, where the other terms are (positive) terms with a repeated $a_i$.
It follows that the $k$th term in the original sum is at most $1/k!$ so this gives the upper bound.
To show the original claim it suffices to bound the terms with repeated $a_i$'s in terms of $\delta=\max a_i$. More specifically, given $\epsilon>0$ and $k$, it suffices to show that for $\delta$ small enough, any finite sequence of positive numbers summing to 1 whose maximum is less than $\delta$ satisfies $$\left(\sum a_i\right)^k -k!\sum_{i_1 < \ldots < i_k}a_{i_1}\ldots a_{i_k} < \epsilon.$$

This summation consists of terms of a number of "types": e.g. $(2,1,1,1,1,1)$ represents terms in which one $a_i$ occurs twice and 5 other $a_i$'s occur once each; $(5,3,2,1,1)$ represents terms of the form $a_{i_1}^5a_{i_2}^3a_{i_3}^2a_{i_4}a_{i_5}$, where there is no longer a requirement that the $i_j$ are increasing; instead the $i_j$ should be increasing within the terms that have the same power.

For a fixed $k$, the number of types is finite, so it suffices to show that for each type, the contribution goes to 0 uniformly as $\delta\to 0$. Clearly for the type $(p_1,p_2,\ldots,p_r)$ you can bound the term $a_{i_1}^{p_1}\ldots a_{i_r}^{p_r}$ by $\delta^{\sum p_i-r}a_{i_1}\ldots a_{i_r}$. Let $\Delta=\sum p_i-r$ (this is at least 1 for all non-trivial types). The summation is then bounded above by $\delta^\Delta\sum_{a_{i_j}\text{ distinct}}a_{i_1}\ldots a_{i_r}$, which is at most $\delta^\Delta$.
I've been away from my computer all day, but the above is essentially the argument I had in mind ... – gowers Jul 1 '11 at 20:27
Thanks! I wrote your argument in detail by induction below, but this was exactly the idea I was looking for. – Phil Isett Jul 2 '11 at 2:57
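The following is not part of the thread either; it numerically illustrates the accepted answer. The coefficient of $r^k$ in $\prod_i(1+a_ir)$ is the elementary symmetric function $e_k$ of the weights, so $k!\,e_k$ should be at most $1$ and should approach $1$ when $\max a_i$ is small (the weight sequence and truncation degree below are illustrative choices of mine):

```python
import math

def elementary_symmetric(weights, K):
    """e_k = sum over i_1 < ... < i_k of a_{i_1} ... a_{i_k}, for k = 0..K."""
    e = [0.0] * (K + 1)
    e[0] = 1.0
    for a in weights:
        # update from high degree down so each weight is used at most once per term
        for k in range(K, 0, -1):
            e[k] += a * e[k - 1]
    return e

n, K = 5000, 6
total = n * (n + 1) / 2.0
weights = [i / total for i in range(1, n + 1)]  # sum to 1, max weight ~ 2/n

e = elementary_symmetric(weights, K)
scaled = [math.factorial(k) * e[k] for k in range(K + 1)]  # each <= 1 and close to 1
```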
Thanks, Anthony, for finding this solution. I was completely at a loss for how to handle all the indices. If you don't mind, I would like to write down one version of the argument that
you've given in full detail.
Claim: Under the hypotheses of the question $1 = k! \sum_{i_1 < \ldots < i_k} a_{i_1} \cdots a_{i_k} + O(\max |a_i|) $ where the error is non-negative.
The claim is true without an error when $k = 1$, and follows by induction. If we write $1 = (\sum a_i)^{k+1} = (\sum a_i)(\sum a_i)^k$, the induction hypothesis allows us to write this product as $(\sum a_i)\cdot(k! \sum_{i_1 < \ldots < i_k} a_{i_1}\cdots a_{i_k} + O(\max |a_i|)) = (\sum a_i)\cdot (k! \sum_{i_1 < \ldots < i_k} a_{i_1}\cdots a_{i_k}) + O(\max |a_i|)$.

If we now distribute out the product, we get the term we want, $(k+1)! \sum_{i_1 < \ldots < i_k < i_{k+1}} a_{i_1} \cdots a_{i_{k+1}}$, from the products with no repeats, and then an error coming from products with exactly one term repeated. Take whichever term is repeated and bound one copy of it in absolute value by $\max |a_i|$. Then the error is bounded by $\max |a_i| \left( \sum |a_i| \right)^k = O(\max |a_i|)$.
Having this claim established and looking slightly more carefully at the dependence of the error on $k$ (the constant in the big O only grows like $C^k$), we have also proved the convergence
that I was looking for (and we don't need non-negativity of the terms; just that $\sum |a_i|$ is bounded). In the non-negative case we can just observe the error is non-negative, so that
the dominated convergence theorem applies (with respect to the finite measure $\frac{|r|^k}{k!}$), giving a small shortcut and a soft way to see the convergence without a rate.
All credit goes to Anthony Quas for the idea; I just thought the induction was a fairly clear way to get the details all down.
I record this answer because I think that Pietro Majer's comment can be made into a solution which meets the proposer's criterion of potentially being able to be used to define $e^r$ ( I had
been thinking along the same lines, although I am not sure there would be an advantage over the usual $\lim_{n \to \infty} (1 + \frac{r}{n})^{n}$ definition). If $r > 0$, then the GM-AM
inequality gives $\prod_{i=1}^{n}(1+a_{i}r) \leq (1 + \frac{r}{n})^{n}$. If $r > 0$ and each $a_{i} < \frac{1}{r}$, we obtain $\prod_{i=1}^{n}(1-a_{i}r) \leq (1 - \frac{r}{n})^{n}$. Taking reciprocals in the second case and using standard approximations gives $\prod_{i=1}^{n}(1+ a_{i}r + \frac{a_{i}^{2}r^{2}}{1-a_{i}r}) \geq (1 +\frac{r}{n})^{n}$. Choose $\varepsilon > 0$, and suppose that $a_{i} < \frac{\varepsilon}{2}$ for each $i$, and that, furthermore, $1 - a_{i}^{2}r^{2} > \frac{1}{2}$ for each $i$. Then we obtain $\prod_{i=1}^{n}(1+ a_{i}r + \frac{a_{i}^{2}r^{2}}{1-a_{i}r}) \leq \left[\prod_{i=1}^{n}(1 + a_{i}r)\right]\cdot(1 + \frac{r^{2}\varepsilon}{n})^{n}$, so that $\lim_{\max(a_{i}) \to 0} \prod_{i=1}^{n} (1 + a_{i}r) \geq \lim_{n \to \infty}(1 + \frac{r}{n})^{n}$. A similar argument can be devised for $r < 0$.
Knowing $\lim_{N\rightarrow\infty} \left(1+\frac{r}{N}\right)^N=e^r$ (the case where all $a_i$ are equal) we can use continuity and $\left(1+\frac{r}{N}\right)^{Na_i}\sim 1+a_ir$ if $N$ is very large and $a_i\rightarrow 0$.
ESL Holiday Lessons.com
Pi Day is on March 14. This is the day when mathematicians and geometry enthusiasts around the world get together and celebrate the mathematical constant of Pi. There are two good reasons to hold Pi
Day on March 14. The first is that in the American date format, it is 3/14 and Pi, to two decimal places, is 3.14. The other reason is that March 14 is Albert Einstein’s birthday, and he knew a thing
or two about numbers. Pi Day started in 1988 at the San Francisco Exploratorium. The museum staff walked around the circular spaces in the building eating fruit pies. Cherry and apple pies have very
little to do with Pi, unless you want to calculate the circumference of your fruity treat.
Pi is a letter of the Greek alphabet. The symbol was first used in mathematical calculations in 1737 by the Swiss mathematician Leonhard Euler. It is used for the ratio of the circumference of a
circle to its diameter. Computers have calculated Pi to over one trillion digits. It has countless numbers of digits. Perhaps more impressive is the feat of British autism sufferer Daniel Tammet. He
holds the European record for reciting Pi from memory to 22,514 digits in just over five hours. He also suffers from a condition in which he sees numbers as colours. He says Pi is particularly
beautiful. The worlds of science, engineering and mathematics would be lost without it. Celebrate today with your favourite pie!
Sources: http://www.wikipedia.org/ and assorted sites.
Match the following phrases from the article.
Paragraph 1
1. geometry enthusiasts around a. two about numbers
2 There are two good reasons to hold b. places
3. …to two decimal c. the circular spaces
4. he knew a thing or d. the world get together
5. staff walked around e. circumference
6. calculate the f. Pi Day on March 14
Paragraph 2
1. Pi is a letter of a. circle to its diameter
2 The symbol was first used in b. to 22,514 digits
3. the ratio of the circumference of a c. mathematical calculations
4. It has countless d. beautiful
5. reciting Pi from memory e. the Greek alphabet
6. He says Pi is particularly f. numbers of digits
Pi Day is on March 14. This __________________ mathematicians and geometry enthusiasts around the world get together and celebrate the mathematical constant of Pi. There are __________________ hold
Pi Day on March 14. The first is that in the American date format, it is 3/14 and Pi, __________________ places, is 3.14. The other reason is that March 14 is Albert Einstein’s birthday, and he
__________________ two about numbers. Pi Day started in 1988 at the San Francisco Exploratorium. The museum staff walked around __________________ in the building eating fruit pies. Cherry and apple
pies have very little to do with Pi, unless you want to calculate the circumference __________________.
Pi __________________ Greek alphabet. The symbol was first used in mathematical calculations in 1737 by the Swiss mathematician Leonhard Euler. It is used __________________ circumference of a circle
to its diameter. Computers have calculated Pi to over one trillion digits. It has __________________ of digits. Perhaps more impressive is the feat of British autism sufferer Daniel Tammet. He holds
the European __________________ Pi from memory to 22,514 digits in just over five hours. He also suffers from a condition in which he sees __________________. He says Pi is particularly beautiful.
The worlds of science, engineering and mathematics would be __________________. Celebrate today with your favourite pie!
Put the words into the gaps in the text.

Word bank: circular, constant, decimal, treat, particularly, digits, reciting, symbol, lost

Pi Day is on March 14. This is the day when mathematicians and geometry __________ around the world get together and celebrate the mathematical __________ of Pi. There are two good __________ to hold Pi Day on March 14. The first is that in the American date format, it is 3/14 and Pi, to two __________ places, is 3.14. The other reason is that March 14 is Albert Einstein's birthday, and he knew a thing or __________ about numbers. Pi Day started in 1988 at the San Francisco Exploratorium. The museum staff walked around the __________ spaces in the building eating fruit pies. Cherry and apple pies have very __________ to do with Pi, unless you want to calculate the circumference of your fruity __________.

Pi is a letter of the Greek alphabet. The __________ was first used in mathematical calculations in 1737 by the Swiss mathematician Leonhard Euler. It is used for the __________ of the circumference of a circle to its diameter. Computers have calculated Pi to over one trillion __________. It has countless numbers of digits. Perhaps more impressive is the __________ of British autism sufferer Daniel Tammet. He holds the European record for __________ Pi from memory to 22,514 digits in just over five hours. He also __________ from a condition in which he sees numbers as colours. He says Pi is __________ beautiful. The worlds of science, engineering and mathematics would be __________ without it. Celebrate today with your favourite pie!
Delete the wrong word in each of the pairs of italics.
Pi Day is on March 14. This is the day when mathematicians and geometry enthusiasm / enthusiasts around the world get together and celebrate the mathematical constant of Pi. There are two well / good
reasons to hold Pi Day on March 14. The first is that in the American date / dating format, it is 3/14 and Pi, to two decimal places, is 3.14. The other reasons / reason is that March 14 is Albert
Einstein’s birthday, and he knew a thing or two / things about numbers. Pi Day started in 1988 at the San Francisco Exploratorium. The museum stuff / staff walked around the circular spaces in the
building eating fruit pies. Cherry and apple pies have very lot / little to do with Pi, unless you want to calculation / calculate the circumference of your fruity treat.
Pi is a letter of the Greek alphabet / alphabets. The symbol was first used in mathematical calculations in 1737 by the Swiss mathematics / mathematician Leonhard Euler. It is used for the ratio of
the circumference of a round / circle to its diameter. Computers have calculated Pi to over one trillion digits. It has countless / counts numbers of digits. Perhaps more impressive is the feat /
feet of British autism sufferer Daniel Tammet. He holds the European record for reciting Pi from memory to 22,514 digital / digits in just over five hours. He also suffers from a condition in which
he sees numbers as colours. He says Pi is particular / particularly beautiful. The worlds of science, engineering and mathematics would be lost / loser without it. Celebrate today with your favourite pie!
Pi Day is on March 14. This is the (1) ____ when mathematicians and geometry enthusiasts around the world get together and celebrate the mathematical constant of Pi. There are two good (2) ____ to
hold Pi Day on March 14. The first is that in the American date format, it is 3/14 and Pi, to two (3) ____ places, is 3.14. The other reason is that March 14 is Albert Einstein’s birthday, and he
knew a thing or (4) ____ about numbers. Pi Day started in 1988 at the San Francisco Exploratorium. The museum staff walked around the (5) ____ spaces in the building eating fruit pies. Cherry and
apple pies have very little to do with Pi, unless you want to calculate the circumference of your fruity (6) ____.
Pi is a letter of the Greek alphabet. The (7) ____ was first used in mathematical calculations in 1737 by the Swiss mathematician Leonhard Euler. It is used for the ratio of the circumference of a
(8) ____ to its diameter. Computers have calculated Pi to over one trillion digits. It has countless numbers of (9) ____. Perhaps more impressive is the feat of British autism sufferer Daniel Tammet.
He holds the European record for reciting Pi from memory to 22,514 digits in just over five hours. He also (10) ____ from a condition in which he sees numbers as colours. He says Pi is particularly
beautiful. The (11) ____ of science, engineering and mathematics would be lost without (12) ____. Celebrate today with your favourite pie!
Put the correct words from this table into the article.
1. (a) daily (b) daytime (c) day (d) days
2. (a) reasonable (b) reasoning (c) reasons (d) reason
3. (a) decimal (b) dual (c) digital (d) decibel
4. (a) duo (b) pair (c) twice (d) two
5. (a) circling (b) circular (c) circles (d) circulate
6. (a) treat (b) threat (c) teat (d) tweet
7. (a) symbolic (b) symbol (c) symbolize (d) symbolism
8. (a) circular (b) circled (c) circle (d) circus
9. (a) digital (b) dig it (c) digit (d) digits
10. (a) suffering (b) suffers (c) suffered (d) suffers
11. (a) worlds (b) planets (c) globes (d) stars
12. (a) them (b) him (c) these (d) it
Spell the jumbled words (from the text) correctly.
Paragraph 1
1. get erhogtte
2. two good rsoasen
3. American date ofmrat
4. about esunrmb
5. walked around the iurcaclr spaces
6. lauectcla the circumference
Paragraph 2
7. The osmybl was first used
8. the oiatr of the circumference
9. one trillion itdgsi
10. the European redroc
11. he sees nsmuebr as colours
12. eincsec, engineering and mathematics
Number these lines in the correct order.
( ) together and celebrate the mathematical constant of Pi. There are two good reasons to hold Pi Day
( ) mathematics would be lost without it. Celebrate today with your favourite pie!
( ) the circular spaces in the building eating fruit pies. Cherry and apple pies have very little to do with
( ) its diameter. Computers have calculated Pi to over one trillion digits. It has countless numbers of
( ) reason is that March 14 is Albert Einstein’s birthday, and he knew a thing or two about
( ) digits. Perhaps more impressive is the feat of British autism sufferer Daniel Tammet. He holds the European
( ) on March 14. The first is that in the American date format, it is 3/14 and Pi, to two decimal places, is 3.14. The other
( ) record for reciting Pi from memory to 22,514 digits in just over five hours. He also suffers from a condition in which he sees
( ) Pi, unless you want to calculate the circumference of your fruity treat.
( 1 ) Pi Day is on March 14. This is the day when mathematicians and geometry enthusiasts around the world get
( ) numbers. Pi Day started in 1988 at the San Francisco Exploratorium. The museum staff walked around
( ) numbers as colours. He says Pi is particularly beautiful. The worlds of science, engineering and
( ) Pi is a letter of the Greek alphabet. The symbol was first used in mathematical calculations
( ) in 1737 by the Swiss mathematician Leonhard Euler. It is used for the ratio of the circumference of a circle to
With a partner, put the words back into the correct order.
│1. │the get geometry around world together enthusiasts │
│2. │14 March on Day Pi hold to reasons good two are There │
│3. │numbers about two or thing a knew he │
│4. │walked staff museum The spaces circular the around │
│5. │to pies do have with very Pi little apple │
│6. │letter a is Pi alphabet Greek the of │
│7. │digits to Computers over have one calculated trillion Pi │
│8. │numbers countless has It digits of │
│9. │22,514 Pi from memory digits to reciting │
│10.│your with today Celebrate pie favourite │
DISCUSSION (Write your own questions)
STUDENT A’s QUESTIONS (Do not show these to student B)
1. ________________________________________________________
2. ________________________________________________________
3. ________________________________________________________
4. ________________________________________________________
5. ________________________________________________________
6. ________________________________________________________
STUDENT B’s QUESTIONS (Do not show these to student A)
1. ________________________________________________________
2. ________________________________________________________
3. ________________________________________________________
4. ________________________________________________________
5. ________________________________________________________
6. ________________________________________________________
Write five questions about Pi Day in the table. Do this in pairs/groups. Each student must write the questions on his / her own paper.
Without your partner, interview other students. Write down their answers.
│ │ STUDENT 1 │ STUDENT 2 │ STUDENT 3 │
│ │ │ │ │
│ │_____________ │_____________ │_____________ │
│Q.1.│ │ │ │
│Q.2.│ │ │ │
│Q.3.│ │ │ │
│Q.4.│ │ │ │
│Q.5.│ │ │ │
Return to your original partner(s) and share and talk about what you found out. Make mini-presentations to other groups on your findings.
Write about Pi Day for 10 minutes. Show your partner your paper. Correct each other’s work.
1. VOCABULARY EXTENSION: Choose several of the words from the text. Use a dictionary or Google’s search field (or another search engine) to build up more associations / collocations of each word.
2. INTERNET: Search the Internet and find more information about Pi Day. Talk about what you discover with your partner(s) in the next lesson.
3. MAGAZINE ARTICLE: Write a magazine article about Pi Day. Write about what happens around the world. Include two imaginary interviews with people who did something on this day.
Read what you wrote to your classmates in the next lesson. Give each other feedback on your articles.
4. POSTER: Make your own poster about Pi Day. Write about what will happen on this day around the world.
Read what you wrote to your classmates in the next lesson. Give each other feedback on your articles.
Check your answers in "THE READING / TAPESCRIPT" section at the top of this page.
What is the role of contact geometry in the hamiltonian mechanics?
Let us assume someone is interested in the study of Hamiltonian mechanics.
What are good examples to illustrate to him the usefulness of contact geometry in this context?
On one hand Hamiltonian mechanics was long ago expressed in the language of symplectic geometry but, on the other hand, contact geometry is often presented as the brother of the symplectic one.
My question is:
In Hamiltonian mechanics, not necessarily only for Hamiltonians of mechanical type, what is the role played by contact geometry?
Any kind of suggestion is welcome.
4 Answers
In mechanics you often want to study systems whose Hamiltonian function depends on time (explicitly). For example, you can look at the motion of a charged particle in a time-dependent
electric field. In such cases you are solving an ODE in the "extended phase space" ($\mathbb{R}^7$ in the above example), and not in $\mathbb{R}^6\simeq T^\vee \mathbb{R}^3$. Also, the
translation between Hamiltonian and Lagrangian formulation of mechanics goes via Legendre transform, which fits very nicely in the framework of contact geometry. Contact geometry also
enters mechanics through Hamilton-Jacobi theory and the "method of characteristics".
So you can think of contact geometry as the odd-dimensional ("non-stationary") analogue of symplectic geometry. You can go from one to the other by "symplectisation". For example, if you start with a manifold, $M$, then you have a tautological contact structure on $X=\mathbb{P}T^\vee_M$. The symplectisation of $X$ is $T^\vee_M-\{0\}$, with the canonical symplectic form. Symplectisation maps contact diffeomorphisms to symplectomorphisms, etc. In the opposite direction, if you are given a symplectic manifold $(M,\omega)$ with $[\omega]=0\in H^2(M,\mathbb{R})$, you can build a line bundle $E\to M$ with a contact structure.
Dear Dalakov, thanks for your answer. A question: about the role of contact geometry in the H–J theory, are you considering also its formulation for (systems of) generic $1^{\text{st}}$-order PDEs $H\circ j^1f=0$ on $M$ (not necessarily H–J equations)? In fact, in such a case the H–J theory speaks about the connection between the solutions of the system and the integral manifolds of the characteristic distribution for the corank-1 distribution on the level set $H^{-1}(0)$ which is induced by the natural contact structure on $J^1M$ (the space of 1-jets on $M$). – Giuseppe Tortorella Aug 10 '11 at 10:02
I think the basic example is when you have a symplectic manifold $M$ with a Hamiltonian $H : M \to \mathbb{R}$. Then take a regular value $a$ of $H$, and look at the hypersurface $N := H^
{-1}(a)$, which will be a smooth submanifold of $M$ of odd dimension. Then (probably with some more hypotheses that I forget now), $N$ will have a contact structure, and the corresponding
Reeb vector field will agree with the Hamiltonian vector field $X_H$ corresponding to $H$. Recall that the value of the Hamiltonian function $H$ is constant along the flows of $X_H$. In terms of physics this is interpreted as conservation of energy or something like that. So in this basic example, contact geometry can be thought of as the study of Hamiltonian mechanics for a fixed value of energy.
Dear Kevin Lin, your answer has been very interesting. Starting from it I have reviewed how to give a geometric proof of the Maupertuis principle. Thanks. – Giuseppe Tortorella Aug 10 '11
at 11:47
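This code is not part of the thread, but the conservation statement above is easy to illustrate numerically: integrating the Hamiltonian vector field $X_H = (\partial H/\partial p, -\partial H/\partial q)$ keeps a trajectory on the level hypersurface $N = H^{-1}(a)$ up to discretization error. The anharmonic oscillator, the RK4 integrator, and the tolerances are my own illustrative choices:

```python
def H(q, p):
    """Anharmonic oscillator: H = p^2/2 + q^2/2 + q^4/4."""
    return 0.5 * p * p + 0.5 * q * q + 0.25 * q ** 4

def X_H(state):
    q, p = state
    return (p, -(q + q ** 3))  # Hamilton's equations: q' = dH/dp, p' = -dH/dq

def rk4_step(state, dt):
    def shift(s, k, c):
        return (s[0] + c * k[0], s[1] + c * k[1])
    k1 = X_H(state)
    k2 = X_H(shift(state, k1, dt / 2))
    k3 = X_H(shift(state, k2, dt / 2))
    k4 = X_H(shift(state, k3, dt))
    return (state[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

state = (1.0, 0.0)
h0 = H(*state)           # the flow should stay on the level set H = h0
drift = 0.0
for _ in range(10_000):  # integrate up to time 10 with dt = 1e-3
    state = rk4_step(state, 1e-3)
    drift = max(drift, abs(H(*state) - h0))
```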
Form the contact one-form $\Theta = p\,dq - H\,dt$ on extended phase space $T^* Q \times \mathbb{R}$, the second factor being time, parameterized by $t$; the function $H = H(q,p,t)$ is the time-dependent energy or Hamiltonian, and $p\,dq$ denotes the usual canonical one-form on $T^* Q$ pulled back to extended phase space by the projection onto the first factor. (Assume $H \ne 0$ to get $\Theta$ contact.) Then the Reeb vector field for this one-form, i.e. the kernel of $d\Theta$, is the time-dependent Hamiltonian vector field, up to scale.

Arnol'd has a nice discussion of this in his Mathematical Methods in Classical Mechanics.
Do you know of a citable reference containing this remark? Arnol'd doesn't explicitly make the connection between time-dependent Hamiltonian mechanics and contact geometry, and I would
love to have a source that does. This answer is exactly what I'm looking for, but BibTeX doesn't seem to have a MathOverflow template yet... – Vectornaut Jan 29 at 8:20
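As a numerical aside (my own sketch, not from Arnol'd or the answer above): for a time-dependent $H$, energy is not conserved along the flow; it drifts at rate $\partial H/\partial t$, which is exactly why one passes to the extended phase space $T^*Q\times\mathbb{R}$ with $\Theta = p\,dq - H\,dt$. The modulated oscillator below is an illustrative choice:

```python
import math

def H(q, p, t):
    """Time-dependent Hamiltonian: oscillator with modulated stiffness."""
    return 0.5 * p * p + 0.5 * (1.0 + 0.3 * math.sin(t)) * q * q

def dH_dt(q, p, t):
    return 0.5 * 0.3 * math.cos(t) * q * q  # explicit time derivative of H

def rhs(q, p, t):
    # non-autonomous Hamilton equations: q' = dH/dp, p' = -dH/dq
    return p, -(1.0 + 0.3 * math.sin(t)) * q

def rk4_step(q, p, t, dt):
    k1 = rhs(q, p, t)
    k2 = rhs(q + dt / 2 * k1[0], p + dt / 2 * k1[1], t + dt / 2)
    k3 = rhs(q + dt / 2 * k2[0], p + dt / 2 * k2[1], t + dt / 2)
    k4 = rhs(q + dt * k3[0], p + dt * k3[1], t + dt)
    return (q + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            p + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

q, p, t, dt = 1.0, 0.0, 0.0, 1e-3
h_start = H(q, p, t)
work = 0.0               # trapezoid rule for the integral of dH/dt along the flow
for _ in range(2000):    # integrate up to t = 2
    f0 = dH_dt(q, p, t)
    q, p = rk4_step(q, p, t, dt)
    t += dt
    work += 0.5 * (f0 + dH_dt(q, p, t)) * dt
gap = abs((H(q, p, t) - h_start) - work)
```

The energy change is visibly nonzero here, yet it matches $\int \partial H/\partial t\,dt$ along the trajectory to quadrature accuracy.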
Starting from Kevin Lin's answer I find that contact geometry gives a proof of the Maupertuis principle which is geometric and doesn't appeal to the variational principle of Least Action.
Let there be given a Hamiltonian system with configuration space $M$, potential energy $V$ and kinetic energy $K$.
For any regular value $h$ of the Hamilton function $H:=K+V\circ\tau_M^\ast$, let us introduce:
• $W$, the open subset of $M$ where $h-V$ is positive, and
• $N:=H^{-1}(h)\setminus K^{-1}(0)$, a codimension-1 submanifold of $T^\ast W$.
Let us define $\tilde{K}=(h-V\circ\tau_M^\ast)^{-1}K|_{T^\ast W}$, a metric on $W$, seen as a smooth function on $T^\ast W$, which is a fiberwise positive definite quadratic form.
Let us denote by $X$ and $\tilde{X}$ the Hamiltonian vector fields on $(T^\ast W,d\lambda)$ having as Hamilton functions $H$ and $\tilde{K}$ respectively.
By definition, $N:=H^{-1}(h)\setminus K^{-1}(0)$ coincides with $\tilde{K}^{-1}(1)$, and the Liouville $1$-form $\lambda$ induces a contact form on it.
From the following pair of identities:
• $i(X)\lambda=2K$, $i(X)d\lambda=-dH$, and
• $i(\tilde{X})\lambda=2\tilde{K}$, $i(\tilde{X})d\lambda=-d\tilde{K}$,
we deduce that:
• $X$ and $\tilde{X}$ are tangent to $N$,
• both $(2K)^{-1}X|_N$ and $(2\tilde{K})^{-1}\tilde{X}|_N\equiv \frac{1}{2}\tilde{X}|_N$ satisfy the defining equations for the Reeb vector field on the strictly contact manifold $(N,j_N^\ast\lambda)$.

So on $N:=H^{-1}(h)\setminus K^{-1}(0)$, the Hamiltonian vector field $X$ of $H:=K+V\circ\tau_M^\ast$ coincides with $K\tilde{X}$, $\tilde{X}$ being the geodesic vector field for the Jacobi metric on $W$ given by $(h-V\circ\tau_M^\ast)^{-1}K$.
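A finite-difference spot check of this conclusion (not from the thread; the planar harmonic potential, the energy level, and the test point on $N$ are my own choices). Combining the two Reeb identities gives, on $N$, the proportionality $X = K\,\tilde X$ with $K = h - V$:

```python
h = 1.0  # the fixed energy level

def V(q):
    return 0.5 * (q[0] ** 2 + q[1] ** 2)   # planar harmonic potential

def K(p):
    return 0.5 * (p[0] ** 2 + p[1] ** 2)   # kinetic energy

def Ktilde(q, p):
    return K(p) / (h - V(q))               # Hamiltonian of the Jacobi metric

def grad(f, x, eps=1e-6):
    """Central finite-difference gradient of f at the point x (a list)."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

# Pick a point of N: fix q, then choose p with K(p) = h - V(q).
q = [0.6, 0.2]
p = [(2.0 * (h - V(q))) ** 0.5, 0.0]

# Hamiltonian vector fields in coordinates (q, p): first dH/dp, then -dH/dq.
X = grad(lambda pp: K(pp) + V(q), p) + [-g for g in grad(lambda qq: K(p) + V(qq), q)]
Xt = grad(lambda pp: Ktilde(q, pp), p) + [-g for g in grad(lambda qq: Ktilde(qq, p), q)]

factor = K(p)  # equals h - V(q) on N
max_err = max(abs(X[i] - factor * Xt[i]) for i in range(4))
```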
Hingham, MA Calculus Tutor
Find a Hingham, MA Calculus Tutor
...I've done labwork at MIT's Koch Center for Integrative Cancer Research, Sloan-Kettering Cancer Center, and the McGovern Institute for Brain Research. I've also worked as an editorial assistant
at the Boston Review, a national magazine of politics, literature, and the arts. My academic interests...
47 Subjects: including calculus, English, reading, chemistry
...I also enjoy helping others to master different types of questions. I have several years part-time experience holding office hours and working in a tutorial office. I majored in philosophy and
mathematics, which has given me a broad exposure to the kinds of material on a high school equivalency test.
29 Subjects: including calculus, reading, English, geometry
...There are a number of aspects of algebra, geometry, trigonometry, and pre-calculus that I have continued to use and are fundamental to calculus and other advanced mathematics. I have continued
to work with teens as I have coached about 10 years in youth sports. I am a non-smoker and I'm not allergic to pets.I took algebra in junior high school.
10 Subjects: including calculus, physics, geometry, algebra 2
...I scored 99th percentile on the SAT (perfect score on current scale) and 90th on the LSAT. In all I have over 20 years of experience with countless students in many different subjects. I
relate well to young people and I will make our lessons fun and enjoyable - if learning is fun then it becomes easy and interesting.
29 Subjects: including calculus, reading, geometry, GED
I am a retired university math lecturer looking for students, who need experienced tutor. Relying on more than 30 years experience in teaching and tutoring, I strongly believe that my profile is
a very good fit for tutoring and teaching positions. I have significant experience of teaching and ment...
14 Subjects: including calculus, statistics, geometry, probability
Related Hingham, MA Tutors
Hingham, MA Accounting Tutors
Hingham, MA ACT Tutors
Hingham, MA Algebra Tutors
Hingham, MA Algebra 2 Tutors
Hingham, MA Calculus Tutors
Hingham, MA Geometry Tutors
Hingham, MA Math Tutors
Hingham, MA Prealgebra Tutors
Hingham, MA Precalculus Tutors
Hingham, MA SAT Tutors
Hingham, MA SAT Math Tutors
Hingham, MA Science Tutors
Hingham, MA Statistics Tutors
Hingham, MA Trigonometry Tutors | {"url":"http://www.purplemath.com/hingham_ma_calculus_tutors.php","timestamp":"2014-04-16T04:45:40Z","content_type":null,"content_length":"24123","record_id":"<urn:uuid:8eda53ff-31d2-4113-b3a0-9e1186621990>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00358-ip-10-147-4-33.ec2.internal.warc.gz"} |
A primal-dual algorithm for computing Fisher equilibrium in the absence of gross substitutability property.
Garg, Dinesh and Jain, Kamal and Talwar, Kunal and Vazirani, Vijay V (2007) A primal-dual algorithm for computing Fisher equilibrium in the absence of gross substitutability property. In: Theoretical
Computer Science, 378 (2). pp. 143-152.
We provide the first strongly polynomial time exact combinatorial algorithm to compute Fisher equilibrium for the case when utility functions do not satisfy the Gross substitutability property. The
motivation for this comes from the work of Kelly, Maulloo, and Tan [F.P. Kelly, A.K. Maulloo, D.K.H. Tan, Rate control for communication networks: Shadow prices, proportional fairness and stability,
Journal of Operational Research (1998)] and Kelly and Vazirani [F.P. Kelly, Vijay V. Vazirani, Rate control as a market equilibrium (2003) (in preparation)] on rate control in communication networks.
We consider a tree like network in which root is the source and all the leaf nodes are the sinks. Each sink has got a fixed amount of money which it can use to buy the capacities of the edges in the
network. The edges of the network sell their capacities at certain prices. The objective of each edge is to fix a price that can fetch the maximum money for it, and the objective of each sink is to
buy capacities on edges in such a way that it can facilitate the sink to pull maximum flow from the source. In this problem, the edges and the sinks play precisely the role of sellers and buyers,
respectively, in Fisher’s market model. The utility of a buyer (or sink) takes the form of a Leontief function which is known for not satisfying Gross substitutability property. We develop an $O(m^3)$ exact combinatorial algorithm for computing equilibrium prices of the edges. The time taken by our algorithm is independent of the values of sink money and edge capacities. A corollary of our
algorithm is that equilibrium prices and flows are rational numbers. Although there are algorithms to solve this problem they are all based on convex programming techniques. To the best of our
knowledge, ours is the first strongly polynomial time exact combinatorial algorithm for computing equilibrium prices of Fisher’s model under the case when buyers’ utility functions do not satisfy
gross substitutability property.
Item Type: Journal Article
Additional Information: Copyright of this article belongs to Elsevier.
Keywords: Computing market equilibria;Fisher equilibrium;Gross substitutability property;Strongly polynomial time exact algorithm;Combinatorial algorithm;Primal-dual algorithm
Department/Centre: Division of Electrical Sciences > Computer Science & Automation (Formerly, School of Automation)
Date Deposited: 20 Sep 2007
Last Modified: 19 Sep 2010 04:39
URI: http://eprints.iisc.ernet.in/id/eprint/11849
| {"url":"http://eprints.iisc.ernet.in/11849/","timestamp":"2014-04-18T11:39:34Z","content_type":null,"content_length":"29142","record_id":"<urn:uuid:038fed62-dc53-4fb5-96e4-3f60b83cf46c>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00412-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometry and Topology
Let's start by laying out some of the reviewer's biases. First, I like mathematics books to have a personality, and particularly like books that have a sense of humor. Second, much as I appreciate
the charms and historical importance of synthetic geometry, I do not think it must play a central role in how we teach undergraduates. Third, I am sympathetic to the idea that mathematics students
should build on what they know; in particular, I like the idea of making serious use of linear algebra (and even matrix groups) in a geometry text.
From where I stand, then, there is a lot to like about this new book. The authors have put together an introduction to geometry that is modern, interesting, and open. It is modern in that it uses
linear algebra, coordinates, and metrics from the very beginning. It is interesting both in terms of the content and in its presentation. And it is open in the sense that it creates many options for
further study and development of the material.
The authors have tried to be very informal in style, to the extent of addressing the reader as "you" and referring to themselves as "I", "despite the fact that there are more than two of me." So the
book is full of sentences such as "My aims are..." or "I usually choose the angle to be between 0 and π." There is no sacrifice of mathematical precision, except for instances where something is
beyond the level of the text; in such cases, the author (!) points out that this is the case and sketches the essence of the proof. (For example, this happens when he proves that the distance between
two points in Euclidean space is the length of the shortest curve connecting them.) The result of all this is that the book is a pleasure to read.
At the same time, students will by no means find this book easy. In particular, quite a lot of linear algebra is present from the first page on. The dot product appears on page 5, orthogonal matrices
on page 10, eigenvalues (real and/or complex) on page 12. I think this is all to the good. After all, if we spend the time teaching linear algebra students these ideas, we owe them the courtesy of
making use of them when doing so makes things easier.
I heartily approve the inclusion of some basic topology and transformation groups. The material included is interesting, ambitious, and a good foundation for further study.
I haven't, of course, tested the book in class. I'm certain that students would find it hard in parts, and that I would have to create a lot of motivation to keep them going. I suspect the results
would repay the effort involved.
Finally, anyone who has the confidence to do what the author has done in problem A5 (on page 181) has my instant respect. How often has a mathematics book made you laugh?
Fernando Q. Gouvêa is professor of mathematics at Colby College in Waterville, ME. He is the editor of FOCUS, FOCUS Online, and MAA Reviews. Somehow, he manages to stay sane. Humor helps. | {"url":"http://www.maa.org/publications/maa-reviews/geometry-and-topology","timestamp":"2014-04-18T16:51:00Z","content_type":null,"content_length":"97161","record_id":"<urn:uuid:503255b1-777e-494a-a6d1-e8916eb05110>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00256-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary of WACC - Weighted Average Cost of Capital. Abstract
Corporations create value for shareholders by earning a return on the invested capital that is above the cost of that capital. WACC (Weighted Average Cost of Capital) is an expression of this cost
and is used to see if certain intended investments or strategies or projects or purchases are worthwhile to undertake.
WACC is expressed as a percentage, like interest. So, for example, if a company works with a WACC of 12%, this means that only (and all) investments should be made that give a return higher than the WACC of 12%.
The cost of capital for any investment, whether for an entire company or for a project, is the rate of return capital providers would expect to receive if they would invest their capital elsewhere.
In other words, the cost of capital is an opportunity cost.
How can the Weighted Average Cost of Capital (WACC) be calculated?
The easy part of WACC is the debt part. In most cases it is clear how much a company has to pay its bankers or bondholders for debt finance. More elusive, however, is the cost of equity finance. Normally, the cost of equity finance is higher than the cost of debt finance, because the cost of equity involves a risk premium. Calculating this risk premium is one thing that makes calculating WACC complicated.
Another important complication is which mix of debt and equity should be used to maximize shareholder value (This is what "Weighted" means in WACC).
Finally, also the corporate tax rate is important, because normally interest payments are tax-deductible.
Formula for the WACC calculation:

WACC = (debt / TF) * (cost of debt) * (1 - Tax) + (equity / TF) * (cost of equity)
In this formula,
* TF means Total Financing. Total Financing consists of the sum of the Market values of debt and equity finance. An issue with TF is whether, and under what circumstances, it should include current
liabilities, such as trade credit. In valuing a company this is important, because: a) trade credit is used aggressively by many companies, which in turn affects their business credit, b) there is an
interest (or financing) charge for such use, and c) trade credit can be quite a large sum on the balance sheet.
* Tax stands for the Corporate Tax Rate.
Example: suppose this company:
The Market value of debt = € 300 million
The Market value of equity = € 400 million
The Cost of debt = 8%
The Corporate Tax rate = 35%
The Cost of equity is 18%
The WACC of this company is:

300/700 * 8% * (1 - 35%) + 400/700 * 18%

= 12.5% (WACC - Weighted Average Cost of Capital)
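The calculation above is easy to script. A minimal sketch (the function and variable names are my own, not from the text):

```python
def wacc(debt, equity, cost_of_debt, cost_of_equity, tax_rate):
    """Weighted Average Cost of Capital.

    debt and equity are market values; the rates are decimals.
    Interest payments are tax-deductible, hence the (1 - tax_rate)
    factor on the debt side only.
    """
    tf = debt + equity  # TF, total financing
    return (debt / tf) * cost_of_debt * (1 - tax_rate) \
         + (equity / tf) * cost_of_equity

# The example company: EUR 300m debt at 8%, EUR 400m equity at 18%, 35% tax
print(round(wacc(300, 400, 0.08, 0.18, 0.35), 4))  # 0.1251, i.e. about 12.5%
```

Any project whose expected return exceeds this 12.5% hurdle would, on this model, create shareholder value.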
Compare: IRR | Net Present Value | DCF
| {"url":"http://www.valuebasedmanagement.net/methods_wacc.html","timestamp":"2014-04-18T13:07:40Z","content_type":null,"content_length":"13683","record_id":"<urn:uuid:0d9eea93-95ec-4f1d-b3d5-bb161d43f527>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
Invisible in the Storm is the first book to recount the history, personalities, and ideas behind one of the greatest scientific successes of modern times--the use of mathematics in weather
prediction. Although humans have tried to forecast weather for millennia, mathematical principles were used in meteorology only after the turn of the twentieth century. From the first proposal for
using mathematics to predict weather, to the supercomputers that now process meteorological information gathered from satellites and weather stations, Ian Roulstone and John Norbury narrate the
groundbreaking evolution of modern forecasting.
The authors begin with Vilhelm Bjerknes, a Norwegian physicist and meteorologist who in 1904 came up with a method now known as numerical weather prediction. Although his proposed calculations could
not be implemented without computers, his early attempts, along with those of Lewis Fry Richardson, marked a turning point in atmospheric science. Roulstone and Norbury describe the discovery of
chaos theory's butterfly effect, in which tiny variations in initial conditions produce large variations in the long-term behavior of a system--dashing the hopes of perfect predictability for weather
patterns. They explore how weather forecasters today formulate their ideas through state-of-the-art mathematics, taking into account limitations to predictability. Millions of variables--known,
unknown, and approximate--as well as billions of calculations, are involved in every forecast, producing informative and fascinating modern computer simulations of the Earth system.
Accessible and timely, Invisible in the Storm explains the crucial role of mathematics in understanding the ever-changing weather.
Ian Roulstone is professor of mathematics at the University of Surrey. John Norbury is a fellow in applied mathematics at Lincoln College, University of Oxford. They are the coeditors of Large-Scale
Atmosphere-Ocean Dynamics.
"Mathematicians Ian Roulstone and John Norbury demystify the maths behind meteorology. Trailblazers' work is vividly evoked, from eighteenth-century mathematician Leonhard Euler on hydrostatics to
physicist Vilhelm Bjerknes's numerical weather prediction. The pace cranks up with twentieth-century advances such as Jule Gregory Charney's harnessing of the gargantuan ENIAC computer for his work
in the 1940s and 1950s on forecasting pressure patterns."--Nature
"[O]ne of the great strengths of the book is the way it picks apart the challenge of making predictions about a chaotic system, showing what improvements we might yet hope for and what factors
confound them."--Philip Ball, Prospect
"A welcome and authoritative account of the 20th-century contributions of mathematically sophisticated meteorologists such as Vilhelm Bjerknes (1862--1951), Carl-Gustaf Rossby (1898--1957), Jule
Charney (1917--1981), and Ed Lorenz (1917--2008). . . . Clearly, this book is informative and inspirational, leaving plenty of room for innovations by future generations of mathematicians and
modelers."--James Rodger Fleming, MAA Reviews
"This book gives a deep insight of the mathematics involved in the forecast of weather. . . . The authors have done a brilliant work to collect a huge amount of historical information, as well as
mathematical information, but keeping always a level in the explanations that makes the text accessible to undergraduate students in the first years, and even to people not so familiar with
mathematics. All in all, this is a very interesting and enjoyable reading."--Vicente Muñoz, European Mathematical Society
"Shows how much modern weather forecasting depends on mathematics. . . . A superior read."--Alexander Bogolomny, CTK Insights
Table of Contents:
Preface vii
Prelude: New Beginnings 1
ONE The Fabric of a Vision 3
TWO From Lore to Laws 47
THREE Advances and Adversity 89
FOUR When the Wind Blows the Wind 125
Interlude: A Gordian Knot 149
FIVE Constraining the Possibilities 153
SIX The Metamorphosis of Meteorology 187
Color Insert follows page 230
SEVEN Math Gets the Picture 231
EIGHT Predicting in the Presence of Chaos 271
Postlude: Beyond the Butterfly 313
Glossary 317
Bibliography 319
Index 323
| {"url":"http://press.princeton.edu/titles/9957.html","timestamp":"2014-04-20T13:31:27Z","content_type":null,"content_length":"18637","record_id":"<urn:uuid:6fbbd8d8-7a48-497d-8d55-3566e8520889>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00444-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Introduction to the Theory of Point Processes
Results 11 - 20 of 396
- Statistica Sinica , 2003
"... Abstract: The class of species sampling mixture models is introduced as an extension of semiparametric models based on the Dirichlet process to models based on the general class of species
sampling priors, or equivalently the class of all exchangeable urn distributions. Using Fubini calculus in conj ..."
Cited by 53 (8 self)
Abstract: The class of species sampling mixture models is introduced as an extension of semiparametric models based on the Dirichlet process to models based on the general class of species sampling
priors, or equivalently the class of all exchangeable urn distributions. Using Fubini calculus in conjunction with Pitman (1995, 1996), we derive characterizations of the posterior distribution in
terms of a posterior partition distribution that extend the results of Lo (1984) for the Dirichlet process. These results provide a better understanding of models and have both theoretical and
practical applications. To facilitate the use of our models we generalize the work in Brunner, Chan, James and Lo (2001) by extending their weighted Chinese restaurant (WCR) Monte Carlo procedure, an
i.i.d. sequential importance sampling (SIS) procedure for approximating posterior mean functionals based on the Dirichlet process, to the case of approximation of mean functionals and additionally
their posterior laws in species sampling mixture models. We also discuss collapsed Gibbs sampling, Pólya urn Gibbs sampling and a Pólya urn SIS scheme. Our framework allows for numerous applications,
including multiplicative counting process models subject to weighted gamma processes, as well as nonparametric and semiparametric hierarchical models based on the Dirichlet process, its two-parameter
extension, the Pitman-Yor process and finite dimensional Dirichlet priors. Key words and phrases: Dirichlet process, exchangeable partition, finite dimensional Dirichlet prior, two-parameter
Poisson-Dirichlet process, prediction rule, random probability measure, species sampling sequence.
- IEEE Trans. SP , 2006
"... Abstract — A new recursive algorithm is proposed for jointly estimating the time-varying number of targets and their states from a sequence of observation sets in the presence of data
association uncertainty, detection uncertainty, noise and false alarms. The approach involves modelling the respecti ..."
Cited by 51 (8 self)
Abstract — A new recursive algorithm is proposed for jointly estimating the time-varying number of targets and their states from a sequence of observation sets in the presence of data association
uncertainty, detection uncertainty, noise and false alarms. The approach involves modelling the respective collections of targets and measurements as random finite sets and applying the probability
hypothesis density (PHD) recursion to propagate the posterior intensity, which is a first order statistic of the random finite set of targets, in time. At present, there is no closed form solution to
the PHD recursion. This work shows that under linear, Gaussian assumptions on the target dynamics and birth process, the posterior intensity at any time step is a Gaussian mixture. More importantly,
closed form recursions for propagating the means, covariances and weights of the constituent Gaussian components of the posterior intensity are derived. The proposed algorithm combines these
recursions with a strategy for managing the number of Gaussian components to increase efficiency. This algorithm is extended to accommodate mildly nonlinear target dynamics using approximation
strategies from the extended and unscented Kalman filters. Index Terms — Multi-target tracking, optimal filtering, point
- in Proc. of ACM SIGCOMM Internet Measurement Workshop , 2002
"... Our goal is to design a traffic model for uncongested IP backbone links that is simple enough to be used in network operation, and that is protocol and application agnostic in order to be as
general as possible. The proposed solution is to model the traffic at the flow level by a Poisson shot-noise ..."
Cited by 48 (6 self)
Our goal is to design a traffic model for uncongested IP backbone links that is simple enough to be used in network operation, and that is protocol and application agnostic in order to be as general
as possible. The proposed solution is to model the traffic at the flow level by a Poisson shot-noise process. In our model, a flow is a generic notion that must be able to capture the characteristics
of any kind of data stream. We analyze the accuracy of the model with real traffic traces collected on the Sprint IP backbone network. Despite its simplicity, our model provides a good approximation
of the real traffic observed in the backbone and of its variation. Finally, we discuss three applications of our model to network design and management.
"... Abstract — Network devices put packets on an Internet link, and multiplex, or superpose, the packets from different active connections. Extensive empirical and theoretical studies of packet
traffic variables — arrivals, sizes, and packet counts — demonstrate that the number of active connections has ..."
Cited by 47 (3 self)
Abstract — Network devices put packets on an Internet link, and multiplex, or superpose, the packets from different active connections. Extensive empirical and theoretical studies of packet traffic
variables — arrivals, sizes, and packet counts — demonstrate that the number of active connections has a dramatic effect on traffic characteristics. At low connection loads on an uncongested link —
that is, with little or no queueing on the link-input router — the traffic variables are long-range dependent, creating burstiness: large variation in the traffic bit rate. As the load increases, the
laws of superposition of marked point processes push the arrivals toward Poisson, the sizes toward independence, and reduces the variability of the counts relative to the mean. This begins a
reduction in the burstiness; in network parlance, there are multiplexing gains. Once the connection load is sufficiently large, the network begins pushing back on the attraction to Poisson and
independence by causing queueing on the link-input router. But if the link speed is high enough, the traffic can get quite close to Poisson and independence before the push-back begins in force;
while some of the statistical properties are changed in this high-speed case, the push-back does not resurrect the burstiness. These results reverse the commonly-held presumption that Internet
traffic is everywhere bursty and that multiplexing gains do not occur. Very simple statistical time series models — fractional sum-difference (FSD) models — describe the statistical variability of
the traffic variables and their change toward Poisson and independence before significant queueing sets in, and can be used to generate open-loop packet arrivals and sizes for simulation studies.
Both science and engineering are affected. The magnitude of multiplexing needs to become part of the fundamental scientific framework that guides the study of Internet
- Proc. London Math. Soc , 1992
"... Lévy discovered that the fraction of time a standard one-dimensional Brownian motion B spends positive before time t has arcsine distribution, both for t a fixed time when B_t ≠ 0 almost surely, and for t an inverse local time, when B_t = 0 almost surely. This identity in distribution is extended fro ..."
Cited by 44 (25 self)
Lévy discovered that the fraction of time a standard one-dimensional Brownian motion B spends positive before time t has arcsine distribution, both for t a fixed time when B_t ≠ 0 almost surely, and for t an inverse local time, when B_t = 0 almost surely. This identity in distribution is extended from the fraction of time spent positive to a large collection of functionals derived from the
lengths and signs of excursions of B away from 0. Similar identities in distribution are associated with any process whose zero set is the range of a stable subordinator, for instance a Bessel
process of dimension d for 1.
- J. Telecommunication Systems , 1995
"... This paper proposes a new approach for communication networks planning; this approach is based on stochastic geometry. We first summarize the state of the art in this domain, together with its
economic implications, before sketching the main expectations of the proposed method. The main probabilisti ..."
Cited by 43 (6 self)
This paper proposes a new approach for communication networks planning; this approach is based on stochastic geometry. We first summarize the state of the art in this domain, together with its
economic implications, before sketching the main expectations of the proposed method. The main probabilistic tools are point processes and stochastic geometry. We show how several performance
evaluation and optimization problems within this framework can actually be posed and solved by computing the mathematical expectation of certain functionals of point processes. We mainly analyze
models based on Poisson point processes, for which analytical formulae can often be obtained, although more complex models can also be analyzed, for instance via simulation.
, 2003
"... A widely used signal processing paradigm is the state-space model. The state-space model is defined by two equations: an observation equation that describes how the hidden state or latent
process is observed and a state equation that defines the evolution of the process through time. Inspired by neu ..."
Cited by 39 (4 self)
A widely used signal processing paradigm is the state-space model. The state-space model is defined by two equations: an observation equation that describes how the hidden state or latent process is
observed and a state equation that defines the evolution of the process through time. Inspired by neurophysiology experiments in which neural spiking activity is induced by an implicit (latent)
stimulus, we develop an algorithm to estimate a state-space model observed through point process measurements. We represent the latent process modulating the neural spiking activity as a gaussian
autoregressive model driven by an external stimulus. Given the latent process, neural spiking activity is characterized as a general point process defined by its conditional intensity function. We
develop an approximate expectation-maximization (EM) algorithm to estimate the unobservable state-space process, its parameters, and the parameters of the point process. The EM algorithm combines a
point process recursive nonlinear filter algorithm, the fixed interval smoothing algorithm, and the state-space covariance algorithm to compute the complete data log likelihood efficiently. We use a
Kolmogorov-Smirnov test based on the time-rescaling theorem to evaluate agreement between the model and point process data. We illustrate the model with two simulated data examples: an ensemble of
Poisson neurons driven by a common stimulus and a single neuron whose conditional intensity function is approximated as a local Bernoulli process.
- in Proceedings of IEEE INFOCOM , 2002
"... Abstract — A key criterion in the design of high-speed networks is the probability that the buffer content exceeds a given threshold. We consider n independent identical traffic sources modelled as point processes, which are fed into a link with speed proportional to n. Under fairly general assumpti ..."
Cited by 39 (1 self)
Abstract — A key criterion in the design of high-speed networks is the probability that the buffer content exceeds a given threshold. We consider n independent identical traffic sources modelled as point processes, which are fed into a link with speed proportional to n. Under fairly general assumptions on the input processes we show that the steady state probability of the buffer content exceeding a threshold tends to the corresponding probability assuming Poisson input processes. We verify the assumptions for a large class of long-range dependent sources commonly used to model data traffic. Our results show that with superposition, significant multiplexing gains can be achieved for even smaller buffers than suggested by previous results, which consider O(n) buffer size.
Moreover, simulations show that for realistic values of the exceedance probability and moderate utilisations, convergence to the Poisson limit takes place at reasonable values of the number of
sources superposed. This is particularly relevant for high-speed networks in which the cost of high-speed memory is significant. Keywords—Long-range dependence, overflow probability, Poisson limit,
heavy tails, point processes, multiplexing.
"... Wireless networks are fundamentally limited by the intensity of the received signals and by their interference. Since both of these quantities depend on the spatial location of the nodes,
mathematical techniques have been developed in the last decade to provide communication-theoretic results accoun ..."
Cited by 39 (6 self)
Wireless networks are fundamentally limited by the intensity of the received signals and by their interference. Since both of these quantities depend on the spatial location of the nodes,
mathematical techniques have been developed in the last decade to provide communication-theoretic results accounting for the network’s geometrical configuration. Often, the location of the nodes in
the network can be modeled as random, following for example a Poisson point process. In this case, different techniques based on stochastic geometry and the theory of random geometric graphs –
including point process theory, percolation theory, and probabilistic combinatorics – have led to results on the connectivity, the capacity, the outage probability, and other fundamental limits of
wireless networks. This tutorial article surveys some of these techniques, discusses their application to model wireless networks, and presents some of the main results that have appeared in the
literature. It also serves as an introduction to the field for the other papers in this special issue.
- Adv. in Appl. Probab , 1996
"... We consider two independent homogeneous Poisson processes Π0 and Π1 in the plane with intensities λ0 and λ1, respectively. We study additive functionals of the set of Π0-particles within a
typical Voronoi Π1-cell. We find the first and the second moments of these variables ..."
Cited by 34 (4 self)
We consider two independent homogeneous Poisson processes Π0 and Π1 in the plane with intensities λ0 and λ1, respectively. We study additive functionals of the set of Π0-particles within a typical Voronoi Π1-cell. We find the first and the second moments of these variables as well as upper and lower bounds on their distribution functions, implying an exponential
asymptotic behavior of their tails. Explicit formulae are given for the number and the sum of distances from Π0-particles to the nucleus within a typical Voronoi Π1-cell. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=62555&sort=cite&start=10","timestamp":"2014-04-19T01:57:26Z","content_type":null,"content_length":"42274","record_id":"<urn:uuid:5fe8d90e-e69c-4c0c-9a1d-ed216e419218>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00534-ip-10-147-4-33.ec2.internal.warc.gz"} |
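The setup in this last abstract is simple to simulate. The sketch below is plain Python with parameters of my own choosing (a unit-square window, intensities λ0 = 200 and λ1 = 20, a fixed seed); it draws both processes and counts the Π0-particles per Voronoi Π1-cell by assigning each particle to its nearest nucleus. The empirical mean count per cell should be near λ0/λ1 = 10, edge effects aside.

```python
import math
import random

random.seed(0)

def poisson(lam):
    """Sample a Poisson variate by Knuth's product-of-uniforms method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

def poisson_points(lam):
    """Homogeneous Poisson process of intensity lam on the unit square."""
    return [(random.random(), random.random()) for _ in range(poisson(lam))]

p0 = poisson_points(200.0)  # the Pi_0 particles
p1 = poisson_points(20.0)   # the Pi_1 nuclei

# A Pi_0 particle lies in the Voronoi cell of its nearest Pi_1 nucleus,
# so per-cell counts reduce to a nearest-neighbour assignment.
def nearest(q, nuclei):
    return min(range(len(nuclei)),
               key=lambda i: (q[0] - nuclei[i][0]) ** 2 + (q[1] - nuclei[i][1]) ** 2)

counts = [0] * len(p1)
for q in p0:
    counts[nearest(q, p1)] += 1
```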
Can topologies induce a metric?
Let (X, T) be a topological space, T the set of open subsets of X.
Definition: Three points x, y, z of X are in relation N (Nxyz, read "x is nearer to y than to z") iff
1. there is a basis B of T and b in B such that x and y are in b but z is not and
2. there is no basis C of T and c in C such that x and z are in c but not y.
For some topologies there are no points x, y, z in relation N, for example if T = {Ø,X} or T = P(X), but for others there are (e.g. for ones induced by a metric [my claim]).
Definition: A topology has property M1 iff
(x)(y) ((z)((z ≠ x & z ≠ y) → Nxyz) → x = y)
(This is an analogue of d(x,y) = 0 → x = y, the best one I can imagine.)
Definition: A topology has property M2 iff
(x)(y)(z) Nxyz & Nyzx → Nzyx
(This is a kind of analogue of d(x,y) = d(y,x), the best one I can imagine.)
First (bunch of) question(s):
1. Properties M1 and M2 do not capture the whole of the corresponding conditions of a metric. Can anyone figure out "better" definitions (e.g. an analogue of x = y → d(x,y) = 0)?
2. Can anyone figure out a property M3 that is an analogue of the triangle inequality?
If it can be shown that no such property M3 is definable, the following becomes obsolete.
If such a definition can be made, we define:
Definition: A topology has property M (read "induces a metric") iff it has properties M1, M2, M3.
Second question:
Which topologies have property M, i.e. induce a metric? Are these "accidentally" exactly those that are induced by a metric?
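For experimenting with the definition, note that both conditions can be checked against open sets rather than bases: T is itself a basis, so any open set witnessing condition 1 lies in some basis, and conversely every basis element is open, so condition 2 fails for some basis iff it fails for some open set. A quick finite-topology sketch in Python (the example space is my own):

```python
def nearer(x, y, z, opens):
    """Nxyz: x is nearer to y than to z, per the definition above.

    `opens` is the full list of open sets of a finite topology;
    quantification over bases reduces to quantification over opens.
    """
    # 1. some open set contains x and y but not z
    cond1 = any(x in b and y in b and z not in b for b in opens)
    # 2. no open set contains x and z but not y
    cond2 = not any(x in c and z in c and y not in c for c in opens)
    return cond1 and cond2

# A nested topology on {1, 2, 3} where N is non-empty:
T = [set(), {1}, {1, 2}, {1, 2, 3}]
print(nearer(1, 2, 3, T))  # True: {1, 2} excludes 3, and no open set holds 1 and 3 without 2
```

Consistent with the claim in the question, T = {Ø, X} fails condition 1 and T = P(X) fails condition 2 for all triples, so N is empty for both.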
I don't have an answer to your questions, but, more conventionally, there is a standard theorem about which topological spaces admit a metric. It's beautiful, though not too easy -- it's one of
the key points of a Point-Set topology course. Look in Munkres's Topology, or here: en.wikipedia.org/wiki/Metrizable_space. Also, if you like these kinds of questions, I highly recommend you read
either Bourbaki's "General Topology" or Kelley's "General Topology". Both have a few generalizations of metrics that you might enjoy (in particular, uniformities). – Ilya Grigoriev Dec 20 '09 at
Usage comments: "analogon" is much less common than the synonymous "analogue". "Has property X" or "is in relation N" are poor choices. Better is "Given a topology T on X, say that x is closer to
y than to z if....". – Theo Johnson-Freyd Dec 20 '09 at 3:16
You might just want to learn en.wikipedia.org/wiki/… . – Qiaochu Yuan Dec 20 '09 at 4:46
All of the important information of a metric can be recovered by its induced uniform space. This carries both the topological information as well as the uniform information (uniform convergence,
completeness, et cetera). I don't believe you can get all of the information you want with just a topology. I am sure though, that one can only recover a metric up to topological equivalence. –
Harry Gindi Dec 20 '09 at 4:59
@Theo: I wanted to define the relation without bias. The proposed reading should be only a hint where the definition is aimed at. – Hans Stricker Dec 20 '09 at 16:04
2 Answers
Your condition 1 is satisfied for all triples $x,y,z\in X$ such that $z\not\in\{x,y\}$ if the space is $T_1$. Maybe reading a bit about uniform spaces and the corresponding metrizability results will be of help.
Picking up Aaron's answer: Might there be a way to sensibly restrict the class of "allowed" (= "sensible") bases? What is $T_1$? – Hans Stricker Dec 20 '09 at
There is no reference to bases anymore in your (edited) answer. My follow-up question remains (sorry): what is a $T_1$ space? – Hans Stricker Dec 20 '09 at 1:13
I found the term T_1 linked and will follow this. Thanks! – Hans Stricker Dec 20 '09 at 1:34
In particular, HP Stricker's claim is patently false. – Theo Johnson-Freyd Dec 20 '09 at 3:19
+1 for mentioning uniform spaces – Harry Gindi Dec 20 '09 at 8:53
I'm not sure your "nearness" relation is the right definition to make. It makes sense on the face of it, but for example I think you can generate the usual topology on the plane by tubular neighborhoods of half-circles. For three points x,y,z on a line in that order, I don't think you get any relationship. Using the usual disk basis you'd satisfy condition (1) for Nxyz, but you could violate condition (2) using the other basis I just suggested, if the tubular neighborhood is sufficiently small.
However, I too have wondered which topologies can be induced by metrics. I think the first step in this direction would be to look for a topology which is *not* induced by a metric... – Aaron Mazel-Gee Dec 20 '09 at 0:25
I thank you for this. Maybe the search for "unusual" bases should be sharpened (for beginners like me). Coming at it the standard way, I thought only of disk bases. But maybe my line of thought can be "rescued"? If not: maybe it helps in understanding the difference between topologies and metric spaces. – Hans Stricker Dec 20 '09 at 0:30
I don't know much about this stuff, but it looks like people who have commented on the original question suggest a wealth of resources for you to browse through... – Aaron Mazel-Gee Dec 23 '09 at 1:47
Help with math on prop [Archive] - Parallax Forums
Old man Earl
04-06-2009, 10:36 AM
I have lost most of my math memories due to the fact I am 63 and never did know math! Here is the problem. I need Spin code for solving the formula to convert mmHg to altitude in feet. I have an I2C module that returns the mmHg number, ~600 here in Mountainair, NM. I know the alt here is ~6500 feet. I could use the Float32 math routines in the program, but I don't know how to write the code. Someone please help....
mmHg is ~ 600
The formula is .....
altitudeF :=((1-(mmHg*1.333224/1013.25)^0.190284)*145366.45)
Result is altitude in feet.
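For anyone wanting to sanity-check the formula before porting it to Spin/Float32, here is a minimal Python sketch (the function name is mine, not from the thread):

```python
def mmhg_to_feet(mm_hg):
    """Pressure altitude in feet from a barometer reading in mmHg.

    Converts mmHg to hPa (1 mmHg = 1.333224 hPa), then applies the
    standard-atmosphere formula quoted above.
    """
    hpa = mm_hg * 1.333224
    return (1.0 - (hpa / 1013.25) ** 0.190284) * 145366.45

print(round(mmhg_to_feet(600)))  # ~6400 ft, close to Mountainair's ~6500 ft
```

At 760 mmHg (standard sea-level pressure) the same function returns approximately zero feet, which is a quick way to confirm a port of the formula is wired up correctly.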
I know the formula works because my 16 yo daughter showed me on her fancy dan calculator the answer comes out correct !
I know the answer is right because I go to an online calculator for solving mmHg to altitude.
Total # Posts: 230
math am i right?
convert fraction to percent 32/20 = ?/100 =16% is that right
math up dated
let me know this is right i am so lost 11/25(4/4) i got 25% is that right
convert fraction to percent 48/4 (4/4) =?%
history of urbanization in zinder can you help me with findo some history of urbanization in zinder? thank you
history(am i correct)
o Explain how the U.S. became involved in the politics of Southeast Asia. o Explain how this involvement affected the U.S. political climate of the 1950s. can you please help me find some info for
this please. So i have a better understanding to do a powerpoint presentation. ...
Explain how this involvement affected the U.S. political climate of the 1950s? Explain how the U.S. became involved in the politics of Southeast Asia. Any ideas. I know I should talk about the domino
effect but what else?
what is the correlation between increasing GDP and rising inflation or interest rates?
snookers lumber can convert logs into either lumber or plywood. In a given day, the mills turns twice as many units of plywood as lumber. in a given day the mill turns twice as many units of plywood
as lumber. it makes a profit of $25 on a unit of lumber and $40 on a unit of p...
Math (algebra)Reiny
i still cant figure it out
Math (algebra)
the perimeter of a rectangle is 202 inches. the length exceeds the width by 91 inches. find the length and width
For a position that you envision for yourself in 5 years, which of these three plans would be most appropriate? the 3 plans i chose is improshare, rucker plan, and scalon plan
Select three different incentive plans. i am not sure if i understand what some incentive plans. are any sites?
write an equation of the line containing the given point and parallel to the given line. (5,-8); 9x-5y=2
humna resource management
can i send u what i have when i am done?
humna resource management
Analyze how emotional and physical aspects of a person s life may influence the employee s effectiveness in the work environment. are they talking about like you shouldnt bring ypur problems at home
with u to work? any sites i could look at?
math (still dont understand)
write an equation of the line containing the given pointand parrallto the given line. express the answer in the form y=mx+b (-2,4); 9x=5y+3
math (algebra)
no idea
math (algebra)
I am still lost sorry.
math (algebra)
I am so lost. i have to type my answers in point slope form (y=mx+b)
math (algebra)
write an equation of the line containing the given point and parallel to the given line. (7,8); x+5y=2 (-2,4); 9x=5y+3 (3,5); 8x+y=2 (5,-3) 9x+4y=3 I am not sure how to do them
graphing math (last ones) ill figure out the rest
how would u graph y=3/2x+4 sorry im not good at fractions how would you graph y=1/4x
graphing math
how would u graph y=3/2x+4 sorry im not good at fractions
math again
how do you graph x=4 would plotting the point (4,0) work?
how do u graph y=1/4x
find the slope x+4y=8 is it 2? or 2x?
y=3/2x+4 y intercept and graph
math algebra
y= 1/3x+2 whats they y intercept? and graph?
math am i correct
so whats the slope? is it 2?
math am i correct
find the slope? is it x? x+4y=8 4y=x+8 .......................
math am i correct???
when graphing x=4 is the ordered pair (4,0) or (0,4) i think its (4,0)
math am i correct???
a+7<_ -17 i think its suppose to say greater than or equal to i say 24 with a closed circle. and is the cirlce closed or open when u graph it?
math am i correct
5x-15=3y is the ordered pairs (-3,0)(0,-5)
math am i correct
a+7<_ -17 i think its suppose to say greater than or equal to i say 24 with a closed circle. and is the cirlce closed or open when u graph it?
math am i correct (algebra)
margie is certain that every time she parks in the city garage it costs her at least $2.25. If the garage charges 25 cents plus 25 cents for each 1/2 hr, how long is margie's car parked? i say 5 am i
math algebra am i correct
translate to an inequality my salary next year will be at least $36000 is it? x > $36000
o Name three selection tools that you would consider using for a hiring program at a supermarket. o Choose what you think is the best selection tool or combination of selection tools. o Justify your
choice by describing the advantages of your method compared to other selectio...
How would you classify the power industry in your area? in wisconsin any sites?
math (algebra)
25% of what number is 10?
HRM can u tell me if i am correct
Describe in one or two sentences how you will access job analysis and job description information for your selected position. mine is an elementary school teacher. Would it be ok to write about the
tasks, knowledge, skills, abilities, interests, and work values
Describe in one or two sentences how you will access job analysis and job description information for your selected position. i am doing it on teachers, elementary school. would it be ok to write
about the tasks, knowledge, skills, abilities, intrests, and work values. I am no...
math last and final one!
-4x-27= -28 1/3
an employee's new salary is $27,195 after getting a 5% raise. what was the salary before the increase pay? a basketball player completed 43.6% of his field goals in the most recent season. he made
285 field goal. how many did he attempt?
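Both parts of the post above reduce to a single division; a quick check with the posted numbers (my own sketch, not from the thread):

```python
# Salary before a 5% raise that produced $27,195: divide by 1.05
old_salary = 27195 / 1.05
print(old_salary)  # 25900.0

# Field-goal attempts, given 43.6% made and 285 makes: divide by the rate
attempts = 285 / 0.436
print(round(attempts))  # 654
```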
i got the answer now ty. just have the last math question i cant seem to get the answer to every time i try i get it wrong
it says incorrect
683 or 0.0014
i can not seem to figure out the last one.
a basketball player completed 42.3% of his field goals in the most recent season. He made 289 field goals. how many did he attempt?
what is 55% of 860?
just a ?
I am doing school online thru university phoenix axia Do you possible know the school code? I am trying to do my financial aid application.
math am i correct (28)
after diving 90 m below the sea level a diver rises at a rate of 3 meters per minute for 8 min. where is the diver in relation to the surface? d=r*t 90=24 30/8
math (27)
suppose there is a 3-degree drop in temperature for every thousand feet that an airplane climbs into the sky.If the temperature on the ground which is at sea level is 63 degrees what would be the
temperature when the plane reaches an altitude of 30,000 ft.
Alegbra (math)
suppose there is a 2-degree drop in temperature for every thousand feet that an airplane climbs into the sky.If the temperature on the ground which is at sea level is 52 degres what would be the
temperature when the plane reaches an altitude of 27,000 ft.
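Both lapse-rate questions above are the same linear model, temperature falling a fixed number of degrees per thousand feet; a one-function sketch with the posted numbers:

```python
def temp_at_altitude(ground_temp, drop_per_kft, altitude_ft):
    # linear lapse: lose drop_per_kft degrees per 1000 ft of climb
    return ground_temp - drop_per_kft * (altitude_ft / 1000)

print(temp_at_altitude(63, 3, 30000))  # -27.0 for the first question
print(temp_at_altitude(52, 2, 27000))  # -2.0 for the second
```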
Human Resource Management
oh ok ty ms. sue i think i may have it maybe that is lol just rereading the page u sent me trying to get the ideas
Human Resource Management
would a gas station be a small firm. you think i can talk about thses in my homework The task of managing change. 2. An area of professional practice. 3. A body of knowledge. 4. A control mechanism
Human Resource Management
Ms, sue are you talking about this. then would a large firm be like walmart? 1. The task of managing change. 2. An area of professional practice. 3. A body of knowledge. 4. A control mechanism.
Human Resource Management
I am not quite sure what they mean. I am not understanding what they want.
Human Resource Management
Write a response recommendation analysis of 200 to 300 words of how large firms and small firms might utilize change management concepts to meet growing technology demands. Provide one example for a
large company and one example for a small company of necessary changes resulti...
If you invest $8,000 at 12% interest, how much will you have in 7 years?
what do they mean by promotion methods? i need to find some on bottled water getting exported to mexico
what do they mean by promotion methods? i need to find some on mexico
Find the reciprocal of 1/7(-7-10)
math (?)
Fractions equivalent to 4/10? are 1/2 and 4/10 equivalent fractions?
math update
is the answer 3.75? add 1 3/8+ 4 3/4+ 2 1/6 this is as far as i can go then i am lost. 11/8+19/4+13/6
if the base of a parallelogram is 2 1/2 and its height is 1 1/2 what is its area?
Foundations of Financial Management
here is the problem The small chemical company needs to borrow $500,000. The bank offers a rate of 8 1/4 percent with a 20 percent compensating balance requirement, or as an alternative, 9 3/4
percent with additional fees of $5,500 to cover services the bank is providing. In e...
International Business (am I correct) plz advis
In the following matrix, you will identify who requires a specific document to be completed or where the document needs to be filed. You will also give a short description of the document s purpose
in the importing process. Am I correct? letter of credit= buyer s ban...
Introduction to Finance: Harvesting the Money Tree
7) The problem in stretching out the maturity of marketable securities is that A. long-term rates higher than short-term rates. B. interest rates are generally lower. C. you are legally locked in
until the maturity date. D. there is greater possibility of loss. I think B
13. Healthy Foods, Inc. sells 50-pound bags of grapes to the military for $10 a bag. The fixed costs of this operation are $80,000, while the variable costs of the Grapes are $.10 per pound. a. What
is the break-even point in bags? $80,000 = $80,000 = 80,808 units $10.00 - $0....
fin. am I correct?
13. Healthy Foods, Inc. sells 50-pound bags of grapes to the military for $10 a bag. The fixed costs of this operation are $80,000, while the variable costs of the Grapes are $.10 per pound. a. What
is the break-even point in bags? $80,000 = $80,000 = 80,808 units $10.00 - $0....
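For reference, the standard break-even calculation with these numbers uses the contribution margin per bag (price minus variable cost; a 50-lb bag at $0.10/lb costs $5.00 in grapes). This is my own sketch, since the thread itself tries several different figures:

```python
price_per_bag = 10.00
variable_cost_per_bag = 0.10 * 50   # $0.10 per pound, 50-lb bags -> $5.00
fixed_costs = 80_000.0

contribution_margin = price_per_bag - variable_cost_per_bag
break_even_bags = fixed_costs / contribution_margin
print(break_even_bags)  # 16000.0

def profit(bags):
    return bags * contribution_margin - fixed_costs

print(profit(12_000))  # -20000.0, a loss (matching the thread's figure)
print(profit(25_000))  # 45000.0
```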
i needhelp with e and i want to know if i am right
13. Healthy Foods, Inc. sells 50-pound bags of grapes to the military for $10 a bag. The fixed costs of this operation are $80,000, while the variable costs of the Grapes are $.10 per pound. a. What
is the break-even point in bags? 8,000 bags b. Calculate the profit or loss on...
please help
. If Healthy Foods has an annual interest expense of $10,000, calculate the degree of financial leverage at both 20,000 and 25,000 bags. 20,000 bags x $10 = 200,000 - $10,000 = 190,000 80,000 =
110,000 25,000 bags x $10 = 250,000 - $10,000 = 240,000 80,000 = 160,...
are these correct
b. Calculate the profit or loss on 12,000 bags and on 25,000 bags. 12,000 x $10 = 120,000 80,000 + .10 x 50lbs x 12,000 = 80,000 + 60,000 = 140,000 120,000 140,000 = -20,000 loss 25,000 x $10 =
250,000 80,000 + .10 x 50lb x 25000 = 80,000 + 125000 = 205,000 250,000 ...
fin. am i right on these
b. Calculate the profit or loss on 12,000 bags and on 25,000 bags. 12,000 x $10 = 120,000 80,000 + .10 x 50lbs x 12,000 = 80,000 + 60,000 = 140,000 120,000 140,000 = -20,000 loss 25,000 x $10 =
250,000 80,000 + .10 x 50lb x 25000 = 80,000 + 125000 = 205,000 250,000 ...
fin. am I correct?
13. Healthy Foods, Inc. sells 50-pound bags of grapes to the military for $10 a bag. The fixed costs of this operation are $80,000, while the variable costs of the Grapes are $.10 per pound. a. What
is the break-even point in bags? 8,000 bags b. Calculate the profit or loss on...
fin. am I correct?
13. Healthy Foods, Inc. sells 50-pound bags of grapes to the military for $10 a bag. The fixed costs of this operation are $80,000, while the variable costs of the Grapes are $.10 per pound. a. What
is the break-even point in bags? 8,000 bags b. Calculate the profit or loss on...
Explain how this involvement impacted the U.S. political climate of the 1950s. Explain how the U.S. became involved in the politics of Southeast Asia.
what are the business habits of South Korea s ideological beliefs, thoughts on how stable the government is, and South Korea s Corruption Perceptions Index
math am i correct.
Write the number in expanded form. 2,378 is the answer? 2×1000+3×100+7×10+8×1
how do you write 4 to the power of soemthing using microsoft word. Write 4 X 4 X 4 X 4 X 4 as a power of 4. I know the answer is 4 to the power of 5. I just dont know how to write it using microsoft
math am i correct
Multiply 3/5 times 25 Write your answer in simplest form. I also have to show my work it this correct? 3/5 times 25/1 15/1 = 15?
are 1/2 and 4/10 equivalent fractions?
Are these correct? 1. Agency theory examines the relationship between the? shareholders and the firm's transfer agent. 2. Proper risk-return management means that? the firm must determine an
appropriate trade-off between risk and return. 3. Which of the following is an out...
thank you
Write a 1050-to 1400-word summary detailing the functions of the world s major foreign currency exchange markets. Be sure to discuss the positive and negative aspects of using a gold standard. could
you please give me a better understanding.
How did the U.S. economy change after WWII ended compared to what it had been like during the war? Use specific examples of what people were able to have and to do, and what it must have felt like
after the scarcity of their wartime experience.
Describe how the Cold War ideology that crystallized after WWII changed wartime alliances that had existed during the war. o Describe how American Cold War policies and practices influenced
international relations from the late 1940s to the mid-1950s. I am not exacctly sure wh...
Introduction to Finance: Harvesting the Money Tree
does it look right so far?
Introduction to Finance: Harvesting the Money Tree
27. Prepare a statement of cash flows for the Crosby Corporation. Follow the general procedures indicated in Table 2 10 on page 38 . Statement of cash flows (L04) Current Assets Liabilities Cash . .
. . . . . . . . . . . . . . . . . . . . . . . $ 15,000 Accounts payable ....
28. Describe the general relationship between net income and net cash flows from operating activities for the firm. Net income This is the answer i came up with, represents the change in owners
equity during a period, excluding the effects of any additional investments o...
28. Describe the general relationship between net income and net cash flows from operating activities for the firm. . Has the buildup in plant and equipment been financed in a satisfactory manner?
Briefly discuss.
I am working for a company whom makes skateboards. My CEO is now interested in exporting these to australia. I need to make a powerpoint presentation to present to educate the directors in the
matters of foreign investment and international investment theory. I need to make r...
The description should include how earnings are valued, how shareholder wealth can be maximized, and how management decisions affect stockholder wealth.
waht are some setbacks that stop u from achieving goals? I have decided my degree wasnt what i wanted, not finding a job in my degree, and not passing all my classes?
ae(this right)
National Board for Professional Teaching Standards would that be one?
What professional development programs can you enroll in to help you prepare to meet the diverse needs of today's learners?
identify the y-intercept y+x=-2 the y intercept is =? type as ordered pair.
math (final one)
the length of rectangle is fixed at 17 cm. What widths will make the perimeter greater than 82 cm?
In 1990, the life expectancy of males in a certain country was 72.4 years. In 1994, it was 75.8 years...? Let E represent the life expectancy in year t and let t represent the number of years since
1990. The linear function E(t) that fits the data is E(t)=__t + ___. (round to ...
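The life-expectancy item above is a two-point line fit; a short sketch with the posted data:

```python
# Two points: t = 0 (1990) -> 72.4 years, t = 4 (1994) -> 75.8 years
t0, e0 = 0, 72.4
t1, e1 = 4, 75.8

slope = round((e1 - e0) / (t1 - t0), 2)   # rise over run, to 2 decimals
intercept = e0 - slope * t0               # value at t = 0
print(slope, intercept)  # 0.85 72.4
```

So the function that fits the data is E(t) = 0.85t + 72.4.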
i'm stuck on this problem
June 21st 2007, 10:43 AM #1
Junior Member
Jun 2007
the difference of two numbers is 33. the second is 7 less than 3 times the first. What are the numbers?
i thought 3x - 7 = 33; I'm stuck on how to solve this
I think the two numbers should be 77 and 44 because 77-44=33 and 77 multiplied 3=231/7=33. I hope it's right
Let the numbers be x and y, then

x - y = 33
y = 3x - 7

substitute the second into the first to get:

x - (3x - 7) = 33, that is -2x = 26,

so x=-13, and then y=-46.

But if we instead have as the first equation y - x = 33, then on substitution
we have:

(3x - 7) - x = 33, that is 2x = 40,

so x=20, and y=53.
You have seen the solution to a number of these types of problems in recent days. If you are still having problems, it might be a good idea to tell us where those problems are. It would be better
than having us give you solutions that (apparently) aren't helping you.
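The two cases worked in the reply above can be verified in a few lines (a sketch of mine, not from the thread):

```python
# Case 1: x - y = 33 with y = 3x - 7:
#   x - (3x - 7) = 33  ->  -2x = 26  ->  x = -13
x1 = 26 / -2
y1 = 3 * x1 - 7
print(x1, y1)  # -13.0 -46.0

# Case 2: y - x = 33 with y = 3x - 7:
#   (3x - 7) - x = 33  ->  2x = 40  ->  x = 20
x2 = 40 / 2
y2 = 3 * x2 - 7
print(x2, y2)  # 20.0 53.0
```

In both cases the second number is 7 less than 3 times the first; which pair is right depends on which number the problem means by "the difference".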
June 21st 2007, 11:05 AM #2
Apr 2007
June 21st 2007, 11:50 AM #3
Grand Panjandrum
Nov 2005
June 21st 2007, 12:39 PM #4
Connect two tanks together
What if there were no heat transfer, just 2 cold water tanks at atmospheric pressure, and both tanks had, say, a head pressure of 30 psi? What would be the best calculation method to ensure both tanks draw water at the same time and not have one tank draw down before the other one starts its draw down?
All you should need to do is to ensure the irreversible pressure drop in the pipe from each tank to the point where the two lines connect is relatively small. The best way to do this is to use lines
large enough so that given your highest expected flow through those pipes, the flow restriction is small. In the case of a 30 psi head tank, you might for example try to minimize the irreversible
pressure drop between the tank and T to a few psi or less. The smaller you make this pressure loss, the closer the tanks will come to emptying equally.
Once you get to the T in the line that connects the 2 tanks, it no longer matters how much restriction you put in the line downstream of that T.
Note that this assumes the tanks are at the same elevation. If that isn't true, you may have to actively control tank level.
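As a rough illustration of why pipe size matters so much here: at fixed volumetric flow, the Darcy-Weisbach loss f (L/D) rho v^2 / 2 scales as 1/D^5. All numbers below (friction factor, length, flow rate) are illustrative choices of mine, not from the thread:

```python
import math

def darcy_drop_pa(flow_m3s, diameter_m, length_m, friction=0.02, rho=1000.0):
    """Irreversible pressure drop (Pa) for water in a straight pipe,
    Darcy-Weisbach with an assumed constant friction factor."""
    area = math.pi * diameter_m ** 2 / 4
    v = flow_m3s / area                      # mean velocity at this flow
    return friction * (length_m / diameter_m) * rho * v ** 2 / 2

# Doubling the diameter cuts the drop by 2**5 = 32x at the same flow:
small = darcy_drop_pa(0.001, 0.025, 10.0)   # ~1 L/s through 25 mm pipe
large = darcy_drop_pa(0.001, 0.050, 10.0)   # same flow through 50 mm pipe
print(small / large)  # ~32
```

With a 30 psi head available, keeping each branch's loss to a small fraction of that head keeps the two tanks drawing down nearly together.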
Romeoville Trigonometry Tutor
Find a Romeoville Trigonometry Tutor
...I've also been a middle-school science fair judge at local and regional levels for more than 15 years. I enjoyed these activities so much, in fact, that I decided to broaden my education
outreach efforts and start tutoring, focusing mainly on physics and math. My goals are to improve each student's conceptual understanding of a subject along with his/her specific problem-solving
10 Subjects: including trigonometry, calculus, physics, geometry
...We’ll start by figuring out, together, what’s holding you back. Then we can work to address your specific needs. My goal is to help you genuinely understand the concepts you need to learn and
give you the skills to use them.
18 Subjects: including trigonometry, chemistry, GRE, algebra 1
I'm a a fully qualified teacher from Scotland that has recently moved here due to marriage and now have my green card. I love teaching, and whilst I await registration at state level here I really
want to continue to teach, tutor and help wherever possible. Secondary teaching Maths in Scotland inc...
9 Subjects: including trigonometry, calculus, geometry, SAT math
...After I graduated with Physics Major, I started tutoring Physics as well. So, altogether, I have almost 20 years experience of tutoring and 10 years of teaching in the subjects I mentioned.
During the tutoring sessions, I encourage students to participate in solving problems by keeping them engaged.
11 Subjects: including trigonometry, physics, algebra 2, geometry
...We studied mostly math, but I also helped her with science and history questions. After college, I have worked with my girlfriend's 10 year old niece a few times in different areas that she
needed help in. I have also helped a friend of mine that was having trouble in math at Elmhurst College.
28 Subjects: including trigonometry, chemistry, calculus, geometry
Nonbovine Ruminations
There are four questions on Element 4 that are annoying me: E9D03, E9D19, E9D20, and E9D21. All four of these are of the form "What is the beamwidth of a symmetrical pattern antenna with a gain of x dB as compared to an isotropic radiator?". I've reasonably concluded that I am asked to find the apex angle of a solid cone that cuts the unit sphere yielding a solid angle equal to the reciprocal of the gain represented by x.
I derived a formula for this: first, convert dB to gain:
g = 10^(x/10)
; second, find solid angle:
Ω = 4π/g
; third, find apex angle corresponding to solid angle:
θ = 2 cos^-1 (1 - Ω/(2π))
. The problem is that this formula does not yield the "correct" answers.
I discovered that this formula is off by a constant factor of 1.05 dB. That is, if I add 1.05 dB to the gain before doing the calculations above, my approach above yields answers that are correct
within significant figures. What I don't understand is where the 1.05 dB is coming from.
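One plausible source of the offset (my guess, not anything from the question pool itself): the published answers match the flat-pattern approximation G ≈ 41253/θ² (θ in degrees) rather than the exact spherical-cap solid angle. A short comparison sketch:

```python
import math

def beamwidth_cone_deg(gain_db):
    """Apex angle of the cone whose spherical cap subtends 4*pi/g steradians."""
    g = 10 ** (gain_db / 10)
    omega = 4 * math.pi / g
    return math.degrees(2 * math.acos(1 - omega / (2 * math.pi)))

def beamwidth_flat_deg(gain_db):
    """Beamwidth if the answers assume G = 41253 / theta^2 (square degrees)."""
    g = 10 ** (gain_db / 10)
    return math.sqrt(41253 / g)

# In the small-angle limit the cone formula is (4 rad)/sqrt(g); expressed as a
# gain shift, the two conventions differ by 10*log10((4*180/pi)**2 / 41253) dB.
offset_db = 10 * math.log10(math.degrees(4.0) ** 2 / 41253)
print(round(offset_db, 2))  # 1.05 -- matching the observed constant
print(round(beamwidth_cone_deg(20), 1), round(beamwidth_flat_deg(20), 1))
```

If that guess is right, adding about 1.05 dB to the gain before applying the cone formula reproduces the pool's answers, which is exactly the observed fudge factor.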
I'm just going to memorize the answers for the test; there are only four of these questions and it's not all that likely that I'll get even one of them, let alone two, so even if I do flub them it isn't terribly likely to affect my chances of passing. But I'm annoyed that reason has gotten me this close, and no closer. Anyone who understands this stuff have an explanation?
Inverse Laplace Transform
May 22nd 2011, 12:54 PM #1
Mar 2011
Sorry if this is in the wrong section.
Basically I am trying to find the inverse laplace transform of
$F(s)=\frac{s}{\left ( s+1 \right )^2+1}$
using the Bromwich integral and the standard Bromwich contour. I therefore used the residue theorem to evaluate
$\frac{1}{2\pi i}\int _\gamma \frac{e^{zt} z}{\left ( z+1 \right )^2+1} dz$
and I found the answer to be
$\left ( \frac{1}{2}+\frac{i}{2} \right )e^{\left ( -1+i \right )t}+\left ( \frac{1}{2}-\frac{i}{2} \right )e^{\left ( -1-i \right )t}$
Which I believe is right; however, when I try to show that the curved part of the contour goes to zero as R tends to infinity, I find that it does not, and so my answer is not what I want.
Now I have been looking at ways around this and Jordan's lemma keeps appearing, although I really have no idea how to apply it (or even if this is the right approach). Could someone please explain it or point me in the right direction?
Hello Lewis and welcome to MHF.
When you say the standard contour what do you mean? You need to integrate on a vertical line that is to the right of the real part of all of the poles of the integrand.
One such vertical line is
The imaginary axis.
This gives the integral
$\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{ize^{izt}}{(iz+1)^2+1}(idz)= \frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{ze^{izt}}{(z-i-1)(z-i+1)}dz$
This is now an integral with z real (an integral on the real axis) edit: sorry this was a bad choice of variables
Now we can apply the residue theorem on the semicircle above the real axis and the line segment on the real axis.
So by the residue theorem we get the sum of the residues of the integrand at the poles $z = i+1$ and $z = i-1$, both of which lie above the real axis.
Now the reason we wanted to write the integral this way
$\frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{ze^{izt}}{(z-i-1)(z-i+1)}dz$
is so we can apply Jordan's lemma
We can identify $f(z)=\frac{z}{(z-i-1)(z-i+1)}$ and note that
$\lim_{R \to \infty}f(Re^{i\theta})=0$
So by Jordan's Lemma (Jordan's Lemma -- from Wolfram MathWorld)
$\lim_{R\to \infty}\int e^{itz}f(z)dz=0$
and we are done.
Just rewrite this as
$\displaystyle F(s) = \frac{s + 1}{(s + 1)^2 + 1} - \frac{1}{(s + 1)^2 + 1}$
and then use $\displaystyle \mathcal{L}^{-1}\left\{\frac{s - a}{(s - a)^2 + \omega ^2}\right\} = e^{a\,t}\cos{(\omega \,t)}$ and $\displaystyle \mathcal{L}^{-1}\left\{\frac{\omega}{(s - a)^2 + \omega ^2}\right\} = e^{a\,t}\sin{(\omega\,t)}$.
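For what it's worth, the residue answer in the first post and the table-lookup route agree on $e^{-t}(\cos t - \sin t)$; a quick standard-library numerical cross-check (my sketch):

```python
import cmath, math

def residue_sum(t):
    # (1/2 + i/2) e^{(-1+i)t} + (1/2 - i/2) e^{(-1-i)t}, from the contour route
    a = complex(0.5, 0.5) * cmath.exp(complex(-1, 1) * t)
    b = complex(0.5, -0.5) * cmath.exp(complex(-1, -1) * t)
    return a + b

def closed_form(t):
    # e^{-t} (cos t - sin t), from the partial-fraction route
    return math.exp(-t) * (math.cos(t) - math.sin(t))

for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    assert abs(residue_sum(t) - closed_form(t)) < 1e-12
print("residue sum matches e^{-t}(cos t - sin t)")
```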
Thanks a lot, it makes a lot more sense now; I was just struggling with how I could apply it to the integral I had, but I see how now.
May 22nd 2011, 06:22 PM #2
May 23rd 2011, 01:05 AM #3
May 23rd 2011, 10:57 AM #4
Mar 2011
Summary: J. Fluid Mech. (2003), vol. 483, pp. 165-197. © 2003 Cambridge University Press
DOI: 10.1017/S0022112003004129 Printed in the United Kingdom
A model for diffusion-controlled solidification of
ternary alloys in mushy layers
By D. M. ANDERSON
Department of Mathematical Sciences, George Mason University, Fairfax, VA 22030, USA
(Received 30 July 2002 and in revised form 26 December 2002)
We describe a model for non-convecting diffusion-controlled solidification of a ternary
(three-component) alloy cooled from below at a planar boundary. The modelling
extends previous theory for binary alloy solidification by including a conservation
equation for the additional solute component and coupling the conservation equations
for heat and species to equilibrium relations from the ternary phase diagram. We
focus on growth conditions under which the solidification path (liquid line of descent)
through the ternary phase diagram gives rise to two distinct mushy layers. A primary
mushy layer, which corresponds to solidification along a liquidus surface in the ternary
phase diagram, forms above a secondary (or cotectic) mushy layer, which corresponds
to solidification along a cotectic line in the ternary phase diagram. These two mushy
layers are bounded above by a liquid layer and below by a eutectic solid layer. We
obtain a one-dimensional similarity solution and investigate numerically the role of
separating variables! help!
October 15th 2009, 08:55 PM #1
Sorry, but this is an upcoming exam question for me. I need help! (And I only have a short time to find out.)
Find solutions of the following equation by separating variables:
$\frac{\partial u}{\partial x}+\frac{\partial u}{\partial y}=0$
Thanks to whoever that can help!
Do you know what "separating variables" means? If you have a test coming up, this is hardly the time to start learning the subject!
It means assuming a solution of the form u(x,y)= A(x)B(y) where A and B are functions of x alone and y alone, respectively. Then $\frac{\partial u}{\partial x}= \frac{dA}{dx}B$ and $\frac{\partial u}{\partial y}= A\frac{dB}{dy}$.
So that equation becomes $\frac{dA}{dx}B+ A\frac{dB}{dy}= 0$. That can be written as $\frac{dA}{dx}B= -A\frac{dB}{dy}$ and, dividing both sides by AB, $\frac{1}{A}\frac{dA}{dx}= -\frac{1}{B}\frac{dB}{dy}$.
Now, the left side is a function of x only and the right side is a function of y only (we have "separated" the variables) so the only way that can be equal for all x and y is if they are both
equal to the same constant.
That is, we can separate into two equations: $\frac{1}{A}\frac{dA}{dx}= \lambda$ or $\frac{dA}{dx}= \lambda A$ and $-\frac{1}{B}\frac{dB}{dy}= \lambda$ or $\frac{dB}{dy}= -\lambda B$.
There is no way to determine what "$\lambda$" is from the equation alone. Depending upon the additional (boundary or initial value) conditions, the solution might be the product of A and B for a
specific $\lambda$ or a sum of such products for many (possibly infinitely many) values of $\lambda$.
I might also add that one could seek solutions of the form
$u = A(x) + B(y)$
which is also a separation of variables. BTW - the general solution of the above is $u = f(x-y)$ for any $f$.
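As a quick sanity check, both the separated solutions and the general solution $u = f(x-y)$ can be verified symbolically. Here is a small sketch using SymPy (not part of the original thread):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')

# A separated solution: A(x) = e^{lam*x}, B(y) = e^{-lam*y},
# i.e. u = A(x)*B(y) = e^{lam*(x - y)}.
u_sep = sp.exp(lam*(x - y))
print(sp.diff(u_sep, x) + sp.diff(u_sep, y))   # 0

# The general solution u = f(x - y), for an arbitrary function f, also works:
f = sp.Function('f')
u_gen = f(x - y)
print(sp.simplify(sp.diff(u_gen, x) + sp.diff(u_gen, y)))  # 0
```

Both expressions reduce to zero, confirming that any function of $x-y$ solves the equation.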
Some preprints by M. A. Arcones
The large deviation principle for stochastic processes. 2003. Theory of Probability and its Applications. 47 567-583. 2004. Theory of Probability and its Applications. 48 19-44.
Large deviations of empirical processes. (2003). In "High Dimensional Probability III" 205-223. Edited by J. Hoffmann-Jorgensen, M. B. Marcus and J. A. Wellner. Birkhauser, Boston.
Moderate deviations of empirical processes. (2003). In "Stochastic Inequalities and Applications" 189-212. Edited by E. Giné, C. Houdré and D. Nualart. Birkhauser, Boston.
Large and moderate deviations of empirical processes with nonstandard rates. 2002. Statistics & Probability Letters 57 315-326.
Moderate deviations for M-estimators. 2002. Test 11 465-500.
The large deviation principle of certain series. 2004. European Series in Applied and Industrial Mathematics: Probabilites et Statistique. 8 200-220.
The large deviation principle for local processes. 2001.
Large deviation of M-estimators. 2006. Annals of the Institute of Statistical Mathematics. 58 21-52.
Confidence regions of fixed volume based on the likelihood ratio test. 2005. Far East Journal of Theoretical Statistics. 15 121-141.
Bahadur efficiency of the likelihood ratio test. 2005. Mathematical Methods of Statistics 14 163-179.
Some new tests for normality based on U-processes. 2006. With Y. Wang. Statistics & Probability Letters 76 69-82.
Two tests for multivariate normality based on the characteristic function. 2007. Mathematical Methods of Statistics. 16. 177-201.
A normality test for the errors of the linear model. 2008. International Journal of Statistics and Systems.
Minimax estimators of the coverage probability of the impermissible error for a location family. 2008. Statistics & Decisions. 28, 1001–1043
Talk on "Minimax estimators of the coverage probability of the impermissible error for a location family".
Avogadro’s number
Avogadro’s number N_A is the number of atoms in 12 grams of carbon-12. It’s about 6.02 × 10^23.
Here are a few fun coincidences with Avogadro’s number.
1. N_A is approximately 24! (i.e., 24 factorial.)
2. The mass of the earth is approximately 10 N_A kilograms.
3. The number of stars in the observable universe is 0.5 N_A.
The first observation comes from here. I forget where I first heard the second. The third comes from Andrew Dalke in the comments below, verified by WolframAlpha.
For more constants that approximately equal factorials, see the next post.
Wolfram Alpha informs me that Avogadro’s number and 24! differ by approximately 18 sextillion, assuming I counted my commas correctly. That represents just about 3%.
I don’t know if any of this is particularly interesting, but I felt compelled to look it up and hoped that I might save others the effort.
There are about 0.5 moles of stars in the observable universe.
Nathan: Avogadro’s number and 24! are closer on a log scale: 54.75 versus 54.78.
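These back-of-envelope claims are easy to check in a couple of lines of Python (a sketch, not part of the original post; it uses the exact value of N_A fixed by the 2019 SI redefinition):

```python
import math

N_A = 6.02214076e23            # Avogadro's number (exact since the 2019 SI redefinition)
fact24 = math.factorial(24)    # 24! = 620448401733239439360000

print(fact24 / N_A)            # ~1.03, i.e. 24! overshoots N_A by about 3%
print(math.log(N_A))           # ~54.75 (natural log)
print(math.log(fact24))        # ~54.78
```

The difference fact24 - N_A is about 1.8 × 10^22, matching the "18 sextillion" figure above.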
Andrew: Great! I hadn’t seen that one. It means that the number of stars in the universe is roughly the same order of magnitude as the number of atoms in something you could hold in your hand.
See also Kevin Kelly’s quote here.
2^79 (via company-internal Google+ discussion)
History of the Root of an Equation
Date: 11/01/2007 at 17:32:28
From: Greg
Subject: Why roots?
Why are solutions to equations called roots? Is it because the
solutions to some equations, e.g., x^2 = 16, are in fact roots of the
number specified in the equation?
Date: 11/01/2007 at 23:04:04
From: Doctor Peterson
Subject: Re: Why roots?
Hi, Greg.
This use of the word "root" originates with al-Khwarizmi, the Arabic
mathematician who wrote the first algebra book (coining our word
"algebra" in the process, and giving us the word "algorithm" from his
name). He saw the variable as the root out of which an equation
grows; solving the equation is finding ("extracting") the root. This
was more specifically applied to the equation x^n = k, from which we
get the idea of the nth root, and the square root in particular. So
both uses of "root" in math come from the same "root" idea, that of
the hidden source of a plant.
I've had trouble in the past looking for confirmation of the fact that
both uses of root originated there; it's hidden in a footnote in
Smith's History of Mathematics (Vol. 2, p. 393). But here is a quote
from Ball, A Short Account of the History of Mathematics, p. 157,
referring to the book Al-jabr:
The unknown quantity is termed either "the thing" or "the root"
(that is, of a plant), and from the latter phrase our use of the
word root as applied to the solution of an equation is derived.
The square of the unknown is called "the power".
If you have any further questions, feel free to write back.
- Doctor Peterson, The Math Forum
Five lemma in HoTop* and arbitrary pointed model categories
Let $\textbf{HoTop}^*$ be the homotopy category of pointed topological spaces. In the following, the word "isomorphism" shall always mean isomorphism in $\textbf{HoTop}^*$, i.e. pointed homotopy
equivalence. All constructions like cone or suspensions are pointed/reduced.
A triangle $X\to Y\to Z\to \Sigma X$ is called distinguished if it is isomorphic in $\textbf{HoTop}^*$ to a triangle of the form $X\stackrel{f}{\to} Y\hookrightarrow\text{C}f\to\Sigma X$, where $\text{C}f\to\Sigma X$ is the map collapsing $Y$ to a point.
Let $\ \ \matrix{X & \to & Y & \to & Z & \to & \Sigma X\cr\downarrow\alpha &&\downarrow\beta &&\downarrow\gamma &&\downarrow\Sigma\alpha\cr X^{\prime} & \to & Y^{\prime} & \to & Z^{\prime} & \to & \Sigma X^{\prime}}\ \ $ be a morphism of distinguished triangles such that $\alpha$ and $\beta$ are isomorphisms. Is it true that $\gamma$ is an isomorphism, too?
For a morphism of triangles as above (where $\alpha$ and $\beta$ are not necessarily isomorphisms), the morphism $\gamma^*: [Z^{\prime},-]\to [Z,-]$ is equivariant with respect to $[\Sigma\alpha]^*: [\Sigma X^{\prime},-]\to [\Sigma X,-]$. (edit: this is wrong -- see below) Therefore, I thought one could apply theorem 6.5.3 in Hovey's book on Model Categories. Unfortunately, there seems to be a gap at the end of the proof, as already pointed out here.
Therefore, I have the following
(1) Am I misunderstanding something in Hovey's proof of 6.5.3(b), or is there really a gap in it? If it is a gap: Do you have any suggestions on how to fix the proof?
(2) If the proof can't be fixed in this generality: Do you have suggestions on how to prove the statement above only for $\textbf{HoTop}^*$?
(1) The usual proof of this fact for triangulated categories does not work here, because there one uses the fact that $[X,-]$ is abelian-group valued for any $X$ and uses the classical five lemma
together with Yoneda to conclude that $\gamma$ is an isomorphism. This doesn't seem to work here.
(2) Since partial morphisms of distinguished triangles in $\textbf{HoTop}^*$ can always be completed to morphisms of triangles, we can reduce to the case where $\alpha$ and $\beta$ both equal the
identity. Therefore, we have a commutative diagram (in $\textbf{HoTop}^*$, i.e. a homotopy commutative diagram in $\textbf{Top}^*$)
$\matrix{X & \to & Y & \to & Z & \to & \Sigma X\cr\downarrow\text{id}_X &&\downarrow\text{id}_Y &&\downarrow\gamma &&\downarrow\text{id}_{\Sigma X}\cr X & \to & Y & \to & Z & \to & \Sigma X}$
and we have to prove that $\gamma$ is a homotopy equivalence.
Hovey's proof
The way Hovey proceeds in his proof is as follows: We know the following things:
(1) $\gamma^*: [Z,-]\to [Z,-]$ is $[\Sigma X,-]$-equivariant
(2) Two maps $c,d\in[Z,W]$ are equal in $[Y,W]$ if and only if they lie in the same $[\Sigma X,W]$-orbit.
From (2) and the commutativity of the middle square it follows that for any $h\in [Z,W]$ there is some $\rho\in[\Sigma X,W]$ such that $\gamma^*(h)=h.\rho$; in other words $\gamma^*$ doesn't change
the $[\Sigma X,-]$-orbit.
Now, suppose there are $g,h\in [Z,W]$ such that $\gamma^*(h)=\gamma^*(g)$. Then, again by the commutativity of the middle square, there is some $\alpha\in [\Sigma X,W]$ such that $g = h.\alpha$.
Thus, by (1), $\gamma^*(g) = \gamma^*(h).\alpha = \gamma^*(g).\alpha$, and so $\alpha\in\text{Stab}(\gamma^*(g))$.
The point is that Hovey now wants to show that $\text{Stab}(\gamma^*(g))=\text{Stab}(g)$; this would imply $\alpha\in\text{Stab}(g)$, and thus $h = g.\alpha^{-1} = g$ as required. The inclusion $\text{Stab}(\gamma^*(g))\supset\text{Stab}(g)$ is obvious. For the other inclusion, I have no idea how to prove it.
Do you see how one can fix the proof?
I made a mistake in proving that for any morphism of triangles $(\alpha,\beta,\gamma)$ the morphism $\gamma^*$ is equivariant with respect to $(\Sigma\alpha)^*$. This is wrong.
So what remains is the question on how to fix the proof of theorem 6.5.3 in Hovey's book. Any suggestions?
Thank you.
at.algebraic-topology model-categories
You broke all of your LaTeX again. – Harry Gindi Jan 19 '10 at 10:55
Should be fixed now. – Hanno Becker Jan 19 '10 at 11:07

2 Answers
This is false for spaces.
Let $X = S^0, Y = S^1$, and $f:X \to Y$ be the trivial map. Then $Z = Cf$ is $S^1 \vee S^1$. Then $[X,Y]$ is trivial, so the truth of this statement would imply: If you have a
map $g: Z \to Z$ which is the identity on the first circle, and such that the induced map $S^1 \to S^1$ after collapsing the first circle is homotopic to the identity, then $g$ is a
homotopy equivalence.
The map $g$ is a based map from a wedge of two circles to itself, which has fundamental group $F$, the free group on two generators, with generators $x, y$ corresponding to the two
circle factors. A self-map is determined up to homotopy by a pair of elements $x', y' \in F$. The condition that the first circle is mapped by the identity says $x = x'$, and the condition that the induced map on quotients is homotopic to the identity says that the image of $y'$ in $F/\langle x \rangle$ is the same as the image of $y$.

So: Suppose you have a pair of elements $x$ and $y'$ in $F = \langle x,y \rangle$ such that $y \equiv y'$ in $F/\langle x \rangle$. Then do $x$ and $y'$ freely generate $F$?
And the answer is no. For example, if $y' = (y x)^3 y^{-2}$, then $y' \equiv y$ after taking the quotient, but the pair $x, (y x)^3 y^{-2}$ don't generate the free group. This is
believeable, but rather than being vague let's give a proof for completeness.
There is a group homomorphism $F \to S_3$ sending $x$ to the 2-cycle $(1 2)$ and $y$ to $(1 3)$. This homomorphism is surjective because these generate the group, but $y'$ maps to the
trivial element because the image of $yx$ has order 3 and the image of $y$ has order 2. Therefore, $x$ and $y'$ don't generate $S_3$, and so they can't generate $F$.
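The $S_3$ computation above is easy to verify by machine. Here is a small sketch using SymPy's permutation groups (the generator names follow the answer; this is an illustration, not part of the original post):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Images of the free generators under F -> S_3 (points 0,1,2 stand for 1,2,3):
x = Permutation(0, 1, size=3)    # x |-> (1 2)
y = Permutation(0, 2, size=3)    # y |-> (1 3)

# y' = (yx)^3 y^{-2} maps to the identity, since the image of yx is a
# 3-cycle (order 3) and the image of y is a transposition (order 2):
yp = (y*x)**3 * (~y)**2
assert yp.is_Identity

# x and y generate all of S_3, but x and the image of y' generate only <x>:
assert PermutationGroup([x, y]).order() == 6
assert PermutationGroup([x, yp]).order() == 2
```

So the images of $x$ and $y'$ generate a subgroup of order 2, not all of $S_3$, exactly as the answer argues.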
Yes! I knew an easy example had to exist. Thanks Tyler! – Chris Schommer-Pries Jan 19 '10 at 14:03
Thank you, Tyler! I also found the reason for my confusion concerning the proposition in Hovey's book: I made a horrible mistake when trying to prove that for any map of triangles $(\alpha,\beta,\gamma)$ the map $\gamma^*$ is equivariant with respect to $(\Sigma\alpha)^*$. This is simply wrong. – Hanno Becker Jan 20 '10 at 20:00
There have been some revisions/edits to this question, and now I am confused about what the question is. I think Tyler definitely answered one version of the question, so I suggest
that you accept his answer and re-ask the revised version as a new MO question. In any event you'll get more interest with a new question. – Chris Schommer-Pries Jan 21 '10 at 2:28
@Chris: Ok, I'll do that. – Hanno Becker Jan 22 '10 at 7:30
Here are some comments on Tyler's nice example, which answers the question of whether a map from a CW complex $Z$ to itself is a homotopy equivalence if it restricts to a homotopy
equivalence from a subcomplex $A$ to itself and it induces a homotopy equivalence $Z/A\to Z/A$. The example shows the answer is no in general, but it is yes if $Z$ is simply-connected since
the hypotheses imply that the map induces an isomorphism on the homology of $Z$ so Whitehead's theorem applies. More generally the answer is yes if the fundamental group of $Z$ is abelian
and acts trivially on all the higher homotopy groups of $Z$ since Whitehead's theorem also holds in this generality. (For a textbook proof see Proposition 4.74 of my algebraic topology book.)

Tyler's example generalizes in the following way. Let $Z=S^1 \vee S^n$ for $n>1$, so $\pi_nZ={\mathbb Z}[t,t^{-1}]$, the Laurent polynomials, with $\pi_1Z$ acting by multiplication by powers of $t$. (Look in the universal cover of $Z$ to see this.) Let $g:Z\to Z$ be the identity on $S^1$ and on $S^n$ let it represent a Laurent polynomial $p(t)$ such that $p(1)=\pm 1$, for
example $p(t)=2t-1$. Then $g$ is a homotopy equivalence on $A=S^1$ and on the quotient $Z/A=S^n$, but $g$ need not induce an isomorphism on $\pi_nZ$ so $g$ need not be a homotopy
equivalence, for example when $p(t)=2t-1$ since in this case the cokernel of $g$ on $\pi_n$ is the quotient module ${\mathbb Z}[t,t^{-1}]/(2t-1)$, which is just the dyadic rationals.
I must mention that judicious choice of elements p(t) of pi_n(Z) as an attaching map for a cell allows you to construct maps S^1 -> W which are isomorphisms on pi_1 and on all homology
groups, but not homotopy equivalences. – Tyler Lawson Jan 20 '10 at 18:17
i have used a composition tool called compose, programmed in lisp, to make all my compositions since 2003. this text outlines the organization of this composition tool.
• the formal compositional work take place in something i call a proportional space.
• the proportional space is a serie of units.
• all units have a length of one.
• the proportional space can be divided into smaller spaces called fields.
• a field has a length of at least one unit.
• the total number of units that constitute a proportional space is called the resolution of the proportional space.
• every field in a proportional space correspond to a sound or a silence in the score.
• the sound or silence objects in a score are represented by one or more score symbols.
• the score symbols are either notes or rests.
• the notes and pauses do not correspond to the fields in the proportional space.
• a field can be represented in a score by several notes, and even in advanced cases by complex structures made of combinations of notes and pauses. but it remains a single sound or silence.
• there is no correspondence between the number of units in a field and the number of notes in the corresponding sound or silence in the score.
• the score structure is made of sections, partial sections, measures and partial measures.
• a partial measure correspond to a conductor beat.
• every measure have a measure time signature.
• every partial measure has a partial measure time signature.
• every measure has at least one partial measure.
example: a 6/8 measure beaten on every dotted quaver is then a measure with a measure time signature of 6/8, made of two partial measures with a partial measure time signature of 3/8.
• there can be several partial measure time signatures in one measure.
• to be able to work in the proportional space i first need to know its resolution.
• to know the resolution of a proportional space i have to construct that resolution.
this is done by making a nominal score.
• a nominal score is an empty score where i have decided how many nominal objects there can be in every partial measure.
• the nominal objects in a partial measure are of the same length.
• the number of nominal objects determine the nominal resolution of that partial measure.
• a nominal object correspond to one unit in the proportional space.
• if i count all the nominal objects in a nominal score i will have the total number of units of the proportional space.
• thus, the total number of nominal objects in the nominal score corresponds to the resolution of the proportional space.
• once i have the resolution of the proportional space i can start the formal compositional work.
• the temporal aspect of the formal compositional work is to divide the proportional space into nested fields.
• once these divisions are made the proportional space is now organized in a hierarchical content (that corresponds closely to the "contenu formel" of G. G. Granger: formes opérations objets).
• the bottom field in a hierarchical content at a given position is the active field.
• an active field can be either a sound or a silence.
• once i have the hierarchical content i can ask for its active fields.
• once i have the active fields i can now lay them out on the nominal score.
• i now have the raw score of the composition.
• once i have the raw score the rhythmical structure is subject to an extended series of transformations in order to have the final rhythmical structure.
• these transformations now take place on the score structure and is no longer related to the proportional space method. i am thus changing to another set of rules.
• when i have the final rhythmical structure i have the final raw score. this is how far the composition tool takes the structures of the composition.
• all transformations made in the score after the production of the final raw score are made by hand.
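the unit and field bookkeeping described above can be sketched in a few lines of python. this is only an illustrative model of the description, not the actual compose code (which is written in lisp), and all names here are hypothetical.

```python
# illustrative sketch of the proportional-space bookkeeping described above.

# a nominal score: one entry per partial measure, giving its nominal
# resolution (the number of equal-length nominal objects in that beat).
# e.g. a 6/8 bar beaten in two dotted quavers, then a 3/4 bar beaten in three.
nominal_score = [3, 3, 2, 2, 2]

# the resolution of the proportional space is the total number of units,
# i.e. the total number of nominal objects in the nominal score:
resolution = sum(nominal_score)      # 12

# fields divide the proportional space; each field is at least one unit
# long and the field lengths must add up exactly to the resolution.
fields = [4, 3, 5]
assert all(f >= 1 for f in fields)
assert sum(fields) == resolution
```

a hierarchical content would then be a nesting of such field lists, the bottom field at any given position being the active one.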
paris, march 3, 2012