Building mathematical models for cancer treatments
Matt Biesecker
(PhysOrg.com) -- South Dakota State University mathematicians are helping doctors of the Mayo Clinic build mathematical models to fine-tune an innovative strategy to treat cancer.
Assistant professor Matt Biesecker from SDSU’s Department of Mathematics and Statistics said the project has to do with Mayo Clinic Cancer Center physicians’ plan to use a modified measles virus
against some forms of cancer. That new line of attack against cancer is possible because Mayo researchers used bioengineering techniques to make a modified measles virus that will preferentially
infect cancer cells instead of the cells the virus would ordinarily target.
Biesecker said SDSU scientists aren't involved in Mayo physicians' laboratory work with the virus. The SDSU role so far has been helping Mayo researchers adjust their working mathematical model to
simulate treatment plans. That work can help Mayo scientists figure out timing of treatments, for example, and how large or small doses of virus affect the growth of tumors.
Biesecker said the model was based on a system of ordinary differential equations, in which the tumor cells and viruses are viewed as homogeneous populations; it thus did not account for true
interactions between viruses and cancer cells at the molecular level. On the other hand, the model provided a good fit to experimental data.
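The article does not reproduce the equations, but a generic system of this homogeneous-population type can be sketched as follows. Everything below (the equations, the rates, and the initial values) is an illustrative assumption, not the published Mayo/SDSU model:

```python
# Generic tumor-virotherapy ODE sketch (illustrative assumptions only):
#   u: uninfected tumor cells, i: infected tumor cells, v: free virus
r, b, d, k, c = 0.1, 0.05, 0.2, 2.0, 0.1  # growth, infection, death, burst, clearance
u, i, v = 10.0, 0.0, 10.0                 # tumor burden and an initial virus dose
dt = 0.001
for _ in range(20000):                    # forward-Euler integration over 20 time units
    du = r * u - b * u * v                # tumor grows, loses cells to infection
    di = b * u * v - d * i                # infected cells accumulate, then die
    dv = k * i - c * v - b * u * v        # dying cells release virus; virus is cleared
    u, i, v = u + du * dt, i + di * dt, v + dv * dt
print(round(u, 3))                        # uninfected tumor cells left, far below the initial 10
```

Timing and dose questions of the kind described in the article amount to rerunning such a system with different initial virus values and injection schedules.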
“What we have been working on is a more complicated and more accurate mathematical model. It would take into account the multiple spatial scales that are involved. Viruses are very small in scale
compared to the size of the tumor cell. The current model does not take that into account.”
Biesecker and his colleagues wrote about the project in a journal article published in late 2009 in the Bulletin of Mathematical Biology. His co-authors are assistant professor Jung-Han Kimn, also of
SDSU’s Department of Mathematics and Statistics; professor Huitian Lu of SDSU’s Department of Engineering Technology and Management; David Dingli of the Mayo Clinic’s Department of Molecular
Medicine; and Zeljko Bajzer of the Mayo Clinic’s Biomathematics Resource Core in its Department of Biochemistry and Molecular Biology.
The mathematical model could help the Mayo researchers in the process of designing viruses to target cancer. For example, having a more detailed mathematical model could help scientists understand
how likely a particular virus is to infect a tumor cell versus a normal cell, how quickly the virus would replicate within an infected cell, and how quickly an infected cancer cell would die.
“There are certain key characteristics the Mayo researchers want their viruses to have,” Biesecker said. “What mathematical modeling has allowed them to understand is which of these characteristics
plays a dominant role in reducing the tumor cell population. Ultimately their goal is to find an effective treatment. It’s not just an intellectual exercise.”
In the published study, Biesecker and his colleagues started with a model of the dynamics of tumor virotherapy, validated with experimental data, then used “optimization theory” to examine possible
improvements in tumor therapy outcomes. The scientists found that in most circumstances, giving more than two doses of a virus is not helpful; correctly timed delivery of the virus provides far
better results when compared to regularly scheduled therapy or continuous infusion; and a second dose of virus that is not properly timed leads to a worse outcome compared to a single dose of virus.
Surprisingly, the model also predicts that it is less costly to treat larger tumors than small ones. Scientists are still trying to determine why. Biesecker said though the results are
counter-intuitive, the finding makes sense in light of the fact that the therapy is trying to infect a community of cancer cells with a virus. A denser population of target cells makes it more likely
for the infection to take hold and spread.
In the next mathematical model, Biesecker said, SDSU scientists hope to simulate the behavior of individual cells, giving Mayo scientists greater insight into how viruses can be used against cancer.
Negative Numbers Combined with Exponentials
Date: 03/09/2001 at 01:07:20
From: Dee Ryno
Subject: Negative numbers combined with exponentials.
I know that -3 ^2 {to the second power} is negative 9 because the
order of operations tells us exponentials are done first and then the
answer {9} is multiplied by (-1), but why isn't the negative attached
to the -3 to say -3 * -3, which would make it positive, since 3 ^2
{to the second power} is 3 * 3? Help.
This has confused a lot of teachers too!
Date: 03/09/2001 at 15:29:24
From: Doctor Peterson
Subject: Re: Negative numbers combined with exponentials.
Hi, Dee.
You are aware that the order of operations is the key. Because
negation is taken as a multiplication, and exponentiation is done
before multiplication, we read

    -3^2   as   -(3^2)

rather than as

    (-3)^2
The exponent holds on to the 3 tighter than the minus sign does.
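The same precedence convention is built into most programming languages, which gives a quick way to check it; in Python, for example, the exponentiation operator binds more tightly than the unary minus:

```python
print(-3**2)    # parsed as -(3**2), so it prints -9
print((-3)**2)  # explicit parentheses give 9
```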
I think you're asking why we have this rule, when the minus looks so
close to the 3, and it seems so much more natural to think of it as -3
squared. I could just say this is a choice that has been made, and we
just follow the convention. But there's an additional reason besides
the logic of seeing negation as multiplication by -1. When we get to
algebra and want to write polynomials, we find ourselves working with
-x^n, which has to follow the same rules as for -3^n:
x^3 - 3x^2 - 3x + 5 = 0
There's little question here; we know that 3x^2 is taken as 3(x^2),
and then we subtract that. But what about
-x^3 - 3x^2 - 3x + 5 = 0 ?
Do we take this as (-x)^3 or as -(x^3)? Because this is a polynomial,
we know that x is meant to be the base of all the exponents; we don't
want to have to write -(x^3) to make it mean what we intend; so we are
perfectly happy to follow the logic where it takes us and treat the
negation as a multiplication done after the exponentiation, rather
than as a part of the base. The rule helps here, rather than seeming
odd: In a polynomial we want exponents to come before everything else,
because they are the stars of the show.
I suspect that polynomials drove much of the development of the order
of operations; many of the early examples of algebraic notation in
which those rules can be discerned are polynomials.
- Doctor Peterson, The Math Forum
A box has 150 balls of which 65 are red and 30 blue. What is the probability that a ball picked up at random is red or blue?
The box has 150 balls. Of these 65 balls are red in color and 30 are blue. We pick up one ball at random and want to know the probability that the ball is either red or blue. There are a total of 65
+ 30 = 95 balls which satisfy the given condition.
Therefore the probability that a ball picked up at random is red or blue is 95/150 = 19/30.
The required probability is 19/30
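As a quick cross-check of the arithmetic (an illustration, not part of the original answer), Python's fractions module reduces the ratio to lowest terms automatically:

```python
from fractions import Fraction

favorable = 65 + 30           # red or blue balls
p = Fraction(favorable, 150)  # probability, automatically in lowest terms
print(p)                      # 19/30
```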
Creating C++ program to make a Sine function without any library other than iostream?
November 10th, 2012, 03:46 AM
Creating C++ program to make a Sine function without any library other than iostream?
Hi. I am a beginner in C++ programming. My assignment is to create a C++ Program to find the sine of a number without any library other than iostream by using Taylor Series:
sin (x) = (x/1!) - (x^3/3!) + (x^5/5!) - (x^7/7!) + (x^9/9!) ...... (x^n/n!).
I've spent 4-5 hours on this but I just can't seem to make it right. I have to submit it within 6 hrs.
Here is what I have done till now:
#include <iostream>

double fact (int x);            // declaration of factorial function
double power (double x, int y); // declaration of power function
double sine (double x);         // declaration of sine function ("sine" avoids clashing with the standard library's sin)
//double cosine (double x);     // declaration of cosine function
//double tangent (double x);    // declaration of tangent function

using namespace std;

int main()
{
    double x = 0;
    cout << "Enter the value of x in Sin(x): " << endl;
    cin >> x;
    cout << "Sine of " << x << " is " << sine(x);
    cout << endl;
    return 0;
}

//Function for Factorial
double fact (int x)
{
    double f = 1;
    for (int i = 1; i <= x; i++)
        f = f * i;              // multiply in each factor
    return f;
}

//Function for Power
double power (double x, int y)
{
    double p = 1;
    for (int i = 1; i <= y; i++)
        p = p * x;              // multiply in each factor of x
    return p;
}

//Function for Sine: sum the odd-order Taylor terms with alternating signs
double sine (double x)
{
    double sum_pos = 0;         // + terms: x/1!, x^5/5!, x^9/9!, ...
    double sum_neg = 0;         // - terms: x^3/3!, x^7/7!, ...
    // 21 terms are plenty; much higher and fact() overflows a double
    for (int i = 1; i <= 21; i += 4)
        sum_pos = sum_pos + (power(x, i) / fact(i));
    for (int i = 3; i <= 21; i += 4)
        sum_neg = sum_neg + (power(x, i) / fact(i));
    return sum_pos - sum_neg;
}
Please help me. I would be grateful to any sort of help.
November 10th, 2012, 05:30 AM
Re: Creating C++ program to make a Sine function without any library other than iostream
Why do you start a new thread with exactly the same problem that is discussed in your previous thread?
Please get back, read what guys wrote you in response to your problem and try to implement what they recommended.
November 10th, 2012, 05:35 AM
Re: Creating C++ program to make a Sine function without any library other than iostream
Why do you start a new thread with exactly the same problem that is discussed in your previous thread?
Please get back, read what guys wrote you in response to your problem and try to implement what they recommended.
That was some other problem. I cannot use the math library, and I have to call the factorial and power functions inside the sine function. I don't know how to do that. Without a separate sine
function I can make it work, but with the sine function the program is not compiling. I really need some help here.
November 10th, 2012, 05:54 AM
Re: Creating C++ program to make a Sine function without any library other than iostream
Structured Total Least Squares for Approximate Polynomial Operations
Abstract (Summary)
This thesis presents techniques for accurately computing a number of fundamental operations on approximate polynomials. The general goal is to determine nearby polynomials which have a non-trivial
result for the operation. We proceed by first translating each of the polynomial operations to a particular structured matrix system, constructed to represent dependencies in the polynomial
coefficients. Perturbing this matrix system to a nearby system of reduced rank yields the nearby polynomials that have a non-trivial result. The translation from polynomial operation to matrix system
permits the use of emerging methods for solving sophisticated least squares problems. These methods introduce the required dependencies in the system in a structured way, ensuring a certain
minimization is met. This minimization ensures the determined polynomials are close to the original input. We present translations for the following operations on approximate polynomials:
* Division
* Greatest Common Divisor (GCD)
* Bivariate Factorization
* Decomposition
The Least Squares problems considered include classical Least Squares (LS), Total Least Squares (TLS) and Structured Total Least Squares (STLS). In particular, we make use of some recent developments
in formulation of STLS, to perturb the matrix system, while maintaining the structure of the original matrix. This allows reconstruction of the resulting polynomials without applying any heuristics
or iterative refinements, and guarantees a result for the operation with zero residual. Underlying the methods for the LS, TLS and STLS problems are varying uses of the Singular Value Decomposition
(SVD). This decomposition is also a vital tool for determining appropriate matrix rank, and we spend some time establishing the accuracy of the SVD. We present an algorithm for relatively accurate
SVD recently introduced in [8], then used to solve LS and TLS problems. The result is confidence in the use of LS and TLS for the polynomial operations, to provide a fair contrast with STLS. The SVD
is also used to provide the starting point for our STLS algorithm, with the prescribed guaranteed accuracy. Finally, we present a generalized implementation of the Riemannian SVD (RiSVD), which can
be applied on any structured matrix to determine the result for STLS. This has the advantage of being applicable to all of our polynomial operations, with the penalty of decreased efficiency. We also
include a novel, yet naive, improvement that relies on randomization to increase the efficiency, by converting a rectangular system to one that is square. The results for each of the polynomial
operations are presented in detail, and the benefits of each of the Least Squares solutions are considered. We also present distance bounds that confirm our solutions are within an acceptable distance.
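As one illustration of how an SVD-based rank test detects a non-trivial result (a generic sketch, not the algorithm developed in the thesis): two polynomials have a non-trivial GCD exactly when their Sylvester matrix is rank-deficient, which shows up as a near-zero smallest singular value.

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of two polynomials given as coefficient lists, highest degree first."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):               # n shifted rows of p's coefficients
        S[i, i:i + m + 1] = p
    for i in range(m):               # m shifted rows of q's coefficients
        S[n + i, i:i + n + 1] = q
    return S

# p and q share the factor (x - 1), so the Sylvester matrix is rank-deficient.
p = [1, -3, 2]   # (x - 1)(x - 2)
q = [1, -4, 3]   # (x - 1)(x - 3)
sigma = np.linalg.svd(sylvester(p, q), compute_uv=False)
print(sigma[-1])  # smallest singular value, numerically zero => nontrivial GCD
```

Perturbing the coefficients slightly turns that zero into a small singular value; deciding how to push such a nearby system back to exact rank deficiency, while preserving the Sylvester structure, is the approximate-GCD situation the structured least-squares machinery is designed for.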
Bibliographical Information:
School: University of Waterloo
School Location: Canada - Ontario
Source Type: Master's Thesis
Keywords: computer science
Date of Publication: 01/01/2004
Multiplication Law?
January 7th 2008, 06:29 AM #1
This is a probability problem on a past exam for my university. It asks....
A test correctly identifies a disease in 95% of people who have it. It correctly identifies no disease in 94% of people who don't have it. In the population, 5% of people have this disease. A
person is tested at random.
1. What is the probability that they will test positive?
2. what is the probability that they have the disease given that they tested positive.
I'm useless at probability but I have a feeling that this has something to do with the multiplication law. I just can't seem to figure out how this would be done. I know that the second part
would be like...
P(person actually has the disease | person tested positive)
Could anyone point me in the right direction??
Mr. Bayes will help you:

    P(B) = P(B|A)P(A) + P(B|¬A)P(¬A)

    P(A|B) = P(B|A)P(A) / P(B)
Ok, I had a belt at this, here's what i have.
A = probability the the person has the disease (.05).
B = probability that the person hasn't got the disease (.95).
C = probability of detecting the disease if present (.95).
D = probability of detecting no disease if not present (.94).
P(person tests positive) = (0.05 * 0.95) + (0.95 * 0.06 (¬D)) = 0.1045
So a person will test positive about 10% of the time, is this right or at least close to it??
And thank you Colby, you defo pointed me in the right direction
Ok, I had a belt at this, here's what i have.
A = probability the the person has the disease (.05).
B = probability that the person hasn't got the disease (.95).
C = probability of detecting the disease if present (.95).
D = probability of detecting no disease if not present (.94).
P(person tests positive) = (0.05 * 0.95) + (0.95 * 0.06 (¬D)) = 0.1045
So a person will test positive about 10% of the time, is this right or at least close to it??
And thank you Colby, you defo pointed me in the right direction
Looks good.
A test correctly identifies a disease in 95% of people who have it. It correctly identifies no disease in 94% of people who don't have it. In the population, 5% of people have this disease. A
person is tested at random.
1. What is the probability that they will test positive?
2. what is the probability that they have the disease given that they tested positive.
You good for 2? (I get 0.454545.... = 45/99).
Regarding the answer to 2 - The moral of the story is .......
Yea, i get it now. I'd use the second formula that colby posted. Which i did and I got .4545454 repeating.
As to the moral, well I guess the moral would be that that is a pretty bad test
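As a numeric cross-check of the thread's figures (the variable names below are my own labels for the poster's A, C, and D):

```python
p_disease = 0.05        # A: person has the disease
p_pos_given_d = 0.95    # C: test detects the disease when present
p_neg_given_nd = 0.94   # D: test is correctly negative when absent

# Law of total probability: P(positive test)
p_pos = p_disease * p_pos_given_d + (1 - p_disease) * (1 - p_neg_given_nd)

# Bayes' theorem: P(disease | positive test)
p_d_given_pos = p_disease * p_pos_given_d / p_pos

print(round(p_pos, 4))          # 0.1045, matching the thread
print(round(p_d_given_pos, 4))  # 0.4545, i.e. 5/11
```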
Torpedo Data Computer
The Torpedo Data Computer (TDC) was an early electromechanical analog computer used for torpedo fire-control on American submarines during World War II (see Figure 1). Britain, Germany, and Japan
also developed automated torpedo fire control equipment, but none were as advanced as US Navy's TDC. These nations all developed torpedo fire control computers for calculating torpedo courses to
intercept targets, but the TDC added the ability to automatically track the target. The target tracking capabilities of the TDC were unique for submarines during WWII and set the standard for
submarine torpedo fire control at that time.
The TDC was designed to provide fire-control solutions for submarine torpedo launches against ships running on the surface (surface warships used a different computer for their torpedo launches). The
TDC had a wide array of dials and switches for data input and display. To generate a fire control solution, it required inputs on
• submarine course and speed, which were read automatically from the submarine's gyrocompass and pitometer log
• estimated target course, speed, and range information (obtained using data from the submarine's periscope, target bearing transmitter, radar, and sonar observations)
• torpedo type and speed (type was needed to deal with the different torpedo ballistics)
The TDC performed the trigonometric calculations required to compute a target intercept course for the torpedo. It also had an electromechanical interface to the torpedoes that allowed it to
automatically set the torpedo courses while they were in their tubes, ready to be launched.
The TDC's target tracking capability was used by the fire control party to continuously update the fire control solution to the torpedoes even while the submarine was maneuvering. The TDC's target
tracking ability also allowed the submarine to accurately launch torpedoes even when the target was temporarily obscured by smoke or fog.
The TDC was a rather bulky addition to the sub's conning tower and required two extra crewmen: one as an expert in its maintenance, and the other as its actual combat operator. Despite these
drawbacks, the use of the TDC was an important factor in the successful commerce raiding program conducted by American submarines during the Pacific campaign of WWII. First-person accounts published
on the American submarine campaign in the Pacific often cite the use of TDC.
Two upgraded US Navy WWII-era fleet submarines (USS Tusk and USS Cutlass) with their TDCs continue in service with Taiwan's navy and US Nautical Museum staff are assisting them with maintaining their
equipment. The museum also has a fully restored and functioning TDC for the USS Pampanito, docked in San Francisco.
The problem of aiming a torpedo has occupied military engineers since Robert Whitehead developed the modern torpedo in the 1860s. These early torpedoes ran at a preset depth on a straight course
(consequently they are frequently referred to as "straight runners"). This was the state of the art in torpedo guidance until the development of the homing torpedo during the latter part of World
War II. The vast majority of the torpedoes launched during WWII were straight running torpedoes and these continued in use for many years after WWII. For example, the standard U.S. WWII torpedo
remained in service until 1980 and still serves with foreign navies today. In fact, two WWII-era straight running torpedoes, fired by the nuclear powered submarine HMS Conqueror, sank the
ARA General Belgrano in 1982, the last ship sunk by a submarine in combat to date.
During World War I, computing a target intercept course for a torpedo was a manual process where the fire control party was aided by various slide rules (the U.S. examples were colloquially called
"banjo", for its shape, and "Is/Was") and clever mechanical sights. During World War II, Germany, Britain, Japan, and the United States each developed analog computers to automate the process of
computing the required torpedo course. The first US submarine designed to use the TDC was the USS Tambor, which deployed in 1940 with the Torpedo Data Computer TDC Mk III (see Figure 1). In 1943,
the Torpedo Data Computer Mk IV was developed to add support for the Torpedo Mk 18 and semi-automatic use of radar data. Both the TDC Mk III and Mk IV were developed by Arma Corporation (now American
Bosch Arma).
The Problem of Aiming a Straight Running Torpedo
A straight running torpedo has a gyroscope-based control system that ensures that the torpedo will run a straight course. The torpedo can run on a course different from that of the submarine by
adjusting a parameter called the gyro angle, which sets the course of the torpedo relative to the course of the submarine (see Figure 2). The primary role of the TDC is to determine the gyro angle
setting required to ensure that the torpedo will strike the target.
Determining the gyro angle required the real-time solution of a complex trigonometric equation (see Equation 1 for a simplified example). The TDC provided a continuous solution to this equation using
data updates from the submarine's navigation sensors and the TDC's target tracker. The TDC was also able to automatically update all torpedo gyro angle settings simultaneously with a fire control
solution, which improved the accuracy over systems that required manual updating of the torpedo's course.
The TDC enables the submarine to launch the torpedo on a course different from that of the submarine, which is important tactically. Otherwise the submarine would need to be pointed at the projected
intercept point in order to launch a torpedo. Requiring the entire vessel to be pointed in order to launch a torpedo would be time consuming, require precise submarine course control, and would
needlessly complicate the torpedo firing process. The TDC with target tracking gives the submarine the ability to maneuver independently of the required target intercept course for the torpedo.
As is shown in Figure 2, in general, the torpedo does not actually move in a straight path immediately after launch and it does not instantly accelerate to full speed, which are referred to as
torpedo ballistic characteristics. The ballistic characteristics are described by three parameters: reach, turning radius, and corrected torpedo speed. Also, the target bearing angle is different
from the point of view of the periscope versus the point of view of the torpedo, which is referred to as torpedo tube parallax. These factors are a significant complication in the calculation of the
gyro angle and the TDC must compensate for their effects.
Straight running torpedoes were usually launched in salvo (i.e. multiple launches in a short period of time) or a spread (i.e. multiple launches with slight angle offsets) to increase the probability
of striking the target given the inaccuracies present in the measurement of angles, target range, target speed, torpedo track angle, and torpedo speed. Salvos and spreads were also launched to strike
tough targets multiple times to ensure their destruction. The TDC supported the launching of torpedo salvos by allowing short time offsets between firings and torpedo spreads by adding small angle
offsets to each torpedo's gyro angle. The last ship sunk by a submarine torpedo attack, the ARA Belgrano, was struck by two torpedoes from a three-torpedo spread.
To accurately compute the gyro angle for a torpedo in a general engagement scenario, the target course, range, and bearing must be accurately known. During WWII, target course, range, and bearing
estimates often had to be generated using periscope observations, which were highly subjective and error prone. The TDC was used to refine the estimates of the target's course, range, and bearing
through the following iterative process:
• estimate the target's course, speed, and range based on observations.
• use the TDC to predict the target's position at a future time based on the estimates of the target's course, speed, and range.
• compare the predicted position against the actual position and correct the estimated parameters as required to achieve agreement between the predictions and observation. Agreement between
prediction and observation means that the target course, speed, and range estimates are accurate.
Estimating the target's course was generally considered the most difficult of the observation tasks. The accuracy of the result was highly dependent on the experience of the skipper. During combat,
the actual course of the target was not usually determined but instead the skippers determined a related quantity called "angle on the bow." Angle on the bow is the angle formed by the target course
and the line of sight to the submarine. Some skippers, like the legendary Richard O'Kane, practiced determining the angle on the bow by looking at IJN ship models mounted on a calibrated lazy Susan
through an inverted binocular barrel.
To generate target position data versus time, the TDC needed to solve the equations of motion for the target relative to the submarine. The equations of motion are differential equations and the TDC
used mechanical integrators to generate its solution.
The TDC needed to be positioned near other fire control equipment to minimize the amount of electromechanical interconnect. Because submarine space within the pressure hull was limited, the TDC
needed to be as small as possible. On WWII submarines, the TDC and other fire control equipment was mounted in the conning tower, which was a very small space. The packaging problem was severe and
the performance of some early torpedo fire control equipment was hampered by the need to make it small.
TDC Functional Description
Since the TDC actually performed two separate functions, generating target position estimates and computing torpedo firing angles, the TDC actually consisted of two types of analog computers:
• Angle Solver: This computer calculates the required gyro angle. The TDC had separate angle solvers for the forward and aft torpedo tubes.
• Position Keeper: This computer generates a continuously updated estimate of the target position based on earlier target position measurements.
Angle Solver
The exact equations implemented in the angle solver have not been published in any generally available reference. However, the Submarine Torpedo Fire Control Manual does discuss the calculations in a
general sense and a greatly abbreviated form of that discussion is presented here.
The general torpedo fire control problem is illustrated in Figure 2. The problem is made more tractable if we assume:
• The periscope is on the line formed by the torpedo running along its course
• The target moves on a fixed course and speed
• The torpedo moves on a fixed course and speed
As can be seen in Figure 2, these assumptions are not true in general because of the torpedo ballistic characteristics and torpedo tube parallax. Providing the details as to how to correct the
torpedo gyro angle calculation for ballistics and parallax is complicated and beyond the scope of this article. Most discussions of gyro angle determination take the simpler approach of using Figure
3, which is called the torpedo fire control triangle. Figure 3 provides an accurate model for computing the gyro angle when the gyro angle is small, usually less than 30^o. The effects of parallax
and ballistics are minimal for small gyro angle launches because the course deviations they cause are usually small enough to be ignorable. US submarines during WWII preferred to fire their torpedoes
at small gyro angles because the TDC's fire control solutions were most accurate for small angles.
The problem of computing the gyro angle setting is a trigonometry problem that is simplified by first considering the calculation of the deflection angle, which ignores torpedo ballistics and
parallax. For small gyro angles, θ[Gyro] ≈ θ[Bearing] - θ[Deflection]. A direct application of the law of sines to Figure 3 produces Equation 1.
(Equation 1)
$\frac{\left\Vert v_{Target} \right\Vert}{\sin(\theta_{Deflection})} = \frac{\left\Vert v_{Torpedo} \right\Vert}{\sin(\theta_{Bow})}$
v[Target] is the velocity of the target.
v[Torpedo] is the velocity of the torpedo.
θ[Bow] is the angle of the target ship bow relative to the periscope line of sight.
θ[Deflection] is the angle of the torpedo course relative to the periscope line of sight.
Observe that range plays no role in Equation 1, which is true as long as the three assumptions are met. In fact, Equation 1 is the same equation solved by the mechanical sights of steerable torpedo
tubes used on surface ships during WWI and WWII. Torpedo launches from steerable torpedo tubes meet the three stated assumptions well. However, an accurate torpedo launch from a submarine requires
parallax and torpedo ballistic corrections when gyro angles are large. These corrections require knowing range accurately. When the target range was not known accurately, torpedo launches requiring
large gyro angles were not recommended.
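Equation 1 is straightforward to evaluate numerically. The sketch below (an illustration with assumed values, not TDC logic) solves it for the deflection angle given a 20-knot target, a 46-knot torpedo, and a 70-degree angle on the bow:

```python
import math

def deflection_angle(v_target, v_torpedo, bow_deg):
    """Solve Equation 1 for the deflection angle, in degrees."""
    return math.degrees(math.asin(v_target * math.sin(math.radians(bow_deg)) / v_torpedo))

# 20-knot target, 46-knot Mk 14 torpedo, 70-degree angle on the bow
print(round(deflection_angle(20, 46, 70), 1))   # about 24 degrees of lead
```

Note that, exactly as the text observes, target range never enters the calculation.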
Equation 1 is frequently modified to substitute track angle for deflection angle (track angle is defined in Figure 2, θ[Track]=θ[Bow]+θ[Deflection]). This modification is illustrated with Equation 2.
(Equation 2)
$\frac{\left\Vert v_{Target} \right\Vert}{\sin(\theta_{Deflection})} = \frac{\left\Vert v_{Torpedo} \right\Vert}{\sin(\theta_{Track} - \theta_{Deflection})}$
θ[Track] is the angle between the target ship's course and the submarine's course.
A number of publications state the optimum torpedo track angle as 110° for a Torpedo Mk 14 (a 46-knot weapon). Figure 4 shows a plot of the deflection angle versus track angle when the gyro angle is 0° (i.e. θ[Deflection]=θ[Bearing]). The optimum track angle is defined as the point of minimum sensitivity of the deflection angle to track-angle errors for a given target speed. This minimum occurs at the points of zero slope on the curves in Figure 4 (these points are marked by small triangles). The curves show the solutions of Equation 2 for deflection angle as a function of target speed and track angle. Figure 4 confirms that 110° is the optimum track angle for a target traveling at a common ship speed.
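These relationships are easy to check numerically. The sketch below is an illustration, not the TDC's actual computation, and the 16-knot target speed is an assumption, since the article does not state the speed. Rearranging Equation 2 at zero gyro angle gives tan(θ_Deflection) = r·sin(θ_Track)/(1 + r·cos(θ_Track)), where r is the target-to-torpedo speed ratio; setting the derivative with respect to track angle to zero gives cos(θ_Track) = −r at the optimum.

```python
import math

v_target, v_torpedo = 16.0, 46.0   # knots; the 16-knot target speed is an assumption
r = v_target / v_torpedo

def deflection(track_deg):
    """Deflection angle (degrees) from Equation 2 at zero gyro angle.

    Equation 2 rearranges to tan(defl) = r*sin(track) / (1 + r*cos(track)).
    """
    t = math.radians(track_deg)
    return math.degrees(math.atan2(r * math.sin(t), 1 + r * math.cos(t)))

# The deflection angle is stationary (zero slope in Figure 4) where
# cos(track) = -r; this is the optimum track angle.
optimum = math.degrees(math.acos(-r))
print(round(optimum, 1))            # close to the published 110 degrees
print(round(deflection(optimum), 1))
```

For these speeds the stationary point lands near 110°, matching the published figure for the Mk 14.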
There is fairly complete documentation available for a Japanese torpedo fire control computer that goes through the details of correcting for the ballistic and parallax factors. While the TDC may not have used exactly the same approach, it was likely very similar.
Position Keeper
As with the angle solver, the exact equations implemented in the position keeper have not been published in any generally available reference. However, similar functions were implemented in the
rangekeepers for surface ship-based fire control systems. For a general discussion of the principles behind the position keeper, see Rangekeeper.
Notes and references
External links
Diamond Solitaire
│ The Games and Puzzles Journal Issue 41, September-October 2005 │
Sections on this page: Introduction | General Boards with Super-Sweeps | Rhombus(6) | Maximal Sweeps on Rhombus(6i) | Conclusions
by George Bell - September 2005
Peg solitaire is a one-person game usually played on a 33-hole cross-shaped board, or a 15-hole triangular board (figure 1). In the first case the pattern of holes (or board locations) comes from a square lattice, while in the second case it comes from a triangular lattice. The usual game begins with the board filled by pegs (or marbles) except for a single board location, left vacant. The
player then jumps one peg over another into an empty hole, removing the jumped peg from the board. The goal is to choose a series of jumps that finish with one peg. The general problem of going from
a board position with one peg missing to one peg will be called a peg solitaire problem. If the missing peg and final peg are in the same place, we call it a complement problem, because the starting
and ending board positions are complements of one another (where every peg is replaced by an empty hole and vice versa).
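In code, a board position can be represented as the set of holes holding pegs, and taking the complement is just a set difference. A minimal sketch (the toy board below is hypothetical; any set of hole coordinates works):

```python
# A board is a set of hole coordinates; a position is the subset holding pegs.
holes = {(r, c) for r in range(3) for c in range(3)}  # toy 9-hole board

def complement(pegs):
    """Swap every peg for an empty hole and vice versa."""
    return holes - pegs

start = holes - {(1, 1)}   # full board with the centre vacant
finish = {(1, 1)}          # one peg left where the vacancy was
print(complement(start) == finish)   # True: this is a complement problem
```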
Initially, we will consider peg solitaire on boards of rather arbitrary shape. The basic problem may seem hard enough, but we will add more conditions or constraints on the solution. The reason for
this may not be immediately clear, but it has to do with the fact that adding constraints to a problem with many solutions can make it easier to find one, and the solutions themselves can be quite
remarkable. The approach also brings out certain board shapes with interesting properties.
Suppose we add the constraint that the peg solitaire problem must finish in the most dramatic way possible: with one peg "sweeping off" all the remaining pegs, and finishing as the sole survivor.
Here we need to distinguish between a jump (one peg jumping over another) and a move (one or more jumps by the same peg). Any single move that captures n pegs is a sweep of length n, or an n-sweep. In this terminology we want to finish a peg solitaire problem with the longest sweep possible.
A sweep that has the longest length geometrically possible on a board is called a maximal sweep. For most board shapes, maximal sweeps cannot be reached in the solution to a peg solitaire problem. In
GPJ #36 [1], we considered the triangular board of side n, called Triangle(n). We saw that for n odd, the board Triangle(n) supports an even more remarkable sweep that is maximal and also has the
property that every hole in the board is affected by the sweep, in that it is either jumped over or is the starting or ending hole of some jump in the sweep. Such remarkable sweeps deserve a special name: we call them super-sweeps. While any board has a maximal sweep, only a very few support a super-sweep.
Figure 1 Maximal sweeps on the standard boards, only the second is a super-sweep.
(a) A maximal 16-sweep on the 33-hole cross-shaped board.
(b) A maximal 9-sweep on the 15-hole triangular board, Triangle(5).
In figure 1a, the central hole is not touched or jumped over by the sweep, thus while this sweep is the longest possible it is not a super-sweep, and a super-sweep is not possible on this board. It's
not hard to see that super-sweeps are never possible on square lattice boards, except for certain trivial or degenerate cases (like a 1-dimensional board). Non-trivial super-sweeps are only possible on a triangular lattice; an example is figure 1b.
Neither of the sweep patterns of figure 1 can be reached during a peg solitaire problem on that board. The easiest way to see this is to try to figure out how one could have arrived at the sweep
board position, or equivalently try to play backwards from the sweep position. In GPJ #36 [1], we saw in the Forward/Backward Theorem that backward play is equivalent to forward play from the
complement of the board position. This concept is referred to as the "time reversal trick" in Winning Ways For Your Mathematical Plays [2].
If we take the complement of either board position in figure 1, we find that no jump is possible. This proves that there is no "parent board position" from which the sweep position can arise; it cannot occur during the solution to a peg solitaire problem. If a jump were possible in the complement of a super-sweep pattern, there would have to be two consecutive empty holes in the original super-sweep. Because it is a super-sweep, both of these holes must be the starting or ending locations of some jump in the sweep, which is impossible. Therefore a super-sweep can never occur in the solution to a peg solitaire problem.
If super-sweeps cannot occur in peg solitaire problems, the reader may wonder why we are wasting our time with them. The answer is provided by the triangular boards and GPJ #36 [1]: the super-sweep
pattern of figure 1b can be reached in a problem on Triangle(6). This 9-sweep is no longer a super-sweep with respect to Triangle(6), but it is still a maximal sweep. Loosely speaking, a super-sweep
may still be reachable in a peg solitaire problem on a board of one larger size.
General Boards with Super-Sweeps
Let us consider peg solitaire boards on a triangular lattice where the board shape is a general polygon. Because it is on a triangular lattice the corners of the board are restricted to multiples of
60°. For which such boards is it possible to have a super-sweep? It is clear that the board edges must all have an odd length (in holes), because the super-sweep must pass through all the corners of
the board (a corner hole cannot be jumped over, and consequently the super-sweep must jump into it). Consider, for example, the 8-sided polygon board of figure 2.
Figure 2 An 8-sided (non-convex) polygon board and the associated super-sweep graph.
(a) An unusual 24-hole board, set up in the board position of a hypothetical super-sweep (of length 15).
(b) The graph of the hypothetical super-sweep.
Can the peg at the upper left corner of the board make a tour of the board, sweeping off all 15 other pegs? We can answer this question by looking at the graph formed by the sweep, shown separately
in figure 2b rather than on top of the board as in figure 1. In the language of graph theory the super-sweep is an Euler path on this graph, i.e. a path that traverses every edge exactly once. One of
the most basic theorems of graph theory states that an Euler path is possible if and only if there are either zero or two nodes of odd degree. Looking at figure 2b, we see that there are four nodes
of odd degree, hence an Euler path is not possible, and the answer to the question above is no. The board in figure 2 does not support a super-sweep.
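The degree-counting argument is easy to mechanize. The sketch below applies the Euler path criterion to two small hypothetical edge lists (not the figure 2b graph, which is not reproduced here): a square with one diagonal has exactly two odd-degree nodes, so an Euler path exists, while adding the second diagonal gives four odd-degree nodes and no Euler path.

```python
from collections import Counter

def euler_path_possible(edges):
    """An Euler path exists in a connected graph iff the number of
    odd-degree nodes is 0 or 2."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    odd = sum(1 for d in deg.values() if d % 2)
    return odd in (0, 2)

square_plus_diag = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # two odd nodes
both_diagonals = square_plus_diag + [(1, 3)]                 # four odd nodes
print(euler_path_possible(square_plus_diag))  # True
print(euler_path_possible(both_diagonals))    # False
```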
Using the Euler path theorem we can eliminate many boards from having super-sweeps. Hexagonal or star-shaped boards have hexagonal symmetry, but cannot support super-sweeps because they have six
nodes of odd degree. If we restrict ourselves to board shapes that are convex polygons (no 240° or 300° corners) things are particularly simple. Using elementary geometry we can prove the following:
For a convex board on a triangular lattice, only three board shapes can have a super-sweep:
1. Triangles, and only equilateral triangles are possible on a triangular lattice. Because every node in the super-sweep graph has even degree, the super-sweep always begins and ends at the same
board location.
2. Parallelograms, which have alternating 60° and 120° corners as you go around the circumference. The super-sweep must begin at one 120° corner and end at the other.
3. Trapezoids, which have two 60° corners followed by two 120° corners. The super-sweep must begin at one 120° corner and end at the other. These boards can also be considered as equilateral
triangles with one corner cut off.
A rhombus is a special case of a parallelogram where all four sides have the same length n, we'll call this board Rhombus(n). By rotating them 60°, they become diamond-shaped and could also be called
Diamond boards. In the remainder of this paper, we'll go into some of the remarkable properties of these boards. Figure 3 shows the first few Rhombus board super-sweeps.
Figure 3 Super-sweeps on Rhombus(3), Rhombus(5) and Rhombus(7).
These sweeps have lengths of 5, 16, and 33, respectively, and end at the lower right corner.
The length of this sweep, for odd n, is (3n+1)(n−1)/4. Since the total number of holes in the board is n², this sweep removes nearly 3/4 of the pegs on the full board with a single move. For n = 3, 5, 7, 9, 11, 13, ..., the sequence of sweep lengths is 5, 16, 33, 56, 85, 120, ..., called the "Rhombic matchstick sequence" [3] because it is the number of matchsticks needed to construct a rhombus (with (n−1)/2 matchsticks on a side).
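The sweep-length formula and its match with the matchstick sequence can be checked directly, for example:

```python
def sweep_length(n):
    """Super-sweep length on Rhombus(n) for odd n: (3n+1)(n-1)/4."""
    assert n % 2 == 1
    return (3 * n + 1) * (n - 1) // 4

lengths = [sweep_length(n) for n in range(3, 15, 2)]
print(lengths)   # [5, 16, 33, 56, 85, 120], the rhombic matchstick numbers

# The sweep clears nearly 3/4 of the n*n holes in one move:
print(sweep_length(101) / 101**2)   # approaches 0.75 as n grows
```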
Rhombus(6)
This 36-hole board has several unusual properties. It is also of a reasonable size for playing by hand, and for exhaustive computer searches. This board is equivalent to the 6x6 square board on a square lattice, with the addition of moves along one diagonal. It is therefore possible to play this board using a chess or go board, although this is not recommended because the symmetry of the board is hard to see. For playing by hand I recommend using part of a Chinese Checkers board. The ideal "board", however, is a computer [4], because we can easily take back moves and record move sequences.
The board Rhombus(6) is a null-class board. For a definition of this term, see [2] or [5]; the important concept is that only on null-class boards can complement problems be solvable. Rhombus(6) is the smallest rhombus board on which a complement problem is solvable, and in fact all complement problems are easily solvable.
Figure 4 Rhombus(6) hole coordinates.
Potential finishing locations for a problem including a maximal sweep (16-sweep) are shown in blue.
Notation: moves are denoted by the starting and ending coordinates of the jumps, separated by dashes.
The longest sweep geometrically possible on Rhombus(6) has length 16 (as in figure 3). Can 16-sweeps occur in peg solitaire problems on this board? Here we aren't limiting the 16-sweep to be the last
move, but leave open the possibility that it could happen at any move. We note that there are only a few places where the 16-sweep can begin and end. It can go from a1 to e5, b2 to f6, b1 to f5, or
any symmetric variant of these. The 16-sweep can be the final move, or it can be the second to the last move, for example the 16-sweep can go from a1 to e5, followed by f6-d4, or e6-e4. The 16-sweep
can even be the 3rd to the last move, from a1 to e5, followed by e6-e4 and f5-d3.
In figure 4, all potential finishing locations for solutions containing a 16-sweep are shown in blue. Not all are feasible: the finishing 16-sweep from b1 to f5 is in fact impossible to reach from any starting vacancy, as discovered by exhaustive computer search. However, all other configurations of the 16-sweep can be reached. In fact, starting from any vacancy on the board, there is a solution with a maximal sweep (16-sweep) that finishes with one peg. This board is the only one we know of, on a square or triangular lattice, with this amazing property.
It is also possible for solutions to complement problems to include maximal sweeps. This also is not known to occur for any other board (although some larger rhombus boards have this property). Here
are four essentially different complement problems that can be solved using a maximal sweep:
1. e5 complement: solve with the last move a 16-sweep.
2. d4 complement: solve with the second to last move a 16-sweep.
3. e4 complement: solve with the second to last move a 16-sweep.
4. d3 complement: solve with the third to last move a 16-sweep.
These problems are most easily solved by attempting to play backward, or equivalently by playing forward from the complement of the board position before the 16-sweep. All four problems make good
challenges to solve by hand; they are easy to solve using a computer (provided you don't try to solve them by playing forward). In figure 5 we show a solution to problem #4. This solution is
interesting in that after the third move, the board position is symmetric about the yellow line. After that, moves are done in pairs, or are themselves symmetric, preserving the symmetry up until the last two moves.
Figure 5 A symmetric solution to the d3 complement (problem #4).
This solution has 17 moves.
Note that more than one move is sometimes shown between board snapshots.
Any peg solitaire problem on this board begins with 35 pegs and finishes with one, so a solution consists of exactly 34 jumps. The number of moves, however, can be less than this, and an interesting
question is to find the solution with the least number of moves. This is different from finding solutions with maximal sweeps, and answers are more difficult to obtain. The minimal solution length
can usually be found only using exhaustive computer search, which can take many hours on this board, just for one problem.
If we take into account all possible starting and finishing locations for a peg solitaire problem on this board, we find there are 120 distinct problems. I have only solved the complement problems,
for there are only 12 of them. Of these 12, I have found that 7 can be solved in a minimum of 13 moves, with the rest requiring 14 moves (see figure 7 for all results). A sample 13-move solution is
shown in figure 6.
Figure 6 A 13-move solution to the c3-complement.
The last 4 moves originate from the 4 corners, an unusual property for a 13-move solution.
Note that more than one move is sometimes shown between board snapshots.
Using the "Merson region" analysis of GPJ #36 [1], it is possible to prove that some 13-move solutions are the shortest possible. In general, however, we rely on the exhaustive search to establish
the minimum. For much more information on minimal length solutions on Rhombus(6) and other boards see my web site [4].
36 is the first integer (aside from the trivial case 1) that is simultaneously a triangular number and a perfect square. This is reflected in the fact that Triangle(8) and Rhombus(6) both have 36
holes. Because of this, it is quite interesting to compare the properties of these two peg solitaire boards. Figure 7 shows the minimum length solution of a complement problem by color for each of
these boards. Note that Rhombus(6) in general supports slightly shorter solutions, with none requiring 15 moves. Determining the coloring for figure 7 for both boards required a lot of CPU time, over
1 week on a 1 GHz PC.
Figure 7 The length of the shortest solution on Triangle(8) and Rhombus(6).
│ Color │ Length of the shortest solution to the complement problem │
│ Red │ 13 Moves │
│ Blue │ 14 Moves │
│ Yellow │ 15 Moves │
Colors indicate the length of the shortest solution to a complement problem starting and finishing at that location. The Rhombus(6) board has been rotated to its "diamond" configuration to show the symmetry.
Maximal Sweeps on Rhombus(6i)
In GPJ #36 [1], we found that maximal sweeps on Triangle(6) and Triangle(8) could occur in peg solitaire games on these boards. Although a formal proof has not been found, computational results
suggest that maximal sweeps cannot be reached on any larger triangular boards.
We have just shown that a maximal sweep can be reached by a peg solitaire problem on Rhombus(6), but what about larger rhombus boards? One might suspect that maximal sweeps would eventually become
unreachable, as with the triangular boards. Somewhat remarkably, however, this is not the case.
Theorem: For any i > 0, there exists a solution to a peg solitaire problem on Rhombus(6i) where the last move is a maximal sweep of length (9i−1)(3i−1).
To prove this, it suffices to show that the complement of some maximal sweep pattern can be reduced to one peg. The sweep pattern we choose begins at the upper left corner, and ends one hole up and
left from the lower right corner. When we take the complement, this results in the board position of figure 8.
Figure 8 The complement of the sweep pattern on Rhombus(6i).
Note that the board position is symmetric about the yellow line.
The case i = 1, or Rhombus(6), has already been solved (problem #1 in the previous section). We will use induction to prove the general case, starting with i = 2. The solution proceeds through three Phases (A, B and C). We apply the moves of Phase A once, then B (i−2) times, followed by Phase C once.
Figure 9 The moves of Phase A.
Phase A, shown in figure 9, consists of 8 moves that clear out the leftmost 6 columns of the board and the upper 4 rows, except for the last (rightmost) column. If the board is larger than the
Rhombus(12) shown in figure 9, the long multi-jump moves must be extended accordingly. We are left with a very similar board pattern as the one we started with, just reduced in size. Note, however,
that the final board position is no longer symmetric.
Figure 10 The moves of Phase B.
The moves here are shown on Rhombus(15) to save space.
Phase B will actually be applied to boards at least as large as Rhombus(18).
Phase B, shown in figure 10, is 9 moves that reduce the sweep pattern by 6 rows and 6 columns. As before if the board is larger than shown the multi-jump moves are extended. After applying Phase B j
times the leftmost 6j+6 columns will be empty, and the topmost 6j+4 rows will also be empty, except for a trail of pegs in the last column that will be taken by the final move.
Figure 11 The moves of Phase C.
Finally Phase C is executed to take the board down to one peg in the upper right corner. The 9 moves of this phase are shown in figure 11. Putting together all three phases, it only takes 9i−1 moves to clear the sweep pattern of figure 8. To find the solution ending in the maximal sweep, we begin from a vacancy at the upper right corner, and execute the jumps of Phases A, B and C in exactly the reverse order. Thus we execute the jumps in Phase C reversed, followed by (i−2) Phase B's, and then Phase A, all in reverse order. The long sweeps become individual jumps, and the solution ending in the maximal sweep has significantly more than 9i−1 moves.
Only in Phase C is the fact that the side is divisible by 6 needed. For only on such boards can the final peg finish in the upper right hand corner.
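As a quick sanity check on the arithmetic above, the theorem's sweep length and the phase move count can be tabulated (the phase decomposition applies for i ≥ 2; i = 1 is the separately solved Rhombus(6) case):

```python
def max_sweep_length(i):
    """Final maximal sweep length on Rhombus(6i): (9i-1)(3i-1)."""
    return (9 * i - 1) * (3 * i - 1)

def clearing_moves(i):
    """Moves to clear the figure 8 pattern for i >= 2:
    Phase A (8 moves) + (i-2) applications of Phase B (9 each) + Phase C (9)."""
    return 8 + 9 * (i - 2) + 9   # simplifies to 9i - 1

print(max_sweep_length(1))   # 16, the Rhombus(6) maximal sweep
print(max_sweep_length(2))   # 85, the final sweep of the reversed Rhombus(12) solution
print(clearing_moves(2))     # 17 moves for Phases A + C on Rhombus(12)
```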
Figure 12 Putting Phases A and C together for a solution on Rhombus(12).
In figure 12 on Rhombus(12), we show the complete solution that reduces the complement of the sweep pattern to one peg. In this case we need only execute Phases A and C, and in figure 12 the two
phases have been interleaved and are no longer visually separate. This reduces the number of diagrams to show the solution, but the inductive nature of the solution becomes much harder to see. It is
unfortunate that a Chinese Checkers board is too small to play this solution on. If you can find a large enough board, it is interesting to play the moves in this solution in exactly the reverse
order, and watch as the sweep position magically appears. The final sweep in the reversed solution has length 85.
An integer that is simultaneously a perfect square and a triangular number is called a square triangular number [6, 7]. As was the case with Rhombus(6) and Triangle(8), which both have 36 holes, each
square triangular number corresponds to a rhombus and triangular board having the same number of holes. If the side of the rhombus board is divisible by 6, and the side of the triangular board by 12,
then both have long sweep finishes by the above analysis and GPJ #36 [1].
After the 36-hole boards, the next time this occurs is with Rhombus(204) and Triangle(288), which both have 41,616 holes. By our inductive arguments, we can construct solutions to peg solitaire
problems on these boards that finish with sweeps of length 30,805 and 30,793, respectively. The next larger such boards are Rhombus(235,416) and Triangle(332,928), boards with over 55 billion holes,
that can finish with sweeps over 41 billion in length.
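A short sketch can reproduce this hunt: square triangular numbers obey the recurrence N(k+1) = 34·N(k) − N(k−1) + 2, and each yields a rhombus side (its square root) and a triangle side, which can then be tested against the divisibility conditions above.

```python
import math

def square_triangular():
    """Yield square triangular numbers: 1, 36, 1225, 41616, ..."""
    a, b = 0, 1
    while True:
        yield b
        a, b = b, 34 * b - a + 2

pairs = []
gen = square_triangular()
for _ in range(10):
    n = next(gen)
    rhombus = math.isqrt(n)                      # n = rhombus**2
    triangle = (math.isqrt(8 * n + 1) - 1) // 2  # n = triangle*(triangle+1)/2
    # Keep only pairs meeting the divisibility conditions in the text.
    # (The 36-hole pair is excluded here: Triangle(8)'s side is not
    # divisible by 12; that case is handled separately above.)
    if rhombus % 6 == 0 and triangle % 12 == 0:
        pairs.append((n, rhombus, triangle))

print(pairs)  # [(41616, 204, 288), (55420693056, 235416, 332928)]
```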
Conclusions
Peg solitaire on a triangular lattice is a fascinating game, and one that has not been studied to the extent that "normal" (square lattice) peg solitaire has. This paper and GPJ #36 [1] have shown that triangular lattice peg solitaire is well suited for inductive arguments. Inductive arguments have also been used to create an algorithm for triangular boards of any size that can reduce any (solvable) single vacancy down to one peg [8]. I believe inductive arguments are possible for square lattice boards, but the reduction from 6 jump directions to 4 seems to make such arguments more difficult.
While the results on this page on Rhombus(6) were obtained by exhaustive computer search, the long sweep finishes on Rhombus(6i) were found by hand. The computer was still of significant help, but
only in providing an interface to play the game on the large boards required.
There remain many unanswered questions regarding rhombus and triangular boards. Can maximal sweeps be reached on peg solitaire problems on Rhombus(2i)? I have been able to answer this question in the
affirmative for 2≤i≤9. It would also be nice to prove that maximal sweeps are not reachable in peg solitaire problems on triangular boards larger than Triangle(8) (or find a counterexample).
[1] George Bell, Triangular Peg Solitaire Unlimited: Issue #36 of The Games and Puzzles Journal, Nov-Dec 2004.
[2] John H. Conway, Elwyn R. Berlekamp, Richard K. Guy, Winning Ways for Your Mathematical Plays, Volume 4, AK Peters, 2004 (second edition).
[3] N. J. A. Sloane, The On-Line Encyclopedia of Integer Sequences. Rhombic matchstick numbers are sequence A045944
[4] George Bell's Triangular Peg Solitaire Page. Note: should this web address change in the future, I suggest doing a search with keywords: "George Bell" Triangular Peg Solitaire.
[5] John D. Beasley, The Ins & Outs of Peg Solitaire, Oxford Univ. Press, 1992 (paperback edition), ISBN 0-19-286145-X.
[6] Eric W. Weisstein. "Square Triangular Number." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/SquareTriangularNumber.html
[7] N. J. A. Sloane, The On-Line Encyclopedia of Integer Sequences. Square Triangular Numbers are sequence A001110
[8] George I. Bell, Solving Triangular Peg Solitaire, submitted to The Journal of Recreational Mathematics, Feb 2005.
[Note: George Bell's Triangular Peg Solitaire Page, The On-Line Encyclopedia of Integer Sequences and The Journal of Recreational Mathematics
migrated to new sites since the publication of this article and the links have accordingly been updated (June 2013).]
Copyright © 2005 G. P. Jelliss and contributing authors.
Partial results may be quoted provided proper acknowledgement of the source is given.
Related Articles
The Tshepo study was the first clinical trial to evaluate outcomes of adults receiving nevirapine (NVP)-based versus efavirenz (EFV)-based combination antiretroviral therapy (cART) in Botswana. This
was a 3 year study (n=650) comparing the efficacy and tolerability of various first-line cART regimens, stratified by baseline CD4+: <200 (low) vs. 201-350 (high). Using targeted maximum likelihood
estimation (TMLE), we retrospectively evaluated the causal effect of assigned NNRTI on time to virologic failure or death [intent-to-treat (ITT)] and time to minimum of virologic failure, death, or
treatment-modifying toxicity [time to loss of virological response (TLOVR)] by sex and baseline CD4+. Sex significantly modified the effect of EFV versus NVP for both the ITT and TLOVR outcomes, with risk differences in the probability of survival of males versus females of approximately 6% (p=0.015) and 12% (p=0.001), respectively. Baseline CD4+ also modified the effect of EFV versus
NVP for the TLOVR outcome, with a mean difference in survival probability of approximately 12% (p=0.023) in the high versus low CD4+ cell count group. TMLE appears to be an efficient technique that
allows for the clinically meaningful delineation and interpretation of the causal effect of NNRTI treatment and effect modification by sex and baseline CD4+ cell count strata in this study.
EFV-treated women and NVP-treated men had more favorable cART outcomes. In addition, adults initiating EFV-based cART at higher baseline CD4+ cell count values had more favorable outcomes compared to
those initiating NVP-based cART.
PMCID: PMC3423643 PMID: 22309114
When a large number of candidate variables are present, a dimension reduction procedure is usually conducted to reduce the variable space before the subsequent analysis is carried out. The goal of
dimension reduction is to find a list of candidate genes with a more operable length ideally including all the relevant genes. Leaving many uninformative genes in the analysis can lead to biased
estimates and reduced power. Therefore, dimension reduction is often considered a necessary predecessor of the analysis because it can not only reduce the cost of handling numerous variables, but
also has the potential to improve the performance of the downstream analysis algorithms.
We propose a TMLE-VIM dimension reduction procedure based on the variable importance measurement (VIM) in the framework of targeted maximum likelihood estimation (TMLE). TMLE is an extension of maximum likelihood estimation targeting the parameter of interest. TMLE-VIM is a two-stage procedure. The first stage resorts to a machine learning algorithm, and the second stage improves the first-stage estimation with respect to the parameter of interest.
We demonstrate with simulations and data analyses that our approach not only enjoys the prediction power of machine learning algorithms, but also accounts for the correlation structures among
variables and therefore produces better variable rankings. When utilized in dimension reduction, TMLE-VIM can help to obtain the shortest possible list with the most truly associated variables.
PMCID: PMC3166941 PMID: 21849016
Collaborative double robust targeted maximum likelihood estimators represent a fundamental further advance over standard targeted maximum likelihood estimators of a pathwise differentiable parameter
of a data generating distribution in a semiparametric model, introduced in van der Laan, Rubin (2006). The targeted maximum likelihood approach involves fluctuating an initial estimate of a relevant
factor (Q) of the density of the observed data, in order to make a bias/variance tradeoff targeted towards the parameter of interest. The fluctuation involves estimation of a nuisance parameter
portion of the likelihood, g. TMLE has been shown to be consistent and asymptotically normally distributed (CAN) under regularity conditions, when either one of these two factors of the likelihood of
the data is correctly specified, and it is semiparametric efficient if both are correctly specified.
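For orientation only, here is a minimal illustrative sketch of the (non-collaborative) targeting step for the average treatment effect with a binary outcome, on simulated data with a known treatment mechanism g. This is a didactic toy under stated assumptions, not the estimator developed in these papers: the initial outcome fit Q0 is deliberately misspecified, and a single fluctuation parameter epsilon is solved from the efficient-score equation via the clever covariate H.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
W = rng.normal(size=n)
expit = lambda x: 1.0 / (1.0 + np.exp(-x))
logit = lambda p: np.log(p / (1.0 - p))

g = expit(0.5 * W)                       # known treatment mechanism P(A=1|W)
A = rng.binomial(1, g)
Y = rng.binomial(1, expit(0.8 * A + W))  # true outcome regression

def Q0(a, w):
    """Deliberately misspecified initial estimate of P(Y=1|A,W)."""
    return np.clip(0.5 + 0.1 * a + 0.05 * w, 0.01, 0.99)

# Clever covariate H(A,W); fluctuation: logit Q_eps = logit Q0 + eps*H.
H = A / g - (1 - A) / (1 - g)
eps = 0.0
for _ in range(25):                      # Newton solve of the score equation
    Qe = expit(logit(Q0(A, W)) + eps * H)
    score = np.sum(H * (Y - Qe))
    eps -= score / (-np.sum(H**2 * Qe * (1 - Qe)))

# Targeted plug-in estimate of the ATE E[Q(1,W) - Q(0,W)].
Q1 = expit(logit(Q0(1, W)) + eps / g)
Q0w = expit(logit(Q0(0, W)) - eps / (1 - g))
ate = float(np.mean(Q1 - Q0w))
truth = float(np.mean(expit(0.8 + W) - expit(W)))
print(round(ate, 3), round(truth, 3))    # close, despite the bad initial Q
```

With g correctly specified, the targeted plug-in estimate recovers the true ATE despite the poor initial Q0, illustrating the double-robustness property discussed above.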
In this article we provide a template for applying collaborative targeted maximum likelihood estimation (C-TMLE) to the estimation of pathwise differentiable parameters in semi-parametric models. The
procedure creates a sequence of candidate targeted maximum likelihood estimators based on an initial estimate for Q coupled with a succession of increasingly non-parametric estimates for g. In a
departure from current state of the art nuisance parameter estimation, C-TMLE estimates of g are constructed based on a loss function for the targeted maximum likelihood estimator of the relevant
factor Q that uses the nuisance parameter to carry out the fluctuation, instead of a loss function for the nuisance parameter itself. Likelihood-based cross-validation is used to select the best
estimator among all candidate TMLE estimators of Q0 in this sequence. A penalized-likelihood loss function for Q is suggested when the parameter of interest is borderline-identifiable.
We present theoretical results for "collaborative double robustness," demonstrating that the collaborative targeted maximum likelihood estimator is CAN even when Q and g are both mis-specified, provided that g solves a specified score equation implied by the difference between Q and the true Q0. This marks an improvement over the current definition of double robustness in the estimating equation literature.
We also establish an asymptotic linearity theorem for the C-DR-TMLE of the target parameter, showing that the C-DR-TMLE is more adaptive to the truth, and, as a consequence, can even be super
efficient if the first stage density estimator does an excellent job itself with respect to the target parameter.
This research provides a template for targeted efficient and robust loss-based learning of a particular target feature of the probability distribution of the data within large (infinite dimensional)
semi-parametric models, while still providing statistical inference in terms of confidence intervals and p-values. This research also breaks with a taboo (e.g., in the propensity score literature in
the field of causal inference) on using the relevant part of likelihood to fine-tune the fitting of the nuisance parameter/censoring mechanism/treatment mechanism.
PMCID: PMC2898626 PMID: 20628637
asymptotic linearity; coarsening at random; causal effect; censored data; crossvalidation; collaborative double robust; double robust; efficient influence curve; estimating function; estimator
selection; influence curve; G-computation; locally efficient; loss-function; marginal structural model; maximum likelihood estimation; model selection; pathwise derivative; semiparametric model;
sieve; super efficiency; super-learning; targeted maximum likelihood estimation; targeted nuisance parameter estimator selection; variable importance
A concrete example of the collaborative double-robust targeted likelihood estimator (C-TMLE) introduced in a companion article in this issue is presented, and applied to the estimation of causal
effects and variable importance parameters in genomic data. The focus is on non-parametric estimation in a point treatment data structure. Simulations illustrate the performance of C-TMLE relative to
current competitors such as the augmented inverse probability of treatment weighted estimator that relies on an external non-collaborative estimator of the treatment mechanism, and inefficient
estimation procedures including propensity score matching and standard inverse probability of treatment weighting. C-TMLE is also applied to the estimation of the covariate-adjusted marginal effect
of individual HIV mutations on resistance to the anti-retroviral drug lopinavir. The influence curve of the C-TMLE is used to establish asymptotically valid statistical inference. The list of
mutations found to have a statistically significant association with resistance is in excellent agreement with mutation scores provided by the Stanford HIVdb mutation scores database.
PMCID: PMC3126668 PMID: 21731530
causal effect; cross-validation; collaborative double robust; double robust; efficient influence curve; penalized likelihood; penalization; estimator selection; locally efficient; maximum likelihood
estimation; model selection; super efficiency; super learning; targeted maximum likelihood estimation; targeted nuisance parameter estimator selection; variable importance
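For contrast with the collaborative estimator, the standard (non-collaborative) inverse probability of treatment weighting estimator mentioned above can be sketched on simulated point-treatment data; the data-generating model and all numbers below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated point-treatment data: confounder W, treatment A, outcome Y.
W = rng.normal(size=n)
g = 1 / (1 + np.exp(-0.5 * W))          # true treatment mechanism P(A=1|W)
A = rng.binomial(1, g)
Y = 1.0 * A + W + rng.normal(size=n)    # true causal effect of A is 1.0

# IPTW estimator of E[Y(1)] - E[Y(0)] using the (here known) treatment
# mechanism; in practice g would itself be estimated from the data.
psi_iptw = np.mean(A * Y / g) - np.mean((1 - A) * Y / (1 - g))
```

The estimator is unbiased when g is correct but becomes unstable as g(W) approaches 0 or 1, the positivity issue that motivates the collaborative approach.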
There is an active debate in the literature on censored data about the relative performance of model based maximum likelihood estimators, IPCW-estimators, and a variety of double robust
semiparametric efficient estimators. Kang and Schafer (2007) demonstrate the fragility of double robust and IPCW-estimators in a simulation study with positivity violations. They focus on a simple
missing data problem with covariates where one desires to estimate the mean of an outcome that is subject to missingness. Responses by Robins, et al. (2007), Tsiatis and Davidian (2007), Tan (2007)
and Ridgeway and McCaffrey (2007) further explore the challenges faced by double robust estimators and offer suggestions for improving their stability. In this article, we join the debate by
presenting targeted maximum likelihood estimators (TMLEs). We demonstrate that TMLEs whose parametric submodel respects the global bounds on the continuous outcomes are especially suitable for dealing with positivity violations because, in addition to being double robust and semiparametric efficient, they are substitution estimators. We
demonstrate the practical performance of TMLEs relative to other estimators in the simulations designed by Kang and Schafer (2007) and in modified simulations with even greater estimation challenges.
PMCID: PMC3173607 PMID: 21931570
censored data; collaborative double robustness; collaborative targeted maximum likelihood estimation; double robust; estimator selection; inverse probability of censoring weighting; locally efficient
estimation; maximum likelihood estimation; semiparametric model; targeted maximum likelihood estimation; targeted minimum loss based estimation; targeted nuisance parameter estimator selection
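A minimal sketch of the key TMLE ingredient discussed above — a logistic fluctuation submodel that keeps a bounded continuous outcome inside its range — on a simulated missing-data mean problem. The simulation, the known missingness mechanism, and the linear initial fit are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit, logit

rng = np.random.default_rng(1)
n = 4000

# Simulated missing-data problem: covariate W, missingness indicator D,
# continuous outcome Y observed only when D = 1. Target: E[Y] = 2.0.
W = rng.normal(size=n)
g = expit(1.0 + 0.8 * W)                       # P(D=1 | W), assumed known here
D = rng.binomial(1, g)
Y = 2.0 + W + rng.normal(scale=0.5, size=n)

# 1. Rescale Y to [0, 1] so the logistic fluctuation respects the bounds.
a, b = Y[D == 1].min(), Y[D == 1].max()
Ys = (Y - a) / (b - a)

# 2. Initial (linear) outcome regression fit on the observed subjects,
#    truncated into (0, 1) before taking logits.
X = np.column_stack([np.ones(n), W])
beta = np.linalg.lstsq(X[D == 1], Ys[D == 1], rcond=None)[0]
Q0 = np.clip(X @ beta, 1e-4, 1 - 1e-4)

# 3. One-dimensional logistic fluctuation with clever covariate 1/g,
#    keeping updated predictions inside [0, 1] by construction.
obs = D == 1
def nll(eps):
    Q = np.clip(expit(logit(Q0[obs]) + eps / g[obs]), 1e-10, 1 - 1e-10)
    return -np.sum(Ys[obs] * np.log(Q) + (1 - Ys[obs]) * np.log(1 - Q))

eps = minimize_scalar(nll, bounds=(-5, 5), method="bounded").x
Qstar = expit(logit(Q0) + eps / g)

# 4. Substitution estimator, mapped back to the original outcome scale.
psi_tmle = a + (b - a) * np.mean(Qstar)
```

The naive mean of the observed outcomes is biased upward here because missingness is informative, while the bounded-fluctuation TMLE recovers the target.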
We consider two-stage sampling designs, including so-called nested case control studies, where one takes a random sample from a target population and completes measurements on each subject in the
first stage. The second stage involves drawing a subsample from the original sample, collecting additional data on the subsample. This data structure can be viewed as a missing data structure on the
full-data structure collected in the second stage of the study. Methods for analyzing two-stage designs include parametric maximum likelihood estimation and estimating equation methodology. We
propose an inverse probability of censoring weighted targeted maximum likelihood estimator (IPCW-TMLE) in two-stage sampling designs and present simulation studies featuring this estimator.
PMCID: PMC3083136 PMID: 21556285
two-stage designs; targeted maximum likelihood estimators; nested case control studies; double robust estimation
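The inverse-probability-of-censoring weighting idea behind the IPCW-TMLE can be illustrated with a toy two-stage design, here reduced to estimating a simple mean; the full estimator instead reweights the TMLE loss, and everything below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000

# Stage 1: cheap covariate V measured on everyone.
V = rng.binomial(1, 0.3, size=n)

# Stage 2: expensive measurement X collected on a subsample drawn with
# known inclusion probabilities that depend on V (oversample V = 1).
pi = np.where(V == 1, 0.8, 0.2)
S = rng.binomial(1, pi)
X = 1.5 + 2.0 * V + rng.normal(size=n)   # true E[X] = 1.5 + 2.0*0.3 = 2.1
# (X is simulated for everyone, but only rows with S = 1 enter the estimate.)

# IPCW weights: each sampled subject stands in for 1/pi subjects from the
# full first-stage cohort.
w = S / pi
psi_naive = X[S == 1].mean()             # biased toward V = 1 subjects
psi_ipcw = np.sum(w * X) / np.sum(w)     # normalized (Hajek-style) IPCW mean
```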
In longitudinal and repeated measures data analysis, often the goal is to determine the effect of a treatment or exposure on a particular outcome (e.g., disease progression). We consider a semiparametric repeated measures regression model, where the parametric component models the effect of the variable of interest and any modification by other covariates. The expectation of this
parametric component over the other covariates is a measure of variable importance. Here, we present a targeted maximum likelihood estimator of the finite dimensional regression parameter, which is
easily estimated using standard software for generalized estimating equations.
The targeted maximum likelihood method provides double robust and locally efficient estimates of the variable importance parameters and inference based on the influence curve. We demonstrate these
properties through simulation under correct and incorrect model specification, and apply our method in practice to estimating the activity of transcription factor (TF) over cell cycle in yeast. We
specifically target the importance of SWI4, SWI6, MBP1, MCM1, ACE2, FKH2, NDD1, and SWI5.
The semiparametric model allows us to determine the importance of a TF at specific time points by specifying time indicators as potential effect modifiers of the TF. Our results are promising,
showing significant importance trends during the expected time periods. This methodology can also be used as a variable importance analysis tool to assess the effect of a large number of variables
such as gene expressions or single nucleotide polymorphisms.
PMCID: PMC3122882 PMID: 21291412
targeted maximum likelihood; semiparametric; repeated measures; longitudinal; transcription factors
Covariate adjustment using linear models for continuous outcomes in randomized trials has been shown to increase efficiency and power over the unadjusted method in estimating the marginal effect of
treatment. However, for binary outcomes, investigators generally rely on the unadjusted estimate as the literature indicates that covariate-adjusted estimates based on the logistic regression models
are less efficient. The crucial step that has been missing when adjusting for covariates is that one must integrate/average the adjusted estimate over those covariates in order to obtain the marginal
effect. We apply the method of targeted maximum likelihood estimation (tMLE) to obtain estimators for the marginal effect using covariate adjustment for binary outcomes. We show that the covariate
adjustment in randomized trials using the logistic regression models can be mapped, by averaging over the covariate(s), to obtain a fully robust and efficient estimator of the marginal effect, which
equals a targeted maximum likelihood estimator. This tMLE is obtained by simply adding a clever covariate to a fixed initial regression. We present simulation studies that demonstrate that this tMLE
increases efficiency and power over the unadjusted method, particularly for smaller sample sizes, even when the regression model is mis-specified.
PMCID: PMC2857590 PMID: 18985634
clinical trials; efficiency; covariate adjustment; variable selection
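The clever-covariate construction described above can be sketched on simulated trial data. The data-generating model is an illustrative assumption; note that with a known randomization probability of 0.5, the fluctuation coefficient is essentially zero whenever the initial logistic fit already contains an intercept and a treatment term, so the marginalization (averaging) step does the real work:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit, logit

rng = np.random.default_rng(3)
n = 2000

# Simulated two-arm trial: covariate W, randomized treatment A with
# P(A=1) = 0.5, binary outcome Y. Target: E[Y(1)] - E[Y(0)].
W = rng.normal(size=n)
A = rng.binomial(1, 0.5, size=n)
Y = rng.binomial(1, expit(-0.5 + 1.0 * A + 0.8 * W))

def logistic_fit(X, y, iters=30):
    # Plain Newton-Raphson for an unregularized logistic regression.
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = expit(X @ b)
        b += np.linalg.solve((X.T * (p * (1 - p))) @ X, X.T @ (y - p))
    return b

X = np.column_stack([np.ones(n), A, W])
b = logistic_fit(X, Y)

# Clever covariate H(A) = A/g - (1-A)/(1-g); g = 0.5 known by design.
g = 0.5
H = A / g - (1 - A) / (1 - g)
Q0 = expit(X @ b)

def nll(eps):
    Q = expit(logit(Q0) + eps * H)
    return -np.sum(Y * np.log(Q) + (1 - Y) * np.log(1 - Q))

eps = minimize_scalar(nll, bounds=(-1, 1), method="bounded").x

# Marginalization: average the updated predictions under A=1 and A=0.
X1 = np.column_stack([np.ones(n), np.ones(n), W])
X0 = np.column_stack([np.ones(n), np.zeros(n), W])
Q1 = expit(X1 @ b + eps / g)
Q0a = expit(X0 @ b - eps / (1 - g))
psi_tmle = np.mean(Q1 - Q0a)
```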
The proportional odds model may serve as a useful alternative to the Cox proportional hazards model to study association between covariates and their survival functions in medical studies. In this
article, we study an extended proportional odds model that incorporates the so-called “external” time-varying covariates. In the extended model, regression parameters have a direct interpretation of
comparing survival functions, without specifying the baseline survival odds function. Semiparametric and maximum likelihood estimation procedures are proposed to estimate the extended model. Our
methods are demonstrated by Monte-Carlo simulations, and applied to a landmark randomized clinical trial of a short course Nevirapine (NVP) for mother-to-child transmission (MTCT) of human
immunodeficiency virus type-1 (HIV-1). Additional application includes analysis of the well-known Veterans Administration (VA) Lung Cancer Trial.
PMCID: PMC3420072 PMID: 22904583
Counting process; Estimating function; HIV/AIDS; Maximum likelihood estimation; Semiparametric model; Time-varying covariate
Nevirapine (NVP) and Efavirenz (EFV) have generally comparable clinical and virologic efficacy. However, data comparing NVP durability to EFV are imprecise. We analyzed cohort data to compare
durability of NVP to EFV among patients initiating ART in Mbabane, Swaziland. The primary outcome was poor regimen durability defined as any modification of NVP or EFV to the ART regimen.
Multivariate Cox proportional hazards models were employed to estimate the risk of poor regimen durability (all-cause) for the two regimens and also separately to estimate risk of drug-related
toxicity. We analyzed records for 769 patients initiating ART in Mbabane, Swaziland from March 2006 to December 2007. 30 patients (3.9%) changed their NVP or EFV-based regimen during follow up.
Cumulative incidence for poor regimen durability was 5.3% and 2.7% for NVP and EFV, respectively. Cumulative incidence for drug-related toxicity was 1.9% and 2.7% for NVP and EFV, respectively.
Burden of TB was high, and 14 (46.7%) modifications were due to patients substituting NVP after beginning TB treatment. Though the estimates were imprecise, use of NVP-based regimens seemed to be associated with higher risk of modifications compared to use of EFV-based regimens (HR 2.03, 95% CI 0.58–7.05), and NVP-based regimens had a small advantage over EFV-based regimens with regard to toxicity-related modifications (HR 0.87, 95% CI 0.26–2.90). Due to the high burden of TB and a significant proportion of patients changing their ART regimen after starting TB treatment, use of EFV as the preferred NNRTI over NVP in high TB endemic settings may result in improved first-line regimen tolerance. Further studies comparing the cost-effectiveness of delivering these two NNRTIs in
light of their different limitations are required.
PMCID: PMC3708322 PMID: 23847702
Tolerability; Toxicity; Efavirenz; Nevirapine; Antiretroviral therapy; Resource limited setting; Swaziland
In many semiparametric models that are parameterized by two types of parameters – a Euclidean parameter of interest and an infinite-dimensional nuisance parameter, the two parameters are bundled
together, i.e., the nuisance parameter is an unknown function that contains the parameter of interest as part of its argument. For example, in a linear regression model for censored survival data,
the unspecified error distribution function involves the regression coefficients. Motivated by developing an efficient estimating method for the regression parameters, we propose a general sieve
M-theorem for bundled parameters and apply the theorem to deriving the asymptotic theory for the sieve maximum likelihood estimation in the linear regression model for censored survival data. The
numerical implementation of the proposed estimating method can be achieved through the conventional gradient-based search algorithms such as the Newton-Raphson algorithm. We show that the proposed
estimator is consistent and asymptotically normal and achieves the semiparametric efficiency bound. Simulation studies demonstrate that the proposed method performs well in practical settings and
yields more efficient estimates than existing estimating equation based methods. Illustration with a real data example is also provided.
PMCID: PMC3890689 PMID: 24436500
Accelerated failure time model; B-spline; bundled parameters; efficient score function; semiparametric efficiency; sieve maximum likelihood estimation
National initiatives offering NNRTI-based combination antiretroviral therapy (cART) have expanded in sub-Saharan Africa (SSA). The Tshepo study is the first clinical trial evaluating the long-term
efficacy and tolerability of EFV- vs. NVP-based cART among adults in Botswana.
Three year randomized study (n = 650) using a 3×2×2 factorial design comparing efficacy and tolerability among: A: ZDV/3TC vs. ZDV/ddI vs. d4T/3TC; B: EFV vs. NVP, and C: Com-DOT vs. standard
adherence strategies. This manuscript focuses on comparison B.
There was no significant difference by assigned NNRTI in time to virologic failure with resistance (log-rank p = 0.14), NVP vs. EFV risk ratio (RR) = 1.54 [0.86-2.70]. Rates of virologic failure with
resistance were 9.6% NVP-treated [6.8-13.5] vs. 6.6% EFV-treated [4.2-10.0] at 3 years. Women receiving NVP-based cART trended towards higher virological failure rates when compared to EFV-treated
women, Holm-corrected log-rank p = 0.072, NVP vs. EFV RR = 2.22 [0.94-5.00]. 139 patients had 176 treatment modifying toxicities, with shorter time to event in NVP-treated vs. EFV-treated, RR = 1.85
[1.20-2.86], log-rank p = 0.0002.
Tshepo-treated patients had excellent overall immunologic and virologic outcomes, and no significant differences were observed by randomized NNRTI comparison. NVP-treated women trended towards higher
virologic failure with resistance compared to EFV-treated women. NVP-treated adults had higher treatment modifying toxicity rates when compared to those receiving EFV. NVP-based cART can continue to
be offered to women in SSA if routine safety monitoring chemistries are done and the potential risk of EFV-related teratogenicity is considered.
PMCID: PMC3087813 PMID: 20023437
HIV/AIDS; HAART; non-nucleoside reverse transcriptase inhibitors (NNRTI’s); nevirapine versus efavirenz; sub-Saharan Africa; randomized clinical trial
During the stavudine phase-out in developing countries, tenofovir is used to replace stavudine. However, knowledge of whether the frequency of renal injury differs between tenofovir/lamivudine/efavirenz and tenofovir/lamivudine/nevirapine is lacking.
This prospective study was conducted among HIV-infected patients whose NRTI backbone was switched from stavudine/lamivudine to tenofovir/lamivudine in an efavirenz-based (EFV group) or nevirapine-based regimen
(NVP group) after two years of an ongoing randomized trial. All patients were assessed for serum phosphorus, uric acid, creatinine, estimated glomerular filtration rate (eGFR), and urinalysis at time
of switching, 12 and 24 weeks.
Of 62 patients, 28 were in the EFV group and 34 in the NVP group. Baseline characteristics and eGFR did not differ between the two groups. At 12 weeks, mean ± SD phosphorus was 3.16 ± 0.53 vs. 2.81 ± 0.42 mg/dL in the EFV and NVP groups, respectively (P = 0.005), and the proportion of patients with proteinuria was 15% vs. 38% (P = 0.050). At 24 weeks, mean ± SD phosphorus and median (IQR) eGFR in the corresponding groups were 3.26 ± 0.78 vs. 2.84 ± 0.47 mg/dL (P = 0.011) and 110 (99-121) vs. 98 (83-112) mL/min (P = 0.008). In the NVP group, phosphorus (P = 0.007) and eGFR (P = 0.034) decreased between the time of switching and week 12. By multivariate analysis, receiving nevirapine, older age and low baseline serum phosphorus were associated with hypophosphatemia at 24
weeks (P < 0.05). Receiving nevirapine and low baseline eGFR were associated with lower eGFR at 24 weeks (P < 0.05).
The frequency of tenofovir-associated renal impairment was higher in patients receiving tenofovir/lamivudine/nevirapine compared to tenofovir/lamivudine/efavirenz. Further studies regarding
patho-physiology are warranted.
PMCID: PMC3020664 PMID: 20937122
Although tenofovir (TDF) is a common component of antiretroviral therapy (ART), recent evidence suggests inferior outcomes when it is combined with nevirapine (NVP).
We compared outcomes among patients initiating TDF+emtricitabine or lamivudine (XTC)+NVP, TDF+XTC+efavirenz (EFV), zidovudine (ZDV)+lamivudine (3TC)+NVP, and ZDV+3TC+EFV. We categorized drug exposure
by initial ART dispensation, by a time-varying analysis that accounted for drug substitutions, and by predominant exposure (>75% of drug dispensations) during an initial window period. Risks for
death and program failure were estimated using Cox proportional hazards models. All regimens were compared to ZDV+3TC+NVP.
Between July 2007 and November 2010, 18,866 treatment-naïve adults initiated ART: 18.2% on ZDV+3TC+NVP, 1.8% on ZDV+3TC+EFV, 36.2% on TDF+XTC+NVP, and 43.8% on TDF+XTC+EFV. When exposure was
categorized by initial prescription, patients on TDF+XTC+NVP (adjusted hazard ratio [AHR]:1.45; 95%CI:1.03–2.06) had a higher post-90 day mortality. TDF+XTC+NVP was also associated with an elevated
risk for mortality when exposure was categorized as time-varying (AHR:1.51; 95%CI:1.18–1.95) or by predominant exposure over the first 90 days (AHR:1.91, 95%CI:1.09–3.34). However, these findings
were not consistently observed across sensitivity analyses or when program failure was used as a secondary outcome.
TDF+XTC+NVP was associated with higher mortality when compared to ZDV+3TC+NVP, but not consistently across sensitivity analyses. These findings may be explained in part by inherent limitations to our
retrospective approach, including residual confounding. Further research is urgently needed to compare the effectiveness of ART regimens in use in resource-constrained settings.
PMCID: PMC3215810 PMID: 21857354
tenofovir; zidovudine; nevirapine; antiretroviral therapy; Africa
We consider a class of semiparametric normal transformation models for right censored bivariate failure times. Nonparametric hazard rate models are transformed to a standard normal model and a joint
normal distribution is assumed for the bivariate vector of transformed variates. A semiparametric maximum likelihood estimation procedure is developed for estimating the marginal survival
distribution and the pairwise correlation parameters. This produces an efficient estimator of the correlation parameter of the semiparametric normal transformation model, which characterizes the
bivariate dependence of bivariate survival outcomes. In addition, a simple positive-mass-redistribution algorithm can be used to implement the estimation procedures. Since the likelihood function
involves infinite-dimensional parameters, the empirical process theory is utilized to study the asymptotic properties of the proposed estimators, which are shown to be consistent, asymptotically
normal and semiparametric efficient. A simple estimator for the variance of the estimates is also derived. The finite sample performance is evaluated via extensive simulations.
PMCID: PMC2600666 PMID: 19079778
Asymptotic normality; Bivariate failure time; Consistency; Semiparametric efficiency; Semiparametric maximum likelihood estimate; Semiparametric normal transformation
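Ignoring censoring, the core of the normal transformation idea can be sketched with a normal-scores estimate of the dependence parameter; the copula simulation below is an illustrative assumption, and the paper's semiparametric MLE additionally handles right censoring:

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(4)
n = 2000

# Simulated bivariate failure times with normal-copula dependence rho = 0.6.
rho = 0.6
Z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
T1, T2 = np.exp(Z[:, 0]), np.exp(0.5 * Z[:, 1])   # arbitrary marginal transforms

# Normal-scores transform: map each margin through its empirical CDF, then
# through the standard normal quantile function.
def normal_scores(t):
    return norm.ppf(rankdata(t) / (len(t) + 1))

rho_hat = np.corrcoef(normal_scores(T1), normal_scores(T2))[0, 1]
```

Because the transform depends on the data only through ranks, the estimate is invariant to the (unknown) monotone marginal transformations, which is the appeal of the semiparametric normal transformation model.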
We define a new measure of variable importance of an exposure on a continuous outcome, accounting for potential confounders. The exposure features a reference level x0 with positive mass and a
continuum of other levels. For the purpose of estimating it, we fully develop the semi-parametric estimation methodology called targeted minimum loss estimation methodology (TMLE) [23, 22]. We cover
the whole spectrum of its theoretical study (convergence of the iterative procedure which is at the core of the TMLE methodology; consistency and asymptotic normality of the estimator), practical
implementation, simulation study and application to a genomic example that originally motivated this article. In the latter, the exposure X and response Y are, respectively, the DNA copy number and
expression level of a given gene in a cancer cell. Here, the reference level is x0 = 2, that is the expected DNA copy number in a normal cell. The confounder is a measure of the methylation of the
gene. The fact that there is no clear biological indication that X and Y can be interpreted as an exposure and a response, respectively, is not problematic.
PMCID: PMC3546832 PMID: 23336014
Variable importance measure; non-parametric estimation; targeted minimum loss estimation; robustness; asymptotics
Meta-analysis typically involves combining the estimates from independent studies in order to estimate a parameter of interest across a population of studies. However, outliers often occur even under
the random effects model. The presence of such outliers could substantially alter the conclusions in a meta-analysis. This paper proposes a methodology for identifying and, if desired, downweighting
studies that do not appear representative of the population they are thought to represent under the random effects model.
An outlier is taken as an observation (study result) with an inflated random effect variance. We used the likelihood ratio test statistic as an objective measure for determining whether observations
have inflated variance and are therefore considered outliers. A parametric bootstrap procedure was used to obtain the sampling distribution of the likelihood ratio test statistics and to account for
multiple testing. Our methods were applied to three illustrative and contrasting meta-analytic data sets.
For the three meta-analytic data sets our methods gave robust inferences when the identified outliers were downweighted.
The proposed methodology provides a means to identify and, if desired, downweight outliers in meta-analysis. It does not eliminate them from the analysis however and we consider the proposed approach
preferable to simply removing any or all apparently outlying results. We do not however propose that our methods in any way replace or diminish the standard random effects methodology that has proved
so useful, rather they are helpful when used in conjunction with the random effects model.
PMCID: PMC3050872 PMID: 21324180
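The procedure described above can be sketched as: fit the random effects model by maximum likelihood, compute a likelihood ratio statistic for an inflated-variance component for each study, and calibrate the maximum statistic by parametric bootstrap. The toy data and the small number of bootstrap replicates are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def nll(params, y, v, j=None):
    # Negative log-likelihood of the random effects model y_i ~ N(mu, v_i + tau^2);
    # when j is given, study j receives an extra outlier variance component.
    mu = params[0]
    var = v + params[1] ** 2
    if j is not None:
        var = var.copy()
        var[j] += params[2] ** 2
    return 0.5 * np.sum(np.log(var) + (y - mu) ** 2 / var)

def lrt_stat(y, v, j):
    # Likelihood ratio statistic for an inflated variance on study j.
    f0 = minimize(nll, [y.mean(), 0.1], args=(y, v)).fun
    f1 = minimize(nll, [y.mean(), 0.1, 0.1], args=(y, v, j)).fun
    return max(0.0, 2.0 * (f0 - f1))

# Toy data: five concordant study estimates plus one aberrant result.
y = np.array([0.10, 0.12, 0.08, 0.11, 0.09, 0.90])
v = np.full(6, 0.01)
stats = [lrt_stat(y, v, j) for j in range(len(y))]

# Parametric bootstrap of the maximum LRT over studies under the fitted
# null model, which also accounts for testing every study (multiplicity).
res0 = minimize(nll, [y.mean(), 0.1], args=(y, v))
mu0, sd0 = res0.x[0], np.sqrt(v + res0.x[1] ** 2)
rng = np.random.default_rng(5)
null_max = [max(lrt_stat(rng.normal(mu0, sd0), v, j) for j in range(len(y)))
            for _ in range(50)]  # bootstrap size kept small for illustration
p_outlier = np.mean(np.array(null_max) >= max(stats))
```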
Purpose of the study
Efavirenz (EFV) is still debated for its high rate of interruption due to adverse events, in particular central nervous system side effects (CNS-SE). The aim of the study was to determine whether better drug formulations, up to a single tablet regimen (STR) including EFV plus an NRTI backbone (tenofovir-emtricitabine), reduced the risk of interruption.
From the database of two reference centers, patients starting any cART regimen including EFV+2 NRTI or switching to EFV+2 NRTI for simplification after virological suppression were selected.
Probability of interruption by virological failure, side effects, CNS-SE and any cause were assessed with survival analysis and Cox proportional hazard model.
Summary of results
Overall, 533 patients starting an EFV-containing regimen from May 1998 to March 2012 were included (51.2% naïve, 48.8% switched). Patient characteristics: males 70.7%, median age 39 years, injecting drug use (IDU) 11.2%, median nadir CD4 194 cells/mm³, median CD4 at EFV start 305 cells/mm³. Overall, 38.7% started a BID regimen, 43.9% an OD regimen and 17.4% STR. At survival analysis, the overall proportion of EFV
interruption was 19.1% at 1 year and 33.0% at 3 years; interruptions for virological failure were 2.8% and 7.4%, and for toxicity 10.2% and 15.9%, respectively. CNS-SE accounted for about half of
interruptions for toxicity (5.7% and 8.0% at 1 and 3 years, respectively). Naïve patients had a higher risk of interruption as compared to switched patients: 37.7% vs. 28.0% at 3 years (p=0.06).
While no significant difference was observed comparing OD vs. BID regimens, starting with STR was associated with a significantly lower proportion of overall interruption at 3 years (17.1% vs. 36.6%, p
<0.01). No virological failure was observed with STR up to 3 years (0.0% vs. 8.9%, p=0.05); no difference in interruption by overall toxicity was observed, while a higher, though non-significant, frequency of interruption by CNS-SE (12.8% vs. 6.8%) was seen. STR also accounted for a lower proportion of interruption by patient wish, including low adherence (1.5% vs. 12.3%, p=0.01). At adjusted Cox
model, STR (HR: 0.44; 95% CI: 0.26–0.77) and male gender (HR: 0.71; 95% CI: 0.53–0.97) were associated with lower risk of EFV interruption and IDU with higher risk (HR: 1.64; 95% CI: 1.11–2.42).
In our experience, EFV co-formulated in STR was associated with lower virological failure and higher adherence, despite similar CNS toxicity, thus reducing the risk of treatment interruption.
PMCID: PMC3512459
Efavirenz (EFV) administration is still controversial because of its high rates of interruption, mainly related to central nervous system side effects (CNS-SE). The aim of the study was to determine whether a single tablet regimen (STR), as compared to bis-in-die (BID) or once-daily (OD) EFV formulations of ≥2 pills a day, reduced the risk of interruption.
Patients starting any cART regimen including EFV+2NRTIs or switching to EFV+2NRTIs for simplification after virological suppression were retrospectively selected. Incidence, probability and
prognostic factors of interruption by different causes were assessed by survival analysis and Cox regression model.
Overall, 553 patients starting EFV-containing regimens were included: 38.2% started BID regimen, 44.5% OD regimens ≥2 pills and 17.4% STR. The overall proportion of EFV interruption was 37.4% at 4
years; at the same time point, interruptions for virological failure and toxicity were 8.8% and 16.5% (8% for CNS-SE), respectively. Starting EFV co-formulated in STR was associated with lower
proportion of overall interruption at 4 years (17.1% vs. 40.6%, p<0.01). Only one virological failure was observed with STR up to 4 years (1.1% vs. 10.3% in non-STR, p=0.051). STR also accounted
for a lower proportion of interruption by patient decision (1.5% vs. 11.8%, p=0.01). No differences in interruption by overall toxicity or CNS-SE were observed. In multivariable analysis, STR and male gender were associated with lower risk of EFV interruption, while higher CD4 nadir and IDU were associated with higher risk.
In our experience, starting EFV co-formulated in STR was associated with lower virological failure and higher adherence, despite a similar proportion of CNS toxicity, thus reducing the risk of
treatment interruption.
PMCID: PMC3897945 PMID: 24418191
STR; Discontinuation; Combination antiretroviral therapy; Toxicity; Adherence
There is conflicting evidence and practice regarding the use of the non-nucleoside reverse transcriptase inhibitors (NNRTI) efavirenz (EFV) and nevirapine (NVP) in first-line antiretroviral therapy (ART).
We systematically reviewed virological outcomes in HIV-1 infected, treatment-naive patients on regimens containing EFV versus NVP from randomised trials and observational cohort studies. Data sources
included PubMed, Embase, the Cochrane Central Register of Controlled Trials and conference proceedings of the International AIDS Society and the Conference on Retroviruses and Opportunistic Infections, from 1996 to May 2013. Relative risks (RR) and 95% confidence intervals were synthesized using random-effects meta-analysis. Heterogeneity was assessed using the I2 statistic, and subgroup
analyses performed to assess the potential influence of study design, duration of follow up, location, and tuberculosis treatment. Sensitivity analyses explored the potential influence of different
dosages of NVP and different viral load thresholds.
Of 5011 citations retrieved, 38 reports of studies comprising 114 391 patients were included for review. EFV was significantly less likely than NVP to lead to virologic failure in both trials (RR
0.85 [0.73–0.99] I2=0%) and observational studies (RR 0.65 [0.59–0.71] I2=54%). EFV was also more likely than NVP to achieve virologic success, though only marginally significantly so, in both randomised
controlled trials (RR 1.04 [1.00–1.08] I2=0%) and observational studies (RR 1.06 [1.00–1.12] I2=68%).
EFV-based first line ART is significantly less likely to lead to virologic failure compared to NVP-based ART. This finding supports the use of EFV as the preferred NNRTI in first-line treatment
regimen for HIV treatment, particularly in resource limited settings.
PMCID: PMC3718822 PMID: 23894391
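The random-effects synthesis described above can be sketched as a DerSimonian-Laird pooling of study-level log relative risks; the three study results below are made up for illustration and are not taken from the review:

```python
import numpy as np

# Hypothetical study-level data: per-study relative risks and the variances
# of their logarithms.
yi = np.log(np.array([0.85, 0.70, 0.60]))   # log RR, EFV vs NVP failure
vi = np.array([0.02, 0.01, 0.03])

# DerSimonian-Laird random-effects synthesis.
wi_f = 1 / vi                                # fixed-effect weights
mu_f = np.sum(wi_f * yi) / np.sum(wi_f)
Q = np.sum(wi_f * (yi - mu_f) ** 2)          # Cochran's Q
df = len(yi) - 1
C = np.sum(wi_f) - np.sum(wi_f ** 2) / np.sum(wi_f)
tau2 = max(0.0, (Q - df) / C)                # between-study variance
wi = 1 / (vi + tau2)                         # random-effects weights

rr_pooled = np.exp(np.sum(wi * yi) / np.sum(wi))
i2 = max(0.0, (Q - df) / Q) * 100            # I^2 heterogeneity statistic
```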
The full likelihood approach in statistical analysis is regarded as the most efficient means for estimation and inference. For complex length-biased failure time data, computational algorithms and
theoretical properties are not readily available, especially when a likelihood function involves infinite-dimensional parameters. Relying on the invariance property of length-biased failure time data
under the semiparametric density ratio model, we present two likelihood approaches for the estimation and assessment of the difference between two survival distributions. The most efficient maximum
likelihood estimators are obtained by the EM algorithm and profile likelihood. We also provide a simple numerical method for estimation and inference based on conditional likelihood, which can be
generalized to k-arm settings. Unlike conventional survival data, the mean of the population failure times can be consistently estimated given right-censored length-biased data under mild regularity
conditions. To check the semiparametric density ratio model assumption, we use a test statistic based on the area between two survival distributions. Simulation studies confirm that the full
likelihood estimators are more efficient than the conditional likelihood estimators. We analyse an epidemiological study to illustrate the proposed methods.
PMCID: PMC3635710 PMID: 23843663
Conditional likelihood; Density ratio model; EM algorithm; Length-biased sampling; Maximum likelihood approach
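The claim that the population mean is estimable from length-biased data has a particularly simple uncensored special case: since the biased density is g(t) = t f(t)/mu, the expectation of 1/T under g equals 1/mu, so the harmonic mean of the biased sample recovers mu. The Gamma simulation below is an illustrative assumption; the paper treats the harder right-censored case:

```python
import numpy as np

rng = np.random.default_rng(6)
N = 200000

# Population failure times with mean mu = 2.0 (Gamma with shape 2, scale 1).
T = rng.gamma(shape=2.0, scale=1.0, size=N)

# Length-biased sampling: each subject is sampled with probability
# proportional to its failure time (size-biased resampling).
p = T / T.sum()
Tb = T[rng.choice(N, size=20000, replace=True, p=p)]

mean_biased = Tb.mean()                 # overestimates mu (here E = 3.0)
mu_hat = 1.0 / np.mean(1.0 / Tb)        # harmonic-mean correction
```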
Researchers of uncommon diseases are often interested in assessing potential risk factors. Given the low incidence of disease, these studies are frequently case-control in design. Such a design
allows a sufficient number of cases to be obtained without extensive sampling and can increase efficiency; however, these case-control samples are then biased since the proportion of cases in the
sample is not the same as the population of interest. Methods for analyzing case-control studies have focused on utilizing logistic regression models that provide conditional and not causal estimates
of the odds ratio. This article will demonstrate the use of the prevalence probability and case-control weighted targeted maximum likelihood estimation (MLE), as described by van der Laan (2008), in
order to obtain causal estimates of the parameters of interest (risk difference, relative risk, and odds ratio). It is meant to be used as a guide for researchers, with step-by-step directions to
implement this methodology. We will also present simulation studies that show the improved efficiency of the case-control weighted targeted MLE compared to other techniques.
PMCID: PMC2835459 PMID: 20231910
It is of interest to estimate the distribution of usual nutrient intake for a population from repeat 24-h dietary recall assessments. A mixed effects model and quantile estimation procedure,
developed at the National Cancer Institute (NCI), may be used for this purpose. The model incorporates a Box–Cox parameter and covariates to estimate usual daily intake of nutrients; model parameters
are estimated via quasi-Newton optimization of a likelihood approximated by the adaptive Gaussian quadrature. The parameter estimates are used in a Monte Carlo approach to generate empirical
quantiles; standard errors are estimated by bootstrap. The NCI method is illustrated and compared with current estimation methods, including the individual mean and the semi-parametric method
developed at Iowa State University (ISU), using data from a random sample and computer simulations. Both the NCI and ISU methods for nutrients are superior to the distribution of individual
means. For simple (no covariate) models, quantile estimates are similar between the NCI and ISU methods. The bootstrap approach used by the NCI method to estimate standard errors of quantiles appears
preferable to Taylor linearization. One major advantage of the NCI method is its ability to provide estimates for subpopulations through the incorporation of covariates into the model. The NCI method
may be used for estimating the distribution of usual nutrient intake for populations and subpopulations as part of a unified framework of estimation of usual intake of dietary constituents.
PMCID: PMC3865776 PMID: 20862656
statistical distributions; diet surveys; nutrition assessment; mixed-effects model; nutrients; percentiles
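The Monte Carlo quantile step of the NCI method can be sketched as follows, assuming the Box-Cox mixed model has already been fitted. The parameter values, and the omission of covariates and of the within-person variance adjustment, are deliberate simplifications for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical fitted values from a Box-Cox mixed effects model of 24-h
# recalls: transformation parameter, fixed-effect mean on the transformed
# scale, and the between-person (usual intake) variance component.
lam, beta0, var_u = 0.4, 6.0, 0.8

def inv_boxcox(z, lam):
    # Inverse of the Box-Cox transform z = (x**lam - 1) / lam.
    return (lam * z + 1.0) ** (1.0 / lam)

# Monte Carlo step: draw person-level random effects, back-transform to the
# original scale, and read off empirical percentiles of usual intake.
u = rng.normal(0.0, np.sqrt(var_u), size=100000)
usual = inv_boxcox(beta0 + u, lam)
q = np.percentile(usual, [5, 25, 50, 75, 95])
```

Covariates enter by shifting beta0 per subgroup, which is how the method produces subpopulation-specific distributions.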
We propose a new cure model for survival data with a surviving or cure fraction. The new model is a mixture cure model where the covariate effects on the proportion of cure and the distribution of
the failure time of uncured patients are separately modeled. Unlike the existing mixture cure models, the new model allows covariate effects on the failure time distribution of uncured patients to be
negligible at time zero and to increase as time goes by. Such a model is particularly useful in some cancer treatments when the treatment effect increases gradually from zero, and the existing models
usually cannot handle this situation properly. We develop a rank based semiparametric estimation method to obtain the maximum likelihood estimates of the parameters in the model. We compare it with
existing models and methods via a simulation study, and apply the model to a breast cancer data set. The numerical studies show that the new model provides a useful addition to the cure model literature.
PMCID: PMC2903637 PMID: 19697127
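The structural idea of a mixture cure model — a cured fraction whose survival never decays to zero — can be written down in a few lines. This is only a toy illustration: it uses an exponential survival function for the uncured patients and does not capture the paper's key feature of covariate effects that grow from zero over time.

```python
import math

def mixture_cure_survival(t, cure_prob, hazard):
    """Population survival under a mixture cure model:
    S(t) = pi + (1 - pi) * S_u(t),
    where pi is the cure fraction and S_u is the survival function of the
    uncured (here exponential with constant hazard, for illustration)."""
    s_uncured = math.exp(-hazard * t)
    return cure_prob + (1.0 - cure_prob) * s_uncured

# With a 30% cure fraction, survival plateaus at 0.3 instead of going to 0
plateau = mixture_cure_survival(1000.0, cure_prob=0.3, hazard=0.5)
```

The plateau at the cure probability is exactly what distinguishes cure models from standard survival models.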
In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts
dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum
likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data.
Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with
the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.
PMCID: PMC3936338 PMID: 23286178
DCE-MRI; Gaussian Stochastic Process; Pharmacokinetic Model; Bayesian Inference; Coordinate Descent Optimization
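The Tofts dual-compartment model underlying this work has a standard closed form that is easy to evaluate numerically. The sketch below (hypothetical arterial input function and parameter values, simple Riemann-sum convolution) computes the tissue concentration curve; the paper's contribution — the Gaussian-process noise model and its likelihood maximization — is not shown.

```python
import math

def tofts_concentration(times, c_plasma, k_trans, v_e):
    """Standard Tofts model tissue concentration:
    C_t(t) = K_trans * integral_0^t C_p(u) * exp(-k_ep * (t - u)) du,
    with k_ep = K_trans / v_e, evaluated by a simple Riemann sum."""
    k_ep = k_trans / v_e
    dt = times[1] - times[0]
    out = []
    for i, t in enumerate(times):
        acc = 0.0
        for j in range(i + 1):
            acc += c_plasma[j] * math.exp(-k_ep * (t - times[j])) * dt
        out.append(k_trans * acc)
    return out

# Toy arterial input function: a bolus that decays exponentially
times = [0.1 * i for i in range(100)]
aif = [5.0 * math.exp(-0.8 * t) for t in times]
ct = tofts_concentration(times, aif, k_trans=0.25, v_e=0.3)
```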
Illustration 20.1: Maxwell-Boltzmann Distribution
In this animation N = nR (i.e., k_B = 1). This, then, gives the ideal gas law as PV = NT. The average values shown, ⟨ ⟩, are calculated over intervals of one time unit.
The particles that make up a gas do not all have the same speed. The temperature of the gas is related to the average speed of the particles, but there is a distribution of particle speeds called the
Maxwell-Boltzmann distribution. The smooth black curve on the graph is the Maxwell-Boltzmann distribution for a given temperature. What happens to the distribution as you increase the temperature?
The distribution broadens and moves to the right (higher average speed). At a specific temperature, there is a set distribution of speeds. Thus, when we talk about a characteristic speed of a gas
particle at a particular temperature we use one of the following (where M is the molar mass and m is the mass of a single particle):
• Average speed: (8RT/πM)^(1/2) = (8k_BT/πm)^(1/2)
• Most probable speed: (2RT/M)^(1/2) = (2k_BT/m)^(1/2)
• Root-mean-square (rms) speed: (3RT/M)^(1/2) = (3k_BT/m)^(1/2)
There is not simply one way to describe the speed because it is a speed distribution. This means that as long as you are clear about which one you are using, you can characterize a gas by any of
them. The different characteristic speeds are marked on the graph.
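The three characteristic speeds above are straightforward to compute from the molar-mass forms. A small sketch in SI units (nitrogen at 300 K is used purely as an example):

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def characteristic_speeds(molar_mass_kg, temperature):
    """Average, most probable, and rms speeds (m/s) for a
    Maxwell-Boltzmann distribution at the given temperature."""
    avg = math.sqrt(8 * R * temperature / (math.pi * molar_mass_kg))
    most_probable = math.sqrt(2 * R * temperature / molar_mass_kg)
    rms = math.sqrt(3 * R * temperature / molar_mass_kg)
    return avg, most_probable, rms

# Nitrogen (N2, molar mass 0.028 kg/mol) at 300 K
avg, mp, rms = characteristic_speeds(0.028, 300.0)
```

Note the fixed ordering most probable < average < rms, which holds at every temperature since 2 < 8/π < 3.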
Illustration authored by Anne J. Cox.
Raritan, NJ Algebra 1 Tutor
Find a Raritan, NJ Algebra 1 Tutor
...My goal is to break down complicated concepts for my students to ensure they have a deep understanding of the material. I look forward to working with each and every student and take pride in
mentoring and guiding students through these difficult courses. I am a master MCAT tutor with another tutor...
39 Subjects: including algebra 1, chemistry, writing, calculus
...I am available to tutor your child from 5pm until 10pm any weeknight. I can also tutor some Saturdays and Sundays. I took graduate courses at the College of Saint Elizabeth to complete the
accelerated certification for teachers.
9 Subjects: including algebra 1, SAT math, grammar, elementary (k-6th)
...I have also recently passed the Praxis II Exam in Math Content Knowledge while earning Recognition of Excellence for scoring in the top 15 percent over the last 5 years. My most valuable
quality, however, is my ability to relate to students of all ages, and make even the most difficult subjects ...
22 Subjects: including algebra 1, reading, English, chemistry
...It does not deal with the real numbers and it's continuity. I have studied discrete math as I obtained my BS in mathematics from Ohio University. I have studied logic as an integral part of my
mathematics education.
14 Subjects: including algebra 1, calculus, algebra 2, geometry
I have 4.5 years of middle grades teaching experience in math and science, but I am certified to teach grades 6 through 9. I am a stay at home mom, but I am looking to get back into the
educational experience with tutoring. I am highly motivated and enjoy incorporating technology.
9 Subjects: including algebra 1, geometry, GED, algebra 2
Two violin strings are tuned to the same frequency, 277 Hz. The tension in one string is then decreased by 1.92 percent. What will be the beat frequency heard when the two strings are played together?
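No solution is recorded in the thread; assuming the ideal-string relation f ∝ √T, the standard calculation can be sketched as:

```python
import math

def beat_frequency(f0, tension_decrease_fraction):
    """For an ideal string, frequency is proportional to sqrt(tension),
    so lowering the tension by a fraction d scales the frequency by
    sqrt(1 - d).  The beat frequency is the difference of the two."""
    f_detuned = f0 * math.sqrt(1.0 - tension_decrease_fraction)
    return f0 - f_detuned

beats = beat_frequency(277.0, 0.0192)  # roughly 2.7 Hz
```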
[SciPy-dev] solver classes (was: help: wrapping generalized symmetric evp functions)
[SciPy-dev] solver classes (was: help: wrapping generalized symmetric evp functions)
Robert Cimrman cimrman3@ntc.zcu...
Fri Apr 11 04:08:08 CDT 2008
Ondrej Certik wrote:
> To get the discussion going, here are some comments. Everyone please
> let us know what you think:
> 1)
>> s = Solver.anyFromConf( conf, mtx = A ) # Possible pre-solves by LU.
> How about this:
> s = Solver(conf, mtx = A )
This certainly could be done, but it looks confusing to me -
instantiating a generic Solver and getting an instance of something else
(a particular solver). I prefer slightly more a static method here, with
some better name, e.g. Solver.create_any() or whatever.
> and also this (together with the syntax above):
> s = Solver(kind="ls.scipy_iterative", method="cg", mtx = A )
The same here. Also note that in some cases the configuration dictionary
can be quite large, so having one argument name reserved for 'conf' or
'config' or ... seems ok to me. Anyway, this way of constructing a
solver suits best some general framework where the code does not know in
advance which particular solver a user might need (like it is e.g. in
sfepy - it just needs to know the type of a solver (linear, nonlinear,
> This is useful for reading the configuration from some file. However,
> sometimes (a lot of times) I prefer this:
> 2) how about this:
> class SciPyIterative(LinearSolver):
> blabla
> class CG(SciPyIterative):
> pass
> class Umfpack(LinearSolver):
> pass
> and people would just use:
> from scipy.sparse.linalg import CG, Umfpack
> s = CG(epsA=1e-6, mtx=A)
> or
> s = Umfpack(mtx=A)
Yes, this is what I like to have, too. If you solve a particular
problem, you construct a particular solver directly. (But note that you
can already do that with the classes as they are now in sfepy... Again,
the Solver.anyFromConf()-like syntax is useful more for an abstract
framework than for a use in small scripts.)
> 3) I also prefer to pass the matrix as the first argument, becuase you
> always need to supply a matrix, only then some optinal default
> arguments, or preconditioners, i.e.:
> s = CG(A, epsA=1e-6)
> or
> s = Umfpack(A)
In linear solvers, yes. Other types of solvers might not need a matrix
at all. All solvers have a configuration, though. So there is some logic
in as it is, but 'conf' can be made a keyword argument, then why not.
We should also use something_and_something convention instead of
somethingAndSomething for the argument names, as it is usual in SciPy.
thanks for the comments,
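The two construction styles under discussion — a configuration-driven factory versus direct instantiation of a concrete solver class — can be sketched side by side. All class and argument names below are illustrative only, not sfepy's or SciPy's actual API:

```python
class Solver:
    """Base class; keyword arguments other than 'mtx' are treated as
    the solver configuration."""
    def __init__(self, mtx=None, **conf):
        self.mtx = mtx
        self.conf = conf

    @staticmethod
    def any_from_conf(conf, mtx=None):
        """Factory for a framework that only knows a configuration
        dict, e.g. {'kind': 'cg', 'eps_a': 1e-6}."""
        conf = dict(conf)  # do not mutate the caller's dict
        kind = conf.pop('kind')
        registry = {'cg': CG, 'umfpack': Umfpack}
        return registry[kind](mtx=mtx, **conf)

class LinearSolver(Solver):
    pass

class CG(LinearSolver):
    pass

class Umfpack(LinearSolver):
    pass

# Direct construction, convenient in small scripts:
s1 = CG(mtx='A', eps_a=1e-6)
# Factory construction, convenient in configuration-driven frameworks:
s2 = Solver.any_from_conf({'kind': 'umfpack'}, mtx='A')
```

Both entry points coexist without conflict, which is essentially the compromise the thread converges on.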
the first resource for mathematics
Network flows. Theory, algorithms, and applications.
(English) Zbl 1201.90001
Englewood Cliffs, NJ: Prentice Hall (ISBN 0-13-617549-X). xvi, 846 p. (1993).
Among all topics covered in operations research, network flows theory offers the best context to illustrate the basic concepts of optimization. This book provides an integrative view of the theory,
algorithms and applications of network flows. In order for their presentation to be more intuitive and accessible to a wider audience, the authors prefer to adopt a network or graphical viewpoint
rather than relying on a linear programming approach. The material in this book can be used to serve the purpose of supplementing either an upper level undergraduate or graduate course.
The main features of the book are well outlined in the authors’ preface as follows: “In-depth and self-contained treatment of shortest path, maximum flow, and minimum cost flow problems, including
descriptions of new and novel polynomial-time algorithms for these core models. Emphasis on powerful algorithmic strategies and analysis tools, such as data scaling, geometric improvement arguments,
and potential function arguments. An easy-to-understand description of several important data structures, including $d$-heaps, Fibonacci heaps, and dynamic trees. Treatment of other important topics
in network optimization and of practical solution techniques such as Lagrangian relaxation. Each new topic introduced by a set of applications and an entire chapter devoted to applications. A special
chapter devoted to conducting empirical testing of algorithms. Over 150 applications of network flows to a variety of engineering, management, and scientific domains. Over 800 exercises that vary in
difficulty, including many that develop extensions of material covered in the text. Approximately 400 figures that illustrate the material presented in the text. Extensive reference notes that
provide readers with historical contexts and with guides to the literature.”
In addition to the in-depth analysis of shortest path, maximum flow, minimum cost flow problems, the authors devote several other chapters to more advanced topics such as assignments and matchings,
minimum spanning trees, convex cost flows, generalized flows and multicommodity flows. Furthermore, emphasis is put on design, analysis and computation testing of algorithms.
Finally, pseudocodes for several algorithms are provided for readers with a basic knowledge of computer science.
90-01 Textbooks (optimization)
90B10 Network models, deterministic (optimization)
90C35 Programming involving graphs or networks
90-02 Research monographs (optimization) | {"url":"http://zbmath.org/?q=an:1201.90001","timestamp":"2014-04-19T09:37:52Z","content_type":null,"content_length":"22806","record_id":"<urn:uuid:2ed8f34e-d043-4262-bceb-9db1fb13a9ae>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00288-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equivariant localization and stationary phase.
(English) Zbl 1033.57016
Mladenov, Ivaïlo M. (ed.) et al., Proceedings of the 4th international conference on geometry, integrability and quantization, Sts. Constantine and Elena, Bulgaria, June 6–15, 2002. Sofia: Coral
Press Scientific Publishing (ISBN 954-90618-4-1/pbk). 88-124 (2003).
The author discusses Hamiltonian actions on symplectic manifolds and gives a self-contained introduction to the Cartan model of equivariant cohomology. He relates the results to the Cartan theorem
asserting that the $G$-equivariant cohomology algebra $H_G(M)$ is isomorphic to the de Rham cohomology with complex coefficients of the orbit manifold $M/G$, where $G$ is a compact connected Lie
group acting smoothly and freely on a smooth manifold $M$. Then the author proves the major result of the paper, the equivariant localization theorem, about computing $\int_M \alpha(\xi)$ for any
$G$-equivariantly closed differential form $\alpha$ and any nondegenerate element $\xi \in \mathfrak{g}$ for which the associated vector field $\xi^{\#}$ has only isolated zeros, where
$\mathfrak{g}$ is the Lie algebra of a compact Lie group $G$ acting smoothly on a compact oriented manifold $M$ of dimension $n$. As an application of the theorem, the author derives the generalized
Duistermaat–Heckman theorem about computing $\int_M e^{i\mu(\xi)}\,\upsilon_\omega$ for any compact symplectic manifold $(M,\omega)$ of dimension $2k$ with a Hamiltonian action of $G$ and
corresponding symplectic moments given by $\mu: \mathfrak{g} \to C^\infty(M)$, where $M$ is oriented with the Liouville form $\upsilon_\omega = \frac{1}{k!}\omega^k$.
57R91 Equivariant algebraic topology of manifolds
53D05 Symplectic manifolds, general
53D35 Global theory of symplectic and contact manifolds
55N91 Equivariant homology and cohomology (algebraic topology)
55P60 Localization and completion (homotopy theory) | {"url":"http://zbmath.org/?q=an:1033.57016&format=complete","timestamp":"2014-04-16T07:19:13Z","content_type":null,"content_length":"24509","record_id":"<urn:uuid:114e8b96-10be-4728-9f5d-4c9ee1708718>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00639-ip-10-147-4-33.ec2.internal.warc.gz"} |
American Mathematical Society
Additional Material for the Book
Function Theory: Interpolation and Corona Problems
Eric T. Sawyer
Publication Year: 2009
ISBN-10: 0-8218-4734-1
ISBN-13: 978-0-8218-4734-3
Fields Institute Monographs, vol. 25
This page is maintained by the author.
Contact information:
• Eric T. Sawyer
• Department of Mathematics and Statistics
• McMaster University
• 1280 Main Street West
• Hamilton, ON L8S 4K1, Canada
• erictsawyer@gmail.com
This page will be used for updates and additional material.
Proposition 5.12 on page 100 incorrectly asserts that separation and the Carleson measure condition hold "if and only if" strong separation holds. The "if and only if" should be changed to "only if".
See the scanned pages for detailed changes.
Conclusion 5.17 on page 102 is obtained on page 36 of K. Seip [43].
The proof of Theorem 5.28 given on pages 108 and 109 is valid only for sigma equal to zero, since the claimed embedding of Besov–Sobolev spaces on page 109 fails for sigma positive. A correct proof
can be found in J.M. Ortega and J. Fabrega [33].
Monday Math Madness is here!
I was contacted last week by a couple of Matlab geeks, Quan and Daniel, who have started a blog called Blinkdagger, about co-sponsoring an ongoing Math contest. Now, I was a little confused because I think these
guys are engineers, and I didn't know that engineers took Math classes (Calculus in the 4th grade). Anyway, their email got past my spam filter so I figured this relationship was meant to be! So,
(drum roll, please), Blinkdagger and Wild About Math! have teamed up to post fun Math puzzles on the 1st and 3rd Monday of every month. There'll be a prize for the best combination of randomly
selected right answer plus good explanation of how you got your answer.
Here's how Monday Math Madness will work. I'll post a problem on the first Monday of the month. You solve it. You email your answer and explanation to the special email address for the contest. A
randomly selected excellent submission wins the prize. Two weeks later, on the third Monday of the month, the Blinkdagger guys post their problem. You solve, submit, and maybe win a prize. Then, it's
back to this blog the first Monday of next month.
Here's the first contest problem:
A popular blog has just three categories: brilliant, insightful, and clever. Every blog post belongs to exactly one of the three categories and the category for each post is selected at random.
What is the probability of reading at least one post from each category if a reader reads exactly five posts?
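The puzzle is small enough to check by exhaustive enumeration — spoiler ahead for anyone still working on it. A brute-force sketch:

```python
from fractions import Fraction
from itertools import product

def p_all_categories(n_posts=5, n_categories=3):
    """Exact probability that n_posts uniform draws from n_categories
    hit every category at least once, by enumerating all outcomes."""
    total = favorable = 0
    for outcome in product(range(n_categories), repeat=n_posts):
        total += 1
        if len(set(outcome)) == n_categories:
            favorable += 1
    return Fraction(favorable, total)

p = p_all_categories()  # 150/243 = 50/81
```

The same value follows from inclusion–exclusion: 1 − 3·(2/3)^5 + 3·(1/3)^5 = 50/81.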
I will send a $10 gift certificate to Amazon.com to a randomly selected person who provides a good answer to this problem. Good answers are correct, clearly explained, and ideally elegant although
not all problems will have elegant solutions. If you cheat and find the answer to this or to a future problem on Google please change a few words to throw us off the scent as we'd rather give the
prize to someone who actually solved the problem. After Daniel, Quan, and I pick it, we will show off the first good randomly selected solution by posting it to our blogs. We may also post some other
good solutions and give out some link love so, if you have a blog, let us know your URL when you email your solution. All entries must be submitted by Sunday night after the posting Monday as I'll be
checking mail Monday morning. The special submission mailbox is MondayMathMadness at g/m/a/i/l/./c/o/m. (Discard the slashes and turn the "at" into "@".)
I'll be contacting manufacturers of Math-related products and see if I can round up Math books, games, toys, and puzzles to give as prizes. If not then I'll keep giving out Amazon certificates. The
Blinkdagger guys get to choose their own prizes. They might want to outdo me and give away sports cars or copies of Matlab, or something.
Note: You only get to win once per year. So, if you win then immediately unsubscribe from our blogs we'll understand.
I, and the Matlab guys at Blinkdagger, would appreciate your telling your friends about Monday Math Madness and talking it up on your blogs. I realize that you might only want to tell your friends
who are not as good at solving Math puzzles as you are. Just know that we are desperate for blog readers and we would love to have your competitors know about this madness even if it's at your
Quan, Daniel, and Sol
Cube photo by dps
Comments (3)
1. Yes, engineers do study maths. They handle many computations and model systems to analyse their limits and weakness. Many a times, maths reveal some properties that the naked eye cannot see.
Therefore maths serves as the “eye” for engineers too.
2. Great idea! I gave this problem to one of my classes to solve today, along with a explanation of how to submit solutions, so I hope you hear from some of them.
3. @Lim Ee Hai: Yes, I know engineers study Math. I just like to give the Matlab guys a hard time.
@Heather: Excellent! That’s a great idea to get your students engaged. One of them might win the $10 from Amazon. Are all of you teachers out there paying attention? | {"url":"http://wildaboutmath.com/2008/03/03/monday-math-madness-is-here/","timestamp":"2014-04-19T14:54:52Z","content_type":null,"content_length":"40324","record_id":"<urn:uuid:76c28469-9871-46a3-a369-ec3b607d0dff>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00305-ip-10-147-4-33.ec2.internal.warc.gz"} |
Look at the picture
help me
A polynomial is a monomial or the sum or difference of monomials. Each monomial is called a term of the polynomial.
i know .
so are you asking for the answer? or why that is the answer marked in the picture?
im asking for the correct answer because i got this wrong and i wanna know which i got wrong.
I think it's all of them except for D
yeah, i think D is the only one that isn't because it has a negative exponent in it
A polynomial is a sum of terms. Each term of a polynomial can only contain a number (called a coefficient) and zero or more variables raised to a natural number (1, 2, 3, ...). If you have a
variable as an exponent, or a variable raised to a negative exponent, or a fractional exponent, that is not a polynomial.
b isnt one?
so A wouldn't be one either according to mathstudent's definition
Ya, I don't think A is a polynomial either.
B & F are both polynomials @Katieann16
actually E wouldn't be one either since it has x^4 in the denominator
A has a variable as an exponent. C has a fractional exponent. D & E have variables in the denominator.
so its just B&F
pretty sure mathstudent just nailed it
Correct, only B and F
thank you all(:
To get this question right you have to understand the definition of a monomial.
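The rule stated in the thread can be turned into a tiny checker. This sketch represents a single-variable expression as (coefficient, exponent) pairs — a deliberately simplified model that cannot even express a variable in an exponent (as in 2^x), which is itself a sign such an expression is not a polynomial:

```python
def is_polynomial(terms):
    """Check whether a single-variable expression, given as a list of
    (coefficient, exponent) pairs, is a polynomial: every exponent
    must be a nonnegative integer."""
    return all(isinstance(exp, int) and exp >= 0 for _coef, exp in terms)

assert is_polynomial([(3, 2), (-1, 0)])   # 3x^2 - 1: polynomial
assert not is_polynomial([(1, -1)])       # 1/x: negative exponent
assert not is_polynomial([(1, 0.5)])      # sqrt(x): fractional exponent
```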
The PROB function in Excel 2010
The following tutorial describes the usage and syntax of the PROB function in the Microsoft Excel 2010 application.
The PROB function returns the probability that a particular value in a given range is between two limits, the lower limit and the upper limit.
The PROB function has the following syntax and arguments
=PROB(y_range, prob_range, [lower_limit], [upper_limit])
‘y_range’ is a required argument and it is the range of numeric values of y with which probabilities are associated.
‘prob_range’ is also a required argument and it is the range of probability values that are associated with the y-range values.
‘lower_limit’ is an optional argument and it is considered as the lower bound on the value for which you wish to find the probability.
‘upper_limit’ is also an optional argument and it is considered as the upper bound on the value for which you wish to find the probability.
If any of the values in the prob_range is less than or equal to ‘0’ or greater than ‘1’, the function returns the #NUM! error value.
If the summation of values in the prob_range is not equal to ‘1’, the function returns the #NUM! error value.
If y_range and prob_range contain unequal numbers of values, the function returns the #N/A error value.
If upper_limit is omitted, the function returns the probability that the value is equal to lower_limit.
Refer to the figure below for examples of the PROB function.
So, this is how you can find the probability of a particular value between two limits in the given range of values.
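The behavior described above can be mimicked in a few lines of Python. This is a sketch of the documented semantics, not Excel's actual implementation:

```python
def prob(y_range, prob_range, lower_limit, upper_limit=None):
    """Rough Python equivalent of Excel's PROB function: total
    probability mass of the y values lying between lower_limit and
    upper_limit, inclusive."""
    if len(y_range) != len(prob_range):
        raise ValueError("#N/A: ranges have different lengths")
    if any(p <= 0 or p > 1 for p in prob_range):
        raise ValueError("#NUM!: probabilities must lie in (0, 1]")
    if abs(sum(prob_range) - 1.0) > 1e-9:
        raise ValueError("#NUM!: probabilities must sum to 1")
    if upper_limit is None:
        upper_limit = lower_limit  # PROB then returns P(y == lower_limit)
    return sum(p for y, p in zip(y_range, prob_range)
               if lower_limit <= y <= upper_limit)

# P(1 <= y <= 3) for a small discrete distribution
p = prob([0, 1, 2, 3], [0.2, 0.3, 0.1, 0.4], 1, 3)  # 0.8
```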
Read Other Applications
Model Surface Effects On Power Dissipation
Due to the decreasing skin depth of printed-circuit boards (PCBs) used for high-speed analog and digital circuits, surface roughness plays an important role in determining electrical performance. The
effect of roughness on power dissipation has been considered an electromechanical coupling (EC) issue, studied by simultaneous mechanical and electromagnetic (EM) analyses, but with limited success.
So, an EC model was developed as an effective analysis approach for evaluating the effect of surface roughness on power dissipation.
The EC model was developed by introducing the concept of a technical index and a measurement method for validation. Using the EC model as the boundary condition, a generic formula for added power
dissipation caused by surface roughness will be deduced. Simulations and experiments will also be performed to validate the model. A typical EC model will be used to describe the surface roughness
effect of electrical conductors, and that model itself will be improved by a two-dimensional fractal function. The conventional Monte Carlo method and finite-element method (FEM) were enlisted for
the EC calculations. Comparative results will show the EC model can describe surface roughness more precisely than other models. Its advantages on the computational efficiency can be demonstrated by
a comparison in centralprocessing- unit (CPU) time based on the same computational conditions. The EC model was found to be ideal for describing the surface roughness of high-frequency circuits,
meeting various requirements for mechanical analysis and EM calculation.
Smooth surfaces are rare in nature, and most surfaces (including engineering surfaces) appear rough when viewed under a microscope. Since electronic products and systems have steadily pushed analog
and digital rates into the gigahertz region, the surface roughness of the PCBs used in those designs can have a significant impact on key electrical properties, such as power dissipation. As a
result, a quantitative description of a PCB's surface topography is vital to understanding surface effect on electrical performance.
During the past decade, numerous methods have been developed to quantitatively analyze surface topography, including numerical solutions,^1 theoretical approaches,^2 and experimental approaches.^3
Unfortunately, the process of acquiring measured profiles results in irreversible damage to the material under investigation, making it difficult if not impossible to complete a measured analysis of
the rough surface of a PCB. In terms of computer-aided-engineering (CAE) analysis, existing commercial software tools do not model the inner surface roughness accurately. As a result, attention has
been focused on numerical simulations of the inner surface roughness of circuit materials.
Present research on modeling surface roughness focused on mechanical approaches, effect on conductor impedance, and EM scattering. Mechanical modeling has relied on the geometric deviation
description model in mechanical engineering as a means of evaluating the surface roughness of an electronic material.^1,4,5 The approach was suitable for evaluating any friction at a material
interface, any wearing of moving surfaces, sealing of joined surfaces, and reliability of a mated surface, but it did not take into consideration the effect of surface roughness on the EM
characteristics. For evaluating the impedance of conductors, common results were due to Morgan's classical paper^6 and the Hammerstad and Bekkadal formula.^2 Moreover, recent results were also
consistent with Morgan's work.^7 In these works, a common and basic assumption was the lateral uniformity of the material under study, which indicated that the surface roughness was distributed in
one direction and the surface was smooth along the direction of conducting current. However, this is a greatly oversimplified assumption, since any material fabrication process would invariably
produce conductors with surface variations and the roughness distributed without predictable regularity.
In terms of studying EM scattering, Falconer^8 and Mandelbrot^9 showed that the multiscale nature of surface roughness can be represented by fractal geometry, since the actual surface of an electrical material is self-affine in nature.^10 Increased detail about a material's rough surface can be gained by repeatedly magnifying its profile. But this additional roughness information markedly increases the complexity of any computation and analysis performed on the surface, and because such EM modeling disregards the interrelationship of surface roughness with electrical performance, its applicability is greatly limited.
Because the effect of surface roughness on power dissipation is a problem not only in mechanical analysis but also in EM analysis, it is treated here as an EC problem, requiring both mechanical and electrical parameters for complete analysis. Proekt and Cangellaris formulated an effective conductivity by equating the power loss in a rough-surface conductor with the loss for a smooth surface.^11 However, their derivation, based on perturbation theory, is valid only when the surface roughness is far smaller than the skin depth, and is not applicable when the roughness is comparable to the skin depth. As a result, an EC modeling approach was developed for generic use with different surface-roughness configurations while also meeting the different requirements of mechanical and EM calculations. It has been applied with good results for accuracy and computational efficiency.
Figure 1 shows a comparison of current flowing on smooth (top) and rough (bottom) surfaces. Based on microwave surface theory, 98.2 percent of the current flow is concentrated within 4t of the surface of a material,^12-14 where t denotes the skin depth (1 - e^-4 ≈ 0.982). Therefore, considering only the current flowing within 4t of the surface is a reasonable simplification. The effect of roughness on the surface of a conductor depends on the ratio of the root-mean-square (RMS) surface roughness to the skin depth of the conductor, rather than on the exact shape of the surface profile. Roughness features that are too large or too small relative to this 4t reference have little influence on the material's electrical properties.
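The 98.2-percent figure follows from the exponential decay of current density with depth: the fraction of the current carried within n skin depths is 1 - e^-n, and 1 - e^-4 ≈ 0.982. A minimal sketch (the function names are ours; copper at 1 GHz is used only as a familiar check case, not a value from the paper):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Skin depth delta = sqrt(2 / (omega * mu * sigma)), in meters."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu_r * MU0 * sigma))

def current_fraction(n_depths):
    """Fraction of the surface current carried within n skin depths."""
    return 1.0 - math.exp(-n_depths)

# Copper at 1 GHz (sigma = 5.8e7 S/m):
print(round(skin_depth(1e9, 5.8e7) * 1e6, 2))  # 2.09 (micrometers)
print(round(current_fraction(4.0), 3))         # 0.982
```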
According to microwave surface-resistance theory and engineering practice, a microfluctuating shape feature with an amplitude of less than 0.5 times the skin depth has no adverse effect on the material's electrical properties.^15 This indicates that some numerical filtering of the original surface contours is needed. A modified profile is obtained by filtering out shortwave surface components having an amplitude of less than 0.5 times the skin depth; this filtering step gives the method its name, the EC modeling approach.
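The paper does not spell out the filtering procedure, so the following is only a sketch of one plausible implementation: transform the profile, zero every spectral component whose amplitude falls below half the skin depth, and transform back.

```python
import numpy as np

def filter_profile(profile, skin_depth):
    """Drop spectral components whose amplitude is below 0.5 * skin depth.

    A sketch of the paper's filtering step; the spectral criterion used
    here is an assumption, not the authors' documented procedure.
    """
    n = len(profile)
    spectrum = np.fft.rfft(profile)
    # Peak amplitude of each component, in profile units.
    amplitude = 2.0 * np.abs(spectrum) / n
    amplitude[0] = np.abs(spectrum[0]) / n  # DC term is not doubled
    spectrum[amplitude < 0.5 * skin_depth] = 0.0
    return np.fft.irfft(spectrum, n)

# Toy profile in microns: one large undulation plus a small shortwave ripple.
x = np.linspace(0.0, 1.0, 512, endpoint=False)
rough = 2.0 * np.sin(2 * np.pi * x) + 0.1 * np.sin(2 * np.pi * 40 * x)
smooth = filter_profile(rough, skin_depth=1.56)  # 0.1-um ripple < 0.78 um
```

With a skin depth of 1.56 μm, the 0.1-μm ripple falls under the 0.78-μm threshold and is removed, while the 2-μm undulation survives.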
In mechanical roughness modeling, in which only one dimension of local information is applied, the arithmetical mean deviation (R[a]) is regarded as the unique evaluation index. Using such a parameter does not make it possible to fully consider two- or three-dimensional surface features, however; R[a] is nondeterministic because different surface contours may share the same R[a] value. To perform EM modeling of surface roughness, two parameters (D and b) have been introduced to describe rough surfaces: D is the fractal dimension, while b is the fundamental frequency in space and determines surface density. Although these two parameters can describe surface roughness accurately, they cannot be measured directly, and it is hard to express them in terms of measured roughness parameters such as R[a]. For this reason, mechanical roughness modeling has limited applicability for analyzing roughness effects on electrical performance. Studies have shown that the biggest differences among contours with the same R[a] value lie in the density and regularity of the surface peaks and valleys. This information appears as the expansion length of the surface profile, which is the root cause of the lengthened surface-current path. Therefore, two parameters, R[a] and R[1], are introduced as a two-dimensional surface roughness index: R[a] is the arithmetical mean deviation, while R[1] is an auxiliary parameter equal to the length of the effective contour. The two parameters are found by Eqs. 1 and 2, where:
l = one sampling length,
f(x) = effective conductive contour,
(x[i], y[i]) = the measured values of the conductive contour, and
d = the skin depth.
Using the two-dimensional roughness technical index <R[a], R[1]> to describe surface microgeometry errors reflects not only the relief intensity of the roughness but also its fluctuational gradient, in agreement with the requirements of mechanical and EM field analysis.
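Since Eqs. 1 and 2 are not reproduced in the text, the standard discrete forms of the two quantities can be sketched directly from a sampled contour (function and variable names here are ours):

```python
import math

def roughness_indices(xs, ys):
    """Two-dimensional roughness index <R_a, R_l> from a sampled contour.

    R_a: arithmetical mean deviation of the profile from its mean line.
    R_l: length of the effective contour (the profile's expansion length).
    """
    mean = sum(ys) / len(ys)
    r_a = sum(abs(y - mean) for y in ys) / len(ys)
    r_l = sum(math.hypot(x1 - x0, y1 - y0)
              for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])))
    return r_a, r_l

# Two contours with identical R_a but different peak density: the denser
# profile has the longer effective contour, hence the longer current path.
xs = list(range(10))
dense = [0.5 if i % 2 == 0 else -0.5 for i in range(10)]  # peak every step
sparse = [0.5] * 5 + [-0.5] * 5                           # one transition
```

Both example profiles have R[a] = 0.5, but the dense profile's R[l] is noticeably longer, which is exactly the distinction the <R[a], R[1]> index is meant to capture.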
Power dissipation due to a rough surface can be better understood by considering a rectangular waveguide. The TE[01] fundamental (transverse-electric) mode of the waveguide has a magnetic field in the broad wall with H[X] and H[Z] components, while the magnetic field in the narrow wall has only the H[Z] component. Considering the presence of loss, the H[X] and H[Z] components can be found from Eqs. 3 and 4.
According to skin-effect theory^12-14 and Morgan's classical empirical formula,^6 the added power dissipation due to the roughness of one infinitesimal unit element in the broad wall of the waveguide can be found from Eq. 5, while that for one infinitesimal unit in the narrow wall can be found from Eq. 6.
Roughness is distributed throughout an inner surface, although with a locally irregular shape, so roughness details are added as a boundary condition for precision. Given a smooth waveguide internal surface, f(x,y) = 0, the power dissipation is P0 for a volume with x varying from x1 to x2, z varying from f(x) to g(x), and y varying from -∞ to f(x,z), as determined by Eq. 7, where:
f(x) and g(x) = the upper and lower boundaries of the surface contour in the z direction, and
a, b = waveguide width and height.
The added power dissipation for the whole waveguide due to surface roughness can be found by Eq. 8.
Numerical experiments were conducted to compare values from the proposed EC model and the traditional model with experimental data. For the experiments, a seven-stage filter prototype was manufactured of H62 brass, plated with 10-μm-thick silver, and connected by a bolt (Fig. 2). The surface of the prototype filter's cavity was measured by means of a Taylor Hobson surface contour analyzer, which has a range of 200 μm and an accuracy of 0.1 μm for both R[a] and R[z]. The local scanning pattern of the measurement report is shown in Fig. 3. Figure 4(a) shows an enlarged section of the surface roughness profile of Fig. 3, while Fig. 4(b) shows the same profile improved by digital filtering, in which the shortwave component, based on the operating frequency, is filtered out.
Although a one-dimensional (1D) surface roughness model provides a clear understanding of the problem, it is more precise to treat a rough surface as a two-dimensional (2D) model. Therefore, the rough surface was modeled by 2D fractal grooves of one unit length using Matlab Version 7.1 from The MathWorks. The basic model is shown in Fig. 5(a), with an improved version in Fig. 5(b) resulting from the removal of shortwave components based on operating frequency. The prototype is a seven-stage rectangular-waveguide filter centered at 14 GHz, with a skin depth of 1.56 μm; accordingly, the amplitude threshold for the shortwave components filtered out was 0.78 μm.
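The paper generates its fractal grooves in Matlab; the exact series the authors used is not given, so a generic Weierstrass-Mandelbrot-type construction, parameterized by the fractal dimension D and a fundamental spatial frequency b as in the text, is one plausible sketch:

```python
import math

def wm_profile(xs, d=1.5, b=1.0, gamma=1.5, n_terms=20):
    """Weierstrass-Mandelbrot-type fractal profile (an assumed form).

    d: fractal dimension of the profile (1 < d < 2).
    b: fundamental spatial frequency; gamma: frequency scaling ratio.
    """
    return [sum(gamma ** ((d - 2.0) * n) *
                math.cos(2.0 * math.pi * b * gamma ** n * x)
                for n in range(n_terms))
            for x in xs]

xs = [i / 256.0 for i in range(256)]
profile = wm_profile(xs)
```

Raising d toward 2 weights the high-frequency terms more heavily, producing a profile that stays rough under magnification, which is the self-affinity property cited earlier.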
A Monte Carlo simulation was introduced for calculating average power dissipation.^17,18 In these simulations, there were 8192 data points in each fractal rough surface, and each group contained 2000 rough surfaces. The dissipation ratio for every realization was calculated by solving Eq. 8, and the average dissipation was then computed.
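The averaging loop itself is simple. Since Eq. 8 is not reproduced here, the per-realization loss calculation below is a placeholder (the RMS-to-ratio mapping is purely illustrative, not the paper's equation):

```python
import random
import statistics

def dissipation_ratio(surface):
    """Stand-in for solving Eq. 8 on one rough-surface realization.

    The real computation integrates the added loss over the profile; this
    hypothetical mapping from RMS roughness to a ratio >= 1 is for
    illustration only.
    """
    rms = (sum(y * y for y in surface) / len(surface)) ** 0.5
    return 1.0 + rms

def monte_carlo_dissipation(n_runs=2000, n_points=8192, seed=0):
    """Average the dissipation ratio over many random rough surfaces."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_runs):
        surface = [rng.gauss(0.0, 0.4) for _ in range(n_points)]
        ratios.append(dissipation_ratio(surface))
    return statistics.fmean(ratios)

# Reduced run counts keep the sketch fast; the paper used 2000 x 8192.
avg = monte_carlo_dissipation(n_runs=200, n_points=1024)
```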
Figure 6 offers a comparison of experimental data with values determined by the proposed EC model (Eq. 8), the EM model, and the mechanical models. The fractal model serves as the EM model; the exponential rough surface was taken as the first mechanical model and the Gaussian rough surface as the second. These two kinds of rough surfaces were chosen to represent two modeling levels of realistic roughness: the former is rough in the long range but smooth in the short range, while the latter is rough over both short and long ranges.^19,20 Results of the EC, EM, and mechanical models were obtained from a 2000-run Monte Carlo simulation in Matlab 2009b, with experimental data from the prototype in Fig. 2 as the reference. The results suggest the models show similar trends: as surface roughness increases, power dissipation also increases, up to a maximum of 1.9 times that of the smooth inner surface, in agreement with the experimental data.
The RMS differences of the models were calculated separately: the difference value e for the exponential model was 0.035, for the fractal model 0.0374, and for the EC model 0.0233. The EC model's results were closest to the experimental results, indicating that it describes the effects of surface roughness more exactly than the other models.
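For reference, the RMS difference e between a model curve and measured data is just the root mean square of the pointwise residuals. The sample values below are hypothetical, not the paper's data:

```python
import math

def rms_difference(model, experiment):
    """RMS difference e between model predictions and experimental data."""
    if len(model) != len(experiment):
        raise ValueError("series must be the same length")
    return math.sqrt(sum((m - x) ** 2 for m, x in zip(model, experiment))
                     / len(model))

# Hypothetical sampled dissipation ratios (illustrative only):
experiment = [1.00, 1.15, 1.35, 1.60, 1.90]
model = [1.02, 1.12, 1.38, 1.57, 1.93]
e = rms_difference(model, experiment)  # ~0.028
```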
To increase the model's applicability, a structured model was assembled as shown in Fig. 7, making it possible to change parameters freely. Based on the structured model, values for filter power dissipation at different frequencies are predicted in Fig. 8.
The table compares CPU times among four mathematical models analyzing the rough surface of a rectangular waveguide with a length of 15.8 cm, width of 7.04 cm, and height of 3.556 cm. The processing times for the Gaussian and exponential functions were similar, while the fractal function was about 10 times slower than either because of the large amount of detailed information it carries.^8,9 Although the fractal function was more efficient if limited to a computation of power loss, the EC model remains useful in the study of the electrical properties of irregularly shaped surfaces and structures. Compared with the two Monte Carlo models, the fractal approach took about four times more CPU time than the EC approach for one surface realization, and about ten times more than the Gaussian and exponential functions. However, considering the large number of runs needed for a Monte Carlo simulation to converge, the EC model was much faster than the original fractal function.
This work was supported by the Fundamental Research Funds for the Central Universities (No. JY10000904019).
1. N. Patir, Wear, Vol. 47, 1978, p. 263.
2. E. O. Hammerstad and F. Bekkadal, Microstrip Handbook, Trondheim, Norway, 1975.
3. L. Ponson, D. Bonamy, and E. Bouchaud, Physical Review Letters, Vol. 96, 2006, p. 3.
4. A. Majumdar and C. L. Tien, Wear, Vol. 136, 1990, p. 313.
5. A. Majumdar and B. Bhushan, ASME Journal of Tribology, Vol. 112, 1990, p. 205.
6. S. P. Morgan, Journal of Applied Physics, Vol. 20, 1949, p. 352.
7. C. L. Holloway and E. F. Kuester, IEEE Transactions on Microwave Theory & Techniques, Vol. 43, 2000, p. 2695.
8. K. Falconer, Fractal Geometry: Mathematical Foundations and Applications, Wiley, New York, 1990.
9. B. B. Mandelbrot, The Fractal Geometry of Nature, Freeman, New York, 1983.
10. A. Majumdar and B. Bhushan, ASME Journal of Tribology, Vol. 113, 1991, p. 1.
11. L. Proekt and A. C. Cangellaris, Proceedings of the 53rd International Conference on Electronic Components and Technology, May 30, 2003, pp. 1004-1010.
12. F. E. Terman, Radio Engineers' Handbook, McGraw-Hill, New York, 1943.
13. C. R. Paul, Introduction to Electromagnetic Compatibility, Wiley-Interscience, New York, 1992.
14. S. R. Seshadri, Fundamentals of Transmission Lines and Electromagnetic Fields, Addison-Wesley, New York, 1971.
15. V. M. Papadopoulos, Quarterly Journal of Mechanics & Applied Mathematics, Vol. 7, 1954, p. 326.
16. Y. Konishi and K. Uenakada, IEEE Transactions on Microwave Theory & Techniques, Vol. 22, 1974, p. 869.
17. M. K. Thompson, "Methods for Generating Rough Surfaces in ANSYS," Proceedings of the 2006 International ANSYS Conference, May 2-4, 2006, Pittsburgh, PA.
18. M. Q. Zou, B. M. Yu, Y. J. Feng, and P. Xu, Physica A, Vol. 386, 2007, p. 176.
19. J. A. Ogilvy and J. R. Foster, Journal of Physics D: Applied Physics, Vol. 22, 1989, p. 1243.
20. L. Tsang, X. Gu, and H. Braunisch, IEEE Microwave and Wireless Components Letters, Vol. 16, 2006, p. 221.
From 5 employees at a company, a group of 3 employees will be chosen to work on a project. How many different groups of 3 employees can be chosen?
Is there any easy technique? It is hard to do mentally.
What I do is (3)(3), because 3 people only once.
So this is probability stuff, right? Since `order` doesn't matter, this is a Combination problem, not a Permutation. Understand what I mean by "order doesn't matter"? If Bob, Suzy, and then Tom get picked, that's the same as Suzy, Tom, and then Bob getting picked. The order they were picked in doesn't matter. The notation can be written like this, \(\large _5C_3\), which would be read "5 choose 3". \[\large _nC_r \qquad = \qquad \frac{n!}{r!(n-r)!}\]
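If you want to check it with a computer, Python's built-in `math.comb` evaluates that same formula:

```python
import math

# "5 choose 3": groups of 3 from 5 employees, order doesn't matter.
print(math.comb(5, 3))  # 10

# Same result straight from the formula n! / (r! * (n - r)!):
print(math.factorial(5) // (math.factorial(3) * math.factorial(2)))  # 10
```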
(3)(3)? Hmm, maybe there is a simple way to do this, like by building a tree. But I'm not really comfortable enough with this subject material to explain it that way :\ heh
Yah that sounds right c:
Find a Holbrook, MA Calculus Tutor
...I took calculus in high school and several levels of calculus in college. I also took 3D calculus at MIT. While tutoring in my junior and senior years of college, I tutored freshmen in calculus.
10 Subjects: including calculus, physics, geometry, algebra 2
...I taught review sessions, graded tests, etc. I teach basic Organic Chemistry as part of my AP Chemistry course. I have also taken Organic Chemistry I, II, and III in college and earned A's.
9 Subjects: including calculus, chemistry, biology, algebra 1
...I explain the material so the student can learn through understanding. No short cuts required - if it makes sense, the student will learn it. I have a math degree from MIT and taught math at
Rutgers University for 10 years.
24 Subjects: including calculus, chemistry, physics, statistics
...My experience in writing stems from my position as editor of my high school newspaper and through peer review. My strength is my ability to tutor Spanish. I speak Spanish fluently as a result
of bilingual school and a semester abroad in Madrid.
30 Subjects: including calculus, reading, English, elementary (k-6th)
...I am flexible with my time and can offer make-up lessons. I am looking forward to hearing from you and meeting at your earliest convenience. In high school I took mainly all advanced
mathematics courses.
14 Subjects: including calculus, chemistry, geometry, biology
SAS-L archives -- May 1998, week 3 (#109), LISTSERV at the University of Georgia
Date: Tue, 19 May 1998 13:31:40 -0400
Reply-To: "Bassett Consulting Services, Inc."
Sender: "SAS(r) Discussion" <SAS-L@UGA.CC.UGA.EDU>
From: "Bassett Consulting Services, Inc."
Subject: Re: geometric mean
CONTENT: re: geometric mean
NAME: Michael L. Davis
INTERNET: Bassett.Consulting@worldnet.att.net
AFFILIATION: Bassett Consulting Services, Inc.
P-ADDR: 10 Pleasant Drive, North Haven, CT 06473
PHONE: (203) 562-0640
FAX: (203) 498-1414
I found Amy Savage's question and Lary Jones's reply about the
geometric mean interesting because I had been told that the
geometric mean was synonymous with the median. A quick call
to SI Tech Support and examination of the SI sample code to
compute the geometric mean quickly indicated the error of my
understanding. I am glad that I read SAS-L regularly to help
identify where some of my previous learning may be in error.
Nevertheless, if the goal of using the geometric mean is to minimize the effect of skew when analyzing small, asymmetrical samples, then it appears that one might consider using the median instead of the geometric mean as a tool to estimate the center of the population distribution. Both computations minimize the effects of extreme values.
One advantage of using the median is that it is more easily grasped by the mathematically challenged (such as myself). Also, the median can be easily obtained from a base SAS procedure such as PROC UNIVARIATE or PROC CORR.
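The same comparison is easy to illustrate outside of SAS. In Python, for a small right-skewed sample (the numbers are made up just to show the effect):

```python
import statistics

# A small right-skewed sample: one extreme value inflates the arithmetic mean.
sample = [1.2, 1.5, 1.8, 2.0, 2.4, 3.1, 25.0]

print(round(statistics.mean(sample), 2))            # 5.29 (pulled up by 25.0)
print(round(statistics.geometric_mean(sample), 2))  # 2.76
print(statistics.median(sample))                    # 2.0
```

Both the geometric mean and the median sit near the bulk of the data, but they are not the same statistic: the geometric mean still uses every value (via logs), while the median ignores magnitudes entirely.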
Now all we need is a spirited discussion of which PCTLDEF=
option to use <grin>.
Synthesis Publications

Gain Based Synthesis
• Wavefront Technology Mapping (DATE 1999). Technology mapping that is optimal for delay.
• Gate Sizing Using Geometric Programming, David Kung, Prabhakar Kudva International Workshop on Logic Synthesis (IWLS 1998)
• Gate Sizing Selection for Standard Cell Libraries. F. Beeftink, P. Kudva, D. Kung, L. Stok. International Conference on Computer Aided Design, ICCAD 1998
• F. Beeftink, P. Kudva, D. Kung, R. Puri, and L. Stok, Combinatorial Cell Design for CMOS Libraries, VLSI Integration, February 2000.
• D. Kung and R. Puri, Optimal P/N Width Ratio Selection for Standard Cell Libraries, ACM/IEEE International Conference on CAD (ICCAD), 1999.
• D. Kung, A Fast Fanout Optimization Algorithm for Near-Continuous Buffer Libraries, ICCAD 98, pp. 352-355.

Placement Driven Synthesis
• A. Handa, R. Puri and J. Gu, An Efficient Steiner Tree based approach for Multilayer Routing, Proceedings of Maryland Conference on Advanced Routing of Electronic Modules, September 1995.
• D. Montuno, G. Wilson, R. Puri, A. Marks, T. Montor, P.C. Wong, P. Quesnel, M. Rizvi, C. Goemans, C. Zhang, B. Stacey and J. Davies, Functional/Physical Codesign, Proceedings of Canadian Conference on Electrical and Computer Engineering, September 1995.
• Transformational Placement and Synthesis - Wilm Donath, Prabhakar Kudva, Leon Stok, Paul Villarrubia, Lakshmi Reddy, Andrew Sullivan, Kanad Chakraborty. Design Automation and Test in Europe (DATE 2000).
• Performance Driven Optimization of Network Length in Physical Placement, DAC 1999. A method for changing the placement of groups of circuits so as to reduce delay.
• Performance Driven Optimization of Network Length in Physical Placement - Wilm Donath, Prabhakar Kudva, Lakshmi Reddy. International Conference on Computer Design (ICCD 1999).
• A. H. Farrahi, "Estimation and Removal of Routing Congestion", Proc. of Int'l Workshop on System-Level Interconnect Prediction, p. 149, April 2000, San Diego, CA.
Asynchronous Synthesis
• High Level Design of Asynchronous Systems using ACK, Hans Jacobson, Erik Brunvand, Ganesh Gopalakrishnan, Prabhakar Kudva. Proceedings of the Symposium on Advanced Research in Asynchronous Circuits and Systems 00, IEEE Computer Society Press.
• Asynchronous Transpose Matrix Architectures, Jose Tierno and Prabhakar Kudva. Proceedings of the International Conference on Computer Design: VLSI in Computers and Processors (ICCD), October 1997.
• Synthesis of Distributed Burst-mode Controllers, Prabhakar Kudva and Ganesh Gopalakrishnan. Proceedings of the IEEE/ACM Design Automation Conference, June 1996, pages 63-66. IEEE Computer Society Press.
• Synthesis of Hazard-free Customized Complex Gate Circuits Under Multiple-Input-Change, Prabhakar Kudva, Hans Jacobson, Ganesh Gopalakrishnan, and Steven Nowick. Proceedings of the IEEE/ACM Design Automation Conference, June 1996, pages 67-71. IEEE Computer Society Press.
• Performance Analysis and Optimization of Asynchronous Circuits, Prabhakar Kudva, Ganesh Gopalakrishnan, Erik Brunvand, Venkatesh Akella. Proceedings of the International Conference on Computer Design: VLSI in Computers and Processors (ICCD), October 1994, pages 221-225.
• Testing Two-phase Transition Signalling Based Self-Timed Circuits in a Synthesis Environment, Prabhakar Kudva, Venkatesh Akella. Proceedings of the Seventh International Symposium on High-Level Synthesis, 1994, pages 104-111. IEEE Computer Society Press.
• A Technique to Estimate Power in Asynchronous Circuits, Prabhakar Kudva, Venkatesh Akella. Proceedings of the Symposium on Advanced Research in Asynchronous Circuits and
Systems 94, pages 166-175. IEEE Computer Society Press.
• An Asynchronous High Level Synthesis System Targeted at Interacting Burst-Mode Controllers, Prabhakar Kudva, Ganesh Gopalakrishnan, Venkatesh Akella. Proceedings of the
International Conference on Hardware Description Languages (CHDL 95).
• Peephole Optimization of Asynchronous Circuits, Ganesh Gopalakrishnan, Prabhakar Kudva, Erik Brunvand. Proceedings of the International Conference on Computer Design: VLSI in
Computers and Processors (ICCD) 94, October 1994 pages 471-474.
• Hazard-non-increasing gate-level optimization algorithms, ICCAD 92, pages 631-634. Given an unoptimized logic without hazards, how to generate a multi-level optimized circuit.
• Path Sensitization in Hazard-free Circuits, TAU 95. Pages are in reverse order with last page first. In hazard-free circuits many paths can be eliminated from timing analysis.
• R. Puri and J. Gu, Area Efficient Synthesis of Asynchronous Circuits, Proceedings of IEEE International Conference on Computer Design (ICCD), October 1994.
• A Modular Partitioning Approach for Asynchronous Circuit Synthesis, R. Puri and J. Gu, IEEE Transactions on CAD, August 1995, pages 961-973. How to partition state transition graphs into partitions small enough for existing asynchronous synthesis algorithms.
• R. Puri and J. Gu, A Modular Partitioning Approach for Asynchronous Circuit Synthesis, Proceedings of 31st ACM/IEEE Design Automation Conference (DAC), June 1994, pages 63-69.
• R. Puri and J. Gu, A Divide-and-Conquer Approach for Asynchronous Interface Synthesis, Proceedings of 7th ACM/IEEE International High-Level Synthesis Symposium, May 1994,
pages 118-125.
• R. Puri and J. Gu, Interconnecting Asynchronous Control Modules, Proceedings of Canadian Conference on VLSI, November 1993, pages 3B7-3B12.
• R. Puri and J. Gu, An Efficient State Minimization Algorithm for Finite State Machines, Proceedings of ACM/IEEE International Workshop on Logic Synthesis (IWLS), May 1993,
pages p5c.1-p5c.10.
• R. Puri and J. Gu, Signal Transition Graph Constraints for Speed-independent Circuit Synthesis, Proceedings of IEEE International Symposium on Circuits and Systems, May 1993,
pages 1686-1689.
• R. Puri, Design of Asynchronous VLSI Circuits, Invited Chapter in John Wiley & Sons Encyclopedia of Electrical and Electronics Engineering, Volume 1, 1999.
• An Efficient Algorithm to Search for Minimal Closed Covers in Sequential Machines IEEE Transactions on CAD, June 1993, pages 737-745. Used for state minimization.
• R. Puri and J. Gu, Microword Length Minimization in Microprogrammed Controller Synthesis, IEEE Transactions on CAD (TCAD), Volume 12, Number 10, October 1993, pages 1449-1457.
• J. Gu and R. Puri, Asynchronous Circuit Synthesis with Boolean Satisfiability, IEEE Transactions on CAD (TCAD), Volume 14, Number 9, September 1995, pages 961-973.
• R. Puri and J. Gu, Persistency and Complete State Coding Constraints in Signal Transition Graphs, International Journal of Electronics, Volume 75, Number 5, November 1993,
pages 933-940.
• R. Puri and J. Gu, An Efficient Algorithm for Microword Length Minimization, Proceedings of 29th ACM/IEEE Design Automation Conference (DAC), June 1992, pages 651-656.
• R. Puri and J. Gu, Searching For a Minimal Finite State Automaton, Proceedings of Third IEEE International Conference on Tools with Artificial Intelligence, November 1991,
pages 416-423.
• R. Puri and M. M. Hasan, PLASMA : An FSM Design Kernel, Proceedings of Third IEEE International ASIC Conference, September 1990, pages 162.1-162.4.
• D.Y.Montuno, R. Puri, B. Stacey, Multi-Disciplinary Analysis Using Constraint Logic Programming with Relational Interval Arithmetic, ACM International Logic Programming
Symposium, October 1997.
• A BDD SAT Solver for Satisfiability: A Case Study, R. Puri and J. Gu, Annals of Mathematics, 1996, Number 9, pages 1-23. Boolean satisfiability solver especially for
asynchronous synthesis problems.
Dynamic Logic Synthesis
• R. Puri, A. Bjorksten, and T. E. Rosser, Logic Optimization by Output Phase Assignment in Dynamic Logic Synthesis ACM/IEEE International Conference CAD (ICCAD), 1996. Output
phase assignment for minimum area duplication in dynamic logic synthesis to obtain logic with inverters at primary inputs/outputs only.
• R. Puri, Design Issues in Mixed Static-Domino Circuit Implementations, IEEE Intl. Conf. on Computer Design (ICCD), 1998.
• R. Puri and K. L. Shepard, Timing Issues in Static-Dynamic Synthesis, ACM Workshop on Timing issues in spec. and synthesis of digital systems (TAU), 1997.
SOI Design And Analysis
• R. Puri and C. T. Chuang, Hysteresis Effect in Pass-Transistor based Partially Depleted SOI CMOS Circuits, IEEE Journal of Solid State Circuits (JSSC), April 2000, pages
• R. Puri, C. T. Chuang, M. B. Ketchen, M. M. Pelella, and M. G. Rosenfield, Floating Body Effects in Low-Temperature Partially Depleted SOI CMOS Circuits, IEEE Journal of Solid
State Circuits (JSSC), Februray 2001.
• C. T. Chuang, R. Puri, J. B. Kuang, and R. Joshi, High-Performance SOI Digital Design: from Devices to Circuits, Invited Short Course in IEEE VLSI Circuits Symposium, 2001.
• C. T. Chuang, R. Puri, K. Bernstein, Effect of Gate-to-Body Tunneling current on PD/SOI CMOS Circuits, International Conference on Solid-State Devices and Materials (SSDM)
• K. A. Jenkins, R. Puri, C. T. Chuang, and F.L.Pasavento, Measurement of History Effect in PD/SOI Single Ended CPL Circuits, IEEE Intl Conference on SOI, 2001.
• C. T. Chuang, R. Puri, and R. Joshi, SOI Circuit Design for High-Performance CMOS Microprocessors, International Solid-State Circuits Conference (ISSCC), 2001, Invited.
• R. Puri, C. T. Chuang, M. B. Ketchen, M. M. Pelella, and M. G. Rosenfield, On the Temperature Dependence of Hysteresis Effect in Floating-Body Partially Depleted SOI CMOS
Circuits, International Conference on Solid-State Devices and Materials (SSDM) 2000.
• M. M. Pelella, J. G. Fossum, C. T. Chuang, O. A. Torreiter, H. Schettler, R. Puri, M. B. Ketchen, and M. G. Rosenfield, Low-Temperature DC Bipolar Effect in PD/SOI MOSFETs
with Floating Bodies, IEEE Intl. SOI Conference 2000.
• R. Puri and C. T. Chuang, SOI Digital Circuits: Design Issues, IEEE Intl. Conference on VLSI Design, Invited tutorial, 2000.
• C. T. Chuang and R. Puri, Design Perspective for SOI CMOS Microprocessors, Year 2000 International Symposium on Key Technologies for Future VLSI Systems (Invited talk), Tokyo.
• R. Puri and C. T. Chuang, Hysteresis Effect in Floating Body Partially Depleted SOI CMOS Domino Circuits, ACM/IEEE Intl. Symposium on Low Power Electronics Design (ISLPED),
• C. T. Chuang and R. Puri, SOI Digital CMOS VLSI - A Design Perspective, ACM/IEEE Design Automation Conference (DAC), 1999 (Invited talk in a special session on Technology).
• C. T. Chuang and R. Puri, Digital CMOS VLSI Design in SOI, Invited talk in IEEE Intl. Symposium on VLSI Technology, Systems, and Applications, 1999.
• R. Puri and C. T. Chuang, Hysteresis Effect in Pass-Transistor based Partially Depleted SOI CMOS Circuits, IEEE Intl. SOI Conference, 1998.
FPGA
• Integrated Decomposition and Covering with Area vs. Running Time Trade-off in FPGA Technology Mapping (submitted to DAC 1999). Uses a fast covering algorithm to guide decomposition and performs BDD variable ordering targeting LUT minimization.
• A. H. Farrahi, M. Sarrafzadeh, "An FPGA Technology Mapper With Fast and Accurate Prediction" IBM Research Report, 1997.
• A. H. Farrahi, M. Sarrafzadeh, TDD: A Technology Dependent Decomposition Algorithm for LUT-Based FPGAs, Proc. of the IEEE Int'l ASIC Conference, pp. 206-209, Sept. 1997,
Portland, OR. Uses a fast covering algorithm to guide decomposition phase rather than using simple cost functions.
• A. H. Farrahi, M. Sarrafzadeh, "Complexity of the Lookup-Table Minimization Problem for FPGA Technology Mapping", IEEE Trans. on Computer-Aided Design of Integrated Circuits
and Systems, Vol. 13(11) pp. 1319--1332, Nov. 1994.
• A. H. Farrahi and M. Sarrafzadeh, "FPGA Technology Mapping for Power Minimization", Proc. of Intl. Workshop on Field Programmable Logic and Applications, pp. 66-77, September
1994, Prague.
• A. H. Farrahi, M. Sarrafzadeh, "On Lookup-Table Minimization for FPGA Technology Mapping", In Int'l Workshop on Field Programmable Logic Arrays, pp. Feb. 1994, Berkeley, CA.
Low Power
• Inaccuracies in Gate-Level Power Estimation (RC 20520, August 1996) Experiments on the error in power estimation due to logic synthesis, PD, unknown inputs, glitches, and
ignoring some electrical effects.
• Geo_Part: A System Partitioning Algorithm to Maximize Sleep Time (Submitted to IEEE Trans. on Computers) Partitioning a system based on the activity patterns on its elements,
in order to maximize total sleep time.
• A. H. Farrahi, D. T. Lee, M. Sarrafzadeh, Two-Way and Multiway Partitioning of a Set of Intervals for Clique-Width Maximization, Algorithmica, Vol. 23, Issue 3, pp. 187-210,
1999. A hardware system is partitioned so as to minimize power. The approach is based on analyzing intervals of activity and inactivity for various elements of the system.
• On the Power of Logic Resynthesis, W. L. Lin, A. H. Farrahi, M. Sarrafzadeh, SIAM J. on Computing, Vol. 29, No. 4, pp. 1257 - 1289.
• Power-Aware Microarchitecture: Design and Modeling Challenges for Next-Generation Microprocessors. David M. Brooks, Pradip Bose, Stanley E. Schuster, Hans Jacobson, Prabhakar N. Kudva, Alper Buyuktosunoglu, John-David Wellman, Victor Zyuban, Manish Gupta and Peter W. Cook. IEEE Micro, Vol. 20, No. 6, Nov/Dec 2000.
• Mixed Multi-Threshold-Voltage DCVS Circuit Styles and Strategies for Low Power Design. W. Chen et al., to appear, ISLPED 2001.
• A. H. Farrahi, C. Chen, M. Sarrafzadeh, G. Tellez, "Activity-Driven Clock Design", To appear in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems.
• A. H. Farrahi, G. E. Tellez, M. Sarrafzadeh, Exploiting Sleep Mode for Memory Partitioning and Other Applications, VLSI Design Journal, Vol 7, No 3, pp.271-287, 1998.
Formulates the problem of partitioning a circuit based on the activity patterns of its elements for power optimization. Shows that the problem is NP-complete, and discusses a
couple of variations.
• G. E. Tellez, Amir Farrahi, and M. Sarrafzadeh, "Activity-Driven Clock Design for Low Power Circuits", Proc. IEEE Int'l Conf. on Computer-Aided Design, pp. 62-65, Nov. 1995,
San Jose, CA.
• A. Farrahi, M. Sarrafzadeh, "Geo_Part: A System Partitioning Algorithm to Maximize Sleep Time", Submitted to IEEE Trans. on Computers.
• A. H. Farrahi and M. Sarrafzadeh, "System Partitioning to Maximize Sleep Time", Proc. IEEE/ACM Int'l Conf. on Computer-Aided Design, pp. 452-455, Nov. 1995, San Jose, CA.
• A. H. Farrahi, G. E. Tellez, and M. Sarrafzadeh, "Memory Segmentation to Exploit Sleep Mode Operation", Proc. of ACM/IEEE Design Automation Conference, June 1995, pp. 36-41.
San Francisco, CA.
Interconnect and Buffer Planning
Center Valley Algebra Tutor
Find a Center Valley Algebra Tutor
...I am patient and understand that all students learn differently, and do my best to accommodate different learning styles. I have received a bachelor's degree in mechanical engineering from the
University of Delaware in 2013. While attending the University of Delaware for mechanical engineering, I...
9 Subjects: including algebra 1, algebra 2, calculus, physics
...In addition to teaching, I coached both science Olympiad and academic teams. Before teaching high school I taught physics, astronomy, and geology at the college level and during graduate school
I tutored college students in all of these subjects. For the past ten years, I have spent my summers teaching at summer camps for gifted students.
7 Subjects: including algebra 2, physical science, geology, algebra 1
...I believe that everyone has the potential for growth through a combination of hard work and enriching experiences. As a tutor, I understand that certain subjects can be difficult. With my
teaching style, I try to create analogies, and give real-world examples, all in an effort to make learning as enjoyable as possible.I hold a Bachelor of Science in English Education.
17 Subjects: including algebra 2, algebra 1, reading, English
...My one-on-one method is to show the student that Trig is quite understandable and not as overwhelming as they might believe. I am a certified PA math teacher and have taught all levels of Math,
including Algebra I, Geometry and Algebra II. I have also taught 6 week classes in SAT math and know what subjects are questioned the most.
12 Subjects: including algebra 2, algebra 1, calculus, geometry
...I have also worked with a variety of students preparing for the GED on all sections. I am a certified English teacher with experience teaching at both the middle school and high school level.
In addition, I am a strong test taker myself - especially in reading, as I received a perfect score on the SAT II for Literature.
25 Subjects: including algebra 1, reading, English, grammar
MathJax Test Examples
MathJax Test Page

The Lorenz Equations:
\[ \dot{x} = \sigma(y - x), \qquad \dot{y} = \rho x - y - xz, \qquad \dot{z} = -\beta z + xy \]
The Cauchy–Schwarz Inequality:
\[ \left( \sum_{k=1}^{n} a_k b_k \right)^{2} \le \left( \sum_{k=1}^{n} a_k^2 \right) \left( \sum_{k=1}^{n} b_k^2 \right) \]
A Cross Product Formula:
\[ \mathbf{V}_1 \times \mathbf{V}_2 = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ \frac{\partial X}{\partial u} & \frac{\partial Y}{\partial u} & 0 \\ \frac{\partial X}{\partial v} & \frac{\partial Y}{\partial v} & 0 \end{vmatrix} \]
The probability of getting $k$ heads when flipping $n$ coins is:
\[ P(E) = \binom{n}{k} p^k (1 - p)^{n - k} \]
An Identity of Ramanujan:
\[ \frac{1}{\bigl(\sqrt{\phi\sqrt{5}} - \phi\bigr)\, e^{2\pi/5}} = 1 + \frac{e^{-2\pi}}{1 + \frac{e^{-4\pi}}{1 + \frac{e^{-6\pi}}{1 + \frac{e^{-8\pi}}{1 + \cdots}}}} \]
A Rogers–Ramanujan Identity:
\[ 1 + \frac{q^2}{1 - q} + \frac{q^6}{(1 - q)(1 - q^2)} + \cdots = \prod_{j=0}^{\infty} \frac{1}{(1 - q^{5j+2})(1 - q^{5j+3})}, \quad \text{for } |q| < 1 \]
Maxwell's Equations:
\[ \nabla \times \mathbf{B} - \frac{1}{c} \frac{\partial \mathbf{E}}{\partial t} = \frac{4\pi}{c} \mathbf{j}, \qquad \nabla \cdot \mathbf{E} = 4\pi\rho, \qquad \nabla \times \mathbf{E} + \frac{1}{c} \frac{\partial \mathbf{B}}{\partial t} = \mathbf{0}, \qquad \nabla \cdot \mathbf{B} = 0 \]
Finally, while display equations look good for a page of samples, the ability to mix math and text in a paragraph is also important. This expression $\sqrt{3x - 1} + (1 + x)^2$ is an example of an inline equation. As you see, MathJax equations can be used this way as well, without unduly disturbing the spacing between lines.
Patent US7590582 - Method for analyzing investments using overlapping periods
The present invention relates generally to financial investment analysis, and, more specifically, to processes for selecting financial investments based on a comparative analysis of performance and diversification.
The principal selection criteria for investments that will constitute an investment portfolio are performance and diversification.
Although there is no guarantee that past performance patterns will be repeated in the future, it is considered desirable to avoid investments the historical performance of which has failed to meet
some minimum criteria or has been unstable or inconsistent.
In any market conditions we can expect that some investments will perform well and others will perform badly. The concept of risk diversification is to construct a multi-investment portfolio so that
under all market conditions some combination of good performers will always offset the under-performers and the portfolio consistently achieves its objectives.
Quantitative performance data tends to begin by showing average return based on different variations of the underlying data, e.g., total return, load-adjusted return or tax-adjusted return. The data
may also include other standard performance measures such as volatility, semi-variance, drawdown, Sharpe ratio or Sortino ratio together with proprietary measures specific to the particular database
provider. Perhaps the most widely recognized examples of the latter would be the Star Rating for mutual funds published by Chicago-based Morningstar, Inc. or the Timeliness Ranking for stocks
published by New York-based Value Line.
In the case of mutual funds and other collective investment programmes, a second set of performance data is based on the performance attribution and style analysis approach favoured by institutional
investors. The goal of performance attribution and style analysis is to divide a fund manager's returns into two parts—style and skill. Style is the part of the returns that is attributable to market
movements and is dominated by the asset class mix in a portfolio. Skill is the part unique to the manager and is usually associated with individual security selection decisions within each asset class.
This analysis is usually accomplished through the construction of regression-based models, an approach that has evolved from the pioneering work of William Sharpe who first developed the Capital
Asset Pricing Model. The models try to measure the systematic, causal relationship between the price performance of a fund and the movement in one or more market indexes. The measure of a fund's
systematic relationship with a market index is called its ‘Beta’ while that portion of a fund's return that has no systematic relationship to the specified market indexes is called its ‘Alpha’.
Although theoretically this is not correct, Alpha is often interpreted as representing the skill of the manager and used to rank manager performance.
Most analytical software today calculates the average performance of an investment over a specific term, e.g., the most recent 1, 3, or 5 years, selected calendar years, or since inception. In
addition, many analytical tools compare investments by showing how much $10,000 would have grown over a specific term. Most consumers believe that these simple averages and growth graphs reflect the
results that would have been achieved for any shorter sub-period, or holding period, within the specified term. However, analysis by the inventor shows that this is often not the case, and the
discrepancy can be very large. Thus, there is a need for a better measurement that can capture not only performance during a single term but also the consistency of performance for all holding
periods within that term.
The quantitative criteria commonly used to compare performance are measured in many different units and the range of values can vary greatly. For example, return and volatility are both measured in
percentages, but returns can be positive or negative whereas volatility can only be non-negative. In contrast, Sharpe ratio and correlation are both measured in integers, but Sharpe ratio is
unbounded, whereas correlation must always take a value between −1 and 1. Typically, many software applications for analyzing investments provide multiple fields with different performance
measurements for comparison among investments, but offer no methodology or technical capability to combine multiple criteria into a single composite result. Where ranking capability on single
criteria is provided, the most common form of ranking is percentiles or simple ordinal rank. The limitation of this measurement is that it provides no information about the scale of difference in the
relative performance of the ranked investments. Thus, there is a need for better investment analysis tools including a single score that allows easy comparison of investment performance.
Mutual funds are most commonly grouped by applying a pre-defined classification system to their underlying holdings. The classification systems are usually based on a combination of geography (US,
Europe, Latin America, Pacific/Asia, Japan), sector (Communications, Financial, Health etc.) and style (large-cap, mid-cap, small-cap, value, growth, balanced) for equities and duration (long term,
intermediate, short-term) or tax status (taxable, non-taxable) for bonds. Thus Morningstar Inc., mentioned above, defines four main groupings that are further subdivided into 48 categories. The
Investment Funds Standards Committee of Canada defines five main grouping that are sub-divided into 33 categories.
Under the style analysis approach, the simplest form of regression model identifies the single index with which the fund's performance is most closely related (this is sometimes referred to as the
‘best-fit index’) and funds can be grouped based on this criterion.
The investment strategies pursued by most mutual funds and ‘traditional’ institutional investment management programmes are usually subject to restrictions on shorting securities or applying leverage
and the investment manager is often constrained to buying and holding assets in a few well-defined asset classes. These buy-and-hold strategies lend themselves to the two principal grouping methods
described above.
In recent years however there has been an explosion of investment in hedge funds that employ considerably more sophisticated and dynamic trading strategies in pursuit of absolute returns with no
systematic relationship to the general market. These funds may employ a very wide range of techniques (including shorting and leverage), may trade in all markets (defined by asset type as well as
geography) and use a diverse range of trading instruments (including futures, swaps, options and other financial derivative contracts).
Because time series of performance data is very limited, because these funds generally do not disclose detailed position information, and because of the dynamic nature and complexity of their trading
strategies, traditional holdings-based or style analysis methods can not be extended to these funds.
(Extensive efforts are being made to apply style analysis methods to the performance of hedge funds but these efforts face many technical problems in the construction of appropriate indexes and as
yet there are no generally accepted standards.)
A third grouping method has therefore been developed for this class of funds, based primarily on a description of the manager's strategy rather than the characteristics of the fund's holdings.
Examples of such descriptors are as follows: Long/Short Equity Hedge; Short-Only; Event Driven; Distressed Situations; Merger Arbitrage; Convertible Arbitrage; Fixed Income Arbitrage; Capital
Structure Arbitrage; Credit Arbitrage; Mortgage-Backed Securities; Market Neutral; Relative Value; Global Macro; Emerging Markets; and Currency.
Many of these descriptors do not have standard definitions and many funds employ multiple strategies in multiple markets, making it difficult to assign them to a single category. Therefore, although
a strategy-labeling approach is widely used the resulting classification systems have not yet coalesced into a generally accepted common format.
This invention consists of methods that constitute a unique process for the analysis of financial investments based on a comparative analysis of performance and diversification.
In this context, “investments” includes any financial asset or group of financial assets in respect of which it is possible to trade based on generally accepted and regularly available periodic
valuations and non-tradable indices and benchmarks. Such investments may include, but are not limited to, individual securities (such as stocks or bonds), collective investment vehicles (such as
mutual funds, closed-end funds, hedge funds, or commodities funds), specialist financial contracts (variable annuities or financial derivative contracts), real estate, or any combination thereof.
However, in order to simplify the material we will focus our discussion and examples primarily on mutual funds and funds pursuing absolute return strategies (which we shall refer to generically as
“hedge funds”), although analogous issues arise with other investments.
This invention is unique in a number of respects, namely that:
The apparatus and methods permit the manipulation of extremely large data sets in a manner that is simple to understand and convenient to use.
This invention permits historical performance data for investments to be analyzed in respect of every possible investment period using any pre-existing or personally defined quantitative performance
measurement algorithm. (This process is hereinafter referred to as “Multi-Period Analysis”).
The user can apply his or her personal weightings to the various performance measurements based on a combination of attribute and time period to construct a customized utility function, based on
which a comparative ranking of the Instruments can be created. (This process is hereinafter referred to as “Scoring”); and
This invention permits the complete universe of investments to be segmented into peer groups based on one of a number of similarity/dissimilarity criteria from which the User may choose. (This
process is hereinafter referred to as “Grouping”).
The invention is basically a method for analyzing the performance of a plurality of investments. The method includes: using a data source from which can be derived the percentage increase or decrease
in the value of each investment during each of consecutive reporting periods within a given time frame; calculating values of an investment performance measurement for a plurality of overlapping
holding periods within the time frame, respectively; and using the resulting values to judge the desirability of each investment.
The investments are each a tradable asset or a portfolio of tradable assets or a non-tradable index or benchmark.
Each reporting period is of the same standard length of time.
The investment performance measurement includes any quantitative measurement of the absolute performance of a single investment or any quantitative measurement of its performance relative to that of
another investment.
Each holding period is a period of time spanned by any combination of consecutive, contiguous reporting periods, such that the length of a holding period is a multiple of the standard length of the
reporting period.
In another aspect, the method includes, for each investment, calculating a weighted average of the values of the investment performance measurement and comparing the respective weighted averages of
the investments.
The weighting factor to be applied to the value in respect of each holding period may be selected by a user, but, in the absence of such determination, by default shall be based on the length of the
holding period associated with each performance measurement value.
In another aspect, the method includes: calculating a weighted average of the correlation between each pair of investments for a plurality of holding periods; performing a mathematical conversion on
the weighted average of correlation values such that these values are mapped into a range of positive values in which a higher positive value reflects a greater degree of negative correlation between
the investments; and using such converted or mapped values to partition the investments into groups such that the investments in each group are more highly correlated with each other than with those
in any other group.
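The grouping step above can be sketched in code. This is an illustrative reading only: the helper names (`pearson`, `correlation_dissimilarity`, `group_by_dissimilarity`) are ours, the mapping d = (1 − r) / 2 is just one conversion that sends perfect positive correlation to 0 and perfect negative correlation to 1, and the greedy partitioning pass stands in for whatever clustering method an implementation would actually use.

```python
from math import sqrt

def pearson(xs, ys):
    """Sample correlation between two equal-length return series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def correlation_dissimilarity(xs, ys):
    """Map correlation into positive values: r = +1 -> 0.0, r = -1 -> 1.0,
    so a higher value reflects a greater degree of negative correlation."""
    return (1.0 - pearson(xs, ys)) / 2.0

def group_by_dissimilarity(series, threshold=0.25):
    """Greedy partitioning stand-in for a real clustering step: place each
    investment in the first group whose members are all within `threshold`
    dissimilarity of it; otherwise start a new group."""
    groups = []
    for name, r in series.items():
        for g in groups:
            if all(correlation_dissimilarity(r, series[m]) <= threshold for m in g):
                g.append(name)
                break
        else:
            groups.append([name])
    return groups
```

In practice, the correlation fed into the mapping would be a weighted average of correlations over many holding periods, as the text describes; the sketch takes a single return series per investment for brevity.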
In another aspect, the method includes calculating the percentage of all designated holding periods in which the performance measurement for an investment was more desirable than a fixed reference
value or that of another investment.
In another aspect, the invention includes calculating values of a plurality of performance measurements for the plurality of holding periods for each investment; calculating a weighted average of the
values of the performance measurements; calculating in respect of each weighted average its standardized value, which is the number of standard deviations such weighted average lies above or below
the mean of all weighted averages, for each performance measurement for the investments; for each investment, calculating a weighted average of the standardized values for each performance
measurement; and performing a mathematical conversion on the resulting weighted averages such that the highest resulting weighted average is mapped to one-hundred percent, the lowest is mapped to
zero percent and all other values are mapped within this range accordingly.
The weighting factor to be applied to each standardized value may be selected by the user but, in the absence of such determination, by default shall equal a fraction, the numerator of which equals
one and the denominator of which equals the number of performance measurements being averaged.
In another aspect, in respect of any performance measurement value where a lower value is more desirable, the method includes multiplying the corresponding stored standardized value by a factor of
negative one prior to calculating a weighted average of the standardized values.
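The scoring aspect can be sketched as follows, under stated assumptions: each investment already has a weighted-average value per performance measurement, all names are illustrative rather than the patent's, and measurements where a lower value is more desirable have been negated by the caller beforehand (mirroring the multiply-by-negative-one step above).

```python
from math import sqrt

def score_investments(measurements, weights=None):
    """Scoring sketch: standardize each measurement across investments
    (z-score), weight-average the z-scores per investment, then map the
    results so the best investment scores 100% and the worst 0%.

    `measurements` maps measurement name -> {investment: weighted-average value}.
    """
    names = list(next(iter(measurements.values())))
    if weights is None:
        # Default weight: 1 / (number of measurements being averaged).
        weights = {m: 1.0 / len(measurements) for m in measurements}

    combined = {n: 0.0 for n in names}
    for m, values in measurements.items():
        mean = sum(values.values()) / len(values)
        sd = sqrt(sum((v - mean) ** 2 for v in values.values()) / len(values))
        for n in names:
            z = (values[n] - mean) / sd if sd else 0.0
            combined[n] += weights[m] * z

    lo, hi = min(combined.values()), max(combined.values())
    span = (hi - lo) or 1.0
    return {n: 100.0 * (combined[n] - lo) / span for n in names}
```

With a single measurement and equally spaced values, the mapping is linear: the best investment lands at 100%, the worst at 0%, and the middle one at 50%.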
In another aspect, the method includes storing the values of the performance measurement for each of the investments in a database prior to using the values to judge the desirability of each
In another aspect, the method includes storing the weighted averages for each of the investments in a database prior to using the values to judge the desirability of each investment.
In another aspect, the method includes: calculating values of a plurality of performance measurements for the plurality of holding periods for each investment; for each investment, calculating the
percentage of all holding periods in which the performance measurement for an investment was more desirable than a fixed reference value or that of another investment; calculating a normalized value
for each percentage outperformance value, wherein the normalized value is the number of standard deviations such percentage outperformance lies above or below the mean of all outperformance values,
for each of the investments; for each performance measurement, calculating a weighted average of the normalized values for each investment; and performing a mathematical conversion on the resulting
weighted averages such that the highest resulting weighted average is mapped to one-hundred percent, the lowest is mapped to zero percent and all other values are mapped within this range
In another aspect, the method includes making an investment decision based on the results of the analysis.
In another aspect, the method includes calculating a probability of loss value by counting the number of the holding periods for which the return was negative and dividing the total by the number of
the holding periods.
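A minimal sketch of this probability-of-loss calculation, assuming the input is a list of consecutive one-period returns expressed as decimals (e.g. 0.02 for +2%); the function name and the compounding loop are ours.

```python
def probability_of_loss(period_returns):
    """Fraction of all overlapping holding periods with a negative return.

    Each holding-period return is compounded from the consecutive
    one-period returns it spans; a period counts as a loss when the
    compounded growth factor falls below 1.0.
    """
    n = len(period_returns)
    negative = total = 0
    for start in range(n):
        growth = 1.0
        for end in range(start, n):
            growth *= 1.0 + period_returns[end]
            total += 1
            if growth < 1.0:
                negative += 1
    return negative / total
```

For n reporting periods this enumerates all n(n + 1)/2 overlapping holding periods; with returns of +10% then −20%, two of the three holding periods lose money, giving a probability of loss of 2/3.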
In another aspect, the method includes calculating the percentage of holding periods in which the value of a designated performance measurement for one investment is more desirable than a designated
fixed value or than the value of the same performance measurement for another investment.
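The outperformance count can be sketched as a simple comparison over corresponding holding periods. The name is ours, "more desirable" is taken here to mean "higher" (flip the comparison for lower-is-better measurements), and a fixed reference value can be handled by passing a constant series.

```python
def percent_outperformance(values_a, values_b):
    """Percentage of corresponding holding periods in which investment A's
    performance-measurement value exceeds investment B's (or a fixed
    reference, when `values_b` is a constant series)."""
    wins = sum(1 for a, b in zip(values_a, values_b) if a > b)
    return 100.0 * wins / len(values_a)
```

This is the style of calculation behind the example above in which the index outperformed the fund in 5,435 of 7,260 holding periods (75%).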
In another aspect, the performance measurement is a value representing the return of each investment.
FIG. 1 is a table showing the performance of a fund and an index when measured using traditional single periods and using the new multi-period analysis;
FIG. 2 is a table showing the returns on an investment for overlapping holding periods;
FIG. 3 is a diagram of a computer and databases for implementing the method of the invention;
FIG. 4 is a table showing a scoring profile;
FIG. 5 is a table showing various scoring profiles for several groups;
FIG. 6 is a table showing an overall score for three investments for a selected weighting of performance measurements;
FIG. 7 is a table showing an overall score for three investments for a selected weighting of performance measurements;
FIG. 8 is a table showing correlations between a pair of investments;
FIG. 9 is a table showing weighted averages of correlations for multiple pairs of investments;
FIG. 10 is a simplified flow chart of a process of comparing investments;
FIG. 11 is a simplified flow chart showing a scoring process; and
FIG. 12 is a simplified flow chart showing a grouping process.
BEST MODE FOR CARRYING OUT THE INVENTION
Multi-Period Analysis
One of the biggest difficulties facing an investor who seeks to select a number of mutual funds using currently available performance analytics is that performance measurements are provided for only
a limited number of discrete periods.
Typically performance indicators such as return, volatility or the Sharpe ratio are calculated for periods of one, three, five, seven and ten years, measured by calendar year or trailing from a
recent month or quarter-end. In addition the period from inception to the present is often included. Alpha and Beta calculations are similarly based on one or perhaps two specific periods such as
three or five years.
A major difficulty with this approach is that these average numbers can be misleading and can lead to mistaken selection because they fail to adequately reflect the true performance history of a
fund. For example, the table of FIG. 1 represents actual performance data for a US-based fund and compares it to the performance of the S&P500 Index. This table shows that the fund out-performed the
S&P500 Index in each of the one-, three, five, seven- and ten-year periods ending September, 2001, which would seem to recommend it as a good candidate for investment. Using one of the leading mutual
fund databases it was possible to identify 29 US domestic mutual funds that outperformed the S&P500 Index in each of these periods and also for a fifteen-year period. Again this performance would
seem to recommend these funds for investment.
This invention however uses a different approach that provides greater depth and accuracy of analysis than is currently available. This is the Multi-Period Analysis Algorithm. The method of this
invention takes the performance data in whatever frequency is available (in this case monthly) between the dates in respect of which a comparison is required (October 1991 to September 2001 inclusive
in the case of this example) and calculates the annualized return that would have been earned by an investor in every possible sub-period, or holding period, between these dates. Thus there were 120
separate holding periods of one month each, 119 holding periods of two months each, 118 holding periods of three months, etc., down to two holding periods of 119 months each and a single holding period of 120
months. In this example, the total number of holding periods in respect of which the apparatus calculates returns is 7,260. A weighted average of all these results is then calculated. The method
permits the user to select their preferred weighting method. In this example, the weighting used equals the length of the relevant holding period expressed in months. The result for the S&P500 Index,
as shown in the table of FIG. 1 is 17.46% and for the sample fund is 7.62%. Based on these results, an investor might well decide not to invest in the fund.
The holding periods are called overlapping holding periods because, two periods of the same length, for example, the period spanning January, February, and March and the period spanning February,
March and April have two months in common. Thus, the holding periods overlap.
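The enumeration of overlapping holding periods described above can be sketched as follows. The function name and dictionary layout are ours, and the annualization uses discrete compounding for simplicity (the FIG. 2 example works with continuously compounded returns).

```python
def multi_period_annualized_returns(monthly_returns):
    """Annualized return for every overlapping holding period.

    `monthly_returns` are decimal one-month returns. For n months there
    are n + (n - 1) + ... + 1 = n*(n + 1)/2 holding periods (7,260 when
    n = 120, as in the text's example). The result is keyed by
    (end_month_index, length_in_months).
    """
    n = len(monthly_returns)
    table = {}
    for length in range(1, n + 1):
        for end in range(length - 1, n):
            growth = 1.0
            for m in range(end - length + 1, end + 1):
                growth *= 1.0 + monthly_returns[m]
            # Annualize: raise the holding-period growth to 12/length.
            table[(end, length)] = growth ** (12.0 / length) - 1.0
    return table
```

With a constant 1% monthly return, every holding period annualizes to 1.01¹² − 1 ≈ 12.68%, and ten years of data yields exactly 7,260 entries.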
How can we explain the sharply different results of using a few discrete periods and using the comprehensive method incorporated into this invention? During the eighteen-month period ending September
2001 the fund outperformed the index by a spectacular 92.71% per annum. This was sufficient to ensure that the average fund performance, even spread over a ten-year period, exceeded the average for
the index. In fact, however, the index actually outperformed the fund in 75% of all possible investment holding periods (5,435 out of the total of 7,260). The current state-of-the-art methods
accurately reported the average result for a few discrete holding periods but failed to reflect the fact that most of the fund's performance was concentrated within a specific very short space of
time within these longer periods. This invention, by examining all periods and applying a weighting ensures that such short-term aberrations do not dominate the comparative performance analysis.
Of course, in this extreme example, the extraordinary difference in performance for the one-year period would give a strong indication to an investor that further research would be advised. However,
with more than 10,000 funds from which to choose, or even limited to the 29 funds mentioned above, it becomes impractical if not impossible to manually carry out such a detailed comparison or to draw
useful conclusions when the effect is not so extreme. The value of this invention is that the multi-period analysis directly solves this problem.
The same multi-period analysis can be applied to any quantitative measure. In addition, not only can the average of the multi-period results be compared across investments, the results for any two
investments may be compared for every single corresponding holding period. This capability dramatically extends current fund performance analysis systems by providing a deeper, more detailed and
totally comprehensive analysis of a fund's historical performance.
In another example, the table of FIG. 2 shows a multi-period analysis table for an investment for which twelve months of data has been analyzed. FIG. 2 shows that a table of multi-period analysis
results is wedge-shaped. In this example, the column labelled “Return” provides the actual, continuously compounded one-month return for the corresponding month. The values under the heading
“Annualized, Continuously Compounded Return (%)” are the annualized, continuously compounded returns for the corresponding holding periods, which are given along the horizontal axis (top row of the table).
Note that the return in any cell is the return for a period ending at the end of the month indicated in the leftmost column. The length of the holding period associated with any return, as expressed in months in this example, is indicated in the uppermost row. Thus, FIG. 2 shows that the annualized return for a five-month holding period ending in July was 24.54%. The
annualized return for the five-month holding period ending in August was 25.81%. Note that each holding period is a period of time spanned by any combination of consecutive, contiguous reporting
periods, such that the length of a holding period is a multiple of the standard length of the reporting period. In the example of FIG. 2, the standard reporting period length is one month, and each
holding period is a multiple of one month. If investing data were reported at weekly intervals, that is, if the reporting period were one week, the length of each holding period would be a multiple
of one week, for example.
Investments may be analyzed simply by comparing the returns for the many overlapping holding periods. However, a preferred further step is to determine the weighted average of the values in the table
after forming the multi-period return table. The weighting factor to be applied shall be defined by the user; however, in the absence of such a definition, the weighting factor shall be defined as
follows: The numerator of the weighting factor is the length of the particular holding period. The denominator of the weighting factor is the same for all holding periods and equals the sum of a
series of numbers. Each number is the product of the length of each holding period in months and the number of holding periods of that length. Using the table of FIG. 2 as an example, there are
twelve holding periods of one month, eleven two-month holding periods, ten three-month holding periods, and so on. The number of holding periods of each length is given in the second row from the top
in FIG. 2. Thus, the denominator of the weighting factor would be as follows:
(1×12)+(2×11)+(3×10)+(4×9)+ . . . +(9×4)+(10×3)+(11×2)+(12×1)=364
The weighted average multi-period return value for the investment of the table of FIG. 2 is thus 25.1982%.
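The default length-based weighting described above can be sketched as follows, assuming the table is represented as a mapping from (end period index, holding-period length) to the measured value; the function name is illustrative:

```python
def weighted_average_return(table, n_periods):
    """Weighted average of multi-period values using the default weighting:
    each holding period is weighted by its length, and the denominator is
    the sum of (length x number of holding periods of that length).

    `table` maps (end_period_index, length_in_periods) -> value.
    """
    denom = sum(length * (n_periods - length + 1)
                for length in range(1, n_periods + 1))
    total = sum(length * value for (_, length), value in table.items())
    return total / denom
```

For a twelve-month table the denominator is (1×12)+(2×11)+ . . . +(12×1)=364, matching the example above.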
The weighted average provides a single measurement that captures the level, range and ordering of periodic returns over time. This permits period-by-period comparative performance analysis among any
combination of investments, including single securities, portfolios, assets such as real estate, or indices. Multi-period analysis is the only method by which one can answer the question “what would
have been an investment result if one had randomly decided the holding period?” Multi-period analysis can test an investment manager's claim to have beaten the market “over the past n years.”
Further, the multi-period analysis provides insights into whether a specific investment has performed best over shorter or longer holding periods.
A similar table can be constructed with periodic return data for any investment, including an index or benchmark, and a weighted average multi-period return value can be calculated in the same
manner. Thus, the weighted average multi-period return value for the investment of FIG. 2 can be compared to the other investments or indices or benchmarks to determine which is more desirable when
compared on this basis.
Although the example of FIG. 2 uses return as the performance measurement, other return measurements such as volatility or Sharpe Ratio or any other quantitative performance measurement can be used
as well. That is, the information that is input to prepare the table is periodic return data, but the entries in the table need not be return values. The entries in the table can be other values such
as volatility or Sharpe Ratio or any other quantitative performance measurement.
Another way of judging investments based on the multi-period analysis is to calculate a probability of loss value for a given time frame. The probability of loss value is calculated by counting the
number of holding periods within the time frame for which the percentage return is negative. This number is divided by the total number of overlapping holding periods within the time frame. The
result is a probability of loss value that is useful in judging the performance of investments. A higher probability of loss is less favourable as an indicator of performance than a lower probability
of loss.
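The probability-of-loss calculation just described can be sketched as follows; the function name is an illustrative assumption:

```python
def probability_of_loss(holding_period_returns):
    """Probability of loss over a time frame: the count of overlapping
    holding periods with a negative return, divided by the total number
    of overlapping holding periods in the time frame."""
    values = list(holding_period_returns)
    losses = sum(1 for v in values if v < 0)
    return losses / len(values)
```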
A further way of judging investments based on the multi-period analysis is to calculate a percentage outperformance value, which is the percentage of holding periods within a given time frame in which an investment performance measurement for one investment was more desirable than either a designated absolute value or the value of the same performance measurement for another designated investment or index. This is a more general application of the method used in calculating probability of loss.
More specifically, with reference to the table of FIG. 2, suppose one wished to know the percentage of holding periods in which the return on a particular investment was greater than 15%. There are a
total of 78 holding periods in the table of FIG. 2, and in 73 of those periods the return was greater than 15%. Thus, the percentage outperformance using 15% as the criterion is 73/78×100 or 93.6%. A
table like that of FIG. 2 can be constructed for the S&P500 index for the same time period. The number of holding periods in which a particular investment outperformed the S&P500 index could be
determined, and that number divided by 78 would give a percentage outperformance using S&P500 index outperformance as the criterion.
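The percentage outperformance calculation can be sketched as follows; the criterion may be either a single absolute value (such as 15%) or the per-holding-period values of another investment or index. Names are illustrative:

```python
def percentage_outperformance(values, benchmark):
    """Percentage of holding periods in which an investment's performance
    measurement beat a criterion. `benchmark` may be a single absolute
    threshold (e.g. 15.0 for 15%) or a parallel sequence of values for
    another investment or index over the same holding periods."""
    if isinstance(benchmark, (int, float)):
        wins = sum(1 for v in values if v > benchmark)
    else:
        wins = sum(1 for v, b in zip(values, benchmark) if v > b)
    return wins / len(values) * 100
```

This reproduces the example above: with 73 of 78 holding-period returns above 15%, the result is 73/78×100, or about 93.6%.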
A further way of judging investments on the multi-period analysis is to calculate a weighted average volatility value. The weighted average volatility value is the weighted average of numbers that
represent the volatility of an investment for every overlapping holding period within the given time frame. The result is useful for judging the volatility of investments.
When the method is incorporated in software, which is the preferred way of implementing the method, the software indicates to the user the common term for which data is available for all the
investments the user wishes to compare. The user would then specify the desired start date, end date and minimum and maximum holding periods to be used.
Preferably, the multi-period analysis is performed by a computer using software that incorporates the method of the invention. Further, when the method is performed by a computer, the periodic
investment data may be taken from a public database, such as TASS (a hedge fund performance database) or databases provided by the Center for Research in Security Prices (CRSP), or from private databases.
Tables like that of FIG. 2 can be pre-calculated for every investment within a source database before any investment analysis is done. This will normally make analysis faster when investments are
selected for comparison or ranking.
One optional feature of the invention that speeds up analysis is that tables such as that of FIG. 2 are pre-calculated, as mentioned above, prior to any investment analysis. Many tables are
pre-calculated and stored in a warehouse database, which is normally a local database but can also be accessed over a local area network or over the internet. For example, each pre-calculated table
represents one investment and one performance measurement and records the performance measurement for every possible holding period between the earliest and the latest dates in respect of which
return data is provided. Tables may be prepared for any quantitative measurement such as return, volatility, or Sharpe ratio and for correlations between pairs of investments. Thus, when there is a
need to compare two investments or to rank many investments, the software need not calculate the performance measurement for each holding period. The software program needs only to refer to the
appropriate cells of the appropriate table in the database.
FIG. 3 shows a computer 10, which includes a display 12 and a user input device 14. The computer 10 is connected to an investment history database (or databases) 16. The investment history database
(or databases) 16 can be stored locally, can be on portable media, such as CD ROM, or accessed over a local area network or over the Internet. The computer 10 uses investment history data from the
investment history database (or databases) 16 to populate a warehouse database 20, which includes pre-calculated tables. The pre-calculated tables contain, for example, among other things,
multi-period return data for a universe of investments. The warehouse database may be stored locally or it may be accessed over a local area network or over the Internet. The computer 10 runs
software that performs the multi-period analysis described above on the warehouse database. The user-interface of the computer 10, which is programmed to perform the method of this invention,
indicates a common term for which investment data is available for all investments of interest. The user may specify the desired start date, end date, and minimum and maximum holding periods. The
user also may select the performance measurement to be calculated from choices such as return, volatility, probability of loss, and Sharpe ratio. In addition, the interface permits many other
parameters to be set by the user. The program provides numerical and graphical analysis of the results on the display.
Alternatively, the computer need not use pre-calculated tables and need not employ the warehouse database 20. All calculations can be done as needed from the history database 16.
FIG. 10 is a self-explanatory flow chart showing stages of an exemplary multi-period analysis in which investments are compared based on weighted averages and/or percentage of favourable holding
periods. The steps of such a method depend on the user's goals and may be varied accordingly. Step 30 is the step of calculating a value of an investment performance measurement for each holding
period. Steps 34 and 32 need not both be performed. Depending on the user's goals, the user may perform one or both of steps 32 and 34 or may simply use single values for discrete holding periods from
the values calculated in step 30. Step 36 is a comparison step. For example, if only step 34 is performed and not step 32, then in step 36, only the weighted averages would be compared to determine
relative performance. If neither step 32 nor 34 is performed, and the user instead chooses to use the performance measurement from a discrete holding period, then step 36 is simply a step of
comparing the values calculated in step 30 for the chosen holding period. Step 38 is a step of making an investment decision based on the comparison of step 36. For example, step 38 may include the
purchase of shares in a stock that compared favourably in step 36.
Scoring Process
Most investors will select funds based on a number of criteria and each will have his or her personal view as to the relative importance of each criterion to the final decision. The leading tools
available today provide a vast range of performance measurements and an investor may establish a fund's rank ordering based on any single criterion. However, it appears that, until now, no method has
been available by which an investor can freely combine multiple criteria to create a unified rank ordering that reflects personal priorities. This invention provides just such functionality through its scoring process.
This feature of the method permits the user to specify the dates, between which the historical analysis will be applied, the range of multiple holding periods in respect of which the weighted
performance measurement will be calculated (as described in the preceding section) and the criterion that will be applied to the selection process. Finally, the user specifies the relative importance
of each selection criterion to the final decision. This may be expressed by a number of methods, including serial rank ordering or percentage weighting.
The method then includes producing a ranking for each fund in respect of each selection criterion. The ranking methodology is designed to make it independent of the units in which each criterion is
measured. The method then includes generating a scoring profile that specifies the manner in which the criteria are to be combined in accordance with the relative importance ascribed by the user. The
result of applying the scoring profile is to generate a single index with values between zero percent and one hundred percent and to assign an index value to each fund. The closer the index value is
to one hundred percent the higher the ranking of the fund in terms of the user's personalized scoring process.
A different scoring profile can be defined for every group of investments within the defined universe. Each scoring profile might reflect, for example, the stated primary performance objectives of
the group, e.g., high return or capital preservation.
For example, FIG. 4 shows a scoring profile table listing three performance measurements in the first column. The other columns show the start date, the end date, the minimum holding period, the
maximum holding period and the weight, which are chosen by the user. The weight represents the weight given to the corresponding performance measurement. One such profile can be selected for each of
a plurality of groups of investments, as shown in the table of FIG. 5.
FIG. 5 shows a table listing three groups in the first column. In the second column, a scoring profile for the corresponding group is given. The scoring profile is simply the combination of the
designated performance measurements to be used for scoring and the weight, or subjective importance, as a percentage, given to each performance measurement. The sum of the weights must equal 100%.
The scoring process includes, for each group, calculating the raw value of each performance measurement specified in the scoring profile. Then, the mean and the standard deviation of the raw values
across the group are calculated. In the case of the table of FIG. 4, the raw values are the weighted averages of three different measurements of performance. However, each raw value may be a
percentage outperformance value, which was described above. That is, a percentage outperformance value may be used for each of the performance measurements. Other raw values that indicate performance
may be selected by the user.
Then, for every investment, the scoring process includes counting the number of standard deviations the raw value is above or below the corresponding mean. This is called the standardized value.
Standardized values have the statistical property that, irrespective of the units of measurement or the distribution of the underlying raw values, the corresponding standardized values have a mean of
zero and a standard deviation of one.
For each investment, the user-specified weighting is applied to the standardized value for each measurement and a weighted average is calculated. This result is again standardized.
A score can be assigned in respect of a single criterion or to the weighted average of all criteria as follows. A score of 100% is assigned to the investment with the best standardized value within
the group. A score of 0% is assigned to the investment with the worst standardized value within the group. For all other investments, the assigned score is as follows:
Score = [1 − (BSV − SVIBS) / (BSV − WSV)] × 100
where BSV stands for the best standardized value, SVIBS stands for the standardized value of the investment being scored, and WSV stands for the worst standardized value.
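The scoring process can be sketched end-to-end as follows. For simplicity, this sketch assumes that a higher raw value is more desirable for every measurement (in practice a measurement such as probability of loss would be inverted first); the function name and data layout are illustrative:

```python
from statistics import mean, stdev

def score_group(raw, weights):
    """Standardize each raw measurement across the group, combine the
    standardized values with the user's weights, then map the results
    onto a 0%-100% scale (best = 100%, worst = 0%) using the scoring
    formula above. `raw` maps fund -> list of raw measurement values;
    `weights` is a matching list of weights summing to 1.0."""
    funds = list(raw)
    standardized = {f: [] for f in funds}
    for m in range(len(weights)):
        column = [raw[f][m] for f in funds]
        mu, sd = mean(column), stdev(column)   # group mean and std deviation
        for f in funds:
            standardized[f].append((raw[f][m] - mu) / sd)
    # weighted average of standardized values per fund
    wavg = {f: sum(w * s for w, s in zip(weights, standardized[f]))
            for f in funds}
    best, worst = max(wavg.values()), min(wavg.values())
    return {f: (1 - (best - wavg[f]) / (best - worst)) * 100 for f in funds}
```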
FIG. 6 shows sample results from the scoring process. FIG. 6 is a table for three investments and two performance measurements in which return is given a weight of 70% and probability of loss is
given a weight of 30%. In this table, the three investments, Fund A, Fund B, and Fund C are scored with a single score according to the weighting for the two performance measurements. Values that are
in parentheses are negative.
FIG. 7 is a table like FIG. 6. However, in FIG. 7, the performance measurement of return is weighted at 30% and probability of loss is given a weight of 70%. The change in score that results in the
change of weighting can be seen in the rightmost column of the two tables, which gives the overall score. The scoring process is preferably implemented with a software program and performed by a
computer 10, as described with reference to the multi-period analysis. The weightings and other user-selected variables are entered by a user using the user input device 14, when prompted by the user interface.
FIG. 11 is a self-explanatory flowchart showing an exemplary procedure for scoring. In the procedure of FIG. 11, the scoring is based on weighted averages of a performance measurement or the
percentage of favourable holding periods for a particular performance measurement. The steps of the scoring process will vary according to the user's goals but may be as shown in FIG. 11. Referring
to FIG. 11, step 40 is the step of calculating investment performance measurements for each holding period. Either or both of steps 44 and 42 may be performed or the user may go directly to step 46
by choosing to use values calculated in step 40 for discrete holding periods. Step 44 is a step of calculating weighted averages of the investment performance measurements over the holding periods.
Step 42 involves calculating a percentage outperformance, as described above. If only step 44 were performed and not step 42, for example, then in step 46, only the weighted averages would be used in
calculating the number of standard deviations. If neither step 42 nor step 44 is performed, the user may simply in step 46 calculate the number of standard deviations based on the performance
measurement values for a selected discrete holding period. In step 48, a weighted average of the standard deviation values is calculated for each investment and for each performance measurement. In
step 50, the resulting weighted averages are mapped between zero and one hundred.
Grouping methodologies that use portfolio holdings or strategy labels suffer from a number of weaknesses, including that they are not directly based on actual performance. Regression-based style
analysis of course uses performance data but shares other label-based shortcomings. In addition, many individual investors are less concerned about how closely their fund tracks an index than they
are about achieving a fixed return objective, usually related to their life plan, such as funding children's education or providing for retirement. In addition, none of these state-of-the-art approaches
has yet been applied successfully across both mutual funds and hedge funds.
A common characteristic of current methods is that the number of groups into which the funds may be divided is fixed by the methodology and is not related directly to how many funds the investor
wishes to select. This is a critical issue. For example, let us assume that one of the existing systems produces twenty-six categories. If an investor wants to select exactly twenty-six funds, then
he or she might decide that picking one fund from each category will provide the highest degree of diversification. If however the investor wishes to invest in only twelve funds, ideally he or she
would prefer to be able to reorganize all of the funds into just twelve groups with the highest possible diversification and again pick one from each group. Current methods have no algorithms to
achieve such a regrouping.
This invention directly solves the problem using a performance-based peer grouping process. The preferred performance measurement used in this process is correlation, although the invention supports
the use of other measurements. Correlation is a statistical technique that can show whether and how strongly pairs of variables are related. The sign of the correlation coefficient, which can be
either positive or negative, defines the direction of the relationship. A positive correlation coefficient means that as the value of one variable increases, the value of the other variable
increases; as one decreases the other decreases. A negative correlation coefficient indicates that as one variable increases, the other decreases, and vice-versa. Combining investments that have a
negative correlation to each other is usually expected to produce a more stable return across different market environments.
It is relatively easy to work with correlation for small data sets. Therefore, once the complete universe has been reduced to a relatively small number of funds selected for the final portfolio,
correlation is an important element in all established methods for deciding what percentage of capital should be allocated to each investment, often referred to as “portfolio optimization”.
Because correlation is calculated directly from performance data and represents the systematic relationship among the investments, it is one of the best possible criteria for creating peer groups for
risk diversification. However, there are a number of significant technical challenges in working with correlation data for a very large universe of funds. For example, a universe of 10,000 funds
would create close to 50 million different pair-wise correlation coefficients so the scale of data alone might discourage investigation in this area.
This invention uses advanced partitioning techniques in a multi-stage process that can group any large universe of investments based on their pair-wise correlation or on other user-specified measures
of similarity/dissimilarity based, for example, on the absolute difference in returns between each pair of investments, either in respect of a plurality of single performance reporting periods or in
respect of a plurality of designated holding periods. The user may specify the number of groups into which the universe should be divided and this of course may be determined by the number of
investments to be included in the portfolio. In addition, of course, the correlation coefficients or other measures may be calculated using the Multi-Period Analysis described above, or by any other suitable method.
The grouping process implements a clustering methodology called “Partitioning Around Medoids” as detailed in chapter 2 of L. Kaufman and P. J. Rousseeuw, Finding Groups in Data: An Introduction to
Cluster Analysis, Wiley, New York (1990).
In the preferred embodiment, a universe of investments is partitioned into a user-specified number of groups based on the correlation of historical performance between each pair of investments within
the investment universe or other measurement of similarity/dissimilarity. The user can specify the number of groups. The correlation inputs are calculated using the multi-period analysis described
above. The process can be applied to any combination of investments (stocks, bonds, mutual funds, hedge funds, indices, or benchmarks).
The process also allows the user to partition the investment universe using the data provider's labelling system and to compare the grouping results obtained from applying different grouping methodologies.
First, common start and end dates for all investments within the investment universe are determined. Then, for every pair of investments, the weighted average of the correlations over the overlapping
holding periods within the common term is calculated. The holding periods are defined in the same manner as described above with respect to the multi-period analysis. Using the correlation values,
the investment universe is divided into a specified number of groups such that the investments in each group are more highly correlated with each other than with those in any other group.
To divide the investment universe into the specified number of groups, dissimilarity values are employed. Each dissimilarity value is a single number that measures the degree of similarity or
dissimilarity between two objects in the dataset. The lower the dissimilarity value, the more similar the two objects are; the higher the dissimilarity value, the more dissimilar the two objects are.
For each pair of investments being considered, that is, for each pair in the investment universe, the correlations are determined for designated multiple holding periods. If specific holding periods
are not designated, the default is to begin with all holding periods of at least two reporting periods in length and end with the holding period the length of which equals the length of the common
term. The table of FIG. 8 is an example of a table of such correlations for two investments, investment B and investment E. Because of the way correlation is calculated, the correlations will always
lie between negative one and one. Therefore, the values in the cells of the table of FIG. 8 will always lie between negative one and one. The weighted average of the correlations of the table of FIG.
8 is 0.4849919, as indicated near the top of the table.
A weighted average of these correlations is calculated for each investment pair in the investment universe. The weighting factor can be selected by the user, but the default weighting factor is based
on the length of each holding period, as described with respect to multi-period analysis above. The result can be arranged and displayed in a table like that of FIG. 9, which may be referred to as a
pair correlation table, since it shows the correlations between pairs of investments. The term of the table of FIG. 9 is twelve months. Although most of the correlations in the pair correlation table
of FIG. 9 are represented by the subscripted variable “Corr,” the numerical value of the correlation between investments B and E, which is derived from the correlation table of FIG. 8, has been
written in the pair table in the appropriate cell. In the pair correlation table of FIG. 9, Corr_xy refers to the correlation between investment X and investment Y. The cell for Corr_eb shows the
correlation value calculated as the weighted average of correlation values for all holding periods of 2 months or longer during the 12-month term in respect of investment E and investment B according
to the correlation table of FIG. 8. The table also shows that every investment's correlation with itself is one.
In this context, a correlation value of positive one indicates least dissimilarity, whereas a correlation value of negative one indicates greatest dissimilarity. The preferred process of partitioning
around medoids (the process described by Kaufman and Rousseeuw mentioned above), however, is designed to work with positive values where higher values indicate greater dissimilarity. Therefore, after
taking the weighted average, the correlations are converted to positive values. The positive values are the dissimilarity values mentioned above. In other words, a mathematical conversion is
performed on the correlation values such that negative values are mapped to positive values. In the preferred embodiment, a correlation value of negative one is mapped to two, a correlation value of
one is mapped to zero, and all other correlation values are mapped within a range from zero to two accordingly. This is achieved by subtracting each value from one to generate a corresponding
dissimilarity value; however, other similar conversions that produce positive numbers, such that a higher positive number denotes greater dissimilarity can be used.
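The preferred conversion (subtracting each correlation from one) can be sketched as follows, assuming the pair correlation table is represented as a nested dictionary; the names are illustrative:

```python
def dissimilarity_matrix(corr):
    """Convert a symmetric pair-correlation table (values in [-1, 1]) into
    the dissimilarity values used for partitioning: each correlation c is
    mapped to 1 - c, so +1 -> 0 (least dissimilar) and -1 -> 2 (most
    dissimilar)."""
    return {a: {b: 1.0 - c for b, c in row.items()}
            for a, row in corr.items()}
```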
The resulting dissimilarity values are used by software that incorporates Kaufman and Rousseeuw's partitioning method to group the investments into a specified number of groups such that the
investments in each group are more highly correlated with each other than with those in any other group. Although the “Partitioning Around Medoids” method described by Kaufman and Rousseeuw is
presently preferred, other methods that can partition the groups such that the investments in each group are more highly correlated with each other than with those in any other group may be used.
The resulting groups can be used to improve risk diversification. That is, the groups can be used to construct a portfolio of investments in which an investor may have greater confidence that under
all market conditions, some combinations of good performers will likely offset the under-performers, and the portfolio will consistently achieve its objectives.
FIG. 12 is a self-explanatory flow chart showing an exemplary grouping process. The steps will vary according to the user's goals but may be as shown in FIG. 12. Referring to FIG. 12, step 60 is the
step of calculating the correlation between their returns for each pair of investments for each holding period within a given time frame. In step 62, the weighted average of the multi-period
correlations between each pair of investments is calculated. The correlation values are converted to positive values such that higher positive numbers indicate greater dissimilarity in step 64. In
step 66, the converted values are used to partition the investments into groups such that the investments in each group are more highly correlated with each other than with those in any other group.
Step 68 involves choosing a portfolio of investments in which risk is diversified using the groups. More specifically, one would normally not choose investments that are all in the same group if
diversity is a goal. Ideally, the portfolio would include investments from more than one group for diversification.
The grouping process is preferably implemented with a software program and performed by a computer 10, as described with reference to the multi-period analysis. The number of groups and other
user-selected variables are entered by a user using the user input device 14, when prompted by the user interface.
A software process has been developed for implementing the Kaufman and Rousseeuw method of partitioning. That process is referred to as the PAM (Partitioning Around Medoids) algorithm. The following is
a detailed description of the PAM algorithm:
The PAM Algorithm has two stages, the Build Stage and the Swap Stage.
The purpose of the Build Stage is to identify a first set of Medoids equal to the desired number of groups.
In the Swap Stage all non-Medoid Objects are iteratively tested to see if they are better qualified than the existing selected Medoids. Usually, after each iteration of the process, one Candidate is
selected to replace one existing Medoid. The process stops when no better qualified Candidate exists.
A Candidate is an Object that has not yet been selected as a Medoid.
An Object is one of the members of the dataset being partitioned.
A Test Object is the name given in the Swap stage to each Object in turn against which the swap test is applied.
DValue refers to the dissimilarity value mentioned earlier. It is a single number that measures the degree of similarity or dissimilarity between two Objects. The lower the DValue, the more similar
the two Objects are. The higher the DValue, the more dissimilar the two Objects are.
DValue (Object a×Object b) refers to the DValue for the indicated pair of Objects.
DVector refers to the Dissimilarity Vector for a single Object. It contains the same number of elements as there are Objects in the dataset and shows the respective Object's DValue compared to all
other Objects (including itself).
Dz and DzSky are counters used in the Swap Stage (see below).
HDValue means the highest DValue across all of the DVectors for all of the Objects in the dataset.
A Medoid is the Object in a group that has the greatest similarity to all other Objects in the group. This means that the aggregate DValues for (Medoid×Each other Object in the group) is the lowest
among group members.
RValues mean the values used in the RVector.
RVector refers to the Reference Vector that is used in the Build Stage to find Medoids. This vector has the same number of elements as there are Objects in the dataset, which is also the same number
of elements in any Object's DVector.
Vector A is used in the Swap Stage. For each Object, the Medoid for which DValue (Object×Medoid) is lowest, i.e., the Medoid to which each Object is most similar, is identified. This is called the
Object's Medoid A. Vector A consists of the DValues for each Object with its respective Medoid A.
Vector B is also used in the Swap Stage. For each Object, identify the Medoid for which DValue (Object×Medoid) is second lowest. This is called the Object's Medoid B. Vector B consists of the DValues
for each Object with its respective Medoid B.
Build Stage
The purpose of the Build Stage is to identify the first group of Medoids. The number of Medoids will equal the final number of groups determined by the user.
To find the First Medoid:
Step 1: Create the first RVector. In the case of the first RVector, each element is given the same RValue which is arbitrarily calculated by the formula IRValue=(HDValue*1.1)+1.
Step 2: Select a Candidate and, one-by-one, subtract each DValue in the Candidate's DVector from the corresponding RValue in the RVector and sum all of the results.
Step 3: Repeat Step 2 for all Candidates. The Candidate with the highest accumulated total is selected as the first Medoid.
To find Subsequent Medoids:
Step 4: Calculate a new RVector. To construct the RVector to find subsequent Medoids, base the RVector upon the RVector used to identify the previous Medoid. For each Object in the Dataset, enter the
lower of DValue (Object×Most Recently Identified Medoid) and the corresponding previous RValue within the prior iteration's RVector.
Step 5: Select a Candidate and, one-by-one, subtract each DValue in the Candidate's DVector from the corresponding RValue in the RVector selected in Step 4.
In respect of DValue (Candidate×Object b) the “corresponding” RValue would be the lower of DValue (Object b×Most Recently Identified Medoid) and DValue (Object b×Medoid identified one iteration
before Most Recently Identified Medoid).
Sum only those differences that are positive. (Note: in the case of finding the First Medoid, by construction all the differences are positive so all are summed as stated in Step 2 above)
Step 6: Repeat Step 5 for all Candidates. The Candidate with the highest accumulated total is selected as the next Medoid.
Step 7: Repeat Steps 4-6 until we have identified the same number of Medoids (including the first Medoid) as the desired number of groups.
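The Build Stage steps above can be sketched in Python (an illustrative sketch only, not the patented implementation; the function name, the dissimilarity matrix `D`, and the sample data below are hypothetical):

```python
def build_medoids(D, k):
    """BUILD stage sketch: D[i][j] is the DValue between Objects i and j
    (symmetric, zero on the diagonal); k is the desired number of groups."""
    n = len(D)
    hd = max(max(row) for row in D)          # HDValue: the highest DValue
    R = [hd * 1.1 + 1.0] * n                 # Step 1: IRValue = (HDValue*1.1)+1
    medoids = []
    for _ in range(k):                       # Step 7: repeat until k Medoids
        best_total, best_c = float("-inf"), None
        for c in range(n):                   # Steps 2-3 / 5-6: each Candidate
            if c in medoids:
                continue
            # Sum only the positive RValue minus DValue differences (Step 5).
            total = sum(r - d for r, d in zip(R, D[c]) if r - d > 0)
            if total > best_total:
                best_total, best_c = total, c
        medoids.append(best_c)
        # Step 4: the new RValue is the lower of the old RValue and the
        # DValue to the most recently identified Medoid.
        R = [min(r, d) for r, d in zip(R, D[best_c])]
    return medoids
```

With two well-separated clusters, the two chosen Medoids land one in each cluster.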
Swap Stage
Step 8: Construct Vectors A and B according to the Glossary description.
Step 9: Set DzSky=1
Step 10: Set Dz=0
Step 11: Select the First Candidate, the first Medoid and the first Test Object. Calculate:
D1=DValue(First Medoid×Test Object); and
D2=DValue (Candidate×Test Object)
Step 12: If D1=DValue (Test Object's Medoid A×Test Object), calculate
Min [DValue(Test Object's Medoid B×Test Object), D2]−DValue(Test Object's Medoid A×Test Object)
and accumulate it to Dz.
Otherwise, if D2<DValue (Test Object's Medoid A×Test Object), calculate
D2−DValue (Test Object's Medoid A×Test Object)
and accumulate it to Dz.
Otherwise, do not accumulate.
Step 13: Repeat Steps 11 and 12 for all Test Objects, without changing the First Candidate or the First Medoid.
The grand total is the Dz value for the First Candidate and the First Medoid.
Step 14: If Dz<DzSky, then set DzSky=Dz and make a note of which Candidate and which Medoid it happens for.
Step 15: Repeat Steps 10-14 for the First Candidate and each Medoid, and thereafter for each Candidate and each Medoid combination, until every Candidate, Medoid, Test Object combination has been evaluated.
Step 16: Finally, we know which Candidate and Medoid pairing had the lowest Dz value, so:
If this lowest Dz (the last DzSky value)<0, then replace that Medoid with that Candidate.
Otherwise, don't replace.
Step 17: Repeat Steps 8-16 until no replacement is made (i.e., the last DzSky value is not <0).
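The Swap Stage can be sketched in the same hypothetical Python style (illustrative only; letting the Test Objects range over every Object, including the Medoids themselves, also accounts for the cost of the Medoid being replaced, since a Medoid's DValue to itself is zero):

```python
def swap_stage(D, medoids):
    """SWAP stage sketch (Steps 8-17): keep swapping a Medoid for a
    Candidate while some swap has a negative Dz."""
    n = len(D)
    medoids = list(medoids)
    while True:                                    # Step 17
        DzSky, best = 1.0, None                    # Step 9
        for m in list(medoids):                    # each Medoid ...
            for c in range(n):                     # ... with each Candidate
                if c in medoids:
                    continue
                Dz = 0.0                           # Step 10
                for t in range(n):                 # Steps 11-13: Test Objects
                    dA, dB = sorted(D[t][j] for j in medoids)[:2]
                    if D[t][m] == dA:              # m is t's Medoid A
                        Dz += min(dB, D[t][c]) - dA
                    elif D[t][c] < dA:             # Candidate beats Medoid A
                        Dz += D[t][c] - dA
                if Dz < DzSky:                     # Step 14
                    DzSky, best = Dz, (m, c)
        if DzSky < 0:                              # Step 16: the swap pays off
            m, c = best
            medoids[medoids.index(m)] = c
        else:
            return medoids                         # no replacement was made
```

Starting from a deliberately poor Build result (both Medoids in one cluster), the swap loop moves one Medoid to the other cluster and then stops.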
This is one example of a computerized process for partitioning. As stated earlier, any process that can partition the groups such that the investments in each group are more highly correlated with
each other than with those in any other group may be used
While the above description is of the preferred embodiment of the present invention, it should be appreciated that the invention may be modified, altered, or varied without deviating from the scope
and fair meaning of the following claims. For example, tables of pre-calculated performance returns or other performance measurements are preferably used when analyzing investments; however, the
performance measurements may be calculated only when needed, without the use of pre-calculated tables. In addition, the various multi-period values and the various steps in the processes described
above may be used in different sequences or may be used in part but not in whole depending on the user's specific requirements.
Reference request: discrete harmonic functions and ends of graphs
Let $G$ be an infinite locally finite connected graph with finitely many ends. A real-valued function $f : G \to \mathbb{R}$ is harmonic if
$$f(v) = \frac{1}{d_v} \sum_{v \sim w} f(w)$$
where $v \sim w$ means that $v, w$ are connected by an edge. Playing around with a few examples leads me to suspect that the dimension of the space of harmonic functions on $G$ is the number of ends.
(Heuristic: given a harmonic function, start with a vertex $v$ and move to a neighbor $w$ of $v$ such that $f(w) \ge f(v)$. If $f$ is nonconstant this should give a path converging to an end, and
this should be possible for any end. Moreover a harmonic function should be determined by its "values at the ends.") Does anyone know if this is true and, if so, does anyone know of a reference for
this fact?
(Tags are because a major application is to Cayley graphs of finitely generated groups and I would be interested in seeing how far one can push this method to prove basic facts about ends of such groups.)
graph-theory gr.group-theory combinatorial-group-theor
1 Erm... So how many ends do $\mathbb Z$ and $\mathbb Z^2$ have? (I see two and one, but, maybe, I'm just nearsighted). – fedja Dec 22 '10 at 2:54
4 @fedja : Being nearsighted helps in recognizing quasi-isometric invariants like the number of ends! – Andy Putman Dec 22 '10 at 3:09
1 @fedja: oops. Seems I didn't look at enough examples... – Qiaochu Yuan Dec 22 '10 at 3:36
2 In a paper of Kapovich on Gromov's proof of Stallings' theorem, he proves that a function on the ends of a manifold taking values in {0,1} has a harmonic extension to the manifold
front.math.ucdavis.edu/0707.4231. One can probably also prove this for graphs, possibly with some extra conditions on the graph, such as bounded valence. – Ian Agol Dec 22 '10 at 4:57
1 @Ricky: Are you sure about $\mathbb{Z}^2$? It seems to me that the space of harmonic functions is infinite dimensional. – Kevin Ventullo Dec 22 '10 at 5:30
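This comment is easy to check numerically: restrictions of harmonic polynomials such as $x^2-y^2$ or $x^3-3xy^2$ are discrete harmonic on $\mathbb{Z}^2$, so the space is infinite dimensional even though $\mathbb{Z}^2$ has one end. A short Python sketch verifying the mean-value property on a finite window (the window radius is an arbitrary choice):

```python
def is_harmonic(f, radius=6):
    """Check the discrete mean-value property f(v) = (1/4) * (sum of f over
    the four lattice neighbours) at every point of a finite window of Z^2."""
    for x in range(-radius, radius + 1):
        for y in range(-radius, radius + 1):
            nbr_mean = (f(x + 1, y) + f(x - 1, y)
                        + f(x, y + 1) + f(x, y - 1)) / 4.0
            if abs(f(x, y) - nbr_mean) > 1e-9:
                return False
    return True

checks = (is_harmonic(lambda x, y: x * x - y * y),        # harmonic
          is_harmonic(lambda x, y: x**3 - 3 * x * y**2),  # harmonic
          is_harmonic(lambda x, y: x * x))                # not harmonic
```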
1 Answer
Life is much more complicated than that. In nice situations (for instance, if your graph is $\delta$-hyperbolic), then you can attach a more refined boundary than just the ends and
(if you are lucky) solve the Dirichlet problem. A lot depends on what kinds of regularity conditions you assign to functions on the boundary at infinity.
This is by now a well-established part of geometric group theory. For instance, it plays a key role in Kleiner's recent new proof of Gromov's theorem on groups with polynomial
growth. See here.
One textbook reference that covers some of this information is
Woess, W., Random Walks on Infinite Graphs and Groups, Cambridge Tracts in Math. 138, Cambridge Univ. Press, 2000
EDIT : By the way, since you are in Cambridge, Curt McMullen at Harvard is a good person to talk to about this kind of stuff. I learned most of what I know about the subject from a
course he taught last year.
1 Off topic, but I don't quite understand Andy P's last comment. Is Harvard near Cambridge MA? It certainly isn't near Cambridge, East Anglia... – Yemon Choi Dec 22 '10 at 3:13
2 Whoops, I forgot that Qiaochu is currently visiting the real Cambridge right now <grin>. Until recently, he was at MIT... – Andy Putman Dec 22 '10 at 3:15
3 Harvard is in Cambridge, MA. It is actually on the same street as MIT, about a 30 minute walk away, so I've heard people at both refer to the other as "that school down the
street". – Noah Stein Dec 22 '10 at 3:27
Latitude and Longitude
Location on the Earth
The most common way to locate points on the surface of the Earth is by standard, geographic coordinates called latitude and longitude. These coordinate values are measured in degrees, and
represent angular distances calculated from the center of the Earth.
back to top
What is latitude?
We can imagine the Earth as a sphere, with an axis around which it spins. The ends of the axis are the North and South Poles. The Equator is a line around the earth, an equal distance from both
poles. The Equator is also the latitude line given the value of 0 degrees. This means it is the starting point for measuring latitude. Latitude values indicate the angular distance between the
Equator and points north or south of it on the surface of the Earth.
What is longitude?
Lines of longitude, called meridians, run perpendicular to lines of latitude, and all pass through both poles. Each longitude line is part of a great circle. There is no obvious 0-degree point for
longitude, as there is for latitude. Throughout history many different starting points have been used to measure longitude. By international agreement, the meridian line through Greenwich, England,
is currently given the value of 0 degrees of longitude; this meridian is referred to as the Prime Meridian. Longitude values indicate the angular distance between the Prime Meridian and points
east or west of it on the surface of the Earth.
How precise can we be with latitude and longitude?
Degrees of latitude and longitude can be further subdivided into minutes and seconds: there are 60 minutes (') per degree, and 60 seconds (") per minute. For example, a coordinate might be written
65° 32' 15". Degrees can also be expressed as decimals: 65.5375, degrees and decimal minutes: 65° 32.25', or even degrees, minutes, and decimal seconds: 65° 32' 15.275". All these notations allow
us to locate places on the Earth quite precisely – to within inches.
A degree of latitude is approximately 69 miles, and a minute of latitude is approximately 1.15 miles. A second of latitude is approximately 0.02 miles, or just over 100 feet.
A degree of longitude varies in size. At the equator, it is approximately 69 miles, the same size as a degree of latitude. The size gradually decreases to zero as the meridians converge at the
poles. At a latitude of 45 degrees, a degree of longitude is approximately 49 miles. Because a degree of longitude varies in size, minutes and seconds of longitude also vary, decreasing in size
towards the poles.
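These conversions and size estimates are easy to script. A short Python sketch (the function names are ours, and the 69-mile figure is the approximate equatorial value used above):

```python
import math

def dms_to_decimal(degrees, minutes=0.0, seconds=0.0):
    """Convert degrees, minutes, and seconds to decimal degrees."""
    return degrees + minutes / 60.0 + seconds / 3600.0

def longitude_degree_miles(latitude_deg, equator_miles=69.0):
    """Approximate length of one degree of longitude at a given latitude;
    it shrinks with the cosine of latitude, reaching zero at the poles."""
    return equator_miles * math.cos(math.radians(latitude_deg))
```

For example, 65° 32' 15" converts to 65.5375 decimal degrees, and a degree of longitude at latitude 45 comes out near the 49 miles quoted above.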
Commonly Used Terms
Equator—The line which encircles the Earth at an equal distance from the North and South Poles.
Geographic coordinates—Coordinate values given as latitude and longitude.
Great circle—A circle formed on the surface of a sphere by a plane that passes through the center of the sphere. The Equator, each meridian, and each other full circumference of the Earth forms a
great circle. The arc of a great circle shows the shortest distance between points on the surface of the Earth.
Meridian—An imaginary arc on the Earth's surface from the North Pole to the South Pole that associates all locations running along it with a given longitude. The position of a point on the meridian
is given by its intersecting latitude. Each meridian is perpendicular to all circles of latitude at the intersection points.
Parallel—A circle or approximation of a circle on the surface of the Earth, parallel to the Equator and connecting points of equal latitude.
Prime Meridian—The meridian of longitude 0 degrees, used as the origin for the measurement of longitude. The meridian of Greenwich, England, is the internationally accepted prime meridian in most countries.
Method of Proving Euler's Formula?
How do you determine that e^ix has period [itex]2\pi[/itex] without using Euler's formula?
Euler's formula? Sine and cosine have a period of 2 pi. Therefore, an addition of the two also has a period of 2 pi.
Without Euler's formula, [itex]\displaystyle e^{ix} = \lim_{n \rightarrow \infty}\left(1 + \frac{ix}{n}\right)^n[/itex]. Observing the graph of [itex]f(x) = \left(1+\frac{ix}{n}\right)^n[/itex], it can be seen that
the graph looks more and more sinusoidal as n grows larger. Using a large n, it would be fairly easy to approximate the period of the function's limit as n approaches infinity. Using WolframAlpha (an
essential tool for deriving Euler's formula in the 1700s), we can plot f for a large n (link goes to WolframAlpha). It can be seen that the imaginary part looks like it is approaching the form of the sine function, and the real part looks like cosine. If we look at x ≈ 3.14, the
imaginary part is around 0 and the real part is around -1. This can also be used to answer
So how do you know that [itex]e^{2\pi i} = 1[/itex] without using Euler's formula?
[itex]\displaystyle \forall k\in \mathbb{Z}, \ e^{2\pi i k} = \lim_{n \rightarrow \infty} \left(1+\frac{2\pi i k}{n}\right)^n = 1[/itex].
Though, you do make a valid point. I already knew that [itex]e^{ix}[/itex] was periodic based on Euler's formula. Thus, there is a minor element of circular logic to my "proof". However, assuming that we say that the function f from above is asymptotically periodic, is
there any reason that this is not correct?
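The limit above is also easy to check numerically with complex arithmetic (an illustrative sketch; n = 10^6 is just an arbitrary large value):

```python
import math

def e_ix_approx(x, n=10**6):
    """Approximate e^{ix} by (1 + ix/n)^n for a large n."""
    return (1 + 1j * x / n) ** n

val_2pi = e_ix_approx(2 * math.pi)   # close to 1: consistent with period 2*pi
val_pi = e_ix_approx(math.pi)        # close to -1
```

The value at x and at x + 2*pi also agree to within the approximation error, which is what asymptotic periodicity predicts.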
Riverdale, GA Geometry Tutor
Find a Riverdale, GA Geometry Tutor
...I'm not here to force students to do things a certain way - that's what a lot of math teachers get wrong about teaching. I'm one of the best. I have taught all levels of math here in Georgia -
except for AP calculus BC, only because I haven't had any students who desired to take it.
10 Subjects: including geometry, calculus, algebra 1, algebra 2
...As a student at UGA, I worked at the university as a chemistry tutor, and have since continued tutoring chemistry to my high school students and college students. I am a high school science
teacher who loves both math and science, and I have tutored both since high school. As a high school juni...
15 Subjects: including geometry, chemistry, biology, algebra 1
...I am completing my degree in Information, Science, and Technology at Pennsylvania State University. During my time in high school and college, I did well in my Math (Calculus I-II), Chemistry,
and Physics courses and have tutored in all of these subjects. Currently, I co-teach Math 1 and GPS Algebra 1.
13 Subjects: including geometry, chemistry, physics, calculus
I formerly taught high school and middle school math (Pre-Algebra, Algebra I, Algebra II, Geometry, Pre-Calculus, and Trigonometry). I also taught all subjects to my 3 children (One graduated from
college with honors, one is still in college, and one is in middle school). I am presently taking cour...
17 Subjects: including geometry, reading, algebra 1, English
...Upon graduating with a B.S in biological sciences and in spite of not having a teaching certificate, I was offered a teaching job at life sciences Secondary H.S due, in part, for my extensive
experiences as a tutor; there I taught AP chemistry, chemistry, mathematics, and biology. Currently, I a...
22 Subjects: including geometry, reading, chemistry, writing
Posts about Statistics – Computing on Radford Neal's blog
Posts filed under ‘Statistics – Computing’
The microbenchmark package is a popular way of comparing the time it takes to evaluate different R expressions — perhaps more popular than the alternative of just using system.time to see how long it
takes to execute a loop that evaluates an expression many times. Unfortunately, when used in the usual way, microbenchmark can give inaccurate results.
The inaccuracy of microbenchmark has two main sources — first, it does not correctly allocate the time for garbage collection to the expression that is responsible for it, and second, it summarizes
the results by the median time for many repetitions, when the mean is what is needed. The median and mean can differ drastically, because just a few of the repetitions will include time for a garbage
collection. These flaws can result in comparisons being reversed, with the expression that is actually faster looking slower in the output of microbenchmark. (more…)
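The median-versus-mean point is easy to see with a toy simulation (Python here purely for illustration; the timing numbers are invented):

```python
import statistics

# Invented per-repetition timings: a 1 ms expression that triggers a
# 50 ms garbage collection once every 50 repetitions.
times = [0.001 + (0.050 if i % 50 == 0 else 0.0) for i in range(1000)]

med = statistics.median(times)  # 0.001 s: the GC cost vanishes entirely
avg = statistics.mean(times)    # 0.002 s: the true per-repetition cost
```

A median-based summary reports the expression as twice as fast as it really is; total run time over many repetitions is governed by the mean.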
I’ve now released pqR-2013-12-29, a new version of my speedier implementation of R. There’s a new website, pqR-project.org, as well, and a new logo, seen here.
The big improvement in this version is that vector operations are sped up using task merging.
With task merging, several arithmetic operations on a vector may be merged into a single operation, reducing the time spent on memory stores and fetches of intermediate results. I was inspired to add
task merging to pqR by Renjin and Riposte (see my post here and the subsequent discussion). (more…)
The previously sleepy world of R implementation is waking up. Shortly after I announced pqR, my “pretty quick” implementation of R, the Renjin implementation was announced at UserR! 2013. Work also
proceeds on Riposte, with release planned for a year from now. These three implementations differ greatly in some respects, but interestingly they all try to use multiple processor cores, and they
all use some form of deferred evaluation.
Deferred evaluation isn’t the same as “lazy evaluation” (which is how R handles function arguments). Deferred evaluation is purely an implementation technique, invisible to the user, apart from its
effect on performance. The idea is to sometimes not do an operation immediately, but instead wait, hoping that later events will allow the operation to be done faster, perhaps because a processor
core becomes available for doing it in another thread, or perhaps because it turns out that it can be combined with a later operation, and both done at once.
Below, I’ll sketch how deferred evaluation is implemented and used in these three new R implementations, and also comment a bit on their other characteristics. I’ll then consider whether these
implementations might be able to borrow ideas from each other to further expand the usefulness of deferred evaluation. (more…)
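As an illustration of the core idea (not how any of these implementations is actually engineered), deferred evaluation can be sketched in a few lines of Python: arithmetic builds an expression tree instead of computing, and forcing the result walks each element once, so a chain like `a * b + c` runs as a single fused loop with no intermediate vectors:

```python
class Lazy:
    """A deferred elementwise expression over equal-length vectors."""
    def __init__(self, fn):
        self.fn = fn                                  # index -> value
    def __add__(self, other):
        return Lazy(lambda i: self.fn(i) + other.fn(i))
    def __mul__(self, other):
        return Lazy(lambda i: self.fn(i) * other.fn(i))
    def force(self, n):
        # One pass over the data evaluates the whole merged expression.
        return [self.fn(i) for i in range(n)]

def vec(data):
    return Lazy(lambda i: data[i])

a, b, c = vec([1, 2, 3]), vec([4, 5, 6]), vec([7, 8, 9])
result = (a * b + c).force(3)   # single loop, no temporary for a * b
```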
In R, objects of most types are supposed to be treated as “values”, that do not change when other objects change. For instance, after doing the following:
a <- c(1,2,3)
b <- a
a[2] <- 0
b[2] is supposed to have the value 2, not 0. Similarly, a vector passed as an argument to a function is not normally changed by the function. For example, with b as above, calling f(b), will not
change b even if the definition of f is f <- function (x) x[2] <- 0.
This semantics would be easy to implement by simply copying an object whenever it is assigned, or evaluated as the argument to a function. Unfortunately, this would be unacceptably slow. Think, for
example, of passing a 10000 by 10000 matrix as an argument to a little function that just accesses a few elements of the matrix and returns a value computed from them. The copying would take far
longer than the computation within the function, and the extra 800 Megabytes of memory required might also be a problem.
So R doesn’t copy all the time. Instead, it maintains a count, called NAMED, of how many “names” refer to an object, and copies only when an object that needs to be modified is also referred to by
another name. Unfortunately, however, this scheme works rather poorly. Many unnecessary copies are still made, while many bugs have arisen in which copies aren’t made when necessary. I’ll talk
about this more below, and discuss how pqR has made a start at solving these problems. (more…)
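A minimal copy-on-write sketch in Python shows the behaviour the example above requires (illustrative only; pqR's actual mechanism lives in the C interpreter and its counting rules are more involved):

```python
class CowVec:
    """Reference-counted copy-on-write vector: assignment shares the
    buffer; a write copies only when the buffer is shared, and the
    share count can decrease again after the copy."""
    def __init__(self, data):
        self._buf, self._refs = list(data), [1]
    def assign(self):                     # 'b <- a': share, no copy yet
        other = CowVec.__new__(CowVec)
        other._buf, other._refs = self._buf, self._refs
        self._refs[0] += 1
        return other
    def set(self, i, value):              # 'a[i] <- v': copy iff shared
        if self._refs[0] > 1:
            self._refs[0] -= 1            # the count really decreases
            self._buf, self._refs = list(self._buf), [1]
        self._buf[i] = value
    def get(self, i):
        return self._buf[i]

a = CowVec([1, 2, 3])
b = a.assign()     # b <- a
a.set(1, 0)        # a[2] <- 0 in R's 1-based terms; triggers the one copy
```

After this, `b` still holds 2 in that position while `a` holds 0, and further writes to `a` copy nothing.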
One way my faster version of R, called pqR (see updated release of 2013-06-28), can speed up R programs is by not even doing some operations. This happens in statements like for (i in 1:1000000) ...,
in subscripting expressions like v[i:1000], and in logical expressions like any(v>0) or all(is.na(X)).
This is done using pqR’s internal “variant result” mechanism, which is also crucial to how helper threads are implemented. This mechanism is not visible to the user, apart from the reductions in run
time and memory usage, but knowing about it will make it easier to understand the performance of programs running under pqR. (more…)
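Python's `range` gives a convenient analogy for this kind of avoided work: it stores only the endpoints and step, no matter how long the notional sequence is (an analogy only; pqR applies the idea to R's `:` operator and to subscripting):

```python
import sys

lazy = range(1, 10_000_001)          # analogue of 1:10000000
size = sys.getsizeof(lazy)           # tiny and constant: start, stop, step
element = lazy[123_456]              # still indexable without materializing
small_sum = sum(range(1, 101))       # loops draw values one at a time
```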
As part of developing pqR, I wrote a suite of speed tests for R. Some of these tests were used to show how pqR speeds up simple real programs in my post announcing pqR, and to show the speed-up
obtained with helper threads in pqR on systems with multiple processor cores.
However, most tests in the suite are designed to measure the speed of more specific operations. These tests provide insight into how much various modifications in pqR have improved speed, compared to
R-2.15.0 on which it was based, or to the current R Core release, R-3.0.1. These tests may also be useful in judging how much you would expect your favourite R program to be sped up using pqR, based
on what sort of operations the program does.
Below, I’ll present the results of these tests, discuss a bit what some of the tests are doing, and explain some of the run time differences. I’ll also look at the effect of “byte-code” compilation,
in both pqR and the R Core versions of R. (more…)
One innovative feature of pqR (my new, faster, version of R), is that it can perform some numeric computations in “helper” threads, in parallel with other such numeric computations, and with
interpretive operations performed in the “master” thread. This can potentially speed up your computations by a factor as large as the number of processor cores your system has, with no change to your
R programs. Of course, this is a best-case scenario — you may see little or no speed improvement if your R program operates only on small objects, or is structured in a way that inhibits pqR from
scheduling computations in parallel. Below, I’ll explain a bit about helper threads, and illustrate when they do and do not produce good speed ups. (more…)
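The scheduling pattern can be mimicked in Python with a thread pool (a loose analogy only: pqR's helpers are C-level threads operating on numeric vector operations with no change to user code, whereas this sketch is explicit):

```python
from concurrent.futures import ThreadPoolExecutor

def numeric_task(xs):          # stands in for a numeric vector computation
    return sum(xs)

with ThreadPoolExecutor(max_workers=2) as helpers:   # two "helper threads"
    fut_a = helpers.submit(numeric_task, range(1_000_000))
    fut_b = helpers.submit(numeric_task, range(2_000_000))
    # The "master" thread could keep interpreting other code here; it
    # blocks only when a deferred result is actually needed:
    total = fut_a.result() + fut_b.result()
```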
pqR — a “pretty quick” version of R — is now available to be downloaded, built, and installed on Linux/Unix systems. This version of R is based on R-2.15.0, but with many performance improvements, as
well as some bug fixes and new features. Notable improvements in pqR include:
• Multiple processor cores can automatically be used to perform some numerical computations in parallel with other numerical computations, and with the thread performing interpretive operations. No
changes to R code are required to take advantage of such computation in “helper threads”.
• pqR makes a better attempt at avoiding unnecessary copying of objects, by maintaining a real count of “name” references, that can decrease when the object bound to a name changes. Further
improvements in this scheme are expected in future versions of pqR.
• Some operations are avoided completely in pqR — for example, in pqR, the statement for (i in 1:10000000) ... does not actually create a vector of 10000000 integers, but simply sets i to each of
these integers in turn.
There are also many detailed improvements in pqR that decrease general interpretive overhead or speed up particular operations.
I will be posting more soon about many of these improvements, and about the gain in performance obtained using pqR. For the moment, a quick idea of how much improvement pqR gives on simple operations
can be obtained from the graph below (click to enlarge):
This shows the relative run times (on an Intel X5680 processor) of nine simple test programs (from the 2013-06-18 version of my R speed tests), using pqR, and using all releases of R by the R Core
Team from 2.11.1 to 3.0.1. These programs mostly operate on small objects, doing simple operations, so this is a test of general interpretive overhead. A single thread was used for pqR (there is not
much scope in these programs for parallelizing numeric computations).
As one can see, there has been little change in speed of interpreted programs since R-2.12.0, when some modifications that I proposed were incorporated into the R Core versions (and the R Core Team
declined to incorporate many other modifications I suggested), though the speed of compiled programs has improved a bit since the compiler was introduced in R-2.13.0. The gain for interpreted
programs from using pqR is almost as large as the gain from compilation. pqR also improves the speed of compiled programs, though the gain is less than for interpreted programs, with the result that
the advantage of compilation has decreased in pqR. As I’ll discuss in future posts, for some operations, pqR is substantially faster when the compiler is not used. In particular, parallel computation
in helper threads does not occur for operations started from compiled R code.
For some operations, the speed-up from using pqR is much larger than seen in the graph above. For example, vector-matrix multiplies are over ten times faster in pqR than in R-2.15.0 or R-3.0.1 (see
here for the main reason why, though pqR solves the problem differently than suggested there).
The speed improvement from using pqR will therefore vary considerably from one R program to another. I encourage readers who are comfortable installing R from source on a Unix/Linux system to try it
out, and let me know what performance improvements (and of course bugs) you find for your programs. You can leave a comment on this post, or mail me at radfordneal@gmail.com.
You can get pqR here, where you can also find links to the source repository, a place to report bugs and other issues, and a wiki that lists systems where pqR has been tested, plus a few packages
known to have problems with pqR. As of now, pqR has not been tested on Windows and Mac systems, and compiled versions for those systems are not available, but I hope they will be fairly soon.
UPDATE: You can read more about pqR in my posts on parallel computation with helper threads in pqR, comparing the speed of pqR with R-2.15.0 and R-3.0.1, how pqR makes programs faster by not doing
things, and fixing R’s NAMED problems in pqR.
Two papers involving Hamiltonian Monte Carlo (HMC) have recently appeared on arxiv.org — Jascha Sohl-Dickstein’s Hamiltonian Monte Carlo with reduced momentum flips, and Jascha Sohl-Dickstein and
Benjamin Culpepper’s Hamiltonian annealed importance sampling for partition function estimation.
These papers both relate to the variant of HMC in which momentum is only partially refreshed after each trajectory, which allows random-walk behaviour to be suppressed even when trajectories are
short (even just one leapfrog step). This variant is described in Section 5.3 of my HMC review. It seems that the method described in the first paper by Sohl-Dickstein could be applied in the context
of the second paper by Sohl-Dickstein and Culpepper, but if so it seems they haven’t tried it yet (or haven’t yet written it up).
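For readers unfamiliar with the variant: instead of drawing a brand-new momentum before each trajectory, partial refreshment mixes the old momentum with Gaussian noise, p' = alpha*p + sqrt(1 - alpha^2)*xi, which leaves the standard normal momentum distribution invariant. A toy Python sketch (the alpha value and iteration count are arbitrary):

```python
import math
import random

def partial_refresh(p, alpha):
    """Partial momentum refreshment: alpha=0 recovers the usual full
    replacement; alpha near 1 keeps most of the old momentum."""
    return alpha * p + math.sqrt(1 - alpha**2) * random.gauss(0.0, 1.0)

random.seed(1)
p, samples = 0.0, []
for _ in range(100_000):
    p = partial_refresh(p, 0.9)
    samples.append(p)
var = sum(x * x for x in samples) / len(samples)   # stays near 1
```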
In my post on MCMC simulation as a random permutation (paper available at arxiv.org here), I mentioned that this view of MCMC also has implications for the role of randomness in MCMC. This has also
been discussed in a recent paper by Iain Murray and Lloyd Elliott on Driving Markov chain Monte Carlo with a dependent random stream.
For the simple case of Gibbs sampling for a continuous distribution, Murray and Elliott’s procedure is the same as mine, except that they do not have the updates of extra variables needed to produce
a volume-preserving map. These extra variables are relevant for my importance sampling application, but not for what I’ll discuss here. The method is a simple modification of the usual Gibbs sampling
procedure, assuming that sampling from conditional distributions is done by inverting their CDFs (a common method for many standard distributions). It turns out that after this modification, one can
often eliminate the random aspect of the simulation and still get good results! (more…)
A chocolate chip cookie recipe asks for four and two thirds times as much flour as chocolate chips. If two and one third cups of flour is used, what quantity of chocolate chips would then be needed,
according to the recipe?
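The ratio can be checked with exact fractions: since the recipe uses 4 2/3 times as much flour as chocolate chips, the chips are the flour divided by 4 2/3. A quick Python sketch:

```python
from fractions import Fraction

flour_per_chip = Fraction(4) + Fraction(2, 3)   # 4 2/3 = 14/3
flour_used = Fraction(2) + Fraction(1, 3)       # 2 1/3 = 7/3
chips_needed = flour_used / flour_per_chip      # (7/3) / (14/3) = 1/2 cup
```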
meaning of greek symbols
1. The problem statement, all variables and given/known data
What does [tex]\mu[/tex] represent?
2. Relevant equations
3. The attempt at a solution
I understand it is a coefficient, but I don't understand what it stands for, especially when being applied to Free Body Diagrams. Explain please?!
Practical Auto Crash Reconstruction in LOSRIC, Part II
By Arthur Croft, DC, MS, MPH, FACO
In Part I (see the May 31, 1999 issue of DC), we discussed a few definitions and looked at a couple of applications of vehicle dynamic equations useful in reconstructing low speed rear impact crashes.
In this part, we'll look at conservation of momentum equations and their use in solving these reconstruction problems. We'll also look at the coefficient of restitution -- a frequently overlooked
factor in LOSRIC ACR.
Conservation of Momentum
Newton expressed the quantity of motion (now called momentum, and expressed as p) as the product of mass and velocity. In fact, originally, his second law was expressed in terms of momentum rather
than acceleration. The momentum is:
Equation 11.
p=mv (or p=wv)
The law of conservation of momentum states that in any group of objects that act upon each other, the total momentum before the action is equal to the total momentum after the action. Thus:
Equation 12.
m[1]v[1]+m[2]v[2] = m[1]v[1]'+m[2]v[2]'
The prime sign (') indicates the after-accident condition. And remember that in accident reconstruction, we often consider weight (w) to be synonymous with mass (m) for practical purposes. However,
we generally do not add the weight of the vehicle load -- such as passengers, luggage, etc. -- to that of the vehicle because they are not technically attached to the car and thus are not a part of
the car's mass. For example, at impact, these objects will move freely within the car. Moreover, you'll find that the results of many of the vehicle dynamics equations are not changed markedly anyway
by adding the weights of, say, two passengers. Arguably though, when passengers are well restrained in the car, their mass does contribute to the overall crash dynamics. Now let's take a look at how
the vehicle's weight can affect the collision. Consider first a head-on collision between two cars (V1 and V2) which have different masses, different pre-impact velocities, and stay together after
impact, where:
w1 = 5000 lb
w2 = 2500 lb
v1 = 20 fps
v2 = 40 fps
v1w1+v2w2 = v1'w1+v2'w2 or v1w1+v2w2 = v'(w1+w2), so 20(5000)+(-40)(2500) = v'(5000+2500), giving v' = 0. (Because the collision is head-on, the two velocities carry opposite signs.)
This collision resulted in zero net momentum, and therefore zero after-impact velocity. The higher speed of vehicle two (V2) was cancelled out by the greater weight of vehicle one (V1).
Now let's see how this equation can be applied to same-direction collisions. Suppose this time, a stationary vehicle (V2) is struck from the rear by another car of the same size (V1), and we know
from vehicle dynamics calculations that the ending velocity of the two cars (i.e., after impact) was 45 fps (30.6 mph). Here is how we would calculate the impact velocity of V1:
w1 = 5000 lb
w2 = 5000 lb
v' = 45 fps
v2 = 0 fps
v1w1+v2w2 = v1'w1+v2'w2 or
v1w1+v2w2 = v'(w1+w2), so
(v1)(5000)+(0)(5000) = 45(5000+5000)
v1 = 90 fps (61.2 mph)
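Both worked examples above can be checked with a short script (a sketch; the function names are ours). Note that in the head-on case the two velocities must carry opposite signs for their momenta to cancel:

```python
def common_post_speed(w1, w2, v1, v2):
    # vehicles that stay together after impact share one velocity v':
    # v1*w1 + v2*w2 = v'*(w1 + w2)
    return (v1*w1 + v2*w2) / (w1 + w2)

def impact_speed(w1, w2, v2, v_post):
    # the same equation solved for the striking vehicle's speed v1
    return (v_post*(w1 + w2) - v2*w2) / w1

# head-on example: opposite directions, so v2 enters as -40 fps
v_post = common_post_speed(5000, 2500, 20.0, -40.0)  # -> 0.0 fps

# rear-impact example: stationary target, common post-impact speed 45 fps
v1 = impact_speed(5000, 5000, 0.0, 45.0)             # -> 90.0 fps
```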
By now, it should be clear that to calculate beginning or ending velocities with any degree of accuracy, we must begin with the following necessary information:
1. The approach path of each vehicle to impact. This can be determined from tire prints or skid marks, or an analysis of damage to the vehicles.
2. The departure path of each vehicle. Tire prints, skid marks, gouges and grooves in the roadway, a liquid dribble path, or the final resting place of vehicles can provide this information.
3. The weight and load of each vehicle.
4. The post-impact velocity of each vehicle. This can be derived from vehicle dynamics equations (an example of which was provided earlier).
In real-world ACR scenarios involving LOSRIC, the ACR is frequently furnished with only photographs of vehicles that show little or no damage. Police scaled diagrams are usually not available, and
pre- and post-crash motions and velocities can only be crudely and unreliably estimated from eyewitness accounts (which commonly conflict when there is more than one!), or the unquestionably biased
accounts of the involved parties which, aside from the potential of bias, are also known to be unreliable. This is where the science of ACR begins to break down. While most ACRs are honest and will
freely admit the limitations of their craft in such circumstances, there are some who can't resist the temptation to apply liberal amounts of "fudge factor," presenting their clients, and opposing
attorneys, with pretensions of scientific exactitude. Theirs is a "science" machine operating at the hairy edge of reality, fueled mostly by intuition and guesswork, and churning out copious amounts
of hyperbole and prevarication. One of the first signs of this type of work will be the absence of calculations in the report, with the findings (deltaV, etc.) presented as single figures (as
opposed to a range) with pretentious accuracy to the decimal place. Where a reasonable report would give the deltaV as "from 2-5 mph," these phonies will say, "The deltaV was 2.4 mph." This degree of accuracy is never possible.
Parallelogram or Vector Analysis Solution of Conservation of Momentum Problems
A way of solving momentum equations in which the cars do not remain in line after impact is to use a parallelogram or vector analysis. The parallelogram law states that two vectors may be replaced by a
single vector called a resultant R, obtained by drawing the diagonal of the parallelogram which has sides equal to the given vectors (see Figure 4).
[Figure 4: Vector analysis method. P = the post-collision momentum for V1; Q = the post-collision momentum for V2; R = the resultant, or total post-collision momentum. With permission from Whiplash:
The Masters' Certification Program, Module 2.]
To solve this type of problem, we first must have created a scaled accident diagram and reconstructed the accident. Having determined the position of maximum engagement, we can then determine the
first contact position. The development of such collision diagrams is well beyond the scope of this series of articles, but it provides the basis for the angles given in the examples
below. The post-impact velocities (v') are derived from vehicle dynamics equations.
w1 = 3500 lb
w2 = 2500 lb
v1' = 30 fps
v2' = 30 fps
Theta1' = 40°
Theta2' = 25°
Theta2 = 10°
We now know the approach and departure angles for the two vehicles (V1 and V2). By convention, the left side of the x coordinate will represent the pre-impact momentum (P1) of V1 (see Figure 5).
[Figure 5: Pre- and post-impact (primed) momenta vectors for V1 and V2. See text. With permission from Whiplash: The Masters' Certification Program, Module 2.]
P2 will be the pre-impact momentum of V2. P1' and P2' are the momenta of the respective vehicles after the crash. From equation 11, we make the following calculations:
1. P1' = w1v1'; P1' = 3500(30) = 105,000 lbs fps
2. P2' = w2v2'; P2' = 2500(30) = 75,000 lbs fps
3. Using an arbitrary scale of 50,000 lbs fps/1 inch, we calculate a line length of 2.1 inches (in) for P1' (P1' = 105,000/50,000 = 2.1 in), and 1.5 for P2' (P2' = 75,000/50,000 = 1.5 in).
4. Using these lengths, we complete the parallelogram (see Figure 6), and draw the resultant R. We then move R back into the region of P1 and P2, which in this case measure 2.6 in and 2.0 in, respectively.
5. Now we can calculate the momenta of P1 and P2. P1 = 2.6(50,000) = 130,000 lb fps; P2 = 2.0(50,000) = 100,000 lb fps. This can be converted to velocity based on the relationship v = P/w: v1 =
130,000/3500 = 37.1 fps (25.2 mph), and v2 = 100,000/2500 = 40.0 fps (27.2 mph).
As you might guess, the inherent accuracy of this approach is dependent on the accuracy of the collision diagram, primarily, but also on the accuracy of the subsequent measurements. In practice, a
room full of experienced ACRs will come up with a range of figures +/- a few mph at best. Of course, this usually has nothing much to do with LOSRIC, but before we leave the topic, there are still
other ways to solve momentum problems.
[Figure 6: Vector solution to law of conservation of momentum problems. With permission from Whiplash: The Masters' Certification Program, Module 2.]
Mathematical Solution of Conservation of Momentum Problems
In addition to graphic solutions for momentum problems, we can use mathematical solutions. The same input data must be known in either case. Any vector quantity, such as force, velocity or momentum,
can easily be resolved into rectangular components. In Figure 7 below, vector P has been resolved into x and y components. These would be as follows:
Px = P cos θ
Py = P sin θ
The vector P can be resolved into its x and y components as long as the angle θ is known.
[Figure 7: The vector P can be resolved into x and y components, Px and Py, if the angle θ is known. With permission from Whiplash: The Masters' Certification Program, Module 2.]
To solve the problem presented below mathematically, we resolve the pre- and post-collision momentum vectors into their x and y components. These components for both pre- and post-impact velocity are
set to equal each other and the pre-collision velocities are then solved for. The equations then become:
Equation 13.
v1 = (v1'w1 cos θ1' + v2'w2 cos θ2') / (w1 cos θ1)
Equation 14.
v2 = (v1'w1 sin θ1' + v2'w2 sin θ2') / (w2 sin θ2)
We could then solve the type of problem shown in Figure 4 without having to draw parallelograms. However, once again the reader is cautioned that the output is only as good as the input, and the
accuracy of the input is often questionable. In such instances, an ACR must be conservative and calculate a range of answers based on a range of possible input. As Lynn B. Fricke, the author of
Traffic Accident Reconstruction, Volume 2,^1 wisely cautioned: "You might start to believe your answers are more accurate than they actually are, forgetting that many of the inputs are only slightly
better than guesses. Clearly, this is a time to exercise caution and understand the limitation of your analysis."
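Equations 13 and 14 can be sketched in code. Note the assumption implicit in the equations as printed: the pre-impact x-momentum is attributed entirely to V1 and the y-momentum entirely to V2, which is exact when the approach paths are perpendicular (the function name and test numbers are ours):

```python
import math

def pre_impact_speeds(w1, w2, v1p, v2p, th1p, th2p, th1, th2):
    """Equations 13 and 14 (angles in radians, primed values post-impact).
    As printed, they attribute the pre-impact x-momentum to V1 and the
    y-momentum to V2, which is exact for perpendicular approach paths."""
    v1 = (v1p*w1*math.cos(th1p) + v2p*w2*math.cos(th2p)) / (w1*math.cos(th1))
    v2 = (v1p*w1*math.sin(th1p) + v2p*w2*math.sin(th2p)) / (w2*math.sin(th2))
    return v1, v2

# self-consistency check: V1 eastbound (0 deg), V2 northbound (90 deg)
v1, v2 = pre_impact_speeds(3000, 2000, 20.0, 25.0,
                           math.radians(30), math.radians(60),
                           0.0, math.radians(90))
```

As the article warns, the output is only as good as the input, so in practice a range of inputs (and therefore a range of answers) should be run through such a calculation.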
Conservation of Momentum "Difficulties"
Newton's laws of motion are precise. Looking at the conservation of momentum relationship in more familiar terms, if a billiard ball strikes a stationary ball, and the two balls then move off at
different velocities, we could say that the combined momentum of the two balls after the impact is equal to the first ball's original momentum. In car crashes, this relationship is clouded by an
apparent loss of momentum which results from energy lost; for example, into the ground when two cars collide or, at the submicroscopic level, with the breaking of bonds. Other forms of (probably
negligible) energy loss include losses via heat transfer and sound. In cases where energy loss is significant, other methods of reconstruction must be relied upon. One of the reasons to include a
restitution factor in reconstructions of low speed crashes is to account for apparent losses of momentum. For example, Szabo and Welcher^2 reported the results of their crash testing of Ford Escorts.
Careful examination of their data revealed an apparent loss of momentum of 15%, for which they provided no explanation or discussion.
Coefficient of Restitution
In low speed crashes, this coefficient of restitution e becomes important. The relationship is illustrated in the following equation:
Equation 15.
e = (V[F]'-V[R]') / (V[F]-V[R])
This is essentially a ratio of the vehicles' (F=front; R=rear) post-impact velocities (primed values) and their pre-impact velocities: the rebound or restitutional speed divided by the deforming or
impact speed. Thus, it's related to the kinetic energies involved in crashes. The value of e is always negative but, for shorthand purposes, we usually do not sign it that way. When the value of e is
closer to zero, it is said to be a plastic type of collision, and when the value of e is closer to one, the collision is mostly elastic in nature. In truth, there are no perfectly elastic materials
(there is always some energy of deformation that can't be returned). If there were, a perpetual motion machine would become possible.
Another way of thinking about this value is that it represents the springiness of a collision. A body is said to have elastic properties if the deformation caused by the induction of force is
recovered completely after the force is removed, and the energy stored by the body during its deformed state is also recovered. When deformation and energy are not restored, the collision is plastic.
In reality, we generally consider these things by degree, since pure plastic and elastic responses don't occur. In the case of the two colliding billiard balls, the collision was quite elastic and e
would be close to one. If, however, the two balls were made of soft wax, they would deform on impact in a plastic flow and e would be closer to zero. When the sheet metal and other deformable parts
of the car are permanently bent, the collision is likewise more plastic in nature (although some of this deformation will be reversed after the crash due to the springiness of steel -- the difference
between dynamic and static crush), and less of the impact energy is transferred to the occupants. Conversely, in an elastic crash, a large portion of the collision energy is transferred to the
occupant, thereby increasing the potential for injury.
Complicating matters somewhat, the value of e varies over a range of crash velocities such that at very low velocities (e.g., 1-2 mph deltaV), e is very high (0.8 or so), and at higher velocities
(e.g., 9 mph and above), it may be as low as 0.1 or less. Therefore, while it is reasonable to ignore restitution in the reconstruction of higher velocity crashes, it is not OK in LOSRIC.
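A quick numerical sketch shows how restitution raises the struck vehicle's deltaV, using the unsigned e of Equation 15 together with conservation of momentum (one-dimensional collision; the function name and numbers are ours):

```python
def post_impact_speeds(w_f, w_r, v_f, v_r, e):
    # conservation of momentum: w_f*v_f + w_r*v_r = w_f*v_f' + w_r*v_r'
    # restitution (Eq. 15, unsigned): v_f' - v_r' = e*(v_r - v_f)
    closing = v_r - v_f
    v_r_post = (w_f*v_f + w_r*v_r - w_f*e*closing) / (w_f + w_r)
    v_f_post = v_r_post + e*closing
    return v_f_post, v_r_post

# a 10 fps rear impact into a stationary car of equal weight
plastic = post_impact_speeds(3000, 3000, 0.0, 10.0, 0.0)  # e = 0  -> (5.0, 5.0)
springy = post_impact_speeds(3000, 3000, 0.0, 10.0, 0.5)  # e = 0.5 -> (7.5, 2.5)
# deltaV of the struck (front) car: 5.0 fps plastic vs 7.5 fps with restitution
```

The elastic case transfers a larger deltaV to the struck vehicle, which is the article's point about why restitution cannot be ignored in LOSRIC.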
for more information about Arthur Croft, DC, MS, MPH, FACO. | {"url":"http://www.dynamicchiropractic.com/mpacms/dc/article.php?id=36135","timestamp":"2014-04-20T00:51:47Z","content_type":null,"content_length":"87011","record_id":"<urn:uuid:53a03fc7-2f46-4c7f-9be8-e2e93e54ecc7>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00109-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Maxima] Disabled function evaluations
Stavros Macrakis macrakis at alum.mit.edu
Thu Jul 17 08:37:12 CDT 2008
There is at least one operator in Maxima which "protects" everything inside
it from both evaluation and simplification, namely lambda.
To enter an unsimplified expression, just write, e.g. lambda([], diff(1,x) )
-- diff( <constant> ...) normally simplifies to 0 (even without evaluation),
but this stays as lambda([],diff(1,x)).
To manipulate this expression in any way, you'll need to have simp:false.
For example:
(%i1) expr: lambda([], x-x );
(%o1) lambda( [], x-x ) << no simplification or evaluation
(%i2) block([simp:false],print(part(expr,2)));
x-x << printed without simplification or evaluation
(%o2) 0 << but the *return value* from the block is
simplified since simp is true outside the block
Keep in mind that standard Maxima mathematical operations are not guaranteed
to work correctly on unsimplified expressions.
Robert's mail addresses the Tex issues.
More information about the Maxima mailing list | {"url":"http://www.ma.utexas.edu/pipermail/maxima/2008/012552.html","timestamp":"2014-04-17T12:33:22Z","content_type":null,"content_length":"3793","record_id":"<urn:uuid:bfcf41bf-7658-479a-9734-1c820bbffcc3>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Arlington, VA Calculus Tutor
Find an Arlington, VA Calculus Tutor
...I currently work as a professional economist. Though I am located in Arlington, Virginia, I am happy to travel to meet students, particularly to areas that are easily accessible via Metro.I
work as a professional economist, where I utilize econometric models and concepts regularly using both STA...
16 Subjects: including calculus, geometry, statistics, ACT Math
I am a dedicated professional translator/interpreter and language teacher of English and German. I have 6+ years' experience of teaching languages in high schools, universities and language
centers, as well as translating for major companies in Russia, Dubai (UAE) and the Unites States. Being nati...
10 Subjects: including calculus, ESL/ESOL, algebra 1, German
...Mathematics has became a lost art, and I have committed myself to restoring it. I believe that everybody is able to learn anything as long as you have the will. I have a bachelor's of science
in Mathematics.
14 Subjects: including calculus, geometry, statistics, algebra 2
...These courses involved learning the techniques, both analytical and numerical, for solving ordinary, partial, and non-linear differential equations including Green's function techniques. I
have also taught Physics and Electrical Engineering courses for both undergraduate and graduate students. ...
16 Subjects: including calculus, physics, statistics, geometry
...I encourage a smooth, easy swing in my students in order to lessen stress on the body and improve accuracy. I also cover short game, etiquette, and how to choose shots well so you can reach
your goal out on the course. I have played the game for 20 years and in my youth I received formal training and played competitively.
13 Subjects: including calculus, writing, algebra 1, GRE
Washington, DC calculus Tutors | {"url":"http://www.purplemath.com/Arlington_VA_Calculus_tutors.php","timestamp":"2014-04-21T07:15:33Z","content_type":null,"content_length":"24147","record_id":"<urn:uuid:27d572e6-be85-49f5-8e29-099e48e0094e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00403-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding critical points
April 30th 2008, 06:37 AM #1
Apr 2008
Finding critical points
Consider the function f(x,y)=x^2+y^2+2x^3/
a) Find and classify all critical points.
b) Find the absolute extrema on the disk x^2+y^2 <= 1 using Lagrange multipliers to do the boundary.
Calculate $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$
A critical point is a point where both $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ are equal to 0.
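A quick numerical check of this procedure (a sketch, assuming the intended function is f(x,y) = x^2 + y^2 + 2x^3; the trailing slash in the post looks like page debris):

```python
# f(x,y) = x^2 + y^2 + 2x^3
# df/dx = 2x + 6x^2 = 2x(1 + 3x)  ->  x = 0 or x = -1/3
# df/dy = 2y                      ->  y = 0
def grad(x, y):
    return (2*x + 6*x*x, 2*y)

def classify(x, y):
    # second-derivative test: D = fxx*fyy - fxy^2, with fxx = 2 + 12x, fyy = 2, fxy = 0
    fxx = 2 + 12*x
    D = fxx * 2
    if D > 0:
        return "local min" if fxx > 0 else "local max"
    return "saddle" if D < 0 else "inconclusive"

points = [(0.0, 0.0), (-1/3, 0.0)]
kinds = [classify(x, y) for x, y in points]  # ['local min', 'saddle']
```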
I was more concerned about part b
Apr 2008 | {"url":"http://mathhelpforum.com/calculus/36661-finding-critical-points.html","timestamp":"2014-04-17T08:37:52Z","content_type":null,"content_length":"35278","record_id":"<urn:uuid:28913f0a-0be3-4a6f-afb0-8f9f67b8b658>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00379-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unemployment Numbers
If the August unemployment rate is down to 7.3%…
Then, why aren’t we smiling?
The reason is a declining labor force participation rate. The BLS tells us that the participation rate is, “the labor force as a percent of the civilian non-institutional population.” Meanwhile, to
be in the labor force, you have to be 16 or older and employed or looking for a job. So, if you are 21 years old and not looking for a job, then you are not in the labor force. When that 21 year old
left the labor force, she decreased the participation rate.
The US Labor Force Participation Rate:
And that takes us to a little bit of math. To calculate the unemployment rate, the number of unemployed people in the labor force is divided by the size of the labor force. What happens when unemployed people stop looking for work and leave the labor force? The numerator and the denominator both shrink by the same amount, and since the numerator is the smaller of the two, the fraction gets smaller.
Astoundingly then, we can have a lower unemployment rate and more unemployed people. You just need to have individuals leave the labor force. They diminish the participation rate and the unemployment
rate numerator.
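The arithmetic can be made concrete with a toy example (the numbers are made up):

```python
def unemployment_rate(unemployed, employed):
    labor_force = unemployed + employed  # only counts people in the labor force
    return unemployed / labor_force

before = unemployment_rate(10, 90)  # 10 / 100 = 10.0%
# two unemployed people stop looking and so drop out of the labor force
after = unemployment_rate(8, 90)    # 8 / 98  ~  8.2%
# the rate fell even though nobody found a job
```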
In this great Bloomberg Businessweek infographic, retirees are the largest group that is 16 or older and not in the labor force and 3% are those who “want to work.” You can also see more specifically
why a lower unemployment rate is a misleading statistic.
Sources and resources: Perfect for data and graphs, the BLS was the source of my employment and participation rate graphs while for more analysis of the August unemployment data, I recommend this
Washington Post article.
Related Posts
« Oligopoly: Two Great Ads Economic Thinkers: The Coase Theorem and Prince » | {"url":"http://www.econlife.com/a-lower-participation-rates-skews-the-unemployment-rte/","timestamp":"2014-04-16T13:13:19Z","content_type":null,"content_length":"65620","record_id":"<urn:uuid:21f125e3-2f25-405d-8f5c-c9187132d648>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00507-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hyperbolic Triangle
May 1st 2011, 10:24 AM #16
Although the question does not explicitly say so, it is usually assumed that hyperbolic space consists of the upper half of the complex plane. So in this problem I think it's safe to ignore the
lower half of the plane.
To calculate $|z|^2$, you have to take the real part of z and the imaginary part of z, square them both, and then add. If $z = a+ib$ (where a and b are both real) then $z-1 = a+ib-1 = (a-1) + ib$
. The real part is $a-1$ and the imaginary part is b. So $|z-1|^2 = (a-1)^2+b^2$.
Good grief, I think you're right.
ah, right, so the 'a-1' part is just the real part of a 'new' complex number? and we then treat that as if it were the original 'a'. THEN square that bracket? i can solve it from there on and i
got the answer as described. thanks a lot.
i might need more help with the next parts when I get to it but I'm going to get on with what I can do, thanks!
please help me with part iii
i don't know how to deal with the curvy angles.
what is the formula for a hyperbolic angle? i looked on wiki and a few math sites but couldn't really find it. can someone explain in laymans terms please!
The nice thing about hyperbolic angles is that they are the same as regular (euclidean) angles. So x is the angle between the two circles at the point B. I will show in detail how to find cot(x),
as required for part (iii). Then you can do something similar for the angle y at the point C.
The angle between two circles at a point where they intersect is equal to the angle between the tangents to the circles at that point. So you need to find the slopes of the tangents to the two
circles at B.
A tangent to a circle is perpendicular to the radius at that point, so we'll start by finding the slopes of the radii at B.
For the smaller circle, its centre is at the point (1,0). And B is the point (0,√3). The radius is the line joining (1,0) and (0,√3), and its slope is –√3. The tangent is perpendicular to the
radius, so its slope is 1/√3. This means that the tangent makes an angle $\arctan(1/\sqrt3) = \pi/6$ with the x-axis.
For the larger circle, its centre is at the point (3,0). The radius in this case is the line joining (3,0) and (0,√3), and its slope is –√3/3 = –1/√3. The tangent is perpendicular to the radius,
so its slope is √3. This means that the tangent makes an angle $\arctan(\sqrt3) = \pi/3$ with the x-axis.
Therefore the angle between the two tangents is $\pi/3-\pi/6 = \pi/6$ radians. Finally, $\tan(x) = \tan(\pi/6) = 1/\sqrt3$ and so $\cot(x) = 1/\tan(x) = \sqrt3.$
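Opalg's tangent-slope argument can be checked numerically (a sketch; coordinates from the thread: B = (0, √3), circle centres at (1, 0) and (3, 0)):

```python
import math

B = (0.0, math.sqrt(3.0))

def tangent_angle(center, p):
    # the tangent at p is perpendicular to the radius from the centre to p
    radius_angle = math.atan2(p[1] - center[1], p[0] - center[0])
    return radius_angle + math.pi/2

x = abs(tangent_angle((3.0, 0.0), B) - tangent_angle((1.0, 0.0), B))
# x should be pi/6, so cot(x) = sqrt(3)
```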
can you please go over the method of how to work out 'A' for part (i) as i don't understand the method used to find 'B', using the non-graphical method
do you need to do a differentiation on the circle equations? to find the angle of the tangents.
how does the SLOPE (GRADIENT?) help you to find the ANGLES? i can work out the two tangents
i understand what you are doing subtracting the smaller angle from the bigger angle using the x-axis for both to isolate the x angle. but i still dont know where your tan expressions are coming
from. also is there a way of finding x straight away without doing the subtracting? (just so i know)
also does a tangent have just a slope or is it a line equation that i need? do i need to do the m formula?
(are u just using that the slopes form triangles with the x-axis and using trig
can you not just use the slopes of the radiuses themselves instead of the tangents and use that the opposite angle is equal? because you can make the triangles with the x axis and the radiuses to
B and use the triangle with the y axis to isolate the angle with the two circle centres at B, it is equal to x?)
i can't work out what to do with the angle at c because it isnt like you prescribed with the two circles. it is one circle and one vertical line so i dont know what to do. i mean surely its different
right. the hyperbolic angle being the same with the two circles is because they both fold away from each other? so with a circle and a line its not going to be the same...=/
i really can't figure out what to do at point C...
with B you have the two circles and you can use the TWO tangents to find the answer. C is made with a circle and a straight line
Last edited by LumusRedfoot; May 6th 2011 at 03:21 AM. Reason: i really can't figure out what to do at point C...
i have the same problem at point C. i have worked it out at point B but as was stated what do you do when you are using the Re(z) = 2 line?
also can you show a graphical model of this calculation for reference to the workings
do you need to do a differentiation on the circle equations? to find the angle of the tangents. You could find the slope of the tangents using calculus, by differentiating the equations of the
circles, but the method using trigonometry is easier and quicker.
how does the SLOPE (GRADIENT?)help you to find the ANGLES? i can work out the two tangents The connection between the slope of a line and the angle that it makes with the x-axis is that if the
line has slope m, and the angle it makes with the axis is θ, then m = tan(θ). (In fact, that is probably why the tan function got its name: it tells you the slope of a tangent.)
i understand what you are doing subtracting the smaller angle from the bigger angle using the xaxis for both to isolate the x angle. but i still dont know where your tan expressions are coming
from. also is there a way of finding x straight away without doing the subtracting? (just so i know) I don't think there is a way of finding this angle except by subtraction.
also does a tangent have just a slope or is it a line equation that i need? do i need to do the m formula? You don't need the complete equation of the tangent, just its slope.
(are u just using that the slopes forms triangles with the xaxis and using trig Yes.
can you not just use the slopes of the radiuses themselves instead of the tangents and use that the opposite angle is equal. because you can make the triangles with the x axis and the radiuses to
B and use the triangle with the y axis to isolate the angle with the two circle centres at B, it is equal to x?) That is an excellent suggestion. The angle between the tangents is the same as the
angle between the radiuses, which is easier to calculate.
i can't work out what to do with the angle at c because it isnt like you prescribed with the two circles. it is one circle and one vertical line so i dont know what to do. i mean surely its
different right. the hyperbolic angle being the same with the two circles is because they both fold away from each other? so with a circle and a line its not going to be the same...=/ In fact, it's much
easier in this case, because one of the curves is already a straight line, so you don't have to bother finding out its tangent. You even know the angle that this line makes with the x-axis,
namely a right angle. So all you need to find is the angle between the vertical line and the tangent to the circle at C.
opalg on that last point
i thought the angle between the tangents could be used because the two circles 'bend away' from each other thus taking the tangents either side makes the angle equal to the angle with the two tangents?
do you not need to compensate for the fact that you're looking for the angle with a line on one side and a circle on the other? such that it wouldnt be equal to the angle between the line and the tangent?
but using this method i make it -1/root 3? what do you think? am i doing it right
or you can see that the tangent at B and C are equal just one negative and one positive cos they are opposite again... just either side of the circle
its backwards so the angle is negative? do we just make it positive?
if you do it backwards pi/2 - (- pi/6), if you do it forwards, pi/2 - pi/6? different results. can you have a negative angle? it doesnt seem right. i made the answer to y = pi/3. and then just
take cos and sin for that as the answers. but the answers are in numbers not expressions. is that alright, is that ok.
nilding who r u
edit: please now help me with part 4. i have found two formulas... one uses integrals, one does not use integrals. am i supposed to be looking at the one with integrals or the one without, for the
hyperbolic arc length...
is the length of a, arccosh(5/3)? can someone verify, and is the ln form ln(3)? just ln(3) as the answer?
i am only 14, why am i doing this, my mates dont even know what a quadratic is
edit: also as for AC, is it not just root 11 - root 3? its just a line, you can get the length by reading it off the coordinates just like that...?
Last edited by LumusRedfoot; May 8th 2011 at 05:04 AM.
i thought the angle between the tangents could be used because the two circles 'bend away' from each other thus taking the tangents either side makes the angle equal to the angle with the two
tangents? It doesn't matter which way they are bending, it's the angle at the actual point of intersection that matters. That is why you can replace the curves by their tangents at that point.
do you not need to compensate for the fact that youre looking for the angle with a line one one side and a circle on the other? such that it wouldnt be equal to the angle between the line and the
tangent? No, you don't need to "compensate", you just need the angle between the line and the tangent.
but using this method i make it -1/root 3? what do you think? am i doing it right
or you can see that the tangent at B and C are equal just one negative and one positive cos they are opposite again... just either side of the circle l Yes, that is the smart way to do it. The
two points B and C are symmetrically placed on opposite sides of the circle.
its backwards so the angle is negative? do we just make it positive?>? Just make it positive. It is the size of the angle that you are looking for, not its orientation.
if you do it backwards pi/2 - (- pi/6), if you do it forwards, pi/2 - pi/6? different results. can you have a negative angle. it doesnt seem right. i made the anser to y = pi/3. and then just
take cos and sin for that as the answers. but the answers are in numbers not expressions. is that alright is that ok. Yes, the angle at C is pi/3.
There are three of you (LumusRedfoot, Neilding and Alexg42 from this other thread) who are all working on this problem. I suggest that you should get in touch with each other. You can use the
private message feature in this forum to do that. In the long run, you will find it more useful to share ideas with each other than to keep coming for help here.
please now help me with part 4. i have found two formulas... one uses integrals, one does not use integrals. am i supposed to be looking at one with integrals or one without. for the hyperbolic
arc length...
is the length of a, arccosh(5/3), can someone verify, and is the ln form, ln(3). just ln(3)? as the answer? Yes! ln(3) is correct.
i am only 14 why am i doing this my mates dont even know what a quadratic is You're either a genius or a maniac. Maybe both.
edit also as for AC, is it not just root 11 - root 3? its just a line, you can get the length by reading it off the coordinates just like that...?
I use the integral method. You have to parametrise the path by some function $\sigma(t)\; (a\leqslant t\leqslant b)$. Then the hyperbolic length is given by the integral $\int_a^b\frac{|\sigma'
(t)|}{\text{Im}(\sigma(t))}d t$. For the circular arc BC, you can parametrise it by $\sigma(t) = 1+2e^{it}\ (\pi/3\leqslant t\leqslant 2\pi/3)$. For the line AC you can use $\sigma(t) = 2+it\ (\
sqrt3\leqslant t\leqslant\sqrt{11}).$ But note that the hyperbolic length is not the same as the ordinary length. So it will not be equal to $\sqrt{11}-\sqrt3$. You have to get it by feeding that
function into the integral formula.
Last edited by Opalg; May 8th 2011 at 07:40 AM. Reason: tidied up LaTeX
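Opalg's integral formula can be checked by simple numerical quadrature (a sketch; midpoint rule, parametrisations as in the post above):

```python
import math

def hyp_length(sigma, dsigma, a, b, n=100000):
    # hyperbolic length = integral over [a,b] of |sigma'(t)| / Im(sigma(t)) dt
    # (midpoint-rule quadrature)
    h = (b - a) / n
    return sum(abs(dsigma(a + (k + 0.5) * h)) / sigma(a + (k + 0.5) * h).imag
               for k in range(n)) * h

# circular arc BC: sigma(t) = 1 + 2e^{it}, pi/3 <= t <= 2pi/3, |sigma'(t)| = 2
L_BC = hyp_length(lambda t: 1 + 2 * complex(math.cos(t), math.sin(t)),
                  lambda t: 2.0, math.pi / 3, 2 * math.pi / 3)

# vertical segment AC: sigma(t) = 2 + it, sqrt(3) <= t <= sqrt(11), |sigma'(t)| = 1
L_AC = hyp_length(lambda t: complex(2.0, t), lambda t: 1.0,
                  math.sqrt(3.0), math.sqrt(11.0))
```

The arc BC comes out as ln 3, confirming the thread's answer, and the segment AC comes out as ln(√11/√3), which differs from the ordinary Euclidean length √11 − √3.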
May 8th 2011, 05:53 AM #26 | {"url":"http://mathhelpforum.com/geometry/178519-hyperbolic-triangle-2.html","timestamp":"2014-04-24T09:53:31Z","content_type":null,"content_length":"84050","record_id":"<urn:uuid:4875f117-7176-4f19-aeb0-bea3161f84c5>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00078-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Calculate Weighted Average
Edited by RoxyRichy, BR, Anna
A weighted average is a more accurate measurement of scores or investments that are of unequal importance relative to each other. This is often the case with investment portfolios, grade scoring and other
statistics. Here is how to calculate weighted averages.
Part 1 of 4: Preparation
Part 2 of 4: Identify Values
1. Identify the numbers that are weighted. You may want to write them down on your paper in a chart form.
□ For example, if you are trying to figure out a grade, you should identify the grade you received on each exam.
2. Identify the weight of each number. This is often a percentage. List the weight next to the number.
□ Percentages are common because weights are often expressed as a percentage of a total of 100. If you are figuring out the weighted average of grades, investments or other financial data, look for the percentage of the occurrence out of 100.
□ If you are figuring the weighted average of grades, you should identify the weight of each exam or project.
3. Convert percentages to decimals. Always multiply decimals by decimals, instead of decimals by percentages.
Part 3 of 4: Multiply for Weighted Average
1. Multiply each number by its weight.
□ You can choose to write this at the end of the chart or to do it on one line, in a formula. For example, if you are trying to figure out the weighted average of certain grades, you might write 0.9(0.25) to indicate a 90 percent grade times 25 percent of the total grade.
2. Add the weighted scores together.
□ For example, 0.9(0.25) + 0.75(0.50) + 0.87(0.25). The total weighted score for the class would be 0.8175.
3. Note that the weights should total 100 if you are using percentages. Continue reading to adjust the weighted average for different types of weights.
4. Multiply by 100 to get the percentage. In our grade example, this is 81.75 percent.
Part 4 of 4: Weighted Averages without Percentages
1. Adjust your formula for an answer that does not include percentages.
□ Identify a numerical weight for each value. Multiply each value by its weight, just as you did with percentages.
2. Add together the values after you have multiplied them by their weights.
3. Add together the weights for each value.
4. Divide the total of the weighted values by the total of the weights. The result is the weighted average.
• You can solve for the grade you need to receive on a test by plugging a variable into the weighted-average formula. For example, to find the test grade needed to earn an 80 percent in the class from our example above, write 0.9(0.25) + 0.75(0.50) + x(0.25) = 0.80 and solve for x. You would need an 80 percent on the test to get an 80 percent in the class.
• Weighted average is not the same as the mean. If you took the mean of 90, 75 and 87 percent scores, you would arrive at an answer of 84 percent, an incorrect answer when weights of 25, 50 and 25
percent are to be factored in. The answer should be 81.75 percent.
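The whole procedure fits in a few lines of code (a sketch; Python and the function name are my own choices, not from the article):

```python
def weighted_average(values, weights):
    """Weighted average: works for percentage weights or any positive weights,
    because the sum of products is divided by the total weight."""
    if len(values) != len(weights):
        raise ValueError("values and weights must have the same length")
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# The grade example from the article: 90%, 75%, 87% weighted 25/50/25.
grade = weighted_average([0.90, 0.75, 0.87], [0.25, 0.50, 0.25])
print(round(grade * 100, 2))   # 81.75 percent

# Non-percentage weights give the same answer once normalised by their sum.
print(round(weighted_average([0.90, 0.75, 0.87], [1, 2, 1]) * 100, 2))   # 81.75
```

Dividing by the sum of the weights is what makes the same function work for both the percentage and non-percentage cases.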
Things You'll Need
• Calculator
• Pencil
• Paper
• Report/data
• Chart
Article Info
Thanks to all authors for creating a page that has been read 152,731 times.
Was this article accurate? | {"url":"http://www.wikihow.com/Calculate-Weighted-Average","timestamp":"2014-04-16T16:05:22Z","content_type":null,"content_length":"73123","record_id":"<urn:uuid:28fb2102-b1a2-432d-97c8-c64de94288ca>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00346-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chicago Heights Statistics Tutor
Find a Chicago Heights Statistics Tutor
...I have helped many students to prepare for PRAXIS and PRAXIS II exams. My experience tutoring and teaching mathematics, English and the physical sciences at the college level qualify me to do
so. My experience as a trainer and business manager further bolster my ability to help students to succeed with PRAXIS.
49 Subjects: including statistics, reading, writing, English
...Wilcox scholarship). In high school, I scored a 2400 on the SAT, and earned a 5 on the AP Calculus BC exam from self study. I also received a 5 on the AP Statistics exam. I have teaching
experience, as well.
13 Subjects: including statistics, calculus, geometry, algebra 1
...I also obtained my middle school math endorsement, with a solid background tutoring math, as well as teaching math. I worked as a middle school and high school substitute math instructor. I
would love to have the opportunity once more to aid in the growth of students' math skills.
16 Subjects: including statistics, reading, algebra 1, geometry
...Since I have had many years of teaching from middle schools to universities, I have been afforded the luxury of teaching many ages. I have taught in middle schools, high schools, community
colleges, and universities. The main subject I have taught is mathematics.
12 Subjects: including statistics, calculus, geometry, algebra 2
My tutoring experience ranges from grade school to college levels, up to and including Calculus II and College Physics. I've tutored at Penn State's Learning Center as well as students at home. My
passion for education comes through in my teaching methods, as I believe that all students have the a...
34 Subjects: including statistics, reading, writing, English | {"url":"http://www.purplemath.com/Chicago_Heights_Statistics_tutors.php","timestamp":"2014-04-18T11:12:29Z","content_type":null,"content_length":"24310","record_id":"<urn:uuid:75f4ba75-d27c-4dc9-b6df-d913b932eeee>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
Center Valley Algebra Tutor
Find a Center Valley Algebra Tutor
...I am patient and understand that all students learn differently, and do my best to accommodate different learning styles.I have received a bachelor's degree in mechanical engineering from the
University of Delaware in 2013. While attending the University of Delaware for mechanical engineering, I...
9 Subjects: including algebra 1, algebra 2, calculus, physics
...In addition to teaching, I coached both science Olympiad and academic teams. Before teaching high school I taught physics, astronomy, and geology at the college level and during graduate school
I tutored college students in all of these subjects. For the past ten years, I have spent my summers teaching at summer camps for gifted students.
7 Subjects: including algebra 2, physical science, geology, algebra 1
...I believe that everyone has the potential for growth through a combination of hard work and enriching experiences. As a tutor, I understand that certain subjects can be difficult. With my
teaching style, I try to create analogies, and give real-world examples, all in an effort to make learning as enjoyable as possible.I hold a Bachelor of Science in English Education.
17 Subjects: including algebra 2, algebra 1, reading, English
...My one-on-one method is to show the student that Trig is quite understandable and not as overwhelming as they might believe. I am a certified PA math teacher and have taught all levels of Math,
including Algebra I, Geometry and Algebra II. I have also taught 6 week classes in SAT math and know what subjects are questioned the most.
12 Subjects: including algebra 2, algebra 1, calculus, geometry
...I have also worked with a variety of students preparing for the GED on all sections. I am a certified English teacher with experience teaching at both the middle school and high school level.
In addition, I am a strong test taker myself - especially in reading, as I received a perfect score on the SAT II for Literature.
25 Subjects: including algebra 1, reading, English, grammar | {"url":"http://www.purplemath.com/Center_Valley_Algebra_tutors.php","timestamp":"2014-04-19T09:57:46Z","content_type":null,"content_length":"24277","record_id":"<urn:uuid:7f196bb1-b9f1-48a9-a1f4-196d0e30a279>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00372-ip-10-147-4-33.ec2.internal.warc.gz"} |
ofdm & fft processing gain
There are 2 messages in this thread.
ofdm & fft processing gain - Chris Stratford - 2003-11-04 18:22:00
For 802.11a there is a 64-point IFFT @ 20MHz for the transmitted OFDM signal. The
carriers are spaced 312500 Hz apart [20e6/64]. The demodulated baseband signal
is from -10MHz to +10MHz [48 data carriers + 4 pilot carriers + some zero
channels at the highest frequencies to make up the number to 64 carriers].
Now at the receiver, if we have an 80MHz oversampling ADC sampling the 20MHz
wide signal, followed by an 80MHz digital FIR with 10MHz bandwidth +
decimate-by-4, FFT, then equaliser, we get an SNR improvement at the output of
the equaliser [constellation EVM] of approximately 6dB compared to the SNR
at the input of the receiver. The modulation is, say, QPSK [although other
schemes are possible]; the SNR improvement, I believe, comes from filtering
out some of the noise with the low pass filter after oversampling with the ADCs.
Yep, all is good and well.
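That 6dB is easy to sanity-check on paper: an ideal lowpass that keeps a quarter of the sampled band passes a quarter of the white-noise power while leaving the signal alone, i.e. 10*log10(4) ≈ 6.02dB. A minimal sketch (my own, assuming a 255-tap windowed-sinc design rather than the poster's actual filter):

```python
import numpy as np
from scipy import signal

# 80 MHz sampling rate; keep |f| < 10 MHz before decimating by 4.
fs, f_cut = 80e6, 10e6
h = signal.firwin(255, f_cut, fs=fs)      # unity DC gain, so signal power is kept

# White noise of power P through an FIR comes out with power P * sum(h**2);
# for a brickwall filter sum(h**2) equals the kept band fraction (Parseval).
noise_gain = np.sum(h ** 2)
snr_improvement_db = 10 * np.log10(1.0 / noise_gain)
print(snr_improvement_db)                  # close to 6 dB
```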
After reading through some of Richard Lyons' Understanding DSP book, it says
an SNR improvement of 3dB is possible when the order of the FFT is doubled.
So... thinking about this, I used a 128 point FFT instead of a 64 point
FFT on the demodulator, using decimate-by-2 from 80 to 40MHz samples
instead of decimate-by-4 from 80 to 20MHz, but only kept bins 0-31 and bins
96-127 of the 128 point FFT, therefore recovering the original data. Only I
do not see the increase in SNR of 3dB at the output of my equaliser that I
expected to get from the FFT processing gain.
How so? Clearly I am missing something.
Also, does using a 128 point FFT @ 40MHz to decode the 64 point IFFT @
20MHz transmission give me any ICI improvement? Will using a higher order
FFT give me any protection against phase noise distortion [which contributes
as ICI]?
Re: ofdm & fft processing gain - Andrew Kan - 2003-11-06 04:42:00
By doubling the order of the FFT (N=128) and taking only half of the band
(64/128) for your output, you're essentially filtering out half of
your noise in the bins that you throw out. Think of this as cyclic filtering.
You may have achieved this same effect with your 80MHz digital FIR with
20MHz bandwidth, if implemented cyclically at the symbol boundary,
regardless of your final FFT order.
In your first system with N=64, however, I suspect you're just doing a
normal FIR. What you lose in this case is not SNR; what you
compromise is ISI immunity, because your FIR essentially "smears
out" the symbol boundaries in the guard prefix area. If your sim is
AWGN then you will not see the difference, hence the explanation for
what you see. Try a multipath channel that challenges the guard
interval duration + FFT length; you may see a difference.
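A minimal numpy sketch (my own, not the posters' simulation) of where Lyons' 3dB-per-doubling actually comes from: per-bin noise power grows like N, but a coherent tone observed for all N samples gains N^2 in its bin — so the gain requires the signal itself to last twice as long, which a fixed-length OFDM symbol cannot provide:

```python
import numpy as np

rng = np.random.default_rng(1)

def snr_in_bin(n_fft, n_trials=4000):
    # Unit-amplitude complex tone on bin 5, observed for n_fft samples,
    # plus unit-power complex AWGN; SNR is measured in the tone's FFT bin.
    t = np.arange(n_fft)
    tone_bin = np.fft.fft(np.exp(2j * np.pi * 5 * t / n_fft))[5]
    sig_power = np.abs(tone_bin) ** 2                      # = n_fft**2
    noise = (rng.standard_normal((n_trials, n_fft)) +
             1j * rng.standard_normal((n_trials, n_fft))) / np.sqrt(2)
    noise_power = np.mean(np.abs(np.fft.fft(noise, axis=1)[:, 5]) ** 2)  # ~ n_fft
    return 10 * np.log10(sig_power / noise_power)

gain = snr_in_bin(128) - snr_in_bin(64)
print(gain)   # close to 3 dB -- but only because the tone was observed twice as long
```

This is consistent with the reply above: a longer FFT over the same symbol only buys you the out-of-band noise you throw away, not per-subcarrier processing gain.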
If anyone may hire me in the Bay Area please let me know I'm young
(late 20s), educated, starving and motivated. | {"url":"http://www.dsprelated.com/showmessage/18066/1.php","timestamp":"2014-04-18T18:11:10Z","content_type":null,"content_length":"23372","record_id":"<urn:uuid:17219aaa-464a-461a-90fa-2795ef29e8fc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
The functions e^x and ln x are inverses of each other. The part we care about right now is that for any positive real number a,
e^{ln a} = a.
If we turn this equation around, we can write any positive real number a as
e^{ln a}.
For example,
7 = e^{ln 7},
so 7^x is the same thing as
(e^{ln 7})^x,
which by the rules of exponents is equal to
e^{(ln 7)x}.
We can find the derivative of
h(x) = e^{(ln 7)x}
using the chain rule. The outside function is
e^x,
whose derivative is also
e^x,
and the inside function is
(ln 7)x,
whose derivative is the constant
(ln 7).
The chain rule says
h'(x) = e^{(ln 7)x} × (ln 7).
Turning e^{(ln 7)x} back into 7^x, we see that
h'(x) = 7^x × (ln 7).
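As a quick numerical sanity check of this rule (my own sketch, not part of the original text), a symmetric difference quotient of 7^x agrees with 7^x × ln 7 at several points:

```python
import math

def numeric_derivative(f, x, h=1e-6):
    # symmetric difference quotient: (f(x + h) - f(x - h)) / 2h
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.0, 1.0, 2.5):
    approx = numeric_derivative(lambda t: 7 ** t, x)
    exact = (7 ** x) * math.log(7)
    print(x, round(approx, 6), round(exact, 6))   # the two columns agree
```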
This is where we find the rule for taking derivatives of exponential functions that are in other bases than e. | {"url":"http://www.shmoop.com/computing-derivatives/derivative-power-function-help.html","timestamp":"2014-04-16T10:23:07Z","content_type":null,"content_length":"33415","record_id":"<urn:uuid:76037da2-bd67-4a3e-9b14-2905f68b84e7>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00257-ip-10-147-4-33.ec2.internal.warc.gz"} |
Are the categories of bialgebras and weak bialgebras cocomplete/algebraic?
up vote 3 down vote favorite
I am trying to verify whether the category of bialgebras and then the category of weak bialgebras are cocomplete.
We know that algebraic categories are cocomplete (Thm. 4.5 of this book), so I have been trying to show that these two categories are algebraic.
I suppose that this is something known for bialgebras at least, but I am not able to find any reference for it (or a proof, for that matter). Does anyone know a reference or a proof?
universal-algebra ct.category-theory
2 Answers
The category of bialgebras and the category of Hopf algebras are algebraic over the category of coalgebras. (See the edit below for a reference.) Since the category of coalgebras is
locally presentable (as remarked in the paper Ralph referred to), it is complete, and since any category that is algebraic over a complete category is complete, the category of
bialgebras and the category of Hopf algebras are complete.
Indeed, one nice way of thinking of coalgebras (which falls out from its being locally finitely presentable) is that it's equivalent to the category of left exact functors from the
category of finite-dimensional algebras to $Set$. Limits of such left exact functors may be computed pointwise.
We can say more: any category that is algebraic over a locally presentable category is also locally presentable. It follows that the category of bialgebras and the category of Hopf
algebras are also cocomplete.
However, I claim neither the category of bialgebras nor the category of Hopf algebras are algebraic over set, i.e., the underlying set functors are not monadic. In fact, the underlying set functors don't even have left adjoints. If they did, then they would preserve the terminal object, but in each of these cases the ground field $k$ is the terminal bialgebra/Hopf algebra, and since the underlying set of $k$ is not terminal, the claim is proven.
Edit: I am not sure whether Hopf algebras are coalgebraic over the category of algebras; this is perhaps a difficult problem. The status of this and related universal properties
(including the case of bialgebras, which is easier) are well-presented in a paper by Porst. (Link fixed)
Edit: The category of weak bialgebras is also complete and cocomplete; this is proved by similar methods from the theory of accessible categories. Namely, the category of weak
bialgebras (like that of bialgebras and of Hopf algebras) can be constructed as an equifier between two natural transformations in the 2-category of accessible categories, just as in
the bialgebra case. I'll refer you to another paper of Porst for details (section 2).
Thanks for the answer! what about the category of weak bialgebras? – user15007 Sep 28 '12 at 17:15
I've just made another edit to answer the question. The paper by Porst doesn't treat weak bialgebras explicitly, but the exact same method applies to them. – Todd Trimble♦ Sep 28
'12 at 19:33
1 Actually, you should probably take a look at Porst's website: math.uni-bremen.de/~porst because he has a lot of papers on this sort of topic. – Todd Trimble♦ Sep 28 '12 at 19:36
I've seen in Adamek - Rosicky that an equifer is accessible (Lem. 2.76), so how do we get cocompleteness? – user15007 Oct 19 '12 at 22:17
Well, the underlying functor from weak bialgebras (I guess it's weak bialgebras you're asking about?) to algebras preserves and reflects colimits (in other words, colimits of weak
bialgebras are just computed as they would be at the underlying algebra level -- you can check this directly), so cocompleteness of weak bialgebras follows from cocompleteness of
algebras. The same argument can be applied towards bialgebras and Hopf algebras. – Todd Trimble♦ Oct 19 '12 at 23:40
The category of bialgebras (as well as the category of Hopf algebras) over a field is complete and cocomplete. Completeness is proved in the paper
A.L. Agore, Limits of coalgebras, bialgebras and Hopf algebras, Proc. Amer. Math. Soc. 139 (2011), 855-863.
Also right at the beginning of the paper the author states (with references):
The categories of coalgebras, bialgebras or Hopf algebras have arbitrary coproducts and coequalizers ([references]), hence these categories are cocomplete.
Not the answer you're looking for? Browse other questions tagged universal-algebra ct.category-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/108301/are-the-categories-of-bialgebras-and-weak-biaglebras-cocomplete-algebraic?answertab=oldest","timestamp":"2014-04-19T02:43:15Z","content_type":null,"content_length":"62461","record_id":"<urn:uuid:4ce67178-b57e-4cda-b088-8d14ace0a4a4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
"Algebraic" topologies like the Zariski topology?
The fact that a commutative ring has a natural topological space associated with it is still a really interesting coincidence. The entire subject of Algebraic geometry is based on this simple fact.
Question: Are there other categories of algebraic objects that have interesting natural topologies that carry algebraic data like the Zariski topology on a ring (spectrum)? If they exist, what are
they and how are they used?
Note: I wasn't really sure what to tag this, so I chose some general tags that seem to express the idea.
big-picture geometry abstract-algebra noncommutative-geometry
3 Wonderful questions! I've had similar vague questions along these lines floating around in my head for a couple years now. Is there a deeper explanation of this "really interesting coincidence"?
It seems to suggest that the definition of "topological space", which somehow always seemed kind of a weird definition to me, has some kind of deeper significance, since it pops up everywhere... –
Kevin H. Lin Feb 5 '10 at 22:21
1 @Kevin: I think some of the mystery disappears if you think in terms of the Kuratowski closure axioms. Any relation R between a set S and a set T - any relation at all - defines a topology on S by
setting the closure operation to be cl(a) = {b | forall t such that aRt, also bRt}. You get the Zariski topology when T is a ring, S its set of prime ideals, and the relation is vanishing. –
Qiaochu Yuan Feb 5 '10 at 22:38
1 Ah, sorry, that should read cl(A) = {b | forall t such that (aRt forall a in A), also bRt}. This is because a relation defines a Galois connection between the subsets of S and the subsets of T and
the composition of the two functions making up the Galois connection is a closure operator. – Qiaochu Yuan Feb 5 '10 at 22:41
2 @Qiaochu: The Zariski topology is a bit deeper than that. Ideal addition and multiplication commute with the operations in nontrivial ways. – Harry Gindi Feb 5 '10 at 22:42
Harry and Qiaochu: Thanks for your responses to my question, though I don't feel like I fully understand either of your answers. I'd like to hear you guys elaborate on this though. I welcome you
2 to email me on this matter. (Or perhaps I should make a new question on this.) BTW, I just noticed this interesting comment of Allen Knutson on Grothendieck, who apparently thought that the
definition of topological space is "wrong": mathoverflow.net/questions/8204/… – Kevin H. Lin Feb 6 '10 at 7:55
7 Answers
Yes, there are plenty of such things.
[In the following, "compact" implies "locally compact" implies "Hausdorff".]
1) To a Boolean algebra, one associates its Stone space, a compact totally disconnected space.
(Via the correspondence between Boolean algebras and Boolean rings, this is a special case of the Zariski topology -- but with a distinctive flavor -- that predates it.)
2) To a non-unital Boolean ring one associates its Stone space, a locally compact totally disconnected space.
3) To a commutative C*-algebra with unit, one associates its Gelfand spectrum, a compact space.
4) To a commutative C*-algebra without unit, one associates its Gelfand spectrum, a locally compact space.
6) To a commutative Banach ring [or a scheme over a non-Archimedean field, or...] one associates its Berkovich spectrum (the bounded multiplicative seminorms).

7) To a commutative ring R, one associates its real spectrum (prime ideals, plus orderings on the residue domain).
8) To a field extension K/k, one associates its Zariski Riemann surface (equivalence classes of valuations on K which are trivial on k).
This is by no means a complete list...
Addendum: I hadn't addressed the second part of your question, i.e., explaining what these things are used for. Briefly, the analogy to the Zariski spectrum of a commutative ring is tight
enough to give the correct impression of the usefulness of these other spectra/spaces: they are topological spaces associated (cofunctorially) to the algebraic (or algebraic-geometric,
topological algebraic, etc.) objects in question. They carry enough information to be useful in the study of the algebraic objects themselves (sometimes, e.g. in the case of Stone and
Gelfand spaces, they give complete information, i.e., an anti-equivalence of categories, but not always). In some further cases, one can get the anti-equivalence by adding further
structure in a very familiar way: one can attach structure sheaves to these guys and thus get a class of "model spaces" for a certain species of locally ringed spaces -- e.g., Berkovich
spectra glue together to give Berkovich analytic spaces.
1 Heh, you actually put compact Hausdorff in your answer. You've been caught red-handed! – Harry Gindi Feb 5 '10 at 22:01
1 @HG: Oops. The oversight has been corrected. May Nicolas B. forgive me. – Pete L. Clark Feb 5 '10 at 22:06
2 as an addendum, and at the risk of mentioning "uncool" parts of functional analysis: in 3) and 4) you can replace $C^*$-algebra with "Banach algebra" -- although then the canonical map
from an algebra A to C_0(Gelf(A)) might not be injective and is usually not isometric. Note that the analogy/contrast with usual comm. algebra is that we use the max ideal spectrum not
the prime ideal spectrum. I'm not sure how this reconciles with 6) ... – Yemon Choi Feb 5 '10 at 22:07
4 Harry, have you come across Peter Johnstone's book Stone Spaces? It covers most of Pete's examples, and does almost everything in a categorical way. – Tom Leinster Feb 5 '10 at 22:59
1 @Pete, this is a wonderful answer. I'd like to nominate it for best of MO (I saw someone else nominate a post like this recently. Is that actually something we can do, or was he just
using it as a rhetorical device?) – Harry Gindi Feb 6 '10 at 0:46
The $I$-adic topology on a commutative ring $A$ (with unity), where $I$ is an ideal of $A$. The closed sets are intersections of finite unions of sets of the form $a+I^n$ with $a\in A$ and
$n\in\mathbb{N}$ (where $\mathbb{N}$ includes $0$). This topology has many trivial but very useful properties such as: The ring $A$ is separated (=Hausdorff) with respect to this topology if
and only if $\displaystyle\bigcap_{n\in\mathbb{N}}I^n=0$. The most important example is the polynomial ring $A=B\left[X_1,X_2,...,X_n\right]$ with the ideal $I=\left(X_1,X_2,...,X_n\right)$.
This one is separated, but not complete. Its completion is the ring of power series $B\left[\left[X_1,X_2,...,X_n\right]\right]$.

This is probably the most elementary example of a topology in algebra. I think Szamuely's book has more advanced ones.
Krull topology on pro-finite completions of groups is perhaps of the same kind. – Anweshi Feb 5 '10 at 22:44
Interesting questions. Actually, this is indeed related to work on defining a natural topology on categories, which is part of noncommutative algebraic geometry.
A. Rosenberg defined the left spectrum for a noncommutative ring in 1981 (see The left spectrum, the Levitzki radical, and noncommutative schemes), and further generalized this spectrum to
any abelian category (see reconstruction of schemes), and proved the so called Gabriel-Rosenberg reconstruction theorem which led to the correct definition of noncommutative scheme. I might
have time to talk about this later. But for now, I shall just point out some papers, such as Spectra of noncommutative spaces.
In this paper, Rosenberg takes an abelian category as a "noncommutative space" and defines various spectra for different goals. (ONE remarkable destination is for representation theory of Lie
algebras and quantum groups.)
One can not only define spectrum for abelian categories; this notion also makes sense in a non-abelian category and a triangulated category. In the paper Spectra related with localizations,
Rosenberg defined the spectrum directly related to localization of categories. Roughly speaking, the spectrum of a category is a family of topologizing subcategories (which by definition, are
closed under direct sum, sub- and quotient; in particular, thick or Serre subcategories) satifying some additional conditions.
There is also another paper, Underlying spaces of noncommutative schemes, trying to investigate the underlying space of a noncommutative scheme or other noncommutative "space" in
noncommutative algebraic geometry. If we want to save flat descent in general, we might lose the base change property. In this work, Rosenberg deals with the "quasi-topology" (which means
dropping the base change property) and defines the associative spectrum of a category. Moreover: for the goals of representation theory, he built a framework relating representation theory
with the spectrum of abelian category (in particular, categories of modules). Actually, in this language, irreducible representations are in one-to-one correspondence with the closed points
in the spectrum; generic points in the spectrum also produce representations (not necessarily irreducible).
The most important part in this work is that it provided a completely categorical (algebro-geometric) way to do induction in an abelian category instead of the derived category. (I will explain this later if I have time). This semester, Rosenberg gave us a lecture course, using this framework to compute all the irreducible representations for the Weyl algebra, the enveloping
algebra, quantized enveloping algebras, algebras of differential operators, $SL\_2({\mathbb R})$ and other algebraic groups, or related associative algebras. It works very efficiently. For
example, computing irreducible representations of $U(sl_3)$ is believed to be very complicated, but using this spectrum framework, it becomes much simpler.
The general framework for these is contained in the paper Spectra, associated points and representation theory. If you want to see some concrete examples using this machine, you should look
at Rosenberg's old book Noncommutative Algebraic Geometry And Representations Of Quantized Algebras. There is another paper Spectra of `spaces' represented by abelian categories, providing
the general theory for this machinery.
Furthermore, we can define the spectrum for an exact category; even more generally, for any Grothendieck site, and so for any category (because any category has a canonical Grothendieck
pretopology). Rosenberg has recent work defining the spectrum for such categories -- Geometry of right exact `spaces' -- the main motivation for this work is to provide a background for
higher universal algebraic K-theory for a right exact category (a category with a family of strict epimorphisms can be taken as a one-sided exact category). More important motivation is to
study algebraic cycles for noncommutative schemes. (Warning: this paper is very abstract and hard to read. We will go through this paper in the lecture course this semester.)
All of these things will appear soon in his new book with Konstevich (but I am not sure of the exact time). If I have enough time to post, I will explain in more detail, how the theory of the
spectrum for abelian categories comes into representation theory, and how this picture is related to the derived picture of Beilinson-Bernstein and Deligne. In fact, today we have just
learned Beck's theorem for Karoubian triangulated categories and will do the DG-version of Beck's theorem later. And then he will introduce the spectrum for triangulated categories, and
explain the noncommutative algebraic geometry facts behind the BBD machine and the connection with his abelian machine.
9 Why did you do that with the text? +1, but oww, my eyes. – Harry Gindi Feb 6 '10 at 1:52
2 I've taken the liberty of editing the English in this answer/diary entry, and tweaking some of the BOLDFACE formatting. Hopefully it is now a bit easier to read; it has lots of interesting
detail and the author is evidently very enthusiastic about this programme of research, so I felt it was a shame left as it was. – Yemon Choi Feb 7 '10 at 9:59
Unfortunately the link format at the MPI preprint server has changed. For example, the previous mpim-bonn.mpg.de/preprints/send?bid=3617 does not work; instead use mpim-bonn.mpg.de/preblob/3617.
That is, after the MPI site name one writes preblob/number instead of preprints/send?bid=number. – Zoran Skoda May 12 '11 at 17:23
To any first order structure you can associate a Zariski-like topology, roughly by taking as closed sets the subsets definable by formulas without negation, see e.g. here and in the
article linked there.
If the first order structure is an algebraically closed field where you interpret the language of rings, you get back the Zariski topology.
Interesting finite groups tend to have interesting inherent geometries (just as orbit-stabilizer turns external actions into internal actions, similar ideas turn many external geometries
into coset geometries). The geometry induced by conjugation on Sylow p-subgroups is important for all finite groups, and turns out to describe the (p-completion of the) classifying space of
the group.
Geometry has always been an important part of group theory. Zassenhaus groups and sharply triply transitive groups typically have an underlying affine or projective plane they are acting on.
Early investigations of these special permutation groups in the 1930s led to some of the systematic development of finite geometry over things other than fields. You can recover the
algebraic structure of something like a ring just from the permutation action of the group (often on a regular subgroup). M. Hall Jr.'s textbook on the Theory of Groups has a nice exposition
of these ideas.
Of course finite groups of Lie type acting on their Borel subgroups also define important geometries, roughly called "buildings", and there are a great many references for those. This became
a very popular way to understand the non-sporadic groups. These groups of Lie type have other nice actions, often on interesting finite geometries called generalized polygons.
Equivariant homotopy people noticed that some of these geometries are nearly enough to define a classifying space of the group, along with a nice decomposition of its cohomology ring. D.
Benson and S.D. Smith's book on Classifying Spaces of Sporadic Simple Groups (MR2378355) describes these techniques with a reasonably algebraic feel. Modulo a few details, these are the
fusion systems Scott Carnahan mentioned in a previous thread, MO5659. These geometries were investigated in order to provide a more natural analogue of buildings for sporadic groups.
Actually, I suppose you might feel that classifying spaces themselves are naturally associated to finite groups.
Edit: I thought it might be helpful to point out the similarities to the Zariski topology: The Zariski topology basically encodes how prime ideals intersect. The fusion of a finite group
encodes how Sylow subgroups intersect. Strong fusion not only keeps track of the intersections, but also of the (G-inner) maps between those intersections, so that the fusion becomes a
category. Since fusion controls cohomology, it seemed natural to look at how fusion describes the classifying space of a group. Amazingly, it does a great job of describing the p-completion
of the classifying space and facilitates fairly direct calculations. In other words, the data encoded by the "prime subgroups" (Sylow p-subgroups) also encodes a natural topological space
associated to the group, its (p-completed) classifying space.
Several areas of combinatorics, like certain parts of graph theory and finite geometry, also seem to be based on the simple fact that interesting groups have interesting geometries. A recent
classification of Steiner triple systems followed from detailed classifications of finite simple groups and multiply transitive permutation groups, and several families of graphs are
interesting because of their automorphism groups.
I hope it is clear too that separating a group from its actions is not sensible. The actions of a group are encoded by the conjugacy classes of its subgroups, and it is entirely internal.
Most geometries associated to groups are also internal. This is basically why the classification of finite simple groups can succeed: the natural action of a group is already contained
inside it in an easy to describe way, so that once the local structure of a group is sufficiently similar to a known group, the group itself is isomorphic to a known group.
In model theory they define and study such 'algebraic data like the Zariski topology' irrespective of where these data come from. These data are called Zariski geometries; they admit, e.g.,
some intersection theory, can be used to prove Chow's lemma, admit some classification results in dimension 1, etc. You may want to have a look at the recent book on Zariski geometries and
the references therein (Zariski Geometries: Geometry from the Logician's Point of View, by Boris Zilber).
Also, the book has some examples which sometimes need some work.
Given a group theoretic class $\mathfrak{X}$ (e.g., finite groups, soluble groups, etc.), to each group $G$ one can associate the pro-$\mathfrak{X}$ topology on $G$ by taking as a basis of
neighbourhoods of the identity the collection of normal subgroups $N$ of $G$ for which the quotient group $G/N$ belongs to $\mathfrak{X}$. A group is residually an $\mathfrak{X}$-group
precisely when this topology is Hausdorff. (To get an actual topology, $\mathfrak{X}$ has to be hereditary and closed under (finite) direct products.)
Evidence is presented to show that the phases of two of the Earth’s major climate systems, the North Atlantic Oscillation (NAO) and the Pacific Decadal Oscillation (PDO), are related to changes in
the Earth’s rotation rate. We find that the winter NAO index depends upon the time rate of change of the Earth’s length of day (LOD). In addition, we find that there is a remarkable correlation
between the years where the phase of the PDO is most positive and the years where the deviation of the Earth’s LOD from its long-term trend is greatest.
In order to prove that the variations in the NAO and PDO indices are caused by changes in the Earth’s rotation rate, and not the other way around, we show that there is a strong correlation between
the times of maximum deviation of the Earth’s LOD from its long-term trend and the times where there are abrupt asymmetries in the motion of the Sun about the CM of the Solar System.
At first glance, there does not appear to be an obvious physical phenomenon that would link the Sun’s motion about the Solar System’s CM to the Earth’s rotation rate. However, such a link could occur
if the rate of precession of the line-of-nodes of the Moon’s orbit were synchronized with orbital periods of Terrestrial planets and Jupiter, which in turn would have to be synchronized with the
orbital periods of the three remaining Jovian planets. In this case, the orbital periods of the Jovian planets, which cause the asymmetries in the Sun’s motion about the CM, would be synchronized
with a phenomenon that is known to cause variations in the Earth’s rotation rate, namely the long term lunar tides.
The periodicities seen in the asymmetry of the solar motion about the CM are all submultiples of the 179 year Jose cycle, with the dominant periods being 1/5 (= 35.87 yrs), 1/9 (= 19.86 yrs) and 1/14
(= 12.78 yrs). In addition, the realignment time for the orbits of Venus, Earth and Jupiter is ¼ of the 179 year Jose cycle (= 44.77 yrs).
Through what appears to be a “Grand Cosmic Conspiracy” we find that:
6.393 yrs = (the 179 year repetition cycle of the Solar motion about the CM) / 28
6.396 yrs = (the 44.77 year realignment time for Venus, Earth, and Jupiter) / 7
which just happens to be the realignment time for the orbits of the planets Venus, Earth and Mars (= 6.40 yrs).
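As a quick arithmetic check (mine, not part of the original post), the divisions quoted above can be verified directly; the 179 year Jose cycle and the 44.77 year realignment time are simply taken as given:

```python
# Verify the quoted near-coincidences around 6.40 years.
jose_cycle = 179.0    # years: repetition cycle of the solar motion about the CM
vej_realign = 44.77   # years: realignment time for Venus, Earth and Jupiter
vem_realign = 6.40    # years: realignment time for Venus, Earth and Mars

a = jose_cycle / 28   # quoted as 6.393 yrs
b = vej_realign / 7   # quoted as 6.396 yrs

print(round(a, 3), round(b, 3))   # 6.393 6.396
print(abs(a - vem_realign) < 0.01 and abs(b - vem_realign) < 0.01)   # True
```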
The significance of the 6.40 year repetition period is given added weight by the fact that if you use it to modulate the sidereal year of the Earth/Moon system, the side-lobe period that is produced
almost perfectly matches the 2nd harmonic time interval over which there are the greatest changes in the meridional and zonal tidal stresses acting upon the Earth (1 ¼ TD = 433.2751 days = 1.18622
years, where TD is the draconitic year).
We know that the strongest planetary tidal forces acting on the lunar orbit come from the planets Venus, Mars and Jupiter. In addition, we know that, over the last 4.6 billion years, the Moon has
slowly receded from the Earth. During the course of this lunar recession, there have been times when the orbital periods of Venus, Mars and Jupiter have been in resonance(s) with the precession rate
for the line-of-nodes of the lunar orbit. When these resonances have occurred, they would have greatly amplified the effects of the planetary tidal forces upon the lunar orbit. Hence, the observed
synchronization between the precession rate of the line-of-nodes of the lunar orbit and the orbital periods of Venus, Earth, Mars and Jupiter, could simply be a cumulative fossil record left behind
by these historical resonances.
Here is an interesting plot which asks a very pertinent question about Solar cycle 24. Where is the cycle 24 [FeXIV] emission that usually reaches the Sun's pole around about the time of solar
maximum? Is this an indicator that we still have a few years to wait till solar maximum or is it just telling us that cycle 24 will have a very weak maximum?
Reference: http://www.boulder.swri.edu/~deforest/SPD-sunspot-release/6_altrock_rttp.pdf
THE WORLD MEAN TEMPERATURE WARMS(/COOLS) IF THE IMPACT OF EL NINOS EXCEEDS(/DOES NOT EXCEED) THE IMPACT OF LA NINAS OVER A GIVEN EPOCH.
Distinct Epochs in the Earth's Atmospheric Circulation Patterns and the Earth Rotation
A. The above graph is part of Figure 2.1. from Klyashtorin, L.B., Climate Change and Long-Term Fluctuations of Commercial Catches - The Possibility of Forecasting, FAO Fisheries Technical Paper No.
410, Rome FAO, 2001.
It shows the close correlation between the rotation rate of the Earth (measured by the Length-of-Day) and the zonal component of the Atmospheric Circulation Index (ACI). This graph shows that the
zonal circulation patterns evident in the Earth's atmosphere can be broken up into four 30 year epochs starting in the years 1880-85 [LOD curve only], 1905-1910, 1940-1945 and 1970-1975.
B. The above graph comes from figure 2.2 of Klyashtorin, L.B., Climate Change and Long-Term Fluctuations of Commercial Catches - The Possibility of Forecasting, FAO Fisheries Technical Paper No. 410,
Rome FAO, 2001.
The above graph shows that if you shift the LOD curve forward by ~ 6 years you get an excellent fit between the LOD curve and the de-trended world mean temperature anomaly. Again the overall pattern
can be broken up into four distinct 30 year epochs starting in the years 1880, 1910, 1940 and 1970.
C. The above graph comes from figure 2.23 of Klyashtorin, L.B., Climate Change and Long-Term Fluctuations of Commercial Catches - The Possibility of Forecasting, FAO Fisheries Technical Paper No.
410, Rome FAO, 2001.
The above graph shows that if you shift the ACI curve forward by ~ 4 years you get an excellent fit between the ACI curve and the de-trended world mean temperature anomaly (dT). Again the overall
pattern can be broken up into three distinct 30 year epochs starting in the years 1910, 1940 and 1970. The ACI index does not extend far enough back to set a starting date for the first epoch, but
the dT and LOD curves suggest a date sometime around 1875 to 1880.
The (Extended) Multivariate ENSO Index
The Multivariate ENSO Index is defined at the NOAA web site located at:
The Extended Multivariate ENSO Index is defined at the NOAA web site located at:
The important point to note is that Multivariate ENSO Index is the most precise way to follow variations in the ENSO phenomenon:
Negative values of the MEI represent the cold ENSO phase, a.k.a. La Niña, while positive MEI values represent the warm ENSO phase (El Niño).
The Cumulative Sum of the MEI
If the cumulative sum of the MEI over a given epoch steadily increases throughout the epoch, then the impact of El Ninos exceeds the impact of La Ninas over this epoch.
If the cumulative sum of the MEI over a given epoch steadily decreases throughout the epoch, then the impact of La Ninas exceeds the impact of El Ninos over this epoch.
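The bookkeeping behind these two rules is just a running sum taken within each epoch. The sketch below uses made-up index values (the actual monthly MEI series is not reproduced here) purely to illustrate the classification:

```python
# Cumulative MEI within one epoch: a rising sum means the El Ninos dominate,
# a falling sum means the La Ninas dominate. The values are illustrative only.
def cumulative_mei(mei_values):
    total, sums = 0.0, []
    for v in mei_values:
        total += v
        sums.append(total)
    return sums

warm_epoch = [0.4, -0.1, 0.6, 0.2, -0.2, 0.5]    # El Nino impact dominates
cool_epoch = [-0.5, 0.1, -0.4, -0.3, 0.2, -0.6]  # La Nina impact dominates

print(cumulative_mei(warm_epoch)[-1] > 0)   # True  -> warming epoch
print(cumulative_mei(cool_epoch)[-1] < 0)   # True  -> cooling epoch
```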
The dotted red line in the above graph shows the cumulative sum of the extended Multivariate ENSO Index (MEI) between the years 1880 and 2000 A.D. The cumulative sum has been taken over each of the
four 30 year epochs, starting in the years 1880, 1910, 1940, and 1970.
The solid blue line in the above graph shows the cumulative sum of the extended Multivariate ENSO Index (MEI) between the years 1886 and 2006 A.D. The cumulative sum has been taken over each of the
four 30 year epochs, starting in the years 1886, 1916, 1946, and 1976.
It is clearly evident from this plot that whenever the cumulative MEI index is systematically decreasing over a 30 year epoch i.e. between 1886 and 1915, and between 1946 and 1975, the world's mean
temperature decreases. It is also evident that whenever the cumulative MEI index is systematically increasing over a 30 year epoch i.e. between 1916 and 1945, and between 1976 and 2005, the world's
mean temperature increases.
1. The ratio of the impact of El Ninos to the impact of La Ninas upon climate can be monitored over multi-decadal time scales using the cumulative MEI.
2. The cumulative MEI shows that since roughly 1880 there have been four main climate epochs, each 30 years long. There have been two 30 year periods of cooling (i.e. from 1886 to 1915, and from
1946 to 1975) and two 30 year periods of heating (i.e. from 1916 to 1945, and from 1976 to 2005).
3. Periods of warming occur whenever the impact of El Ninos exceeds the impact of La Ninas. Periods of cooling occur whenever the impact of La Ninas exceeds the impact of El Ninos.
Re: "How many sides does a circle have?"
The post is inspired by this story told by JDH at Math.SE.
My third-grade son came home a few weeks ago with similar homework questions:
How many faces, edges and vertices do the following have?
□ cube
□ cylinder
□ cone
□ sphere
Like most mathematicians, my first reaction was that for the latter objects the question would need a precise definition of face, edge and vertex, and isn’t really sensible without such definitions.
But after talking about the problem with numerous people, conducting a kind of social/mathematical experiment, I observed something intriguing. What I observed was that none of my non-mathematical
friends and acquaintances had any problem with using an intuitive geometric concept here, and they all agreed completely that the answers should be
• cube: 6 faces, 12 edges, 8 vertices
• cylinder: 3 faces, 2 edges, 0 vertices
• cone: 2 faces, 1 edge, 1 vertex
• sphere: 1 face, 0 edges, 0 vertices
Indeed, these were also the answers desired by my son’s teacher (who is a truly outstanding teacher). Meanwhile, all of my mathematical colleagues hemmed and hawed about how we can’t really answer,
and what does “face” mean in this context anyway, and so on; most of them wanted ultimately to say that a sphere has infinitely many faces and infinitely many vertices and so on. For the homework, my
son wrote an explanation giving the answers above, but also explaining that there was a sense in which some of the answers were infinite, depending on what was meant.
At a party this past weekend full of mathematicians and philosophers, it was a fun game to first ask a mathematician the question, who invariably made various objections and refusals and said it
made no sense and so on, and then the non-mathematical spouse would forthrightly give a completely clear account. There were many friendly disputes about it that evening.
Let’s track down this intuitive geometric concept that non-mathematicians possess. We are given a set ${E\subset \mathbb R^n}$ and a point ${p\in E}$, and try to figure out whether ${p}$ is a vertex,
a part of an edge, or a part of a face. The answer should depend only on the shape of the set near ${p}$.
It is natural to say that a vector ${v}$ is tangent to ${E}$ at ${p}$ if going along ${v}$ we stay close to the set. Formally, the condition is ${\lim_{t\to 0+} t^{-1}\,\mathrm{dist}\,(p+tv,E)=0}$.
Notice that the limit is one-sided: if ${v}$ is tangent, ${-v}$ may or may not be tangent.
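The one-sided limit in this definition is easy to probe numerically. In the sketch below (my own illustration, not from the original post) ${E}$ is the graph of ${y=|x|}$ in the plane and ${p}$ is the origin: the direction ${(1,1)}$ comes out tangent, while ${(0,1)}$ does not.

```python
import math

# E = graph of y = |x| in R^2: the union of two rays from the origin.
def dist_to_E(q):
    def dist_to_ray(q, u):            # u is a unit vector; ray = {s*u : s >= 0}
        s = max(0.0, q[0] * u[0] + q[1] * u[1])
        return math.hypot(q[0] - s * u[0], q[1] - s * u[1])
    r = 1 / math.sqrt(2)
    return min(dist_to_ray(q, (r, r)), dist_to_ray(q, (-r, r)))

def tangency_ratio(v, t=1e-6):
    # t^{-1} dist(p + t*v, E) with p the origin; this tends to 0 iff v is tangent
    return dist_to_E((t * v[0], t * v[1])) / t

print(tangency_ratio((1.0, 1.0)))   # ~0: (1,1) is tangent at the origin
print(tangency_ratio((0.0, 1.0)))   # ~0.707 = 1/sqrt(2): (0,1) is not tangent
```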
The set of all tangent vectors to ${E}$ at ${p}$ is denoted by ${T_pE}$ and is called the tangent cone. It is indeed a cone in the sense of being invariant under scaling. This set contains the zero
vector, but need not be a linear space. Let’s say that the rank of point ${p}$ is ${k}$ if ${T_pE}$ contains a linear space of dimension ${k}$ but no linear space of dimension ${k+1}$.
Finally, define a rank ${k}$ stratum of ${E}$ as a connected component of the set of all points of rank ${k}$.
If ${E}$ is the surface of a polyhedron, we get the familiar concepts of vertices (rank 0 strata), edges (rank 1) and faces (rank 2). For each of the homework solids the answer agrees with the
opinion of the non-mathematical crowd. Take the cone as an example:
At the vertex the tangent cone to the cone is… a cone. It contains no nontrivial linear space, hence the rank is 0. This is indeed a vertex.
Along the edge of the base the tangent cone is the union of two halfplanes:
Tangent cone at an edge point
Here the rank is 1: the tangent cone contains a line, but no planes.
Finally, at every point of smoothness the tangent cone is the tangent plane, so the rank is 2. The set of such points has two connected components, separated by the circular edge.
So much for the cone. As for the circle mentioned in the title, I regrettably find myself in agreement with Travis.
More seriously: the surface of a convex body is a classical example of an Alexandrov space (metric space of curvature bounded below in the triangle comparison sense). Perelman proved that any
Alexandrov space can be stratified into topological manifolds. Lacking an ambient vector space, one obtains tangent cones by taking the Gromov-Hausdorff limit of blown-up neighborhoods of ${p}$. The
tangent cone has no linear structure either — it is also a metric space — but it may be isometric to the product of ${\mathbb R^k}$ with another metric space. The maximal ${k}$ for which the tangent
cone splits off ${\mathbb R^k}$ becomes the rank of ${p}$.
Recently, Colding and Naber showed that the above approach breaks down for spaces which have only Ricci curvature bounds instead of triangle-comparison curvature. More precisely, their examples are
metric spaces that arise as a noncollapsed limit of manifolds with a uniform lower Ricci bound. In this setting tangent cones are no longer uniquely determined by ${p}$, and they show that different
cones at the same point may have different ranks.
Warranty payout?
October 24th 2009, 12:12 PM #1
Warranty payout?
At time of purchase, the value of a certain TV is $1000, and its value in the future is given by w(t) = 12.5(3^(4-t) - 1), 0 <= t <= 4, where t is the time in years (after 4 years, the TV has no
more value with respect to warranty). If it fails during the first four years, the warranty pays w(t). Compute the expected value of the payment on the warranty if the lifetime distribution of the
TV is exponential with expectation 10 years. (Note: 3^x = e^(x ln 3).)
Use the known pdf of the exponential distribution, f(t), to calculate the expected value of W: $E(W) = \int_0^{+\infty} w(t) f(t) \, dt = \int_0^4 w(t) f(t) \, dt$.
If you need more help please show all your work and say exactly where you're stuck.
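As a numerical sanity check (mine, not part of the original reply): with $f(t) = \frac{1}{10}e^{-t/10}$, the integral above can be evaluated both by quadrature and in closed form using the hint $3^{4-t} = 81 e^{-t \ln 3}$.

```python
import math

def w(t):   # warranty payout at time t (years)
    return 12.5 * (3 ** (4 - t) - 1)

def f(t):   # exponential pdf with mean 10 years
    return 0.1 * math.exp(-t / 10)

# Composite Simpson's rule for the integral over [0, 4]
n = 2000
h = 4 / n
ev_num = (h / 3) * sum(
    (1 if k in (0, n) else 4 if k % 2 else 2) * w(k * h) * f(k * h)
    for k in range(n + 1)
)

# Closed form: E(W) = 1.25*(81*(1 - e^{-4a})/a - 10*(1 - e^{-0.4})), a = ln 3 + 1/10
a = math.log(3) + 0.1
ev_cf = 1.25 * (81 * (1 - math.exp(-4 * a)) / a - 10 * (1 - math.exp(-0.4)))

print(round(ev_num, 2), round(ev_cf, 2))   # both ~79.65 (dollars)
```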
October 24th 2009, 04:36 PM #2
What is the geometric point of view of an algebraic line bundle compared to a analytic line bundle?
Hi folks,
I'm trying to learn more about line bundles, invertible sheaves and divisors on schemes. I understand the connection between Cartier and Weil divisors and the connection between Cartier divisors
and invertible sheaves, and how to get from one to another (as far as possible).
But compared to my analytic picture of a line bundle I don't see how to come from an invertible sheaf to the line bundle (apart from the fact that these two terms coincide). Where is 'the line'
in my locally free rank one $\mathcal{O}_X$-module?
greatz Johannes
ag.algebraic-geometry line-bundles
You can take a trivialising open cover for your line bundle; that will give you transition functions. You can then define a scheme by gluing the open patches crossed with $\mathbb{A}^1$ via the
given transition functions. It's an exercise in Hartshorne. – Yosemite Sam Mar 29 '12 at 17:31
I think this question would be better suited to math.stackexchange.com. One reference: Shafarevich, "Basic Algebraic Geometry" Book 2. – Artie Prendergast-Smith Mar 29 '12 at 17:37
If you prefer, you can find 'the line' hiding in your sheaf by noticing that the fibres are of dimension 1: if $F$ is your sheaf and $x$ is a (closed) point of your variety $X$, the fibre is
defined as $F_x \otimes_{\mathcal{O}_{X,x}} k(x)$, where $F_x$ is the stalk of $F$ at $x$, $\mathcal{O}_{X,x}$ is the local ring of your variety at $x$ and $k(x) = \mathbb{C}$ is the residue
field. (This is the same as considering your point as a morphism $x: Spec\, \mathbb{C} \to X$ and taking the pullback $x^*F$.) By assumption your sheaf is locally free of rank one, so
$F_x \cong \mathcal{O}_{X,x}$, hence the fibre is $\mathbb{C}$. – Yosemite Sam Mar 29 '12 at 17:41
en.wikipedia.org/wiki/… – Mark Grant Mar 29 '12 at 18:13
Do I see it right that if I don't look at the fibres of a point, but at the pullback of some open subset, I get the following: $F(U) \otimes_{\mathcal{O}_X} k(x)$, and in some good cases (for
example $k$ algebraically closed) the residue field does not depend on the point (the subset) and we end up in $F(U) \otimes_{\mathcal{O}_X} k$? – Johannes Mar 29 '12 at 22:17
2 Answers
Perhaps this might help as some intuition. Instead of looking for "the line" in a locally free sheaf, let's look in the other direction. Let's start with a line bundle, and
move back towards sheaves.
So take a line bundle $\pi : L \to X$. This bundle has a sheaf of sections $\mathcal{O}_L$ defined by
$$\mathcal{O}_L(U) = \{s : U \to L \mid \pi \circ s = id_U\}$$
i.e. over an open set $U$ in $X$, $\mathcal{O}_L(U)$ is the collection of all sections of $L$ over $U$. It can be shown that this is a locally free sheaf of rank one.
Now, for a vector bundle of rank $n$, all of this is true, but the locally free sheaf is now of rank $n$.
Hopefully this provides at least a little intuition for the relation between the two.
Let $\mathcal F$ be a locally free $\mathcal O_X$-module. Then $\mathcal R := Sym_{\mathcal O_X}(\mathcal F)$, the tensor products being over $\mathcal O_X$, is a sheaf of rings, and we
can take its $\bf Spec$ to get a space over $X$. That space is the corresponding vector bundle.
$\mathcal R$'s grading is what gives the dilation action on the fibers. The map $\mathcal F \to (\mathcal F \otimes \mathcal O_X) \oplus (\mathcal O_X \otimes \mathcal F)$, $f \mapsto (f\otimes 1) + (1\otimes f)$, induces a cocommutative comultiplication $\mathcal R \to \mathcal R \otimes \mathcal R$, which gives the vector addition on ${\bf Spec}\ \mathcal R$, I think.
don't you want to write Sym somewhere? – Yosemite Sam Apr 4 '12 at 8:48
Oops! Fixed (I had the tensor algebra before). – Allen Knutson Apr 4 '12 at 10:18
A cow is tied to one corner ( vertex ) of a 20 m per side square barn. The cow has a 50 m length of chain to keep it from escaping while it is grazing. How big is the cow's grazing area?
This problem, or similar ones, appears a couple of times in this forum as well as on the internet. It does have an exact answer, but that is for another thread; here we will see what GeoGebra can do.
1) Draw points A(0,50) and B(50,0).
2) Use the circle with radius tool to draw a circle with A as the center and radius 50.
3) Draw a line through points A and B.
4) Draw a perpendicular line to AB through point A.
5) Draw a circle with center A and radius 20.
6) Get the points of intersection with this smaller circle and the line AB and the line perpendicular to AB. Your drawing should look like Fig 1.
7) Hide points E and D and the y axis.
8) Draw a line parallel to AFB through C and a line through F parallel to AC.
9) Get the point of intersection of the two new lines that go through C and F. It will be labeled G. Check Fig 2.
10) Hide the small circle and the 4 lines, then use the polygon tool and click on A, F, G, and C; a poly1 will be created with sides 20 and area 400 square metres.
11) Use the insert text tool to label the diamond. Call it "Barn".
12) Use the circle with radius tool to draw a circle with radius 30 and center C. Do the same with center F.
13) Get the point of intersection with the small leftmost circle and the larger one. Point H will be created there.
14) Get the point of intersection with the small rightmost circle and the larger one. Point I will be created there.
15) Get the points of intersection of the two small circles; points K and J will be created. Hide K. For clarity color H, J and I red and make them a little bigger. See Fig 3.
16) Rename A, to "Cow".
17) We need the blackened area at the bottom. See Fig 4. We will use the IntegralBetween command.
18) Enter in the input bar,
IntegralBetween[-sqrt(-x² - 20sqrt(2) x + 700) - 10sqrt(2) + 50, 50 - sqrt(2500 - x²), x(H), x(J)]
This is nothing but the area of the leftmost little circle minus the area of the big circle from H to J.
19) Immediately in the algebra pane you will see i = 120.33845614907142. This corresponds to the red area in Fig 5.
20) To get the entire area that the cow can graze enter in the input bar,
2500π - 400 - 2i. The answer will pop up in the algebra pane:
j = 7213.30472167634
which is correct to all displayed digits. We are done!
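The same number can be reproduced outside GeoGebra. The sketch below (a Python re-check of mine, not part of the original construction) evaluates the same IntegralBetween integrand with Simpson's rule, using x(H) = -25√2 (the 30 m circle is internally tangent to the 50 m circle there, since 50 - 30 = 20 = |AC|) and x(J) = 0 (the axis of symmetry):

```python
import math

R2 = 20 * math.sqrt(2)    # coefficient in the small-circle equation
xH = -25 * math.sqrt(2)   # x(H): the 30 m circle touches the 50 m circle here
xJ = 0.0                  # x(J): lower intersection of the two 30 m circles

def gap(x):
    # (lower arc of the 30 m circle centred at C) minus (lower arc of the 50 m circle)
    small = -math.sqrt(-x * x - R2 * x + 700) - 10 * math.sqrt(2) + 50
    big = 50 - math.sqrt(2500 - x * x)
    return small - big

# Composite Simpson's rule for i = IntegralBetween[...]
n = 4000
h = (xJ - xH) / n
i = (h / 3) * sum((1 if k in (0, n) else 4 if k % 2 else 2) * gap(xH + k * h)
                  for k in range(n + 1))

grazing = 2500 * math.pi - 400 - 2 * i
print(round(i, 3), round(grazing, 2))   # ~120.338 and ~7213.3 square metres
```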
For completeness the exact answer is
Feast solver - internal memory error
Hi all.
Calling dfeast_scsrgv, it returns with info=-1, an internal memory error.
The output is
Extended Eigensolvers: Size subspace 10
Extended Eigensolvers: Resize subspace 0
Extended Eigensolvers: Error with Internal memory allocation
I don't know if the error message is accurate; I am using the 64-bit version (11.1.0.103, Windows), and the matrices are pretty small, so it would need to be allocating a huge amount of memory to
run out of address space.
Is it trying to create a temporary file? My C: drive is an SSD without much space left. Changing the program working directory to E: does not help, so is it creating a file in a specific location?
Is it a problem with the matrices? The Extended Eigensolver Functionality page in the help says A should be real symmetric, and B should be real symmetric positive definite. I don't have these:
my A is real symmetric positive definite, and B is real symmetric. About half of the rows in B (and the corresponding columns) are zero, i.e. it could be reordered to $\begin{bmatrix} B & 0\\ 0 & 0 \end{bmatrix}$ - this isn't positive definite, and the submatrix isn't either, so should it return an error of -3? But it seems to solve this for a different search range of eigenvalues
(possibly the change makes some internally generated matrix much larger?)
Mathematically, the zeroes in the B matrix should give eigenvalues of +inf, and an eigenvector that is +1 for the entry corresponding to the matrix row, and zero for all other values, for each row/
column that is zero. The other eigenvalues can be found by generating a modified A matrix - I don't know if the FEAST solver will do this internally and solve the modified problem, or if these
eigenvalues all get rejected because the search range is bounded and the solver just works without B actually being positive definite.
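For what it's worth, the "modified A matrix" can be written down explicitly. If B is reordered to the block form [[B1, 0], [0, 0]] with B1 positive definite, then partitioning A conformally, the finite eigenvalues of A v = λ B v are exactly the eigenvalues of the reduced pencil (A11 - A12 A22⁻¹ A21) v1 = λ B1 v1, while the zero block contributes the infinite ones. A small NumPy sketch (my own illustration, not the attached test program; it assumes NumPy is available) checks this on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3                      # total size; size of the nonzero block of B

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # real symmetric positive definite A
C = rng.standard_normal((k, k))
B1 = C @ C.T + k * np.eye(k)     # SPD upper-left block of B
B = np.zeros((n, n))
B[:k, :k] = B1                   # B is singular: its last rows/columns are zero

A11, A12 = A[:k, :k], A[:k, k:]
A21, A22 = A[k:, :k], A[k:, k:]
S = A11 - A12 @ np.linalg.solve(A22, A21)   # Schur complement: the "modified A"

# The finite eigenvalues of A v = lam * B v are the eigenvalues of B1^{-1} S.
finite = np.sort(np.linalg.eigvals(np.linalg.solve(B1, S)).real)

# Each one must make the full pencil A - lam*B (numerically) singular.
for lam in finite:
    sv = np.linalg.svd(A - lam * B, compute_uv=False)
    assert sv[-1] / sv[0] < 1e-10
print(finite)
```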
The alternative is to solve the reverse problem, swapping A and B, and solving for 1/λ, but then the range of eigenvalues gets inverted, and I need to find the largest instead of the smallest. I
might use ARPACK or something for this; FEAST doesn't seem particularly well suited for finding the largest.
Attached is a test program, and the a, b matrices. If changed to search a different range, the test program will find 4 eigenvalues around 182.9, then another 4 around 228.
The benchmark I am testing against suggests I should find a couple of eigenvalues between 1 and 5, but I don't have access to Matlab to check my matrices directly, so the difference might be the
matrices I am solving.
So, the questions are:
1. Is the out of memory error message correct, or is it a different problem?
2. Does B have to be positive definite?
3. Should I always be getting an error message with any matrix of this shape?
4. Are the values found accurate, or is B not being pos def causing FEAST to find a totally wrong solution?
Log in to leave a comment. | {"url":"https://software.intel.com/pt-br/forums/topic/474787","timestamp":"2014-04-19T08:16:58Z","content_type":null,"content_length":"54267","record_id":"<urn:uuid:0a341747-4102-4971-b2ee-fa88abb2ab74>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Giving Percentages to numbers
No Profile Picture
Registered User
Devshed Newbie (0 - 499 posts)
Join Date
Aug 2013
Rep Power
Giving Percentages to numbers
Hello, I'm new to these forums and decided to join when I heard from a few friends that this was the best forum on the net for programming. I hope to learn a lot from you guys. Now onto the question:
I was wondering how I would take previously given numbers and weight them by percentages. For example, if I give three numbers and want a final number, I want the final number to be 30 percent of the first number, 20 percent of the second, and 50 percent of the third. How would I do something like that? I am confused at this point because I have never done anything like this before. Any help would be appreciated, thanks.
Multiply the numbers with respectively 0.3, 0.2 and 0.5 and then add the result together.
If you need more help, you will have to show what you have done/tried. For example, where do the numbers come from?
Originally Posted by MrFujin
Multiply the numbers with respectively 0.3, 0.2 and 0.5 and then add the result together.
If you need more help, you will have to show what you have done/tried. For example, where do the numbers come from?
This is what I have so far.
var Hmwkgrade = window.prompt("Enter Your Homework Grade","");
var Responsibilitygrade = window.prompt("Enter Your Responsibility Grade","");
var Finalgrade = window.prompt("Enter Your Final Grade","")
var HG = (Hmwkgrade*75)
var RG = (Responsibilitygrade*10)
var FG = (Finalgrade*15)
var Totalgrade = (HG + RG + FG)
document.write(Totalgrade );
The numbers seem to be much higher than they should be.
I think this is more a problem of elementary math (and not reading carefully enough) than it is a programming issue.
75 per cent(!) of Hmwkgrade is 75 hundredths of it. It's not 75 times Hmwkgrade.
I mean, I do wish that an interest rate of 3% would triple the capital each year. But unfortunately, that's not what "per cent" means.
So what you want is
Hmwkgrade * 0.75
MrFujin actually said this in his reply.
Originally Posted by Jacques1
I think this is more a problem of elementary math (and not reading carefully enough) than it is a programming issue.
75 per cent(!) of Hmwkgrade is 75 hundredths of it. It's not 75 times Hmwkgrade.
I mean, I do wish that an interest rate of 3% would triple the capital each year. But unfortunately, that's not what "per cent" means.
So what you want is
Hmwkgrade * 0.75
MrFujin actually said this in his reply.
Thanks, I made such a stupid mistake. I copied a particular part of a program I made and didn't adjust it. Thank you for your help.
Originally Posted by MrFujin
Multiply the numbers with respectively 0.3, 0.2 and 0.5 and then add the result together.
If you need more help, you will have to show what you have done/tried. For example, where do the numbers come from?
How would I be able to give the total grade a letter value, for example A+ >=95 would I need to use an array?
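A compact way to combine the weighted total with a letter lookup, shown in Python for brevity rather than the thread's JavaScript, and with made-up cutoffs:

```python
def total_grade(hmwk, resp, final):
    # 75% homework, 10% responsibility, 15% final exam
    return hmwk * 0.75 + resp * 0.10 + final * 0.15

def letter(score):
    # (cutoff, letter) pairs checked from the top down; this list
    # plays the role of the "array" asked about above.
    scale = [(95, "A+"), (90, "A"), (80, "B"), (70, "C"), (60, "D")]
    for cutoff, grade in scale:
        if score >= cutoff:
            return grade
    return "F"

total = total_grade(90, 100, 80)
print(total, letter(total))
```

The same structure translates almost line-for-line to JavaScript with an array of [cutoff, letter] pairs.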
Rep Power | {"url":"http://forums.devshed.com/javascript-development-115/giving-percentages-949648.html","timestamp":"2014-04-19T13:48:04Z","content_type":null,"content_length":"74735","record_id":"<urn:uuid:428403d2-322f-42ba-9b23-18e1f592c154>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00600-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hangman 1
Re: Hangman 1
Okay, then there is an L.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hangman 1
no l
Re: Hangman 1
Could be an R?
Re: Hangman 1
Re: Hangman 1
An N, maybe?
Re: Hangman 1
hint: they use this for fishing
Re: Hangman 1
Re: Hangman 1
_ _ _ _
Re: Hangman 1
T to start us off.
Re: Hangman 1
No t
Re: Hangman 1
No E either?
Re: Hangman 1
no e or o
Re: Hangman 1
There is definitely an A.
Re: Hangman 1
_ _ _ a
Re: Hangman 1
How about an I?
Re: Hangman 1
no I
Re: Hangman 1
Re: Hangman 1
Re: Hangman 1
Give me a U.
Re: Hangman 1
Re: Hangman 1
Re: Hangman 1
hint: look at your avatar
Re: Hangman 1
Re: Hangman 1
_ _ _ _ _ _ _ _ _
Re: Hangman 1
There must be hundreds of S's in there so give their position.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=251574","timestamp":"2014-04-19T22:20:44Z","content_type":null,"content_length":"32193","record_id":"<urn:uuid:78e66064-ca6e-4292-8c63-622a24e278f9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00001-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stewart Heights, TX Math Tutor
Find a Stewart Heights, TX Math Tutor
...I emphasize good object-oriented design as the means of making the code easier to verify, modify and extend. When teaching students web development, I teach them JavaScript. JavaScript and its
libraries can be used to create highly interactive websites.
30 Subjects: including statistics, Java, SQL, ADD/ADHD
Hi! My name is Eric, and my love of teaching comes from the happiness that I get when I am able to push myself to improve and achieve my full potential. This is not solely a personal pleasure as
I also love to see my students growing and thriving!
22 Subjects: including trigonometry, SAT math, photography, public speaking
I love assisting others in achieving their goals. As a trained psychologist I have discovered that learning can be inhibited by psychological factors which have not, and can't be addressed
through the traditional educational system. Frequently the fact is that learning is about making the subject ...
15 Subjects: including algebra 1, English, prealgebra, algebra 2
...This science stuff is easier than you think...I have 83 hours of science credits covering two BS degrees, one in Biology and one in Geology. I have taken coursework in this subject. Let's see
- I get up in front of students and talk all the time...during the past five years I have been the PA a...
18 Subjects: including algebra 1, biology, chemistry, public speaking
...Importing or linking data from different data sources, writing custom forms. Extensive experience with SQL queries and query building. I have wealth of information to share on MS Access which
students can definitely benefit from.
14 Subjects: including ACT Math, statistics, differential equations, algebra 1
Related Stewart Heights, TX Tutors
Stewart Heights, TX Accounting Tutors
Stewart Heights, TX ACT Tutors
Stewart Heights, TX Algebra Tutors
Stewart Heights, TX Algebra 2 Tutors
Stewart Heights, TX Calculus Tutors
Stewart Heights, TX Geometry Tutors
Stewart Heights, TX Math Tutors
Stewart Heights, TX Prealgebra Tutors
Stewart Heights, TX Precalculus Tutors
Stewart Heights, TX SAT Tutors
Stewart Heights, TX SAT Math Tutors
Stewart Heights, TX Science Tutors
Stewart Heights, TX Statistics Tutors
Stewart Heights, TX Trigonometry Tutors
Nearby Cities With Math Tutor
Bordersville, TX Math Tutors
Caplen, TX Math Tutors
Crystal Beach, TX Math Tutors
Eastgate, TX Math Tutors
Garth, TX Math Tutors
Golden Acres, TX Math Tutors
Greens Bayou, TX Math Tutors
Lynchburg, TX Math Tutors
Mcnair, TX Math Tutors
Monroe City, TX Math Tutors
Moss Bluff, TX Math Tutors
Sunny Side, TX Math Tutors
Timber Cove, TX Math Tutors
Timberlane Acres, TX Math Tutors
Woody Acres, TX Math Tutors | {"url":"http://www.purplemath.com/stewart_heights_tx_math_tutors.php","timestamp":"2014-04-16T13:12:07Z","content_type":null,"content_length":"24105","record_id":"<urn:uuid:1d646dfc-ea1f-4add-93e5-c18272101836>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00261-ip-10-147-4-33.ec2.internal.warc.gz"} |
Video Library
Since 2002 Perimeter Institute has been recording seminars, conference talks, and public outreach events using video cameras installed in our lecture theatres. Perimeter now has 7 formal presentation
spaces for its many scientific conferences, seminars, workshops and educational outreach activities, all with advanced audio-visual technical capabilities. Recordings of events in these areas are all
available On-Demand from this Video Library and on Perimeter Institute Recorded Seminar Archive (PIRSA). PIRSA is a permanent, free, searchable, and citable archive of recorded seminars from relevant
bodies in physics. This resource has been partially modelled after Cornell University's arXiv.org.
Constrained systems, meaning of diffeomorphism invariance, loops and spin networks.
The principles of Quantum Mechanics and of Classical General Relativity imply Uncertainty Relations between the different spacetime coordinates of the events, which yield a basic model of Quantum Minkowski Space, having the full (classical) Poincaré group as group of symmetries.
I should like to show how particular mathematical properties can limit our metaphysical choices, by discussing old and new theorems within the statistical-model framework of Mielnik, Foulis &
Randall, and Holevo, and what these theorems have to say about possible metaphysical models of quantum mechanics.
A convergence of climate, resource, technological, and economic stresses gravely threaten the future of humankind. Scientists have a special role in humankind\\\'s response, because only rigorous
science can help us understand the complexities and potential consequences of these stresses. Diminishing the threat they pose will require profound social, institutional, and technological changes
-- changes that will be opposed by powerful status-quo special interests.
Mutually unbiased bases (MUBs) have attracted a lot of attention the last years. These bases are interesting for their potential use within quantum information processing and when trying to
understand quantum state space. A central question is if there exists complete sets of N+1 MUBs in N-dimensional Hilbert space, as these are desired for quantum state tomography. Despite a lot of
effort they are only known in prime power dimensions. | {"url":"http://www.perimeterinstitute.ca/video-library?title=&page=571","timestamp":"2014-04-18T17:29:37Z","content_type":null,"content_length":"64710","record_id":"<urn:uuid:53d6829d-a420-40a8-a123-c88f000d037b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: Notice that this expression depends on calendar time and the individual
stock prices, and not just on the index level. A local volatility function for
the index (that is, one that depends only on the price of the index and
time) can be obtained by calculating the expectation of σ_B² conditional on
the value of the index. More precisely, the function σ_B,loc = σ_B,loc(B, t),
defined as:
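The displayed formula was lost from this extracted copy; in the notation of the surrounding text it is presumably the Dupire-type conditional expectation (a reconstruction, not the article's exact typesetting):

```latex
\sigma_{B,\mathrm{loc}}^{2}(B,t) \;=\; \mathbb{E}\!\left[\,\sigma_{B}^{2}(S,t)\;\middle|\;B(t)=B\,\right]
```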
is such that the one-dimensional diffusion process:
(with μ_B representing the cost-of-carry of the ETF), returns the same prices
for European-style index options as the n-dimensional model based on the
dynamics for the entire basket.
To see this, we observe that σ_B(S, t) can be viewed as a stochastic volatility
process that drives the index price B(t), with the vector of individual
stock prices S playing the role of ancillary risk factors. The above formula
for σ_B,loc² expresses a well-known correspondence between the stochastic
volatility of a pricing model and its corresponding (Dupire-type) local
volatility (see Derman, Kani & Kamal, 1997, Britten-Jones & Neuberger, 2000,
Gatheral, 2001, and Lim, 2002).
The problem, of course, is that the conditional expectation is difficult | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/427/2692625.html","timestamp":"2014-04-20T04:18:54Z","content_type":null,"content_length":"8403","record_id":"<urn:uuid:71d9f061-54d1-48fb-a677-3b9687a239e8>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00458-ip-10-147-4-33.ec2.internal.warc.gz"} |
Particle In A Box
by Kristen Adams
Discussion of Concept
Part A: Why do we care about examining a particle in a box?
A particle in a box resembles an electron in a stable orbit around a nucleus. Such an electron exhibits a standing wave pattern much like the standing wave pattern that can be produced on a string
that is fixed at both ends.
Figure 1 Diagram of deBroglie matter waves of an electron in a stable orbit.
Source of figure: http://online.cclt.org/physicslab/content/PhyAPB/lessonnotes/dualnature/deBroglie.asp
The particle in a box is free (there are no forces acting upon it) but is limited spatially. We can use this model to examine and define the wave that an electron makes in orbit around a nucleus.
Once we know the wave function of a particle we can then find the energy and momentum of the particle. Models, such as this one, can aid us in interpreting data gathered from actual experiments.
Part B: How does a particle in a box behave?
There are certain requirements of behavior for a particle in a box. We can visualize these requirements by again looking at the behavior of a standing wave on a string.
Figure 2 A standing wave with points of minimum amplitude (nodes) and maximum amplitude (antinodes)
Source of figure: http://www.cord.edu/dept/physics/p128/lecture99_35.html
A standing wave must meet two conditions. 1) At both ends of the string, a standing wave exhibits a node (point of zero amplitude). 2) The length of the standing wave is broken up into an integral
number of half-wavelengths. The matter wave of a particle in a 3-D box has these same characteristics.
A particle starts from one side of the box at zero amplitude, hits the opposite side of the box (also at zero amplitude) and must return to its starting point, continuing the pattern. To describe the
wave of the particle we must find a wave function that properly describes the motion of the particle. What kind of wave starts at zero amplitude and ends at zero amplitude? A sine wave! The matter
wave of a particle inside a box is a sine wave just as the standing wave on a string is a sine wave. We have found a wave function that meets the first condition (from above) which is sin x. The wave
function of form sin x should describe the wave at any point x in 1-D. If our box is 3-D, our wave function would be of the form sin(x)sin(y)sin(z) and would describe the wave at any point (x,y,z) in
In order for the sine wave to be at a point of zero amplitude at each side of the box, the length of the sine wave in the box must be limited to ½ a wavelength, 1 wavelength, 1 ½ wavelengths, etc.
which can be more succinctly written as n/2 wavelengths where n is 1, 2, 3, etc. We need to include this characteristic in our wave function. Defining the wave number, k (the number of radians of the
wave cycle per unit length), to be k = nπ/L (where π is 180 degrees of the wave, or one-half of a full wave, and L is the length of one side of the box), we restrict the number of wavelengths in the box to an
integer number of half wavelengths. Our wave function is now of the form sin(nπx/L), or sin(kx). This wave function meets both conditions one and two.
Figure 3 Infinite potential well with wavelengths
Source of figure: http://hyperphysics.phyastr.gsu.edu/hbase/quantum/pbox.html
The final step in properly defining the wave function of a particle in a box is to normalize the wave function. The probability of finding a particular particle in all space is 1 (the particle
exists). [Aside: Probability is a range from 0 to 1 where 0 means that a particular event will not occur and 1 indicates that a particular event is certain to occur.] A quantum mechanical property of
a wave function is that the probability of finding a particle at some point in space is the absolute value of the wave function squared at the point of interest (x, y, z). The sum of the
probabilities over all points in space should equal one. Thus,
∫∫∫ |ψ(x,y,z)|² dx dy dz = 1
where ψ is our wave function in 3-D
In order for our wave function to meet this requirement we must tack on a coefficient (normalization coefficient). Our normalized wave function for a 1-D box is Asin(kx), where A is our normalizing
coefficient and k = nπ/L with n = 1, 2, 3, … For a 3-D box, our normalized wave function would be Asin(k_x x) Bsin(k_y y) Csin(k_z z), with A, B, C acting as normalizing coefficients.
Notice that the requirements needed to produce a wave function suitable to a box have produced quantization. The wave number k increases in steps of π/L, with no values in between, whereas an unbounded free
particle has no such restrictions on k. Likewise, the energy of a particle, which depends on k, is also quantized for a particle in a box and continuous for an unbound free particle.
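Collecting the pieces above, the standard 1-D results read (a reconstruction of displayed formulas missing from this copy):

```latex
\psi_n(x) = \sqrt{\tfrac{2}{L}}\,\sin\!\left(\tfrac{n\pi x}{L}\right),
\qquad
E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}, \qquad n = 1, 2, 3, \dots
```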
The particle in a box problem will be solved with more detail in the next section.
Worked Examples
A) Particle in a Box or Infinitely High Potential Well in 3-D
This example will illustrate a method of solving the 3-D Schrodinger equation to find the eigenfunctions for a infinite potential well, which is also referred to as a box.
A particle of mass m is captured in a box. This box can also be thought of as an area of zero potential surrounded by walls of infinitely high potential. The particle cannot penetrate infinitely high
potential barriers. The box is of length a along the x axis, length b along the y axis and length c along the z axis.
This potential is described as follows:
V(x,y,z)=0 if 0<x<a , 0<y<b, 0<z<c Region I
V(x,y,z) = ∞ elsewhere Region II
Figure 4 Slice of 3-D infinite potential well
Source of figure: http://www.chembio.uoguelph.ca/educmat/chm386/rudiment/models/piab1/piab1prb.htm
The energy operator, Ĥ (the quantum mechanical operator we will use to find the energy of the particle), for a single particle of mass m in a potential field V(x,y,z) is:
Ĥ = p̂²/2m + V(x,y,z)
where p̂ is the momentum operator and is equal to −iℏ∇.
Total Energy = Kinetic Energy + Potential Energy
In symbolic form: E = T + V. In operator form: Ĥ = T̂ + V̂.
Kinetic Energy (T) = p²/2m
The time-independent Schrodinger equation, which is used to find the possible energies, E, that the particle may have, is
Ĥψ = Eψ [Eq-2]
ψ is called the eigenfunction. It is the wave function that satisfies Eq-2.
Eq-2 is often called an eigenvalue equation. The eigenfunction for Eq-2 is to be found. This eigenfunction is used to determine the energies possible for the situation (generically referred to as the eigenvalues).
In Region II, V is infinite and the Hamiltonian, Ĥ, is infinite. The wave function and the energy are finite. Thus ψ is zero in this
region. Since |ψ|² = 0, there is zero probability that the particle will be found in this region.
In Region I, V is zero and the Hamiltonian is purely kinetic.
Plugging this Hamiltonian into the eigenvalue equation [Eq-2], we find
−(ℏ²/2m)∇²ψ_n = E_n ψ_n [Eq-3]
The subscript n is used in anticipation of discrete values that depend upon some integer n.
Now we must find a wave function that solves the eigenvalue equation, where E_n is a number.
The wave function must be continuous across the regions. Therefore, at the walls, the wave function inside the box must equal the wave function outside the box. Since we have already determined that
the wave function outside the box must be zero, the wave function inside the box must go to zero at the walls.
Figure 5 Slice of 3-D infinite potential well
Source of figure: http://scienceworld.wolfram.com/physics/InfiniteSquarePotentialWell.html
The walls of the box are located at x=0 and x=a; y=0 and y=b; z=0 and z=c. Thus, the wave function must vanish on every wall:
ψ(0,y,z) = ψ(a,y,z) = ψ(x,0,z) = ψ(x,b,z) = ψ(x,y,0) = ψ(x,y,c) = 0 [Eq-4]
Since the wave function does not exist beyond the walls of the box (created by the infinite potential barrier) we are not concerned with the behavior of the wave function across the barrier. However,
in other types of problems where the wave function does exist on the other side of the barrier, we need to make sure that the first derivative of the wave function is continuous across the barrier.
[See worked example B for an example of this procedure.]
Back to our time-independent Schrodinger equation [Eq-3],
d²ψ/dx² + k²ψ = 0
The general solution to this homogeneous differential equation in 1-D is
ψ(x) = A sin(kx) + B cos(kx), where k² = 2mE/ℏ² [Eq-5]
Our boundary conditions [Eq-4] tell us that at x=0, ψ = 0. Therefore, B must equal 0 and ψ(x) = A sin(kx). The same applies for the 3-D case.
A is the normalization coefficient and the superscripts (1), (2), (3) signify that each dimension is independent.
From our boundary conditions [Eq-4] we must also make the wave function equal zero at x=a, y=b and z=c. The sine function is zero at integral multiples of π. Therefore,
k_x a = n_x π; k_y b = n_y π; k_z c = n_z π, where each n = 1, 2, 3, … (n = 0 would make the wave function vanish everywhere)
The n subscript on k has been dropped to improve clarity, but is technically still there.
Rearranging gives k_x = n_x π/a; k_y = n_y π/b; k_z = n_z π/c [Eq-7]
We can now plug k into Eq-5 to find the possible values of energy for a particle in this box. Rearranging Eq-5 to solve for E gives E = ℏ²k²/2m. Substituting in values for k from Eq-7 one finds
E_n = (π²ℏ²/2m)(n_x²/a² + n_y²/b² + n_z²/c²)
As a final step, we must normalize our wave function.
Our wave function [Eq-6] is ψ(x,y,z) = A sin(k_x x) sin(k_y y) sin(k_z z).
To normalize, ∫₀ᵃ∫₀ᵇ∫₀ᶜ |ψ|² dz dy dx = 1
This becomes A²(a/2)(b/2)(c/2) = 1
which yields A = √(8/abc)
The eigenenergies, E_n, and the normalized eigenfunctions, ψ_n, for the 3-D box problem are:
E_n = (π²ℏ²/2m)(n_x²/a² + n_y²/b² + n_z²/c²)
ψ_n(x,y,z) = √(8/abc) sin(n_xπx/a) sin(n_yπy/b) sin(n_zπz/c)
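These eigenenergies are easy to sanity-check numerically. The sketch below (an illustration, not part of the original article) discretizes the 1-D infinite well with ℏ = m = L = 1 and compares the lowest finite-difference eigenvalues against E_n = n²π²ℏ²/(2mL²):

```python
import numpy as np

# Interior grid points only: psi = 0 at the walls is built in by
# dropping the endpoints.
N = 1000
x = np.linspace(0.0, 1.0, N + 2)[1:-1]
h = x[1] - x[0]

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 (hbar = m = 1):
# a symmetric tridiagonal matrix.
H = (np.diag(np.full(N, 1.0)) +
     np.diag(np.full(N - 1, -0.5), 1) +
     np.diag(np.full(N - 1, -0.5), -1)) / h ** 2

E = np.linalg.eigvalsh(H)[:3]
exact = np.arange(1, 4) ** 2 * np.pi ** 2 / 2.0
print(np.max(np.abs(E / exact - 1.0)))  # tiny relative error
```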
B) Particle in a Finite Potential Well in 1-D
This example will illustrate a method of solving the 1-D Schrodinger equation to find the eigenfunctions for a finite potential well. The potential is defined as follows:
V(x)= 0 if x<-a Region I
V(x) = -Vo if -a ≤ x ≤ a Region II [Eq-8]
V(x)= 0 if x>a Region III
If E is greater than 0, the wave function is unbounded as there are no boundary conditions that would place any limitations on the wave function.
If E is less than zero, boundary conditions are imposed upon the wave function. This is the case that will be examined below.
Figure 6 Finite potential well
Source of figure: http://scienceworld.wolfram.com/physics/FiniteSquarePotentialWell.html
Starting with the time-independent Schrodinger equation:
Applying the conditions of Eq-8, the Schrodinger equation becomes
−(ℏ²/2m) d²ψ/dx² − V₀ψ = Eψ for Region II, and −(ℏ²/2m) d²ψ/dx² = Eψ for Regions I and III.
Let k² = 2m(E + V₀)/ℏ² and κ² = −2mE/ℏ² (both positive here, since −V₀ < E < 0).
Inside the well, the wave function has the general solution ψ_II(x) = A sin(kx) + B cos(kx).
Outside the well, the general solution to the Schrodinger equation is a combination of e^{κx} and e^{−κx}.
Now we must apply boundary conditions to ensure that the composite wave function behaves properly as it crosses boundaries.
There are several boundary conditions must be imposed upon our wave function.
1. The wave function for the particle outside the well must show that the likelihood of finding the particle in these regions (I and III), decreases as the distance into the barrier region increases.
2. The wave functions must be continuous across the boundaries.
3. The first derivatives of the wave function must be continuous across the boundaries.
Applying the first boundary condition, the wave function must decrease as the distance into the barrier increases. For Region I (x is negative) this keeps only ψ_I(x) = C e^{κx}. Likewise, in Region III (x is positive) only ψ_III(x) = D e^{−κx} survives.
Applying the second and third boundary conditions to the boundary between Regions I and II, where x = -a,
−A sin(ka) + B cos(ka) = C e^{−κa} [Eq-9]
k[A cos(ka) + B sin(ka)] = κC e^{−κa} [Eq-10]
Applying the second and third boundary conditions to the boundary between Regions II and III, where x = a,
A sin(ka) + B cos(ka) = D e^{−κa} [Eq-11]
k[A cos(ka) − B sin(ka)] = −κD e^{−κa} [Eq-12]
There now exists a set of four equations ([Eq-9] - [Eq-12]) for our four unknown coefficients A, B, C, D. We can solve for the coefficients through addition and subtraction of these equations and
Adding [Eq-9] and [Eq-11] produces 2B cos(ka) = (C + D) e^{−κa} [Eq-13]
Adding [Eq-10] and [Eq-12] results in 2kA cos(ka) = κ(C − D) e^{−κa} [Eq-14]
Subtracting [Eq-11] from [Eq-9] gives −2A sin(ka) = (C − D) e^{−κa} [Eq-15]
Subtracting [Eq-12] from [Eq-10] yields 2kB sin(ka) = κ(C + D) e^{−κa} [Eq-16]
Dividing [Eq-16] by [Eq-13] results in κ = k tan(ka).
Inserting this into [Eq-14], with (C − D) e^{−κa} taken from [Eq-15], we obtain 2kA cos(ka) = −2kA tan(ka) sin(ka).
In order for this equality to be true, A = 0, and then [Eq-15] gives C = D.
Substituting these values into [Eq-13] and [Eq-15] (the equations matching the wave functions at the boundaries) we get
[Eq-13] B cos(ka) = C e^{−κa}
[Eq-15] 0 = 0
Thus we get an even function, ψ_II(x) = B cos(kx), inside the well.
Dividing [Eq-14] by [Eq-15] results in κ = −k cot(ka).
Inserting this into [Eq-16], with (C + D) e^{−κa} taken from [Eq-13], we obtain 2kB sin(ka) = −2kB cot(ka) cos(ka).
In order for this equality to be true, B = 0, and then [Eq-13] gives C = −D.
Substituting these values into [Eq-13] and [Eq-15] (the equations matching the wave functions at the boundaries) we get
[Eq-13] 0 = 0
[Eq-15] −A sin(ka) = C e^{−κa}
Thus we get an odd function, ψ_II(x) = A sin(kx), inside the well.
The precise values for the coefficients are found through normalization:
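The displayed normalization integral did not survive extraction. For completeness, the even/odd matching conditions above lead to the standard transcendental equations for the bound-state energies (a reconstruction consistent with Eq-13–Eq-16, not the original display):

```latex
\kappa = k\tan(ka)\ \ \text{(even solutions)},\qquad
\kappa = -k\cot(ka)\ \ \text{(odd solutions)},\qquad
k^2 = \frac{2m(E+V_0)}{\hbar^2},\quad \kappa^2 = -\frac{2mE}{\hbar^2}.
```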
Helpful References
Books for Beginning Study of Quantum Mechanics
Bernstein, J., P. Fishbane, S. Gasiorowicz. Modern Physics. (Upper Saddle River, NJ: Prentice Hall, 2000).
Serway, Raymond. Physics for Scientists and Engineers, 4th ed. (Philadelphia: Saunders College Publishing, 1996).
Transnational College of LEX; translated by John Nambu. What is Quantum Mechanics? A Physics Adventure. (Boston: Language Research Foundation,1996).
Greiner, Walter. Quantum Mechanics: an Introduction. (New York: Springer, 2001).
Hecht, K.T. Quantum Mechanics. (New York: Springer, 2000).
Liboff, Richard. Introductory Quantum Mechanics, 4th ed. (New York: Addison-Wesley, 2003).
Singh, Jasprit. Quantum Mechanics: Fundamentals and Applications to Technology. (New York: John Wiley and Sons, 1997).
Thankappan, V.K. Quantum Mechanics, 2^nd ed. (New York: John Wiley and Sons, 1993).
Zettili, Nouredine. Quantum Mechanics: Concepts and Applications. (New York: John Wiley and Sons, 2001). | {"url":"http://physics.gmu.edu/~dmaria/590%20Web%20Page/public_html/qm_topics/potential/well/ParticleInABox.htm","timestamp":"2014-04-19T04:51:39Z","content_type":null,"content_length":"168341","record_id":"<urn:uuid:5e4ae03e-3996-4a06-ab5c-0818d2088ec5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
Which of the following is the slope between the points (-6, 0)and (2, -9)?
Best Response
You've already chosen the best response.
explain how I can find the slope between them please?
difference of the y co-ordinates over the difference of the x co-ordinates: (0 - (-9))/(-6 - 2)
Slope, also known as gradient, can be found with the following formula. Given 2 points, (x1,y1) and (x2,y2): \[\frac{y_1-y_2}{x_1-x_2} \quad\text{or}\quad \frac{y_2-y_1}{x_2-x_1}\]
to find x I should plug in 0 for y? & the opposite to find y?
slope is delta y over delta x, so you subtract one y co-ordinate from the other and do the same for the x. The difference in y is placed over the difference in x. Be sure you are subtracting in the same
order in both! For example: (0 - (-9))/(-6 - 2) is correct; (-9 - 0)/(-6 - 2) is not
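For anyone who wants to check such an answer mechanically, a tiny helper (illustrative only) computes the slope directly:

```python
def slope(p1, p2):
    # (y2 - y1) / (x2 - x1), subtracting in the same order on top
    # and bottom.
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

print(slope((-6, 0), (2, -9)))  # -> -1.125, i.e. -9/8
```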
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4f494376e4b00c3c5d32f6c2","timestamp":"2014-04-21T10:17:52Z","content_type":null,"content_length":"54478","record_id":"<urn:uuid:4455517c-13e8-42ec-ba50-70173cd8ab62>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
Valve cover question?
This is probably a stupid question but I'll ask anyway. It says that a 100 core will be returned for your valve cover. Now is that 100 over the cost of the valve cover, or is that the cost you provide and then 100 will be given back?
Also, will you provide shipping both ways? I.e., take the same box the valve cover came in and ship it straight back to you with prepaid postage.
So basically it says 259.89, so will I be charged 359.89, or 259.89 and then 100 back so it turns out to be 159.89?
Thanks for clarifying for me. | {"url":"http://www.focusfanatics.com/forum/showpost.php?p=4250951&postcount=1","timestamp":"2014-04-17T23:30:50Z","content_type":null,"content_length":"27360","record_id":"<urn:uuid:df189ad7-1f30-4aab-ba30-339f7edfef95>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
How exactly does energy "become" mass?
Aren't we describing electromagnetic radiation here?
Nope, the general question was "how exactly does mass "become" energy?"
One guy claimed that photons, electromagnetic waves, are "pure" energy. But where is such a definition stated and motivated?
To me, photons are one form of energy, mass is one form of energy. Energy can not be created or destroyed, only converted into different forms.
Look at the electromagnetic field, and the energy equations from SR:
[tex] E = \hbar \omega [/tex] photons
[tex] E = m_0 c^2 [/tex] rest-energy for massive particles
Now how does one see that photons are "pure" energy? The field is described by an angular frequency (omega). Working in units where c = 1, mass has the same units as energy. Working in units where hbar = 1, omega has the same units as energy.
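To make that concrete, here is a small Python sketch (my own illustration, not from the thread) evaluating both energy formulas in SI units. The constants are standard CODATA values; the 530 nm wavelength is just an arbitrary example.

```python
# Both E = hbar*omega (photon) and E = m0*c^2 (rest energy) come out in
# joules; neither is more "purely" energy than the other.
import math

HBAR = 1.054571817e-34   # J*s, reduced Planck constant (CODATA)
C = 2.99792458e8         # m/s, speed of light (exact by definition)
M_E = 9.1093837015e-31   # kg, electron rest mass (CODATA)

def photon_energy(omega):
    """Energy of a photon with angular frequency omega (rad/s), in joules."""
    return HBAR * omega

def rest_energy(m0):
    """Rest energy of a particle of mass m0 (kg), in joules."""
    return m0 * C ** 2

# A green photon (~530 nm) versus an electron's rest energy:
omega_green = 2 * math.pi * C / 530e-9
E_photon = photon_energy(omega_green)   # ~3.7e-19 J
E_electron = rest_energy(M_E)           # ~8.2e-14 J
```

In units where hbar = c = 1, both functions collapse to returning their argument, which is the whole point of the natural-units remark above.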
Why do we have to work in SI units? It is just that we are used to it and take it for granted. | {"url":"http://www.physicsforums.com/showthread.php?t=284089","timestamp":"2014-04-21T14:55:20Z","content_type":null,"content_length":"76017","record_id":"<urn:uuid:8d3b7f19-0e8f-4507-9e77-1d7b3f0098f3>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00007-ip-10-147-4-33.ec2.internal.warc.gz"} |
Building a Cone
Suppose you want to build a (right circular) cone out of some flat material, perhaps paper or metal. You cut out a sector of a circle and roll it up to make the cone. Let the radius of the sector be
s, its central angle T (in radians), the height of the cone be h, the radius of its base r, and the vertex angle (i.e., the angle between its axis and any slant-height line) t (also in radians).
You are given two of {h,r,s,t,T}, and wish to determine the other three. There are ten cases, depending on what you are given:
Case 1: You know h and r. Then
t = Arctan(r/h),
s = h/cos(t) = sqrt(h^2+r^2),
T = 2*Pi*r/s.
Case 2: You know h and t. Then
r = h*tan(t),
s = h/cos(t) = sqrt(h^2+r^2),
T = 2*Pi*r/s = 2*Pi*sin(t).
Case 3: You know h and s. Then
r = sqrt(s^2-h^2),
t = Arccos(h/s),
T = 2*Pi*r/s.
Case 4: You know h and T. Then
r = h*T/sqrt(4*Pi^2-T^2),
s = 2*Pi*r/T,
t = Arctan(r/h).
Case 5: You know r and s. Then
T = 2*Pi*r/s,
h = r*sqrt(4*Pi^2-T^2)/T,
t = Arctan(r/h).
Case 6: You know r and t. Then
h = r*cot(t),
s = r/sin(t),
T = 2*Pi*r/s.
Case 7: You know r and T. Then
s = 2*Pi*r/T,
h = sqrt(s^2-r^2),
t = Arctan(r/h).
Case 8: You know s and t. Then
h = s*cos(t),
r = s*sin(t),
T = 2*Pi*sin(t).
Case 9: You know s and T. Then
r = s*T/(2*Pi),
h = sqrt(s^2-r^2),
t = Arctan(r/h).
Case 10: You know t and T. Then the values of h, r, and s cannot be determined without further information. You can determine their ratios h/s, r/s, and h/r as follows:
h/s = cos(t),
r/s = sin(t) = T/(2*Pi),
h/r = cot(t). | {"url":"http://mathforum.org/dr.math/faq/formulas/BuildCone.html","timestamp":"2014-04-16T22:29:42Z","content_type":null,"content_length":"3332","record_id":"<urn:uuid:6a2c2c4d-4434-4e4c-8778-ac197d12fff0>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00641-ip-10-147-4-33.ec2.internal.warc.gz"} |
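Two of the ten cases above, sketched as Python functions (a rough sketch of mine; the function names are made up):

```python
import math

def cone_from_h_r(h, r):
    """Case 1: given height h and base radius r, return (t, s, T)."""
    t = math.atan2(r, h)           # vertex angle
    s = math.hypot(h, r)           # slant height, sqrt(h^2 + r^2)
    T = 2 * math.pi * r / s        # central angle of the flat sector
    return t, s, T

def cone_from_r_T(r, T):
    """Case 7: given base radius r and sector angle T, return (s, h, t)."""
    s = 2 * math.pi * r / T
    h = math.sqrt(s * s - r * r)
    t = math.atan2(r, h)
    return s, h, t

# Round trip on a 3-4-5 cone: build the sector from (h, r), then recover
# the height from (r, T).
t, s, T = cone_from_h_r(4.0, 3.0)    # s = 5, T = 6*pi/5
s2, h2, t2 = cone_from_r_T(3.0, T)   # recovers s = 5, h = 4
```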
Guess and Check
Date: 06/17/2002 at 18:12:40
From: Johnathan Minix
Subject: Problem Solving Strategy: Guess and Check
This is an example of a question
1. Sum of two numbers = 15
Difference of the numbers = 3
Find the numbers.
What is the product?
Thank you
Date: 06/17/2002 at 21:08:28
From: Doctor Peterson
Subject: Re: Problem Solving Strategy: Guess and Check
Hi, Johnathan.
I'll demonstrate some ideas using a slightly different problem:
sum = 31
difference = 7
One "guess-and-check" strategy is just to try lots of numbers
randomly, and see WHETHER each pair works. That's extremely inefficient.
A better strategy is to try a pair, and see HOW WELL it works,
getting an idea for a better guess. For example, if you try 16+15 (a
natural first choice, since it's in the middle), the difference is
only 1. So you know you have to try numbers that are farther apart,
so you might try 8+23 or something. The difference now is 15, which
is too big, so you have to try a pair that are closer. And so on...
Often you can do better than that, and actually use the error in the
first guess to find the correct answer directly, by thinking about
HOW the problem works. In this case, you need to increase the
difference from 1 to 7, an increase of 6. What happens if you change
the numbers you use by 1? By adding 1 to the larger number and
subtracting 1 from the smaller number, you keep the sum the same, but
increase the difference by 2. Since we have to increase the
difference by 6, we need to do this 3 times. So we add 3 to 16 and
subtract 3 from 15, giving 19+12=31 as our sum, and 19-12 = 7 as our
difference. We've got it!
Maybe we can call these three strategies "guess and check", "guess
and improve", and "guess and solve". Which you use depends on how
hard the problem is, how much you feel like thinking, and how much
time you want to take.
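The "guess and improve" idea translates almost directly into code. Here is a small Python loop (my sketch, not from the original answer; it assumes an integer solution exists, i.e. the sum and the difference have the same parity):

```python
def guess_and_improve(total, difference):
    """Find two numbers with the given sum and difference by starting in
    the middle and widening the gap one step at a time."""
    big, small = (total + 1) // 2, total // 2   # first guess: the middle
    while big - small < difference:
        big, small = big + 1, small - 1         # sum unchanged, gap grows by 2
    return big, small

big, small = guess_and_improve(31, 7)   # (19, 12), as in the example above
```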
If you need more help, please write back and show me how far you got.
- Doctor Peterson, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/60819.html","timestamp":"2014-04-21T08:12:49Z","content_type":null,"content_length":"7061","record_id":"<urn:uuid:c8a8a6ba-b8ed-4b26-a72d-abab2e8486bd>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inclusive Counting on the GMAT
Understand this common type of special counting on the GMAT!
Question #1: On Monday, there were 29 bananas in the cafeteria. No new bananas were brought in after Monday. Two days later, on Wednesday, there were 14 bananas left. How many were eaten in that time?
Question #2: In January of last year, MicroCorp start-up had 14 employees. All of those employees have stayed on through June. In June of last year, they had 29 employees. How many new employees
did they hire?
Clearly for both of these questions, we need simply the difference: 29 – 14 = 15. That's the correct answer for each of them. Now, try a very different kind of question, with a different answer.
Question #3: A certain workshop begins on the 14th of this month and ends on the 29th of this month. How many days long is this workshop?
Question #4: How many multiples of 5 are there from 70 to 145?
Those two have the same answer as each other, but it’s not the same as the answer for #1 and #2.
We say the final two require “inclusive” counting, because both endpoints are included, whereas in the first two questions, both endpoints were not included. What do I mean?
In Question #1, the 29th banana was eaten, the 28th banana was eaten, … the 15th banana was eaten, BUT the 14th banana was not eaten because there were 14 remaining. The endpoint 14 is "not included" in
those eaten.
In Question #2, employees #1 – #14 were already hired by last January. Employees #15 – #29 were new hires. The lower endpoint, employee #14, was “not included” in the group of new hires.
Now, by contrast, the 14th is the first day of the workshop, and the 29th is the last day of the workshop. Both the 14th and 29th are days when the workshop is happening. Both endpoints are
Similarly, in #4, the multiples of 5 from 70 to 145 include both 70 and 145. Again, both endpoints are included. Incidentally, the connection to the other three questions: 70 = 5*14 and 145 = 5*29,
so the list of multiples of 5 from 70 to 145 is really 5 times the list of consecutive integers from 14 to 29, so the underlying question is really: how many consecutive integers are there from 14 to 29?
As may be apparent, the inclusive scenario always includes exactly one more member than the “not included” scenario. Therefore, the formula for inclusive counting is
number included = (last) – (first) + 1
Notice, this only works if the numbers are consecutive. If the question is like #4, you may have to notice what change you can make to match the given list with a list of consecutive integers.
Practice questions
1) http://gmat.magoosh.com/questions/334
2) http://gmat.magoosh.com/questions/808 | {"url":"http://magoosh.com/gmat/2012/inclusive-counting-on-the-gmat/","timestamp":"2014-04-18T21:11:35Z","content_type":null,"content_length":"54981","record_id":"<urn:uuid:2160830a-d4e9-4bf9-87de-675471583f60>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
Parametric equations for cylinder/plane intersection
March 17th 2010, 10:22 PM #1
Junior Member
Jan 2010
Parametric equations for cylinder/plane intersection
I'd like to make sure my thinking is right on this problem and to see what the next step would be because I'm getting stuck.
The problem is to find the parametric equations for the ellipse which made by the intersection of a right circular cylinder of radius c with the plane which intersects the z-axis at point 'a' and
the y-axis at point 'b' when t=0.
I think the equation for the cylinder would be $x^2+y^2=c^2$
As for the plane I am less sure about the equation. I have points and I could write an equation with one if I had a normal vector to the plane... or the cross product of two vectors in it. Two
vectors would be between the points (0, b, 0) and (0, 0, a) and the points (0, b, 0) and (1, 0, a) since x will take on all values. Those vectors are <0, -b, a> and <1, -b, a>
$\left|\begin{matrix}\mathbf{i} & \mathbf{j} & \mathbf{k}\\ 0 & -b & a \\ 1 & -b & a\end{matrix}\right|$ Which yields $0\mathbf{i} + a\mathbf{j} + b\mathbf{k}$
so using the y intercept and $n_1(x-x_1) + n_2(y-y_1) + n_3(z-z_1) = 0$ the equation of the plane should be $a(y - b) + bz = 0$ or $ay - ab +bz = 0$
I get stuck here though... what should I do next? Any help is appreciated.
With the given conditions you'll get a family of planes with the line
$l: (x,y,z)=(0,0,a)+s(0,-b,a)$
All planes of the family have this line in common. (Line of intersection of all planes)
In general: A plane is determined by 3 points. So I assume that there are additional conditions which you have to use.
Is it enough to say that the plane never intersects the x-axis? I drew the problem from the board and thought I copied all the information. Short of giving another point of the plane explicitly
what kind of additional information could there be?
So I just want to be sure, this problem is not solvable with the given information because I cannot identify a unique plane?
If the plane never intersects the x-axis then you have actually the necessary 3rd point at P(c, 0, a). This point P belongs to the ellipse which is produced by the plane when intersecting the
surface of the cylinder.
The plane is determined by the vectors:
$\overrightarrow{b - a} = (0, b, -a)$ and $\vec v = (c, 0, 0)$
The plane has to pass through (0, 0, a) and the normal vector of the plane must be:
$(0,b,-a) \times (c,0,0) = (0, ca, bc) = c \cdot (0,a,b)$
Therefore the equation of the plane is:
$(0,a,b)((x,y,z) - (0,0,a))=0~\implies~ay+bz-ab=0$
1. The condition t = 0 has nothing to do with the ellipse.
2. You already have a parametric equation of the ellipse:
$e:\left\{\begin{array}{l}x=s \\ y = \sqrt{c^2-s^2} \\ z=a-\frac ab\cdot \sqrt{c^2-s^2} \end{array} \right.$
where s is a real variable with $-c\leq s\leq c$
To demonstrate the effects of my previous calculations I've attached a drawing showing the cylinder with the ellipse and the cylinder with the corresponding plane.
I did not, but I do now. I'm not having trouble visualizing the picture, I just don't know how to get those parametric equations. If I already had them I wouldn't have been asking the question.
Re: Parametric equations for cylinder/plane intersection
I realize this is a couple years post-post, but I should probably point out that the following expression needs to have a plus/minus in both the Y and Z parameters:
± √(c^2-s^2)
When the Y parameter is positive, the Z parameter should use the minus.
Great work getting the equations, though. It was very helpful to me.
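One way to avoid the plus/minus branch entirely is to parameterize by the angle around the cylinder instead of by x. A quick Python sketch (mine, not from the thread):

```python
import math

def ellipse_point(theta, a, b, c):
    """Point on the intersection of the cylinder x^2 + y^2 = c^2 with the
    plane a*y + b*z = a*b (the plane through (0, b, 0) and (0, 0, a))."""
    x = c * math.cos(theta)
    y = c * math.sin(theta)
    z = a - (a / b) * y      # solve a*y + b*z = a*b for z
    return x, y, z

# Sanity check: any theta gives a point on both surfaces.
a, b, c = 2.0, 3.0, 1.5
x, y, z = ellipse_point(1.0, a, b, c)
assert abs(x * x + y * y - c * c) < 1e-12   # on the cylinder
assert abs(a * y + b * z - a * b) < 1e-12   # on the plane
```

Sweeping theta from 0 to 2*pi traces the full ellipse once, with no square roots and no sign bookkeeping.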
| {"url":"http://mathhelpforum.com/calculus/134389-parametric-equations-cylinder-plane-intersection.html","timestamp":"2014-04-18T06:58:14Z","content_type":null,"content_length":"72198","record_id":"<urn:uuid:c9b3a51b-6daa-4b9b-9fea-8d23042ef6e7>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
The 2 x 2 x 2 case in causality, of an effect, a cause and a confounder. A cross-over’s guide to the 2 x 2 x 2 contingency table
Colignatus, Thomas (2007): The 2 x 2 x 2 case in causality, of an effect, a cause and a confounder. A cross-over’s guide to the 2 x 2 x 2 contingency table.
Download (569Kb) | Preview
Basic causality is that a cause is present or absent and that the effect follows with a success or not. This happy state of affairs becomes opaque when there is a third variable that can be present
or absent and that might be a seeming cause. The 2 x 2 x 2 layout deserves the standard name of the ETC contingency table, with variables Effect, Truth and Confounding and values {S, -S}, {C, -C},
{F, -F}. Assuming the truth we can find the impact of the cause from when the confounder is absent. The 8 cells in the crosstable can be fully parameterized and the conditions for a proper cause can
be formulated, with the parameters interpretable as regression coefficients. Requiring conditional independence would be too strong since it neglects some causal processes. The Simpson paradox will
not occur if logical consistency is required rather than conditional independence. The paper gives a taxonomy of issues of confounding, a parameterization by risk or safety, and develops the various
cases of dependence and (conditional) independence. The paper is supported by software that allows variations. The paper has been written by an econometrician used to structural equations models but
visiting epidemiology hoping to use those techniques in experimental economics.
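To see the kind of reversal the abstract refers to, here is a standalone 2 x 2 x 2 illustration (mine, not from the paper), using the well-known kidney-stone numbers: the cause C looks better than -C within each stratum of the confounder F, yet worse in the aggregate.

```python
# cells[F][C] = (successes, trials); F/-F is the confounder, C/-C the cause.
table = {
    "F":  {"C": (81, 87),   "-C": (234, 270)},  # confounder present
    "-F": {"C": (192, 263), "-C": (55, 80)},    # confounder absent
}

def rate(successes, trials):
    return successes / trials

# Within each stratum, C outperforms -C...
for stratum in ("F", "-F"):
    assert rate(*table[stratum]["C"]) > rate(*table[stratum]["-C"])

# ...but summed over the confounder, C looks worse (the Simpson reversal).
agg_C = rate(81 + 192, 87 + 263)     # 273/350 = 0.78
agg_noC = rate(234 + 55, 270 + 80)   # 289/350 ~ 0.826
assert agg_C < agg_noC
```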
Item Type: MPRA Paper
Institution: Thomas Cool Consultancy & Econometrics
Original Title: The 2 x 2 x 2 case in causality, of an effect, a cause and a confounder. A cross-over’s guide to the 2 x 2 x 2 contingency table
Language: English
Keywords: Experimental economics; causality; cause and effect; confounding; contingency table; Simpson paradox; conditional independence; risk; safety; epidemiology; correlation; regression;
Cornfield’s condition; inference
Subjects: C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C10 - General
Item ID: 3351
Depositing User: Thomas Colignatus
Date Deposited: 29. May 2007
Last Modified: 18. Feb 2013 10:07
References: Colignatus is the name of Thomas Cool in science.
Cool, Th. (1999, 2001), “The Economics Pack, Applications for Mathematica”, http://www.dataweb.nl/~cool, ISBN 90-804774-1-9, JEL-99-0820
Colignatus, Th. (2007d), “Correlation and regression in contingency tables. A measure of association or correlation in nominal data (contingency tables), using determinants", http://mpra.ub.uni-muenchen.de/3226/01/MPRA_paper_3226.pdf, Retrieved from source
Colignatus, Th. (2007e), “Elementary statistics and causality”, work in progress, http://www.dataweb.nl/~cool/Papers/ESAC/Index.html
Fisher, R.A. (1958a), “Lung Cancer and Cigarettes? Letter to the editor”, Nature, vol. 182, p. 108, 12 July 1958 [Collected Papers 275], see Lee (2007), http://www.york.ac.uk/depts/maths/histstat/fisher275.pdf, Retrieved from source
Fisher, R.A. (1958b), “Cancer and Smoking? Letter to the editor”, Nature, vol. 182, p. 596, 30 August 1958 [Collected Papers 276], see Lee (2007), http://www.york.ac.uk/depts/maths/histstat/fisher276.pdf, Retrieved from source
Kleinbaum, D.G., K.M. Sullivan and N.D. Barker (2003), “ActivEpi Companion Textbook”, Springer
Lee, P.M. (2007), “Life and Work of Statisticians”, http://www.york.ac.uk/depts/maths/histstat/lifework.htm, Revised 24 April 2007
Pearl, J. (1998), “Why there is no statistical test for confounding, why many think there is, and why they are almost right”, UCLA Cognitive Systems Laboratory, Technical Report
(R-256), January 1998
Pearl, J. (2000), “Causality. Models, reasoning and inference”, Cambridge
Saari, D.G. (2001), “Decisions and elections”, Cambridge
Schield, M. (1999, 2003), “Simpson’s paradox and Cornfield’s conditions”, Augsburg College ASA-JSM, http://web.augsburg.edu/~schield/MiloPapers/99ASA.pdf, 07/23/03 Updated, Retrieved
from source
URI: http://mpra.ub.uni-muenchen.de/id/eprint/3351
Available Versions of this Item
• The 2 x 2 x 2 case in causality, of an effect, a cause and a confounder. A cross-over’s guide to the 2 x 2 x 2 contingency table. (deposited 29. May 2007) [Currently Displayed] | {"url":"http://mpra.ub.uni-muenchen.de/3351/","timestamp":"2014-04-20T18:32:12Z","content_type":null,"content_length":"24386","record_id":"<urn:uuid:545556bb-03af-4b13-8304-905bb91fcded>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
A color television has a power rating of 303 W. How much current does this set draw from a potential difference of... - Homework Help - eNotes.com
A color television has a power rating of 303 W. How much current does this set draw from a potential difference of 120 V?
The relationship between current, power and voltage comes from applying energy considerations to the motion of electric charges. The electric potential difference (voltage) is the amount of energy required per unit charge to move a charge between two points. The flow of electric charge is measured by the amount of charge moved per second: if charge is measured in coulombs (C), then current is measured in amperes (A).
The potential difference is given by the relationship
V = IR, where V is the potential difference in volts, I is the current in amperes, and R is the resistance of the circuit in ohms. The resistance is any device which converts the electrical energy to some other form of energy. The rate of this energy conversion by the resistance is known as the electric power. There are several relationships that can define electric power consumption:
P = IV
P = I^2R and
P = V^2/R are three.
Of these, the first is the most useful for the problem in question:
P = IV provides
I = P/V = (303W)/(120V) = 2.53 A
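The same computation as a snippet (the function name is mine):

```python
def current_from_power(power_w, voltage_v):
    """I = P / V: current in amperes drawn at a given potential difference."""
    return power_w / voltage_v

i = current_from_power(303, 120)   # 2.525 A, i.e. about 2.53 A
```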
| {"url":"http://www.enotes.com/homework-help/color-television-has-power-rating-303-w-how-much-336899","timestamp":"2014-04-21T01:04:20Z","content_type":null,"content_length":"26072","record_id":"<urn:uuid:90edad67-685f-488b-85ee-cb24f7c1ca3e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
Pitching to Contact and FIP
Last month, I wrote a
primer on defense independent pitching stats, with a heavy focus on FIP. Lately, FIP is a somewhat hot topic following the Cy Young voting, where the pitchers who dominated the defense-independent
metrics won out in both leagues, and one of the two specifically cited FIP as a focus of his pitching strategy. One of the issues that is oft-discussed is the concept that pitchers whose ERAs far
outstrip their FIPs because of good defensive support are simply pitching to their defense, and should be rewarded for that. After all, why push so hard for strikeouts when you know that your defense
is going to convert a lot of outs when you do put the ball in play? Should a pitcher (a purely hypothetical pitcher, of course) who only strikes out 6.7 per 9 have that held against him when compared
to a pitcher who strikes out 9.8 per 9 or one who strikes out 10.4 per 9 when the 6.7 per 9 pitcher is pitching to a defense that better handles balls in play?
The first thing to note is that FIP is not dependent on strikeouts in the way most people think. As we learned in last month's primer, FIP is dependent on 4 different values, one of which is
strikeouts, and another of which is balls in play. Both are important to FIP, as are BB and HR. FIP is a balance of all 4 areas for a pitcher, with each area weighted according to their observed
value in actual MLB games. They do not favour strikeouts as a style, and in fact, do not favour them at all when they come with high walk and HR totals. It is fully possible to have a poor FIP as a
high-strikeout pitcher.
FIP, in general, does not favour strikeouts in any way other than how their observed value relates to the other 3 areas included in the formula. Now that that is out of the way, FIP does overvalue
strikeouts if a team's defense is very good, but only slightly, and, conversely, it actually undervalues strikeouts if a team has poor defense. To see how this works, let's revisit the table of
values for the 4 types of events covered by FIP:
HR 1.40
BB 0.30
out -0.27
BIP -0.04
Remember that the coefficients used by FIP are derived by subtracting the value of a BIP from the value of each other event and multiplying by 9. FIP is dependent on the value of these 4 types of
events being close to those in the above table. When team defense differs significantly from average, it has no effect on the value of HR, BB, or SO (actually, that's not true, because those values
are all sensitive to the run-environment, so when a team allows more or fewer runs, those values will change slightly, but for our purposes, we'll ignore that). Defense does, however, have an obvious
effect on the value of a ball in play. With a good defense, a BIP will be worth less (to the offense) than -.04. With a bad defense, a BIP will be worth more. So the coefficients in FIP are in fact
slightly off when a defense is not close to average, because FIP is tuned to fit the average value of a ball in play.
Here enters the beauty of FIP. Because of how the coefficients are derived, the formula can be easily tuned to fit any level of team defense. FIP, it turns out, is not wrong for pitchers in front of
good or bad defenses, it just has to be tuned differently. All we have to do is recalculate the value of a ball in play and re-derive the coefficients.
Imagine a pitcher had as good a defense as he could possibly have. Say, for example, he had the 2009 Mariners' defense. This defense was worth 85 runs above average, or roughly .02 runs per BIP. An
average BIP is worth -.04 runs, so an average BIP pitching in front of this defense is worth -.06 runs (remember that lower is better for the defense). We recalculate our coefficients and get:
13.13*HR + 3.23*BB - 1.90*SO*
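The derivation rule (subtract the BIP value from each event value, multiply by 9) is mechanical enough to script. This Python sketch (mine, not from the article) recovers both the familiar average-defense coefficients and the +85-defense ones; the small differences from the figures in the text are rounding in the input run values (see the note at the end of the post):

```python
# Average run values of each event, as in the table earlier in the post.
RUN_VALUES = {"HR": 1.40, "BB": 0.30, "SO": -0.27}

def fip_coefficients(bip_value):
    """Coefficient = (event value - BIP value) * 9, per the derivation above."""
    return {ev: round((v - bip_value) * 9, 2) for ev, v in RUN_VALUES.items()}

avg = fip_coefficients(-0.04)      # {'HR': 12.96, 'BB': 3.06, 'SO': -2.07}
good_d = fip_coefficients(-0.06)   # {'HR': 13.14, 'BB': 3.24, 'SO': -1.89}
```

Rounding the average-defense coefficients to whole numbers gives the familiar 13, 3, and -2 of the standard FIP formula.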
Strikeouts did indeed lose value, and walks and home runs both became more costly. How much difference does this make, though? Let's revisit our hypothetical 6.7 strikeout pitcher. Let's say he also
walks 2.1 per 9 (after adding in HBP and subtracting out IBB) and allows .33 HR per 9.
Using the traditionally derived coefficients, he'll have an FIP of about 2.84. Keeping in mind that we also need to calculate a new league constant to scale FIP to ERA since we changed the
coefficients, we find that he would have an FIP of about 2.79 using our defense-specific coefficients. Traditional FIP underestimated him by about .05 ER/9, or about 1.2 runs per 200 innings,
compared to defense-sensitive FIP.
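As a quick check on the arithmetic, here is the traditional-coefficient FIP for that stat line. Note that the league constant of 3.15 is my own back-solved assumption, chosen to land on the 2.84 above; the article never states its constant:

```python
def fip_per9(hr9, bb9, so9, constant=3.15):
    """Traditional FIP from per-9 rates; `constant` scales FIP to league ERA
    and is a guessed value here, not one given in the article."""
    return (13 * hr9 + 3 * bb9 - 2 * so9) / 9 + constant

fip = fip_per9(0.33, 2.1, 6.7)   # ~2.84 for the hypothetical pitcher
```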
Since we are already adjusting for defense in our calculations, we can go a step further in incorporating defensive context into our valuation. The league average ERA this year was 4.32, but we know
that that won't be the case given a +85 run defense. A league average pitcher, given +85 defense, will have about a 3.84 ERA. Using that figure, we can recalculate our constant for FIP and calculate
a new number that is an estimate of actual ERA, not of ERA minus defensive support. This means that we would expect our 2.79 FIP pitcher to have an actual ERA of about 2.31. This is, of course, not
as valuable as an ERA of 2.31 in front of an average defense, so we have to account for that as well. If average is 3.87, then a replacement level starter (using a .380 winning percentage as
replacement level) will have an ERA right at 5.
A 2.31 ERA is good for a .749 winning percentage against a league average 4.32 ERA. Our replacement level pitcher, who is normally .380, is not .380 against the league with his +85 defense, however.
He is .437. That means that our pitcher is worth about 6.9 WAR per 200 innings. Using his traditional FIP, we would give him a .679 winning percentage over a .380 replacement level, which comes out
to 6.6 WAR per 200 innings. Our hypothetical** pitcher actually gained .3 wins once we considered the nuances of pitching to contact in front of a stellar defense. That's actually quite a bit. It's
worth over a million dollars to the pitcher on the open market.
At the beginning of this article, I said that the traditional coefficients were only slightly off with an extreme defense. Here, we find that they can be off by as much as .3 wins if we take a Cy
Young caliber contact pitcher and put him in front of the best defense on the planet. Can we really write off .3 wins as slight enough to use traditional FIP as a stand-in for defense-sensitive FIP
if we want to capture the value of pitchers separate from, but in the context of, their defense?
If the value were ever really that high, I'd say no. It isn't, though, at least not if what we want to measure is how defense affects a pitcher's approach. Everything we plugged into the calculations
above were purely after-the-fact measurements, but the only thing a pitcher can leverage in adjusting his approach are expected values. That means that if our +85 defense only projects to be worth 60
runs a year going forward (I'm making that number up for illustration purposes), then the pitcher can only leverage 70% of those 85 runs by adjusting his approach. Even though the defense ended up
saving 85 runs, there is no way the pitcher could have leveraged the 25 they saved over their projection without knowing they would outperform the projection in advance (which, by the loose
definition of a projection, you can't). He also can't leverage his full home run rate, which in this case is probably at least to some extent anomalous. If he knew ahead of time that he would only
allow .33 HR per 9 giving up that much contact, he could leverage contact quite a bit (the .3 wins arrived at above being "quite a bit" in this case), but only knowing his projected home run rate, he
can only leverage up to his projection, not beyond.
For these reasons, our hypothetical pitcher is never going to actually be undervalued by .3 wins per 200 innings using traditional FIP just because he pitches to contact, even if we give him by far
the best defense in baseball.
This also means that just because a pitcher's ERA is better than his FIP, even if that difference is because of defensive support, it does not mean the pitcher was utilizing a better defense if his
team defense was not far above average overall. Let's create a new hypothetical pitcher who has the same FIP as the one above and an ERA in the 2.2s, but whose team defense we measure to be a bit
below average. In this case, we don't know that the difference between the pitcher's FIP and ERA is because the pitcher got better than average defensive support, but he might have. Let's assume that
he did. Does he get credit for pitching to contact and using that good defensive support? In this case, no, because whatever his defensive support ends up being, we expect it to be below average, so
deciding to pitch to contact is a bad choice. In terms of how this pitcher can leverage his defense, strikeouts are actually slightly underrated (slightly enough that we can basically ignore it, but
they are underrated) even though the pitcher's ERA over-credits him for good defensive support, because he has no way to leverage that defensive support based on decisions about pitching approach
made before the fact. Our pitcher is now far overrated by ERA because of defensive support and not at all underrated by FIP because of an ability to leverage good defense.
We return to our initial question about using FIP for pitchers who receive good defensive support: should a pitcher be punished for pitching to contact in order to leverage a good defense? The answer
is somewhat complicated. No, a pitcher should not be punished for leveraging good defense if he is doing it properly, but FIP can actually be tweaked to account for that pretty easily because the
methodology for deriving the coefficients lends itself perfectly to adjusting the formula for differing values of balls in play. Traditional FIP and defense-sensitive FIP track very closely together,
though, to the point that the difference is mostly negligible and not worth not using FIP in almost any conceivable case. Even in cases where defense-sensitive FIP is a bit off from FIP, FIP will
still capture the context of pitching to defense, while still separating the actual value of the defense, better than ERA (note how much closer defense-sensitive FIP, after we recalculated the
coefficients to take defensive context into account, was to the traditional measure than it was to the predicted ERA once we also added in the value of the defense). Furthermore, you can't tell if a
pitcher even had the opportunity to properly leverage his defensive support just by comparing FIP to ERA, even if you assume that the difference is due to defensive support. A pitcher with a 2.8 FIP
and a 2.2 ERA, even assuming that his ERA includes a lot of defensive support, did not necessarily ever have the opportunity to leverage that support by choosing to pitch to contact. In fact, the
degree to which a pitcher can leverage his defense has nothing to do with his defensive support itself, but with the projected value of his defensive support before the fact and with his expected
rates of HR, BB, and SO given a certain pitching strategy.
*NOTE: You won't be able to exactly replicate any of these values from the numbers given here because of rounding discrepancies, so if you are trying to work through the math on your own and find
some differences, that is probably why.
**NOTE: This pitcher truly is hypothetical. Don't believe me? What real life pitcher threw in a 4.32 ERA league in 2009? That's mostly why the value doesn't match up at all with the pitcher you
looked up, by the way.
6 comments:
Great article Kincaid, thanks a lot.
Great article Kincaid. It's also an excellent reminder of your great series on FIP earlier in the year. I'll be referring to this and the series in a piece later this week. Good stuff.
Thanks to both of you. Glad you enjoyed it.
Very interesting Kincaid. Quick question, I'm guessing that you found the BIP normalization by dividing PO into UZR, but I'm having trouble calculating the derivation of your coefficients. I'd
like to take a look at the Rays pitchers using this, and was hoping you could point me in the right direction. Thanks for the neural stimulation.
Team UZR is the number of runs the team saves over average on all balls in play, so to figure out how many runs they save on a single ball in play, you would take team UZR and divide it by the
number of balls in play the Rays allowed. BIP can be estimated by (BF-HR-SO-BB-HBP), which comes out to 4323 for Tampa Bay. They were +70 in the field this year according to UZR, so you would
divide 70 by 4323, and get that each batted ball in front of that defense was worth .016 runs better than average. Ideally, you would want to use a regressed value for team defense instead of the
full 70 runs, but it won't make a huge difference. If you want, you can knock it down to 50 or 60 runs and see what that does in your final outcome.
Since the Rays are an above average defense, you subtract .016 from -.04, which is the average value of a ball in play, because a BIP against the Rays is worth .016 runs less for the offense than
an average BIP. You would get the following for your new values of each of the 4 events in FIP:
HR 1.4, BB .3, SO -.27, BIP -.056
Subtract -.056 from each of the values (or add .056, same thing), and you would get
HR 1.456, BB .356, SO -.214, BIP 0
Multiply each of those by 9, and you have your coefficients.
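The arithmetic in the last few comments can be collected into a short script (a hypothetical helper, using the run values and Rays totals quoted above; the function name and structure are mine, not anything from the article):

```python
def defense_adjusted_fip_coefficients(team_uzr, balls_in_play):
    """Rebuild FIP-style coefficients around a team's defensive context."""
    run_values = {"HR": 1.4, "BB": 0.3, "SO": -0.27}  # per-event run values
    avg_bip_value = -0.04                             # league-average ball in play
    # Runs this defense saves per ball in play, relative to average.
    per_bip = team_uzr / balls_in_play
    # A BIP in front of an above-average defense is worth less to the offense.
    bip_value = avg_bip_value - per_bip
    # Shift every event so the BIP term becomes zero, then scale to a per-9 rate.
    return {ev: round((val - bip_value) * 9, 3) for ev, val in run_values.items()}

coeffs = defense_adjusted_fip_coefficients(team_uzr=70, balls_in_play=4323)
print(coeffs)  # {'HR': 13.106, 'BB': 3.206, 'SO': -1.924}
```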
Very big thanks.
Random number generator
From Citizendium, the Citizens' Compendium
While defining true randomness can drift from computer science and mathematics into philosophy, Bruce Schneier has a reasonable criterion for a source of random numbers: "It cannot be reliably
reproduced."^[1] This article focuses first on the idea of randomness, rather than on how the random or pseudo-random sequence is produced. For properly selected applications, it may be possible to
be adequately random with a technique that does not depend on a true random physical process. In other cases, it may be practical to use a combination of physical and computational methods.
In any of these cases, wise implementers do not assume randomness or even adequate pseudo-randomness. Even when the generator is an apparently random physical process, the motto "trust, but verify"
still holds because some physical phenomena may indeed be random in the short- or long-term, but the nature of the physical resource (e.g., the declining radioactivity of a source) may affect its
properties. It may be possible to compensate for a weak area, but only if it is known. See testing for randomness.
Choosing random quantities to foil a resourceful and motivated adversary is surprisingly difficult. — IETF Best Current Practice 106^[2]
A truly random one-time pad may be generated with a combination of measurement of a random physical phenomenon (e.g., thermal noise or radioactive disintegrations), which is then captured by a
computer and put into high-density storage. An expert in the physical phenomenon being measured needs to be consulted to determine if postprocessing is needed.
There are a wide range of applications for random or pseudo-random numbers, with various degrees of randomness. A computer game can give an apparently different scenario whenever played, with some
simple fairly random physical inputs, such as selected bits from the computer's time of day clock, and, perhaps, the time between the last several keystrokes or mouse movements. In other cases, such
as extremely critical cryptography, only the best uncorrelated physical random sequences will do. See related articles for examples of applications.
Manual generation
In the past, one-time pads were generated by typists who were told to type randomly, but, in practice, had patterns. Limited to manual methods, a not-unreasonable way is to put numbered balls in a
bowl, having a blindfolded person take it out, the number recorded, the ball returned to the bowl, and the bowl shaken thoroughly before taking the next ball.
This is a point mostly of historical interest, but typist patterns did cause weakness in some one-time pads. It is hard to imagine why manual generation would be useful today.
One vendor offers dice with barcodes on them so that results can easily be scanned into a computer.
Random sequences from physical phenomena
Real-world physical phenomena often exhibit some randomness, although few are completely random. Some may appear random in the short term, yet exhibit some unexpected patterns or self-similarity when
additional data is examined. The number of cosmic rays striking an object at a given second is as random a process as is known. The number of disintegration events, in a radioactive material, is
often considered random, but consider that if the amount of radioisotope is finite, as more and more half-lives pass, there will be fewer disintegrations. There is no simple answer to the question of
whether a downward trend in disintegrations makes that sequence nonrandom.
Many natural sources of random bits may be defective in that they produce biased output bits (so that the probability of a one is different than the probability of a zero), or bits which are
correlated with each other. — Goldwasser & Bellare ^[3]
There are, however, methods that can be used to postprocess such a source to remove bias and correlation.
One widespread technique is due to John von Neumann. Take the input bits in pairs, discard 11 or 00 pairs, output one if the input is 10 and zero if the input is 01. As long as the input bits are
independent, this completely eliminates bias. For example, suppose the input is 90% ones. The chance of a 10 sequence is .9*.1 while that of a 01 sequence is .1*.9; the two probabilities are exactly
equal so the output is unbiased. If the inputs are unbiased and uncorrelated, then the four input combinations 00 01 10 and 11 are equally likely and on average you get two bits out for eight in. Any
bias reduces this ratio; with large bias the technique can be quite inefficient. In our example with 90% bias, 81% of pairs are 11 and 1% are 00; these are thrown away. On average, 200 bits of input
(100 pairs) give only 18 output bits. More problematic in some applications, there is some chance of getting a long run of 11 inputs and generating no output at all for an appreciable time. Also, if
the input bits are correlated rather than independent, the technique works less well.
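Von Neumann's pairing trick is simple enough to sketch in a few lines (a toy demonstration, not production code):

```python
import random

def von_neumann_extract(bits):
    """Pair up input bits; emit 1 for a 10 pair, 0 for 01, and discard 00/11."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

random.seed(1)  # reproducible demo input
biased = [1 if random.random() < 0.9 else 0 for _ in range(200_000)]  # ~90% ones
unbiased = von_neumann_extract(biased)
# The output is close to 50/50 despite the heavy input bias, and -- as the
# text notes -- only about 18 output bits emerge per 200 input bits.
print(sum(unbiased) / len(unbiased), len(unbiased))
```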
Another common technique is to apply a cryptographic hash to the input. Suppose we estimate that there is about 1 bit of randomness per input byte and our hash is SHA-1 with 160-bit output. Hash 160
input bytes and we should have about 160 bits of randomness; the whole hash output should be unbiased and adequately random. In practice, one would hash more than 160 bytes to give a safety margin,
and might use only part of the hash output as random output to avoid giving an enemy enough information to determine the internal state of the hash.
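That scheme can be sketched as follows (the 2x input margin and 16-byte truncation are arbitrary illustrative choices, not a standard):

```python
import hashlib

def extract_random(weak_input: bytes, out_bytes: int = 16) -> bytes:
    """Condense weakly random input (~1 bit of entropy per byte assumed)."""
    if len(weak_input) < 320:  # 2x safety margin over the 160-byte minimum
        raise ValueError("not enough weak input collected")
    digest = hashlib.sha1(weak_input).digest()  # 20-byte hash output
    return digest[:out_bytes]  # withhold part of the state from an observer

sample = extract_random(bytes(range(256)) + b"timing-jitter" * 8)
print(sample.hex())
```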
In theory, any high-grade compression algorithm could be substituted for the hash provided any non-random headers are thrown away; well-compressed data is very close to random. However, this
technique is rare in practice. To be confident that it was secure, it would need thorough cryptographic analysis of the compression methods; the general practice is therefore to just use a
cryptographic hash that has already undergone analysis.
A block cipher could also be used. For example, one might consider AES a mixing function taking a 128-bit plaintext and up to 256 bits of key as input and giving a 128-bit output. There is much
analysis of methods of building hash functions from block ciphers; it would provide guidance for exactly how a block cipher might be used for this. The general practice, however, is to just use a
hash; one might choose a hash that is based on a block cipher, but it would be unusual to use a block cipher directly.
Various sources may be used to provide (partly) random inputs to the process. A geiger counter (measuring either background radiation or radiation from a radioactive sample) or a radio tuned so it
only gets static are good sources. A digital camera can be used, either pointed at some random physical process or pointed at a plain background to use its circuit noise. Silicon Graphics built one
called lavarand using a video camera pointed at a group of lava lamps; the descendant lavarnd uses a digital camera with the lens cap on. A microphone can be used in much the same two ways.
The Turbid generator ^[4] is a sophisticated design using a sound card, without a microphone, for input. Other designs attempt to estimate or measure the input entropy, then include safety factors to
allow for estimation errors. Turbid takes a different approach, proving a lower bound on the input entropy from the physical properties of the devices involved. From that and some relatively weak
assumptions about properties of the hash function, it is possible to derive lower bounds on the output entropy. Parameters are chosen to make that lower bound 159.something bits of entropy per 160
output bits. The documentation talks of "smashing it up against the asymptote".
Hardware generators can be designed with internal sources of random bits. Thermal noise in diodes is often used. An alternative is to run two oscillators with different frequencies and without good
frequency stability on either. The slower one acts as a clock, determining when to take the faster one's state as an output bit. The frequency ratio is typically around 100:1, so quite minor
variations in the slow one have large effects on choice of data from the fast one. There is some research work on laser-based sources with very high data rates.
A real generator will often combine several of these techniques. For example, the RNGs built into some Intel chipsets^[5] use two oscillators, and modulate the frequency of the slower one with
thermal noise. The hardware includes a de-skewing operation based on Von Neumann's technique and the software driver uses SHA-1 hashing.
Random devices
Many operating systems provide a random device as a source of random numbers for applications. A program can just read the device when it needs random numbers. Typically input comes from events
within the operating system kernel — keystroke timings, mouse data, disk access delays, and so on. If the machine has a hardware generator, its output is often mixed in to the random device as well
so programs need only read the random device. All such devices use a cryptographic hash to mix the data, and some use a cipher for output processing as well — the Yarrow ^[6] generator uses a block
cipher in counter mode while the OpenBSD device uses a stream cipher. The Linux device uses a second hash.
Generally a buffer is used, so the process can be thought of as a sequence:
1. mix some input into the buffer
2. hash the buffer to get a substantial chunk of hash output
3. use part of that as random output
4. mix the remainder back into the buffer
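Very loosely, that cycle can be sketched like this (a toy, synchronous model; real devices are asynchronous and track entropy estimates, and the hash choice here is arbitrary):

```python
import hashlib

class ToyRandomDevice:
    """Illustrative only: a pool that is mixed and drained with SHA-256."""
    def __init__(self):
        self.pool = bytearray(64)

    def mix_in(self, event_bytes: bytes):
        # 1. mix some input into the buffer
        self.pool = bytearray(
            hashlib.sha256(bytes(self.pool) + event_bytes).digest())

    def read(self, n: int = 16) -> bytes:
        # 2. hash the buffer to get a substantial chunk of hash output
        out = hashlib.sha256(b"out" + bytes(self.pool)).digest()
        # 3. use part of that as random output; 4. mix the remainder back in
        self.mix_in(out[n:])
        return out[:n]

dev = ToyRandomDevice()
dev.mix_in(b"keystroke-timing-event")  # kernel event data would go here
r1, r2 = dev.read(), dev.read()
print(r1.hex(), r2.hex())
```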
A complication is that the process is actually asynchronous, with input data mixed in as it arrives and output produced as required. Considerable care must be taken to ensure that there is enough
randomness to support the output. Generally two devices are implemented; one (typically named /dev/random) provides high quality random numbers for critical applications and will block (make the
client program wait) if there has not been enough input entropy to justify the output. The other device (/dev/urandom) never blocks but does not provide a strong guarantee of output quality. In some
implementations, such as on Linux, there are actually three buffers (called "entropy pools" in the documentation), one to collect input and one each for the two output devices.
For a typical desktop system, these devices are generally adequate — requirements for random numbers are not heavy, and there is plenty of mouse, keyboard and disk activity. However, on other systems
the random device may be over-stressed without hardware assistance. Consider a web server that supports many SSL connections or an IPsec gateway running many tunnels. These applications demand
considerable quantities of high-grade random numbers but available inputs are limited — a server may not even have a keyboard or mouse and it might use a solid state disk or an intelligent RAID
controller so no randomness from disk delay is available. Information from things like timing of network interrupts may be partly known to an attacker who monitors the network, so it can be used only
with great caution. Such systems usually require hardware assistance to meet their randomness requirements.
Current chips from Intel^[5], Via^[7] and others provide a hardware RNG. On a machine without that, Turbid ^[4] may be an alternative; a server has little other use for a sound card but there is
often one built into the motherboard or it may be easy to add one.
Pseudorandom number generators
In practical computing, it is convenient to use a pseudorandom number generator (PRNG), which produces a sufficiently varying sequence that, for example, a computer game driven with that generator
appears to produce different characteristics each time that it is played. Nevertheless, most PRNGs that are operating system utilities, such as UNIX random(), will eventually repeat the sequence over
a sufficiently long period of time. In addition, the programmer may have the choice of giving an explicit initialization value, which may be called a seed or a nonce, to the PRNG.
Most general-purpose PRNGs will produce the same sequence as long as they are given the same seed, which can even be useful for some software development purposes (see pitfalls in computer simulation
), or for such things as being sure that a series of psychological research volunteers all see the same set of events. A given PRNG, however, may be told to use some reasonably random physical event
in the computer as the seed, such that the seed is unpredictable.
Donald Knuth, an authority on PRNGs, quoted John von Neumann in a tongue-in-cheek but realistic view of the limits of PRNGs^[8]:
Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin — John von Neumann (1951)
It is possible to write very good and very bad pseudorandom number generators. Known pitfalls need to be avoided, both in initialization and in computing the next number.
Known bad ideas
Several common types of PRNG — such as linear congruential generators or linear feedback shift registers — cannot be used directly in adversarial applications. With well-chosen parameters, they have
reasonable statistical properties and are useful for such things as controlling Monte Carlo simulations. However, in the presence of an adversary — someone who wants to break your cryptosystem or
cheat at your game — they are inadequate, easily broken. They might be used as components in some more complex generator, but they cannot safely be used alone in such applications.
Any PRNG must be initialised with a seed. If you have any concern about someone attacking your generator, such seeds must be truly random. Using something like your computer's clock or your process
ID, or even a combination of such things, makes it dangerously easy for an enemy to guess the value and break your generator. In a well-known example, an early version of Netscape's SSL server was
broken^[9] in exactly this way. Using another PRNG to provide a seed only moves the problem without solving it, since that PRNG must also be initialised.
Cryptanalytic Attacks on Pseudorandom Number Generators^[10] enumerates attacks and assesses vulnerabilities in widely used generators.
Some PRNGs
Linear congruential
Knuth illustrates the most commonly used PRNG, which he attributes to D.H. Lehmer in 1949^[11]. This is an easy generator to implement in software and is very widely used. For example, the rand()
function in most C libraries works this way.
A linear congruential generator creates successive outputs x[n] for n = 1,2, 3... using the formula:
x[i] = (a * x[i-1] + b) mod m
The seed is the initial value x[0].
Choice of the constants a, b and the modulus m is critical; bad choices can give very weak generators.
The period of such a generator is at most m (at most m-1 for a purely multiplicative generator with b = 0); a, b and m are normally chosen so that the maximum is achieved. Schneier ^[12] provides a table with recommended choices for a, b, m.
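A direct sketch of this recurrence (the a, b, m used here are the widely published "Numerical Recipes" constants, chosen only for illustration; as noted above, bad constants give very weak generators):

```python
def lcg(seed, a=1664525, b=1013904223, m=2**32):
    """Generate the linear congruential sequence x[i] = (a*x[i-1] + b) mod m."""
    x = seed
    while True:
        x = (a * x + b) % m
        yield x

gen = lcg(seed=42)
print([next(gen) for _ in range(3)])
```

Note that the same seed always reproduces the same sequence, which is exactly why such generators must never be seeded guessably in adversarial settings.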
Shift registers
Many random number generators based on shift registers with feedback are possible. The commonest type uses linear feedback shift registers. These are very easy to implement in hardware and reasonable
in software. They are commonly used in stream ciphers; we describe some in the stream cipher article.
OFB and CTR Sequences
A block cipher can provide a good pseudo-random sequence; the modes of operation for this are Output Feedback (OFB) or Counter (CTR) mode.
Encryption devices can be used as practical and strong PRNGs, if the output applies some type of masking so an adversary cannot know the full internal state of the generator.^[13]
Blum Blum Shub
This is the strongest known generator, with the advantage of relative simplicity, but the disadvantage of being computationally intensive. If it is used to generate numbers that are not needed
frequently, it may be useful.^[14] ^[15]
The possibility also exists that one or more processors could be dedicated to BBS computation.
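As a toy illustration only (real Blum Blum Shub uses large secret primes; the tiny p = 11, q = 23 below are a textbook demonstration pair, both congruent to 3 mod 4):

```python
def bbs_bits(seed, p=11, q=23, n=8):
    """Toy Blum Blum Shub: square modulo M = p*q and output the low bit."""
    m = p * q  # 253 here; in real use M is a large, hard-to-factor modulus
    x = seed % m
    bits = []
    for _ in range(n):
        x = (x * x) % m   # the quadratic-residue step
        bits.append(x & 1)  # emit the least significant bit
    return bits

print(bbs_bits(seed=3))  # [1, 1, 0, 0, 1, 0, 1, 0]
```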
The possibility of pseudo-random methods that are adequately random
Some research indicates that in carefully selected situations, a pseudo-random number generator, which meets a number of cryptographic criteria, may be adequately random for security. The proofs are
complex and not universally accepted, but may be true for a useful number of situations.^[16] ^[3]
Random numbers that follow particular statistical distributions
In many applications, the exact sequence is desired to be as random as possible, but the sequence may be under additional constraints to meet specific statistical requirements.
See generating values for normal random variables for an example of generating random numbers that fit a desired distribution.
Testing for Randomness
Knuth presents an extensive range of tests, of which a few are mentioned here.
A more recent reference is a web page where the US National Institute of Standards and Technology describe their work on "a battery of statistical tests suitable in the evaluation of random number
generators and pseudo-random number generators used in cryptographic applications."
One widely used set of tests are the Diehard battery developed by George Marsaglia at Florida State University.
Testing for runs
It is entirely reasonable to have individual numbers recur in a random sequence — a random sequence may even have a "run" of a number such as 100 354 972 972 972 155 579. Indeed, when typists have
been asked to type randomly, they may intuitively avoid runs and produce a less random sequence. However, one statistical test for randomness is to look at the frequency of runs.
Frequency test
The simplest test is to chop the output up into conveniently-sized chunks and look at frequency of possible chunk values over time. For example, if bytes are the unit, all 256 possible values should
occur and their frequencies should be approximately equal. Using larger units gives more stringent tests; for example sampling 16-bit chunks and testing that all values occur equally often tells more
than just looking at frequency of byte values.
Chi-squared statistic
The Chi-squared goodness of fit test provides a more sophisticated and powerful method of analyzing frequency data.
Consider an algorithm that looks at pairs of bytes and creates a 256x256 matrix for frequencies of byte sequences, indexed with (arbitrarily) rows for the first byte and columns for the second. This
need not be equivalent to breaking the data into 16-bit chunks; one can look at byte sequences ab, bc, cd, de, ... rather than just ab, cd, ef ... If the data is random, one expects all cells of the
matrix to have approximately the same frequency values. However, many types of non-random bias will show up as deviations in that pattern.
Chi-squared is the standard technique for drawing inferences from such a matrix.
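A minimal sketch of the byte-frequency version of this statistic (using os.urandom as the source under test; deriving a pass/fail threshold from the chi-squared distribution with 255 degrees of freedom is left out):

```python
import os
from collections import Counter

def chi_squared_bytes(data: bytes) -> float:
    """Chi-squared statistic for single-byte frequencies (255 deg. of freedom)."""
    expected = len(data) / 256  # each byte value should occur equally often
    counts = Counter(data)
    return sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in range(256))

stat = chi_squared_bytes(os.urandom(100_000))
print(stat)  # for random data this should land near 255
```

Heavily biased data produces a dramatically larger statistic, which is the basis of the test.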
Serial test
Gap test
Partition test
Computer simulation
If it is desired to do repeated runs of a simulation, either for software testing or to evaluate variable components under repeatable conditions, a PRNG that will reuse the same seed to produce the
same sequence is useful.
Cryptographic one-time pads preferably are generated from physical random phenomena. For high-volume ciphers, however, reproducibility is needed, so, while the initialization variables tend to be
truly random, a key generator for a stream or block cipher uses a demonstrably strong pseudo-random number generator.
dlib C++ Library - graph_labeling_ex.cpp
// The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
/*
    This is an example illustrating the use of the graph_labeler and
    structural_graph_labeling_trainer objects.

    Suppose you have a bunch of objects and you need to label each of them as true or
    false.  Suppose further that knowing the labels of some of these objects tells you
    something about the likely label of the others.  This is common in a number of domains.
    For example, in image segmentation problems you need to label each pixel, and knowing
    the labels of neighboring pixels gives you information about the likely label since
    neighboring pixels will often have the same label.

    We can generalize this problem by saying that we have a graph and our task is to label
    each node in the graph as true or false.  Additionally, the edges in the graph connect
    nodes which are likely to share the same label.  In this example program, each node
    will have a feature vector which contains information which helps tell if the node
    should be labeled as true or false.  The edges also contain feature vectors which give
    information indicating how strong the edge's labeling consistency constraint should be.
    This is useful since some nodes will have uninformative feature vectors and the only
    way to tell how they should be labeled is by looking at their neighbor's labels.

    Therefore, this program will show you how to learn two things using machine learning.
    The first is a linear classifier which operates on each node and predicts if it should
    be labeled as true or false.  The second thing is a linear function of the edge
    vectors.  This function outputs a penalty for giving two nodes connected by an edge
    differing labels.  The graph_labeler object puts these two things together and uses
    them to compute a labeling which takes both into account.  In what follows, we will use
    a structural SVM method to find the parameters of these linear functions which minimize
    the number of mistakes made by a graph_labeler.

    Finally, you might also consider reading the book Structured Prediction and Learning in
    Computer Vision by Sebastian Nowozin and Christoph H. Lampert since it contains a good
    introduction to machine learning methods such as the algorithm implemented by the
    graph_labeler.
*/
#include <dlib/svm_threaded.h>
#include <iostream>
using namespace std;
using namespace dlib;
// ----------------------------------------------------------------------------------------
// The first thing we do is define the kind of graph object we will be using.
// Here we are saying there will be 2-D vectors at each node and 1-D vectors at
// each edge. (You should read the matrix_ex.cpp example program for an introduction
// to the matrix object.)
typedef matrix<double,2,1> node_vector_type;
typedef matrix<double,1,1> edge_vector_type;
typedef graph<node_vector_type, edge_vector_type>::kernel_1a_c graph_type;
// ----------------------------------------------------------------------------------------
template <
    typename graph_type,
    typename labels_type
    >
void make_training_examples(
    dlib::array<graph_type>& samples,
    labels_type& labels
)
/*!
    This function makes 3 graphs we will use for training.  All of them
    will contain 4 nodes and have the structure shown below:

        (0)-----(1)
         |       |
         |       |
         |       |
        (3)-----(2)

    In this example, each node has a 2-D vector.  The first element of this vector
    is 1 when the node should have a label of false while the second element has
    a value of 1 when the node should have a label of true.  Additionally, the
    edge vectors will contain a value of 1 when the nodes connected by the edge
    should share the same label and a value of 0 otherwise.

    We want to see that the machine learning method is able to figure out how
    these features relate to the labels.  If it is successful it will create a
    graph_labeler which can predict the correct labels for these and other
    similarly constructed graphs.

    Finally, note that these tools require all values in the edge vectors to be >= 0.
    However, the node vectors may contain both positive and negative values.
!*/
{
    std::vector<bool> label;
    graph_type g;
    // ---------------------------
    g.set_number_of_nodes(4);
    label.resize(g.number_of_nodes());

    // store the vector [0,1] into node 0.  Also label it as true.
    g.node(0).data = 0, 1;   label[0] = true;
// store the vector [0,0] into node 1.
g.node(1).data = 0, 0; label[1] = true; // Note that this node's vector doesn't tell us how to label it.
// We need to take the edges into account to get it right.
// store the vector [1,0] into node 2.
g.node(2).data = 1, 0; label[2] = false;
// store the vector [0,0] into node 3.
g.node(3).data = 0, 0; label[3] = false;
    // Add the 4 edges as shown in the ASCII art above.
    g.add_edge(0,1);
    g.add_edge(1,2);
    g.add_edge(2,3);
    g.add_edge(3,0);

    // set the 1-D vector for the edge between node 0 and 1 to the value of 1.
    edge(g,0,1) = 1;
    // set the 1-D vector for the edge between node 1 and 2 to the value of 0.
    edge(g,1,2) = 0;
    edge(g,2,3) = 1;
    edge(g,3,0) = 0;
    // output the graph and its label.
    samples.push_back(g);
    labels.push_back(label);
// ---------------------------
g.node(0).data = 0, 1; label[0] = true;
g.node(1).data = 0, 1; label[1] = true;
g.node(2).data = 1, 0; label[2] = false;
g.node(3).data = 1, 0; label[3] = false;
// This time, we have strong edges between all the nodes. The machine learning
// tools will have to learn that when the node information conflicts with the
// edge constraints that the node information should dominate.
edge(g,0,1) = 1;
edge(g,1,2) = 1;
edge(g,2,3) = 1;
edge(g,3,0) = 1;
    samples.push_back(g);
    labels.push_back(label);

    // ---------------------------
g.node(0).data = 1, 0; label[0] = false;
g.node(1).data = 1, 0; label[1] = false;
g.node(2).data = 1, 0; label[2] = false;
g.node(3).data = 0, 0; label[3] = false;
edge(g,0,1) = 0;
edge(g,1,2) = 0;
edge(g,2,3) = 1;
edge(g,3,0) = 0;
    samples.push_back(g);
    labels.push_back(label);
    // ---------------------------
}
// ----------------------------------------------------------------------------------------
int main()
{
    try
    {
// Get the training samples we defined above.
dlib::array<graph_type> samples;
std::vector<std::vector<bool> > labels;
make_training_examples(samples, labels);
// Create a structural SVM trainer for graph labeling problems. The vector_type
// needs to be set to a type capable of holding node or edge vectors.
typedef matrix<double,0,1> vector_type;
structural_graph_labeling_trainer<vector_type> trainer;
    // This is the usual SVM C parameter.  Larger values make the trainer try
    // harder to fit the training data but might result in overfitting.  You
    // should set this value to whatever gives the best cross-validation results.
    trainer.set_c(10);
// Do 3-fold cross-validation and print the results. In this case it will
// indicate that all nodes were correctly classified.
cout << "3-fold cross-validation: " << cross_validate_graph_labeling_trainer(trainer, samples, labels, 3) << endl;
// Since the trainer is working well. Let's have it make a graph_labeler
// based on the training data.
graph_labeler<vector_type> labeler = trainer.train(samples, labels);
/*
    Let's try the graph_labeler on a new test graph.  In particular, let's
    use one with 5 nodes as shown below:

        (0 F)-----(1 T)
          |         |
          |         |
          |         |
        (3 T)-----(2 T)------(4 T)

    I have annotated each node with either T or F to indicate the correct
    output (true or false).
*/
graph_type g;
g.set_number_of_nodes(5);
g.node(0).data = 1, 0; // Node data indicates a false node.
g.node(1).data = 0, 1; // Node data indicates a true node.
g.node(2).data = 0, 0; // Node data is ambiguous.
g.node(3).data = 0, 0; // Node data is ambiguous.
g.node(4).data = 0.1, 0; // Node data slightly indicates a false node.
// Set the edges up so nodes 1, 2, 3, and 4 are all strongly connected.
g.add_edge(0,1);
g.add_edge(1,2);
g.add_edge(2,3);
g.add_edge(3,0);
g.add_edge(2,4);
edge(g,0,1) = 0;
edge(g,1,2) = 1;
edge(g,2,3) = 1;
edge(g,3,0) = 0;
edge(g,2,4) = 1;
// The output of this shows all the nodes are correctly labeled.
cout << "Predicted labels: " << endl;
std::vector<bool> temp = labeler(g);
for (unsigned long i = 0; i < temp.size(); ++i)
cout << " " << i << ": " << temp[i] << endl;
// Breaking the strong labeling consistency link between node 1 and 2 causes
// nodes 2, 3, and 4 to flip to false. This is because of their connection
// to node 4 which has a small preference for false.
edge(g,1,2) = 0;
cout << "Predicted labels: " << endl;
temp = labeler(g);
for (unsigned long i = 0; i < temp.size(); ++i)
cout << " " << i << ": " << temp[i] << endl;
    }
    catch (std::exception& e)
    {
        cout << "Error, an exception was thrown!" << endl;
        cout << e.what() << endl;
    }
}
Body mass index
Body mass index (BMI), also called the Quetelet Index, is a calculation used to determine an individual’s amount of body fat.
The BMI gives healthcare professionals a consistent way of assessing their patients’ weight and an objective way of discussing it with them. It is also useful in suggesting the degree to which the
patient may be at risk for obesity-related diseases.
BMI is a statistical calculation intended as an assessment tool. It can be applied to groups of people to determine trends or it can be applied to individuals. When applied to individuals, it is only
one of several assessments used to determine health risks related to being underweight, overweight, or obese.
The history of BMI
The formula used to calculate BMI was developed more than one hundred years ago by Belgian mathematician and scientist Lambert Adolphe Quetelet (1796-1874). Quetelet, who called his calculation the
Quetelet Index of Obesity, was one of the first statisticians to apply the concept of a regular bell-shaped statistical distribution to physical and behavioral features of humans. He believed that by
careful measurement and statistical analysis, the general characteristics of populations could be mathematically determined. Mathematically describing the traits of a population led him to the
concept of the hypothetical “average man” against which other individuals could be measured. In his quest to describe the weight to height relationship in the average man, he developed the formula
for calculating the body mass index.
Calculating BMI requires two measurements: weight and height. To calculate BMI using metric units, weight in kilograms (kg) is divided by the height squared measured in meters (m). To calculate BMI
in imperial units, weight in pounds (lb) is divided by height squared in inches (in) and then multiplied by 703. This calculation produces a number that is the individual’s BMI. This number, when
compared to the statistical distribution of BMIs for adults ages 20–29, indicates whether the individual is underweight, average weight, overweight, or obese. The 20–29 age group was chosen as the
standard because it represents fully developed adults at the point in their lives when they statistically have the least amount of body fat. The formula for calculating the BMI of children is the
same as for adults, but the resulting number is interpreted differently.
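As a quick sketch of the two formulas just described (the function names are ours, purely illustrative):

```python
# Sketch of the BMI formulas described above; the names are illustrative,
# not from any standard library.

def bmi_metric(weight_kg, height_m):
    # weight in kilograms divided by the square of height in meters
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb, height_in):
    # weight in pounds divided by the square of height in inches, times 703
    return weight_lb / height_in ** 2 * 703

# The same person measured both ways yields nearly the same number:
print(round(bmi_metric(70, 1.75), 1))       # 70 kg at 1.75 m is about 22.9
print(round(bmi_imperial(154.3, 68.9), 1))  # the same person in imperial units
```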
Although the formula for calculating BMI was developed in the mid-1800s, it was not commonly used in the United States before the mid-1980s. Until then, fatness or thinness was determined by tables
that set an ideal weight or weight range for each height. Heights were measured in one-inch intervals, and the ideal weight range was calculated separately for men and women. The information used to
develop these ideal weight-for-height tables came from several decades of data compiled by life insurance companies. These tables determined the probability of death as it related to height and
weight and were used by the companies to set life insurance rates. The data excluded anyone with a chronic disease or anyone who, for whatever health reason, could not obtain life insurance.
Interest in using the BMI in the United States increased in the early 1980s when researchers became concerned that Americans were rapidly becoming heavier.
Interpreting BMI calculations for adults
All adults age 20 and older are evaluated on the same BMI scale as follows:
• BMI below 18.5: Underweight
• BMI 18.5-24.9: Normal weight
• BMI 25.0-29.9: Overweight
• BMI 30 and above: Obese
Some researchers consider a BMI of 17 or below an indication of serious, health-threatening malnourishment. In developed countries, a BMI this low in the absence of disease is often an indication of
anorexia nervosa. At the other end of the scale, a BMI of 40 or greater indicates morbid obesity, which carries a very high risk of developing obesity-related diseases such as stroke, heart attack, and
type 2 diabetes.
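Collapsed into code, the adult scale above (together with the extremes just mentioned) reads as a simple lookup; this sketch is ours, not a standard function:

```python
# The adult BMI cut-offs listed above, as a small illustrative lookup.

def bmi_category(bmi):
    if bmi < 18.5:
        return "Underweight"
    elif bmi < 25.0:
        return "Normal weight"
    elif bmi < 30.0:
        return "Overweight"
    return "Obese"

print(bmi_category(17.0))  # Underweight (and at the malnourishment line of 17)
print(bmi_category(41.0))  # Obese (and above the morbid-obesity line of 40)
```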
Interpreting BMI calculations for children and teens
The formula for calculating the BMI of children ages 2-20 is the same as the formula used in calculating adult BMIs, but the results are interpreted differently. Interpretation of BMI for children
takes into consideration that the amount of body fat changes as children grow and that the amount of body fat is different in boys and girls of the same age and weight.
Instead of assigning a child to a specific weight category based on BMI alone, a child’s BMI is compared with those of other children of the same age and sex. Children are then assigned a percentile based on
their BMI. The percentile provides a comparison between their weight and that of other children of the same age and gender. For example, if a girl is in the 75th percentile for her age group, 75 of every
100 children who are her age weigh less than she does and 25 of every 100 weigh more than she does. The weight categories for children are:
• Below the 5th percentile: Underweight
• 5th percentile to less than the 85th percentile: Healthy weight
• 85th percentile to less than the 95th percentile: At risk of overweight
• 95th percentile and above: Overweight
Application of BMI information
The BMI was originally designed to observe groups of people. It is still used to spot trends, such as increasing weight in a particular age group over time. It is also a valuable tool for comparing
body mass among different ethnic or cultural groups, and can indicate to what degree populations are undernourished or overnourished.
When applied to individuals, the BMI is not a diagnostic tool. Although there is an established link between BMI and the prevalence of certain diseases such as type 2 diabetes, some cancers, and
cardiovascular disease, BMI alone is not intended to predict the likelihood of an individual developing these diseases. The National Heart, Lung, and Blood Institute recommends that the following
measures be used to assess the impact of weight on health:
• BMI
• Waist circumference (an alternate measure of body fat)
• Low HDL or “good” cholesterol
• High blood glucose (sugar)
• High triglycerides
• Family history of cardiovascular disease
• Low physical activity level
• Cigarette smoking
BMI is very accurate when describing characteristics of populations, but less accurate when applied to individuals. However, because it is inexpensive and easy to determine, BMI is widely used.
Calculating BMI requires a scale, a measuring rod, and the ability to do simple arithmetic or use a calculator. Potential limitations of BMI when applied to individuals are:
• BMI does not distinguish between fat and muscle. BMI tends to overestimate the degree of “fatness” among elite athletes in sports such as football, weightlifting, and bodybuilding. Since muscle
is denser than fat, many athletes who develop heavily muscled bodies are classified as overweight, even though they have a low percentage of body fat and are in top physical condition.
• For the same reason that it overestimates fatness in athletes, BMI tends to underestimate the degree of fatness in the elderly, as muscle and bone mass is lost and replaced by fat.
• BMI makes no distinction between body types. People with large frames (big boned) are held to the same standards as people with small frames.
• BMI weight classes have absolute cut-offs, while in many cases health risks change gradually along with changing BMIs. A person with a BMI of 24.9 is classified as normal weight, while one with a
BMI of 25.1 is overweight. In reality, their health risks may be quite similar.
• BMI does not take into consideration diseases or drugs that may cause significant water retention.
• BMI makes no distinction between genders, races, or ethnicities. Two people with the same BMI may have different health risks because of their gender or genetic heritage.
BMI is a comparative index and does not measure the amount of body fat directly. Other methods do give a direct measure of body fat, but these methods generally are expensive and require specialized
equipment and training to be performed accurately. Among them are measurement of skin fold thickness, underwater (hydrostatic) weighing, bioelectrical impedance, and dual-energy x-ray absorptiometry
(DXA). Combining BMI, waist circumference, family health history, and lifestyle analysis gives healthcare providers enough information to analyze health risks related to weight at minimal cost to the patient.
Childhood obesity is an increasing concern. Research shows that overweight children are more likely to become obese adults than normal weight children. Excess weight in childhood is also linked to
early development of type 2 diabetes, cardiovascular disease, and early onset of certain cancers. In addition, overweight or severely underweight children often pay a heavy social and emotional price
as objects of scorn or teasing.
Both the American Academy of Pediatrics (AAP) and the United States Centers for Disease Control and Prevention (CDC) recommend that the BMI of children over age two be reviewed at regular intervals
during pediatric visits. Parents of children whose BMI falls above the 85th percentile (at risk of being overweight and overweight categories) should seek information from their healthcare provider
about health risks related to a high BMI and guidance on how to moderate their child’s weight. Strenuous dieting is rarely advised for growing children, but healthcare providers can give guidance on
improving the child’s diet, eliminating empty calories (such as those found in soda and candy), and increasing the child’s activity level in order to burn more calories and improve fitness.
Tish Davidson, A.M.
{"url":"http://diet.com/g/body-mass-index","timestamp":"2014-04-17T12:55:08Z","content_type":null,"content_length":"44476","record_id":"<urn:uuid:a5d1d2f3-23ef-4f51-8f89-c74a882c92fd>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
Who Should Take this Short Course
Trigonometry for you
Trigonometry is useful. If you would like to learn a bit about trigonometry, or brush up on it, then read on. These notes are more of an introduction and guide than a full course. For a full course
you should take a class or at least read a book.
There are no grades and no tests for you to take, and no transcripts and no awards. There are a few exercises for you to work on, but only a few. The exercises are the most important aspect of a
trigonometry course, or any course in mathematics for that matter.
Your background
You should already be familiar with algebra and geometry before learning trigonometry. From algebra, you should be comfortable with manipulating algebraic expressions and solving equations. From
geometry, you should know about similar triangles, the Pythagorean theorem, and a few other things, but not a great deal.
How to learn trigonometry
Trigonometry is like other mathematics. Take your time. Write things down. Draw figures.
Work out the exercises. There aren’t many, so do them all. There are hints if you need them. There are short answers given, too, so you can check to see that you did it right. But remember, the
answers are not the goal of doing the exercises. The reason you’re doing the exercises is to learn trigonometry. Knowing how to get the answer is your goal. | {"url":"http://www.clarku.edu/~djoyce/trig/who.html","timestamp":"2014-04-20T10:52:57Z","content_type":null,"content_length":"2099","record_id":"<urn:uuid:4bbf07dc-2868-4400-b32a-fff9fe774077>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00516-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using math to speed up school buses
Credit: Carmine Savarese
(PhysOrg.com) -- Optimizing school bus routes is a lot more complicated than one might think. The International School of Geneva handed their problem over to a group of EPFL mathematicians.
“Our student population is increasing rapidly,” observes Michel Chinal, Director General of the International School of Geneva. “And the rising number of parents picking up and dropping off their
children is creating traffic problems in the village of Founex, just outside Geneva. The bus service offered by the school is too slow. Parents often say that they would like to sign their children
up, but the bus ride is too long.” The buses pick up students in an area bounded by Morges, Geneva and neighboring France. So how can they improve the routes of 11 different buses carrying a total of
283 students to and from school? That’s the problem that was given to the mathematicians in EPFL’s Discrete Optimization Group.
EPFL chemist Rainer Beck, whose child attends the school, offered to optimize the service during a meeting of the parents’ association. He asked his mathematician colleague Friedrich Eisenbrand to
tackle the problem. “Coming up with a simple arithmetic algorithm is not difficult. But that’s not an efficient approach -- due to the enormous number of possible itineraries, the calculations are
painfully slow. We needed to develop an algorithm that quickly rejected most routes, so that the computation could be completed before the end of the Universe,” explains Eisenbrand. With the
assistance of his PhD student Adrian Bock, the mathematician came up with a solution for this complex problem. Using a few clever techniques, the calculations only take half a day to complete.
The researchers modeled student and parent satisfaction using specific parameters, such as “regret” (also called “opportunity loss”), a term used in decision theory. For this case, the regret was the
difference between the ideal direct route in a car and the route taken by the bus. This parameter enabled the mathematicians to determine the threshold that would convince more students to take
the bus. Once the calculations were finished, the gain was impressive: the largest discrepancies between the bus and car routes were cut by 25%.
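The regret measure itself is easy to illustrate. In this sketch every name and travel time is invented; the actual EPFL model is of course far more elaborate:

```python
# Hedged sketch of "regret" as described above: for each student,
# regret = (time on the bus) - (time of a direct car trip).
# All names and minutes below are made-up sample data.

direct_car_minutes = {"ana": 12, "ben": 18, "chloe": 25}
bus_minutes        = {"ana": 30, "ben": 27, "chloe": 55}

regret = {s: bus_minutes[s] - direct_car_minutes[s] for s in direct_car_minutes}
worst = max(regret.values())

print(regret)                  # per-student opportunity loss, in minutes
print("worst regret:", worst)

# An optimizer would search over candidate routes to shrink the largest
# regrets, e.g. rejecting any route whose worst regret exceeds a threshold.
threshold = 25
acceptable = worst <= threshold
print("route acceptable:", acceptable)
```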
Optimization is a technique that can be taken well beyond the problem of ferrying kids back and forth from school. The mathematicians are collaborating not only with world leaders in the
telecommunications and airline industries to improve communications devices, but also with insurance companies to streamline their lengthy computations. Thus, in everyday life, as soon as we tap into
a network, such as the Internet, we are benefiting from all the optimization work that is hidden behind it.
In addition to its evident economic advantages, this research can also help meet objectives for reducing environmental impact. “Our school is seriously concerned with pollution, and we are trying to
find responsible solutions,” adds Chinal.
More information: Decision theory: en.wikipedia.org/wiki/Regret_%28decision_theory%29
What method is used here? There was one special problem we had where we had to find the shortest path for a garbage truck to pass every street in a compound. It was sort of a graph theory
problem, and the Chinese postman algorithm is what we used. Wonder what algorithm they have here. Interesting! | {"url":"http://phys.org/news/2011-06-math-school-buses.html","timestamp":"2014-04-21T02:24:59Z","content_type":null,"content_length":"68459","record_id":"<urn:uuid:3d1fc02c-ff39-4d5c-a0d1-e299da87d584>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
X-Ray Resources on the Web
X-Ray Reference Data
References to measured photoabsorption cross-sections compiled by John Hubbell at NIST.
Theoretical Form Factor, Attenuation, and Scattering Tabulation for Z = 1-92 from E = (1-10 eV) to (0.4-1.0 MeV). by C. T. Chantler (J. Phys. Chem. Ref. Data 24, 71 (1995)).
By J. H. Hubbell and S. M. Seltzer, NIST. Tables and graphs of the photon mass attenuation coefficients and the mass energy-absorption coefficients from 1 keV to 20 MeV are presented for all of
the elements (Z = 1 to 92) and for 48 compounds and mixtures of radiological interest.
A web database which can be used to calculate photon cross sections for scattering, photoelectric absorption and pair production, as well as total attenuation coefficients, for any element,
compound or mixture (Z=1-100), at energies from 1 keV to 100 GeV.
Theoretical calculations of the forward atomic scattering factors, elastic scattering cross sections, and angle dependence of the scattering factors, for photon energies up to 10 MeV. by Lynn
Kissel and Paul Bergstrom at LLNL.
Presently, this DABAX server provides data for photon-atomic scattering, photon-atomic scattering factors (dispersive and non-dispersive parts) from different authors. It also provides other
atomic constants (x-ray emission and absorption energies, atomic weights, etc...). Several files for XAFS are also provided.
Adam Hitchcock's database and bibliography at McMaster University.
X-Ray Programs
Multilayer optical properties: modeling and curve-fitting IDL package written by David Windt.
Web based calculations of scattering factors for dynamical x-ray diffraction and simulations of dynamical x-ray diffraction profiles from strained crystals and multilayers. Authored by Sergey
Stepanov at the Advanced Photon Source.
IDL routines which calculate the complex index of refraction for a material from the atomic scattering factors. The output of these routines may be then used in the calculation of other
quantities such as the transmission of a filter, etc. A routine to calculate reflectivity is also available. Written by Chris Jacobsen.
XOP is a widget based interface which drives different programs that calculates the synchrotron radiation source spectra (bending magnet, wigglers and undulators) and the reflection and
transmission characteristics of optical elements as: mirror, filters, flat crystals, bent perfect crystals, and multilayers. XOP is the result of a collaboration work between Manuel Sanchez del
Rio (ESRF) and Roger J. Dejus (APS). | {"url":"http://henke.lbl.gov/optical_constants/web.html","timestamp":"2014-04-19T01:47:55Z","content_type":null,"content_length":"4539","record_id":"<urn:uuid:eadb9dd7-bdff-4df4-8374-b85ccde382d0>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
(-3w^3 x^-6)^3 Write your answer using only positive exponents.
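For reference (this check is ours, not part of the original thread): each factor is cubed, giving (-3)^3 · w^9 · x^-18 = -27w^9/x^18. A numeric spot-check:

```python
# Not from the original thread: a numeric spot-check that
# (-3*w**3*x**-6)**3 simplifies to -27*w**9 / x**18, the
# positive-exponent form asked for.

def original(w, x):
    return (-3 * w**3 * x**-6) ** 3

def simplified(w, x):
    return -27 * w**9 / x**18

for w, x in [(1.5, 2.0), (0.7, 1.3), (2.0, 0.5)]:
    assert abs(original(w, x) - simplified(w, x)) < 1e-9 * abs(simplified(w, x))
print("checks out: (-3w^3 x^-6)^3 = -27 w^9 / x^18")
```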
{"url":"http://openstudy.com/updates/52d5f81de4b0274b88c56370","timestamp":"2014-04-17T12:52:38Z","content_type":null,"content_length":"78242","record_id":"<urn:uuid:4d29d730-2b98-4939-b63a-b8946c6ed825>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
Gear Inches
Over a century since the death of the penny-farthing we still quantify bicycle gearing in reference to a high wheel bicycle. Like the terms horsepower and album, gear inches quantifies today’s
technology in terms of the past, left over from when safety bicycles with equal sized wheels took over the market towards the end of the 19th century.
High wheelers are the first and most efficient of fixed gears, with the crank arms directly attached to the axle and no drivetrain to speak of yielding a 1:1 ratio of wheel to crankarm revolutions.
Given the design, it was the size of the wheel that determined and described the relative speed and pedaling effort of each bike. The larger the wheel, the harder to pedal and the faster a high
wheeler can ultimately go. With chain driven bicycles the gear ratio (front chainring : rear cog, ex. 46:16 or 3:1) represents how many times the rear wheel turns for each crank revolution, with a
3:1 ratio meaning the wheel revolves three times for each turn of the pedals. As bikes with smaller wheels and chain drives appeared and gained popularity, with the rear wheel turning at a speed
different than that of the cranks, it was necessary to describe them in the familiar terms of the high wheeler.
Gear inches is an expression of bicycle gearing equivalent to the diameter of a high wheel—one crank revolution of a modern bicycle geared at 70 gear inches moves the bicycle forward the same
distance as a penny-farthing with a 70” wheel. Note that gear inches is not an expression of the distance forward a bicycle will travel in one crank revolution but merely a measure used to equate
modern bicycles to high wheelers. Simply multiply gear inches by a factor of π to calculate the inches of rollout or “development.”
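As a numeric sketch of these calculations (the wheel size and gearing below are made-up sample values):

```python
# Illustrative sample numbers: a 27-inch-diameter wheel driven by a
# 46-tooth chainring and a 16-tooth cog.
import math

def gear_inches(wheel_diameter_in, chainring_teeth, cog_teeth):
    return wheel_diameter_in * (chainring_teeth / cog_teeth)

def development_inches(gi):
    # rollout per crank revolution = gear inches * pi
    return gi * math.pi

gi = gear_inches(27, 46, 16)
print(round(gi, 3))                      # 77.625 gear inches
print(round(development_inches(gi), 1))  # about 243.9 inches of forward travel
```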
Gear Inches = Wheel Diameter in Inches X (Number of Front Chainring Teeth / Number of Rear Sprocket Teeth) | {"url":"http://www.urbanvelo.org/issue37/p72-73.html","timestamp":"2014-04-19T12:31:24Z","content_type":null,"content_length":"4822","record_id":"<urn:uuid:0a7472bc-7701-46ff-8a38-d3f9c24d5d3b>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stephen Cole Kleene
Stephen Cole Kleene, (born Jan. 5, 1909, Hartford, Conn., U.S.—died Jan. 25, 1994, Madison, Wis.), American mathematician and logician whose work on recursion theory helped lay the foundations of
theoretical computer science.
Kleene was educated at Amherst College (A.B., 1930) and earned a Ph.D. in mathematics at Princeton University in 1934. After teaching briefly at Princeton, he joined the University of Wisconsin at
Madison as an instructor in 1935 and became a full professor there in 1948. He retired in 1979.
Kleene’s research was devoted to the theory of algorithms and recursive functions (i.e., functions defined in a finite sequence of combinatorial steps). Kleene, together with Alonzo Church, Kurt
Gödel, Alan Turing, and others, developed the field of recursion theory, which made it possible to prove whether certain classes of mathematical problems are solvable or unsolvable. Recursion theory
in turn led to the theory of computable functions, which governs those functions that can be calculated by a digital computer. Kleene was the author of Introduction to Metamathematics (1952) and
Mathematical Logic (1967). | {"url":"http://www.britannica.com/print/topic/319950","timestamp":"2014-04-16T12:11:01Z","content_type":null,"content_length":"9003","record_id":"<urn:uuid:e77c0dc8-f9cd-410b-996a-02de3cc63f74>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00150-ip-10-147-4-33.ec2.internal.warc.gz"} |
Abstracts - electronically available articles
Updated 2011-09-03
Michael Johnson, Robert Rosebrugh and R.J. Wood
This article extends the ``lens'' concept for view updating in Computer Science beyond the categories of sets and ordered sets. It is first shown that a constant complement view updating strategy
also corresponds to a lens for a categorical database model. A variation on the lens concept called c-lens is introduced, and shown to correspond to the categorical notion of Grothendieck
opfibration. This variant guarantees a universal solution to the view update problem for functorial update processes.
Mathematical Structures in Computer Science, Vol. 22, 2012, 25-42. Available in pdf.
F. Marmolejo, R. Rosebrugh and R. J. Wood
In 1978, Street and Walters defined a locally small category K to be totally cocomplete if its Yoneda functor Y has a left adjoint X. Such a K is totally distributive if X has a left adjoint W. Small
powers of the category of small sets are totally distributive, as are certain sheaf categories. A locally small category K is small cocomplete if it is a P-algebra, where P is the small-colimit
completion monad on Cat. In 2007, Day and Lack showed that P lifts to R-algebras, where R is the small-limit completion monad on Cat. It follows that there is a distributive law RP -> PR and we say
that K is completely distributive if K is a PR-algebra, meaning that K is small cocomplete, small complete, and the P structure preserves small limits. Totally distributive implies completely
distributive. We show that there is a further supply of totally distributive categories provided by categories of interpolative bimodules between small taxons as introduced by Koslowski in 1997.
Available in pdf.
Michael Johnson, Robert Rosebrugh and R.J. Wood
The correspondence between view update translations and views with a constant complement reappears more generally as the correspondence between update strategies and meet complements in the order
based setting of S. Hegner. We show that these two theories of database view updatability are linked by the notion of lens which is an algebra for a monad. We generalize lenses from the category of
sets to consider them in categories with finite products, in particular the category of ordered sets.
Journal of Universal Computer Science, Vol. 16, 2010, 729 - 748.
Available in pdf .
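As a toy illustration of the lens idea in these abstracts (ours; the papers work with algebras for a monad in general categories, not Python dicts):

```python
# Toy lens in the view-update sense: a view "get" paired with an update
# translation "put" obeying the usual round-trip laws, with everything
# outside the view (the "complement") held constant. Purely illustrative.

def get(record):
    # the view: project out a single field
    return record["name"]

def put(record, new_view):
    # translate a view update back to the source; the complement
    # (all other fields) is kept constant
    updated = dict(record)
    updated["name"] = new_view
    return updated

r = {"name": "alice", "born": 1980}

assert get(put(r, "bob")) == "bob"  # PutGet: read back what was written
assert put(r, get(r)) == r          # GetPut: writing the current view is a no-op
print("lens round-trip laws hold on this example")
```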
F. Marmolejo, R. Rosebrugh and R. J. Wood
The 2-category of constructively completely distributive lattices is shown to be bidual to a 2-category of generalized orders that admits a monadic schizophrenic object biadjunction over the
2-category of ordered sets.
Theory and Applications of Categories, Vol. 22, 2009, pp 1-23
Available in dvi and ps and pdf .
M. Johnson and R. Rosebrugh
LNCS, 5140, 232-237, 2008.
Available in pdf .
M. Johnson and R. Rosebrugh
LNCS, 5140, 238-252, 2008.
Available in pdf .
Michael Johnson and Robert Rosebrugh.
Invited book chapter.
Available in pdf .
M. Johnson and R. Rosebrugh
Maintainability and modifiability of information system software can be enhanced by the provision of comprehensive support for views, since view support allows application programs to continue to
operate unchanged when the underlying information system is modified. Supporting views depends upon a solution to the view update problem. This paper presents a new treatment of view updates for
formally specified semantic data models based on the category theoretic sketch data model. The sketch data model has been the basis of a number of successful major information system consultancies.
We define view updates by a universal property in models of the formal specification, and explain why this indeed gives a complete and correct treatment of view updatability, including a solution to
the view update problem. However, a definition of updatability which is based on models causes some inconvenience in applications, so we prove that in a variety of circumstances updatability is
guaranteed independently of the current model. This is done first with a very general criterion, and then for some specific cases relevant to applications. We include some detail about the sketch
data model, noting that it involves extensions of algebraic data specification techniques. This has appeared as:
Theoretical Computer Science 388 (2007), 109–129.
Available in dvi and pdf .
Generic commutative separable algebras and cospans of graphs
R. Rosebrugh, N. Sabadini and R. F. C. Walters
We show that the generic symmetric monoidal category with a commutative separable algebra which has a $\Sigma$-family of actions is the category of cospans of finite $\Sigma$-labelled graphs
restricted to finite sets as objects, thus providing a syntax for automata on the alphabet $\Sigma$. We use this result to produce semantic functors for $\Sigma$-automata. This has appeared as:
Theory and Applications of Categories, Vol. 15, 2005, pp 164-177
Available in dvi and ps and pdf .
Split structures
R. Rosebrugh and R.J. Wood
In the early 1990's the authors proved that the full subcategory of `sup-lattices' determined by the constructively completely distributive (CCD) lattices is equivalent to the idempotent splitting
completion of the bicategory of sets and relations. Having many corollaries, this was an extremely useful result. Moreover, as the authors soon suspected, it specializes a much more general result.
Let D be a monad on a category C in which idempotents split. Write kar(C_D) for the idempotent splitting completion of the Kleisli category. Write spl(C^D) for the category whose objects are pairs
((L,s),t), where (L,s) is an object of the Eilenberg-Moore category for D, and t is a homomorphism that splits s, with spl(C^D)(((L,s),t),((L',s'),t')) = C^D((L,s),(L',s')).
The main result is that kar(C_D) is isomorphic to spl(C^D). We also show how this implies the CCD lattice characterization theorem and consider a more general context. This has appeared as:
Theory and Applications of Categories, Vol. 13, 2004, pp 172-183
Available in dvi and ps and pdf .
R. Rosebrugh, N. Sabadini and R.F.C. Walters
The context of this article is the program to study the bicategory of spans of graphs as an algebra of processes, with applications to concurrency theory. The objective here is to study functorial
aspects of reachability, minimization and minimal realization. The compositionality of minimization has application to model-checking. Mathematical Structures in Computer Science, 14(2004), 685-714.
Available in dvi , ps and pdf .
M. Johnson and R. Rosebrugh
Partial information is common in real-world databases. Yet the theoretical foundations of data models are not designed to support references to missing data (often termed nulls). Instead, we usually
analyse a clean data model based on assumptions about complete information, and later retrofit support for nulls. The sketch data model is a recently developed approach to database specification
based on category theory. The sketch data model is general enough to support references to missing information within itself (rather than by retrofitting). In this paper we explore three approaches
to incorporating partial information in the sketch data model. The approaches, while fundamentally different, are closely related, and we show that under certain fairly strong hypotheses they are
Morita equivalent (that is they have the same categories of models, up to equivalence). Despite this equivalence, the query languages arising from the three approaches are subtly different, and we
explore some of these differences. Has appeared as ENTCS, Volume 78, 1-18, 2003.
Available in pdf .
R. Rosebrugh and R. J. Wood
For the 2-monad $((-)^2,I,C)$ on CAT, with unit $I$ described by identities and multiplication $C$ described by composition, we show that a functor $F : {\cal K}^2 \rightarrow \cal K$ satisfying $FI_{\cal K} = 1_{\cal K}$ admits a unique, normal, pseudo-algebra structure for $(-)^2$ if and only if there is a mere natural isomorphism $F F^2 \rightarrow F C_{\cal K}$. We show that when this is the case the set of all natural transformations $F F^2 \rightarrow F C_{\cal K}$ forms a commutative monoid isomorphic to the centre of $\cal K$. This has appeared as:
Theory and Applications of Categories, Vol. 10, 2002, pp 134-147
Available in dvi and ps and pdf .
R. Rosebrugh and R. J. Wood
This article shows that the distributive laws of Beck in the bicategory of sets and matrices, wherein monads are categories, determine strict factorization systems on their composite monads.
Conversely, it is shown that strict factorization systems on categories give rise to distributive laws. Moreover, these processes are shown to be mutually inverse in a precise sense. Strict
factorization systems are shown to be the strict algebras for the 2-monad (-)^2 on the 2-category of categories. Further, an extension of the distributive law concept provides a correspondence with
the classical orthogonal factorization systems. Has appeared in Journal of Pure and Applied Algebra 175(2002), 327--353 (Kelly Volume).
Available in dvi and pdf .
M. Johnson and R. Rosebrugh.
ENTCS, Volume 61, 6, 1-13, 2002.
[ ps.gz (~50k) | ps (~130k) ] © Elsevier Science
Available in ps and pdf .
M. Johnson and R. Rosebrugh.
Proceedings of the Tenth OOPSLA Workshop on Behavioral Semantics, Tampa, Florida, October 2001, 121-132.
M. Johnson and R. Rosebrugh.
Has appeared as Proceedings of the IEEE Conference on Software Maintenance, 32--39, 2001.
[ ps.gz (~73k) | ps (~165k) ] © IEEE
Available in ps and pdf .
M. Johnson and R. Rosebrugh.
The authors have developed a new approach to database interoperability using the sketch data model. An important question is: What are the algorithms that support updates in the sketch data model?
The question arises since the sketch data model uses EA-sketches to specify data structures, and these include constraint and other information not normally supported by relational database systems.
In this paper we answer the question by providing a formal definition of insert update together with an algorithm which provably achieves updates. The algorithm treats data and constraints on an
equal categorical footing. Further exactness properties (limits and colimits) can aid specification, and we provide algorithms for updates of EA sketched databases with finite limits. The sketch data
model is being used in industry for designing interoperations for computer supported cooperative work and CASE tools are under development. In press for CSCWD2001, the Fifth International Conference
on Computer Supported Cooperative Work in Design, London, Ontario, 2001.
Available in pdf .
M. Johnson and R. Rosebrugh.
Information system software productivity can be increased by improving the maintainability and modifiability of the software produced. This can be achieved by the provision of support for views.
Supporting views depends on a solution to the view update problem. This paper presents a new treatment of view updates. The formal specification technique we use is based on category theory and has
been the basis of a number of successful major information system consultancies. We define view updates by a universal property in a subcategory of models of the formal specification, and explain why
this indeed gives a comprehensive treatment of view updatability, including a solution to the view update problem. A definition of updatability which is based on models causes inconvenience in
applications, so we prove that in a variety of circumstances updatability is guaranteed independently of the current model. The solution to the view update problem can be seen as requiring the
existence of an initial model for a specification. LNCS, 2021, 534-549, 2001.
Available in pdf .
C.N.G. Dampney, M. Johnson and R. Rosebrugh.
This paper describes the sketch data model and investigates the view update problem (VUP) in the sketch data model paradigm. It proposes an approach to the VUP, and presents a range of examples to
illustrate the scope of the proposed technique. We define under what circumstances a view update can be propagated to the underlying database. Unlike many previously proposed approaches the
definition is succinct and consistent, without ad hoc exceptions, and the propagatable updates form a broad class. The examples demonstrate that under a range of circumstances a view schema can be
shown to have propagatable views in all states, and thus state-independence can frequently be recovered. Proceedings of the twelfth Australasian Database Conference ADC2001, 29--36, IEEE Press, 2001.
Available in pdf .
M. Johnson and R. Rosebrugh.
Computer supported cooperative work (CSCW) depends more and more upon database interoperability. The design of interbusiness CSCW when the businesses are already operating independent systems depends
either upon effective reverse engineering, or upon sufficiently rich semantic models and good database management system support for logical data independence. This paper takes the second approach
presenting a rich semantic data model that the authors have been developing and have used successfully in a number of major consultancies, and a new approach to logical data independence and view
updatability based on that model. We show how these approaches support database interoperability for business-to-business transactions, and, for CSCW within an organisation, how they support
federated databases. Proceedings of CSCW2000, the Fourth International Conference on Computer Supported Collaborative Work, IEEE Hong Kong, 161-166, 2000.
Available in ps and pdf.
F. Marmolejo, R. Rosebrugh and R. J. Wood
We pursue distributive laws between monads, particularly in the context of KZ-doctrines, and show that a very basic distributive law has (constructively) completely distributive lattices for its
algebras. Moreover, the resulting monad is shown to be also the double dualization monad (with respect to the subobject classifier) on ordered sets. Journal of Pure and Applied Algebra 168(2002),
209-226 (Coimbra Volume).
Available in dvi and pdf .
R. Rosebrugh and R. J. Wood
We extend the concept of constructive complete distributivity so as to make it applicable to ordered sets admitting merely {\em bounded} suprema. The KZ-doctrine for bounded suprema is of some
independent interest and a few results about it are given. The 2-category of ordered sets admitting bounded suprema over which non-empty infima distribute is shown to be bi-equivalent to a 2-category
defined in terms of idempotent relations. As a corollary we obtain a simple construction of the non-negative reals. Appeared in Applied Categorical Structures, 9(2001), pp 437-456.
Available in dvi and ps and pdf .
M. Johnson, R. Rosebrugh and R. J. Wood
Entity-Relationship-Attribute ideas are commonly used to specify and design information systems. They use a graphical technique for displaying the objects of the system and relationships among them.
The design process can be enhanced by specifying constraints of the system and the natural environment for these is the categorical notion of sketch. Here we argue that the finite-limit, finite-sum
sketches with a terminal node are the appropriate class and call them EA sketches. A model for an EA sketch in a lextensive category is a `snapshot' of a database with values in that category. The
category of models of an EA sketch is an object of models of the sketch in a 2-category of lextensive categories. Moreover, modelling the {\em same} sketch in certain objects in other 2-categories
defines both the query language for the database and the updates (the dynamics) for the database. This article has appeared in Theory and Applications of Categories 10(2002), 94-112.
Available in dvi and ps and pdf .
R. Rosebrugh, N. Sabadini and R.F.C. Walters
The context of this article is the program to develop monoidal bicategories with a feedback operation as an algebra of processes, with applications to concurrency theory. The objective here is to
study reachability, minimization and minimal realization in these bicategories. In this setting the automata are 1-cells in contrast with previous studies where they appeared as objects. As a
consequence we are able to study the relation of minimization and minimal realization to serial composition of automata using (co)lax (co)monads. We are led to define suitable behaviour categories
and prove minimal realization theorems which extend classical results. Mathematical Structures in Computer Science, 8(1998), 93-116.
Available in dvi and ps and pdf .
M. Fleming, R. Gunther and R. Rosebrugh
We describe a program which facilitates storage and manipulation of finitely-presented categories and finite-set valued functors. It allows storage, editing and recall of finitely-presented
categories and functors. Several tools for testing properties of objects and arrows, and the computation of right and left Kan extensions, are included. The program is written in ANSI C and is
menu-based. Use of the program requires a basic knowledge of category theory. Journal of Symbolic Computation 35(2003), 127-135.
Available in dvi and pdf .
M. Fleming, R. Gunther and R. Rosebrugh
A guide to use of a C program which allows storage and manipulation of finitely presented categories and functors, including Kan extensions of finite-set valued functors.
Available in dvi and pdf files, and here is an abridged version
M. Fleming, R. Gunther and R. Rosebrugh
A technical manual for users of the program described in the preceding abstract.
Available in dvi and pdf .
R. Rosebrugh and R. J. Wood
For an adjoint string V -| W -| X -| Y : B --> C, with Y fully faithful, it is frequently, but not always, the case that the composite VY underlies an idempotent monad. When it does, we call the
string distributive. We also study shorter and longer `distributive' adjoint strings and how to generate them. These provide a new construction of the simplicial 2-category, Delta. This article has
appeared in Theory and Applications of Categories 1(1995), 119-145.
Available in dvi and pdf .
R. Rosebrugh and R. J. Wood
If a category B with Yoneda embedding Y : B ---> CAT(B^op,set) has an adjoint string, U -| V -| W -| X -| Y, then B is equivalent to set. This paper has appeared in Proceedings of the American
Mathematical Society 122(1994), 409-413.
Available in dvi and pdf .
R. Rosebrugh and R.J. Wood
A description of relational databases in categorical terminology given here has as intended application the study of database dynamics, in particular we view (i) updates as database objects in a
suitable category indexed by a topos; (ii) L-fuzzy databases as database objects in sheaves. Indexed categories are constructed to model the databases on a fixed family of domains and also all
databases for a varying family of domains. Further, we show that the process of constructing the relational completion of a relational database is a monad in a 2-category of functors. This paper has
appeared in the Proceedings of the 1991 Category Theory Meeting, published by the AMS.
Here are the dvi and pdf files.
R. Rosebrugh and R. J. Wood
A complete lattice L is constructively completely distributive, ccd, when the sup arrow from down-closed subobjects of L to L has a left adjoint. The Karoubian envelope of
the (bi-)category of relations is equivalent to the (bi-)category of ccd lattices and sup-preserving arrows. The equivalence restricts to an equivalence between ideals and ``totally algebraic''
lattices. Both equivalences have left exact versions. As applications we characterize projective sup lattices and recover a known characterization of projective frames. Also, the known
characterization of nuclear sup lattices in set as completely distributive lattices is extended to an equivalence with ccd lattices in a topos. This paper has appeared in Applied Categorical
Structures 2(1994), 119-144.
Here are the dvi and pdf files.
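In symbols (notation assumed here for illustration: $\mathcal{D}L$ for the lattice of down-closed subobjects of $L$), the ccd condition defined in the abstract above can be displayed as:

```latex
L \text{ is ccd} \quad\iff\quad \textstyle\bigvee \colon \mathcal{D}L \to L \ \text{has a left adjoint.}
```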
R. Rosebrugh and R.J. Wood
A complete lattice L is constructively completely distributive, (CCD)(L), if the sup map defined on down closed subobjects has a left adjoint. We characterize preservation of this property by left
exact functors between toposes using a ``logical comparison transformation''. The characterization is applied to (direct images of) geometric morphisms to show that local homeomorphisms (in
particular, product functors) preserve (CCD) objects, while preserving (CCD) objects implies openness. This paper has appeared in the Canadian Mathematical Bulletin 35(1992), 537-547.
Here are the dvi and pdf files.
R. Rosebrugh and R.J. Wood
A complete lattice L is constructively completely distributive, (CCD)(L), if the sup map defined on down-closed subobjects has a left adjoint. It was known that in boolean toposes (CCD)(L) is
equivalent to (CCD)(L\op). We show here that the latter property for all L (sufficiently, for Omega) characterizes boolean toposes. This paper has appeared in the Mathematical Proceedings of the
Cambridge Philosophical Society, 110(1991) pp. 245-249.
Here are the dvi and pdf files. | {"url":"http://www.mta.ca/~rrosebru/abstracts.html","timestamp":"2014-04-19T04:20:03Z","content_type":null,"content_length":"31431","record_id":"<urn:uuid:d96a348d-4786-490b-9770-aa47a11ba0f9>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00362-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sound is radiated from a fully or partially open tonehole because the air just outside the tonehole is disturbed by the vibrational motion of the ``open'' part of the tonehole volume. A woodwind tonehole may be considered as an isotropic source [4]. Given a source strength, the radiation pressure at a distance from the tonehole follows from (14) and (15), and the pressure radiated from a woodwind tonehole can therefore be computed as in (16). Note that the frequency-dependent term in (16) corresponds to the time needed for the pressure wave to reach the ``listening point''. Thus, the radiated pressure at any distance from the tonehole can be computed by simply scaling and delaying the bore pressure wave. A digital tonehole model is obtained by formulating the digital domain version of (16) as in (17), using a delay line of fractional length. Equation (17) gives a good approximation at lower frequencies, but the accuracy decreases for higher frequencies. This is mainly because the WD tonehole model is based on a low-frequency approximation of the real acoustical behaviour of the tonehole. Moreover, we have assumed that the radiation is isotropic (i.e., the flow spreads out evenly in all directions). This assumption is valid for low frequencies, but for higher frequencies the effects of directivity need to be taken into account (such as described in [11]). Since the higher frequencies are relevant from a perceptual point of view, an extra filter (that compensates for the deviations described above) can be applied to the pressure calculated with eq. (17) in order to obtain a better aural result. In general, such a filter has a rather ``smooth'' high-pass amplitude response, and can be approximated with a lower-order digital filter.
Download wdth.pdf | {"url":"https://ccrma.stanford.edu/~jos/wdth/SOUND_RADIATION.html","timestamp":"2014-04-16T17:52:52Z","content_type":null,"content_length":"11367","record_id":"<urn:uuid:94d1ce29-688e-47f1-aabb-6bd0db08bbaf>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00293-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Look At Home Runs In Minute Maid Park
With the UW Club Baseball team, I had the fortune to play at a myriad of parks this last year. Many of them were, to say the least, not good. Aside from the poor infields and bumpy outfields that can
be expected playing at public parks, one of the things that a player must adjust to is the distance of the fence. At one park I played at, the fence was concave - the shortest point in right field
came about 30 feet to the left of the RF foul pole. In high school, a non-conference rival's field had the right field foul pole 240 feet from home plate.
Certainly, ballparks in the major leagues don't have dimensions that ridiculous. Still, some parks play very differently from others. One of these is Minute Maid Park. Famous for the Crawford Boxes
down the left field line and the hill in center field, Minute Maid Park's dimensions make it a difficult place for opposing outfielders (and home outfielders, for that matter). Not only do the
dimensions change how outfielders play, they also have drastic effects on the number of home runs hit.
The two points of interest mentioned above are what I decided to take a closer look at. Thanks to one of my favorite websites, people can see the trajectory and distance of every home run hit in the major leagues. Here is a plot of all 43 home runs hit at Minute Maid Park entering last night's game.
The blue dots in the picture are the landing points of the home runs.
First, let's examine the home runs hit to left field. At first glance, it looks like these are all cheapies. This plot, however, shows landing points, and because of the wall behind the Crawford
Boxes roughly 350 feet away from the plate, none of these HRs will look like they went farther despite hitting high enough off the wall that they would've traveled much farther. There have been 18
HRs hit into the Crawford Boxes so far this year. Here they are in chart form.
There are only 3 HRs here over 400 feet. Not a whole lot of bombs hit here. However, the most interesting thing to note here is the number of parks that a ball hit with that trajectory would be a
home run in. 11 of the 18 HRs would be out in every park. This isn't surprising, as the portion of the field we're examining is relatively close to the left field line, and a 370 foot homer down the
line is out in any stadium. What stands out in this list are the 4 home runs at the top. Between these 4 home runs, they would produce a total of 10 HRs out of 120 possible. That's 8.3%. Let's take a
look at these 4 home runs in a relatively neutral stadium - Atlanta's Turner Field.
Not only are those not HRs. Those aren't even close. Over 20 games, Minute Maid Park has added 4 HRs that wouldn't be a HR in the average ML stadium (or nearly any stadium, for that matter).
Considering these balls are so far from the fence, it's probably safe to assume that an average ML left fielder could turn these balls into outs. That means that LF in Minute Maid Park adds 16 HR
over a full season. According to linear weights, that's roughly 31 runs over 81 games.
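The extrapolation behind those last two sentences can be checked with quick arithmetic; the per-homer run value below is inferred from the article's own figures (31 runs / 16 HR is roughly 1.9), not stated in it:

```python
# 4 extra Crawford Boxes HRs over 20 home games, scaled to 81 home games.
extra_hr = 4 * 81 / 20              # 16.2 -> "adds 16 HR" over a full season

# Assumed linear-weights value of roughly 1.9 runs per HR.
runs = round(extra_hr) * 1.9        # 30.4 -> "roughly 31 runs"
print(round(extra_hr), round(runs))
```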
Of course, not all of Minute Maid Park is built so short. Center field in Houston is the deepest in the majors at 436 feet, and that's even before taking the hill into account. Much unlike left
field, center field tends to swallow up HRs and turn them into either doubles, triples, or outs. Here is a plot of Minute Maid park with 4 homers that are out in 27 out of 30 parks, but would stay in
at the Juice Box.
In this picture, no home runs were hit between the two black lines coming out from home plate. The red dots show each HR's trajectory, the blue dots show the landing spot, and the green dots show their
"Std. Distance", or their projected landing spot had they not hit elevated ground. Here are the 4 home runs in tabular form.
All of these home runs were farther than 414 feet in Std. Distance. These are the kind of hits that get deemed moonshots by the announcers. These are just examples. Across the league, there have been
198 home runs hit between the black lines in the above picture, and none of them have occurred at Minute Maid Park. That's 16% of the total number of home runs hit in the major leagues (1232 total),
and yet, we see 0 of them in Minute Maid. If we take out the 4 home runs that were added by the Crawford Boxes, Minute Maid would've allowed 39 home runs as of yesterday's games. However, if we would
expect 16% of the home runs to come from between the black lines, then that would mean that there are 6 would-be home runs missing. It seems, then, that at least so far this year, these two parts of
Minute Maid combined have actually suppressed 2 homers over 20 games.
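The 16% share and the "6 missing homers" estimate are straightforward arithmetic, sketched here:

```python
# League-wide, 198 of 1232 HRs were hit to the deep-CF sector.
share = 198 / 1232                     # about 0.16

# Minute Maid minus its 4 Crawford Boxes additions sits at 39 HR;
# at the league-wide share, about 6 of those "should" be deep-CF shots.
expected_deep_cf = round(share * 39)   # 6
net = 4 - expected_deep_cf             # -2: two homers suppressed on balance
print(round(share, 2), expected_deep_cf, net)
```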
Over the small sample size, it is likely that the home runs added by the Crawford Boxes and those suppressed by the deep center field will even out. Minute Maid is also getting slightly more home
runs than average in right field and left-center, to put its 2.15 HR/G rate slightly above the league average of 2.08, consistent with its park factors over the years. Minute Maid Park is one of
those quirky parks that, as a road team, certainly make you change the way you play as an outfielder, a hitter, and a pitcher.
No comments: | {"url":"http://balkingtraditionalism.blogspot.com/2009/05/look-at-home-runs-in-minute-maid-park.html","timestamp":"2014-04-20T10:46:01Z","content_type":null,"content_length":"45450","record_id":"<urn:uuid:9facf13e-df1e-43f9-8ed4-6aacdf014060>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
linear quadratic systems
October 28th 2010, 03:33 PM #1
Oct 2010
graph of f(x)=(x-2)^2-3
using slope of -4
write the equations of the lines that intersect the parabola at one point, two points, and no points.
please help me i don't understand what to do
Accidental double post.
To help in understanding the problem, you might graph the function. It is a parabola, opening upward, with vertex at (2, -3). Then draw several lines with slope -4. Some of them will cross through the parabola, intersecting it in two places. Others will pass below the parabola and not intersect it at all. Exactly one line will be tangent to it, at the point where the derivative of the quadratic function is -4.
Another way to do this problem, without using Calculus, is this. We can write any line with slope -4 as y= -4x+ b for some number b. That line will intersect the graph of $y= (x- 2)^2- 3$ where $(x-2)^2- 3= -4x+ b$. That is a quadratic equation and will have one, two, or zero solutions depending upon whether its "discriminant" is 0, positive, or negative respectively.
(The discriminant of the quadratic $ax^2+ bx+ c$ is $b^2- 4ac$.)
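Working the discriminant route through with the slope of -4 given in the problem (a worked sketch, with $b$ the unknown intercept):

```latex
(x-2)^2 - 3 = -4x + b
\;\Longrightarrow\; x^2 - 4x + 1 = -4x + b
\;\Longrightarrow\; x^2 + (1-b) = 0,
\qquad \text{discriminant} = -4(1)(1-b) = 4b - 4.
```

So $y=-4x+b$ meets the parabola in two points when $b>1$, is tangent (one point) when $b=1$, giving the line $y=-4x+1$, and misses it (no points) when $b<1$.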
Apr 2005 | {"url":"http://mathhelpforum.com/algebra/161355-linear-quadratic-systems.html","timestamp":"2014-04-18T01:28:17Z","content_type":null,"content_length":"42313","record_id":"<urn:uuid:7636f35b-23de-4aff-8845-2fcc921e097c>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
PEAC Seasonal Rainfall Outlook
Koror: Rainfall for April-May-June 2014 is expected to be AVERAGE OR ABOVE AVERAGE for Koror. There is a 30% chance that rainfall during the AMJ season will be less than 34.28 inches; a 35% chance
that rainfall will be between 34.28 inches and 42.1 inches; and a 35% chance that rainfall will be greater than 42.1 inches.
Yap: Rainfall for April-May-June 2014 is expected to be AVERAGE OR ABOVE AVERAGE for Yap. There is a 30% chance that rainfall during the AMJ season will be less than 21 inches; a 35% chance that
rainfall will be between 21 inches and 32.89 inches; and a 35% chance that rainfall will be greater than 32.89 inches.
Chuuk: Rainfall for April-May-June 2014 is expected to be ABOVE AVERAGE for Chuuk. There is a 25% chance that rainfall during the AMJ season will be less than 32.97 inches; a 35% chance that rainfall will be
between 32.97 inches and 39.15 inches; and a 40% chance that rainfall will be greater than 39.15 inches.
Pohnpei: Rainfall for April-May-June 2014 is expected to be ABOVE AVERAGE for Pohnpei. There is a 25% chance that rainfall during the AMJ season will be less than 49.71 inches; a 35% chance that rainfall
will be between 49.71 inches and 56.96 inches; and a 40% chance that rainfall will be greater than 56.96 inches.
Kosrae: Rainfall for April-May-June 2014 is expected to be ABOVE AVERAGE for Kosrae. There is a 25% chance that rainfall during the AMJ season will be less than 47.62 inches; a 35% chance that rainfall will
be between 47.62 inches and 51.87 inches; and a 40% chance that rainfall will be greater than 51.87 inches.
Kwajalein: Rainfall for April-May-June 2014 is expected to be ABOVE AVERAGE for Kwajalein. There is a 25% chance that rainfall during the AMJ season will be less than 15.41 inches; a 35% chance that rainfall
will be between 15.41 inches and 26.35 inches; and a 40% chance that rainfall will be greater than 26.35 inches.
Majuro: Rainfall for April-May-June 2014 is expected to be ABOVE AVERAGE for Majuro. There is a 25% chance that rainfall during the AMJ season will be less than 25.63 inches; a 35% chance that rainfall will
be between 25.63 inches and 34.51 inches; and a 40% chance that rainfall will be greater than 34.51 inches.
Guam: Rainfall for April-May-June 2014 is expected to be ABOVE AVERAGE for Guam. There is a 25% chance that rainfall during the AMJ season will be less than 13.05 inches; a 35% chance that rainfall will be
between 13.05 inches and 15.95 inches; and a 40% chance that rainfall will be greater than 15.95 inches.
Saipan: Rainfall for April-May-June 2014 is expected to be ABOVE AVERAGE for Saipan. There is a 25% chance that rainfall during the AMJ season will be less than 8.14 inches; a 35% chance that rainfall will
be between 8.14 inches and 11.06 inches; and a 40% chance that rainfall will be greater than 11.06 inches.
Pago Pago: Rainfall for April-May-June 2014 is expected to be AVERAGE OR BELOW AVERAGE for Pago Pago. There is a 35% chance that rainfall during the AMJ season will be less than 22.42 inches; a 35%
chance that rainfall will be between 22.42 inches and 33.53 inches; and a 30% chance that rainfall will be greater than 33.53 inches.
Lihue: Rainfall for April-May-June 2014 is expected to be AVERAGE OR ABOVE AVERAGE for Lihue. There is a 30% chance that rainfall during the AMJ season will be less than 4.74 inches; a 35% chance
that rainfall will be between 4.74 inches and 5.97 inches; and a 35% chance that rainfall will be greater than 5.97 inches.
Honolulu: Rainfall for April-May-June 2014 is expected to be AVERAGE OR ABOVE AVERAGE for Honolulu. There is a 30% chance that rainfall during the AMJ season will be less than 1.23 inches; a 35%
chance that rainfall will be between 1.23 inches and 1.77 inches; and a 35% chance that rainfall will be greater than 1.77 inches.
Kahului: Rainfall for April-May-June 2014 is expected to be AVERAGE OR ABOVE AVERAGE for Kahului. There is a 30% chance that rainfall during the AMJ season will be less than 1.25 inches; a 35% chance
that rainfall will be between 1.25 inches and 2.17 inches; and a 35% chance that rainfall will be greater than 2.17 inches.
Hilo: Rainfall for April-May-June 2014 is expected to be AVERAGE OR ABOVE AVERAGE for Hilo. There is a 30% chance that rainfall during the AMJ season will be less than 21.42 inches; a 35% chance
that rainfall will be between 21.42 inches and 29.01 inches; and a 35% chance that rainfall will be greater than 29.01 inches. | {"url":"http://www.prh.noaa.gov/peac/rainfall.php","timestamp":"2014-04-17T15:49:50Z","content_type":null,"content_length":"39132","record_id":"<urn:uuid:8ebfc2cc-5af3-498e-b04a-1c3b42a09e3d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00283-ip-10-147-4-33.ec2.internal.warc.gz"} |
Diophantine Equation
I was looking for some help solving the equation 15625x + 8404 = 1024y. Thank you.
You have $5^6x+2^2\cdot 11\cdot 191=2^{10}y \Longrightarrow 5^6x-2^{10}y=-2^2\cdot 11\cdot 191$. Since $(5^6,2^{10})=1$, the above equation has solutions in the integers, and as $5^6\cdot 313-2^{10}\cdot 4776=1$, we get that $5^6(313\,T)-2^{10}(4776\,T)=T$ for any integer $T$, and a solution is then $x=313\,T\,,\,\,y=4776\,T$. Well, now just input $T=-8404=-2^2\cdot 11\cdot
191$ and we're done. Tonio | {"url":"http://mathhelpforum.com/number-theory/127872-diophantine-equation-print.html","timestamp":"2014-04-20T23:55:07Z","content_type":null,"content_length":"5809","record_id":"<urn:uuid:7b2d8e50-cc6b-4c60-b89a-72c4a96c6a35>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00246-ip-10-147-4-33.ec2.internal.warc.gz"} |
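A quick numeric check of the Bézout identity and of a particular solution (the smallest non-negative x), as a sketch:

```python
# Bezout identity: 5^6 * 313 - 2^10 * 4776 = 1.
assert 15625 * 313 - 1024 * 4776 == 1

# Solve 15625*x + 8404 = 1024*y over the integers: reduce mod 1024.
x = (-8404 * pow(15625, -1, 1024)) % 1024   # smallest non-negative x
y = (15625 * x + 8404) // 1024
assert 15625 * x + 8404 == 1024 * y
print(x, y)  # 204 3121
```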
Posts by
Total # Posts: 31
Find the area of a segment formed by a chord 8" long in a circle with radius of 8".
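Since the chord here equals the radius, it subtends a 60° central angle; a quick numeric check of the segment area (a sketch):

```python
import math

r, chord = 8.0, 8.0
theta = 2 * math.asin(chord / (2 * r))            # central angle = pi/3 (60 deg)
area = 0.5 * r**2 * (theta - math.sin(theta))     # circular-segment formula
print(round(area, 2))  # 5.8 square inches
```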
How did number 3 work out "someone"? It says "LEAST LIKELY" not "MOST LIKELY"
Algebra I
A 15 foot flagpole is mounted on top of a school building. If the top of the flagpole forms a 31° angle with the ground 50 ft from the base of the building, about how tall is the school building? 28
feet <-My answer 15 feet 30 feet <-second choice 11 feet
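Taking the 31° angle as sighted to the top of the flagpole from 50 ft away, a quick check (it supports the 15-foot option rather than the marked one):

```python
import math

total = 50 * math.tan(math.radians(31))   # height to flagpole top, about 30 ft
building = total - 15                     # subtract the 15 ft flagpole
print(round(total), round(building))      # 30 15
```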
Typo: *your answer would be 13 either way...*
46 Absolute Value is the distance of an integer on a number line from the origin of 0. For example the absolute value could be |-13| or |13| and your answer would be 14 either way because each of
them are 13 spaces away from 0 on a numbe rline.
English 11
Who did Thomas Paine write Common Sense to? Revolutionary War soldiers. colonists in 1776, prior to the signing of the Declaration of Independence. (I chose this one) the British monarchy. women
seeking the right to vote
Number 3 is he switches from third-person point of view to a second-person point of view. Number 4 is all of the above Number 5 is The Enlightenment (I took the same test yesterday and got 100 on it)
What is the mass of 4.419 mol of potassium nitride?
Convert 3.09 g PCl5 to molecules im getting two differnt answers when I'm doing this kind of problems I got .015mol than 14.82mol which is correct?
where can I find a ghost song for my book report?
an "extended object" is in balance when the center of mass is located directly over the fulcrum. By using a test mass mT placed at various points xT on the object (as measured from some fixed
reference point), a linear relationship between xT and the fulcrum location...
Three balls are attached to a light rigid rod, as shown in the figure. A fulcrum is located at xf = 18.0 cm measured from the left end of the rod. Ball A has mass mA = 65 g and is located at xA = 3
cm. Ball B has mass mB = 12 g located at xB = 22 cm. Ball C has mass mC = 22 g ...
A pancake recipe calls for 4 cups of pancake mix for every 3 cups of milk. A biscuit recipe calls for 2 cups of biscuit mix for every 1 cup of milk. Which recipe has a greater ratio of mix to milk? Explain.
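Comparing the two mix-to-milk ratios directly (a minimal check):

```python
from fractions import Fraction

pancake = Fraction(4, 3)   # cups of mix per cup of milk
biscuit = Fraction(2, 1)
print(biscuit > pancake)   # True: the biscuit recipe has the greater ratio
```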
Thank you helper so much!!!!:)
Ok, the problem says: a Middle school has 125 6th graders, 150 7th graders, and 100 8th graders. Which statement is NOT true? a. the ratio of 6th graders to 7th graders is 5 to 6. b. the ratio of 8th
graders to 7th graders is 3:2. c. the ratio of 6th graders to students in all...
wait so which is it, a, b, c, or d
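A quick check of the two ratios that are stated in full above (statement c is cut off, so it is not checked):

```python
from fractions import Fraction

sixth, seventh, eighth = 125, 150, 100
assert Fraction(sixth, seventh) == Fraction(5, 6)    # statement a holds
assert Fraction(eighth, seventh) == Fraction(2, 3)   # 100:150 is 2:3,
assert Fraction(eighth, seventh) != Fraction(3, 2)   # so statement b fails
print("of the fully stated options, b is the one that is NOT true")
```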
125, 150, and 100 eighth graders; what's the ratio?
ENGLISH !
Lol. That's a question on my exam.
7th grade Pre-algebra (math honors)
Triangle ABC is similar to triangle FED. AB = 12, AC = 15, DE = 4, DF = 5. 1. Write three equal ratios to show corresponding sides are proportional. 2. Find the value of BC. 3. Find the value of EF.
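Assuming the vertices correspond in the order written (A with F, B with E, C with D), corresponding sides are AB-FE, BC-ED, CA-DF; a sketch of parts 2 and 3:

```python
AB, AC, DE, DF = 12, 15, 4, 5
k = AC / DF          # scale factor CA/DF = 3.0
BC = k * DE          # 12.0  (part 2)
EF = AB / k          # 4.0   (part 3)
print(BC, EF)        # 12.0 4.0
```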
how to gain your parents trust back
I told three bad lies and now my parents don't trust me any more. What should i do?
i think it is number 1
How do you get away from it
Are internet abbreviations like texting?
I think what Mark means is that sec(theta) = 5 and tan(theta) = 2*sqrt(6), which would make more sense :)
ugh omg i dont feel like doing this on my own
Can carbon monoxide leak (from a fireplace, barbeque, water heater)?
The free-body diagram in the drawing shows the forces that act on a thin rod. The three forces are drawn to scale and lie in the plane of the screen. Are these forces sufficient to keep the rod in
equilibrium, or are additional forces necessary? [figure: forces f1 and f3 on the rod] ...
The free-body diagram in the drawing shows the forces that act on a thin rod. The three forces are drawn to scale and lie in the plane of the screen. Are these forces sufficient to keep the rod in
equilibrium, or are additional forces necessary I assume this is what I just ans...
The tail of a vector is fixed to the origin of an x, y axis system. Originally the vector points along the +x axis. As time passes, the vector rotates counterclockwise. Describe how the sizes of the
x and y components of the vector compare to the size of the original vector fo...
The tail of a vector is fixed to the origin of an x, y axis system. Originally the vector points along the +x axis. As time passes, the
vector rotates counterclockwise. Describe how the sizes of the x and y components... | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=panda","timestamp":"2014-04-16T10:59:54Z","content_type":null,"content_length":"12405","record_id":"<urn:uuid:d87528fd-c35c-4546-a54f-ae06e88a91b9>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00365-ip-10-147-4-33.ec2.internal.warc.gz"} |
Writing inequality
March 7th 2007, 06:44 AM
Writing inequality
1. Sam must have an average of 70 or more in his summer course to obtain a grade of C. His first three test grades were 75, 63, and 68. Write an inequality representing the score that Sam must get
on the last test to get a C grade.
2. The cost for a long-distance telephone call is $0.36 for the first minute and $0.21 for each additional minute or portion thereof. Write an inequality representing the number of minutes a
person could talk without exceeding $3.
March 7th 2007, 07:22 AM
1. Sam must have an average of 70 or more in his summer course to obtain a grade of C. His first three test grades were 75, 63, and 68. Write an inequality representing the score that Sam must get
on the last test to get a C grade.
2. The cost for a long-distance telephone call is $0.36 for the first minute and $0.21 for each additional minute or portion thereof. Write an inequality representing the number of minutes a
person could talk without exceeding $3.
1. Sam must have an average of 70 or more in his summer course to obtain a grade of C. His first three test grades were 75, 63, and 68. Write an inequality representing the score that Sam must get
on the last test to get a C grade.
so you know the formula for average here would be (testscore1 + testscore2 + testscore3 + testscore4)/4
the scores for the first three are given; we have to find the last using an inequality. Since we want the average to be at least 70 to get a C, we have:
Let the final score Sam needs for a C be x
then (75 + 63 + 68 + x)/4 >= 70
=> 206 + x >= 280
so x >= 74
so Sam's final score has to be at least 74 to get a C (or above) according to the above inequality.
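A quick numeric check of the boundary case (just confirming the arithmetic in the inequality above):

```python
# With a final score of exactly 74, Sam's average is exactly the 70 needed for a C.
scores = [75, 63, 68]
final = 74
average = (sum(scores) + final) / 4
print(average)          # 70.0
assert (sum(scores) + 73) / 4 < 70   # one point lower falls short
```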
2. The cost for a long-distance telephone call is $0.36 for the first minute and $0.21 for each additional minute or portion thereof. Write an inequality representing the number of minutes a
person could talk without exceeding $3.
Let x be the additional minutes we can talk for to not exceed a $3 bill. So the total number of minutes we can talk without exceeding $3 is x+1 minutes.
we want 0.36 + 0.21x <= 3
=> 0.21x <= 2.64
=> x <=2.64/.21
=> x <= 12.57
since any portion of a minute is billed as a full minute, x must be a whole number of billed minutes, so x <= 12
so if y is the total number of minutes we can talk, y = x + 1
=> y <= 13, so the call can last at most 13 minutes without exceeding $3 | {"url":"http://mathhelpforum.com/algebra/12285-writing-inequality-print.html","timestamp":"2014-04-19T03:31:30Z","content_type":null,"content_length":"6086","record_id":"<urn:uuid:0a4a4394-dd1e-418c-82f8-9ae1f0846835>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Cotati Geometry Tutors
...Numbers finally come alive! Students' natural approach to this is often a fearful one, but with the right guidance and instruction, this can turn to interest and curiosity. At this stage, I
relate algebra back to our everyday lives as much as possible.
50 Subjects: including geometry, English, reading, GRE
I'm a retired engineer and math teacher with a love for teaching anyone who wants to learn. As an engineer, I regularly used all levels of math (from arithmetic through calculus), statistics, and
physics. I hold a California Single Subject Teaching Credential in math and physics (I taught high school math from pre-algebra to geometry).
26 Subjects: including geometry, reading, calculus, physics
...As a doctoral student in clinical psychology, I was hired by the university to teach study skills, test taking skills, and time management to incoming freshmen students. As part of this job, I
was trained in and provided materials for each of these topics. I often find, when working with my stu...
20 Subjects: including geometry, calculus, statistics, biology
...Many have had a debilitating fear of math, which I can personally relate to. As a high school and college student, mathematics was always my worst subject. So when my first son was a year old,
I went back to school after getting my degree to see if I could conquer it and discovered that I was good at it.
10 Subjects: including geometry, statistics, algebra 2, SAT math
...Find your instructor confusing? Wondering just what complex numbers are good for? Beginning to think these subjects are just too hard, that you will never "get it"? Don't worry, I can help.
37 Subjects: including geometry, chemistry, English, reading | {"url":"http://www.algebrahelp.com/Cotati_geometry_tutors.jsp","timestamp":"2014-04-17T21:27:49Z","content_type":null,"content_length":"24620","record_id":"<urn:uuid:0fc78537-4b25-43a7-878d-d39d3cfc4ea6>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00653-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Curriculum Foundations Workshop on Chemistry
By David Bressoud
Nine chemists met at Macalester College in November 2000 for the MAA Curriculum Foundations Workshop in Chemistry. Their charge was to provide advice for the planning and teaching of the mathematics
curriculum as it affects chemistry majors. Two of the participants (Craig and Engstrom) were members of the American Chemical Society’s (ACS) Committee on Professional Training, the committee that
sets the requirements for the ACS accredited major. Three mathematicians (myself, Tom Halverson from Macalester, and Roger Howe from Yale) were present to answer questions and probe for clarification.
With remarkable efficiency and unanimity, the chemists identified and then fleshed out six themes that they feel are essential for the mathematical preparation of chemistry majors. Quotes are taken
from the report cited at the end of this article.
Multivariable relationship: The mathematics requirement for most chemistry majors is at least two and often three semesters of calculus. Some students also take linear algebra or differential equations. Almost all problems in chemistry are multivariate. The most pressing concern that the chemists voiced was that their students see multivariable functions early and often so that they are comfortable with them. They noted that while a course in linear algebra is seldom required of chemistry majors, concepts of bases, orthogonality, and eigenvectors are important in chemistry and should be among the highest priorities.
Numerical Methods: These are at the heart of the mathematics used most frequently.
"Technology makes it possible to address old questions more quantitatively and more realistically than was possible in the past. The complexities of real chemical material can be approached more fully. [...] In general, solving these problems depends on multivariate analysis and numerical methods. Use of computers is assumed."
Visualization: Geometric visualization is one of the highest priorities for chemists. "Chemistry is highly visual. Synthetic chemistry ... depends on being able to visualize structures and atomic and molecular orbitals in three dimensions." The chemists deplored the fact that "geometry has been largely squeezed out of the secondary school curriculum. Little background in geometry helps explain why chemistry students have growing difficulty with the spatial relationships that are at the heart of much chemical thinking."
Scale and estimation: Chemists work across scales that range from subatomic to cosmic. Students need a working sense of orders of magnitude and the ability to do order-of-magnitude estimation.
Mathematical reasoning: The chemists wrote: "Students must be able to follow and apply algebraic arguments, that is, 'listen to the equations', if they are to understand the relationships between various mathematical expressions, adapt these expressions to particular applications, and see that most specific mathematical expressions can be recovered from a few fundamental relationships in a few steps. Logical, organized thinking and abstract reasoning are skills developed in mathematics courses that are essential for chemistry."
Learning how to reason mathematically requires writing mathematics. "Today's mathematicians and chemists agree on the value of having students write to learn mathematics and chemistry." The report explains that this fosters critical thinking skills and builds student confidence in using mathematics as an active language.
Data analysis: Few chemistry students take a course in statistics, but statistical inference runs throughout courses in analytical chemistry and, to a lesser extent, courses in physical chemistry. The
topics that chemists feel their students need include probability, combinatorics, distributions, uncertainty, confidence intervals, and propagation of error.
Calculus is still at the core of the mathematics that chemistry majors need, but the chemists pared the essential techniques down to integration and differentiation of polynomials, logarithms,
exponentials, and trigonometric functions, differentiation of inverse functions, and integration by parts. Beyond these techniques, what they considered most important are the ideas of calculus:
derivative as slope or rate of change, integral as area or accumulator, knowing what is held constant in a partial derivative, understanding the interplay of graphical, symbolic, and numerical
interpretations, being able to read and write calculus as a language for describing complex interactions. And they expressed a profound desire that students not come out of calculus thinking that the
variable has to be x and the function labeled f.
As in most science and technical majors, it is not possible to require more math classes. Chemists teach many of these essential mathematical topics on the fly within the relevant course. But the
chemists present wonderful opportunities for mathematicians. A course that combined three-dimensional visualization with linear algebra and drew on the rich set of examples within chemistry would
entice many of their students.
Norman C. Craig. 2001. Chemistry Report: MAA-CUPM Curriculum Foundations Workshop in Biology and Chemistry. Journal of Chemical Education 78. 582-6.
David Bressoud is DeWitt Wallace Professor of Mathematics at Macalester College. He was the local organizer for the Curriculum Foundations Workshop on Biology and Chemistry held at Macalester College
in November 2000. | {"url":"http://www.maa.org/programs/faculty-and-departments/curriculum-department-guidelines-recommendations/crafty/summer-reports/workshop-chemistry?device=mobile","timestamp":"2014-04-20T01:24:26Z","content_type":null,"content_length":"25554","record_id":"<urn:uuid:57b709b6-f3ee-48db-9ce9-0046b1f94138>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00645-ip-10-147-4-33.ec2.internal.warc.gz"} |
Possible Answer
Curly Brackets signify an Array-Entered Formula in Excel. To get the curly brackets back on your formula hit ctrl+shift+enter instead of just enter after typing the formula.
what do curly brackets mean in excel resources | {"url":"http://www.askives.com/what-do-curly-brackets-mean-in-excel.html","timestamp":"2014-04-19T02:38:16Z","content_type":null,"content_length":"36537","record_id":"<urn:uuid:419f0f74-1c12-42c7-8e03-0abbced39793>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00195-ip-10-147-4-33.ec2.internal.warc.gz"} |
Harvard Public Health Review: Summer 2002
Not a seat is empty in Kresge 502, the cozy auditorium tucked into the midsection of the Harvard School of Public Health's main building. Students, staff, junior faculty, several tenured professors,
and at least one dean are in the audience for the noontime talk. The speaker is dishabille: his medium-length gray hair in a state of frenzy, his brown plaid shirt half tucked into jeans that puddle
on the tops of his running shoes. His topic: a new algorithm for finding the optimum treatment strategy. Dry, dense stuff, but he manages to make it seem urgent. The overheads spew mathematical
notation as his explanation darts between flights of jargon and attempts to bring it all down to earth ("I call this the blip-down step.").
"Everybody with me?" asks Professor James Robins several times. "Everybody in the game?"
For the first half of his 20 years at the School, the answer to that question was--perhaps a little too emphatically--no. But times have changed for Robins. After years of rejection by elite journals
like the Journal of the American Statistical Association, he is now recognized as one of the leading mathematical statisticians in the country, if not the world. A gift from Mitchell L. Dong and his
wife Robin LaFoley Dong last year endowed his professorship in the epidemiology department. His advanced epidemiologic methods class demands more of students than any other course at the School, but
it's widely appreciated as an important rite of passage into deeper thinking about the fundamentals of epidemiology and statistics.
"Students, they don't realize the great effort he makes to explain something that for him is so absolutely trivial and for most of us is really hard," says Miguel Hernán, SM'99, an instructor in
epidemiology who co-teaches the methods course and has worked with Robins to make the material more accessible. Sander Greenland, a professor of statistics at UCLA's School of Public Health, says
Robins stands out because he comes up with the nitty-gritty statistical methods that put his bold concepts on solid ground. "On an intellectual level, he is one of the best I've ever worked with,"
says Greenland.
Robins admits that the rejection letters put a chip on his shoulder, but says he never doubted that he is right. Now he talks enthusiastically about his work being part of a "convergence": "There
used to be a completely different culture and language in artificial intelligence, robotics, economics, statistics, biostatistics--we now all talk exactly the same way."
Robins grew up in suburban St. Louis. His father, Eli Robins, was head of psychiatry at Washington University School of Medicine. His mother, Lee Robins, is still a psychiatric epidemiologist there.
One of four boys, Robins was competitive and rebellious: "My parents were intellectuals so, of course, I didn't like intellectual things and was totally into sports." Clearly, the
anti-intellectualism only went so far. Robins hit Cambridge in 1967 as a Harvard freshmen along with some other fairly bright kids.
But the rebelliousness runs deeper. Robins, who majored in math and philosophy, never got his undergraduate degree because he refused to take the courses necessary to fulfill the language requirement
(tests proved he had special problems processing spoken language, but college officials held him to the requirement). As a young doctor, he was fired by a community health clinic in Boston for trying
to organize a "vertical" union that would include everyone from custodial workers to the doctors. After joining the faculty at the School in 1982, Robins wasn't an outright rebel, but he was
certainly something of a misfit. Robins says he has Professor Richard Monson to thank for "protecting" him during those lean times and Dimitrios Trichopoulos, Vincent L. Gregory Professor of Cancer
Prevention and former epidemiology chair, for giving him an institutional home in the department.
Robins was helping run an occupational health clinic at Yale that he had co-founded when he started to dabble in statistics. Part of his interest came from occupational health, a field full of
problems relating exposure to disease causation. The real attraction, however, may have been that statistics gave free rein to his tremendous gifts for math and abstract thinking. Robins started just
by reading books from the medical library, which he says, smiling, may have been the beginning of his "weird self-taught views." When he started to take statistics courses he was so confused by all
the "bizarre conventions" that he got Cs. His probing questions were answered with "you really couldn't understand."
The irony: Robins's work itself is hard to understand, although he makes a valiant effort to explain it without visible condescension. At the most general level, he has come up with novel statistical
methods for sorting out spurious associations from true cause-and-effects--a problem facing anyone trying to interpret lots of data. In epidemiology, the usual approach is to control for confounding
factors--the variables that are associated both with the exposure and the outcome of interest. One problem with that approach is that it requires the statistician to make some educated guesses about
what those confounding factors might be. Another is that the confounders can also be so-called intermediate variables--events on the "causal pathway" between the exposure and the outcome. And it is a
long-standing rule in epidemiology that you can't control for an intermediate variable, so researchers were stuck.
Robins cut the Gordian knot by inventing a statistic called the "G estimator" that makes analysis of data that are simultaneously confounders and intermediate variables possible. He's branched out
from there. Robins has come up with novel methods for adjusting for treatments that haven't been randomized (for example, use of aerosolized pentamidine in a clinical trial of AZT). He has devised
techniques for using surrogate markers to stop clinical trials early. His Kresge 502 talk on optimizing treatment strategies was the trial run of a statistical model that could greatly reduce the
amount of information needed to make treatment choices.
Practical application remains a major hurdle, however. Robins says attachment to tried--if not entirely true--methods and thinking get in the way. He is optimistic about that changing as students
with some training in his methods move on to jobs and teaching positions. As for his pathway from lone wolf to top dog, Robins says, sure, he's made some adjustments but nothing major. "Now I'm a big
shot, but I haven't changed."
Peter Wehrwein | {"url":"http://www.hsph.harvard.edu/review/review_summer_02/677robins.html","timestamp":"2014-04-17T04:17:57Z","content_type":null,"content_length":"12846","record_id":"<urn:uuid:01ab7911-2df0-4d4a-bdc7-fe38ed6960e3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Linear Algebra
I am having trouble with the following question. (Just hoping to get some guidance, recommended texts etc.):
"Consider an eigenvalue problem Ax = λx, where A is a real symmetric n*n matrix, the transpose of the matrix coincides with the matrix, (A)^T = A. Find all the eigenvalues and all the
eigenvectors. Assume that n is a large number."
Any help would be fantastic! | {"url":"http://www.physicsforums.com/showpost.php?p=97021&postcount=1","timestamp":"2014-04-17T15:30:32Z","content_type":null,"content_length":"8816","record_id":"<urn:uuid:676dc7cf-2a44-495a-a3ec-4931f828c28a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US20060259880 - Optimization of circuit designs using a continuous spectrum of library cells
This application is a continuation of U.S. application Ser. No. 10/447,396, entitled “Optimization of Circuit Designs Using a Continuous Spectrum of Library Cells,” which was filed with the U.S.
Patent & Trademark Office on May 30, 2003.
The invention relates to the field of semiconductor design tools, and more particularly to tools that optimize the performance of circuit designs.
The design and optimization of an integrated circuit follows a design flow that utilizes a multi-level hierarchy of circuit specifications. At the highest level is the functional or architectural
specification of the circuit. At the lowest level are placed transistors that can be used to generate masks for fabrication. Today, design flows are often based on standard cells, in which a library
of low-level circuits is generated, manually or automatically, and the design process selects cells from the library that match the circuit specifications.
In a typical design flow, referred to as a synthesis and place-and-route (or SPR) flow, a fixed set of cells—called a library—is used to map a given design into a physical implementation. The number
of cells, and the amount of optimization that can be performed, is limited. This approach is less efficient than arbitrary full-custom design. Limitations on fixed or static libraries have been
necessary because SPR tools do not efficiently automate design at the transistor level. As a result, cell generation has not been included in the EDA design flow and has been largely done by hand.
Furthermore, with the advent of third-party library companies, library creation is often out-sourced.
One reason that SPR flows result in sub-optimal designs is due to limitations on the size of the cell libraries, and on the optimization process itself. Typically, logically equivalent cells in a
library are provided according to a binary scale of drive strength. For example, 1×, 2×, 4×, 8× and 16× drive strength NAND gate cells are provided in a cell library. The optimization process can
then only select one of these five drive strengths of logically equivalent cells to optimize a circuit.
The process of creating a large number of cells scaled to different drive strengths can be automated. However, this merely creates an oversize cell library that cannot be utilized effectively by
current design tools. Most current design tools assume there will be only approximately five electrical variations for each logical function, and assume binary scaling of cells. Furthermore, a large
amount of time would be needed to synthesize each scaled cell, and the size of the libraries themselves would get cumbersome. Finally, simply optimizing a design across a large number of possible
cells may result in an unnecessarily large number of minor variations, many of which do not affect the overall performance.
The present invention is a method and apparatus for optimizing a circuit design using cells that vary across a relatively continuous spectrum of design parameters such as drive strength, while
retaining the ability to work in an SPR design flow with standard tools with minimum impact to the overall design flow. The performance of the circuit design is evaluated, and iterative cell
replacements are made from the set of cells provided until a termination condition is met. The cells may vary across a spectrum of drive strength and/or other electrical parameters, such as P/N ratio
or taper factor. The cells may be provided in real and virtual libraries, such that the virtual cells have timing information associated with them, but have not been built. The decision on whether to
replace a cell or not may be based on whether the worst case delay through the circuit improves. In some cases certain cells are replaced by cells with minimum area or power prior to the main
iteration loop.
The order in which cells are selected as candidates for replacement can be based on evaluating the worst case path, computing the area to transition time ratio for each cell in that path, and
starting with the smallest and slowest cells first. The optimization process can also include marking a cell as stable when no improvement can be found and not re-testing a cell marked as stable. In
this embodiment, a cell is unmarked as stable when its context changes; that is, when an adjacent cell is replaced.
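The ordering heuristic in this paragraph (smallest, slowest cells first) might be sketched like this; the area-to-transition-time ratio as sort key comes from the description above, but the data layout and field names are invented for illustration:

```python
def candidate_order(path_cells):
    """Order candidates on the worst-case path for replacement:
    smallest area and largest (slowest) output transition time first,
    i.e. ascending area / transition-time ratio."""
    return sorted(path_cells, key=lambda c: c["area"] / c["transition"])

# Invented example cells on a critical path.
path = [
    {"name": "u1", "area": 4.0, "transition": 0.2},  # big and fast
    {"name": "u2", "area": 1.0, "transition": 0.5},  # small and slow
    {"name": "u3", "area": 2.0, "transition": 0.4},
]
print([c["name"] for c in candidate_order(path)])  # small/slow cells come first
```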
The invention will be better understood with reference to the drawings and detailed description below.
The present invention will be described with reference to the drawings, in which:
FIG. 1 illustrates a portion of a prior art design flow.
FIG. 2A illustrates a portion of a design flow incorporating an embodiment of the present invention.
FIG. 2B illustrates an embodiment of a computer system for use with the present invention.
FIG. 3 illustrates a flow chart of an embodiment of the present invention.
FIG. 4 illustrates a flow chart of an embodiment of an optimization function of the present invention.
FIG. 5 illustrates a flow chart of an embodiment of a cell search function of the present invention.
FIG. 6 illustrates a flow chart of an embodiment of a part of a cell search function of the present invention.
FIG. 7 illustrates a flow chart of an embodiment of a cell scale function of the present invention.
FIG. 8 illustrates a flow chart of an alternative embodiment of the present invention.
The present invention is directed to a method and apparatus that improves the performance of a circuit design. The present invention is preferably directed at a SPR design flow, although it could be
applied to other design flows. Cell instances on a critical path are iteratively replaced with functionally equivalent cells having varied characteristics such as drive strength. The process repeats
until the desired performance target of the design is reached or until no more improvements can be made. In one embodiment, critical paths are improved by slowly increasing the drive strength of
cells on the critical path. When no more improvements can be found, a mechanism is employed to establish if the current design is at a local minimum or a global minimum. First, a slight timing
tolerance is applied for a certain number of iterations. This represents a hill climbing technique to accommodate minor variations in the results of static timing analysis. Second, each cell on the
critical path is tested with all of the logical equivalents to that cell to determine if an improvement can be found. This has the effect of switching to other electrical variations, such as a new P/
N ratio, a new taper factor, a cell with one or more fast inputs, or one or more low threshold (fast) transistors. If a different electrical variant improves performance, then the cell replacement is
made and scaling will again attempt to find a better cell.
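The replacement loop described in this passage can be pictured roughly as follows. This is an illustrative toy, not the patented method: the two-cell timing model, the strength spectrum, and the 1% tolerance are all invented for the example, and a real flow would query an incremental static timing analyzer rather than a Python callback.

```python
def optimize(cells, delay_fn, strengths, tolerance=0.01, max_passes=20):
    """Greedy sketch of iterative cell replacement with a small timing
    tolerance (the hill-climbing escape from local minima described above).

    cells     -- dict mapping cell name -> current drive strength
    delay_fn  -- callable(cells) -> worst-case path delay (stand-in for STA)
    strengths -- sorted list of available drive strengths (the 'spectrum')
    """
    best = delay_fn(cells)
    for _ in range(max_passes):
        improved = False
        for name in cells:
            current = cells[name]
            i = strengths.index(current)
            for j in (i - 1, i + 1):                 # try neighboring sizes
                if 0 <= j < len(strengths):
                    cells[name] = strengths[j]
                    d = delay_fn(cells)
                    if d < best * (1 + tolerance):   # accept near-equal moves too
                        best = min(best, d)
                        current = strengths[j]
                        improved = True
            cells[name] = current                    # keep the last accepted size
        if not improved:
            break                                    # no replacement helped: stop
    return cells, best

# Toy timing model: two cells in a chain. A cell's delay falls as its own
# drive strength rises, but a stronger cell also loads the cell driving it.
def chain_delay(cells):
    a, b = cells["A"], cells["B"]
    return 1.0 / a + 0.1 * b / a + 1.0 / b

spectrum = [1.0, 1.2, 1.4, 1.7, 2.0, 2.4, 3.0]
sized, delay = optimize({"A": 1.0, "B": 1.0}, chain_delay, spectrum)
print(sized, round(delay, 3))
```

In this toy the loop upsizes both cells one spectrum step per pass until neither neighbor improves the worst-case delay, mirroring the "slowly increasing the drive strength" behavior described above.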
The cell library contains real and virtual cells chosen so that a design parameter, such as drive strength, varies in small discrete steps. Drive strength can in principle vary continuously along an
entire spectrum. The optimum choice of drive strength depends on the context of the cell, including the load of the cell and the drive strength of the previous cell. Higher drive strengths drive
their load faster, but they load and slow down the cells that drive them, and use more power. Having a very limited set of choices of drive strength produces a design that has longer cycle times,
uses more power, and has more area than a fully optimized design. Note that other design parameters besides drive strength are possible, since each transistor or group of transistors is scaled
independently. For example, cells can be provided with different P/N ratios or different taper factors.
For these reasons, the steps of the cell library are chosen such that a relatively continuous spectrum of values across the design parameter is achieved. A “relatively continuous” spectrum means that
the spectrum is more continuous than binary scaling: the goal is to come close to the effect of a continuous spectrum, while achieving optimization times and library sizes that are practical; the
step sizes need not be smaller than static timing analysis is able to effectively distinguish; the step sizes also need not be smaller than is necessary to create a meaningful change in timing of the design.
FIG. 1 illustrates a prior art design flow. The specification of a circuit exists at multiple levels. RTL 110 is the specification of a circuit design at the register transfer level. It can be
written by hand, or can be generated by other design tools. A synthesis tool 115 is a program that is used to convert the RTL specification of a circuit into a gate netlist 120. Another program, the
placement tool 125 is used to convert a gate netlist representation of a circuit design into placed gates 130. Finally a detailed routing program 135 is used to generate an exact layout 140 of the
circuit design. After a layout has been generated, timing analysis consists of running an extraction program 145 to extract a Spice 150 representation of the design. Then, static timing analysis tool
160 is used to generate timing results for the circuit. One or more steps in the standard flow may be combined into a single software package. One example of a static timing analysis tool is the tool
sold by Synopsys, Inc, under the trademark PrimeTime. Once timing results are produced, hand-optimization is used to change circuit specifications at the RTL level. Optimizations at other levels are
difficult and time-consuming to incorporate.
FIG. 2 illustrates a design flow incorporating an embodiment of the present invention. Blocks 210, 215, 220, 225, 230, 235, 240, 245, 250, 260 and 280 operate in much the same fashion as the
analogous blocks in FIG. 1. Optimizer 270 interacts with static timing analysis program 260 to iteratively analyze the circuit design after cell replacements have been made. The timing is optimized
utilizing cells in both the real and virtual libraries until a timing goal is reached or there are no further improvements to be made. Cells are replaced only with logically equivalent cells, so the
functionality of the design is not changed. After optimizer 270 finishes the process, results can be exported in the form of engineering change order (or ECO) files to be processed by synthesis tool
215 and placement tool 225. The changes can then be incorporated into the standard design flow. In this way, the use of a standard design flow can be employed. A useful feature of static timing
analysis 260 is the ability to perform incremental timing analysis. This means that if the design is changed with only a single cell replacement, it is not necessary to re-analyze the entire circuit,
but only the portions that can be affected by the change.
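Incremental timing analysis rests on a simple graph property: replacing one cell can only change timing in that cell's transitive fan-out (plus the load seen by its immediate drivers). A toy sketch of computing the affected region (the netlist and names are invented; real tools also re-time the immediate fan-in for the load change, which this sketch omits):

```python
from collections import deque

def affected_cells(fanout, changed):
    """Return the set of cells whose timing must be recomputed after
    `changed` is replaced: the cell itself plus its transitive fan-out.
    `fanout` maps each cell to the list of cells it drives."""
    seen, queue = {changed}, deque([changed])
    while queue:
        cell = queue.popleft()
        for nxt in fanout.get(cell, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A small netlist: g1 drives g2 and g3; g2 drives g4; g5 is unrelated.
fanout = {"g1": ["g2", "g3"], "g2": ["g4"], "g3": [], "g4": [], "g5": []}
print(sorted(affected_cells(fanout, "g2")))  # only g2 and g4 need re-timing
```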
FIG. 3 illustrates a flow chart of an embodiment of the present invention. In step 310, virtual library 290 is created. This step involves the generation of cells scaled in small, discrete steps
logically equivalent to cells in the real library. Cells in the virtual library are characterized sufficiently for a static timing analysis, but they are not built. Thus, the size of the virtual
library, and the time needed to generate it, are small compared to a real library with the same cells. The sizes of the discrete scaling steps are chosen such that the effect of a continuous spectrum
of scaling is achieved. In one embodiment, this involves scaling in 20% steps up to 300% and scaling in 25% steps from 300% up to 600%. Alternatively, the virtual library may utilize scaling in 10%
steps. Each of these embodiments provides finer granularity than the binary scaling of a real cell library.
The generation of the virtual library involves generating timing behavior for a specified set of cells based on the timing characteristics in a real library. These timing characteristics can be
generated by scaling, by interpolation, or some combination of the two. The timing characteristics of the real cells are typically represented by a table, in which the delay through the cell and the
output transition time are specified for a set of input transition times and output capacitances. Table entries are of the form (slew_a, C_b): T_ab and (slew_c, C_d): S_cd, where slew_a and
slew_c are input transition times, C_b and C_d are output capacitances, T_ab is the delay through the cell under the conditions specified by slew_a and C_b, and S_cd is the output transition
time of the cell under the conditions specified by slew_c and C_d. A set of table entries specifies the timing characteristics of the cell. Other input variables besides input transition time and
output capacitance are possible, and other output variables besides delay and output transition time are possible. During static timing analysis, the actual input transition time and output
capacitance of a real or virtual cell within the circuit design are used to compute the delay and output transition time of that cell by interpolating the values in the table entries corresponding to
that cell. An alternative method of specifying timing characteristics is to use an equation where each cell is specified in terms of polynomial coefficients. This method allows static timing analysis
to use a calculated value rather than a value interpolated from a table. More generally, generating the timing for a given instance of a cell, given the adjacent connections of the cell, is an input/
output function where the characteristics of a cell are generated by using the timing characteristics of the adjacent cells.
When a virtual library is built, the cells themselves are not actually generated at the transistor level. However, the timing characteristics need to be generated so that static timing analysis can
take place. This means that in the case of a table-based timing specification, a new table must be created for each scaled cell. The basic technique is that scaling a cell by a scale factor of S
assumes that the input capacitance scales by S and that the output capacitance corresponding to a given timing characteristic also scales by S. This means that the scaled cell can drive a higher
output capacitance in the same time as the unscaled cell and has a scaled input capacitance. This can be done by multiplying each output capacitance in each table entry by S, and by multiplying the
input capacitance by S. When a scaled cell is actually built, every transistor width is multiplied by the scale factor.
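The table-scaling rule described above can be sketched in a few lines. This is a hypothetical illustration, not the patent's implementation; the function name and data layout are my own, and a real Liberty-style table would also carry output transition entries.

```python
# Sketch of scaling a table-based timing model by a factor s.
# Table entries map (input_slew, output_cap) -> delay; scaling multiplies
# the output capacitance of every entry, and the cell's input capacitance,
# by s, so the scaled cell drives s times the load in the same time.

def scale_cell(table, input_cap, s):
    """Return (scaled_table, scaled_input_cap) for scale factor s."""
    scaled = {(slew, cap * s): delay for (slew, cap), delay in table.items()}
    return scaled, input_cap * s

# A 1x cell that drives 10 fF in 50 ps becomes a 2x cell driving 20 fF in 50 ps.
base = {(0.1, 10.0): 50.0}
table2x, cin2x = scale_cell(base, 1.5, 2.0)
```

When the scaled cell is eventually built, the same factor is applied to every transistor width.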
Another way to create a new table for a cell in a virtual library is to interpolate two or more cells. In general when interpolating two or more cells, each cell has its own scale factor and the
resulting cell is the combination of such scaled cells. Simple interpolation, in which the sum of the scale factors equals one, is a special case of the more general mechanism in which both
interpolation and scaling are involved. Interpolation works when each cell being interpolated has the same netlist. When an interpolated cell is actually built, the width of each transistor is set to
the sum of each cell's scale factor multiplied by the width of the corresponding transistor in that cell. The basic technique for interpolation assumes that the input capacitance
is the sum of the input capacitances of each component cell multiplied by their respective scale factors. Interpolation also assumes that each component cell drives a portion of the total
capacitance. For a given output timing characteristic, the output capacitance is set to the sum of the scaled output capacitances for each component cell for an equal value of that timing characteristic.
It is possible to use negative scaling factors in interpolation, in which case the timing behavior needs to be extrapolated from existing table values. For example, if a 2× inverter and a 4× inverter
are scaled by −1.0 and 2.0 respectively, the result is effectively a 6× inverter (i.e., 2×(−1.0)+4×(2.0)=6). Each transistor is now the linear extrapolation between the 2× and 4× cells. Note that the
resulting cell is not necessarily the same as a 4× cell scaled by 1.5, because other electrical characteristics of the 2× and 4× cells, such as P/N ratios, may not be the same.
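The interpolation rule, including negative (extrapolating) scale factors, can be sketched as follows. This is an illustrative assumption of how such a combination might be computed; the names and the per-transistor width representation are mine, not the patent's.

```python
# Sketch of interpolating cells with per-cell scale factors. Transistor
# widths and input capacitances combine as scaled sums; a negative factor
# extrapolates beyond the existing cells (e.g. 2x * -1.0 + 4x * 2.0
# behaves like a 6x cell).

def interpolate(cells, factors):
    """cells: list of dicts with 'widths' (one per transistor) and 'cin'."""
    n = len(cells[0]["widths"])
    widths = [sum(f * c["widths"][i] for c, f in zip(cells, factors))
              for i in range(n)]
    cin = sum(f * c["cin"] for c, f in zip(cells, factors))
    return {"widths": widths, "cin": cin}

inv2x = {"widths": [2.0, 4.0], "cin": 2.0}   # n/p widths, arbitrary units
inv4x = {"widths": [4.0, 8.0], "cin": 4.0}
inv6x = interpolate([inv2x, inv4x], [-1.0, 2.0])
```

Note that, as the text observes, this extrapolated 6× cell need not match a 4× cell scaled by 1.5, since other electrical characteristics of the component cells may differ.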
Step 320 represents one embodiment of the iterative optimization step of the present invention. Step 320 utilizes the real and virtual libraries, takes as input user configurations and terminates
when a performance goal is reached, when it cannot find any improvements to make, or when a maximum number of changes is reached. A summary of the global variables input to the Optimize Design Step
320 is shown in the table below.
Variable           Description
Libraries          Names of real and virtual libraries
Clocking Groups    Names of the clocking groups to optimize
Active Count       Number of groups to optimize simultaneously
Sequential Mode    Mode of operation for altering flip-flops and latches:
                     none: don't make any changes
                     replace: allow real cells but not virtual cells
                     scale: allow virtual cells but not real cells
                     both: allow both real and virtual cells
Granularity        Granularity of scaled library, including maximum scale
Max Drive          Maximum drive strength that will be used
Max Cell Area      Maximum cell area that will be used
Real Cell Effort   Mode of operation for preferring real cells over virtual cells:
                     low: don't prefer real cells over virtual cells
                     medium: try real cells first a few times, then try virtual cells
                     high: try all real cells first, then try virtual cells
                     highest: try only real cells
Don't Touch        List of cells not to try to optimize
The Libraries variable is the names of the real and virtual libraries. The Clocking Groups variable indicates to the optimizer which clocking groups should be optimized. The optimizer will analyze
the worst case timing utilizing only the groups specified by this variable. Multiple clocking groups can be specified. The next variable, Active Count, is the number of clocking groups to optimize
across simultaneously. The Sequential Mode variable is used to configure the optimizer with the preferred behavior for the treatment of sequential cells, i.e., cells that contain flip-flops or
latches. The user can specify that sequential cells should not be optimized, that they should be optimized using only cells from either the real or virtual libraries, or that any cell replacement is allowed.
The Granularity variable specifies the scaling steps to be used by the optimizer in the scaling phase. An example of scaling steps would be the following sequence: 100, 120, 140, 160, 180, 200, 220,
240, 260, 280, 300, 325, 350, 375, 400, 425, 450, 475, 500, 525, 550, 575, 600, where each number is a percentage of scaling over a base cell size. Note that additional cells beyond those specified
in this variable can be placed in the virtual library. These cells will not be used during the scaling phase, but they will be utilized during the final phase of optimization where cells on the
critical path are replaced with every equivalent cell in the virtual library. These phases are described in further detail below in conjunction with FIG. 5.
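The example Granularity sequence above can be generated programmatically. This is a hypothetical sketch of that particular example (20% steps to 300%, then 25% steps to 600%); the function name is mine, and other embodiments use other step sizes.

```python
# Sketch reproducing the example Granularity sequence: 20% steps from 100%
# to 300%, then 25% steps from 325% up to 600% (each value a percentage of
# the base cell size).

def scale_steps():
    steps = list(range(100, 301, 20))    # 100, 120, ..., 300
    steps += list(range(325, 601, 25))   # 325, 350, ..., 600
    return steps
```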
The Max Drive Strength and Max Cell Area variables are used to limit the scaling that can occur. The Real Cell Effort variable is used to configure the optimizer with the preferred behavior for the
level of preference to real cells over virtual cells. In some cases, it is desirable to utilize real cells first. This will try to find solutions where the minimum number of virtual cells are used to
achieve the desired performance. Use of real cells minimizes the amount of effort required to generate new cells once the optimizer is finished. If the Real Cell Effort variable is set to “low”,
there is no preference given to real cells over virtual cells. If the variable is set to "medium," then a given instance will be replaced with a real cell, when performance improves, up to a certain
number of times before virtual cells are utilized. In practice a reasonable number for this count is three. This count applies globally to a specific instance across the entire optimizer run.
If the Real Cell Effort is set to “high,” every cell in the real library will be tried for every cell on the critical path before virtual cells are utilized. Finally if the Real Cell Effort is set to
“highest,” only real cells will be tried. More details on the function and operation of the real library preference mode are provided below in conjunction with the description of FIGS. 5, 6 and 7.
The last global variable is the Don't Touch variable that contains a list of specific cell instances that should not be selected as candidates for replacement. This feature can be used to override
the optimization process so that certain cells are not replaced even if they are on the critical path.
Optimizer step 320 of FIG. 3 also takes two command line parameters. These are the Maximum Nudge Count and the Slack Goal. The Maximum Nudge Count specifies a termination condition to limit the run
time if the desired performance has not been reached but incremental improvements are still being found. The Slack Goal specifies a termination condition for when to terminate based on the worst case
path. Default values for the Maximum Nudge Count and the Slack Goal are 1,000,000 and 0.0 ns respectively. The function of these termination conditions is described in more detail below in
conjunction with FIG. 4.
Step 330 of FIG. 3 represents an optional clean-up phase where, after the optimizer has terminated, the overall design can be further improved. The first function is Recover Instances. Recover
Instances processes a changes file, and for each instance in which a scaled version of a cell has been specified, the performance will be compared to every real cell to determine if there is no
change in timing. If no change or a reduction in timing is achieved, the scaled cell is replaced with the real cell. In this way, the number of virtual cells utilized can be minimized. In an
alternative embodiment, rather than looking only at the timing of each individual scaled cell, a scaled cell could be replaced by a real cell if the overall worst case delay doesn't change, even if
this particular cell gets slower. In either case, the timing comparison can be given a tolerance value, so that small increases in timing are allowed.
Step 340 of FIG. 3 represents the step of generating ECO files to be used by synthesis tool 215 and placement tool 225. A previously generated file representing the changes to the design (i.e., which
cell instances have been replaced with which library cells) is processed in this step and the ECO files are generated. Step 350 represents the step of generating real library cells for those cells
that were selected by the optimizer. The ProGenesis product by Prolific, Inc. is an example of a product that can be used to automatically generate real cells from a scale specification. Finally,
step 360 represents the operation of detailed routing 235 using the updated design and the updated library.
FIG. 4 illustrates an embodiment of the optimizing step 320 shown in FIG. 3. The command line parameters Max Nudge Count and Slack Goal are passed as parameters to the algorithm shown in FIG. 4 and
the global variables illustrated in Table 1 are also utilized. Step 410 is an initialization step in which various variables are initialized. An initial timing analysis is performed on the circuit
design in step 410 and the worst timing path across all groups is determined. The active group is set to the worst group, the Best Slack variable is set to the timing slack through the critical path,
and the Nudge Count, Stuck Counter and the Tolerance Counter are set to zero.
In step 412, the current Nudge Count is compared to the Max Nudge Count parameter and if the maximum count has been reached, the optimizer terminates. The Max Nudge Count parameter is not intended to
control a normal termination condition, but to terminate in the event of infinite loops, and to allow the user to control the maximum running time of the optimizer. In step 414, the Best Slack
variable is compared to the Slack Goal parameter and if the goal has been reached, control is transferred to step 450. The Slack Goal parameter allows the user to control the termination by governing
how much improvement is desired. The optimizer will terminate only when all groups being optimized are better than the Slack Goal, as illustrated in more detail below.
If neither condition tested by steps 412 and 414 is true, control is transferred to step 420, in which the Nudge Count counter is incremented. In step 422, the incremented nudge count mod 20 is
generated and if the result is equal to zero, control is transferred to step 430. This has the effect of executing step 430 every 20 times through the main loop consisting of steps 412, 414, 420,
422, 440 and 445. In step 430, the worst group across all optimized groups is measured, and if the currently active group is not the worst group, the new worst group is added to the active group list
and if necessary, a group is removed from the active group list. As described above, the user may set the maximum number of active groups that will control how many groups are in the active list. The
default value for this global variable is one, meaning that when the new worst group is added to the active list, the previous worst group is removed since only one group is allowed on the active
list. The purpose of step 430 is to periodically test all groups being optimized, so that if improvements are made such that the worst group is no longer the group being optimized, optimization will
shift to the new worst group. After step 430 executes, control is transferred to step 440. In an alternative embodiment, an iteration count other than 20 could be used. It would be possible to execute
step 430 more frequently (even every iteration), or less frequently. The value of 20 was chosen to balance the overhead of analyzing all optimized groups against closely tracking the worst group.
Step 440 is the main optimization step where a single improvement, or nudge, is made to the circuit design. The result of step 440 is either that an improvement has been made, or that no improvement
can be found. More detail of an embodiment of step 440 is illustrated in FIG. 5. In the case that an improvement was found, control is transferred to step 445. In step 445, the worst path in the
currently active groups is measured and the Best Slack variable is updated if the slack of the worst path has improved. In the case of improvement, the Stuck Counter and the Tolerance Counter are
also zeroed. The use of these counters is described below in conjunction with FIG. 5. After step 445, execution continues with step 412 described above. This completes the main optimization loop.
In the case that no improvement can be found in step 440, control continues with step 450, in which the circuit design is re-analyzed and the worst path across all optimized groups is determined. In
step 452, a test is made to determine if the current worst group is in the active group list. If the result of this test confirms that the worst group is in the active group, execution of the
optimizer terminates. The purpose of steps 450 and 452 is to ensure that the lack of improvement achievable from step 440 is not based on a path that is not the worst path. Since step 430 is called
only every 20 iterations, it is possible that the worst group is not currently in the active group list.
In the case that the worst group is not in the active group, control is transferred to step 460, in which the worst group is added to the active group list. In the case that the number of times this
“restart” condition has occurred is greater than the number of active groups, the limit on the number of active groups is also removed in this step. This prevents an infinite loop situation in which
close paths in different groups continuously sequence between worst groups. At the end of step 460, control is transferred to step 412 and the main optimization loop is reentered.
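The FIG. 4 control flow can be summarized in a simplified sketch. Everything here is an illustrative assumption: `try_nudge` stands in for the single-improvement step of FIG. 5, the toy timing model is mine, and the periodic worst-group re-check and restart logic are elided to a comment.

```python
# Sketch of the FIG. 4 main loop: iterate single-cell "nudges" until the
# slack goal is met, the nudge budget is exhausted, or no improvement can
# be found. try_nudge() returns the new worst slack after an improving
# replacement, or None if no single replacement helps.

def optimize(try_nudge, start_slack, slack_goal=0.0, max_nudges=1_000_000):
    best_slack, nudges = start_slack, 0
    while nudges < max_nudges and best_slack < slack_goal:
        nudges += 1
        # (every 20th iteration the real optimizer re-checks the worst group
        # and, if needed, shifts optimization to it)
        improved = try_nudge()
        if improved is None:
            break                      # no single replacement helps; stop
        best_slack = max(best_slack, improved)
    return best_slack, nudges

# Toy timing engine: each nudge improves slack by 0.5 ns until it can't.
slacks = iter([-1.5, -1.0, -0.5, 0.0])
slack, n = optimize(lambda: next(slacks, None), start_slack=-2.0)
```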
FIG. 5 illustrates more detail of an embodiment of the optimization step 440 described above. This step involves a search for an improved design with a cell replacement. If an improvement is found,
the step returns indicating "Improvement." If no improvement can be found, the step returns indicating "No Improvement." Decision block 502 tests whether the global variable Real Cell Effort is
set to "high" or "highest." If the setting is "high" or "highest," control transfers to step 510, and otherwise control transfers to step 520. In step 510, a search for an improved cell is made with
the restriction that only real cells are allowed, and with the tolerance parameter set to zero. More detail on an embodiment of step 510 is provided below in conjunction with FIG. 6.
If the result of step 510 is that an improved cell has been found, as tested in step 512, step 440 returns indicating “Improvement.” Otherwise, control is transferred to step 520. Step 510 forces
every real cell to be tested for a timing improvement before virtual cells are tried, and this step will only be utilized if the user has selected "high" or "highest" as the Real Cell Effort variable.
In Step 520, a search for an improved cell is made allowing virtual cells to be used and with the tolerance variable set to zero. More detail on an embodiment of step 520 is provided below in
conjunction with FIG. 6. If the result of step 520 is that an improved cell has been found, as tested in step 522, step 440 returns indicating "Improvement." Otherwise, control is transferred to step 530.
In step 530, the Tolerance Count is incremented and in step 532, the incremented value is compared to three. If the result is more than three, control is transferred to step 550, otherwise control is
transferred to step 540. The Tolerance Count and the maximum allowed value are used to allow the timing to decrease slightly for a few cell replacements so that if there is a local minimum, the
optimizer will be able to get beyond the local minimum and proceed. Due to anomalies in the timing analysis, it is possible that there is no single cell that can be replaced to improve timing, but a
larger number of cell replacements will improve timing. In alternative embodiments of step 440, the maximum tolerance count value of three can be another number, including zero, in which case control
would always be transferred to step 550. The user can override the default value of three with another value.
In step 540, a search for an improved cell is made allowing virtual cells to be used and with the tolerance variable set to one picosecond. The tolerance value passed as a parameter in this step can
be set by the user to any value other than its default value of one picosecond. More detail on an embodiment of step 540 is provided below in conjunction with FIG. 6. If the result of step 540 is
that an improved cell has been found, as tested in step 542, step 440 returns indicating “Improvement.” Otherwise, control is transferred to step 550.
In step 550 the Stuck Count is incremented and in step 552, the incremented value is compared to 10. If the result is greater than 10, step 440 returns indicating No Improvement, otherwise control is
transferred to step 560. The Stuck Count and the maximum value are used to terminate if a certain number of cell improvements do not change the Best Slack. Recall that in step 445 the Tolerance Count
and the Stuck Count were zeroed if the Best Slack had improved. In alternative embodiments, the maximum stuck count value of 10 can be another number. The user can override the default value of 10
with another value.
In step 560, a search for an improved cell is made using every cell in both real and virtual libraries and with the tolerance variable set to zero. More detail on an embodiment of step 560 is
provided below in conjunction with FIG. 6. If the result of step 560 is that an improved cell has been found, step 440 returns indicating "Improvement," otherwise step 440 returns indicating "No Improvement."
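The escalation ladder of FIG. 5 can be sketched as follows. This is a deliberately simplified, hypothetical rendering: `search(real_only, tol)` stands in for the FIG. 6 cell search, the "highest" mode is not modeled in full, and the counter handling is reduced to the essentials.

```python
# Sketch of the FIG. 5 nudge step: real-cell search first (at high/highest
# effort), then virtual cells at zero tolerance, then a small tolerance to
# escape local minima, then an exhaustive pass, with the Tolerance and Stuck
# counters bounding how long the optimizer persists without real progress.

def nudge(search, effort, state, max_tolerance=3, max_stuck=10):
    if effort in ("high", "highest") and search(real_only=True, tol=0.0):
        return "Improvement"                               # step 510
    if search(real_only=False, tol=0.0):
        return "Improvement"                               # step 520
    state["tolerance"] += 1                                # step 530
    if state["tolerance"] <= max_tolerance and search(real_only=False, tol=0.001):
        return "Improvement"                               # step 540 (1 ps)
    state["stuck"] += 1                                    # step 550
    if state["stuck"] > max_stuck:
        return "No Improvement"
    if search(real_only=False, tol=0.0):
        return "Improvement"                               # step 560
    return "No Improvement"

state = {"tolerance": 0, "stuck": 0}
result = nudge(lambda real_only, tol: tol > 0.0, "high", state)
```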
FIG. 6 illustrates more detail of an embodiment of steps 510, 520, 540 and 560 described above. In step 610, a number of initialization steps are performed to set up the search for an improved cell.
First, an array indicating whether an instance has been tried is cleared. This has the effect of clearing this “Tried” flag for every instance in the design. Then the worst path in all active groups
is determined and for all instances in this worst path, an area to transition time ratio is computed. This is an indication of how heavily loaded the cell is relative to its drive strength. The
reason for computing this value for every cell instance in the critical path in step 610 is to control the order in which cells are tested for improvement. Cells with smaller area to transition time
ratios will be tested first to see if a replacement can improve timing. An advantage of ordering the replacements in this way is that the smallest and most heavily loaded cells get replaced first.
These cells are the ones that are most likely to yield timing improvements with the minimal impact on overall power, area and impact to final placement and routing. In alternative embodiments, the
control of the order in which replacements are attempted is based on a different computed variable. For example the ordering value could be based on a more complex function of power, area and/or
transition time.
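The candidate ordering of step 610 can be sketched directly. This is an illustrative assumption; the instance representation and function name are mine.

```python
# Sketch of the step 610 ordering: cells on the critical path are tried in
# increasing order of area-to-transition-time ratio, so the smallest, most
# heavily loaded cells are candidates for replacement first.

def order_candidates(instances):
    """instances: list of (name, area, transition_time) tuples."""
    return sorted(instances, key=lambda inst: inst[1] / inst[2])

# u2 is small and heavily loaded (low ratio), so it is tried first.
path = [("u1", 4.0, 0.2), ("u2", 1.0, 0.5), ("u3", 2.0, 0.1)]
ordered = order_candidates(path)
```

Python's `sorted` is stable, so ties (here u1 and u3, both ratio 20.0) keep their original path order.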
In step 612, the Nudge Mode is tested. In the case of “real” and “any,” corresponding to steps 510 and 560 respectively, control is transferred to step 620. In the case of “virtual,” corresponding to
steps 520 and 540, control is transferred to step 650. In step 620, the cell with the lowest area to transition time ratio on the critical path that has not already been tried is determined. If no
such cell exists, as determined by step 622, control is returned. If there is such a cell, control is transferred to step 630. Step 630 tries to find a better cell for a specific instance in the
circuit design. Every logically equivalent cell will be tested for a timing improvement. In the case that the Nudge Mode is “real,” only real cells will be tried, in the case that the Nudge Mode is
“any”, all logically equivalent cells in both libraries will be tried. If the instance passed to step 630 is not in the library or on the don't touch list, it will not be replaced. Also, if it is a
sequential cell (i.e. a flip-flop or latch), the Sequential Mode global variable discussed above is enforced. Note that in the case that the Nudge Mode is set to “any,” all logically equivalent cells
in the virtual library will be tested, even if they are not within the scaling steps specified by the Granularity global variable.
Step 630 returns an indication of whether a cell replacement was actually made, which is tested in step 632. If a cell replacement was made, control is returned. Otherwise, control is transferred to
step 640. In step 640, the instance that was not replaced is marked as being tried and control is transferred to step 620. The loop consisting of steps 620, 622, 630, 632 and 640 represents an
iterative process in which all instances on the critical path are tested for a replacement cell that improves timing. The process terminates when a timing improvement is found or when there are no
more instances that have not been tried.
There is an analogous loop for the case where scaled cells are used that starts with step 650. A cell instance is considered “stable” if an attempt has been made to scale it, and for which no scaled
cell has been found that improves timing. Once a cell has been marked as stable, it will not be tested again until its context changes. An instance's context changes when any cell it is driven by, or
any cell it drives, is changed. All instances start out marked unstable. In step 650, the cell with the smallest area to transition time ratio that has not been tried and is marked unstable is
determined. If no such cell exists, as determined by step 652, control is transferred to step 660. If there is such a cell, control is transferred to step 670. In step 660, the cell with the smallest
area to transition time ratio that has not been tried and is marked stable is determined. If no such cell exists, as determined by step 662, control is returned. This will happen when a scaling
attempt has been made for every cell in the critical path.
In step 670, an attempt is made to scale the instance determined to be the best candidate. There are two modes of operation for step 670, corresponding to the “low” and “medium” settings of the
global variable Real Cell Effort described above. More detail on an embodiment of step 670 is provided below in conjunction with FIG. 7. Step 670 returns an indication of whether a cell replacement
was actually made, which is tested in step 672. If a replacement was made, control is transferred to step 680, otherwise control is transferred to step 690. In step 680, the cell that was replaced is
marked as unstable along with all of the cells that drive it, and that it drives. Control is then returned. In step 690, the cell that was unsuccessfully attempted to be scaled is marked as stable
and tried, and control is transferred to step 650. The loop consisting of steps 650, 652, 660, 662, 670, 672 and 690 represents an iterative process in which an attempt is made to scale all instances
on the critical path. The unstable instances are tried first, and if none of them yield improvements, then the other instances are tried.
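The stability bookkeeping around steps 650 to 690 amounts to invalidating a cell's "stable" mark whenever its context changes. A hypothetical sketch, with fan-in/fan-out maps of my own invention:

```python
# Sketch of the stable/unstable bookkeeping: when an instance is replaced,
# it and every cell that drives it or that it drives lose their "stable"
# mark, so they become candidates for scaling again.

def mark_replaced(changed, fanin, fanout, stable):
    """After `changed` is replaced, it and its neighbours become unstable."""
    neighbours = {changed} | set(fanin.get(changed, [])) | set(fanout.get(changed, []))
    for inst in neighbours:
        stable.discard(inst)

stable = {"a", "b", "c"}
fanin = {"b": ["a"]}           # a drives b
fanout = {"b": ["c"]}          # b drives c
mark_replaced("b", fanin, fanout, stable)
```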
FIG. 7 illustrates more detail of an embodiment of step 670 described above. Step 710 ensures that if the instance passed to step 670 is not in the library or on the Don't Touch list, it will not be
scaled. Also, if it is a sequential cell (i.e. a flip-flop or latch), the Sequential Mode global variable discussed above is enforced. The Real Cell Effort mode is tested in step 712. In the case of
“medium” effort, a global counter for the specific instance, the Real Try Count, is tested for being greater than three. If the number of times this instance has been tried with real cells is less
than three, control is transferred to step 720, otherwise control is transferred to step 740. Step 720 increments the Real Try Count for this instance and control continues with step 730. The Real
Try Count implements the “medium” mode of the Real Cell Effort global variable, in which real cells are tried a certain number of times before virtual cells are tried. In alternative embodiments, the
number of times to try real cells before utilizing virtual cells can be another number.
Step 730 tries to find a better cell for the instance by testing every logically equivalent cell in the real library. Step 730 is substantially the same as step 630 described above in the case of
FIG. 6 corresponding to step 510. In the case that step 730 returns without having found a better cell, as tested by step 732, control is transferred to step 740, otherwise control is returned. In
step 740, the given instance is scaled by one step in the scale as specified by the Granularity global variable and timing results for the circuit design with the scaled cell are generated.
FIG. 8 represents an alternative embodiment of the present invention in which power or area are optimized rather than timing. Steps 810, 820, 830, 840, 850 and 860 operate in substantially the same
way as the analogous steps in FIG. 3. Step 815 has been added in which, prior to the operation of the Optimization step 820, all instances in the design are lowered to their minimum area or minimum
power. The optimizer will then increase the sizes of the cells as necessary to meet the timing goal. The result is that only the cells that are required to be increased in power or area to meet
timing will be, resulting in a design of optimal power or area.
An alternative embodiment would be to allow cell instances to be decreased in size during the optimization process. One way to implement such an embodiment would be to sort the cells in the reverse
order during a step corresponding to step 610 described above. Instead of starting with the cells with the smallest area and largest transition time, the cells with the largest area and shortest
transition time would be selected first. Then for each candidate cell selected, the smallest or lowest power cell would be found that keeps the slack of that cell greater than the worst slack plus a
guard factor. If such a replacement cell exists, the replacement would be performed, the timing information would be updated and the sequence would repeat, in much the same manner described above.
The reason for the guard factor, which could be set to zero in alternative embodiments, is to guarantee that new critical paths are not introduced. The path being impacted by the cell replacement
will still be at least the guard factor faster than the critical path.
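The reverse-order recovery pass described above can be sketched for a single instance. This is a hypothetical illustration; the candidate list, the slack model, and the names are assumptions of mine, not the patent's.

```python
# Sketch of the downsizing pass: for a candidate instance, pick the smallest
# (or lowest-power) variant whose resulting path slack stays at least `guard`
# better than the worst slack, so no new critical path is introduced.

def downsize(variants, slack_after_swap, worst_slack, guard=0.05):
    """variants: smallest-first list of candidate cells for one instance."""
    for cell in variants:
        if slack_after_swap(cell) >= worst_slack + guard:
            return cell                 # smallest variant that is still safe
    return None                         # no safe variant; keep the original

# Toy model: slack a swap would leave on this path, per candidate cell.
slacks = {"1x": -0.20, "2x": 0.10, "4x": 0.30}
choice = downsize(["1x", "2x", "4x"], slacks.get, worst_slack=0.0)
```

With a guard of 0.05 ns, the 1× variant would violate timing and the 2× variant is the smallest safe choice.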
It will be appreciated by one skilled in the art that in alternative embodiments, many of the steps can be performed independently and in parallel rather than in sequence. An advantage of parallel
operation is that with sufficient computing resources the total running time of the optimization is reduced. In particular, steps 630 and 730 described above involve generating timing results for an
entire group of cells in the real and virtual cell libraries. By allowing multiple instances of the static timing analysis tool to run independently on separate copies of the circuit design, these
steps can be parallelized. It is also the case that steps 670 and 740 described above, rather than evaluating a single step in the scaling of the candidate cell could evaluate multiple scalings of
the candidate cell in parallel. It will be appreciated that other aspects of the current invention can be parallelized.
One skilled in the art will appreciate that the present invention can be practiced by other than the described embodiments, which are presented for purpose of illustration and not of limitation.
Directed Mammalian Gene Regulatory Networks Using Expression and Comparative Genomic Hybridization Microarray Data from Radiation Hybrids
Meiotic mapping of quantitative trait loci regulating expression (eQTLs) has allowed the construction of gene networks. However, the limited mapping resolution of these studies has meant that
genotype data are largely ignored, leading to undirected networks that fail to capture regulatory hierarchies. Here we use high resolution mapping of copy number eQTLs (ceQTLs) in a mouse-hamster
radiation hybrid (RH) panel to construct directed genetic networks in the mammalian cell. The RH network covering 20,145 mouse genes had significant overlap with, and similar topological structures
to, existing biological networks. Upregulated edges in the RH network showed significantly more overlap with existing networks than downregulated edges did. This suggests that repressive relationships between genes are missed by existing
approaches, perhaps because the corresponding proteins are not present in the cell at the same time and therefore unlikely to interact. Gene essentiality was positively correlated with connectivity
and betweenness centrality in the RH network, strengthening the centrality-lethality principle in mammals. Consistent with their regulatory role, transcription factors had significantly more outgoing
edges (regulating) than incoming (regulated) in the RH network, a feature hidden by conventional undirected networks. Directed RH genetic networks thus showed concordance with pre-existing networks
while also yielding information inaccessible to current undirected approaches.
Author Summary
An important problem in systems biology is to map gene networks, which help identify gene functions and discover critical disease pathways. Current methods for constructing gene networks have
identified a number of biologically significant functional modules. However, these networks do not reveal directionality, that is, which gene regulates which, an important aspect of gene regulation.
Radiation hybrid panels are a venerable method for high resolution genetic mapping. Recently we have used radiation hybrids to map loci based on their effects on gene expression. Because these
regulatory loci are finely mapped, we can identify which gene turns on another gene, that is, directionality. In this paper, we constructed directed networks from radiation hybrid expression data. We
found the radiation hybrid networks concordant with available datasets but also demonstrate that they can reveal information inaccessible to existing approaches. Importantly, directionality can help
dissect cause and effect in genetic networks, aiding in understanding and ultimately rational intervention.
Citation: Ahn S, Wang RT, Park CC, Lin A, Leahy RM, et al. (2009) Directed Mammalian Gene Regulatory Networks Using Expression and Comparative Genomic Hybridization Microarray Data from Radiation
Hybrids. PLoS Comput Biol 5(6): e1000407. doi:10.1371/journal.pcbi.1000407
Editor: Hanah Margalit, The Hebrew University, Israel
Received: September 17, 2008; Accepted: May 6, 2009; Published: June 12, 2009
Copyright: © 2009 Ahn et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original author and source are credited.
Funding: The authors received no specific funding for this study.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Interrogating genome-scale datasets is a necessary step to a systems biology of the mammalian cell [1],[2]. Networks have been constructed using various approaches. In the transcriptome, coexpression
networks have been constructed by linking genes whose correlations exceed a selected p-value based on transcript profiling data across different samples [3]. In the proteome, genes can be linked if
their corresponding proteins bind each other based on yeast two-hybrid (Y2H) or co-affinity immunoprecipitation assays [4],[5]. Protein-protein interactions can also be ascertained from
literature-curated (LC) databases [6],[7]. The Human Protein Reference Database (HPRD) consists of ~8,800 proteins and ~25,000 interactions and was constructed using Y2H, co-affinity purification and
LC data [6]. Genes can also be linked by virtue of membership of a common pathway [8],[9], an example being the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway [10]–[12].
Networks constructed using these various approaches are correlated, with some exceptions. While a single dataset often has a large number of false positives and false negatives and reflects only one
facet of gene function, accessing multiple independent datasets increases the reliability of gene functional annotation. Integrating diverse gene networks has been shown predictive of
loss-of-function phenotypes in yeast [8],[13] and Caenorhabditis elegans [9].
Recently transcriptional networks have been constructed using expression data from genetically polymorphic individuals [14]–[16]. This approach allows the identification of quantitative trait loci
(QTLs) regulating expression, or eQTLs. Mapping of eQTLs relies on expression perturbations due to naturally occurring polymorphisms. These sequence variants may be lacking in critical pathways
because of selective pressure, rendering inaccessible important regions of the genetic network.
A disadvantage of most currently available networks is that it is difficult to infer functional relationships between interacting genes. Consequently, the edges between genes are undirected and have
no regulatory hierarchy. This is also true of eQTL networks where, because of limited mapping power, genotype information has been generally ignored and coexpression networks have been constructed
instead [17]. Causality between expression and clinical traits has been inferred from eQTL data using conditional correlation measures [18] and structural model analysis [19],[20]. However, this
approach has been restricted to a small subset of markers and traits and cannot be easily extended to constructing gene networks.
Radiation hybrid panels have been used to construct high resolution maps of mammalian genomes [21]–[23]. Fragmenting a mammalian genome using radiation yields many more breakpoints than meiotic
mapping and hence greatly enhanced resolution. The T31 mouse-hamster hybrid panel was constructed by lethally irradiating mouse cells harboring the thymidine kinase gene (Tk1^+) [22]. These cells
were then fused to Tk1^− hamster A23 cells. Selection for the Tk1^+ gene using HAT medium resulted in a panel of 100 hybrid cell lines, each of which contained a random sampling of the mouse genome.
Mouse autosomal genes retained in a hybrid clone have two hamster copies plus one mouse copy, compared to two copies otherwise.
We recently used the T31 RH panel for high-resolution mapping of QTLs for gene expression [24]. The QTLs regulate expression because of copy number changes and they are therefore called copy number
expression QTLs or ceQTLs. We re-genotyped the T31 panel at 232,626 markers using array comparative genomic hybridization (aCGH). The average retention frequency of mouse markers in the panel was
23.9% and the average length of the mouse fragments was 7.17 Mb. We also analyzed the panel using expression microarrays interrogating 20,145 genes.
Using regression, we found 29,769 trans ceQTLs regulating 9,538 genes at a false discovery rate (FDR) = 0.4 in the T31 panel. At the same FDR threshold, we also found 18,810 cis ceQTLs. Consistent
with the average fragment length, a ceQTL was identified as trans if >10 Mb from a regulated gene and cis otherwise. The interval for the ceQTLs was <150 kb, thus localizing them to an average of
only 2–3 genes.
In this paper we evaluate gene networks constructed from ceQTL mapping. In contrast to undirected networks from meiotically mapped eQTLs and protein binding approaches, the high resolution mapping
and dense genotyping of ceQTLs in the RH panel allowed the use of genotype information to construct directed networks. This directionality permits insights that cannot be obtained from undirected networks.
Results
A Directed Gene Network from Radiation Hybrids
We previously analyzed a mouse-hamster radiation hybrid panel, T31 [24]. The donor cells were male primary embryonic fibroblasts from the inbred mouse strain 129 and the recipient cells were from the
A23 male Chinese hamster lung fibroblast-derived cell line [22]. A total of 99 cell lines from the original panel were available. RH clones with retained autosomal mouse genes in the panel have two
hamster copies plus usually one extra mouse copy, compared to two hamster copies otherwise. The variation in gene dosage drives changes in mRNA expression.
Transcript abundance and marker dosage were measured by mouse expression arrays and comparative genomic hybridization arrays (aCGH), respectively. A total of 20,145 transcript levels were assayed by
the expression arrays and 232,626 markers by the aCGH. We mapped ceQTLs by regressing the expression array data on the aCGH data. Mouse and hamster genes were detected with comparable efficiency and
behaved equivalently in terms of regulation [24].
To construct the RH network, the copy number of each gene was estimated by linear interpolation between the two neighboring aCGH markers. This interpolation-based estimate is reasonable given the high density of the aCGH markers.
Measured transcripts were denoted by y_gi, where g and i are the gene and RH clone index, respectively. The estimated copy number for gene g in RH clone i was denoted by x_gi. For each ordered pair of genes g and h, a Pearson correlation coefficient r_gh between x_gi and y_hi was calculated from the 99 observations. In a linear model y_hi = b0 + b1*x_gi + e_i, where b0 and b1 are regression parameters, the correlation coefficient r_gh can be viewed as a standardized slope and measures the goodness of fit of the linear model. A significantly large positive value implies induction and a significantly large negative value implies repression.
Previously, we used an F-statistic, which is monotonic in the absolute value |r_gh| of the correlation coefficient, to test for significant association in the context of the linear model [24]. Here we preserved the sign and used the correlation coefficient r_gh itself as a test statistic. We found that r_gh yielded more significant overlaps with other biological datasets than |r_gh| (below). The number of directed edges and number of nodes with ≥1 edge for right-tailed, left-tailed and both-tailed thresholding are shown in Table S1 and Figure S1 (see Methods).
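The signed test statistic can be sketched in a few lines; all numbers and variable names below are illustrative simulations, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clones = 99  # RH cell lines in the T31 panel

# simulated dosage of one gene across clones and a transcript it induces
copy_number = rng.normal(2.2, 0.4, n_clones)
expression = 0.8 * copy_number + rng.normal(0.0, 0.5, n_clones)

# Pearson correlation r: a standardized slope of the regression of
# expression on copy number; its sign separates induction (r > 0)
# from repression (r < 0), unlike an F-statistic, which is monotonic in |r|
x = copy_number - copy_number.mean()
y = expression - expression.mean()
r = (x @ y) / np.sqrt((x @ x) * (y @ y))

# sanity check: r equals the regression slope rescaled by the two SDs
slope = (x @ y) / (x @ x)
assert np.isclose(r, slope * x.std() / y.std())
```

Thresholding this signed statistic, rather than its absolute value, is what lets induction and repression be distinguished downstream.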
We constructed an adjacency matrix A by assigning r_gh to its (g, h) entry, which gives information on whether gene g regulates gene h, either directly or indirectly. Since A has real-number entries and is not symmetric, the network represented by A is weighted and directed. We used the correlation coefficients for thresholding and calculated the statistical significance of similarities to existing biological datasets. This is in contrast to transforming the correlation coefficients into FDR (false discovery rate) corrected p-values and then performing statistical thresholding [24]. Our strategy in this study is similar, in spirit, to the integration approach taken in [8],[9], where the reliability of each dataset is measured by comparison with a benchmark dataset.
Since nearly all genes show a copy number increase in a portion of the RH panel, the bulk of genes (94%) also showed a cis ceQTL [24]. To remove these cis ceQTLs as an artifactual source of edges in
the RH network, we omitted all markers within 10 Mb of the gene being considered. Thus, only trans ceQTLs were employed in the analysis.
Overlap with Existing Datasets
We examined the similarity of our network to existing datasets including protein-protein interactions from HPRD (Human Protein Reference Database) [6], the KEGG (Kyoto Encyclopedia of Genes and
Genomes) pathway database [10]–[12], Gene Ontology (GO) annotations [25] and a coexpression network obtained from the SymAtlas microarray database of normal mouse tissues [26] (see Methods). We used
two different approaches to compare the directed RH and undirected networks. In the first approach, we discarded the edge directions of the RH network and calculated an overlap of undirected edges
between the RH and existing networks. It is not uncommon to disregard directions in a network for modeling and analysis purposes [27]–[33] and projecting a directed network onto a space of undirected
networks by forgoing information on edge directions seems reasonable. In the second approach, we assumed a hidden directed random network for each undirected existing network and estimated the
resulting overlap of directed edges.
Undirecting the RH network.
To compare the directed RH network and the other undirected networks, we ignored the edge directions in the RH network and calculated the resulting overlap. To test overlap significance, we used a
one-sided Fisher's exact test based on a two by two contingency table, replaced with a one-sided chi-square test when the expected values in all table cells exceeded 50 [34] (see Methods). The
one-sided Fisher's exact test is equivalent to the hypergeometric test, widely used in Gene Ontology enrichment analysis [35]–[38] and also for evaluating overlap significance between different
protein-protein interaction datasets [39]. It is noteworthy that the one-sided chi-square test is closely related to the Bayesian log-likelihood score (LLS) approach to integrating diverse datasets
into a single network [8],[9]. That is, the chi-square statistic has a monotonic relationship with the LLS score for evaluating dataset quality (see Text S1).
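A minimal sketch of this overlap test, with an invented 2x2 contingency table (the counts are hypothetical, chosen only to mimic the scale of the real comparisons):

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# hypothetical counts over all gene pairs:
# rows: pair linked / not linked in the RH network
# cols: pair linked / not linked in an existing network (e.g. HPRD)
table = np.array([[180, 9_820],
                  [20_777, 18_050_000]])

chi2, p_two_sided, _, expected = chi2_contingency(table, correction=False)

if expected.min() > 50:
    # one-sided chi-square: halve the two-sided p-value when the
    # observed overlap exceeds the expected overlap
    p = p_two_sided / 2
else:
    # one-sided Fisher's exact test (equivalent to a hypergeometric test)
    _, p = fisher_exact(table, alternative="greater")
```

A small p here means the two networks share far more edges than expected by chance, which is the sense in which the overlaps in Figure 1 are "significant".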
Figure 1 shows p-values representing overlap significance of the RH network with various datasets for a range of correlation coefficient thresholds (Dataset S1). False discovery rates (FDRs) were
calculated following the Benjamini-Hochberg procedure [40]. For correlation coefficient thresholds between about 0 and 0.2, the RH network showed significant overlaps with all datasets (FDR = 0.01)
except the GO cellular component annotation network. Although only the biological process annotations from GO were previously used as benchmarks in integrating heterogeneous datasets [8],[9],[13],
[41], we also found significant overlap with the GO molecular function annotation.
Figure 1. Overlap significance between right-tailed thresholded RH networks and existing datasets.
(A) HPRD protein-protein interaction network. (B) KEGG pathway network. (C) SymAtlas coexpression network. (D) GO annotations. (E) GO molecular function annotation. (F) GO cellular component annotation. (G) GO biological process annotation. (H) Significance values averaged over the results from A to G. One-sided Fisher's exact and chi-square tests were used to assess overlap significance.
The existing networks we used for comparison vary in size from 20,957 edges (HPRD network) to 18,754,380 (SymAtlas coexpression network) (see Methods). Nevertheless, the significance of overlaps
quantified by p-values was comparable for the different networks (cf. [8],[9]). Figure 1H combines the comparisons of the RH and existing networks by averaging the significance values. The numbers of undirected edges shared with each dataset are shown in Figure S2. The non-monotonic relationship between significance (Figure 1) and overlap (Figure S2) implies that the large significance values are likely real and not due to random effects of large numbers of observations. Similarly, the decline in significance with increasing correlation coefficient thresholds reflects the unavoidable loss of statistical power as edge number decreases.
The results suggest that our network possesses biological information relevant to other functional annotations.
The maximum overlap significance occurred at low correlation coefficient thresholds between 0 and 0.2 (Figure 1). To test whether this is simply because large thresholds (>0.2) yield too few edges
and small thresholds (<0) give too many edges for significant overlap, we randomly permuted the elements of the adjacency matrix for the RH network and repeated the one-sided Fisher's exact and
chi-square tests. The permuted network had the same size (number of edges) as the non-permuted RH network. As shown in Figures 2A (overlap with HPRD network) and 2B (overlap significance averaged
over existing networks), the permuted networks did not show any significant overlap with the existing datasets after FDR correction. These computational controls imply that the low correlation
coefficient thresholds for maximum overlap significance are not simply a statistical artifact.
Figure 2. Comparison of RH networks and existing datasets.
(A) Overlap between 10 randomly permuted RH networks and HPRD network. The RH networks were constructed from right-tailed thresholding and one-sided Fisher's exact and chi-square tests used to assess
significance. (B) Averaged values for overlap between randomly permuted RH networks and different existing datasets (HPRD, KEGG, SymAtlas coexpression, GO, GO-molecular function, GO-cellular
component and GO-biological process annotation networks). (C) Overlap between RH networks constructed from a subset of randomly selected RH clones and HPRD network. Mean of overlap significance
(solid line) over 50 random subsets shown with standard errors calculated by bootstrapping (dash-dot line). (D) Same as (C) except significance values averaged over the different existing datasets. (E) Comparison of different thresholding approaches; the maximum significance over varying correlation coefficient thresholds is shown. (F) Comparison of betweenness centralities of the RH and HPRD networks. P-values of Spearman correlation
coefficients (one-sided, positive direction) between the betweenness centralities of RH and HPRD networks shown.
Next we investigated how the number of RH clones affects the overlap. The sensitivity and resolution of the RH network should improve as the number of RH clones increases. To test this, we randomly
selected a subset of the 99 RH clones (40, 60, 80 and 99 clones) and calculated the significance of overlap with the HPRD network using the one-sided Fisher's exact and chi-square tests (Figure 2C).
Similarly, Figure 2D shows the significance values averaged over the existing datasets. The maximum overlap significance over correlation coefficient thresholds, that is, sensitivity, increased with the number of
RH clones (Figures 2C and 2D). However, the correlation coefficient thresholds of maximum overlap significance remained nearly constant between 0 and 0.2 across different numbers of clones (Figures
2C and 2D). This observation implies that the relatively low correlation coefficients of maximum overlap significance may be due to RH network properties orthogonal to existing networks rather than
random noise in the array measurements or insufficient RH clones (see Discussion).
Hidden directed random network model.
We assume that for each undirected network there is a hidden directed random network, modeled as in [42] (see Methods). Since the hidden directed network is not directly observable, we estimated the
overlap of directed edges between the directed RH and the unobserved directed networks by a conditional expectation given the undirected existing dataset. P-values representing overlap significance
were calculated based on the random network model.
The results of the comparison of the directed RH network and the hidden directed random network are shown in Figure 3. The findings were remarkably similar to those where the directionality of the RH network was discarded (Figure 1), except for scaling factors. The similarity arises because the random network model of a hidden directed network, where both directions for an edge are equally probable,
does not contain more information than its undirected counterpart. We did not use any topological information on directionality obtained from RH networks since the purpose of the overlap analysis was
to explore and validate the RH networks by comparison with independent datasets. In addition, orienting the edges of undirected networks, such as protein-protein interaction networks, is a difficult
task since there is no genotype information in these datasets.
Figure 3. Overlap significance between right-tailed thresholded RH networks and existing datasets, calculated using hidden random directed network models.
Same as Figure 1 except that a hidden random directed network was used to model existing undirected networks.
Upregulation Gives More Significant Overlap with Existing Datasets
We examined whether upregulation in the RH data, represented by positive correlation coefficients r_gh > 0, showed a different significance of overlap with existing datasets than downregulation, represented by r_gh < 0. We defined an unweighted adjacency matrix by left-tailed thresholding of the RH data, where A_gh = 1 if r_gh ≤ −τ for a given correlation coefficient threshold τ, and A_gh = 0 otherwise. This network emphasized downregulation in the RH data. We also defined A_gh by both-tailed thresholding, where A_gh = 1 if |r_gh| ≥ τ, and A_gh = 0 otherwise. This network gave equal weight to up- and downregulation in the RH data and is equivalent to previous datasets produced from F-tests [24]. The unweighted adjacency matrix for right-tailed thresholding is defined as A_gh = 1 if r_gh ≥ τ, and A_gh = 0 otherwise, emphasizing upregulation in the RH data.
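The three thresholding schemes reduce to elementwise comparisons on the signed correlation matrix. The conventions below (r ≥ τ, r ≤ −τ, |r| ≥ τ) are one plausible reading of left-, right- and both-tailed thresholding, and the matrix is random toy data:

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.uniform(-1.0, 1.0, size=(6, 6))  # toy signed correlations r_gh
np.fill_diagonal(r, 0.0)                 # ignore self-loops
tau = 0.2                                # correlation threshold (tau > 0)

right = (r >= tau).astype(int)           # emphasizes upregulation
left = (r <= -tau).astype(int)           # emphasizes downregulation
both = (np.abs(r) >= tau).astype(int)    # weights both tails equally

# for tau > 0 the both-tailed network is the union of the other two
assert np.array_equal(both, right | left)
```

Because the matrix is not symmetric, each of these thresholded networks remains directed: entry (g, h) records whether g is inferred to regulate h.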
Unweighted RH networks obtained from left-tailed thresholding, which emphasized downregulation, did not show any significant overlap with the existing datasets after FDR correction (Figure S3, Dataset S1), except the GO cellular component annotation; even this significance was modest. Unweighted networks obtained by both-tailed thresholding, which weighted up- and downregulation equally, also did not show any significant overlap with the existing datasets after FDR correction, except the GO biological process annotation (Figure S4, Dataset S1).
Figure 2E compares the maximum significance over correlation coefficient thresholds for the different thresholding approaches. Overall, the results suggest upregulation in the RH network yields more
significant overlap with existing datasets than downregulation. This may reflect the fact that if a gene represses another gene in trans the two protein products are unlikely to co-exist in the cell
and hence unlikely to interact. A corollary is that protein binding methods such as yeast two-hybrid and co-affinity immunoprecipitation may miss negative regulatory interactions. Our finding is
reminiscent of the observation that interacting protein pairs have significantly higher transcript abundance correlations than chance [43],[44].
Topological Properties
The overlap analysis based on edge-comparison may fail to capture some indirect interactions or other topologies. We therefore compared the topological properties of the RH and HPRD networks.
The degrees (number of edges for each node, or connectivity) of the weighted (unthresholded) RH and HPRD networks were significantly correlated (Spearman's correlation coefficient = 0.055). However, the similarity to the HPRD network disappeared when we used the absolute values |r_gh| of the RH correlation coefficients in the adjacency matrix (Spearman's correlation coefficient = −0.0081). These observations imply that the degree distribution for upregulated, but not downregulated, edges in the RH network is significantly correlated with that of the HPRD network. This is consistent with the notion that repressive relationships are not well represented in HPRD.
Next, we compared the betweenness centralities of the RH and HPRD networks. The betweenness centrality measures the total number of nonredundant shortest paths going through each node, representing
the severity of bottlenecks in the network [45],[46]. The betweenness centralities of the RH and HPRD networks were significantly correlated (FDR = 0.05) when the right-tailed correlation coefficient
thresholds for RH network were between −0.1 and 0.1 (Figure 2F).
We calculated the diameters (average minimum distance between pairs of nodes) of the RH and HPRD networks. The diameter of the giant connected component of the HPRD network, consisting of 5,433 nodes with 20,859 undirected edges (excepting self-loops), was 4.13. For the RH network, we considered the 5,433 genes that were in the HPRD network and used a right-tailed threshold of 0.37544, yielding 20,859 undirected edges, to make its size (node and edge numbers) comparable to that of the HPRD network. The diameter of the RH network was 4.11, close to the HPRD value of 4.13.
We also compared the clustering coefficients of the RH and HPRD networks, a measure of local cliqueness [47], but found no significant positive correlation. In summary, the RH network showed
similarities with the HPRD network in terms of connectivity, betweenness centrality and diameter, but not cliqueness.
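The topological measures used in this comparison are standard; the sketch below applies them with networkx to a random directed graph standing in for a thresholded RH network (graph parameters are arbitrary):

```python
import networkx as nx
from scipy.stats import spearmanr

# random directed graph standing in for a thresholded RH network
G = nx.gnp_random_graph(60, 0.08, seed=2, directed=True)

degree = dict(G.degree())           # connectivity
btw = nx.betweenness_centrality(G)  # severity of bottlenecks

# rank correlation between two per-node measures, as one would do when
# comparing centralities across networks over their shared nodes
rho, p = spearmanr([degree[n] for n in G], [btw[n] for n in G])

# "diameter" in the paper's sense: average shortest-path length,
# computed here on the largest strongly connected component
core = G.subgraph(max(nx.strongly_connected_components(G), key=len))
assert len(core) > 1  # a giant component exists at these parameters
avg_dist = nx.average_shortest_path_length(core)
```

Restricting the path-length calculation to a connected component mirrors the paper's use of the giant component of HPRD.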
Previous studies in other networks showed that essentiality is positively correlated with connectivity and betweenness centrality [9], [46], [48]–[56]. However, some authors have questioned the
association between essentiality and connectivity, attributing it to dataset bias [6],[57]. We tested whether essentiality is associated with connectivity and betweenness centrality in the RH network.
Essential genes had significantly more edges than non-essential genes for a range of right-tailed correlation coefficient thresholds from −0.12 to 0.16 (FDR = 0.01) using a one-sided Wilcoxon
rank-sum test [34] (Figure 4A). This range is similar to that for significant overlaps with existing datasets. Also, the fraction of essential genes was positively correlated with the degree of the
weighted RH network (Pearson's correlation coefficient = 0.70) (Figure 4B).
Figure 4. Essentiality, connectivity and centrality in RH networks.
(A) P-values for one-sided Wilcoxon rank-sum test assessing whether essential genes have significantly more edges than non-essential. (B) Fraction of essential genes and degree of weighted RH
network. (C) P-values for one-sided Wilcoxon rank-sum test assessing whether essential genes have significantly larger betweenness centralities than non-essential. (D) Fraction of essential genes and
betweenness centrality of RH network constructed with correlation coefficient threshold of 0.1 by right-tailed thresholding.
Similarly, essential genes had significantly larger betweenness centralities for a range of right-tailed correlation coefficient thresholds from −0.14 to 0.16 (FDR = 0.01) using a one-sided Wilcoxon
rank-sum test (Figure 4C). Figure 4D shows that the fraction of essential genes was positively correlated with betweenness centrality for the RH network constructed from a typically optimal
right-tailed correlation coefficient threshold for overlap of 0.1 (Pearson's correlation coefficient = 0.72).
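The rank-sum comparison can be sketched as follows; the degree samples are simulated, with essential genes drawn from a distribution with a higher mean, so none of the numbers are real data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
# simulated edge counts: essential genes made better connected on purpose
essential_deg = rng.poisson(8, 200)
nonessential_deg = rng.poisson(5, 800)

# one-sided Wilcoxon rank-sum (Mann-Whitney U) test:
# do essential genes have significantly more edges?
stat, p = mannwhitneyu(essential_deg, nonessential_deg,
                       alternative="greater")
```

The same one-sided test applies unchanged when the per-gene measure is betweenness centrality rather than degree.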
Transcription Factors Have More Outgoing Than Incoming Edges
It is natural to suppose that transcription factors would have more outgoing than incoming edges since transcription factors regulate other genes. This proposition cannot be tested in conventional
undirected networks, but can be tested in the directed RH network. Using a one-sided paired signed rank test [34], we found that transcription factors had significantly more outgoing than incoming edges (FDR = 0.01) for a range of correlation coefficient thresholds from 0.23 to 0.46 (Figure 5A). We also used one-sided Fisher's exact and chi-square tests to evaluate the association between transcription factors and genes having ≥1 outgoing edge in the RH network. The association was modest but significant (FDR = 0.05) (Figure 5B). In contrast, the association
between transcription factors and genes having ≥1 incoming edge was not significant (FDR = 0.05) (Figure 5B). Together, these results imply that transcription factors are more likely to regulate
other genes than be the target of regulation and suggest transcription factors have a privileged role in genetic networks.
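The paired comparison of each transcription factor's outgoing versus incoming edges can be sketched with a signed rank test on simulated counts (invented for illustration):

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(4)
# simulated per-transcription-factor edge counts
out_edges = rng.poisson(6, 150)  # edges a TF sends (regulating)
in_edges = rng.poisson(4, 150)   # edges a TF receives (regulated)

# one-sided paired signed rank test:
# do TFs have significantly more outgoing than incoming edges?
stat, p = wilcoxon(out_edges, in_edges, alternative="greater")
```

The pairing matters: each transcription factor serves as its own control, so differences in overall connectivity between genes do not confound the out-versus-in comparison.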
Figure 5. Transcription factors and edge directionality.
(A) P-values for one-sided paired signed rank test assessing whether transcription factors have significantly more outgoing than incoming edges. (B) Overlap between transcription factors and genes
having ≥1 outgoing or incoming edge. P-values from one-sided Fisher's exact and chi-square tests.
Discussion
We used high resolution mapping of ceQTLs in an RH panel to create a directed genetic network. There was significant overlap with existing networks such as HPRD, KEGG, GO annotation and a SymAtlas
coexpression network. The RH network also showed similar topological properties to the HPRD network in connectivity, betweenness centrality and diameter.
The RH network showed maximum significance of overlap with existing networks at relatively low positive correlation coefficient thresholds between 0 and 0.2. The low thresholds were not simply by
chance, since randomly permuted RH networks did not show any significant overlap with existing networks. Also, the low values did not seem to be caused by noise in the array measurements or by lack
of sufficient numbers of RH clones, since the correlation coefficient thresholds giving maximum overlap significance remained nearly constant for varying clone number, although the sensitivity of
overlap increased with the number of clones. This may reflect the orthogonal nature of the RH network compared to existing networks, suggesting the RH approach will yield complementary information on
mammalian genetic networks. Novel and replicated edges in the RH network may thus be balanced in the low correlation coefficient threshold range.
The overlap between the RH network and existing interaction networks was greater for edges possessing upregulation than downregulation. This observation may be because the corresponding proteins are
unlikely to interact if one gene represses another, since the proteins will not be present in the cell at the same time. It also implies that protein-protein interaction networks may fail to uncover
valid edges between genes if they have a repressive relationship.
Previous studies found significant associations of essentiality with connectivity and/or betweenness centrality in protein-protein interaction networks [39], [46], [48]–[52], coexpression networks
[53],[56], Bayesian integrated gene networks [9] and transcriptional regulatory networks [46],[50],[54]. Most investigations focused on yeast, worm and fly and there have been only a few studies of
mammalian gene networks [6],[9]. Some authors have questioned the association of essentiality and connectivity [6],[57]. Coulomb et al. found that essentiality was poorly related to connectivity when
biases in protein interaction databases were taken into account [57]. Yu et al. also found related problems due to bias in a yeast two hybrid dataset [39]. In contrast, the RH network is free of
biases that may exist in protein interaction datasets. The significant positive correlation between essentiality, connectivity and betweenness centrality in the RH network adds to the evidence of the
centrality-lethality rule in the mammalian setting.
We also showed that transcription factors were likely to have more outgoing rather than incoming edges. While this finding is not unexpected and helps validate the RH network, a recent study using
naturally occurring polymorphisms in yeast suggested that transcription factors are no more likely to reside close to eQTLs than chance [58]. The discrepancy between the RH and yeast studies may be
because an increase in copy number in the RH cells is a more reliable way to perturb gene networks than naturally occurring alleles. In contrast, polymorphisms may be under selective pressure to
minimize disruptions in potentially critical nodes in gene networks, such as transcription factors.
We thresholded the adjacency matrix at different correlation coefficients to compare unweighted RH networks with existing unweighted datasets. However, we chose to leave the RH network weighted
rather than finalizing an unweighted form at an optimal threshold. Such an operation is irreversible and would lose information on linkage strength and sign. In other studies, the sensitivity of a
coexpression network was limited by thresholding [56] and weighted coexpression networks were more robust than unweighted networks [53]. Indeed, weighted networks are widely used in various
applications. In probabilistic integrated gene networks, linkages between genes are represented by weighted sums of log likelihood score (LLS) values [8],[9]. Weighting was also used for a Bayesian
gene network [13] and a scientific collaboration network [59]. In addition, weighted coexpression networks have been extensively studied [53],[60] and it is straightforward to incorporate a weighted
network into a probabilistic integrated network by a Bayesian LLS approach [8],[9].
We constructed a directed gene network from radiation hybrids and found it concordant with existing networks. We also showed that RH networks have the potential to provide new insights reflecting
orthogonal aspects of gene regulation. The RH networks will be refined as more panels, including those available for other species, are analyzed, resulting in improved power and sensitivity.
Radiation Hybrid Data
Details on the analysis of the T31 RH panel cells and the preprocessing of aCGH and expression array data can be found in [24]. The microarray and aCGH data have been deposited in NCBI Gene
Expression Omnibus (GEO) database under accession number GSE9052.
Network Construction
The directed RH network was constructed as described in Results. The copy number for each gene was estimated from the aCGH data by linear interpolation as follows. Let $y_{mc}$ denote the array measurement for aCGH marker $m$ in RH clone $c$. For gene $g$, suppose marker $l$ is nearest to the gene from the left on the same chromosome and marker $r$ is nearest from the right. The copy number for gene $g$ in clone $c$ was estimated by
$$y_{gc} = y_{lc} + \frac{x_g - x_l}{x_r - x_l}\,(y_{rc} - y_{lc}),$$
where $x_g$, $x_l$ and $x_r$ denote the genome coordinates in bp for gene $g$ and markers $l$ and $r$, respectively. If gene $g$ did not have any marker to the left or right on the chromosome, the array measurement for
the nearest marker was taken instead.
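The interpolation scheme above, including the fallback to the nearest marker at chromosome ends, reduces to standard piecewise-linear interpolation. A minimal Python sketch (function and argument names are illustrative, not taken from the paper's code):

```python
import numpy as np

def interpolate_copy_number(gene_pos, marker_pos, marker_vals):
    """Estimate a gene's copy number in one RH clone by linear
    interpolation between the nearest flanking aCGH markers.

    gene_pos    -- genome coordinate (bp) of the gene
    marker_pos  -- sorted genome coordinates (bp) of markers on the chromosome
    marker_vals -- aCGH measurements for those markers in this clone
    """
    marker_pos = np.asarray(marker_pos, dtype=float)
    marker_vals = np.asarray(marker_vals, dtype=float)
    # np.interp clamps to the nearest marker when the gene lies outside
    # the marker range, matching the nearest-marker fallback rule.
    return float(np.interp(gene_pos, marker_pos, marker_vals))

# A gene halfway between two markers gets the average of their measurements.
print(interpolate_copy_number(150, [100, 200], [1.0, 2.0]))  # 1.5
```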
A protein-protein interaction network was constructed from HPRD (Human Protein Reference Database) [6] by generating an adjacency matrix $H$, where $H_{ij} = 1$ if the proteins corresponding to annotated mouse genes $i$ and $j$ interact with each other and $H_{ij} = 0$ otherwise. Note that $H$ is symmetric and the HPRD network is undirected. The HPRD network had 6,015 nodes and 20,957 undirected edges, excepting self-loops.
A network was constructed from the KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway database [10]–[12] by generating an adjacency matrix $K$ such that $K_{ij} = 1$ if genes $i$ and $j$ participated in the same pathway and $K_{ij} = 0$ otherwise. The KEGG pathway network had 1,629 nodes and 139,664 undirected edges, excepting self-loops.
A network was constructed from the GO (Gene Ontology) database [25] by generating an adjacency matrix $G$ where $G_{ij} = 1$ if genes $i$ and $j$ belong to a common GO term and $G_{ij} = 0$ otherwise. Only GO terms with ≤200 genes were
considered. Similarly, adjacency matrices were constructed considering only the GO molecular function terms, GO biological process terms and GO cellular component terms, respectively. The undirected GO,
GO-molecular function, GO-biological process and GO-cellular component networks had 10,442 nodes with 786,928 edges, 7,745 nodes with 359,006 edges, 7,653 nodes with 404,641 edges and 3,509 nodes
with 140,904 edges, respectively, excepting self-loops. All edges were undirected.
We constructed an mRNA coexpression network from the publicly available SymAtlas microarray database [26]. This database contains transcript profiling data from 61 normal mouse tissues. The Pearson's
correlation coefficients of mRNA expression across the mouse tissues were calculated, and an adjacency matrix was generated by right-tailed thresholding of the correlation coefficients. The
SymAtlas coexpression network had 15,190 nodes and 18,754,380 undirected edges.
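The construction can be sketched in a few lines of Python; the tiny expression matrix and the threshold value here are purely illustrative (the paper's actual threshold is not reproduced in this text):

```python
import numpy as np

def coexpression_adjacency(expr, threshold):
    """Unweighted coexpression network from right-tailed thresholding
    of Pearson correlations across tissues (rows = genes)."""
    r = np.corrcoef(expr)              # gene-by-gene correlation matrix
    adj = (r >= threshold).astype(int)
    np.fill_diagonal(adj, 0)           # exclude self-loops
    return adj

expr = np.array([[1.0, 2.0, 3.0, 4.0],
                 [2.0, 4.1, 5.9, 8.0],   # tracks gene 0 closely
                 [4.0, 3.0, 2.0, 1.0]])  # anti-correlated with gene 0
adj = coexpression_adjacency(expr, 0.9)
print(adj[0, 1], adj[0, 2])  # 1 0
```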
Overlap Significance Using Undirected RH Network
The significance of overlap between the RH network obtained from thresholding and, for example, the HPRD network was tested as follows.
First, for a given threshold $t$, the adjacency matrix $A$ of an unweighted RH network was constructed, where $A_{ij} = 1$ if $\rho_{ij} \ge t$ for right-tailed thresholding, if $\rho_{ij} \le -t$ for left-tailed thresholding, or if $|\rho_{ij}| \ge t$ for both-tailed thresholding (see
Results). Second, for a comparison with the unweighted HPRD network, the adjacency matrix was forced to be symmetric by constructing a symmetric matrix $S$ for an undirected RH network such that $S_{ij} = 1$ if $A_{ij} = 1$ or $A_{ji} = 1$,
and $S_{ij} = 0$ otherwise. Third, a two-by-two contingency table was built showing the relationship between $S_{ij}$ (1 or 0) and $H_{ij}$ (1 or 0), where only pairs of genes common to both networks were taken. In addition, for
all networks, only gene pairs separated by at least 10 Mb on a chromosome or lying on different chromosomes were selected. This requirement was imposed to remove possible biases due to copy number effects
of a gene's own dosage in the RH network and to ensure gene pairs were in trans. Fourth, an overlap was defined as the number of gene pairs such that both $S_{ij} = 1$ and $H_{ij} = 1$. A one-sided Fisher's exact test
was then performed to evaluate whether the overlap was significant and to calculate a p-value. If the expected values in all table cells exceeded 50, a one-sided chi-square test was used instead to reduce
computational cost.
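The contingency-table test described above can be sketched as follows; the function name and the toy four-node networks are hypothetical:

```python
import numpy as np
from scipy.stats import fisher_exact

def overlap_significance(net_a, net_b):
    """One-sided Fisher's exact test on the 2x2 table of shared vs.
    unshared edges between two unweighted networks on the same genes.
    Only the upper triangle is used, so each unordered pair counts once."""
    iu = np.triu_indices(net_a.shape[0], k=1)
    a, b = net_a[iu], net_b[iu]
    table = np.array([[np.sum((a == 1) & (b == 1)), np.sum((a == 1) & (b == 0))],
                      [np.sum((a == 0) & (b == 1)), np.sum((a == 0) & (b == 0))]])
    _, p = fisher_exact(table, alternative='greater')
    return table, p

# Two identical 4-node networks with edges (0,1) and (2,3).
net = np.zeros((4, 4), dtype=int)
net[0, 1] = net[1, 0] = net[2, 3] = net[3, 2] = 1
table, p = overlap_significance(net, net.copy())
print(table[0, 0], round(p, 4))  # 2 shared edges; p = 1/15
```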
We similarly calculated the significance of overlaps with the KEGG pathway network, the SymAtlas coexpression network and the GO annotations.
Randomized RH network.
We randomly permuted the elements of the weighted and directed adjacency matrix that correspond to gene pairs in trans and performed the overlap significance test (above).
RH network from a subset of clones.
We randomly selected 40, 60 or 80 RH clones out of 99 and constructed an adjacency matrix (see Results) using measured transcripts and copy numbers for the selected clones. Then we calculated the
significance of overlap with existing databases (above). We repeated this 50 times for a fixed number of clones.
Overlap Significance Using Hidden Directed Random Network Model
For each existing undirected dataset, for example, the HPRD network, we assume there is a hidden directed random network with adjacency matrix $D$, whose elements $D_{ij}$ are independent Bernoulli random
variables with success probability $p$. We suppose only the undirected version $H$ is observed, where $H_{ij} = 1$ if $D_{ij} = 1$ or $D_{ji} = 1$ (recall only off-diagonal elements are considered, that is, $i \neq j$). Then the $H_{ij}$ for $i < j$ are independent Bernoulli random
variables with success probability $q = 1 - (1 - p)^2$. Therefore, using an empirical success probability $\hat{q}$, the ratio of 1's to the total in $H$, the success probability of the hidden directed random network can be
estimated as $\hat{p} = 1 - \sqrt{1 - \hat{q}}$.
The overlap between the unweighted (thresholded) directed RH network, represented by $A$, and the hidden directed HPRD network is given by $\sum_{i \neq j} A_{ij} D_{ij}$. However, the overlap is not directly observable and instead
we calculate the conditional expectation given $H$. Since $E(D_{ij} \mid H_{ij} = 1) = p/q$, it can be seen that
$$E\left(\sum_{i \neq j} A_{ij} D_{ij} \,\middle|\, H\right) = \frac{p}{q} \sum_{i \neq j} A_{ij} H_{ij}.$$
Ignoring the constant scaling factor without loss of generality, we define an overlap as $O = \sum_{i \neq j} A_{ij} H_{ij}$ (recall that $H$ is symmetric whereas $A$ is not). To test whether an observed overlap is greater than chance, we
calculate a p-value as the probability of the overlap being greater than or equal to the observed value $o$, assuming the HPRD network is a random network as described above,
$$P(O \ge o) = P(X_1 + 2X_2 \ge o),$$
where the $H_{ij}$ are independent Bernoulli random variables with success probability $q$, and $X_1$ and $X_2$ are independent binomial random variables, $X_k \sim \mathrm{Binomial}(n_k, q)$, with $n_k$ being the number of unordered pairs $\{i, j\}$ such that $A_{ij} + A_{ji} = k$ for $k = 1$ or 2. To
reduce the computation cost, $X_k$ was approximated using the normal distribution when the expected counts were sufficiently large.
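Assuming the model above, in which an undirected edge is observed whenever either of the two hidden directed edges is present, the estimator simply inverts $q = 1 - (1 - p)^2$. A small sketch (names are illustrative):

```python
import numpy as np

def estimate_hidden_p(q_hat):
    """Invert q = 1 - (1 - p)^2: q is the chance of observing an
    undirected edge when each of the two hidden directed edges is
    present independently with probability p."""
    return 1.0 - np.sqrt(1.0 - q_hat)

def overlap_statistic(directed, undirected):
    """O = sum over i != j of A_ij * H_ij (diagonals assumed zero)."""
    return int(np.sum(directed * undirected))

# Round-trip check of the inversion for p = 0.2: q = 2p - p^2 = 0.36.
p = 0.2
q = 1.0 - (1.0 - p) ** 2
print(estimate_hidden_p(q))  # 0.2 (up to floating point)
```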
Topological Measures
The node degree for the undirected, weighted RH adjacency matrix $W$ was calculated by $k_i = \sum_j W_{ij}$. Similarly, the degree of the HPRD network was calculated by $k_i^{HPRD} = \sum_j H_{ij}$. Then we calculated the Spearman's correlation
coefficient between $k_i$ and $k_i^{HPRD}$.
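A minimal sketch of the degree computation and the rank correlation (illustrative names; the paper's actual weighted matrix definition is not reproduced here):

```python
import numpy as np
from scipy.stats import spearmanr

def weighted_degree(w):
    """Degree of each node in a weighted undirected network: the row
    sums of the adjacency matrix (diagonal assumed zero)."""
    return w.sum(axis=1)

w = np.array([[0., 2., 1.],
              [2., 0., 0.],
              [1., 0., 0.]])
k = weighted_degree(w)
print(k)  # [3. 2. 1.]

# Spearman's correlation depends only on ranks, so any monotone
# transform of the degrees correlates perfectly.
rho, _ = spearmanr(k, k ** 2)
print(rho)  # 1.0
```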
The betweenness centralities and clustering coefficients of the RH adjacency matrix and the HPRD adjacency matrix were calculated using MatlabBGL (http://www.stanford.edu/~dgleich). When we
calculated the betweenness centrality of the RH network, we used a subgraph restricted to the nodes that were also in the HPRD network, to reduce computational cost. Then the Spearman's correlation coefficients
between the betweenness centralities, and also between the clustering coefficients, of the RH and HPRD networks were calculated.
Essentiality and Connectivity and Betweenness Centrality
We obtained a list of 1,409 essential genes and 1,979 nonessential genes from the Mouse Genome Database [6],[61]. Those 3,388 genes were sorted by degree and binned into successive bins of 200 genes,
and the correlation between the mean degree and the fraction of essential genes was calculated [9]. The betweenness centrality for the RH network was calculated on a subgraph consisting of the
3,388 genes of interest, to reduce computational cost. Similarly, the 3,388 genes were sorted by betweenness centrality, and the significance of the correlation between the mean betweenness centrality
and the fraction of essential genes was tested.
Transcription Factors and Edge Directionality
We obtained a list of 1,053 transcription factors by finding genes whose GO description includes the word “transcription.” The number of outgoing edges for gene $i$ was calculated as $k_i^{out} = \sum_j A_{ij}$ and the number of
incoming edges as $k_i^{in} = \sum_j A_{ji}$. We used a one-sided paired signed rank test [34] to assess whether transcription factors have $k^{out}$ larger than $k^{in}$.
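A hedged sketch of this comparison, using scipy's Wilcoxon signed-rank test as a stand-in for the paper's one-sided paired signed rank test; the simulated adjacency matrix and the "transcription factor" set are purely illustrative:

```python
import numpy as np
from scipy.stats import wilcoxon

def out_in_degrees(adj, nodes):
    """Outgoing (row-sum) and incoming (column-sum) edge counts of a
    directed adjacency matrix, restricted to the given nodes."""
    return adj[nodes, :].sum(axis=1), adj[:, nodes].sum(axis=0)

rng = np.random.default_rng(1)
adj = (rng.random((200, 200)) < 0.05).astype(int)
tfs = np.arange(30)
# Give the first 30 simulated "transcription factors" extra outgoing edges.
adj[tfs, :] |= (rng.random((30, 200)) < 0.05).astype(int)
np.fill_diagonal(adj, 0)

k_out, k_in = out_in_degrees(adj, tfs)
stat, p = wilcoxon(k_out, k_in, alternative='greater')
print(p < 0.05)  # True: outgoing degrees dominate for the simulated TFs
```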
The network data are available at http://labs.pharmacology.ucla.edu/smithlab/RHnetwork.html
Supporting Information
Relationship between one-sided chi-square test and Bayesian log-likelihood score (LLS) method
(0.08 MB PDF)
Size of RH network constructed from right-tailed, left-tailed and both-tailed thresholding approaches.
(0.06 MB PDF)
Size of RH network. (A) Number of nodes with nonzero degree for RH network constructed from right-tailed thresholding. (B) Number of directed edges for RH network constructed from right-tailed
thresholding. (C) Number of nodes with nonzero degree for RH network constructed from left-tailed thresholding. (D) Number of directed edges for RH network constructed from left-tailed thresholding.
(E) Number of nodes with nonzero degree for RH network constructed from both-tailed thresholding. (F) Number of directed edges for RH network constructed from both-tailed thresholding.
(0.21 MB TIF)
Overlap between RH network constructed from right-tailed thresholding and existing datasets. Same as Figure 1, except the number of overlapping undirected edges is shown.
(0.26 MB TIF)
Significance of overlap between RH network constructed from left-tailed thresholding and existing datasets. Same as Figure 1 except left-tailed thresholding.
(0.25 MB TIF)
Significance of overlap between RH network constructed from both-tailed thresholding and existing datasets. Same as Figure 1 except both-tailed thresholding.
(0.28 MB TIF)
Significance of overlap between RH network and existing datasets. Figures 1, S3 and S4 are based on this dataset, using one-sided Fisher's exact and chi-square tests. Expected and observed overlaps and
corresponding p-values are shown.
(0.78 MB XLS)
Author Contributions
Conceived and designed the experiments: DJS. Wrote the paper: SA DJS. Developed the methods: SA RML KL DJS. Implemented the methods: SA. Processed various data sets: RTW CCP AL.
Finding Zeros/Roots of Polynomials - College Algebra
• This section covers concepts and properties related to finding zeros/roots of polynomials. The following are some of the specific topics covered:
□ zero(s) of a polynomial
□ the fundamental theorem of algebra
□ multiplicities of zeros
• If you need further information about these ideas, the link below will give you access to an online textbook with helpful information and examples.
Click on the tabs to learn about this topic and then try the problems in the Learning Objects at the bottom of the page.
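The three ideas above can be seen together in one small example (a sketch using NumPy's numerical root finder, not part of the linked textbook):

```python
import numpy as np

# f(x) = (x - 1)^2 (x + 2) = x^3 - 3x + 2: a degree-3 polynomial, so
# the fundamental theorem of algebra guarantees exactly 3 roots counted
# with multiplicity: x = 1 (multiplicity 2) and x = -2 (multiplicity 1).
coeffs = [1, 0, -3, 2]
roots = np.sort(np.roots(coeffs).real)
print(roots)  # numerically close to [-2, 1, 1]
```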
Basic Chain Rule Problem with Trig Functions
October 23rd 2010, 08:22 PM #1
[SOLVED] Basic Chain Rule Problem with Trig Functions
Hi, everyone. First post but I have a feeling I'll be coming back often.
Problem: For what values of $x$ in the interval $[0,2\pi]$ does the graph of $f(x)=\cos^2(x)+\sin(x)$ have a horizontal tangent?
Answer: $f'(x)=-2\cos(x)\sin(x)+\cos(x)$
f(x) has horizontal tangents at $x=\frac{\pi}{2}, \frac{3\pi}{2}$
I don't have a problem getting the x-values for the horizontal tangents but I can't get the right derivative. Here's my work:
$f(x)=u^2+\sin(x)$, $u=\cos(x)$
$\frac{dy}{du}=2u+\cos(x)$, $\frac{du}{dx}=-\sin(x)$
Substitute $\frac{du}{dx}$ back into the equation.
Where is my error?
Thank you for your help,
Last edited by Tatsuya; October 23rd 2010 at 09:31 PM.
This is incorrect. You are only using the chain rule for the $\cos^2 x$ part, so leave the sin(x) part out of it.
It should be: $\frac{dy}{dx}=(2u)(-\sin(x))+\cos(x)$
Thank you SO much! That was too simple.
$0=\cos(x)$ at $\frac{\pi}{2}$ and $\frac{3\pi}{2}$
Don't ask me why I put $\frac{5\pi}{6}$ up there, it's wrong and I edited that out. Must have copied that down in error.
Thanks again, Educated!
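The corrected derivative from the thread can also be checked numerically against a central finite difference (a quick sketch, not from the thread itself):

```python
import math

def f(x):
    return math.cos(x) ** 2 + math.sin(x)

def fprime(x):
    # Chain rule applies only to cos^2(x): 2*cos(x) * (-sin(x));
    # the sin(x) term differentiates on its own to cos(x).
    return -2.0 * math.cos(x) * math.sin(x) + math.cos(x)

# Compare against a central finite difference at a few points.
h = 1e-6
for x in (0.3, 1.0, 2.5):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    print(abs(numeric - fprime(x)) < 1e-6)  # True
```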
Widespread Compensatory Evolution Conserves DNA-Encoded Nucleosome Organization in Yeast
PLoS Comput Biol. Dec 2010; 6(12): e1001039.
Mathieu Blanchette, Editor
Evolution maintains organismal fitness by preserving genomic information. This is widely assumed to involve conservation of specific genomic loci among species. Many genomic encodings are now
recognized to integrate small contributions from multiple genomic positions into quantitative dispersed codes, but the evolutionary dynamics of such codes are still poorly understood. Here we show
that in yeast, sequences that quantitatively affect nucleosome occupancy evolve under compensatory dynamics that maintain heterogeneous levels of A+T content through spatially coupled A/T-losing and
A/T-gaining substitutions. Evolutionary modeling combined with data on yeast polymorphisms supports the idea that these substitution dynamics are a consequence of weak selection. This shows that
compensatory evolution, so far believed to affect specific groups of epistatically linked loci like paired RNA bases, is a widespread phenomenon in the yeast genome, affecting the majority of
intergenic sequences in it. The model thus derived suggests that compensation is inevitable when evolution conserves quantitative and dispersed genomic functions.
Author Summary
Purifying selection is a major force in conserving genomic features. It pushes deleterious mutations to extinction while conserving the specific DNA sequence. Here we show that a large proportion of
the yeast genome evolves under compensatory dynamics that conserve genomic properties while modifying the genomic sequence. Such compensatory evolution conserves the local G+C content of the genome,
which influences nucleosome organization. Since purifying selection is too weak to eliminate every weakly deleterious mutation in nucleosome bound or unbound sequences, the local G+C content is
frequently stabilized by compensatory G+C gaining and G+C losing mutations in proximal loci. Theoretical analysis shows that compensatory evolution is inevitable when natural selection is weak and
the genomic feature is distributed over many loci. These results imply that sequence conservation may not always be equated with overall selection. They demonstrate that cycles of weakly deleterious
substitutions followed by positive selection for corrective mutations, which were so far studied mostly in RNA coding genes, are observed broadly and profoundly affect genome evolution.
With the complete sequencing of a large number of genomes, and with the rapid progress in the development and application of methodologies for functional annotation of whole genomes [1], it is
becoming evident that our basic concepts of genomic function must be updated. The view of genomes as “bags of genes” is challenged by multiple lines of evidence, such as the extensive transcription
of short and long RNAs from a substantial fraction of the genome [2]–[4], and the identification of a dense grid of enhancers and transcription factor binding sites in regions that could not be
previously associated with genes [5], [6]. Some of the properties of the newly emerging genomic encodings are clearly different from the prototypic example of the triplet genetic code. The direct
mapping between genomic positions (codons) and function (peptides) which is a hallmark of the genetic code does not seem to hold for the majority of the genome. Instead, genomic encodings integrate
small contributions from multiple positions to form complex and quantitative outcomes. These types of dispersed encodings may be involved in defining enhancer sequences, maintaining epigenomic
switches, affecting widespread transcription, and contributing to chromosome structure and dynamics. The evolutionary implications of these new types of codes are still poorly understood. The
classical models in molecular evolution assume fitness to be a function of a single evolving locus. Conservation of the function encoded by such a locus is quantitatively predicted to decrease its
rate of evolution. What rates of evolution can be expected when each of the multiple positions have small contributions to some joint quantitative fitness?
Neutral compensatory substitutions were predicted by Kimura 25 years ago [7] to couple substitutions in pairs of interacting protein coding loci. Kimura's concept was that an evolving population
trajectory may visit suboptimal fitness levels transiently, thereby invoking an adaptive corrective force that can bring the system back to optimality. Such a process will change the genomic
sequence, fixating pairs of compensatory alleles. Kimura's compensatory dynamic may work in any group of loci that are associated with an epistatic (non linear fitness function) constraint and was
quantified extensively in RNA coding loci where the epistatic coupling of paired loci has a clear structural interpretation [8]–[10]. Another important source of genomic information, transcription
factor binding sites, poses evolution with a different type of epistatic constraint by forming a quantitative binding energy landscape that affects gene regulation [11], [12]. The evolution of
binding sites was shown to drive compensatory effects at the single site level [13] and also at the level of binding site clusters (or enhancers) [13], [14]. Studies of enhancer evolution are
continuously providing striking examples of plasticity and compensation [15]–[17], but due to their heterogeneity, it is currently difficult to develop a general understanding of their evolutionary dynamics.
A simple experimentally characterized example of a dispersed genomic encoding involves the effect of DNA sequence on nucleosome organization [18], [19]. In-vitro and in-vivo experiments in yeast [20]
, [21] and other species [22]–[24] showed that nucleosomal packaging is correlated with preferential binding of nucleosomes to specific dinucleotide periodicities, and is strongly anti-correlated
with A+T content in general and with poly(A/T) sequences in particular [20], [23], [25], [26]. The correlation between nucleosome occupancy and the underlying DNA sequence is sufficiently powerful to
allow sequence based nucleosome occupancy prediction, but this prediction is not based on a strict requirement for certain nucleotides to appear at precise positions. Rather, information from
multiple sequence positions along the 147bp length of the nucleosome contributes to the affinity of nucleosomes to a given sequence and consequently, to the formation of stable or semi-stable
nucleosome configurations [27]. The evolution of these sequence determinants thus serves as a test case for the dynamics of dispersed genomic encodings. Analysis of substitution rates in yeast
suggested that genomic sequences that are unbound to nucleosomes are evolving slower than genomic sequences that are bound to nucleosomes [20], [28]–[30]. Whether this is an indication of classical
purifying selection on nucleosome encoding sequences, increased abundance of transcription factor (TF) binding sites at low nucleosome occupancy loci, or nucleosome-associated mutability, is
currently unclear [31].
Here we analyze patterns of divergence and polymorphisms in yeast intergenic sequences to substantiate an extended model of selection on a dispersed genomic encoding. The analysis shows that yeast
low nucleosome occupancy sequences have maintained a high A+T content throughout the evolution of the Saccharomyces cerevisiae lineage. Contrary to standard evolutionary models, we show that this
conservation was made possible not by pointwise sequence conservation, but by a compensatory coupling of decreased rates of A/T-losing substitutions and increased rates of corrective A/T-gaining
substitutions. Theoretical analysis suggests that this type of evolutionary dynamics is largely unavoidable when the genome employs dispersed functional encodings. The evolutionary dynamics we reveal
shuffle sequences continuously while preserving their encoded function, creating a dynamic yet balanced process that may be central to the evolution of gene regulation.
Regional heterogeneity in nucleotide composition is correlated with yeast nucleosome occupancy
The global G+C content of the yeast intergenic genome is about 35% (Fig 1A), but there is significant heterogeneity in the genome's local nucleotide composition (Fig S1). Such heterogeneity must be
the consequence of a variable evolutionary process working in G+C poor and G+C rich sequences. Recently it was shown that nucleosome occupancy patterns strongly correlate with local G+C content in
yeast [32]. We define high nucleosome occupancy loci as those in the top 21% MNase-seq coverage percentiles in-vivo (total 540 kbp, Fig1B), and low nucleosome occupancy loci as those in the bottom
14% MNase-seq coverage percentiles in-vivo (total 350 kbp). Overall, the intergenic G+C content at high occupancy sequences (~40% G+C) is higher than the G+C content of low occupancy sequences (~28%
G+C). This heterogeneity is even more pronounced when studying the distribution of tri-nucleotides (Fig 1C, Fig S2), showing A/T tri-nucleotides to be more abundant in low occupancy sequences, and
pointing towards additional nucleosome sequence preferences. It was shown before that in-vitro nucleosome occupancy can be robustly predicted from the distribution of 5-mers or even 3-mers in the
sequence [21]. This suggests that the functionality and fitness contribution of DNA-encoded nucleosome organization, if such a contribution exists, is dispersed across multiple loci in a quantitative
fashion and is not encoded by a strict requirement for precise sequence elements at one or a few positions. To prove or disprove the hypothesis that yeast intergenic G+C content heterogeneity is
affected by nucleosome-related selection, we studied the evolutionary dynamics of yeast sequences bound and unbound to nucleosomes. We hypothesized that through characterization of these dynamics, we
may reveal, in addition to the sequence constraints affecting yeast nucleosome organization, some general principles governing the evolution of dispersed genomic encodings.
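The occupancy classification described above can be sketched as follows; the helper names and toy coverage values are illustrative, and the percentile cutoffs merely mirror the 14% / 21% figures quoted in the text:

```python
import numpy as np

def gc_content(seq):
    """Fraction of G/C nucleotides in a DNA string."""
    seq = seq.upper()
    return (seq.count('G') + seq.count('C')) / len(seq)

def split_by_occupancy(coverage, low_pct=14.0, high_pct=79.0):
    """Masks for low- and high-occupancy loci, defined as the bottom
    low_pct% and the top (100 - high_pct)% of MNase-seq coverage."""
    lo = np.percentile(coverage, low_pct)
    hi = np.percentile(coverage, high_pct)
    return coverage <= lo, coverage >= hi

print(gc_content('ATATGCAT'))  # 0.25
low, high = split_by_occupancy(np.arange(100.0))
print(low.sum(), high.sum())   # 14 21
```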
Yeast sequence heterogeneity is correlated with nucleosome occupancy. A) Heterogeneous local G+C content in yeast.
Analysis of context-dependent substitution rates reveals correlation between nucleosome occupancy and evolutionary dynamics
To study the evolutionary dynamics that underlie G+C content heterogeneity and nucleosome occupancy in the yeast genome, we inferred substitution rates and ancestral sequences in the Saccharomyces
sensu stricto clade. We performed evolutionary inference from alignments of five yeast genomes [33], [34] for sequences that were classified as high nucleosome occupancy loci in S. cerevisiae. We
separately inferred the evolutionary trajectory at low nucleosome occupancy loci. The analysis omitted exonic sequences, since the evolutionary dynamics in these involve additional sources of
selection relative to those affecting intergenic sequences. Differences in locus mutability are known to be associated with the flanking nucleotides [35], [36], and this effect may severely bias the
comparison of evolutionary dynamics between regions with different nucleotide composition. For example, A+T rich regions, like low-occupancy sequences, may exhibit slower divergence of A/T
nucleotides than G+C rich regions, simply because A/T mutability is reduced in the flanking context of A/T nucleotides. To account for this effect of flanking nucleotides on substitution dynamics, we
independently estimated the rate of substitution at all 16 possible combinations of flanking nucleotides. Indeed, the substitution rates estimated by our model vary significantly among flanking
contexts both in high and low occupancy loci and reflect context-dependency that is consistent among phylogenetic lineages (Fig 2A, Fig S3). For example, the C to T transition rate over the S.
cerevisiae lineage in low occupancy regions varies between ~0.14 in the context of tCc and ~0.03 in the gCg context. The estimation of context-dependent substitution rates proved essential for the
unbiased comparison of evolutionary dynamics between the low occupancy, G+C poor, and the high occupancy, G+C rich sequences. As we show next, it allowed us to robustly identify and validate major
differences in the evolutionary regimes of these two classes of loci.
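The context-dependent tabulation can be illustrated with a toy sketch that simply counts substitution fractions per flanking context from a pair of aligned sequences (the paper's actual inference is a full phylogenetic model, which this does not reproduce):

```python
from collections import Counter

def context_substitution_fractions(anc, der):
    """Fraction of substituted sites for each flanking-nucleotide
    context, from two aligned (gap-free) sequences: ancestral vs.
    derived.  The context of site i is (anc[i-1], anc[i], anc[i+1])."""
    changed, total = Counter(), Counter()
    for i in range(1, len(anc) - 1):
        key = (anc[i - 1], anc[i], anc[i + 1])
        total[key] += 1
        changed[key] += int(anc[i] != der[i])
    return {k: changed[k] / total[k] for k in total}

# One C->T substitution in an aCg context; the other aCg site is conserved.
rates = context_substitution_fractions('ACGTACG', 'ACGTATG')
print(rates[('A', 'C', 'G')])  # 0.5
```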
Low occupancy sequences lose A/T nucleotides slowly and gain them in a context-dependent fashion. A) Yeast substitution rates are robustly correlated with the flanking nucleotides.
Low occupancy sequences lose A/T nucleotides slowly but gain A/T nucleotides at the same rate as high occupancy sequences
We first studied S. cerevisiae substitution rates inferred from intergenic sequences within 200 bp of annotated transcription start sites. It is known that this region in yeast promoters is enriched
for transcription factor binding sites and exhibits a stereotyped nucleosome-depleted region of length ~100–150 bp. As shown in Fig 2B–C (see also Fig S4 and Fig S5), the analysis reveals that the
rates of A/T-losing transitions (A to G, T to C) and transversions (A to C, T to G) are ~45% lower in low occupancy sequences than in high occupancy sequences. A decrease is observed for all 16
nucleotide contexts (within an estimation variance), and is slightly more pronounced in A/T contexts (AAA, AAT). Notably, the rates of A/T-gaining transitions (G to A, C to T) and transversions (G to
T, C to A) are not decreased like the A/T-losing substitutions. In most sequence contexts, the rates of A/T-gaining substitutions are higher in low occupancy sequences or similar between the sequence
classes. On the other hand, when flanked by G's or C's, the rates of A/T-gaining substitutions are four times slower in low occupancy compared to high occupancy sequences. Evolutionary theory could
not predict these dynamics if the evolution of G+C content was neutral (unless an extremely unlikely mutational regime is separating high from low occupancy regions, as we disprove below using
population genetics data). Moreover, a simple theory assuming average stronger evolutionary constraint on low occupancy sequences [20], [29] would predict a general decrease in the substitution rates
in the region and would not explain the asymmetry between A/T-gaining and A/T-losing substitution rates.
Overall G+C content is conserved for high and low nucleosome occupancy DNA
An important assumption underlying our evolutionary analysis above is that the evolutionary regime operating in regions that are occupied (or unoccupied) by nucleosomes in the extant S. cerevisiae
genome has been the same since the divergence of S. cerevisiae from S. paradoxus. Violations of this assumption can potentially affect our substitution rate estimations. For example, if nucleosome
occupancy is determined by the genomic sequence, but is not under selection, nucleosomes may drift freely following substitutions that spontaneously generate new A+T-rich hotspots. As a result, we
may enrich for substitutions that increase A+T content in extant low occupancy sequences by assuming nucleosome organization was conserved. To verify that such a scenario has not significantly
affected our analysis of TSS-proximal substitution rates, we inferred the G+C content in the common ancestor of S. cerevisiae and S. paradoxus, for 10 ranges of S. cerevisiae nucleosome occupancy
levels, and compared it to the extant G+C content (Fig 2D). We found that the G+C content at all levels of nucleosome occupancy did not change significantly during evolution in the S. cerevisiae
lineage. Sequences proximal to TSSs therefore conserve their regional G+C content (at least on average). Consequently, the different rates of substitutions in high and low nucleosome occupancy loci
do not represent net divergence in the sequence features that correlates with nucleosome occupancy. This is further confirmed by recent comparative analysis of nucleosome organization in S.
cerevisiae and S. paradoxus, which revealed only limited divergence in nucleosome positioning for these species [37], [38]. The highly non symmetric substitution dynamics observed at different levels
of nucleosome occupancy must therefore be explained by means of a stationary evolutionary process that conserves the underlying nucleosome-associated encoding.
Spatial coupling between A/T-losing and A/T-gaining substitutions suggests compensatory evolution preserves high and low occupancy sequences
One intriguing possibility that may explain the asymmetry between the rates of A/T-losing and A/T-gaining substitutions in low occupancy sequences is that while A/T-losing mutations are selected
against, some can be sustained in the population. Consequently, positive selection is able to push to fixation corrective A/T-gaining mutations (possibly at different genomic positions). If this
hypothesis is correct, we can predict that loci near sites of A/T-losing substitutions will be enriched with A/T-gaining substitutions and vice versa. Remarkably, the yeast divergence patterns
confirm this prediction. The data reveal that rates of A/T-gaining substitution are accelerated next to sites of observed A/T loss (compared to rates near conserved loci, Fig 3A). Furthermore, as
shown in Fig 3A, this effect does not represent general spatial coupling of substitutions, since the A/T gain rate is significantly higher near sites of A/T loss than it is near sites of A/T gain.
Conversely, the rates of A/T losing substitutions are higher next to sites of observed A/T gain (Fig 3B). Unexpectedly, this coupling effect is observed robustly across the entire spectrum of
nucleosome occupancy levels (p<1e-5 for high nucleosome occupancy, p<0.04 for low nucleosome occupancy). The coupling between contrasting substitutions at spatially linked loci suggests the involvement of a common selective constraint; without one, the dynamics at these loci would be independent of each other. The data therefore suggest that compensating A/T-losing and A/T-gaining mutations work to conserve a heterogeneous G+C content (both high and low) in TSS-proximal sequences.
A/T-gaining and A/T-losing substitutions are spatially coupled. A) A/T gain rates are faster next to inferred A/T loss events.
Compensation and possible divergence of low occupancy regions revealed by the substitution dynamics at TSS-distal sequences
The trinucleotide distributions of low occupancy TSS-distal sequences (over 200 bp from an annotated TSS) are generally similar to those in TSS-proximal loci, but some important differences are
notable (Fig 4A). First, for low occupancy sequences, G/C trinucleotides are rarer in TSS-distal than in TSS-proximal loci. Second, poly-A/T trinucleotides are enriched relative to other A/T-rich trinucleotides in TSS-proximal but not TSS-distal low occupancy loci. These differences may represent a lower fraction of TF binding sites in TSS-distal regions [19], [20] (see Fig S6 for additional analysis). As shown in Fig 4B–C, TSS-distal A/T-losing substitution rates are decreased in low occupancy vs. high occupancy sequences, consistent with the observations in TSS-proximal loci.
Furthermore, the rates of A/T-gaining substitution in many contexts are increased in low occupancy vs. high occupancy sequences, similar to their behavior in TSS-proximal regions (but with G/
C-flanking contexts not highly conserved). Comparison of the ancestral and extant G+C content reveals conservation at high levels of nucleosome occupancy, but some average decrease in G+C content for
low nucleosome occupancy loci (Fig 4D). Analysis of compensatory spatial correlation between A/T-gaining and A/T-losing substitutions reveals significant coupling at high nucleosome occupancy levels (p<6e-4). Also evident is the tendency of A/T-gaining substitutions at low nucleosome occupancy to occur in clusters (Fig S7).
Compensatory evolution at TSS-distal sequences.
The data therefore support a compensatory substitution process that drives G+C content conservation in most TSS-distal loci, in a way analogous to the dynamics at TSS-proximal loci. This is demonstrated by the asymmetric rates of A/T gain and A/T loss, the conservation of G+C content and the compensatory substitution coupling over most ranges of nucleosome occupancy. An exception to this general trend is observed at some of the TSS-distal low occupancy loci. We hypothesize that during the evolution of the S. cerevisiae lineage, de novo A/T-rich hotspots may have driven divergence of nucleosome organization in some TSS-distal loci (possibly because these were under weaker selection [37], [38]). This effect may explain the non-stationary G+C content and spatial clustering of A/T-gaining substitutions at extant TSS-distal low occupancy loci (Fig S7). Taken together, the data on TSS-distal sequences further support the idea that selection maintains heterogeneous G+C content
across most yeast intergenic sequences (and in particular at TSS-proximal sequences), and that this selection drives changes in substitution rates that are difficult to explain using models of
selection on a single locus.
A theoretical model recapitulates the empirical yeast evolutionary dynamics
To study the hypothesis that selection on dispersed nucleosome encodings drives asymmetric substitution patterns in yeasts, we devised a simple theoretical model (Fig 5). We assume that a population of 20 bp sequences (each representing a different “genome”) evolves under a constant flux of mutations in a fitness landscape that depends only on the G+C content of the sequence. The mutations transform G/C nucleotides to A/T nucleotides faster than they transform A/Ts to G/Cs, driving the genomes' stationary G+C content to a neutral level of 30%. Working against this mutational bias, the fitness landscape defines a lower G+C content (20%) as optimal, with symmetrically decreasing fitness for suboptimal values. This landscape is designed to approximate the potential
selective pressure on low nucleosome occupancy sequences. We studied the model behavior at various selection intensities both analytically and using computer simulations (Methods). For each intensity
level, we determined the A/T gain and A/T loss substitution rates and stationary G/C content (Fig 5A–D). When selection is weak, the dynamics we observed are neutral, with the rates of substitutions
being equal to the rates of mutations, and the G+C content converging to the neutral stationary G+C content (30%). In contrast, when selection is strong, the rates of both A/T gain and A/T loss
decrease to zero and the G+C content is optimal (20%). These two regimes are compatible with the standard evolutionary theory of selection on a single locus. More notable are the substitution rates
observed at intermediate levels of selection. When selection is not sufficiently strong to purify all A/T-losing mutations, A/T-losing substitution rates are only partially decreased. Interestingly, this decrease is matched by an increase in the rate of A/T-gaining substitutions to levels higher than the neutral rate. The new balance between A/T-losing and A/T-gaining rates is sufficient to stabilize the G+C content at near-optimal levels. Detailed analysis reveals that the increase in the rate of A/T-gaining substitutions is driven by cycles in which an A/T-losing mutation at one position is corrected by an A/T-gaining mutation at another position. Similar but opposite dynamics are observed when the optimal G+C content is higher than the neutral one (modeling selection for high G+C content in high nucleosome occupancy sequences, Fig S8). Furthermore, the compensatory regime is observed over a much wider range of selection intensities when the fitness landscape is
more tolerant as shown, for example, in Fig 5E–I. These theoretical predictions are consistent with the empirical behavior observed in yeast, showing that weak selection can be sufficiently powerful
to increase specific substitution rates over the neutral level due to a compensatory regime.
A model of weak compensatory selection predicts the evolutionary dynamics at yeast low occupancy sequences.
Compensatory dynamics are supported by S. cerevisiae polymorphism data
Our evolutionary analysis above supports the idea that high and low nucleosome occupancy sequences in yeast evolve under a selective pressure to maintain their G+C content, or a refined nucleosome
sequence potential that is approximated by the average G+C content. According to this scenario, in low occupancy sequences, which are generally A+T-rich, A/T-losing substitutions are weakly selected
against, while A/T-gaining substitutions are frequently pushed to fixation by an adaptive force. According to our simulations and to the standard population genetics theory, such selection on A/
T-gaining and A/T-losing mutations should affect the distribution of allele frequencies in the population. In low occupancy loci, A/T-losing single nucleotide polymorphisms (SNPs) are expected to
have lower allele frequencies than A/T-neutral SNPs, while A/T-gaining SNPs should have higher allele frequencies. Analysis of polymorphic sites in a sample of 39 S. cerevisiae strains [39] confirmed
these predictions (Fig 6). We used data on 9185 SNPs in low occupancy loci and 16956 SNPs in high occupancy loci, approximating the minor allele frequency using majority voting and discarding sites
with incomplete data or more than two alleles. In low occupancy loci, A/T-losing SNPs are more often rare (minor allele frequency <20%; alternative thresholds generated similar results, Fig S9) than A/T-gaining SNPs in non-G/C flanking contexts (p<2e-05). A reciprocal effect is observed at high occupancy loci, where A/T-gaining SNPs are more often rare than A/T-losing SNPs in non-G/C flanking contexts (p<3e-07). The reciprocality
of the effect also confirms that our conclusions are not affected by general biases in the estimation of allele frequencies due to systematic sequencing errors. We note that, consistent with the low divergence of A/T nucleotides in G/C flanking contexts of low occupancy sequences, the allele frequencies of A/T-gaining SNPs in such loci reflect stronger selection. This may be related to the enrichment of such flanking contexts at TF binding sites, as we discuss below.
SNP data support the compensatory evolution hypothesis.
The evolutionary origins of G+C content heterogeneity in yeast intergenic regions
We classified yeast intergenic regions according to their nucleosome occupancy, and used evolutionary analysis of context-dependent substitution rates to reveal remarkable variability in the
evolutionary dynamics of sequences bound and unbound to nucleosomes. Our analysis shows that low occupancy sequences lose A/T nucleotides slowly compared to high occupancy sequences, but gain A/T
nucleotides at similar rates. We also observe spatial coupling between substitutions that gain A/Ts and substitutions that lose them, which suggests that a compensatory process preserves G+C content
at both high and low occupancy loci. These observations are compatible with a model in which the local G+C content in yeast is conserved through weak quantitative selection. Such weak selection allows occasional fixation of substitutions that disrupt the optimal G+C content of a region, which is then restored by adaptive evolution of corrective mutations at the mutated locus or at any of the surrounding genomic positions. Data on allele frequencies of yeast SNPs independently confirm the predictions of such a model. Together, these observations indicate that the G+C heterogeneity of yeast intergenic sequences is not a consequence of a neutral process, and suggest that nucleosome organization may play a major role in this lack of neutrality.
Selection and the signals for nucleosome organization
The role of DNA encoded nucleosome occupancy in regulating gene expression is difficult to isolate experimentally, mostly due to the challenge of separating cause and effect inside the complex system
involving nucleosomes, remodeling factors and TFs. Previous analyses identified an anti-correlation between nucleosome occupancy and genomic conservation in yeast [20], [28]–[30], putting forward the hypothesis that low occupancy regions (nucleosome-free regions, linkers) may be under selection, either due to their increased frequency of TF binding sites, or because they serve as anchors that
organize the entire nucleosome landscape. According to our analysis, nucleosome occupancy is tightly correlated with substitution patterns reminiscent of selection throughout the genome, not just at low occupancy regions. The data therefore strongly support a non-negligible contribution of DNA-encoded nucleosome organization to fitness, and therefore to genome regulation. This is further demonstrated by contrasting the G+C content-related selection patterns at TSS-proximal sequences (Figs 2 and 3) with the frequent cases of overall divergence of A/T-rich hotspots and clustered A/T-gaining substitutions in TSS-distal low occupancy sequences (Fig 4). The data suggest that when selection is absent, nucleosome occupancy drifts following changes in the encoding sequences [37], [38]. We note that according to our simulations and the empirical data, the selection on nucleosomal sequences must be weak, driven by the very small (but still specific) fitness contribution of
any individual genomic position. We predict that such selection is sufficiently powerful to contribute significantly to the heterogeneity of the yeast intergenic sequences, but it is clearly much
weaker (per base) than the selection working to conserve classical functional elements. These theoretical considerations underline the difficulty in proving the functionality of specific nucleosome
positioning sequences using direct genetics experiments, which typically require large and easily quantifiable phenotypic effects for specific genetic manipulations.
Combined selection on TF binding sites and nucleosome positioning sequences in TSS-proximal low occupancy sequences
One source of evolutionary constraint on yeast intergenic sequences is their interaction with transcription factors. TF binding sites are known to be conserved among yeast species [33], [34] and
their increased concentration in TSS-proximal nucleosome free regions was previously proposed to impose overall conservation at these regions. According to our inferred evolutionary dynamics at
TSS-proximal DNA, selection on TF binding sites indeed contributes to the evolution of low occupancy sequences. This is indicated, for example, by the very low A/T gain rates in G/C trinucleotides (Fig
2), which are part of some of the most abundant and conserved yeast binding sites (e.g., Ume6, PAC, Reb1, MBP1) [11], [12], [40]. Nevertheless, selection on binding sites, even those that are A/T
rich (e.g. TATA boxes) is highly unlikely to explain the nucleosome occupancy-dependent substitution rates we observed throughout the yeast genome. Specifically, the compensatory coupling of A/
T-losing and A/T-gaining substitutions is not compatible with any particular binding site model. We therefore hypothesize that a combination of purifying selection on TF binding sites (either strong
[33], [34] or weak [11]) and composite selection on DNA encoded nucleosome organization together define a complex fitness landscape that shapes the evolution of yeast intergenic sequences.
Evolution of dispersed sequence encodings necessitates compensatory dynamics
Here we studied a model in which evolution manipulates sequences in a complex fitness landscape that combines contributions from multiple coupled loci into a single dispersed encoding. As shown by
theoretical and empirical analysis of the model, when selection on each individual locus is weak, purifying selection is incapable of completely purging mutations that are only slightly deleterious, and these continuously challenge the overall optimality of the sequence. This suboptimality is effectively compensated by adaptive evolution at multiple other loci that participate in the dispersed encoding. In contrast to other cases of compensatory evolution (proteins [41] or RNA molecules [8]–[10], [42]), the encodings we studied here provide ample direct ways to correct a slightly
deleterious substitution, thereby increasing the rate of compensation. Our study builds on earlier work on codon bias [43], [44], but uses the global and experimentally characterized sequence classes at high and low nucleosome occupancy loci to establish compensatory evolution as a major driving force in evolution under multi-site selection. This type of evolutionary dynamics may be generalized
to other dispersed functional encodings [45], [46], including complex regulatory switches that typically involve a large number of TF binding sites of variable factors and specificities. The remarkably global nature of the compensatory effect we observed in yeast, which causes a measurable genome-wide increase in the substitution rate of specific mutations, supports the notion of an
evolutionary process that conserves function without a strict requirement to conserve sequence. It is tempting to speculate that such a process may allow genomes to maintain diversity and
continuously search the sequence space, without significantly compromising their existing regulatory circuits. Furthermore, this process may reduce, through compensation, the mutational load [47]
resulting from the use of multiple loci to encode regulatory functions.
Data sets
Multiple alignments of the Saccharomyces cerevisiae, Saccharomyces paradoxus, Saccharomyces mikatae, Saccharomyces kudriavzevii and Saccharomyces bayanus genomes were downloaded from the UCSC database [48]
(sacCer2 version). Alignments were based on the SGD June 2008 assembly. A genome wide in-vivo nucleosome occupancy profile for S. cerevisiae was used as previously described [21], indicating a
nucleosome occupancy value for each genomic position. SNP data were downloaded from the SGRP website [39]. Gene annotations and transcription start sites of S. cerevisiae were taken from the SGD known gene table corresponding to sacCer2 [49]. Transcription factor binding sites were downloaded from the UCSC Genome Browser [48] and are based on the ChIP-chip experiments described previously.
Classifying low and high occupancy sequences
Our analysis focused on intergenic genome sequences, defined based on the SGD gene annotations. Each intergenic locus was defined as TSS-proximal if it is not part of an exon and has an annotated TSS within 200 bp. TSS-distal loci included the remaining non-exonic loci. We defined low occupancy loci as positions with nucleosome occupancy values lower than −2.5 (relative to the
genomic mean; detailed description in Kaplan et al. [21]) and high occupancy loci as positions with occupancy higher than 0.4. Alternatively, we classified all loci into equal-sized bins of nucleosome occupancy (ten in the analysis of ancestral G+C content and five in the analysis of spatial coupling). An alternative definition of low occupancy linker regions based on raw MNase cleavage data yielded similar results (data not shown).
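As a concrete illustration, the classification above can be sketched in a few lines. This is a hypothetical reimplementation, not the authors' code: the occupancy array, TSS list, and exon mask are assumed inputs, while the thresholds (−2.5 and 0.4) and the 200 bp window follow the definitions in the text.

```python
import numpy as np

def classify_loci(occupancy, tss_positions, exon_mask, window=200,
                  low_thr=-2.5, high_thr=0.4):
    """Label genomic positions as TSS-proximal/distal and low/high occupancy.

    occupancy: per-position occupancy relative to the genomic mean.
    tss_positions: indices of annotated TSSs.
    exon_mask: boolean array, True where the position is exonic.
    """
    n = len(occupancy)
    # A position is TSS-proximal if a TSS lies within `window` bp of it.
    near_tss = np.zeros(n, dtype=bool)
    for tss in tss_positions:
        near_tss[max(0, tss - window):min(n, tss + window + 1)] = True
    intergenic = ~exon_mask
    return {
        "proximal_low":  intergenic & near_tss & (occupancy < low_thr),
        "proximal_high": intergenic & near_tss & (occupancy > high_thr),
        "distal_low":    intergenic & ~near_tss & (occupancy < low_thr),
        "distal_high":   intergenic & ~near_tss & (occupancy > high_thr),
    }
```

Positions with intermediate occupancy are deliberately left unlabeled, matching the two-threshold design; the binned analyses would instead cut the occupancy values into equal-sized quantile bins.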
Estimation of substitution rates
As described in the text, a refined context dependent substitution model is essential for the correct estimation of the different evolutionary dynamics in low G+C content, low occupancy loci and high
G+C content, high occupancy loci. We therefore applied a flexible substitution model to perform ancestral inference and learn evolutionary parameters from alignment data (details available upon
request). The model included parameters for the substitution rates at each of 16 possible contexts parameterized by the identities of the 3′ and 5′ flanking nucleotides. Independent substitution
rates were assumed for each lineage in a phylogenetic tree which was fixed throughout the process. We note that the model does not assume parametric constraints on different substitution rates, and
infers substitution rates on lineages rather than a global substitution rate matrix and branch lengths. This approach proved more robust, given that a sufficient number of loci was available to learn the parameters of each lineage robustly, and given that the substitution process in the different lineages showed gradual changes in dynamics that a model using a universal rate matrix could not have accommodated (for example, the extant G+C content in each of the species we used shows some variability).
To perform ancestral inference, we used a customized loopy belief propagation algorithm on a factor graph approximation of the model [51]. Parameter estimation was then performed using a generalized
EM algorithm. We validated some key results using parsimony analysis (Fig S10 and data not shown).
For analysis of the resulting model parameters, each context-dependent substitution rate was averaged with its reverse complement; for example, CAT->CCT is averaged with ATG->AGG. The averaged conditional probabilities are presented in Fig 2, Fig 4, Fig S3 and Fig S4. A/T gain is defined as any of the following substitutions in any flanking context: C->A, C->T, G->A, G->T. A/T loss is defined as any of the following substitutions in any flanking context: A->C, A->G, T->C, T->G. Analysis was generally focused on the S. cerevisiae lineage (data on the other lineages are shown in
Fig S3, Fig S5).
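The reverse-complement averaging and the A/T gain/loss conventions can be expressed compactly. This is an illustrative sketch, not the authors' pipeline; the `'XYZ->XWZ'` rate keys and the helper names are invented for the example.

```python
# Average each context-dependent rate with its reverse-complement
# counterpart, and classify substitutions as A/T gaining or losing,
# following the conventions described in the text.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    """Reverse complement of a DNA string."""
    return s.translate(COMP)[::-1]

def average_with_revcomp(rates):
    """rates maps 'XYZ->XWZ' (trinucleotide before/after) to a rate."""
    out = {}
    for key, r in rates.items():
        src, dst = key.split("->")
        rc_key = f"{revcomp(src)}->{revcomp(dst)}"
        out[key] = (r + rates[rc_key]) / 2.0
    return out

# A/T gain: a G/C base is replaced by A/T; A/T loss is the reverse.
AT_GAINING = {("C", "A"), ("C", "T"), ("G", "A"), ("G", "T")}
AT_LOSING = {("A", "C"), ("A", "G"), ("T", "C"), ("T", "G")}

def at_class(src_base, dst_base):
    if (src_base, dst_base) in AT_GAINING:
        return "gain"
    if (src_base, dst_base) in AT_LOSING:
        return "loss"
    return "neutral"
```

For instance, `average_with_revcomp({"CAT->CCT": 0.1, "ATG->AGG": 0.3})` assigns both keys the averaged rate 0.2, mirroring the CAT->CCT / ATG->AGG pairing from the text.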
Evolutionary sequence simulation
In order to estimate the theoretical regional G+C content of the S. cerevisiae intergenic sequence, we simulated this sequence using a lineage-specific evolutionary probabilistic model learned over the whole intergenic sequence (see above). Specifically, the common ancestor of the sensu stricto clade was simulated first, based on the learned second-order Markov model. The sequences of the descendants were then simulated based on the simulated ancestor sequence and the corresponding substitution model. Iteratively, the sequences of all species in the phylogeny were simulated, including
the extant species. The regional G+C content of the simulated S. cerevisiae intergenic sequence is presented in Fig S1.
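The iterative simulation down the phylogeny might be sketched as follows. This is a simplified, hypothetical version: the per-branch substitution model here is context-free (the actual model conditions on flanking nucleotides), and the `trans`, `sub_probs_by_branch`, and tree encodings are invented for the example.

```python
import random

def simulate_ancestor(length, trans, seed_pair="AA"):
    """Sample a sequence from a second-order Markov chain.
    trans[(a, b)] maps the previous two bases to a dict base -> probability."""
    seq = list(seed_pair)
    for _ in range(length - 2):
        probs = trans[(seq[-2], seq[-1])]
        seq.append(random.choices(list(probs), weights=list(probs.values()))[0])
    return "".join(seq)

def simulate_branch(parent_seq, sub_probs):
    """Mutate each site independently; sub_probs[b] maps a parent base to a
    dict of substitution probabilities (context-free here for brevity)."""
    child = []
    for b in parent_seq:
        probs = sub_probs[b]
        child.append(random.choices(list(probs), weights=list(probs.values()))[0])
    return "".join(child)

def simulate_tree(root_seq, tree, sub_probs_by_branch):
    """tree maps a parent name to its child names; returns name -> sequence."""
    seqs = {"root": root_seq}
    stack = ["root"]
    while stack:
        parent = stack.pop()
        for child in tree.get(parent, []):
            seqs[child] = simulate_branch(seqs[parent], sub_probs_by_branch[child])
            stack.append(child)
    return seqs
```

Simulating the ancestor once and propagating it down every branch, as in the procedure above, yields simulated extant genomes whose regional G+C content can be compared with the observed one (Fig S1).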
Spatial coupling of substitutions
To estimate the coupling between A/T-gaining and A/T-losing substitutions in the yeast genome, we used our probabilistic model to infer, at each genomic position j, the posterior probability of each type of substitution in the lineage leading to species i from its ancestor (pa_i). Here, s_j[i] denotes the nucleotide at the j'th genomic position of the i'th species in the phylogeny, and s_j[pa_i] denotes the sequence of the ancestor of this species at the same genomic position.
Given the posterior probabilities, we computed for each genomic position j the expected numbers of A/T loss and A/T gain events in the sequence preceding it. This was done using a horizon parameter, set to 5 bp by default (for alternative horizon values see below). The δ[gain] and δ[loss] functions are given in Table 1, and the net A/T divergence of a position was defined as the difference between its expected A/T gain and A/T loss counts.
A/T gain and loss delta parameters.
We then identified all positions with A/T divergence <-0.9 (A/T losing contexts), with A/T divergence >0.9 (A/T gaining contexts) and with conserved A/T content (background). For each such set we
computed the probability of A/T gain and A/T loss substitutions using the same inferred posterior probabilities. By using this approach (conditional probability given the events in the preceding 5
bp), we ensured that each substitution is counted precisely once. By computing the probabilities of similar events (e.g., A/T gain) given different contexts (A/T losing, A/T gaining, or background), we could robustly assess compensation patterns while controlling for the different basal rates of A/T gain and A/T loss and for the general clustering of substitutions in the genome.
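A hedged sketch of this horizon-based computation: given per-position posterior probabilities of A/T gain and loss on a lineage, it classifies each position by the net A/T divergence of the preceding 5 bp and accumulates conditional gain/loss rates. The exact δ functions of Table 1 are elided; for simplicity the sketch treats A/T ancestral bases as loss-eligible and G/C bases as gain-eligible, and all variable names are illustrative.

```python
import numpy as np

def coupling_rates(p_gain, p_loss, is_at, horizon=5, thr=0.9):
    """Conditional A/T gain/loss rates given the net A/T divergence of the
    preceding `horizon` bp.

    p_gain, p_loss: per-position posterior probabilities of an A/T-gaining
    or A/T-losing substitution on the lineage.
    is_at: boolean array, True where the ancestral base is A or T.
    """
    rates = {ctx: {"gain_ev": 0.0, "gain_n": 0, "loss_ev": 0.0, "loss_n": 0}
             for ctx in ("losing", "gaining", "background")}
    for j in range(horizon, len(p_gain)):
        # net A/T divergence of the upstream window: gains minus losses
        div = p_gain[j - horizon:j].sum() - p_loss[j - horizon:j].sum()
        ctx = "losing" if div < -thr else "gaining" if div > thr else "background"
        r = rates[ctx]
        if is_at[j]:                 # A/T sites can only be lost
            r["loss_ev"] += p_loss[j]
            r["loss_n"] += 1
        else:                        # G/C sites can only be gained
            r["gain_ev"] += p_gain[j]
            r["gain_n"] += 1
    return {ctx: {"gain_rate": v["gain_ev"] / max(v["gain_n"], 1),
                  "loss_rate": v["loss_ev"] / max(v["loss_n"], 1)}
            for ctx, v in rates.items()}
```

Because each position contributes to exactly one context (conditioned only on the preceding window), every substitution is counted once, as the text requires.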
To statistically assess the coupling between A/T divergence context and A/T losing/gaining substitutions in the S. cerevisiae lineage we counted the numbers of A/T gains and A/T losses at A/T gaining
and losing contexts:
In addition we counted the numbers of A/T and C/G occurrences in these contexts:
We wished to test whether the spatial compensation effect is significant even given the general clustering of substitutions. Our null hypothesis was therefore:
We tested it using bootstrapping with 100,000 resamples. At each resample, a set of
Analysis of the robustness of the observed compensation patterns for different values of the horizon parameter is shown in Fig S11, Fig S12, and Fig S13.
Evolutionary theoretical model
To study the hypothesis that selection on dispersed nucleosome encodings drives asymmetric substitution patterns in yeasts, we devised a simple theoretical model. For clarity, we describe here the version of the model for low occupancy sequences; for nucleosomal DNA, the model is the same apart from the fitness function.
First, we used Wright-Fisher dynamics on a population of
We note that the expected population θ parameter may be estimated from the above parameters (
The simulation was based on the following procedure:
Initialize: create a population of copies of the reference genome sequence R, all starting from the same initial sequence. We introduce the following counters to accumulate sufficient statistics for computing the rates of A->G and G->A substitutions: N[A], N[G] and N[A->G], N[G->A], such that the rates are estimated as N[A->G]/N[A] and N[G->A]/N[G].
Sample a new generation: we sample individuals from the current population with probability proportional to their fitness, and increment N[A] and N[G] for each sampled individual by the number of A's and G's in the respective sequence.
Updating the reference genome: given the new generation population, we tested the frequency of A and G at each of the L genomic loci. Whenever the frequency in the current population is larger than
0.95 and the major allele is different from the reference genome R, we incremented the counter N[A->G] or N[G->A] (after the burn-in period) and updated the sequence R.
We end up with counts of A's (N[A]) and G's (N[G]) (in units of generations × loci) and counts of the substitutions between them (N[A->G], N[G->A]). Substitution rates are estimated as N[A->G]/N[A] and N[G->A]/N[G].
These rates are shown in Fig 5 and Fig S8 for the different fitness landscapes we defined next.
The goal landscape is defined symmetrically around an optimal number of G's, denoted n[GC], with selection intensity η (e.g., the X axis in Fig 5B, C, F, G):
The threshold landscape is defined using similar parameters to generate an asymmetric function:
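The simulation procedure can be condensed into a compact Wright-Fisher loop over binary (A vs. G) sequences. This is a minimal illustrative version, not the authors' implementation: the Gaussian-like fitness stands in for the goal landscape, and the mutation rates, population size, and η value are placeholders chosen so that the neutral G content is 30% of L.

```python
import numpy as np

def wright_fisher(L=20, N=100, mu_GA=7e-4, mu_AG=3e-4, n_opt=4, eta=0.0,
                  generations=5000, burn_in=500, seed=0):
    """Two-allele (A vs. G) Wright-Fisher sketch: mutation is biased toward
    A (neutral G content 30% of L), while a goal-type landscape centred on
    n_opt G's pulls the G content toward the optimum. Returns estimated
    A->G and G->A substitution rates and the final G count of the
    reference genome."""
    rng = np.random.default_rng(seed)
    pop = np.zeros((N, L), dtype=bool)           # True = G, False = A
    pop[:, : max(1, int(0.3 * L))] = True        # start at the neutral level
    ref = pop[0].copy()                          # reference genome R
    n_AG = n_GA = site_A = site_G = 0.0
    for gen in range(generations):
        # mutation: G -> A is faster than A -> G
        flip_down = rng.random(pop.shape) < mu_GA
        flip_up = rng.random(pop.shape) < mu_AG
        pop = np.where(pop, ~flip_down, flip_up)
        # selection + drift: resample parents proportionally to fitness
        gc = pop.sum(axis=1)
        fitness = np.exp(-eta * (gc - n_opt) ** 2)
        pop = pop[rng.choice(N, size=N, p=fitness / fitness.sum())]
        if gen >= burn_in:
            # update the reference genome on near-fixation (>0.95 majority)
            freq_g = pop.mean(axis=0)
            site_A += (~ref).sum()
            site_G += ref.sum()
            fix_g = (freq_g > 0.95) & ~ref
            fix_a = (freq_g < 0.05) & ref
            n_AG += fix_g.sum()
            n_GA += fix_a.sum()
            ref[fix_g] = True
            ref[fix_a] = False
    return n_AG / site_A, n_GA / site_G, int(ref.sum())
```

With η = 0 the dynamics are neutral; increasing η should first boost the A->G (compensatory) rate before both rates fall toward zero, qualitatively reproducing the regimes described for Fig 5.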
Analytic approximation
Next, we studied the above model analytically in the regime of low mutation rates. In this regime, drift is the dominant mechanism, and we can model the process by assuming the population is represented by a single genome (or G+C content). Given the definitions above, the rate at which mutations that increase the G+C content enter the population is
while the rate of mutations that decrease the G+C content is
In such a drift-dominated regime, the fixation probability of a new mutation is:
where [52]. Therefore, the rate at which the G+C content increases or decreases is on average
Thus the set of equations for the dynamics of
Solving this for the steady state
As can be seen in Fig S8, the analytical result and the Wright-Fisher simulation are in good agreement.
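The elided equations can be written out in standard notation as a hedged reconstruction (n is the current number of G/C bases, L the sequence length, N the population size, and Δf± the fitness change of a G+C-increasing or -decreasing mutation), with the fixation probability taken in its classical diffusion form [52]:

```latex
% Rates at which G+C-increasing / -decreasing mutations enter the population
u_{+}(n) = (L-n)\,\mu_{A\to G}, \qquad u_{-}(n) = n\,\mu_{G\to A}
% Classical fixation probability of a mutation with selection coefficient s
\pi(s) = \frac{1-e^{-2s}}{1-e^{-2Ns}}
% Mean dynamics of the G+C content
\frac{d\langle n \rangle}{dt} \approx u_{+}(n)\,\pi\big(\Delta f_{+}(n)\big) - u_{-}(n)\,\pi\big(\Delta f_{-}(n)\big)
% Steady state: the two fluxes balance
u_{+}(n^{*})\,\pi\big(\Delta f_{+}(n^{*})\big) = u_{-}(n^{*})\,\pi\big(\Delta f_{-}(n^{*})\big)
```

Setting η = 0 makes π constant and recovers the neutral balance n*/L = μ_{A→G}/(μ_{A→G}+μ_{G→A}), the 30% neutral G+C content of the model.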
Allele frequency analysis
We used DNA sequences of 39 S. cerevisiae strains sequenced in the Saccharomyces Genome Resequencing Project (SGRP). Only intergenic two-allele SNPs with sequence data from more than 20 strains were considered informative. For each of these SNPs, the major allele was defined as the most abundant allele in the population and the minor allele as the least abundant one. A/T-gaining SNPs were defined as those where the major allele was C or G and the minor allele was A or T; A/T-losing SNPs were defined reciprocally. All other SNPs were defined as A/T conserving (see illustration in Fig 6). We further subdivided SNPs into two groups: SNPs in G/C flanking contexts and SNPs with at least one A or T in the flanking context, using the reference strain to determine the context. These subgroups were again subdivided into SNPs within low occupancy sequences and SNPs within high occupancy sequences (Fig 6). We analyzed the minor allele frequency distributions of these subgroups separately. Fig 6 shows the fractions of rare alleles (minor allele frequency <0.20) among A/T gain, A/T loss and A/T conserved SNPs within low or high occupancy sequences. We used a chi-squared test to reject the null hypothesis that the fraction of rare alleles is the same between A/T gain and A/T loss SNPs.
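The final comparison reduces to a 2x2 chi-squared test on rare vs. non-rare counts in the two SNP classes. A minimal sketch (one degree of freedom, p-value via the complementary error function; the SNP frequency lists and function names are placeholders):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic and p-value (1 df) for the
    2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 df, P(X > x) = erfc(sqrt(x / 2))
    return chi2, math.erfc(math.sqrt(chi2 / 2.0))

def rare_allele_test(maf_gain, maf_loss, rare_thr=0.20):
    """Compare the fraction of rare minor alleles between A/T-gaining and
    A/T-losing SNPs. maf_gain, maf_loss: minor allele frequencies."""
    rare_gain = sum(f < rare_thr for f in maf_gain)
    rare_loss = sum(f < rare_thr for f in maf_loss)
    chi2, p = chi2_2x2(rare_gain, len(maf_gain) - rare_gain,
                       rare_loss, len(maf_loss) - rare_loss)
    return rare_gain / len(maf_gain), rare_loss / len(maf_loss), p
```

In the paper's setting this would be run separately for low and high occupancy loci and for the two flanking-context groups, yielding p-values comparable to the p<2e-05 and p<3e-07 reported above.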
Supporting Information
Figure S1
Heterogeneous G+C content. A) Shown is the probability density function of the regional G+C content (20 bp windows) over the intergenic S. cerevisiae sequence (black), over simulated intergenic
genomes (red, see methods) and the theoretical binomial distribution (green, p~0.35, n
(0.32 MB EPS)
Figure S2
Heterogeneous trinucleotides distribution over low and high nucleosome occupancy sequences. A-B) Shown are log ratios of trinucleotide frequencies in low and high occupancy sequences (Y axis) against
trinucleotide frequencies in high occupancy sequences (X axis) over TSS proximal sequences (A) and TSS distal sequences (B). Each trinucleotide is depicted by three adjacent color coded squares.
Pairs of reverse-complemented trinucleotides are averaged and depicted together. In addition to the clear preference of A/T trinucleotides for low occupancy sequences (notice the abundant AAA), we note the differences in G/C trinucleotide preferences between the occupancy groups. C,D) Shown are the log ratios of trinucleotide frequencies (same as A,B) over TSS-proximal sequences (C) and TSS-distal sequences (D).
(0.39 MB EPS)
Figure S3
Yeast substitution rates are robustly correlated with the flanking nucleotides for all substitution types. Shown are the inferred substitution rates in TSS distal low occupancy sequences for the S.
cerevisiae lineage (the gray lineage, X axis) and other sensu stricto lineages (color coded, Y axis), for 16 different flanking nucleotide contexts. The slope of the linear fit (dashed line) for each lineage is roughly proportional to its branch length, but the model allows for differences in the substitution rates among lineages. A) A->C, T->G substitutions B) A->G, T->C substitutions C) A->T,
T->A substitutions D) C->A, G->T substitutions E) C->G, G->C substitutions F) C->T, G->A substitutions.
(0.90 MB EPS)
Figure S4
A/T gain and loss substitution rates at low and high occupancy loci. Shown are ratios of all substitution rates in low vs. high occupancy loci (Y axis) plotted against the substitution rates at high
occupancy loci (X axis) over TSS proximal (A) and distal sequences (B). Each point represents the rate of one substitution (color coded) in loci flanked by the 3′ and 5′ nucleotide depicted above the
data point. C,D) Substitution rates by their A/T dynamics in TSS-proximal (C) and distal (D) loci. Error bars depict the standard deviation. The trends are identical over transitions and transversions.
(0.66 MB EPS)
Figure S5
A/T gain and loss dynamics in different lineages of the sensu stricto clade. A-F) A/T loss and A/T gain rates over TSS-distal (bars) and proximal (gray ticks) loci for the lineages leading to the following species: S. cerevisiae (A), S. paradoxus (B), S. mikatae (C), S. kudriavzevii (D), the common ancestor of S. cerevisiae & S. paradoxus (E), and the common ancestor of S. cerevisiae & S. mikatae (F). G-L) Shown are the average G+C content of the following extant species and inferred ancestors, depicted for 10 levels of S. cerevisiae nucleosome occupancy (Methods): S. cerevisiae (G), S. paradoxus (H), S. mikatae (I), S. kudriavzevii (J), the common ancestor of S. cerevisiae & S. paradoxus (K), and the common ancestor of S. cerevisiae & S. mikatae (L).
(0.40 MB EPS)
Figure S6
G/C trinucleotides in TSS proximal low occupancy loci are more likely to be bound by a transcription factor. Shown is the fraction of G/C trinucleotides that are bound by one of the following
transcription factors: REB1, UME6, MSN2, MBP1 within TSS distal high occupancy loci (-H), TSS distal low occupancy loci (-L), TSS proximal high occupancy loci (+H), and TSS proximal low occupancy
loci (+L).
(0.25 MB EPS)
Figure S7
Coupling of A/T gaining and A/T losing substitutions at TSS-distal sequences. A) Shown is a comparison of the rate of A/T gaining substitutions near inferred sites of A/T losing (black) and A/T
gaining (red) substitutions, plotted for different ranges of nucleosome occupancy (X axis). B) Similar analysis of A/T loss substitution rates around inferred A/T gain and A/T loss events.
(0.40 MB EPS)
Figure S8
Theoretical evolutionary model. A-H) Evolutionary simulation in high G+C fitness landscape. Shown are results of a simulation identical to the one described in Figure 5, with the fitness landscape
changed to reflect optimality at a G+C content of 40% (higher than the 30% neutral content). I) Theoretical evolutionary model recapitulates the empirical A/T content dynamics observed in the
Wright-Fisher simulation. Shown are the substitution rates for each selection intensity of A/T-losing mutations (red) and A/T-gaining mutations (blue) as approximated analytically (lines), compared
to the empirical results (dots).
(0.40 MB EPS)
Figure S9
Allele frequency differences between A/T gain and A/T loss SNPs are robust to the rare-allele threshold. A–D) Minor allele frequency of non-G/C-context A/T loss, A/T gain, and A/T neutral SNPs across low
and high occupancy loci. Shown are the fraction of minor alleles at low occupancy loci with frequencies smaller than 0.14 (A), the fraction of minor alleles at high occupancy loci with frequencies smaller
than 0.14 (B), the fraction of minor alleles at low occupancy loci with frequencies smaller than 0.3 (C), and the fraction of minor alleles at high occupancy loci with frequencies smaller than 0.3 (D). E–F)
Cumulative distribution function of the minor allele frequency of non-G/C-context A/T loss, A/T gain, and A/T neutral SNPs at low occupancy loci (E) and high occupancy loci (F).
(0.37 MB EPS)
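The quantity tabulated in panels A–D is simply the fraction of a minor-allele-frequency vector below a cutoff, evaluated at two candidate thresholds; a minimal sketch (my own helper, not the study's pipeline):

```python
# Fraction of SNPs whose minor allele frequency (MAF) falls below a
# rare-allele cutoff -- the quantity shown at thresholds 0.14 and 0.3
# in panels A-D. Hypothetical helper, not the study's pipeline.
def fraction_rare(mafs, threshold):
    return sum(1 for f in mafs if f < threshold) / len(mafs)
```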
Figure S10
Parsimonious inference validates substitution rate heterogeneity and spatial coupling of A/T gain and loss events. A–B) Shown are A/T gain (blue) and A/T loss (red) substitution rates of the S.
cerevisiae lineage inferred using parsimony (flanking-context independent). Data are shown for TSS-distal (A) and TSS-proximal (B) DNA sequences of S. cerevisiae, S. paradoxus and S. mikatae. The A/T
loss rate is ~50% lower at low-occupancy than at high-occupancy loci. C–D) Rates of A/T gain and loss events are spatially coupled. Shown is a comparison of the rate of A/T gaining substitution near
parsimoniously inferred sites of A/T losing (black) and A/T gaining (red) substitution, plotted for different ranges of nucleosome occupancy (X axis) across TSS-distal (C) and TSS-proximal (D) loci.
This analysis is consistent with the context-dependent analysis. E-F) Similar analysis of A/T loss substitution rates around inferred A/T gain and A/T loss events across TSS-distal (E) and
TSS-proximal (F) loci.
(0.39 MB EPS)
Figure S11
Spatial coupling between A/T gain and A/T loss (horizon of 1 bp). Shown are the results of an analysis similar to the one shown in Figure 3, but with the horizon used for determining gain/loss
context set to only one nucleotide (instead of 5 nucleotides).
(0.42 MB EPS)
Figure S12
Spatial coupling between A/T gain and A/T loss (horizon of 3 bp). Shown are the results of an analysis similar to the one shown in Figure 3, but with the horizon used for determining gain/loss context
set to only three nucleotides (instead of 5 nucleotides).
(0.39 MB EPS)
Figure S13
Spatial coupling between A/T gain and A/T loss (horizon of 10 bp). Shown are the results of an analysis similar to the one shown in Figure 3, but with the horizon used for determining gain/loss
context set to ten nucleotides (instead of 5 nucleotides).
(0.46 MB EPS)
We thank the members of the Segal and Tanay labs for useful discussions and comments on the manuscript.
The authors have declared that no competing interests exist.
Work in the AT lab was supported by an ISF integrated technologies grant (http://www.isf.org.il/english/default.asp). The funders had no role in study design, data collection and analysis, decision to
publish, or preparation of the manuscript.
Articles from PLoS Computational Biology are provided here courtesy of Public Library of Science
While Math was probably one of my better subjects in High School, I was far from the best in my class. I chose a college in part based on the availability of Applied Mathematics and/or Computer
Science as a major. I expected that my High School courses would have prepared me well for college math, but unfortunately that was not the case. While I knew the math I had studied well, my approach
to learning math did not work at the college level. One of the first tests in my first math course Freshman year was a “multiple guess” test. The professor added 10 points to our score for each
correct answer, and subtracted 5 points from our score for each incorrect answer. My score was a -15. Yup, negative.
That first test, and a few that followed it, served as a wake-up call. I heard it… but it took me another couple of years to learn how to pay just as much attention to concepts as to procedures. In
hindsight I should also have given more weight in my college decision-making process to the reputation of my professors as teachers, instead of the reputation of the university as a whole. I
persevered nonetheless and earned my B.Sc. in Applied Mathematics, with a minor in Computer Science, in the late 1970s.
After college I worked for three years, including one as a teacher, then enrolled in graduate school. My two years of graduate work studying Finance, Marketing, and more Statistics were a wonderful
experience. I had many excellent teachers who explained complex and quantitative topics clearly and with passion. The approaches, teamwork, patience, and perseverance I had learned in college and on
the job were invaluable.
Since then I have evolved into a bit of a generalist (one who knows less and less about more and more). My wife and I moved to Maine and started new chapters in our lives in the early 1990s. I
missed teaching, so I signed up as a Substitute Teacher at several local High Schools, began teaching computer courses via our town’s Community Services programs, and started tutoring math students
after school – in addition to my computer and marketing consulting work.
I have now been tutoring math students for over 20 years, and occasionally teaching math classes when needed as a substitute. Along the way, I became the father of two boys who are now in high school
– which has added another perspective to my ramblings. I have started this blog to document, revisit, and hopefully improve upon many of the approaches that have helped students the most over these
years. Some of the postings are explanations that have evolved over many tutoring sessions. Others are musings about how math “should be” taught… some from personal experience, and some hoping I will
follow my own advice “next time”.
I hope this blog, despite being a perpetual “work in progress”, will:
• offer helpful approaches to topics that are frequent stumbling blocks for students.
• encourage both students and teachers to approach math conceptually as well as procedurally.
• let folks who find math challenging at times know that they are not alone, that many people who are “good at math” still experience the same challenges.
• encourage people to be patient with themselves as they seek to master new concepts and skills. Patience and perseverance are two critical attributes when seeking to master any concept or skill.
• encourage people to ask questions (of me or others) and engage in dialog with others (either written or verbal) about quantitative topics. I often learn a great deal more about a topic by writing
or talking about it than I do by just reading about it… thus this blog!
Mathematical uniqueness
In category theory, one sometimes denotes unique arrows by ! in a commutative diagram.
In general two objects in a category may have any number of morphisms between them (including none). In most cases a diagram only highlights a relationship between some arrows in the category. We
don't care about the specifics.
However, certain categorical constructions (e.g., products — essential for Cartesian Closed Categories) require a unique arrow as part of the definition. To call attention to the uniqueness of the
arrow, it's denoted by an exclamation mark.
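In a set-like setting one can sketch the universal property that forces this uniqueness — here in Lean 4, modeling objects as types and arrows as plain functions (the names `IsProduct` and `factor` are my own):

```lean
-- Sketch only: objects as types, arrows as functions. `P` with its two
-- projections is a product of `A` and `B` precisely when every pair of
-- arrows (f, g) out of any `X` factors through a *unique* arrow h —
-- the "!" arrow of the commutative diagram.
structure IsProduct (P A B : Type) (fst : P → A) (snd : P → B) : Prop where
  factor : ∀ {X : Type} (f : X → A) (g : X → B),
    ∃! h : X → P, (∀ x, fst (h x) = f x) ∧ (∀ x, snd (h x) = g x)
```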
Similarly, when reasoning formally, mathematicians will write ∃!x. P(x) as a shorthand for "There exists a unique x such that P is true of it".
This is quite a powerful statement: in your entire universe of logical discourse there is exactly one thing which has the given property.
Of course often mathematicians only care about uniqueness upto isomorphism. That is, there might be another thing y with the property, but you have a canonical way of converting x to y and y to x,
such that the two objects are equivalent.
The ∃! notation is not primitive, in fact, you can define it (in an equational first order logic) as a kind of abbreviation:
∃!x.P(x) iff ∃x.[P(x) ∧ ∀y.(P(y) ⇒ x=y)]
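This is exactly how proof assistants treat it. In Lean 4, for instance, `∃!` is an abbreviation that unfolds definitionally to the same formula (with the final equation oriented as y = x):

```lean
-- In Lean 4's core library, `∃! x, P x` abbreviates
--   ∃ x, P x ∧ ∀ y, P y → y = x
-- so the equivalence below holds by definitional unfolding alone.
example {α : Type} (P : α → Prop) :
    (∃! x, P x) ↔ ∃ x, P x ∧ ∀ y, P y → y = x :=
  Iff.rfl
```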
Actually, philosophers (at least ones who work on the foundations of mathematics) worry quite a bit about uniqueness, equality and what you take as a primitive notion. For example, one could define
equality by
x=y iff for all properties P, P(x) ⇔ P(y)
This is a meta-logical statement: you're quantifying over all predicates of your logic. Essentially you've just said that two objects are equal if inside the logic you cannot distinguish them.
Which means that for all intents and purposes, the object is unique within the logic.
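That meta-level notion (Leibniz equality) can also be internalized and proved equivalent to primitive equality — a quick Lean 4 sketch, with names of my own choosing:

```lean
-- Leibniz equality: two things are equal when no predicate tells them
-- apart. The forward direction instantiates h at the predicate
-- "equals x", which immediately recovers x = y.
def LeibnizEq {α : Type} (x y : α) : Prop := ∀ P : α → Prop, P x ↔ P y

theorem leibnizEq_iff_eq {α : Type} (x y : α) : LeibnizEq x y ↔ x = y := by
  constructor
  · intro h
    exact (h (fun z => x = z)).mp rfl
  · intro h
    subst h
    intro P
    exact Iff.rfl
```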
Most mathematicians are not troubled by such things.