Ok, I am so totally frustrated with my programming class, I am ready to quit, but I don't give up that easily!! My instructor wants us to trace these arrays and give him the output. I can do this in
C#, but that doesn't help me understand it, and won't help me on a written test. This is what I have...
int[] val = {2,4,6,8,10,11,12,14,16};
int key=10, lowPos=0, highPos=val.Length-1, midPos;
midPos = (lowPos+highPos)/2;
Console.Write("{0} ",val[midPos]);
if (key<val[midPos])
else if (key > val[midPos])
while(val[midPos] != key && lowPos <= highPos);
I know that the answer is 10, but I am having trouble figuring out how to trace this to get that value.
HELP me please!!!
Thank you
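For anyone tracing the same exercise: the if/else bodies are missing from the quoted snippet, but assuming the usual binary-search updates (highPos = midPos - 1 and lowPos = midPos + 1) inside a do-while, the trace can be sketched in code. The loop records each midpoint value it would print:

```python
# Sketch of the binary-search trace, assuming standard branch bodies
# (they are missing from the quoted C# snippet).
val = [2, 4, 6, 8, 10, 11, 12, 14, 16]
key, low, high = 10, 0, len(val) - 1

visited = []
while True:                      # models the C# do-while loop
    mid = (low + high) // 2
    visited.append(val[mid])     # the Console.Write in the original
    if key < val[mid]:
        high = mid - 1
    elif key > val[mid]:
        low = mid + 1
    if val[mid] == key or low > high:
        break
```

With these assumed bodies, the very first midpoint is index (0+8)/2 = 4, and val[4] is 10, so the loop prints 10 once and stops, which matches the expected answer.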
Z-12: Correlation and Simple Least Squares Regression - Westgard QC
Written by Madelon F. Zady, EdD, Assistant Professor, Clinical Laboratory Science Program, University of Louisville, Louisville, Kentucky
August 2000
Learn about r squared, Pearson products, and other things that will make you want to regress.
Those of you who have had a prior class in statistics or have experience with laboratory method evaluations will be familiar with statistical relationships or correlations, specifically the Pearson
Product Moment Correlation. Commonly known as the correlation coefficient, or r, it is the statistic most frequently used in all of laboratory medicine. In this lesson, we are going to consider the
relationship between two metric (numerical) variables and the interpretation of r. We will cover the use of correlation for comparing the results of two methods later. (See also Westgard, 1999, Basic
Method Validation.)
For now, we are going to consider a common correlation encountered in clinical chemistry - the increase of cholesterol values with age. Older patients usually have higher cholesterol levels as
compared to younger patients. If we were to check the correlation between age and cholesterol it would no doubt be significant. How was such a relationship first established? Most likely there was an
initial casual observation followed by a statistical test that proved significant.
Scattergram and Correlation
A plot of cholesterol concentration against age is called a scattergram because the points scatter about some kind of general relationship. From the graph we can see a linear relationship - as age increases, so does the cholesterol concentration. It looks like a first-order relationship, i.e., as age increases by an amount, cholesterol increases by a predictable amount. The relationship appears so strong that knowing a person's age (predictor or independent variable) could help to infer something about his or her cholesterol level (criterion or response or dependent variable).
This scattergram demonstrates a positive relationship or positive correlation because both variables increase in the same direction. As age increases, cholesterol increases. If one variable increased
while the other decreased, that would be a negative or inverse correlation and the line would decline.
Correlation Coefficient
There are a lot of other relationships that we could graph, for example, smoking and age. We would expect people from 0-16 years of age to smoke very little, from 16-65 to smoke more, and from 65-80 to once again smoke less. This would not be a linear relationship, so not all relationships increase or decrease together indefinitely (or linearly). And what is more, not all relationships are linear.
Just how strongly related are two variables like age and cholesterol? That question can be answered by examining the correlation coefficient, Rho or r. The correlation coefficient represents the
strength of an association and is graded from zero to 1.00. It has no units, but may be positive or negative. The table below provides a rule of thumb scale for evaluating the correlation coefficient:
│ Strength of Correlation │
│ Size of r │ Interpretation │
│ 0.90 to 1.00 │ Very high correlation │
│ 0.70 to 0.89 │ High correlation │
│ 0.50 to 0.69 │ Moderate correlation │
│ 0.30 to 0.49 │ Low correlation │
│ 0.00 to 0.29 │ Little if any correlation │
Here we are talking about the Pearson Product Moment Correlation, or r, that is calculated from the following formula, which predicts Y from X (y/x):

r = Σ(zx × zy) / (N - 1)

where zx and zy are the z-scores of the paired X and Y values.
The algebraic basis of r is the z-score, and this formula represents something called the Fisher z transformation. You may remember other formulae for the calculation of the correlation coefficient,
for example, another calculation is the infamous "Raw Score Formula." It takes the average student about twenty minutes to calculate a correlation by hand (with calculator) using the raw score
formula. However, the computer can perform the feat in seconds. (Try the Paired-data Calculator that is part of the method validation toolkit on this website.)
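The z-score calculation the lesson describes is short enough to sketch directly. The age and cholesterol numbers below are invented for illustration; only the formula, r = Σ(zx × zy) / (N - 1), comes from the lesson:

```python
# Pearson r computed from z-scores, as described in the lesson.
# The data values are made up for illustration.
from statistics import mean, stdev

ages = [25, 35, 45, 55, 65]
chol = [180, 195, 205, 220, 240]

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    sx, sy = stdev(xs), stdev(ys)
    # sum the products of the paired z-scores, divided by N - 1
    zs = [((x - mx) / sx) * ((y - my) / sy) for x, y in zip(xs, ys)]
    return sum(zs) / (len(xs) - 1)

r = pearson_r(ages, chol)
```

For these invented data the result is about 0.99, which the table above would grade as a very high correlation.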
When we talk about correlation this way, we are saying that Y is dependent upon X, which may suggest or imply that X causes Y. We need to be careful when we talk about causality! A correlation
between two variables does not always mean a causal relationship, i.e., X causes Y to happen. There may be one or more variables intervening between X and Y, such as an unexamined variable like Z. In
this example, we really cannot say that age causes cholesterol to increase. As we all know, there are many intervening variables that are important: genetics, exercise and diet to name a few. And,
temporal sequence is also important when looking at causality, as X would always have to precede Y's occurrence in order to be causal. In our cholesterol example, this is not a problem. But if we
wanted to say that exercise causes cholesterol to decrease, then we had better be sure that we first evaluate patients' cholesterol levels, put them on an exercise program and then measure their
cholesterol levels after their participation in the program.
What if we wanted to use r to give us some idea about how closely the results of two methods compare? Presumably a high correlation between the results of two glucose methods would mean that the
methods are comparable. Those who are knowledgeable about the use of statistics in method comparison studies will tell us that the correlation coefficient is not a fool-proof statistic and must be
interpreted carefully. Any set of data where the points fall all on a line will give a high correlation coefficient. If a glucose method were consistently 50 mg/dl higher than another method, the
results would fall on the line and the correlation coefficient would be high, even though there is a serious systematic error or inaccuracy between the methods.
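The constant-bias pitfall is easy to demonstrate numerically. In this sketch (invented glucose values), one method reads exactly 50 mg/dl higher than the other, yet r is still 1.0:

```python
# Why r alone cannot detect systematic error: a method that reads a
# constant 50 mg/dl high still correlates perfectly with the other method.
from statistics import mean

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

method_a = [80, 120, 160, 200, 240]           # invented glucose results
method_b = [x + 50 for x in method_a]         # constant 50 mg/dl bias
r = pearson_r(method_a, method_b)             # still 1.0 despite the bias
```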
Coefficient of Determination
The square of the correlation coefficient or r² is called the coefficient of determination. We will examine this r² later in regression analysis.
Simple Linear Regression or Ordinary Least Squares Prediction
If we really want a statistical test that is strong enough to attempt to predict one variable from another or to examine the relationship between two test procedures, we should use simple linear
regression. Regression is more protected from the problems of indiscriminate assignment of causality because the procedure gives more information and demonstrates strength. In fact, the r that we
have been talking about above is only one part of regression statistics.
Let's see how this prediction works in regression. Let us say that we have a data set for two variables X and Y. We have calculated the mean for each of these variables. Now let's say that we put all
of the Y values into a container and draw one value out of that container at random. Before we look at that Y value, we first are going to guess what the number is. What value should we guess? What
value would be most likely? The best guess for the value of Y would be the mean value for the Y data - the arithmetic average is always a good guess. But statisticians have worked out a better
method. Another variable (X) can be used to approximate the Y. If Y is dependent upon this X, then the Y estimated this way will be closer to the true Y value than just guessing Y's mean.
Mathematical Formula for a Straight Line
In its simplest form, regression is essentially the formula for a straight line that you learned in beginning algebra. In essence, the prediction of Y from X is dependent upon the mathematical
formula for a straight line. The first time you saw this formula it appeared as follows:
y = mx + b
In regression, the equation for the straight line is recast as y = bx + a. This change in terminology leads to confusion. Here a is the y-intercept or constant and b is the coefficient or slope of
the line. A few more words of caution about regression - as in all of statistics there are certain assumptions: the x value is a true measure, both X and Y distributions are normal, and
homoscedasticity, i.e., the variance of y is the same for each value of x. Also statisticians often write the formula this way: y = bx + a + e, where e represents the error in prediction.
Interpreting the Scattergram
The objective in simple regression is to generate the best line between the two variables (the tabled values of X and Y), i.e., the best line that fits the data points. Regression uses a formula to
calculate the slope, then another formula to calculate the y-intercept, assuming there is a straight line relationship. The best line, or fitted line, is the one that minimizes the distances of the
points from the line, as shown in the accompanying figure. Since some of the distances are positive and some are negative, the distances are squared to make them additive, and the best line is one
that gives lowest sum or least squares. For that reason, the regression technique will sometimes be called least squares analysis.
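The least-squares slope and intercept can be computed directly from the textbook formulas, b = Σ((x - mx)(y - my)) / Σ((x - mx)²) and a = my - b·mx. The data values below are invented for illustration:

```python
# Least-squares fit of y = bx + a, using the standard formulas.
# The age/cholesterol values are invented for illustration.
from statistics import mean

xs = [25, 35, 45, 55, 65]        # ages
ys = [180, 195, 205, 220, 240]   # cholesterol

mx, my = mean(xs), mean(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)    # slope (coefficient)
a = my - b * mx                       # y-intercept (constant)

predicted = [a + b * x for x in xs]   # the fitted line
```

One check on the arithmetic: the fitted line always passes through the point of means, so the prediction at x = mx equals my.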
1. Westgard, J. O. Basic Method Validation. Madison, WI: Westgard QC, Inc., 1999.
Accounting Cost Behavior
7. Relevant range
We mentioned a relevant range before when we talked about step-variable costs:
Relevant range is the range of activity over which a given cost behavior pattern remains valid.
Friends Company can produce from 10,000 to 50,000 valves per year. So, the relevant range for Friends Company is the range of normal activity from 10,000 to 50,000 units. Within this relevant range
all fixed costs, such as rent, equipment depreciation, and administrative salaries remain constant. If Friends Company decides to produce more valves, they have to hire additional staff and rent more
equipment, which will result in an increase of fixed costs. On the contrary, if the production level is reduced, Friends Company has to reduce staff and rental expenses, so fixed costs will decrease.
8. Methods for separating mixed costs
Management usually needs to know what fixed and variable costs are included in mixed costs. This is required for budgeting and planning purposes, among others. Using the total costs and the
associated activity level, it is possible to break out the fixed and variable components. There are three methods for separating a mixed cost into its fixed and variable components:
• High-low method
• Scatter-graph method
• Method of least squares
8.1. High-low method
When using the high-low method, the highest point and the lowest point are used to create the cost formula. The high point is defined as the point with the highest activity and the low point is
defined as the point with the lowest activity. Using the lowest and highest activity levels, it is possible to estimate the variable cost per unit and the fixed cost component of mixed costs.
Let us assume that Friends Company incurred the following costs during the past six months:
Illustration 14: Total costs of Friends Company over the past six months
│ Month │Valves Production│Total Cost │
│July │ 10,000 │ $44,000 │
│August │ 15,000 │ $60,000 │
│September│ 23,000 │ $85,000 │
│October │ 21,000 │ $75,000 │
│November │ 19,000 │ $70,000 │
│December │ 28,000 │ $98,000 │
The lowest level of production was in July and the highest level of production was in December. The difference between the number of units produced and the difference between the total cost at the
highest and lowest levels of production are shown below:
│ │ Production │Total Cost│
│Highest Level │28,000 units │ $98,000 │
│Lowest Level │10,000 units │ $44,000 │
│Difference │18,000 units │ $54,000 │
As the total fixed cost does not change with changes in the production volume, the difference in the total costs represents the change in the total variable costs. So, if we divide the difference in
the total costs by the difference in the production levels, we will have an estimate of the variable cost per unit:
Variable Cost per Unit = $54,000 ÷ 18,000 units = $3
The variable cost per unit is $3. The fixed cost will be the same at both the highest and lowest levels of production because fixed costs don't change. In order to estimate the fixed costs, we have
to subtract the estimated total variable cost from the total cost:
Total Cost = Variable Cost per Unit x Units of Production + Fixed Cost
Highest level:
$98,000 = $3 x 28,000 + Fixed Cost
Fixed Cost = $14,000
Lowest level:
$44,000 = $3 x 10,000 + Fixed Cost
Fixed Cost = $14,000
The fixed costs equal $14,000. Knowing the fixed costs and the variable cost per unit we can estimate the total costs for the planned production level by using the formula below:
T = F + V x N,
where T is the total cost, F is the fixed cost, V is the variable cost per unit, and N is the number of units to be produced.
Using the formula presented above and the fixed cost and variable cost per unit, we obtain the following formula for our example:
T = $14,000 + $3 x N
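In code, the same high-low computation might look like this. The figures are the Friends Company numbers from the table above:

```python
# High-low method applied to the Friends Company data above.
data = {  # month -> (valves produced, total cost in dollars)
    "July": (10_000, 44_000), "August": (15_000, 60_000),
    "September": (23_000, 85_000), "October": (21_000, 75_000),
    "November": (19_000, 70_000), "December": (28_000, 98_000),
}

# High and low points are defined by activity level (units produced)
high = max(data.values())   # December: (28_000, 98_000)
low = min(data.values())    # July:     (10_000, 44_000)

variable_per_unit = (high[1] - low[1]) / (high[0] - low[0])   # $3 per valve
fixed = high[1] - variable_per_unit * high[0]                 # $14,000

def total_cost(units):
    """T = F + V x N"""
    return fixed + variable_per_unit * units
```

With these values, a planned production of, say, 20,000 valves would be budgeted at $14,000 + $3 × 20,000 = $74,000.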
The methodology presented above is the high-low method of separating mixed costs. The advantage of this method is its simplicity. However, this method ignores all data points other than the highest
and the lowest activity levels. The highest and the lowest activity points often do not represent the rest of the points, which leads to a possible inaccuracy of the final results. This is the main
disadvantage of this method.
In order to get more precise results, it is better to use the scatter-graph method or the method of least squares.
P55-UD4P and 1866 Ram Setting ?? !!!
I have a P55-UD4P, i5 750 and 2x2GB 1866 Team Memory (TXD36144M1866HC9TC) setup.
I am trying to get full perfomance from my Ram but the default bios is set too 1333.
I have tried enabling XMP but it causes the Pc to crash after 1-2 minutes
Not looking to overclock anything but rather small tweaks to get what i paid for (1866 RAM!)
Any help would be greatly appreciated
April 16, 2010 11:10:13 AM
If you want higher RAM multipliers, buy a Core i7 processor, since those have the upper multipliers unlocked. If you want to overclock instead, do it manually.
Crashman said:
If you want higher RAM multipliers, buy a Core i7 processor, since those have the upper multipliers unlocked. If you want to overclock instead, do it manually.
would i be able to achieve 1866 by overclocking since i don't have any extra cooling system?
If so, what parameters do i have to setup in the bios ?
April 16, 2010 1:19:49 PM
bilbat said:
The i5-750 is limited to 6/8/10 memory multipliers, so with a 'stock' Bclk of 133, the best you'll be able to get is 1333; to get roughly 1866, you'd need the Bclk at 186, which would bump your CPU
to 3.72 GHz - you'd be back to close to stock by then lowering the CPU multiplier to 15...
Would lowering the Cpu Multiplier to 15 affect its perfomance ?
April 16, 2010 2:21:08 PM
No it won't, but, then again, neither will you ever 'see' any real-world performance increase by going from the 1333 you'd normally get, to 1866; memory frequency doesn't 'scale' to actual
performance on these platforms - just increases the heat load on the memory controller, and makes a terribly large job of 'tuning' the whole shebang!
April 16, 2010 2:52:05 PM
antony_tem said:
Would lowering the Cpu Multiplier to 15 affect its perfomance ?
When you raise the Base Clock (bclk), understand that that overclocks the entire processor and memory systems.
I believe your stock CPU multi is 20, and the standard bclk setting is 133. 133 x 20 = 2,660. And there is your 2.6Ghz
For Memory it's the same math: The memory divider may be set with 6, 8, or 10. At the stock 133 mhz Base Clock, that nets 798, 1064, and 1333. Your system does not support dividers higher than 10. So
the only thing left to change is the Base Clock. But this takes everything else with it.
So - If you raise the Base Clock, then what happens? Let's try 150, so we get nice, neat numbers:
150 bclk x 20 cpu multi = 3,000 for a 3 GHz Processor overclock
150 bclk x 10 memory multi = 1500, for DDR3 1500
Using the 186 number Bilbat gave:
186 x 20 = 3720 for 3.7Ghz
(this is a high overclock, by the way: 3.7/2.6 = 42%. You may not be able to count on this being stable or sustainable. And certainly not without raising voltages and heat.)
186 x 10 = 1860, for your memory's rated speed.
This is not likely to be stable without superior components, a lot of knowledgeable tweaking, and a superior cooling system. Let me say that again: a 42% overclock is not likely to be stable unless
you have superior components, a superior cooling system, and some knowledgeable tweaking.
Therefore, as Bilbat said, you will likely have to LOWER your processor multiplier in order to reach something reliable.
So, let's look at the math again:
At a 186 Base Clock
(maybe your motherboard can sustain this... maybe not)
, we know that your memory will run at 1860 Mhz.
186 x 10 = 1,860, or 1.8 Ghz = Way slower than stock, right?
186 x 15 = 2,790, or 2.7Ghz. = a little faster than stock, but a very mild (2.7/2.6 = 4%) overclock
186 x 17 = 3,162 or 3.1Ghz = A respectable overclock (3.1/2.6 = 19%)
The issue here is you overbought memory for the system you have. By this, I mean your RAM is fast enough that you have to strongly overclock everything else to get to your memory's rated speed.
My personal recommendation is to accept that you overbought and not worry about it. But if you must, then forget about getting to your memory's "rated" speeds. Rather, split the difference with a comfortable CPU overclock and be secure in the knowledge that any issues are likely not the fault of your RAM.
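The base-clock arithmetic in this thread is easy to tabulate. This sketch only reproduces the numbers discussed above (20x CPU multiplier, 6/8/10 memory dividers, and the 133/150/186 base clocks); nothing here is measured from real hardware:

```python
# Tabulating CPU and memory speeds for the base clocks discussed above.
cpu_multi = 20                   # stock i5-750 multiplier per the thread
mem_dividers = (6, 8, 10)        # the only dividers this platform supports

speeds = {}
for bclk in (133, 150, 186):     # stock, example, and target base clocks
    speeds[bclk] = {
        "cpu_mhz": bclk * cpu_multi,
        "mem_mhz": [bclk * d for d in mem_dividers],
    }
```

At a 186 base clock this reproduces the 3720 MHz CPU / 1860 MHz memory figures above, making plain why DDR3-1866 forces a large CPU overclock on this platform.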
April 16, 2010 3:09:31 PM
Thanks for the great explanation
I tried changing the base clock to 186 and the cpu multiplier to 15. but saw my cpu temp rise from 45 to 85
so i guess there not much to do with what i have.
Thanks again
April 16, 2010 3:36:05 PM
Almost any kind of 'fiddling with' any Intel pretty much requires that you dump the original 'rotary postage stamp' stock coolers - BUT - you don't need to spend a lot; nearly any aftermarket cooler will have two or three times the heat-moving ability; you only need to consider the top-end coolers if you're planning at or over 4GHz, 24/7 with a relatively heavy workload!
April 16, 2010 10:01:11 PM
It sounds like your MOTHERBOARD BIOS assumes you're trying to overclock the CPU and has adjusted the core voltage accordingly. Try these settings:
VCore 1.25V
Uncore: 1.35V (sometimes called QPI/DRAM, IMC, or similar)
VDIMM: 1.65V
with your 185x15 setting.
Reasoning with Computers
Brian Harvey
University of California, Berkeley
``Intelligent agents'' are said to be right around the corner; some products are already available. These programs are supposed to watch you work, make inferences about your preferred style, and
automate repetitive tasks. They will sift through all the nonsense on the World Wide Web and find only those items you'll really want to read. They'll be teammates or opponents in computer games.
How can computer programs make inferences? I chose logic puzzles as a testbed for experimentation. Logic puzzles are a relatively easy test case because each puzzle forms a ``closed world''; we know
in advance all the possible names, ages, house colors, or whatever characteristics the puzzle asks us to match up. There is no new computer science here. I am more or less recapitulating the early
history of computer inference systems. But educationally that may be more helpful than the complexities of the most modern attempts.
An Inference System for One Logic Puzzle
I first worked on this project when I wrote the third volume of Computer Science Logo Style (MIT Press, 1987). In that volume the task I set for myself was to introduce some of the topics in the
undergraduate computer science curriculum to a younger audience, and to illustrate the ideas with Logo programs rather than with formal proofs. I wanted to discuss logic as part of discrete
mathematics, and thought of logic puzzles as the task for an illustrative Logo program.
The program I wrote solved only one logic puzzle, taken from Mind Benders Book B-2, by Anita Harnadek (Critical Thinking Press, 1978):
A cub reporter interviewed four people. He was very careless, however. Each statement he wrote was half right and half wrong. He went back and interviewed the people again. And again, each
statement he wrote was half right and half wrong. From the information below, can you straighten out the mess?
The first names were Jane, Larry, Opal, and Perry. The last names were Irving, King, Mendle, and Nathan. The ages were 32, 38, 45, and 55. The occupations were drafter, pilot, police sergeant,
and test car driver.
On the first interview, he wrote these statements, one from each person:
□ 1. Jane: ``My name is Irving, and I'm 45.''
□ 2. King: ``I'm Perry and I drive test cars.''
□ 3. Larry: ``I'm a police sergeant and I'm 45.''
□ 4. Nathan: ``I'm a drafter, and I'm 38.''
On the second interview, he wrote these statements, one from each person:
□ 5. Mendle: ``I'm a pilot, and my name is Larry.''
□ 6. Jane: ``I'm a pilot, and I'm 45.''
□ 7. Opal: ``I'm 55 and I drive test cars.''
□ 8. Nathan: ``I'm 38 and I drive test cars.''
This puzzle includes four categories: first name, last name, job, and age. In each category there are four individuals; for example, the first name individuals are Jane, Larry, Opal, and Perry.
For every possible pairing of individuals in different categories (for example, Jane and pilot), the program keeps track of what it knows about whether or not they go together. Initially it knows
nothing, but, for example, after the first interview the program knows that Jane is not King, since the four statements are from different people.
For any given pairing, the program can know nothing, can know that the two individuals are the same, or can know that the two individuals are not the same. But there is also a fourth, perhaps more
interesting, situation. After the first interview, the program does not know whether or not Jane and Irving are the same person, nor whether Jane and age 45 are the same person. But it does know that
if one of these is true, the other must be false, and vice versa. This fact is represented as a link between the Jane-Irving pair and the Jane-45 pair.
The program works by making assertions based on the puzzle statement. From the first interview we get the following assertions:
• Jane-King is false.
• Jane-Nathan is false.
• Larry-King is false.
• Larry-Nathan is false.
• Jane-Irving is linked to Jane-45.
• King-Perry is linked to King-driver.
• Larry-sergeant is linked to Larry-45.
• Nathan-drafter is linked to Nathan-38.
As each assertion is recorded in the database, the program tries to use rules of inference to draw conclusions from the new assertion and the assertions already recorded. Here are the rules:
• Elimination rule: If the X-Y pairs are false for all but one individual Y in a given category, then the remaining possibility must be true.
• Uniqueness rule: If some X-Y pair is true, then every other X-Z pair must be false if Z is in the same category as Y.
• Transitive rules: If X-Y is true, and Y-Z is true, then X-Z must also be true. If X-Y is true, and Y-Z is false, then X-Z must be false.
• Link falsification: If X-Y is linked to Z-W, and we learn that X-Y is true, then Z-W must be false.
• Link verification: If X-Y is linked to Z-W, and we learn that X-Y is false, then Z-W must be true.
For each new assertion, there are only a finite number of possible inferences using these rules. If we assert that X-Y is true, then the program must check the uniqueness rule for each other
individual in the same category as X, and for each other individual in the same category as Y. It must check the transitive rules for the other known pairs involving X or Y. And if there was already
a link involving X-Y, then it must apply the link falsification rule.
Similarly, if we assert that X-Y is false, then the program must check the elimination rule, the second transitive rule, and the link verification rule.
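The bookkeeping described above can be sketched compactly. This is my own illustrative data layout, not the book's Logo representation, and only the elimination and uniqueness rules are shown; the first-interview "false" assertions feed the engine:

```python
# Sketch of the pair database: for each pair of individuals from different
# categories we record True or False; absent pairs are unknown.
CATEGORIES = {
    "first": ["Jane", "Larry", "Opal", "Perry"],
    "last":  ["Irving", "King", "Mendle", "Nathan"],
}

facts = {}  # (x, y) with x < y  ->  True or False

def key(a, b):
    return (a, b) if a < b else (b, a)

def category_of(name):
    return next(c for c, members in CATEGORIES.items() if name in members)

def assert_pair(a, b, value):
    """Record a fact, then apply the uniqueness and elimination rules."""
    if facts.get(key(a, b)) == value:
        return                              # already known; stop recursing
    facts[key(a, b)] = value
    if value:
        # uniqueness rule: a true pair falsifies every sibling pairing
        for x, other in ((a, b), (b, a)):
            for z in CATEGORIES[category_of(other)]:
                if z != other:
                    assert_pair(x, z, False)
    else:
        # elimination rule: if only one possibility is left, it must be true
        for x, other in ((a, b), (b, a)):
            remaining = [z for z in CATEGORIES[category_of(other)]
                         if facts.get(key(x, z)) is not False]
            if len(remaining) == 1:
                assert_pair(x, remaining[0], True)

# The four "false" assertions from the first interview:
for first in ("Jane", "Larry"):
    for last in ("King", "Nathan"):
        assert_pair(first, last, False)
```

After these four assertions neither rule can fire, which matches the puzzle state after the first interview: Jane and Larry each still have two possible last names.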
An Inference System for Many Logic Puzzles
This first program worked fine for this particular puzzle, but couldn't handle other puzzles. The most obvious problem was that the link verification and link falsification rules apply only to this
puzzle. (The other three rules apply to any logic puzzle.)
In preparing the second edition of Computer Science Logo Style (MIT Press, 1997), I decided to generalize these link rules. In Anita Harnadek's puzzle, two pairs are linked only through mutual
exclusion; exactly one of the pairs must be true. More generally, two pairs might be linked by an implication:
If X-Y is true/false, then Z-W must be true/false.
So each link in the first program became two implications in the second version. For example:
If Jane-Irving is true, then Jane-45 must be false.
If Jane-Irving is false, then Jane-45 must be true.
These two implications are logically independent; one cannot be derived from the other.
Once the program (reproduced in appendix A) can record implications, we replace the link falsification and link verification rules with three new rules about implications:
• Contrapositive rule: If P implies Q, then not-Q implies not-P.
• Implication rule (modus ponens): If P implies Q, and P is true, then Q must be true.
• Contradiction rule: If P implies Q, and P also implies not-Q, then P must be false.
In these rules, P and Q represent statements about the truth or falsehood of a pair, e.g., ``Jane-Irving is false.''
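The implication machinery can be sketched the same way. In this Python illustration (again mine, not the original Logo), a literal is a (pair, truth) tuple and an implication is an ordered pair of literals; `infer` first closes the implication set under the contrapositive rule, then applies the implication and contradiction rules to a set of known facts until nothing new appears:

```python
# A literal is a (pair, truth) tuple; an implication (P, Q) reads "P implies Q".
def neg(lit):
    pair, truth = lit
    return (pair, not truth)

def close(implications):
    """Contrapositive rule: 'P implies Q' also yields 'not-Q implies not-P'."""
    return set(implications) | {(neg(q), neg(p)) for (p, q) in implications}

def infer(facts, implications):
    """Grow `facts` using the implication (modus ponens) and contradiction rules."""
    implications = close(implications)
    changed = True
    while changed:
        changed = False
        for p, q in implications:
            if p in facts and q not in facts:       # implication rule
                facts.add(q)
                changed = True
        for p in {p for p, _ in implications}:
            qs = {q for p2, q in implications if p2 == p}
            if any(neg(q) in qs for q in qs) and neg(p) not in facts:
                facts.add(neg(p))                   # contradiction rule
                changed = True
    return facts

# The Jane-Irving / Jane-45 link, recorded as two independent implications:
links = {(("Jane-Irving", True), ("Jane-45", False)),
         (("Jane-Irving", False), ("Jane-45", True))}
facts = infer({("Jane-45", True)}, links)
print(("Jane-Irving", False) in facts)   # → True
```

A real solver would interleave this with the truth-grid machinery, so that each newly derived fact also triggers the uniqueness and elimination rules.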
The modified program can solve not only Anita Harnadek's puzzle but also several others that I tried, taken from a Dell puzzle book.
By coincidence, my work on the second edition of Computer Science Logo Style happened at about the same time that Harold Abelson, Gerald Jay Sussman, and Julie Sussman were working on the second
edition of their brilliant text, Structure and Interpretation of Computer Programs (MIT Press, 1996). Their second edition introduced several new topics, one of which was -- here's the coincidence --
logic puzzles. I tried out my program on their example puzzles, such as this one:
Five schoolgirls sat for an examination. Their parents -- so they thought -- showed an undue degree of interest in the result. They therefore agreed that, in writing home about the examination,
each girl should make one true statement and one untrue one. The following are the relevant passages from their letters:
□ Betty: ``Kitty was second in the examination. I was only third.''
□ Ethel: ``You'll be glad to hear that I was on top. Joan was second.''
□ Joan: ``I was third, and poor old Ethel was bottom.''
□ Kitty: ``I came out second. Mary was only fourth.''
□ Mary: ``I was fourth. Top place was taken by Betty.''
What in fact was the order in which the five girls were placed?
Since this puzzle is very similar in form to the other one, with paired true and false statements, I thought my program would solve it easily. Here is how I represented the puzzle in Logo:
to exam
category "person [Betty Ethel Joan Kitty Mary]
category "place [1 2 3 4 5]
xor "Kitty 2 "Betty 3
xor "Ethel 1 "Joan 2
xor "Joan 3 "Ethel 5
xor "Kitty 2 "Mary 4
xor "Mary 4 "Betty 1
print []
end
To my dismay, my program was unable to discover any facts at all from this puzzle!
The program used by Abelson and Sussman does not work by making inferences from known facts. Instead, it works by backtracking: trying every possible combination of names with places, and rejecting
the ones that lead to a contradiction. This is more of a ``brute force'' approach; many possible combinations must be tried. (In this example, there are 120 possibilities, five factorial.)
A Logo version of the backtracking program is in appendix B. Here is how the program can be used to solve the examination puzzle:
to exam
track [[Betty Ethel Joan Kitty Mary] [1 2 3 4 5]] ~
[[not equalp (is "Kitty 2) (is "Betty 3)]
[not equalp (is "Ethel 1) (is "Joan 2)]
[not equalp (is "Joan 3) (is "Ethel 5)]
[not equalp (is "Kitty 2) (is "Mary 4)]
[not equalp (is "Mary 4) (is "Betty 1)]]
end
The general backtracking procedure TRACK takes two inputs. The first is a list of lists, one for each category, naming the individuals in that category. (The categories themselves don't have names in
this program.) The second input is also a list of lists, each of which is a Logo expression whose value must be TRUE for a correct solution. The tests use a predicate procedure IS that takes two
inputs and outputs true if they correspond to the same person in the particular proposed combination that the program is trying.
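As a cross-check, the same exhaustive search for the examination puzzle is easy to express in Python (an illustration written here; the originals are in Logo and Scheme). It tries all 120 orderings and keeps those in which each girl's letter contains exactly one true statement:

```python
from itertools import permutations

girls = ["Betty", "Ethel", "Joan", "Kitty", "Mary"]

def solutions():
    for order in permutations(range(1, 6)):
        place = dict(zip(girls, order))
        letters = [
            (place["Kitty"] == 2, place["Betty"] == 3),   # Betty's letter
            (place["Ethel"] == 1, place["Joan"] == 2),    # Ethel's letter
            (place["Joan"] == 3, place["Ethel"] == 5),    # Joan's letter
            (place["Kitty"] == 2, place["Mary"] == 4),    # Kitty's letter
            (place["Mary"] == 4, place["Betty"] == 1),    # Mary's letter
        ]
        # one true statement and one untrue one in each letter
        if all(a != b for a, b in letters):
            yield place

print(list(solutions()))
# → [{'Betty': 3, 'Ethel': 5, 'Joan': 2, 'Kitty': 1, 'Mary': 4}]
```

Only one of the 120 orderings survives, so the puzzle's answer is Kitty first, Joan second, Betty third, Mary fourth, and Ethel last.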
The backtracking procedure can also be used to solve the earlier puzzle about the cub reporter:
to cub.reporter
track [[Jane Larry Opal Perry]
[Irving King Mendle Nathan]
[32 38 45 55]
[drafter pilot sergeant driver]] ~
[[differ [Jane King Larry Nathan]]
[says "Jane "Irving 45]
[says "King "Perry "driver]
[says "Larry "sergeant 45]
[says "Nathan "drafter 38]
[differ [Mendle Jane Opal Nathan]]
[says "Mendle "pilot "Larry]
[says "Jane "pilot 45]
[says "Opal 55 "driver]
[says "Nathan 38 "driver]]
end
to differ :things
if emptyp bf :things [op "true]
op and (differ1 first :things bf :things) (differ bf :things)
end

to differ1 :this :those
foreach :those [if is :this ? [output "false]]
output "true
end

to says :who :one :two
output not equalp (is :who :one) (is :who :two)
end
I wrote the backtracking solution to this puzzle using the same names DIFFER and SAYS for the procedures that embody the facts of the puzzle, but they are not the same DIFFER and SAYS that are used
in the original version. The originals add assertions to a database; these are predicates that output TRUE if the current combination satisfies the condition.
The trouble with the backtracking solution to the cub reporter puzzle is that it's quite slow. There are 13,824 possible arrangements of first names, last names, jobs, and ages. (There are 24
possible combinations of first and last names, times 24 combinations of name and job, times 24 combinations of name and age.) The program might get lucky and find a solution on its first try, but on
average it will have to examine half of the possible combinations before finding a solution.
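The counts quoted in this section follow directly from the factorials of the category sizes:

```python
from math import factorial

# Examination puzzle: one extra category of 5 places for the 5 girls.
print(factorial(5))            # → 120
# Cub reporter puzzle: last names, ages, and jobs are each matched
# independently against the four first names, 4! = 24 ways apiece.
print(factorial(4) ** 3)       # → 13824
```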
Repairing the Inference System
Why couldn't my inference program solve the examination puzzle? One crucial difference between the two puzzles discussed here is that the first includes some direct assertions, such as the fact that
Jane-King is false. The second puzzle tells us no actual facts; it's entirely implications. As a result, the implication rule (modus ponens) can't infer any facts.
If we don't have enough facts, we have to get more mileage out of the implications. Each of the inference rules for assertions gives rise to a corresponding rule for implications:
• Meta-elimination rules: If P implies that the X-Y pair is false for all but one individual Y in a given category, then P implies that the remaining possibility must be true. If P implies that the
X-Y pair is false for every Y in a given category, then P is false.
• Meta-uniqueness rule: If P implies that some X-Y pair is true, then P implies that every other X-Z pair must be false if Z is in the same category as Y.
• Meta-transitive rules: If P implies that X-Y is true, and if Y-Z is true, then P also implies that X-Z is true. If P implies that X-Y is true, and if Y-Z is false, then P implies that X-Z is false.
When the inference program is modified to include these new rules (appendix C), it can solve the examination puzzle as well as the cub reporter puzzle. The cost is that the solution
is very slow, even for the original puzzle, because the new rules allow the program to infer many new implications, each of which must be tested in later steps to see if it, combined with new
information, allows yet another implication to be inferred.
I discovered this example as I was working on the final draft of my books. Should I include the modified program? In the end, I decided not to change the printed version, because the modified version
is so slow even for easy puzzles. The original version does work for most of the puzzles I found in puzzle books; the difficulty of published puzzles is limited by the fact that mere human beings
must be able to solve them!
Implications Unleashed
Even the modified version of the program does not make every possible inference from implications. For example, I included these rules:
• Meta-transitive rules: If P implies that X-Y is true, and if Y-Z is true, then P also implies that X-Z is true. If P implies that X-Y is true, and if Y-Z is false, then P implies that X-Z is false.
but I didn't include these:
• Meta-meta-transitive rules: If P implies that X-Y is true, and if P implies that Y-Z is true, then P also implies that X-Z is true. If P implies that X-Y is true, and if P implies that Y-Z is
false, then P implies that X-Z is false.
• All bases covered rule: If P implies Q, and not-P implies Q, then Q must be true.
• Meta-meta-meta-transitive rules: If P implies that X-Y is true, and if Q implies that Y-Z is true, then P and Q together imply that X-Z is true. If P implies that X-Y is true, and if Q implies
that Y-Z is false, then P and Q together imply that X-Z is false.
Also, my program can only accept implications about basic assertions. That is, if P and Q are statements such as ``X-Y is true'' or ``Z-W is false'' then I can represent the implication ``P implies
Q,'' but my program has no way to represent an implication such as ``(P implies Q) implies R.''
In fact, it's because of the limitation on the assertions that can be represented in this program that I need so many rules. A general inference system won't have transitive rules at all, meta- or
not. Instead it will represent assertions in a more general way so that
for any x, y, and z, is(x,y) and is(y,z) implies is(x,z)
can be represented as an assertion, not as a rule. In such a system, the number of rules needed is much smaller. In effect, I've again fallen into the same trap that led me to have the link
falsification and link verification rules in the first version of the program. I eliminated the need for those rules by allowing my program to represent implications as well as basic facts. But I'm
still limited to implications tied to a particular X-Y pair. What I can't represent in an assertion is the ``for any x, y, and z'' part of this transitive property. A system like mine, in which
assertions are about specific individuals, is a propositional logic. One in which I can say ``for any x'' is a predicate logic.
My original program, which could only record basic assertions except for one ad hoc kludge for links, could truly discover every possible inference from the facts it was given. But once we introduce
the idea of implications, there is no bound on the number of possible inferences. To write a practical program, we must draw a line somewhere, and decline to make inferences that are too complicated.
Forward and Backward Chaining
My program works by starting with the known facts and inferring as many new facts as it can. This approach is called ``forward chaining.'' Most practical inference systems today use ``backward
chaining'': The program starts with a question, such as ``What is Jane's last name,'' and looks for known facts that might help answer that question. In practice this can effectively limit the number
of dead-end chains of inference that the program follows.
Inference Versus Backtracking
Backtracking works best for puzzles with few categories, because increasing the number of categories dramatically increases the number of possible combinations that must be tested. But a backtracking
program is not much affected by the nature of the information given by the puzzle. By contrast, inference works best for puzzles that include plenty of basic facts in the information given, but an
inference program is not much affected by the number of categories. Each approach has strengths and weaknesses.
How do people solve logic puzzles? We often use a combination of the two methods. We generally start by making inferences, but if we get stuck, we switch to a backtracking approach. Backtracking
works well if inferences have already ruled out most of the possible solutions, so that there aren't as many left to test. Computer inference systems have also been written using this hybrid
technique. Such a program is harder to write, because it's not easy to specify precise rules to decide when to switch from inference to backtracking, and because the program's data structures must
accommodate both techniques. The advantage is that solutions can be found quickly for a wide range of problems.
[Addendum: Since publishing this, I've written a hybrid program, which you can download.]
Appendix A: The Inference Program
;; Establish categories
to category :category.name :members
print (list "category :category.name :members)
if not namep "categories [make "categories []]
make "categories lput :category.name :categories
make :category.name :members
foreach :members [pprop ? "category :category.name]
end
;; Verify and falsify matches
to verify :a :b
settruth :a :b "true
end

to falsify :a :b
settruth :a :b "false
end
to settruth :a :b :truth.value
if equalp (gprop :a "category) (gprop :b "category) [stop]
localmake "oldvalue get :a :b
if equalp :oldvalue :truth.value [stop]
if equalp :oldvalue (not :truth.value) ~
[(throw "error (sentence [inconsistency in settruth]
:a :b :truth.value))]
print (list :a :b "-> :truth.value)
store :a :b :truth.value
settruth1 :a :b :truth.value
settruth1 :b :a :truth.value
if not emptyp :oldvalue ~
[foreach (filter [equalp first ? :truth.value] :oldvalue)
[apply "settruth butfirst ?]]
end
to settruth1 :a :b :truth.value
apply (word "find not :truth.value) (list :a :b)
foreach (gprop :a "true) [settruth ? :b :truth.value]
if :truth.value [foreach (gprop :a "false) [falsify ? :b]
pprop :a (gprop :b "category) :b]
pprop :a :truth.value (fput :b gprop :a :truth.value)
end
to findfalse :a :b
foreach (filter [not equalp get ? :b "true] peers :a) ~
[falsify ? :b]
end

to findtrue :a :b
if equalp (count peers :a) (1+falses :a :b) ~
[verify (find [not equalp get ? :b "false] peers :a) :b]
end

to falses :a :b
output count filter [equalp "false get ? :b] peers :a
end

to peers :a
output thing gprop :a "category
end
;; Common types of clues
to differ :list
print (list "differ :list)
foreach :list [differ1 ? ?rest]
end

to differ1 :a :them
foreach :them [falsify :a ?]
end
to justbefore :this :that :lineup
falsify :this :that
falsify :this last :lineup
falsify :that first :lineup
justbefore1 :this :that :lineup
end

to justbefore1 :this :that :slotlist
if emptyp butfirst :slotlist [stop]
equiv :this (first :slotlist) :that (first butfirst :slotlist)
justbefore1 :this :that (butfirst :slotlist)
end
;; Remember conditional linkages
to implies :who1 :what1 :truth1 :who2 :what2 :truth2
implies1 :who1 :what1 :truth1 :who2 :what2 :truth2
implies1 :who2 :what2 (not :truth2) :who1 :what1 (not :truth1)
end
to implies1 :who1 :what1 :truth1 :who2 :what2 :truth2
localmake "old1 get :who1 :what1
if equalp :old1 :truth1 [settruth :who2 :what2 :truth2 stop]
if equalp :old1 (not :truth1) [stop]
if memberp (list :truth1 :who2 :what2 (not :truth2)) :old1 ~
[settruth :who1 :what1 (not :truth1) stop]
if memberp (list :truth1 :what2 :who2 (not :truth2)) :old1 ~
[settruth :who1 :what1 (not :truth1) stop]
store :who1 :what1 ~
fput (list :truth1 :who2 :what2 :truth2) :old1
end
to equiv :who1 :what1 :who2 :what2
implies :who1 :what1 "true :who2 :what2 "true
implies :who2 :what2 "true :who1 :what1 "true
end

to xor :who1 :what1 :who2 :what2
implies :who1 :what1 "true :who2 :what2 "false
implies :who1 :what1 "false :who2 :what2 "true
end
;; Interface to property list mechanism
to get :a :b
output gprop :a :b
end

to store :a :b :val
pprop :a :b :val
pprop :b :a :val
end
;; Print the solution
to solution
foreach thing first :categories [solve1 ? butfirst :categories]
end

to solve1 :who :order
type :who
foreach :order [type "| | type gprop :who ?]
print []
end
;; Get rid of old problem data
to cleanup
if not namep "categories [stop]
ern :categories
ern "categories
end
Appendix B: The Backtracking Program
to track :lists :tests
foreach first :lists [make ? array count bf :lists]
catch "tracked [track1 first :lists bf :lists 1]
end
to track1 :master :others :index
if emptyp :others [tracktest stop]
track2 :master first :others bf :others
end
to track2 :names :these :those
if emptyp :these [track1 :master :those :index+1 stop]
foreach :these [setitem :index thing first :names ?
track2 bf :names remove ? :these :those]
end
to tracktest
foreach :tests [if not run ? [stop]]
foreach :master [pr se ? arraytolist thing ?]
throw "tracked
end
to is :this :that
if memberp :this :master [output memberp :that thing :this]
if memberp :that :master [output memberp :this thing :that]
localmake "who find [memberp :this thing ?] :master
output memberp :that thing :who
end
Appendix C: The Enhanced Inference System
Only the procedures changed from the version in appendix A are given here:
to implies :who1 :what1 :truth1 :who2 :what2 :truth2
if equalp (gprop :who1 "category) (gprop :what1 "category) [stop]
if equalp (gprop :who2 "category) (gprop :what2 "category) [stop]
implies1 :who1 :what1 :truth1 :who2 :what2 :truth2
implies1 :who2 :what2 (not :truth2) :who1 :what1 (not :truth1)
end
to implies1 :who1 :what1 :truth1 :who2 :what2 :truth2
localmake "old1 get :who1 :what1
if equalp :old1 :truth1 [settruth :who2 :what2 :truth2 stop]
if equalp :old1 (not :truth1) [stop]
if memberp (list :truth1 :who2 :what2 :truth2) :old1 [stop]
if memberp (list :truth1 :what2 :who2 :truth2) :old1 [stop]
if memberp (list :truth1 :who2 :what2 (not :truth2)) :old1 ~
[settruth :who1 :what1 (not :truth1) stop]
if memberp (list :truth1 :what2 :who2 (not :truth2)) :old1 ~
[settruth :who1 :what1 (not :truth1) stop]
store :who1 :what1 ~
fput (list :truth1 :who2 :what2 :truth2) :old1
if :truth2 [foreach (remove :who2 peers :who2)
[implies :who1 :what1 :truth1 ? :what2 "false]
foreach (remove :what2 peers :what2)
[implies :who1 :what1 :truth1 :who2 ? "false]]
if not :truth2 [implies2 :what2 (remove :who2 peers :who2)
implies2 :who2 (remove :what2 peers :what2)]
foreach (gprop :who2 "true) ~
[implies :who1 :what1 :truth1 ? :what2 :truth2]
foreach (gprop :what2 "true) ~
[implies :who1 :what1 :truth1 :who2 ? :truth2]
if :truth2 ~
[foreach (gprop :who2 "false)
[implies :who1 :what1 :truth1 ? :what2 "false]
foreach (gprop :what2 "false)
[implies :who1 :what1 :truth1 :who2 ? "false]]
end
to implies2 :one :others
localmake "left filter [not (or memberp (list :truth1 :one ? "false) :old1
memberp (list :truth1 ? :one "false) :old1
(and :truth1
(or (and equalp ? :who1
equalp gprop :what1 "category
gprop :one "category)
(and equalp ? :what1
equalp gprop :who1 "category
gprop :one "category))
(not or equalp :one :who1
equalp :one :what1))
equalp get :one ? "false)] ~
:others
if emptyp :left [settruth :who1 :what1 (not :truth1) stop]
if emptyp butfirst :left ~
[implies :who1 :what1 :truth1 :one first :left "true]
end
Physics Simulation Forum
Indeed f(n) and f(n+1) are two different forces: f(n) is the force you calculated in the previous simulation loop and f(n+1) is the force you calculate during the current loop. Still, all you need is
one force update per loop!
This also shows from the wikipedia article, where the new acceleration (which is just force / mass) is derived once per loop (in step 3 where it says "derive a").
Besides, I've done all sorts of experiments with this integration algorithm, including adding a second force update, but it doesn't improve accuracy or stability.
IMHO velocity verlet is the integration algorithm which gives you the best and cheapest all-round result. Popular higher order algorithms like the fourth-order Runge-Kutta require four force updates per loop, and RK4 is not even symplectic (i.e., it does not preserve energy). Swapping the RK4 out for four VV iterations at timestep dt/4 gives you better energy preservation over time and better stability for slightly less CPU power.
http://www.compsoc.man.ac.uk/~lucky/Dem ... erlet.html
http://en.wikipedia.org/wiki/Verlet_int ... ity_Verlet
These are my functions:
inline void EulerMethod(float dt){
v += (a * dt); // dv/dt = a
r += (v * dt);// + (a * 0.5f * dt * dt); // dr = v*dt + 0.5*a*dt^2
}
inline void VerletVelocityMethod(float dt){
r += (v * dt) + (a * 0.5f * dt * dt); //.. r(t+dt)
v += a * 0.5f * dt; //.. v(t+dt/2)
a = pState(this,r,v,a)/mass; // pState points to function to get the force on object
v += a * 0.5f * dt; //.. v(t+dt)
}
VV shows me the energy of the system staying between 25 (an integer!) and 24.99996.
Euler shows energy between 24.91 and 25.08.
So yeah, if you average the Euler energy it can be pretty precise. Although BOTH methods are pretty precise, VV just takes more time/memory, since I need to call the pState function to calculate the force all over again (while hoping the force calculation is short enough).
Looking at it from a different perspective, rotating VV's loop so the force computation is at the top, and labeling the velocity and position changes:
Starting with v0 and r0 (starting velocity and position, respectively):
a = f / m
v.5 = v0 + 0.5 * a * dt // v.5 is really the final velocity in the original order
r1 = r0 + v.5 * dt + 0.5 * a * dt * dt
v1 = v.5 + 0.5 * a * dt // v1 is really the midpoint velocity in the original order
Substituting v.5 into the r1 and v1 equations:
r1 = r0 + ( v0 + 0.5 * a * dt ) * dt + 0.5 * a * dt * dt
v1 = ( v0 + 0.5 * a * dt ) + 0.5 * a * dt
Now simplify and rearrange:
r1 = r0 + ( v0 + a * dt ) * dt // look at the term multiplied by dt,
v1 = v0 + a * dt // it's the same as the right-side here
So ...
r1 = r0 + v1 * dt
v1 = v0 + a * dt
These are the equations for Symplectic Euler. It's the same. I fail to see how re-arranging it changes the results, and yet you're still claiming that it's an order of magnitude more accurate in your
original post. It is not.
Yes, the energy measured at the final point wobbles with SE, but over the long term this is completely inconsequential - over a million timesteps, it doesn't gain or lose any more energy than VV. In
fact you can simply adjust your energy measurement for SE to report the same thing as VV if you want (I believe by taking the average of v0 and v1).
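That last adjustment (reading the symplectic Euler energy at the average of two successive velocities) is easy to check numerically. A Python sketch, written for this note rather than taken from the thread, using the same a = -2x spring that appears later in the discussion:

```python
# Symplectic Euler on a spring with k/m = 2 and m = 1.  Compare the raw
# energy reading H(r_n, v_n) with one taken at the averaged velocity
# (v_n + v_{n+1}) / 2, which pairs correctly with r_n.
def accel(r):
    return -2.0 * r

def energy(r, v):
    return 0.5 * v * v + 0.5 * 2.0 * r * r

dt, r, v = 0.01, 2.0, 0.0
e0 = energy(r, v)
raw_err = avg_err = 0.0
for _ in range(10000):
    v_new = v + accel(r) * dt                          # kick with current position
    raw_err = max(raw_err, abs(energy(r, v) - e0))
    avg_err = max(avg_err, abs(energy(r, 0.5 * (v + v_new)) - e0))
    r, v = r + v_new * dt, v_new                       # drift with updated velocity
print(raw_err, avg_err)   # the averaged reading wobbles far less
```

The raw reading wobbles at first order in dt, while the averaged one wobbles only at second order, so for small timesteps the difference is large.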
I don't know how people got to those equations, but after 3110 iterations with VV I got an INTEGER value exactly when it needed to be an integer! That's outstandingly accurate, and I didn't even use a rounding-error correction.
Euler gave me an integer too, but at both 3109 AND 3110. I don't really know what the difference is, but I'll test it more when I use it in the actual graphic simulation (all the above were run on the command line to check).
And just to point out, Newton's laws give me an error of .198 at this stage.
I now did a test to check the energy of the system and you are right: Euler is bluffing a bit (±0.5%) while Verlet keeps its energy at 24.999 ±0.001% or something like that (I guess it depends on dt and other parameters as well).
The difference is there. If you add an energy check to your simulation, you'll see that VV is more accurate than SE. VV is very similar to the leapfrog method, but not identical (VV is slightly more accurate).
There is another difference that I consider negligible: the velocity for the Symplectic method is equivalent to the midpoint velocity rather than the final velocity of the Verlet method. This
difference *could* be considered important if you are measuring the energy of the system at the end of the timestep, as it might appear to be wobbling even though it really isn't. In any case, one
could easily move the bottom two equations of Verlet to the top of the loop, though, and then it becomes perfectly equivalent.
P.S. Unless I'm mistaken, I think the Velocity Verlet method has a bunch of other names including Newton-Stormer-Verlet and possibly Leapfrog as well.
r += v * dt + 0.5 * a * dt * dt
v += 0.5 * a * dt
a = f / m
v += 0.5 * a * dt
In the first few runs I even got an integer instead of .9998 values.
I also wiki-ed some strange words you used (AKA "numerical integration") and I didn't think the subject was so wide; I actually didn't know there's a subject for my problem at all!
I'll surely have reading materials now, thanks for the help!
Around these parts, we recommend using the symplectic Euler integration method (a.k.a. semi-implicit and possibly some other names):
v += ( a * dt )
r += ( v * dt )
Notice that we are using the *updated* (next timestep's) velocity for the position integration. This is conditionally stable, and more stable than the parabolic method. If the dt is small enough, you
will be able to simulate your spring with pretty good accuracy for quite awhile. Hope this helps.
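As a cross-check of the claims in this thread, here is a small Python experiment (function and variable names are ours) that integrates the a = -2x spring with explicit Euler, symplectic Euler, and velocity Verlet, tracking the worst-case drift of the measured energy:

```python
def accel(r):
    return -2.0 * r                       # the spring from this thread (k/m = 2)

def energy(r, v):
    return 0.5 * v * v + 0.5 * 2.0 * r * r

dt, steps = 0.01, 20000
e0 = energy(2.0, 0.0)
r_e, v_e = 2.0, 0.0                       # explicit (non-symplectic) Euler
r_s, v_s = 2.0, 0.0                       # symplectic Euler
r_v, v_v, a_v = 2.0, 0.0, accel(2.0)      # velocity Verlet
euler_err = sym_err = verlet_err = 0.0
for _ in range(steps):
    r_e, v_e = r_e + v_e * dt, v_e + accel(r_e) * dt  # position uses *old* velocity
    v_s = v_s + accel(r_s) * dt                       # updated velocity ...
    r_s = r_s + v_s * dt                              # ... drives the position step
    r_v = r_v + v_v * dt + 0.5 * a_v * dt * dt
    v_v = v_v + 0.5 * a_v * dt
    a_v = accel(r_v)                                  # one force update per loop
    v_v = v_v + 0.5 * a_v * dt
    euler_err = max(euler_err, abs(energy(r_e, v_e) - e0))
    sym_err = max(sym_err, abs(energy(r_s, v_s) - e0))
    verlet_err = max(verlet_err, abs(energy(r_v, v_v) - e0))
print(euler_err, sym_err, verlet_err)
# explicit Euler's energy grows without bound; SE wobbles but stays bounded;
# VV wobbles least
```

The exact numbers depend on dt, but the ordering (Verlet < symplectic Euler, with explicit Euler far behind) is robust, which matches the experience reported in this thread.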
Update(float dt)
r += (v * dt) + (a * 0.5f * dt * dt); // dr = v*dt + 0.5*a*dt^2
v += (a * dt); // dv/dt = a
my_atom->r.y = 2; // starting position, so it will get accelerated
my_atom->a.y = -2*my_atom->r.y; // a = -(k/m)*x .. (-k/m)==-2
Kinda confusing to be honest! I thought I got it right...
I did some adjustments to my VV algorithm according to what you said; now the energy stays between 25 and 25.00003, which is still better than Euler.
So I changed from:
a = pState(this,r,v,a);
r += (v * dt) + (a * 0.5f * dt * dt); //.. r(t+dt)
v += (a*mass + pState(this,r,v,a)) * (0.5f * dt /mass); //.. v(t+2*dt/2)
to this:
r += (v * dt) + (prev_a * 0.5f * dt * dt); //.. r(t+dt)
v += prev_a*0.5f*dt;
a = pState(this,r,v,prev_a) / mass;
prev_a = a;
v += a * 0.5f * dt; //.. v(t+dt)
So the trick is that I just calculate the force in a different place in the algorithm (I hadn't thought about it before).
One problem is that when I do a speed check (using the milliseconds clock()) I get about the same running time for both. I think it's because the latter uses more lines of code and it's harder to 'simplify' it like the first one, which is just 2 lines with no division of a vector by a scalar.
EDIT: I made some adjustments; VV is now at 300 ms (while previously it was 360 ms) and Euler stays at about 215 ms.
Thanks for clarifying it to me
You're welcome. Try this:
float halfdt = 0.5f * dt;
float invmass = 1.0f / mass;
r += v * dt + a * halfdt * dt;
v += a * halfdt;
a = pState(this, r, v, a) * invmass;
v += a * halfdt;
This should be even faster (but since I'm not sure how your pState func works the syntax might not be entirely right).
Yep, your syntax is just right, and it does make it even faster! Nice trick of using memory to improve speed; I'll surely use it in other projects as well, so thank you very much.
Brazilian Journal of Physics
Print version ISSN 0103-9733
Braz. J. Phys. vol.31 no.2 São Paulo June 2001
Noncommutative Supersymmetric Field Theories
Victor O. Rivelles
Instituto de Física, Universidade de São Paulo
Caixa Postal 66318, 05315-970, São Paulo - SP, Brazil
E-mail: rivelles@fma.if.usp.br
Received on 19 February, 2001
We discuss some properties of noncommutative supersymmetric field theories which do not involve gauge fields. We concentrate on the renormalizability issue of these theories.
I Introduction
Although string theory is quite well understood in the perturbative regime its formulation in a background independent way is almost unknown. There are many reasons for that. String theory has too
many degrees of freedom. It is quite difficult to handle all of them together. It also includes the gravitational field which may have quantum fluctuations. And there are many sources for nonlocality
which is also troublesome in any theory. One way out of these difficulties is to consider limits of string theory which have some of the troubles raised above but not all of them. This may allow us
to understand better some aspects of string theory without the complications of the full theory.
One such limit is the zero slope limit of the D3-brane in the presence of a constant NS-NS field [1]. The low energy effective theory is a quantum field theory deformed in terms of the Moyal
product over space-time. In noncommutative field theories the usual product of fields is replaced by the Moyal product of fields giving rise to nonlocal field theories [2]. Usually nonlocal field
theories turn out to be not well defined but the nonlocality induced by the Moyal product is still tractable. It was found that the main characteristic of noncommutative field theories is the mixing
of ultraviolet (UV) and infrared (IR) divergences due to their nonlocal structure [3]. As a consequence it is not clear that the properties of the usual commutative field theories are kept, without
modifications, in their noncommutative counterparts. This gave rise to an intensive research of noncommutative field theories in Euclidean or Minkowski space-time.
One of the manifestations of the UV/IR mixing in the λφ^4 theory is an infrared quadratic singularity in the propagator at one loop [3]. Although renormalizable up to two loops [4] it becomes
non-renormalizable at higher loop orders. Models involving a complex scalar field may be non-renormalizable even at one loop [5]. So, noncommutativity seems to destroy the main characteristic of
commutative field theories, i.e., their renormalizability.
In what follows we will discuss the inclusion of supersymmetry in such models and how it restores the renormalizability. We will concentrate on the Wess-Zumino model in 3 + 1 dimensions [6] and the
supersymmetric non-linear sigma model in 2 + 1 dimensions [7]. In this last case we will see that the noncommutativity also destroys the mechanism for dynamical mass generation of the fermionic
sector, and we will show how supersymmetry helps to fix it.
II Noncommutative Spaces
In quantum mechanics we have the usual commutation relations

[x^i, p^j] = iδ^{ij},   [x^i, x^j] = [p^i, p^j] = 0

(in units where ħ = 1). It is natural to consider noncommutative coordinates with commutation relations

[x^i, x^j] = iθ^{ij},

where θ^{ij} is a constant of dimension L^2 which defines a noncommutativity scale. This breaks rotational (or Lorentz) symmetry, but in the limit θ → 0 the symmetry is recovered. This is an example of a noncommutative space. It can be extended to space-time, but we will consider noncommutativity only in the spatial coordinates since otherwise there are problems with unitarity [8].
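The heuristic in the next paragraph can be made slightly more precise with the standard uncertainty-relation argument (a well-known step, not spelled out in the text): for any two operators,

```latex
\Delta A \,\Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle[\hat A,\hat B]\rangle\bigr|
\qquad\Longrightarrow\qquad
\Delta x^i \,\Delta x^j \;\ge\; \tfrac{1}{2}\,\bigl|\theta^{ij}\bigr| ,
```

so localizing one coordinate sharply (the UV) forces a conjugate noncommutative direction to delocalize (the IR).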
We can understand heuristically how the UV and IR physics gets mixed. From the usual commutation relations it follows that Δx^i Δp^j ~ δ^{ij}. In a similar way, from the noncommutative relation it follows that Δx^i Δx^j ~ θ^{ij}, so we expect Δx^i ~ θ^{ij}/Δx^j: probing short distances in one direction involves long distances in another.
Fields defined on such spaces are operator valued objects. It turns out to be more convenient to use fields which are not operator valued objects but just functions. This can be achieved through the use of the Weyl-Moyal correspondence [2]. We associate to the operator valued field Φ(x̂) an ordinary function φ(x) through their common Fourier transform φ̃(p):

Φ(x̂) = ∫ d^D p e^{ip·x̂} φ̃(p),   φ(x) = ∫ d^D p e^{ip·x} φ̃(p).

The product of two operator valued fields then corresponds to the Moyal (or star) product of the associated functions,

(φ ⋆ ψ)(x) = exp( (i/2) θ^{μν} ∂_μ^x ∂_ν^y ) φ(x) ψ(y) |_{y=x}.

Then we can work on a commutative space in which the usual product of fields is replaced by the Moyal product. Notice that the derivatives in this definition make the Moyal product non-local. Also, the Moyal commutator of the commutative coordinates x^μ gives

[x^μ, x^ν]_⋆ = iθ^{μν}.

Several properties of the Moyal product are easily verified. In momentum space the star product of two fields f and g inserts a phase factor e^{-(i/2) k ∧ q}, where k ∧ q = k^μ θ_{μν} q^ν and k and q are the momenta of f and g, respectively. In particular, under an integral the star product of two fields reduces to the ordinary product, ∫ d^D x f ⋆ g = ∫ d^D x f g, so quadratic terms in an action are not modified.
III Noncommutative Scalar Field Theory
Let us consider the massive scalar field in D=3 + 1 dimensions [3], whose action is
Using property d) it is seen that the propagator is not affected by the Moyal product. This is a generic property of noncommutative field theories. The vertex, however, must be symmetrized . In
momentum space we have
Then, the one loop correction for the two-point function is
The first term is the usual one loop mass correction of the commutative theory (up to a factor 1/2) which is quadratically divergent. The second term is not divergent due to the oscillatory nature of
cos(k Ù p). This shows that the nonlocality introduced by the Moyal product is not bad and leaves us with the same divergence structure of the commutative theory. To take into account the effect of
the second term we regularize the integral using the Schwinger parametrization
where a cutoff Λ was introduced. We find
Note that when the cutoff is removed, Λ → ∞, the noncommutative contribution remains finite, providing a natural regularization. This effective cutoff disappears, however, when θ → 0 or when p → 0.
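For concreteness, the regularized nonplanar contribution takes the standard Minwalla–Van Raamsdonk–Seiberg form [3]; the following is a reconstruction (numerical factors and the precise definition of p∘p depend on conventions):

$$\Gamma^{(2)}_{\rm nonplanar}\propto \Lambda_{\rm eff}^2 - m^2\ln\!\frac{\Lambda_{\rm eff}^2}{m^2}+\cdots,\qquad \Lambda_{\rm eff}^2=\frac{1}{1/\Lambda^2+p\circ p},$$

with $p\circ p \equiv -p_\mu\,\theta^{\mu\nu}\theta_{\nu}{}^{\rho}\,p_\rho \ge 0$, so that $\Lambda_{\rm eff}\to\Lambda$ as $p\to 0$ while $\Lambda_{\rm eff}^2\to 1/(p\circ p)$ as $\Lambda\to\infty$.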
The one loop effective action is then
where M is the renormalized mass. Let us take the limits Λ → ∞ and p → 0. If we take first p → 0 then Λ_eff = Λ, showing that we recover the effective commutative theory
If, however, we take Λ → ∞ first then Λ_eff² = 1/(p∘p),
which is singular when p → 0. This shows that the limit Λ → ∞ does not commute with the low momentum limit p → 0, so that there is a mixing of the UV and IR limits.
The theory is renormalizable at one loop order if we do not take p → 0. What about higher loop orders? Suppose we have insertions of one loop mass corrections. Eventually we will have to integrate over
small values of the internal momentum, where the noncommutative correction blows up. Then we find an IR divergence even in a massive theory. This combination of UV and IR divergences makes the theory non-renormalizable.
There are also examples of non-renormalizable theories already at one loop order [5]. For a complex scalar field with interaction φ* ⋆ φ* ⋆ φ ⋆ φ it is found that the theory is one-loop
non-renormalizable while φ* ⋆ φ ⋆ φ* ⋆ φ gives a one loop renormalizable model.
Then the question is whether it would be possible to find a theory which is renormalizable to all loop orders. Since the UV/IR mixing appears at the level of quadratic divergences a candidate theory
would be a supersymmetric theory because it does not have such divergences [9, 10]. As we shall see this indeed happens.
IV Noncommutative Wess-Zumino model
The noncommutative Wess-Zumino model in 3 + 1 dimensions [6] has the action
where A and B are bosonic fields, F and G are auxiliary fields and ψ is a Majorana spinor. The action is invariant under the usual supersymmetry transformations. They are not modified by the Moyal
product since they are linear in the fields. The elimination of the auxiliary fields through their equations of motion produces quartic interactions. In terms of the complex field φ = A + iB we get
φ* ⋆ φ* ⋆ φ ⋆ φ which is non-renormalizable in the noncommutative case. This casts doubts about the renormalizability of the model but as we shall see supersymmetry saves the day.
As usual, the propagators are not modified by noncommutativity due to the property d). They are given by
Taking into account the symmetries the vertices are
The degree of superficial divergence for a generic 1PI graph g is then
where N_Φ denotes the number of external lines associated to the field Φ, and I_AF and I_BF are the numbers of internal lines associated to the mixed propagators AF and BF, respectively. In all cases we
will regularize the divergent Feynman integrals by assuming that a supersymmetric regularization scheme does exist.
The one loop analysis can be done in a straightforward way. As in the commutative case all tadpole contributions add up to zero. We have verified this explicitly. The self-energy of A can be
computed and the divergent part is contained in the integral
The first term is logarithmically divergent. It differs by a factor 2 from the commutative case. As usual, this divergence is eliminated by a wave function renormalization. The second term is UV
convergent and for small p it behaves as p^2 ln(p^2/m^2) and actually vanishes for p = 0. Then there is no IR pole. The same analysis can be carried out for the other fields. For F we find that the
divergent part is
The first term is logarithmically divergent and can also be eliminated by a wave function renormalization. The second term diverges as ln(p^2/m^2) as p goes to zero. However its multiple insertions
are harmless. For the fermion field the divergent part is similar to the former results and also needs a wave function renormalization. The term containing cos(k ∧ p) behaves as p^2 ln(p^2/m^2) and vanishes
as p goes to zero. Therefore, there is no UV/IR mixing in the self-energy as expected.
To show that the model is renormalizable we must also look into the interactions vertices. The A^3 vertex has no divergent parts as in the commutative case. The same happens for the other three point
functions. For the four point vertices no divergence is found as in the commutative case. Hence, the noncommutative Wess-Zumino model is renormalizable at one loop with a wave-function
renormalization and no UV/IR mixing.
To go to higher loop orders we proceed as in the commutative case [11]. We derived the supersymmetry Ward identities for the n-point vertex function. Then we showed that there is a renormalization
prescription which is consistent with the Ward identities. They are the same as in the commutative case. And finally we fixed the primitively divergent vertex functions. Then we found that there is
only a common wave function renormalization as in the commutative case. In general we expect
At one loop we found δm = 0 and Z′ = 1. We showed that this also holds to all orders and no mass renormalization is needed.
Being the only consistent noncommutative quantum field theory in 3 + 1 dimensions known so far it is natural to study it in more detail. As a first step in this direction we considered the
nonrelativistic limit of the noncommutative Wess-Zumino model [12]. We found the low energy effective potential mediating the fermion-fermion and boson-boson elastic scattering in the nonrelativistic
regime. Since noncommutativity breaks Lorentz invariance we formulated the theory in the center of mass frame of reference where the dynamics simplifies considerably. For the fermions we found that
the potential is significantly changed by the noncommutativity while no modification was found for the bosonic sector. The modifications found give rise to an anisotropic differential cross section.
V Noncommutative Gross-Neveu and Nonlinear Sigma Models
Another model where renormalizability is spoiled by the noncommutativity is the O(N) Gross-Neveu model. This model is perturbatively renormalizable in 1 + 1 dimensions and 1/N renormalizable in 1
+ 1 and 2 + 1 dimensions. In both cases it presents dynamical mass generation. It is described by the Lagrangian
where ψ_i, i = 1, …, N, are two-component Majorana spinors. Since it is renormalizable in the 1/N expansion in 1 + 1 and 2 + 1 dimensions we will consider both cases. As usual, we introduce an
auxiliary field σ and the Lagrangian turns into
Replacing σ by σ + M where M is the VEV of the original σ we get the gap equation (in Euclidean space)
To eliminate the UV divergence we need to renormalize the coupling constant by
In 2 + 1 dimensions we find
and therefore M ≠ 0 is possible only for sufficiently strong coupling; otherwise M is necessarily zero. No such restriction exists in 1 + 1 dimensions. In any case, we will focus only on the massive phase. The propagator for σ is
proportional to the inverse of the following expression
which is divergent. Taking into account the gap equation the above expression reduces to
which is finite. Then there is a fine tuning which is responsible for the elimination of the divergence and which might be absent in the noncommutative case due to the UV/IR mixing.
The noncommutative model is defined by
Elimination of the auxiliary field results in a four-fermion interaction of the type (ψ̄_i ⋆ ψ_i) ⋆ (ψ̄_j ⋆ ψ_j). However a more general four-fermion interaction may involve a term like ψ̄_i ⋆ ψ̄_j ⋆ ψ_i ⋆ ψ_j. This
last combination does not have a simple 1/N expansion and we will not consider it. The Moyal product does not affect the propagators and the trilinear vertex acquires a correction of cos(p_1 ∧ p_2)
with regard to the commutative case. Hence the gap equation is not modified, while the propagator for σ is now proportional to the inverse of
Now the divergent part is no longer canceled and this turns the model into a nonrenormalizable one.
On the other hand, the nonlinear sigma model also presents troubles in its noncommutative version. The noncommutative model is described by
where φ_i, i = 1, …, N, are real scalar fields, λ is the auxiliary field and M is the generated mass. The leading correction to the φ self-energy is
where D_λ is the propagator for λ. As for the case of the scalar field this can be decomposed as a sum of a quadratically divergent part and a UV finite part. Again there is the UV/IR mixing
destroying the 1/N expansion.
VI Noncommutative Supersymmetric Nonlinear Sigma Model
The Lagrangian for the commutative supersymmetric sigma model is given by
where F_i, i = 1, …, N, are auxiliary fields. Furthermore, σ, λ and ξ are the Lagrange multipliers which implement the supersymmetric constraints. After the change of variables λ → λ + 2Mσ, F → F −
Mφ, where M = ⟨σ⟩, and the shifts σ → σ + M and λ → λ + λ_0, where λ_0 = ⟨λ⟩, we arrive at a more symmetric form for the Lagrangian
Now supersymmetry requires λ_0 = −2M^2 and the gap equation is
so a coupling constant renormalization is required. We now must examine whether the propagator for σ depends on this renormalization. We find that the two point function for σ is proportional to
the inverse of
which is identical to the Gross-Neveu case. Notice that the gap equation was not used. The finiteness of the above expression is a consequence of supersymmetry.
The noncommutative version of the supersymmetric nonlinear sigma model is given by
Notice that supersymmetry dictates the form of the trilinear vertices. Also, the supersymmetry transformations are not modified by noncommutativity since they are linear and no Moyal products are involved.
The propagators are the same as in the commutative case. The vertices have cosine factors due to the Moyal product
We again consider the propagators for the Lagrange multiplier fields. Now the σ propagator is modified by the cosine factors and is proportional to the inverse of
It is well behaved both in the UV and IR regions. The propagators for λ and ξ are proportional to the inverse of
respectively. They are also well behaved in UV and IR regions.
The degree of superficial divergence for a generic 1PI graph g is
where N_φ and N_ψ are the numbers of external lines associated to the φ and ψ fields; the dangerous graphs are those with only φ and ψ external lines since, in principle, they are quadratically and linearly divergent, respectively. For the self-energies of φ and ψ we find
that they diverge logarithmically and they can be removed by a wave function renormalization of the respective field. The same happens for the auxiliary field F. The renormalization factors for them
are the same so supersymmetry is preserved in the noncommutative theory. This analysis can be extended to the n-point functions. In 2 + 1 dimensions we find nothing new showing the renormalizability
of the model at leading order of 1/N. However, in 1 + 1 dimensions there are some peculiarities. Since the scalar field is dimensionless in 1 + 1 dimensions any graph involving an arbitrary number of
external φ lines is quadratically divergent. In the four-point function there is a partial cancellation of divergences but a logarithmic divergence still survives. The counterterm needed to remove it
cannot be written in terms of ∫d²x φ_i ⋆ φ_i ⋆ φ_j ⋆ φ_j and ∫d²x φ_i ⋆ φ_j ⋆ φ_i ⋆ φ_j. A possible way to remove this divergence is by generalizing the definition of 1PI diagram along the lines
suggested in [13] for the commutative nonlinear sigma model. However the cosine factors do not allow us to use this mechanism which casts doubt about the renormalizability of the noncommutative
supersymmetric O(N) nonlinear sigma model in 1 + 1 dimensions.
VII Conclusions
We have shown that it is possible to build consistent quantum field theories in noncommutative space. It seems that supersymmetry is an essential ingredient for renormalizability. The models studied
here do not involve gauge fields and this considerably simplifies the situation. All vertices are deformed in the same way by the Moyal product and this was essential to analyze the amplitudes. With
gauge fields the situation is much more complicated because the vertices are deformed in different ways. However, supersymmetric gauge theories may still have a better behavior.
This work was done in collaboration with H. O. Girotti, M. Gomes and A. J. da Silva. It was partially supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Conselho Nacional de
Desenvolvimento Científico e Tecnológico (CNPq), and PRONEX under contract CNPq 66.2002/1998-99.
[1] N. Seiberg and E. Witten, "String Theory and Noncommutative Geometry", hep-th/9908142, JHEP 9909, 032 (1999).
[2] T. Filk, Phys. Lett. B 376, 53 (1996).
[3] S. Minwalla, M. Van Raamsdonk and N. Seiberg, "Noncommutative Perturbative Dynamics", hep-th/9912072.
[4] I. Y. Aref'eva, D. M. Belov and A. S. Koshelev, "Two-loop Diagrams in Noncommutative φ⁴ Theory", hep-th/9912075.
[5] I. Y. Aref'eva, D. M. Belov and A. S. Koshelev, "A Note on UV/IR for Noncommutative Complex Scalar Field", hep-th/0001215.
[6] H. O. Girotti, M. Gomes, V. O. Rivelles, A. J. da Silva, "A Consistent Noncommutative Field Theory: the Wess-Zumino Model", hep-th/0005272, Nucl. Phys. B 587, 299 (2000).
[7] H. O. Girotti, M. Gomes, V. O. Rivelles, A. J. da Silva, "The Noncommutative Supersymmetric Nonlinear Sigma Model", hep-th/0102101.
[8] N. Seiberg, L. Susskind, N. Toumbas, "Space/Time Non-Commutativity and Causality", hep-th/0005015, JHEP 0006, 044 (2000).
[9] I. Chepelev and R. Roiban, "Renormalization of Quantum Field Theories on Noncommutative R^d. I: Scalars", hep-th/9911098.
[10] S. Ferrara and M. A. Lledo, "Some Aspects of Deformations of Supersymmetric Field Theories", hep-th/0002084.
[11] J. Iliopoulos and B. Zumino, "Broken supergauge symmetry and renormalization", Nucl. Phys. B 76, 310 (1974).
[12] H. O. Girotti, M. Gomes, V. O. Rivelles, A. J. da Silva, "The Low Energy Limit of the Noncommutative Wess-Zumino Model", hep-th/0101159.
[13] I. Ya. Aref'eva, Theor. Math. Phys. 36, 573 (1979); Ann. Phys. (NY) 117, 393 (1979); I. Ya. Aref'eva, E. R. Nissimov and S. J. Pacheva, Commun. Math. Phys. 71, 213 (1980). See also J. H. Lowenstein and E. R. Speer, Nucl. Phys. B 158, 397 (1979).
Consider the surface of a plane wall at temperature T[b] exposed to a medium at temperature T[∞]. Heat is lost from the surface to the surrounding medium by convection with a heat transfer
coefficient of h. Disregarding radiation, heat transfer from a surface area A[s] is expressed as Q[conv] = h A[s] (T[b] − T[∞]). Now consider a fin of constant cross-sectional area A[c] = A[b] and length L that is attached to the surface with a perfect contact (Figure 3.8).
Figure 3.8 Fins Enhance Heat Transfer From A Surface By Enhancing Surface Area
This time heat will flow from the surface to the fin by conduction and from the fin to the surrounding medium by convection with the same heat transfer coefficient h. The temperature of the fin
will be T[b] at the fin base and gradually decrease toward the fin tip. Convection from the fin surface causes the temperature at any cross section to drop somewhat from the midsection toward the
outer surfaces. However, the cross sectional area of the fins is usually very small, and thus the temperature at any cross section can be considered to be uniform. Also, the fin tip can be
assumed, for convenience and simplicity, to be insulated by using the corrected length for the fin instead of the actual length.
In the limiting case of zero thermal resistance or infinite thermal conductivity (k → ∞), the temperature of the fin will be uniform at the base value of T[b]. The heat transfer from the fin will be
maximum in this case and can be expressed as Q[fin,max] = h A[fin] (T[b] − T[∞]).
In reality, however, the temperature of the fin will drop along the fin, and thus the heat transfer from the fin will be less because of the decreasing temperature difference T(x) − T[∞] toward the
fin tip, as shown in Figure 3.9.
To account for the effect of this decrease in temperature on heat transfer, we define a fin efficiency as η[fin] = Q[fin]/Q[fin,max], the ratio of the actual rate of heat transfer from the fin to the rate that would occur if the entire fin were at the base temperature.
where A[fin] is the total surface area of the fin. This relation enables us to determine the heat transfer from a fin when its efficiency is known. For the cases of constant cross section of very
long fins and fins with insulated tips, the fin efficiency can be expressed as η[long fin] = 1/(mL) and η[insulated tip] = tanh(mL)/(mL), respectively, where m = (hP/kA[c])^(1/2).
Since A[fin] = PL for fins with constant cross section, Equation 3.38 can also be used for fins subjected to convection provided that the fin length L is replaced by the corrected length L[c].
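As a numerical sketch of the insulated-tip relation above (the pin-fin dimensions and the h and k values below are illustrative assumptions, not figures from the text):

```python
import math

def fin_efficiency_insulated_tip(h, k, P, A_c, L):
    """Efficiency of a constant-cross-section fin with an insulated tip.

    h   : convection coefficient, W/m^2.K
    k   : fin thermal conductivity, W/m.K
    P   : cross-section perimeter, m
    A_c : cross-sectional area, m^2
    L   : fin length (use the corrected length L_c for a convecting tip), m
    """
    m = math.sqrt(h * P / (k * A_c))   # fin parameter m = (hP/kA_c)^(1/2), 1/m
    mL = m * L
    return math.tanh(mL) / mL          # eta_fin = tanh(mL)/(mL)

# Hypothetical aluminum pin fin: D = 5 mm, L = 30 mm, h = 50 W/m^2.K, k = 237 W/m.K
D, L, h, k = 0.005, 0.030, 50.0, 237.0
P = math.pi * D                        # perimeter of circular cross section
A_c = math.pi * D**2 / 4               # cross-sectional area
L_c = L + D / 4                        # corrected length for a convecting tip
eta = fin_efficiency_insulated_tip(h, k, P, A_c, L_c)
```

A short, well-conducting fin like this one stays nearly isothermal, so its efficiency comes out close to 1, consistent with the remark below that most practical fins exceed 90 percent.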
Fin efficiency relations are developed for fins of various profiles and are plotted in Figure 3.10 for fins on a plain surface and in Figure 3.11 for circular fins of constant thickness. The fin
surface area associated with each profile is also given on each figure. For most fins of constant thickness encountered in practice, the fin thickness t is too small relative to the fin length L
, and thus the fin tip area is negligible. Note that fins with triangular and parabolic profiles contain less material and are more efficient than the ones with rectangular profiles, and thus are
more suitable for applications requiring minimum weight such as space applications.
Figure 3.9 Ideal And Actual Temperature Distribution In A Fin
An important consideration in the design of finned surfaces is the selection of the proper fin length L. Normally the longer the fin, the larger the heat transfer area and thus the higher the
rate of heat transfer from the fin. But also the larger the fin, the bigger the mass, the higher the price, and the larger the fluid friction. Therefore, increasing the length of the fin beyond a
certain value cannot be justified unless the added benefits outweigh the added cost. Also, the fin efficiency decreases with increasing fin length because of the decrease in fin temperature with
length. Fin lengths that cause the fin efficiency to drop below 60 percent usually cannot be justified economically and should be avoided. The efficiency of most fins used in practice is above 90 percent.
Figure 3.10 Efficiency Of Circular, Rectangular And Triangular Fins On A Plain Surface Of Width W
Figure 3.11 Efficiency Of Circular Fins Of Length L And Constant Thickness T | {"url":"http://www.cdeep.iitb.ac.in/nptel/Mechanical/Heat%20and%20Mass%20Transfer/Conduction/Module%203/main/3.4.html","timestamp":"2014-04-18T05:31:26Z","content_type":null,"content_length":"15574","record_id":"<urn:uuid:adc83fcb-e019-4a1a-a87b-a48b0a5959b9>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00205-ip-10-147-4-33.ec2.internal.warc.gz"} |
how can you tell the speed of a song? i sing in choir
higher inductive type
Type theory
Categorical semantics
Higher inductive types (HITs) are a generalization of inductive types which allow the constructors to produce, not just points of the type being defined, but also elements of its iterated identity
While HITs are already useful in extensional type theory, they are most useful and powerful in homotopy type theory, where they allow the construction of cell complexes, homotopy colimits,
truncations, localizations, and many other objects from classical homotopy theory.
All higher inductive types described below are given together with some pseudo-Coq code, which would implement that HIT if Coq supported HITs natively.
The circle
Inductive circle : Type :=
| base : circle
| loop : base == base.
Using the univalence axiom, one can prove that the loop space base == base of the circle type is equivalent to the integers; see this blog post.
The interval
The homotopy type of the interval can be encoded as
Inductive interval : Type :=
| zero : interval
| one : interval
| segment : zero == one.
See interval type. The interval can be proven to be contractible. On the other hand, if the constructors zero and one satisfy their elimination rules definitionally, then the existence of an interval
type implies function extensionality; see this blog post.
The 2-sphere
Similarly, the homotopy type of the 2-dimensional sphere can be encoded as
Inductive sphere2 : Type :=
| base2 : sphere2
| surf2 : idpath base2 == idpath base2.
Suspensions
Inductive susp (X : Type) : Type :=
| north : susp X
| south : susp X
| merid : X -> north == south.
This is the unpointed suspension. It is also possible to define the pointed suspension. Using either one, we can define the $n$-sphere by induction on $n$, since $S^{n+1}$ is the suspension of $S^n$.
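The induction on $n$ just described can be written in the same pseudo-Coq style as the other snippets (hypothetical code: like everything above, it assumes native HIT support, and it takes bool as the two-point sphere $S^0$):

```coq
Fixpoint sphere (n : nat) : Type :=
  match n with
  | O => bool                 (* S^0 is the two-point type *)
  | S m => susp (sphere m)    (* S^(m+1) is the suspension of S^m *)
  end.
```

In particular, sphere 1 is susp bool, which is equivalent to the circle defined earlier.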
Mapping cylinders
The construction of mapping cylinders is given by
Inductive cyl {X Y : Type} (f : X -> Y) : Y -> Type :=
| cyl_base : forall y:Y, cyl f y
| cyl_top : forall x:X, cyl f (f x)
| cyl_seg : forall x:X, cyl_top x == cyl_base (f x).
Using this construction, one can define a (cofibration, trivial fibration) weak factorization system for types.
Truncations
Inductive is_inhab (A : Type) : Type :=
| inhab : A -> is_inhab A
| inhab_path : forall (x y: is_inhab A), x == y.
This is the (-1)-truncation into h-propositions. One can prove that is_inhab A is always a proposition (i.e. $(-1)$-truncated) and that it is the reflection of $A$ into propositions. More generally,
one can construct the (effective epi, mono) factorization system by applying is_inhab fiberwise to a fibration.
Similarly, we have the 0-truncation into h-sets:
Inductive pi0 (X:Type) : Type :=
| cpnt : X -> pi0 X
| pi0_axiomK : forall (l : circle -> pi0 X), refl (l base) == map l loop.
We can similarly define $n$-truncation for any $n$, and we should be able to define it inductively for all $n$ at once as well.
See at n-truncation modality.
Pushouts
The (homotopy) pushout of $f \colon A\to B$ and $g\colon A\to C$:
Inductive hpushout {A B C : Type} (f : A -> B) (g : A -> C) : Type :=
| inl : B -> hpushout f g
| inr : C -> hpushout f g
| glue : forall (a : A), inl (f a) == inr (g a).
Quotients of sets
The quotient of an hProp-valued equivalence relation, yielding an hSet (a 0-truncated type):
Inductive quotient (A : Type) (R : A -> A -> hProp) : Type :=
| proj : A -> quotient A R
| relate : forall (x y : A), R x y -> proj x == proj y
| contr1 : forall (x y : quotient A R) (p q : x == y), p == q.
This is already interesting in extensional type theory, where quotient types are not always included. For more general homotopical quotients of “internal groupoids” as in the (∞,1)-Giraud theorem, we
first need a good definition of what such an internal groupoid is.
Localization
Suppose we are given a family of functions:
Hypothesis I : Type.
Hypothesis S T : I -> Type.
Hypothesis f : forall i, S i -> T i.
A type is said to be $I$-local if it sees each of these functions as an equivalence:
Definition is_local Z := forall i,
is_equiv (fun g : T i -> Z => g o f i).
The following HIT can be shown to be a reflection of all types into the local types, constructing the localization of the category of types at the given family of maps.
Inductive localize X :=
| to_local : X -> localize X
| local_extend : forall (i:I) (h : S i -> localize X),
T i -> localize X
| local_extension : forall (i:I) (h : S i -> localize X) (s : S i),
local_extend i h (f i s) == h s
| local_unextension : forall (i:I) (g : T i -> localize X) (t : T i),
local_extend i (g o f i) t == g t
| local_triangle : forall (i:I) (g : T i -> localize X) (s : S i),
local_unextension i g (f i s) == local_extension i (g o f i) s.
The first constructor gives a map from X to localize X, while the other four specify exactly that localize X is local (by giving adjoint equivalence data to the map that we want to become an
equivalence). See this blog post for details. This construction is also already interesting in extensional type theory.
Spectrification
A prespectrum is a sequence of pointed types $X_n$ with pointed maps $X_n \to \Omega X_{n+1}$:
Definition prespectrum :=
{X : nat -> Type &
{ pt : forall n, X n &
{ glue : forall n, X n -> pt (S n) == pt (S n) &
forall n, glue n (pt n) == idpath (pt (S n)) }}}.
A prespectrum is a spectrum if each of these maps is an equivalence.
Definition is_spectrum (X : prespectrum) : Type :=
forall n, is_equiv (pr1 (pr2 (pr2 X)) n).
In classical algebraic topology, there is a spectrification functor which is left adjoint to the inclusion of spectra in prespectra. For instance, this is how a suspension spectrum is constructed: by
spectrifying the prespectrum $X_n \coloneqq \Sigma^n A$.
The following HIT should construct spectrification in homotopy type theory (though this has not yet been verified formally). (There are some abuses of notation below, which can be made precise using
Coq typeclasses and implicit arguments.)
Inductive spectrify (X : prespectrum) : nat -> Type :=
| to_spectrify : forall n, X n -> spectrify X n
| spectrify_glue : forall n, spectrify X n ->
to_spectrify (S n) (pt (S n)) == to_spectrify (S n) (pt (S n))
| to_spectrify_is_prespectrum_map : forall n (x : X n),
spectrify_glue n (to_spectrify n x)
== loop_functor (to_spectrify (S n)) (glue n x)
| spectrify_glue_retraction : forall n
(p : to_spectrify (S n) (pt (S n)) == to_spectrify (S n) (pt (S n))),
spectrify X n
| spectrify_glue_retraction_is_retraction : forall n (sx : spectrify X n),
spectrify_glue_retraction n (spectrify_glue n sx) == sx
| spectrify_glue_section : forall n
(p : to_spectrify (S n) (pt (S n)) == to_spectrify (S n) (pt (S n))),
spectrify X n
| spectrify_glue_section_is_section : forall n
(p : to_spectrify (S n) (pt (S n)) == to_spectrify (S n) (pt (S n))),
spectrify_glue n (spectrify_glue_section n p) == p.
We can unravel this as follows, using more traditional notation. Let $L X$ denote the spectrification being constructed. The first constructor says that each $(L X)_n$ comes with a map from $X_n$,
called $\ell_n$ say (denoted to_spectrify n above). This induces a basepoint in each type $(L X)_n$, namely the image $\ell_n(*)$ of the basepoint of $X_n$. The many occurrences of
to_spectrify (S n) (pt (S n)) == to_spectrify (S n) (pt (S n))
simply refer to the based loop space of $\Omega_{\ell_{n+1}(*)} (L X)_{n+1}$ of $(L X)_{n+1}$ at this base point.
Thus, the second constructor spectrify_glue gives the structure maps $(L X)_n \to \Omega (L X)_{n+1}$ to make $L X$ into a prespectrum. Similarly, the third constructor says that the maps $\ell_n\
colon X_n \to (L X)_n$ commute with the structure maps up to a specified homotopy.
Since the basepoints of the types $(L X)_n$ are induced from those of each $X_n$, this automatically implies that the maps $(L X)_n \to \Omega (L X)_{n+1}$ are pointed maps (up to a specified
homotopy) and that the $\ell_n$ commute with these pointings (up to a specified homotopy). This makes $\ell$ into a map of prespectra.
Finally, the fourth through seventh constructors say that $L X$ is a spectrum, by giving h-isomorphism data: a retraction and a section for each glue map $(L X)_n \to \Omega (L X)_{n+1}$. We could
use adjoint equivalence data as we did for localization, but this approach avoids the presence of level-3 path constructors. (We could have used h-iso data in localization too, thereby avoiding even
level-2 constructors there.) It is important, in general, to use a sort of equivalence data which forms an h-prop; otherwise we would be adding structure rather than merely the property of
such-and-such map being an equivalence.
Expositions include | {"url":"http://ncatlab.org/nlab/show/higher+inductive+type","timestamp":"2014-04-18T20:44:24Z","content_type":null,"content_length":"69101","record_id":"<urn:uuid:d1215451-b17a-449b-a689-2f8852824946>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
Special equation?
May 19th 2008, 01:45 PM
Special equation?
Wrong forum probably, but I'm not sure where to post...
My friend challenged me with this curious equation:
$2^x - 5 = x$
Like I know the answer is 3, but I don't know how to mathematically prove this.
May 19th 2008, 02:34 PM
All else fails I always do this: write $f(x)=2^x-5-x$,
so $f(2)=-3$
and $f(4)=7$
Therefore by the intermediate value theorem there exists $c\in(2,4)$ such that $f(c)=0$.
So now the root is probably three, but for the sake of not relying on luck in math,
say the starting point for the Newton-Raphson method is 2.5;
we see that as $n\to\infty$, $a_n\to{3}$
The check step
Sorry that is bad but I cannot think of anything else as of present
it works though (Sun)
May 19th 2008, 03:43 PM
My friend challenged me with this curious equation:
2^x - 5 = x
It might look simple, but it's a real challenge. In fact, if $f(x)=2^x-5-x$, then f has 2 zeros. It means that the solution is not unique. But to prove it... not easy.
$f'(x)=\ln(2)\,2^x-1=\ln(2)e^{x\ln(2)}-1$. We want to study its sign, so we want to find when it is equal to $0$. $f'(x)=0 \Leftrightarrow \ln(2)\,2^x=1 \Leftrightarrow x=-\frac{\ln(\ln 2)}{\ln 2}\approx 0.53$.
Now if $x<-\frac{\ln(\ln 2)}{\ln 2}$, $f'(x)<0$. It means that $f(x)$ is decreasing from negative infinity up to this point. As $f'(x)>0$ for $x>-\frac{\ln(\ln 2)}{\ln 2}$, $f(x)$ is increasing from there to positive infinity.
$f(0)=-4$. As f then increases (strictly), it must cross the x axis at a single point with $x>0$. You can find a similar conclusion calculating for example $f(-10)$. It means that the roots of f
are respectively in $[-10,0]$ and $[0,5]$ (5 because $f(5)>0$). Now to find the roots, you can say that there is an easy one to see, when $x=3$. To calculate the other root, I suggest you
the bisection method which will work.
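For what it's worth, a quick numerical check (plain Python sketch; the bracketing intervals $[0,5]$ and $[-10,0]$ are the ones identified above) pins down both roots:

```python
def f(x):
    # The equation 2^x - 5 = x rewritten as f(x) = 0
    return 2**x - 5 - x

def bisect(f, lo, hi, tol=1e-12):
    """Bisection: requires f(lo) and f(hi) to have opposite signs."""
    assert f(lo) * f(hi) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid      # root is in the left half
        else:
            lo = mid      # root is in the right half
    return (lo + hi) / 2

root_pos = bisect(f, 0, 5)     # the root found in the thread: x = 3
root_neg = bisect(f, -10, 0)   # the second root, near x = -4.97
```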
May 19th 2008, 03:50 PM
Haha stupid two roots :D | {"url":"http://mathhelpforum.com/calculus/38894-special-equation-print.html","timestamp":"2014-04-21T07:47:10Z","content_type":null,"content_length":"16072","record_id":"<urn:uuid:e62ff7e5-63dc-4960-bdca-c88740a47789>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00042-ip-10-147-4-33.ec2.internal.warc.gz"} |
Comparison of Surveyor and RIPE
Authors: Les Cottrell and Warren Matthews. Created: February 23, 2000; last updated on March 14, 2000
Both of these tools make end-to-end active performance measurements of the Internet.
Both the Surveyor and RIPE monitoring projects rely on a dedicated PC running Unix to be placed at each monitoring site. Each PC in turn relies on a Global Positioning System (GPS) device to obtain
accurate time and to synchronize time between each of the monitors. The monitors send packets at Poisson randomized time intervals to each other and use these packets to gather one way end-to-end
delay and loss measurements. They also make concurrent traceroutes which provides route history information. The community for Surveyor is Internet 2, though there are monitors at non Internet 2
sites, and in particular at 3 Higher Energy Physics (HEP) sites CERN, FNAL and SLAC that are also PingER monitor sites. The community for RIPE is European Internet Service Providers (ISPs), though
again there are RIPE machines at CERN and SLAC.
More general information comparing Surveyor and RIPE and other active measurement projects can be found in Comparison of some Internet Active End-to-end Performance Measurement projects.
Comparing Surveyor with RIPE
Jan-15 Data Set
Since the autocorrelation functions of pings measured as closely apart as 1 second (see High statistics ping results) indicate very weak correlations, rather than look for correlations between
individual Surveyor and RIPE probes we look at the data in aggregation. Therefore, for the Jan-15 set, we binned the RIPE & Surveyor data into 0.1 millisecond bins and plotted the one-way delay time
distributions that are shown to the right.
In the figure RIPE data has been normalized so the peak height of the 1st peak around 86 msec. has the same height as the equivalent peak in the Surveyor distribution. Also 0.2 msec. have been
subtracted from the delays for the RIPE data to improve the agreement, this is discussed in further detail below.
It can be seen that there is reasonably good agreement. Looking in more detail it is noteworthy that the second peak at around 95 msec. has more counts for the Surveyor data. Also there is more RIPE
data at lower delays in the region below the 1st peak. These difference are due to the 2 experiments not both being up during the entire interval and therefore not measuring exactly the same
behavior. In particular the SLAC Surveyor machine didn't take data during the period when the SLAC RIPE machine was reporting one-way delay less than 90ms.
It can be seen that there is a strong correlation between the points with R^2 > 0.9 (i.e. the proportion of variation in the Surveyor distribution attributable to the RIPE distribution is over 90%,
see "The New Statistical Analysis of Data", by T. W. Anderson and Jeremy D. Finn, published by Springer, 1996).
We computed the R^2 of the scatter plot data as a function of the "correction" (drift) made to the Surveyor delay; the plot of this is seen to the right.
It can be seen that the best correlation (highest value of R^2) is obtained when the Surveyor delay times are increased by 0.2 msec. Some of this difference maybe attributable to the differences in
packet sizes of the probes used. Surveyor uses 40 Byte packets whereas RIPE uses 100 Byte packets. For larger packets delay will be higher. Henk Uijterwaal of RIPE-NCC reports: "I've measured this
effect for the path Advanced-RIPE and back last April 1999, and found differences of 0.4..0.6 ms (depending on the time of day and thus network load). One can calculate the effective throughput from
these numbers. Back in April, I ended up with something in the 80 to 120 kb/s range, which is consistent with what one sees when transferring large files between the two sites. Of course, the numbers
may be different for the SLAC-CERN connection, but delays for the RIPE box should be a little higher than for Surveyor."
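The drift/offset search described above amounts to: bin both delay series, compute R^2 between the binned distributions, and repeat with the RIPE delays shifted by each candidate offset. A sketch of that procedure (Python; the inputs would be measured one-way delays in milliseconds; the actual Surveyor/RIPE data are not reproduced here):

```python
from collections import Counter

def bin_counts(delays_ms, width=0.1):
    """Histogram delays into fixed-width bins: {bin index: count}."""
    return Counter(int(d / width) for d in delays_ms)

def r_squared(xs, ys):
    """Coefficient of determination between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

def distribution_r2(surveyor_ms, ripe_ms, offset_ms):
    """R^2 between the binned Surveyor and (shifted) RIPE distributions."""
    s = bin_counts(surveyor_ms)
    r = bin_counts(d - offset_ms for d in ripe_ms)
    bins = sorted(set(s) | set(r))
    return r_squared([s.get(b, 0) for b in bins], [r.get(b, 0) for b in bins])

def best_offset(surveyor_ms, ripe_ms, candidates):
    """Offset subtracted from the RIPE delays that maximizes R^2."""
    return max(candidates, key=lambda o: distribution_r2(surveyor_ms, ripe_ms, o))
```

The offset that maximizes R^2 plays the role of the 0.2-0.3 msec correction found above.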
Jan-4 Data set
We then calculated the coefficient of determination (R^2), plotting it as a function of the adjustment made to the RIPE delays. It can be seen in the chart to the left that the optimum fit (largest value of R^2) is obtained when between 0.2 and 0.3 msec are subtracted from the RIPE delays.
For each pair (with index i) we formed the timestamp differential dT[i] = T^S[i] - T^R[i], where T^S[i] is the timestamp of the i-th Surveyor delay measurement and T^R[i] is the same for the RIPE delay measurement. Then we sorted by differential dT and kept those pairs that had a dT of < 2 secs. Then we scatter-plotted these points and calculated the R^2. We also looked at the dT distribution. The results are shown to the right. This may illustrate that for this data, even for measurements close together in time (< 2 seconds apart, with a median separation of 0.545 seconds), there is little correlation between the points. Even though there are points that have long delays, the long delays in one measurement are not reflected in the other. Presumably this means that the effect that caused the long delay in one measurement has changed by the time (< 2 secs later) the other measurement is made. To obtain stronger correlation one probably needs more data with a persistent structure (for example, such as would be caused by a routing change or congestion). An example of data with more structure can be seen in the results reported in Comparison of Surveyor and PingER.
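The nearest-in-time pairing step can be sketched like this (Python; the timestamp/delay lists stand in for the real measurement logs):

```python
import bisect

def pair_by_time(surveyor, ripe, max_dt=2.0):
    """surveyor, ripe: lists of (timestamp, delay), each sorted by timestamp.
    For each Surveyor point, take the RIPE point nearest in time, keeping
    only pairs whose timestamps differ by less than max_dt seconds."""
    ripe_times = [t for t, _ in ripe]
    pairs = []
    for t, delay in surveyor:
        i = bisect.bisect_left(ripe_times, t)
        best = None
        for j in (i - 1, i):                 # neighbours on either side
            if 0 <= j < len(ripe):
                dt = abs(ripe_times[j] - t)
                if dt < max_dt and (best is None or dt < best[0]):
                    best = (dt, ripe[j][1])
        if best is not None:
            pairs.append((delay, best[1]))
    return pairs
```

The kept pairs can then be scatter-plotted and their R^2 computed, as in the analysis above.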
To effectively compare the methods it is important to ensure that the measurements cover the same time periods. The RIPE and Surveyor delay distributions strongly correlate (i.e. over 90% of the variation in the Surveyor delay distribution is attributable to the RIPE delay distribution). The widths of the distributions and the percentiles are also similar. The offset of about 0.2-0.3 msec. in the modes of the distributions can be explained by the difference in packet sizes. The lack of any long-term structure (i.e. behavior persistent over time periods of 0.5 seconds or more) in the data results in there being little correlation between the individual Surveyor and RIPE data points. The lack of a strong autocorrelation value for the delays measured by RIPE and Surveyor for these data sets is also probably due to the lack of noticeable structure in the data.
Multiple Eigenvalues.
Next: Spectral Transformation Up: Convergence Properties Previous: Convergence Properties   Contents   Index
In theory, we get convergence only to eigenvectors that are represented in the starting vector. Normally this is not of any great concern, since rounding errors will also introduce those directions
that were not present at the outset into the computation. However, in one case this is important, namely, when the matrix has multiple eigenvalues.
There are two different ways to get a set of linearly independent eigenvectors to a multiple eigenvalue. The first is to restart and run a projected operator on the subspace orthogonal to the
converged eigenvector(s), where all the converged eigendirections have been projected away. This corresponds to orthogonalizing the vector 4.6 to all converged eigenvectors. This procedure is
repeated as long as new vectors converge; see, e.g., [318]. There is a second, more radical way to deal with possibly multiple eigenvalues: the block Lanczos method. It starts with several, say 4.6.
In the block Lanczos method, the convergence is governed by the separation of the desired eigenvalue
Regardless of the above objection, block Lanczos is advantageous for efficiency reasons, whenever computing
Susan Blackford 2000-11-20
The ordinary intersecting fret band is a pattern of lines that cross each other, found on Greek vase paintings.
Illustration of the intersection of a hexagonal pyramid and a plane.
Illustration of two lines intersecting at a point. This can be used to show vertical angles.
Illustration of two straight lines drawn from a point in a perpendicular to a given line, cutting off…
Draftsman's first method for drawing a parabola
The modern Gothic parapet is a stone design of a wall-like barrier found on the edge of a roof or structure.…
Integral and derivative simple problem
October 21st 2010, 03:17 AM #1
Jun 2009
Integral and derivative simple problem
How can i go from here
$\vec a = \frac {d \vec v} {dt}$
to here?
$\vec v-\vec v_0 = \int_{t_0}^t \vec a \, dt$
I get it till this point:
$\vec a dt = d \vec v$
Then he should integrate both sides.
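Spelling the integration step out (the key fact is that $\int d\vec v$ telescopes to the difference of its endpoints, by the fundamental theorem of calculus):

```latex
\vec a \, dt = d\vec v
\;\Longrightarrow\;
\int_{t_0}^{t} \vec a \, dt
= \int_{\vec v_0}^{\vec v} d\vec v
= \vec v - \vec v_0 ,
\qquad \vec v_0 = \vec v(t_0), \quad \vec v = \vec v(t).
```

On the left the integration variable is time, running from $t_0$ to $t$; on the right it is velocity, which runs correspondingly from $\vec v_0$ to $\vec v$.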
Last edited by AniMuS; October 21st 2010 at 03:35 AM.
Panagiotis Petrou has posted a link to a recent paper of his, which develops a cost-effectiveness analysis of a drug used as a second-line treatment of renal carcinoma. The analysis is based on a
Bayesian Markov model. But (from an incredibly self...
Machine Learning with R – Book Review
I have to admit it...I'm an R junkie...since the first time I started learning R...it has become an addiction to me...I try to solve every problem with R...which is obviously not the best way to
go...but my rule of thumb is..."Try with R first...otherw...
analyze the national immunization survey (nis) with r
for twenty years now, the centers for disease control and prevention (cdc) has been random-digit-dialing american households to ask parents which vaccinations their little tykes have received. and
since vaccination history might not be at the for...
New version of pqR, now with task merging
I’ve now released pqR-2013-12-29, a new version of my speedier implementation of R. There’s a new website, pqR-project.org, as well, and a new logo, seen here. The big improvement in this version is
that vector operations are sped up using task merging. With task merging, several arithmetic operations on a vector may be merged into a
Understanding the data analytics project life cycle
While dealing with data analytics projects, there are some fixed tasks that should be followed to get the expected output. So here we are going to build a data analytics project cycle, which will be a set of standard data-driven processes to lead data to insights effectively.
Down-Sampling Using Random Forests
We discuss dealing with large class imbalances in Chapter 16. One approach is to sample the training set to coerce a more balanced class distribution. We discuss:
- down-sampling: sample the majority class to make their frequencies closer to the rarest class.
- up-sampling: the minority class is resampled to increase the corresponding frequencies.
- hybrid approaches: some methodologies do a little of both and...
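For illustration, down-sampling can be sketched in a few lines (a generic Python sketch, not code from the book; `label_of` is a hypothetical accessor for a row's class label):

```python
import random

def down_sample(rows, label_of, seed=0):
    """Randomly subsample every class down to the size of the rarest class."""
    rng = random.Random(seed)
    by_class = {}
    for row in rows:
        by_class.setdefault(label_of(row), []).append(row)
    n_min = min(len(members) for members in by_class.values())
    balanced = []
    for members in by_class.values():
        balanced.extend(rng.sample(members, n_min))
    return balanced
```

After this step every class contributes the same number of rows, at the cost of discarding majority-class data.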
R / Bioconductor for High-Throughput Sequence Analysis
I would like to recommend a recent workshop material on R/Bioconductor from Marc Carlson et al.: http://www.bioconductor.org/help/course-materials/2013/SeattleMay2013/
PDF: IntermediateSequenceAnalysis2013.pdf
R script: IntermediateSequenceAnalysis201...
New Version of RStudio (v0.98)
We’re pleased to announce that the final version of RStudio v0.98 is available for download now. Highlights of the new release include: An interactive debugger for R that is tightly integrated with
base R debugging tools (browser, recover, etc.) Numerous improvements to the Workspace pane (which is now called the Environment pane). R Presentations for easy authoring
Newest release of BCEA
Very shortly, I'll upload the newest release of BCEA, my R package to post-process the output of a (Bayesian) health economic model and produce systematic summaries (such as graphs and tables) for a
full economic evaluation and probabilistic sensitivit...
Unusual timing shows how random mass murder can be (or even less)
This post follows the original one on the headline of the USA Today I read during my flight to Toronto last month. I remind you that the unusual pattern was about observing four U.S. mass murders
happening within four days, “for the first time in at least seven years”. Which means that the difference between
7 boys and 3 girls are to be seated in a row. How to calculate the number of ways they can be seated if the 3 girls want to be seated together?
We have 7 boys and 3 girls.
But the 3 girls must always sit together, so the girls can be taken as one unit.
So now we have 7 boys and 1 unit of girls, making 8 units altogether.
The number of ways to arrange 8 units is 8!.
But we can interchange the positions of the three girls while keeping them together, so arranging the three girls can be done in 3! ways.
So the number of ways to arrange the boys and girls is `8!xx3! = 241920`
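The count is easy to verify computationally (Python; the brute-force check is run on a smaller instance, since enumerating all 10! seatings would be slow):

```python
from itertools import permutations
from math import factorial

def seatings_with_group_together(n_total, group):
    """Brute-force count of row seatings of n_total people in which
    everyone in `group` occupies consecutive seats."""
    count = 0
    for perm in permutations(range(n_total)):
        positions = [i for i, p in enumerate(perm) if p in group]
        if max(positions) - min(positions) == len(group) - 1:
            count += 1
    return count

# 3 girls as one unit: 8 units in a row, times 3! internal arrangements.
answer = factorial(8) * factorial(3)
```

For a smaller instance (3 boys and 2 girls), brute force gives 4! x 2! = 48, matching the same "treat the group as one unit" argument.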
Statistics in Analytical Chemistry: Part 4—Calibration: Uncertainty intervals
As was discussed in the previous column of this series (in the January 2003 issue of American Laboratory), calibration is a procedure that imperfectly transforms a response into a useful measurement.
It follows that there is uncertainty associated with any straight-line calibration (or, for that matter, with any calibration “curve,” such as a fitted exponential function). Hence, if one conducts
repeated calibration experiments, different coefficients for each experiment’s equation will be the result. (One hopes that the differences are relatively small.) An example is shown in Figure 1.
In order to quantify the uncertainty in any given calibration line, the concept of uncertainty intervals is needed. In statistics, there are three types of uncertainty intervals:
1. Confidence intervals give the uncertainty in an estimate of a population parameter. (Note: A parameter is the true, but unknown, value of a key descriptive numerical quantity, either for a
function or for the distribution of a population. Examples are the true mean and standard deviation of daily rainfall in Kalamazoo, or of pork-belly futures.) Confidence intervals are not
appropriate for reporting measurements, which are not parameters.
2. Prediction intervals give the uncertainty in one future measurement. These intervals are the most widely utilized for reporting measurements and will be used throughout this series of articles.
3. Statistical-tolerance intervals quantify the uncertainty in a chosen percentage of k future measurements (k may be infinite). Such intervals are appropriate for reporting measurements, but are
more complex to interpret. Also, to create statistical-tolerance intervals, tables of critical values are needed but are difficult to find.
(Note: The width of the interval increases as one progresses from (1) to (2) to (3), since there is more accounting for variation with each step.)
For a given calibration curve, the prediction interval consists of a pair of prediction limits (an upper and a lower limit) that bracket the curve. Each limit has a user-chosen level of significance
associated with it (e.g., 2.5% or 0.5%). The confidence level for the prediction interval is 100% minus the sum of these two levels. Although these two levels are typically the same, they do not have
to be. For a straight line (SL), the prediction interval (for the measurement, in concentration units) used at a given value of y is:

x ± t[dof, γ] × (RMSE/b) × sqrt{1 + 1/n + (x – x[avg])^2/S[xx]}

where x = (y–a)/b; a = the intercept of the line on the y-axis; b = the slope of the line; t = Student’s t; dof = degrees of freedom (for a SL, dof = [n – 2]); γ = the significance level associated
with the limit in question (i.e., either the upper or the lower limit); RMSE = the root mean square error (often used to estimate the measurement standard deviation, which is the statistic actually
needed here); n = the number of data points in the calibration design; x[avg] = the mean of the x values in the calibration; and S[xx] = the corrected sum of squares = Σ[(x – x[avg])^2]. Prediction
intervals capture the uncertainty in the: 1) calibration-line slope, 2) calibration-line intercept, and 3) response.
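As a numerical companion to the formula (a Python sketch using the article's symbols; the example values are invented, and the critical value t would come from a t table or a statistics library):

```python
from math import sqrt

def prediction_halfwidth(y, a, b, rmse, n, x_mean, s_xx, t):
    """Half-width, in concentration units, of the prediction interval
    at response y, for the straight-line calibration y = a + b*x."""
    x = (y - a) / b                      # back-calculated concentration
    return t * (rmse / abs(b)) * sqrt(1 + 1/n + (x - x_mean) ** 2 / s_xx)

# Invented example: 10-point calibration (dof = 8), slope 2, intercept 1;
# t = 3.355 is Student's t for dof = 8 at 99% confidence.
hw_center = prediction_halfwidth(y=11, a=1, b=2, rmse=0.05,
                                 n=10, x_mean=5, s_xx=60, t=3.355)
```

The half-width is smallest when the back-calculated x equals x[avg] and grows toward the ends of the calibrated range, which is the flaring sometimes seen in interval plots.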
The above equation was not given with the intent that the reader should memorize it. However, calibration experiments can be designed more intelligently if the equation is understood. Generally, one
has some control over some of the terms in the equation. Knowing how the terms’ sizes will affect the value of the interval can be a guide to wise planning. The next paragraphs will discuss the
equation in some detail.
In general, the formula says that the line’s uncertainty is a function of three main components:
1. RMSE, which is an estimate of the measurement standard deviation; i.e., the inherent variation in the measurement system, typically the component that is the most responsible for uncertainty.
2. A square-root term that can be no smaller than 1.
3. Student’s t, which is the “penalty factor” that must be included since the true variation in the system is unknown. For 99% confidence, Student’s t approaches 2.33 as n increases. (Most people
are familiar with the ±3 sigma approximation. The name comes from the fact that this method calls for 7 or 8 replicates [i.e., 6 or 7 degrees of freedom after the standard deviation is
calculated]; with dof = 6 or 7, t is approx. 3 at 99% confidence. Similarly, ±2 sigma approximates 95% confidence.)
Table 1 - Student’s t table*
For a given instrumental method, the choice of calibration design will not have a significant effect on RMSE. The only ways to decrease RMSE significantly are: 1) to obtain a method or instrument
with less variation, or 2) to report an average of m independent measurements, reducing RMSE by a factor of the square root of m. However, the other two main components are affected by the choice of
design. Student’s t is affected by the number of degrees of freedom (i.e., the number of data points). As n increases, t decreases. However, a quick look at a Student’s t table shows that there are
diminishing returns even for moderate values of n (Table 1). As can be seen, the biggest gain is in moving from a dof of 1 to a dof of 5~10. Once the degrees of freedom exceed 20 or so, there is little
reduction in t.
It should be noted that there is a huge penalty if calibration is done using only one measurement on each of three concentrations of standards (Table 1). In this situation, dof is only 1; thus t is
31.8. If each standard were simply run in triplicate, the dof would rise to 7 and t would drop to 3. The decision to run more than the "bare minimum" design will greatly affect the prediction interval.
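These critical values can be reproduced numerically from the t density alone (a self-contained Python sketch; in practice one would use a statistics library's quantile function, e.g. scipy.stats.t.ppf). The values quoted above (31.8 at 1 dof, about 3 at 7 dof) are one-tailed 1% critical values:

```python
from math import gamma, sqrt, pi

def t_pdf(x, dof):
    """Density of Student's t with `dof` degrees of freedom."""
    c = gamma((dof + 1) / 2) / (sqrt(dof * pi) * gamma(dof / 2))
    return c * (1 + x * x / dof) ** (-(dof + 1) / 2)

def t_cdf(x, dof, steps=4000):
    """P(T <= x) for x >= 0: composite Simpson's rule on [0, x] plus symmetry."""
    h = x / steps
    s = t_pdf(0, dof) + t_pdf(x, dof)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * t_pdf(i * h, dof)
    return 0.5 + s * h / 3

def t_critical(confidence, dof):
    """One-tailed critical value: t such that P(T <= t) = confidence."""
    lo, hi = 0.0, 500.0
    for _ in range(50):                  # bisection on the monotone CDF
        mid = (lo + hi) / 2
        if t_cdf(mid, dof) < confidence:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

t_critical(0.99, dof) for dof = 1, 7, 30 reproduces the pattern in the text: roughly 31.8, 3.0 and 2.46, approaching 2.33 as dof grows.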
The third component of the formula is the square-root expression. The number of data points, n, affects this part also. Since n is in the denominator, (1/n) can be reduced by driving n higher, an
action already favored because of the effect on t. Finally, [(x – x[avg])^2/S[xx])] depends on two things. The closer x = (y – a)/b is to x[avg], the narrower the interval (see the numerator); in
addition, the wider the calibration range, the narrower the interval (see the denominator). Actually, S[xx] is maximized when the calibration design consists of half the points at the low value, and
half at the high value, with none in between. However, this design is not recommended because it does not allow for model validation (e.g., determining if curvature is present, meaning that the
straight-line model should be challenged). (Calibration design and model validation will be discussed in more detail beginning in the next article.)
In summary, one can decrease the size of the prediction interval by: 1) keeping n high, and 2) making the range of the calibration design as large as possible (provided that the model is plausible
for the entire range). Once the calibration experiment is conducted, the prediction interval can be plotted along with the calibration line. An example is shown in Figure 2. While all three lines
appear more or less parallel in this plot, the interval lines may flare out at either end. This phenomenon occurs most often when only a small number of data points are available.
It should also be pointed out that, technically, the interval applies to the y-values (i.e., the responses). In calibration, one wants to know the uncertainty in any concentration predicted from the
line (i.e., the x-values). The simplest technique, which applies to all types of models, is to plot the curve and interval via statistics software. Then, the width of the interval can be measured
graphically against either axis.
Mr. Coleman is an Applied Statistician, Alcoa Technical Center, MST-C, 100 Technical Dr., Alcoa Center, PA 15069, U.S.A.; e-mail: david.coleman@alcoa.com. Ms. Vanatta is an Analytical Chemist, Air
Liquide-Balazs™ Analytical Services, Box 650311, MS 301, Dallas, TX 75265, U.S.A.; tel: 972-995-7541; fax: 972-995-3204; e-mail: lynn.vanatta@airliquide.com. | {"url":"http://www.americanlaboratory.com/913-Technical-Articles/1707-Statistics-in-Analytical-Chemistry-Part-4-Calibration-Uncertainty-intervals/","timestamp":"2014-04-21T14:02:04Z","content_type":null,"content_length":"48441","record_id":"<urn:uuid:dfcd9898-6ada-4ddd-add4-0006ae6a54ac>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00095-ip-10-147-4-33.ec2.internal.warc.gz"} |
Glossary page C
Category data
Data in which the values can be organised into distinct groups. These distinct groups (or categories) must be chosen so that they do not overlap and that every value belongs to one and only one
group, and there should be no doubt as to which one.
The term ‘category data’ is used with two different meanings. The curriculum uses a meaning that puts no restriction on whether or not the categories have a natural ordering. This use of category
data has the same meaning as qualitative data. The other meaning restricts category data to categories that do not have a natural ordering.
The eye colours of a class of year 9 students.
Alternative: categorical data
See: qualitative data
Curriculum achievement objectives references
Statistical investigation: Levels 1, 2, 3, 4, (5), (6), (7), (8)
Category variable
A property that may have different values for different individuals and for which these values can be organised into distinct groups. These distinct groups (or categories) must be chosen so that they
do not overlap and that every value belongs to one and only one group, and there should be no doubt as to which one.
The term ‘category variable’ is used with two different meanings. The curriculum uses a meaning that puts no restriction on whether or not the categories have a natural ordering. This use of category
variable has the same meaning as qualitative variable. The other meaning of category variable is restricted to categories which do not have a natural ordering.
The eye colours of a class of year 9 students.
Alternative: categorical variable
See: qualitative variable
Curriculum achievement objectives references
Statistical investigation: Levels (4), (5), (6), (7), (8)
Causal-relationship claim
A statement that asserts that changes in a phenomenon (the response) are caused by differences in a received treatment or by differences in the value of another variable (an explanatory variable).
Such claims can be justified only if the observed phenomenon is a response from a well-designed and well-conducted experiment.
Curriculum achievement objectives reference
Statistical literacy: Level 8
A study that attempts to measure every unit in a population.
Curriculum achievement objectives references
Statistical literacy: Levels (7), (8)
Central limit theorem
The fact that the sampling distribution of the sample mean of a numerical variable becomes closer to the normal distribution as the sample size increases. The sample means are from random samples
from some population.
This result applies regardless of the shape of the population distribution of the numerical variable.
‘Central’ is used in this term because there is a tendency for values of the sample mean to be closer to the ‘centre’ of the population distribution than individual values are. This tendency
strengthens as the sample size increases.
‘Limit’ is used in this term because the closeness or approximation to the normal distribution improves as the sample size increases.
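A simulation makes the theorem concrete (a Python sketch, not part of the glossary): draw many random samples from a strongly skewed population, and the distribution of the sample mean centres on the population mean with spread close to σ/√n, looking more normal as n grows.

```python
import random
import statistics

def sample_means(draw, n, reps, seed=0):
    """Means of `reps` independent random samples of size n, where each
    value comes from draw(rng)."""
    rng = random.Random(seed)
    return [statistics.fmean(draw(rng) for _ in range(n)) for _ in range(reps)]

# Right-skewed population: exponential with mean 1 and standard deviation 1.
draw = lambda rng: rng.expovariate(1.0)
means_small = sample_means(draw, n=5, reps=2000)
means_large = sample_means(draw, n=50, reps=2000)
```

Plotting means_small against means_large would show the larger-n distribution both tighter and more symmetric, even though the population itself is far from normal.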
See: sampling distribution
Curriculum achievement objectives reference
Statistical investigation: Level 8
Centred moving average
See: moving mean
Curriculum achievement objectives reference
Statistical investigation: (Level 8)
A concept that applies to situations that have a number of possible outcomes, none of which is certain to occur when a trial of the situation is performed.
Two examples of situations that involve elements of chance follow.
Example 1
A person will be selected and their eye colour recorded.
Example 2
Two dice will be rolled and the numbers on each die recorded.
Curriculum achievement objectives references
Probability: All levels
Class interval
One of the non-overlapping intervals into which the range of values of measurement data, and occasionally whole-number data, is divided. Each value in the distribution must be able to be classified
into exactly one of these intervals.
Example 1 (Measurement data)
The number of hours of sunshine per week in Grey Lynn, Auckland, from Monday 2 January 2006 to Sunday 31 December 2006 are recorded in the frequency table below. The class intervals used to group the
values of weekly hours of sunshine are listed in the first column of the table.
Hours of sunshine frequency table,
by 5 hour class intervals.
Hours of sunshine Number of weeks
5 to less than 10 2
10 to less than 15 2
15 to less than 20 5
20 to less than 25 9
25 to less than 30 12
30 to less than 35 10
35 to less than 40 5
40 to less than 45 6
45 to less than 50 1
Total 52
Example 2 (Whole-number data)
Students enrolled in an introductory statistics course at the University of Auckland were asked to complete an online questionnaire. One of the questions asked them to enter the number of countries
they had visited, other than New Zealand. The class intervals used to group the values are listed in the first column of the table.
Number of countries visited frequency
table, by 4 country class intervals.
Number of countries visited Frequency
0 – 4 446
5 – 9 172
10 – 14 69
15 – 19 19
20 – 24 14
25 – 29 4
30 – 34 3
Total 727
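The grouping rule behind such a table can be made explicit in code (a Python sketch; the raw values below are invented, since only the grouped frequencies are shown above):

```python
from collections import Counter

def class_interval(value, width=5):
    """Label of the class interval containing a whole-number value,
    using width-5 intervals starting at 0 (as in Example 2)."""
    lo = (value // width) * width
    return f"{lo} - {lo + width - 1}"

def frequency_table(values, width=5):
    """Count how many values fall in each class interval."""
    return Counter(class_interval(v, width) for v in values)
```

Each value belongs to exactly one interval: class_interval(4) is '0 - 4' while class_interval(5) is '5 - 9', so the intervals cannot overlap.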
Alternatives: bin, class
Curriculum achievement objectives references
Statistical investigation: Levels (4), (5), (6), (7), (8)
Cleaning data
The process of finding and correcting (or removing) errors in a data set in order to improve its quality.
Mistakes in data can arise in many ways, such as:
• A respondent may interpret a question in a different way from that intended by the writer of the question.
• An experimenter may misread a measuring instrument.
• A data entry person may mistype a value.
Curriculum achievement objectives references
Statistical investigation: Levels 5, (6), (7), (8)
Cluster (in a distribution of a numerical variable)
A distinct grouping of neighbouring values in a distribution of a numerical variable that occur noticeably more often than values on each side of these neighbouring values. If a distribution has two
or more clusters, then they will be separated by places where values are spread thinly or are absent.
In distributions with a small number of values or with values that are spread thinly, some values may appear to form small clusters. Such groupings may be due to natural variation (see sources of
variation), and these groupings may not be apparent if the distribution had more values. Be cautious about commenting on small groupings in such distributions.
For the use of ‘cluster’ in cluster sampling see the description of cluster sampling.
Example 1
The number of hours of sunshine per week in Grey Lynn, Auckland, from Monday 2 January 2006 to Sunday 31 December 2006 are displayed in the dot plot below.
From the greater density of the dots in the plot, we can see that the values have one cluster from about 23 to 37 hours per week of sunshine.
Example 2
A sample of 40 parents was asked about the time they spent in paid work in the previous week. Their responses are displayed in the dot plot below.
There are three clusters in the distribution: a group who did a very small amount or no paid work, a group who did part-time work (about 20 hours) and a group who did full-time work (about 35 to 40
Curriculum achievement objectives references
Statistical investigation: Levels (2), (3), (4), (5), (6)
Statistical literacy: Levels (2), (3), (4), (5), (6)
Cluster sampling
A method of sampling in which the population is split into naturally forming groups (the clusters), with the groups having similar characteristics that are known for the whole population. A simple
random sample of clusters is selected. Either the individuals in these clusters form the sample or simple random samples chosen from each selected cluster form the sample.
Consider obtaining a sample of secondary school students from Wellington. The secondary schools in Wellington are suitable clusters. A simple random sample of these schools is selected. Either all
students from the selected schools form the sample or simple random samples chosen from each selected school form the sample.
Curriculum achievement objectives references
Statistical investigation: Levels (7), (8)
Coefficient of determination (in linear regression)
The proportion of the variation in the response variable that is explained by the regression model.
If there is a perfect linear relationship between the explanatory variable and the response variable, there will be some variation in the values of the response variable because of the variation that
exists in the values of the explanatory variable. In any real data, there will be more variation in the values of the response variable than the variation that would be explained by a perfect linear
relationship. The total variation in the values of the response variable can be regarded as being made up of variation explained by the linear regression model and unexplained variation. The
coefficient of determination is the proportion of the explained variation relative to the total variation.
If the points are close to a straight line, then the unexplained variation will be a small proportion of the total variation in the values of the response variable. This means that the closer the
coefficient of determination is to 1, the stronger the linear relationship.
The coefficient of determination is also used in more advanced forms of regression, and is usually represented by R^2. In linear regression, the coefficient of determination, R^2, is equal to the
square of the correlation coefficient, i.e., R^2 = r^2.
The actual weights and self-perceived ideal weights of a random sample of 40 female students enrolled in an introductory statistics course at the University of Auckland are displayed on the scatter
plot below. A regression line has been drawn. The equation of the regression line is
predicted y = 0.6089x + 18.661 or predicted ideal weight = 0.6089 × actual weight + 18.661
The coefficient of determination R^2 = 0.822
This means that 82.2% of the variation in the ideal weights is explained by the regression model (i.e., by the equation of the regression line).
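The link between the regression output and the coefficient of determination can be checked numerically. The sketch below uses made-up (x, y) data, not the actual student weights; it computes r, fits the least-squares line, and confirms that the ratio of explained to total variation, R^2, equals r^2:

```python
import math

# Made-up (x, y) pairs -- not the actual weight data from the example.
x = [55, 60, 65, 70, 75, 80]
y = [52, 55, 58, 60, 64, 66]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)

r = sxy / math.sqrt(sxx * syy)      # correlation coefficient

slope = sxy / sxx                   # least-squares regression line
intercept = my - slope * mx

predicted = [intercept + slope * a for a in x]
explained = sum((p - my) ** 2 for p in predicted)   # explained variation
R2 = explained / syy                                # syy is the total variation

print(round(r, 4), round(R2, 4))    # R2 equals r squared
```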
Curriculum achievement objectives reference
Statistical investigation: (Level 8)
Combined event
An event that consists of the occurrence of two or more events.
Two different ways of combining events A and B are: A or B, A and B.
A or B is the event consisting of outcomes that are either in A or B or both.
A and B is the event consisting of outcomes that are common to both A and B.
Suppose we have a group of men and women and each person is a possible outcome of a probability activity. A is the event that a person is a woman and B is the event that a person is taller than 170cm.
Consider A and B. The outcomes in the combined event A and B will consist of the women who are taller than 170cm.
Consider A or B. The outcomes in the combined event A or B will consist of all of the women as well as the men taller than 170cm. An alternative description is that the combined event A or B will
consist of all people taller than 170cm as well as the women who are not taller than 170cm.
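The two combined events can be illustrated with set operations on a small hypothetical group (the people and heights below are invented for the example):

```python
# Invented group; each key is one possible outcome (a person).
people = {
    "p1": ("woman", 175), "p2": ("woman", 160), "p3": ("man", 180),
    "p4": ("man", 165),   "p5": ("woman", 172),
}

A = {k for k, (gender, height) in people.items() if gender == "woman"}
B = {k for k, (gender, height) in people.items() if height > 170}

print(sorted(A & B))  # A and B: women taller than 170cm
print(sorted(A | B))  # A or B: all women, plus men taller than 170cm
```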
Alternative: compound event, joint event
Curriculum achievement objectives reference
Probability: Level 8
Complementary event
With reference to a given event, the event that the given event does not occur. In other words, the complementary event to an event A is the event consisting of all of the possible outcomes that are
not in event A.
There are several symbols for the complement of event A. The most common are A' and Ā.
Suppose we have a group of men and women and each person is a possible outcome of a probability activity. If A is the event that a person is aged 30 years or more, then the complement of event A, A',
consists of the people aged less than 30 years.
Curriculum achievement objectives reference
Probability: (Level 8)
Conditional event
An event that consists of the occurrence of one event based on the knowledge that another event has already occurred.
The conditional event consisting of event A occurring, knowing that event B has already occurred, is written as A | B, and is expressed as ‘event A given event B’. Event B is considered to be the
‘condition’ in the conditional event A | B.
The probability of the conditional event A | B is P(A|B) = P(A and B) / P(B).
For a justification of the above formula, see the example below.
Suppose we have a group of men and women and each person is a possible outcome of the probability activity of selecting a person. A is the event that a person is a woman, and B is the event that a
person is taller than 170cm.
Consider A | B.
Given that B has occurred, the outcomes of interest are now restricted to those taller than 170cm.
A | B will then be the women of those taller than 170cm.
Suppose that the genders and heights of the people were as displayed in the two-way table below.
Two-way table of Height by Gender.
                   Taller than 170cm   Not taller than 170cm   Total
Gender   Male             68                    15               83
         Female           28                    89              117
Total                     96                   104              200
Given that B has occurred, the outcomes of interest are the 96 people taller than 170cm.
If a person is randomly selected from these 96 people, then the probability that the person is female is P(A|B) = 28/96 = 0.292.
If both parts of the fraction are divided by 200, this becomes P(A|B) = (28/200)/(96/200) = P(A and B)/P(B)
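The same calculation can be reproduced directly from the counts in the two-way table; the sketch below works with exact fractions:

```python
from fractions import Fraction

total = 200        # all people in the table
tall = 96          # event B: taller than 170cm
female_tall = 28   # event A and B: women taller than 170cm

p_b = Fraction(tall, total)
p_a_and_b = Fraction(female_tall, total)

p_a_given_b = p_a_and_b / p_b
print(p_a_given_b, round(float(p_a_given_b), 3))  # 7/24, 0.292
```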
Curriculum achievement objectives reference
Probability: Level 8
Confidence interval
An interval estimate of a population parameter. A confidence interval is therefore an interval of values, calculated from a random sample taken from the population, any of which is a plausible value for the population parameter.
The word ‘confidence’ is used in the term because the method that produces the confidence interval has a specified success rate (confidence level) for the percentage of times such intervals contain
the true value of the population parameter in the long run. 95% is commonly used as the confidence level.
See: bootstrap confidence interval, bootstrapping, margin of error
Curriculum achievement objectives reference
Statistical investigation: Level 8
Confidence level
A specified percentage success rate for a method that produces a confidence interval, meaning that the method has this rate for the percentage of times such intervals contain the true value of the
population parameter in the long run.
The most commonly used confidence level is 95%.
The confidence level associated with the process of forming a bootstrap confidence interval for a parameter cannot be determined accurately but, in most cases, the confidence level will be about 90%
or higher (especially if any samples used are quite large). That is, even though the central 95% of estimates is used to form the bootstrap confidence interval, we cannot say that the confidence level is 95%.
This confidence level concept can be illustrated using the ‘Confidence interval coverage’ module from the iNZightVIT software. The module produced the following output. Note that to use this module
you must have data on every unit in the population.
The population used is 500 students from the CensusAtSchool database. This is multivariate data. The variable ‘rightfoot’ (the length of a student’s right foot, in centimetres), the quantity ‘mean’,
the confidence interval method ‘bootstrap: percentile’, the sample size ‘30’ and the number of repetitions ‘1000’ were selected.
The ‘Population’ plot shows the population distribution of the right foot lengths of the 500 students in the population. The vertical line shows the true population mean (about 23.4cm). The darker
dots show the final random sample selected.
The true population mean is also shown as a dotted line through all three plots.
The ‘Sample’ plot shows the 30 foot lengths from the sample, the sample mean (vertical line) and the bootstrap confidence interval (horizontal line).
The ‘CI history’ plot shows bootstrap confidence intervals constructed from some of the samples. The bootstrap confidence intervals that contained (covered) the true population mean are shaded in a
light colour (green) and the bootstrap confidence intervals that did not contain (did not cover) the true population mean are shaded in a dark colour (red). The box gives the percentage success rate
of the bootstrap confidence interval process based on 1000 samples. The success rate of 94.7% estimates the confidence level when using the bootstrap confidence interval process on this population
and for this sample size.
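A stripped-down version of this coverage experiment can be run in plain Python. The population, sample size and repetition counts below are illustrative stand-ins for the iNZightVIT settings, and the percentile interval is a simplified sketch rather than the software's exact procedure:

```python
import random

random.seed(1)

# Stand-in population: 500 "right foot lengths" drawn around 23.4cm.
population = [random.gauss(23.4, 2.0) for _ in range(500)]
true_mean = sum(population) / len(population)

def bootstrap_ci(sample, reps=200):
    """Percentile bootstrap interval for the mean (simplified sketch)."""
    n = len(sample)
    means = sorted(sum(random.choice(sample) for _ in range(n)) / n
                   for _ in range(reps))
    return means[int(0.025 * reps)], means[int(0.975 * reps) - 1]

trials, covered = 100, 0
for _ in range(trials):
    sample = random.sample(population, 30)
    lo, hi = bootstrap_ci(sample)
    if lo <= true_mean <= hi:
        covered += 1

print(covered / trials)  # this success rate estimates the confidence level
```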
Alternative: coverage
See: bootstrap confidence interval, bootstrapping
Curriculum achievement objectives reference
Statistical investigation: (Level 8)
Confidence limits
The lower and upper boundaries of a confidence interval.
Curriculum achievement objectives reference
Statistical investigation: (Level 8)
Continuous distribution
The variation in the values of a variable that can take any value in an (appropriately-sized) interval of numbers.
A continuous distribution may be an experimental distribution, a sample distribution, a population distribution, or a theoretical probability distribution of a measurement variable. Although the
recorded values in an experimental or sample distribution may be rounded, the distribution is usually still regarded as being continuous.
At Levels 7 and 8, the normal distribution is an example of a continuous theoretical probability distribution.
See: distribution
Curriculum achievement objectives references
Statistical investigation: Levels (5), (6), (7), (8)
Probability: Levels (5), (6), 7, (8)
Continuous random variable
A random variable that can take any value in an (appropriately-sized) interval of numbers.
The height of a randomly selected individual from a population.
Curriculum achievement objectives references
Probability: Levels (7), 8
Correlation
The strength and direction of the relationship between two numerical variables.
In assessing the correlation between two numerical variables, one variable does not need to be regarded as the explanatory variable and the other as the response variable, as is necessary in linear
Two numerical variables have positive correlation if the values of one variable tend to increase as the values of the other variable increase.
Two numerical variables have negative correlation if the values of one variable tend to decrease as the values of the other variable increase.
Correlation is often measured by a correlation coefficient, the most common of which measures the strength and direction of the linear relationship between two numerical variables. In this linear
case, correlation describes how close points on a scatter plot are to lying on a straight line.
See: correlation coefficient
Curriculum achievement objectives reference
Statistical investigation: Level (8)
Correlation coefficient
A number between -1 and 1 calculated so that the number represents the strength and direction of the linear relationship between two numerical variables.
A correlation coefficient of 1 indicates a perfect linear relationship with positive slope. A correlation coefficient of -1 indicates a perfect linear relationship with negative slope.
The most widely used correlation coefficient is called Pearson’s (product-moment) correlation coefficient, and it is usually represented by r.
Some other properties of the correlation coefficient, r:
1. The closer the value of r is to 1 or -1, the stronger the linear relationship.
2. r has no units.
3. r is unchanged if the axes on which the variables are plotted are reversed.
4. If the units of one, or both, of the variables are changed, then r is unchanged.
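Properties 3 and 4 can be verified numerically on made-up data (the values below are invented for illustration):

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

x = [1.0, 2.0, 3.0, 4.0, 5.0]     # invented values
y = [2.1, 3.9, 6.2, 8.0, 9.8]

r = pearson(x, y)
r_swapped = pearson(y, x)                         # property 3: axes reversed
r_rescaled = pearson([a * 2.54 for a in x], y)    # property 4: units changed

print(round(r, 4), round(r_swapped, 4), round(r_rescaled, 4))
```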
The actual weights and self-perceived ideal weights of a random sample of 40 female students enrolled in an introductory statistics course at the University of Auckland are displayed on the scatter
plot below.
The correlation coefficient r = 0.906
See: coefficient of determination (in linear regression), correlation
Curriculum achievement objectives reference
Statistical investigation: (Level 8)
Cyclical component (for time-series data)
Long-term variations in time-series data that repeat in a reasonably systematic way over time. The cyclical component can often be represented by a wave-shaped curve, which represents alternating
periods of expansion and contraction. The successive waves of the curve may have different periods.
Cyclical components are difficult to analyse, and at Level 8 cyclical components can be described along with the trend.
See: time-series data
Curriculum achievement objectives reference
Statistical investigation: (Level 8)
Last updated September 27, 2013 | {"url":"http://seniorsecondary.tki.org.nz/Mathematics-and-statistics/Glossary/Glossary-page-C","timestamp":"2014-04-20T03:11:44Z","content_type":null,"content_length":"257256","record_id":"<urn:uuid:62d32e86-f266-4cb3-82a7-ca1155b3798a>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
WHAT WOULD YOU DO WITH $5.00?
Grades 2-4
The lesson opens with a discussion about money, setting the stage for the video Alexander Who Used to Be Rich Last Sunday. This video is about a young boy who receives $5.00 as a gift from his grandparents and then describes how he spends or loses the money as the day wears on. In the follow-up activity, students are asked to find various combinations of coins that will add up to a specific
Alexander's riches. In the follow-up activity, students are asked to find various combinations of coins that will add up to a specific sum of money. In the extension activities, the students will
have an opportunity to explore additional problems involving money, take a closer look at money and how it is made, and read some additional books about how other people deal with the money that
comes their way. The final activity is a writing activity where students are asked to describe how they would handle a $5.00 gift.
Judith Viorst Stories: Alexander Who Used to Be Rich Last Sunday (AIMS Media)
Students will be able to:
□ identify coins and bills correctly;
□ work with money in amounts up to $5.00;
□ make sums of 50¢ using a variety of coins.
□ play money for students to use as a manipulative in the following denominations: 1¢, 5¢, 10¢, 25¢, $1.00 and $5.00
□ worksheet with pictures of coins that students can cut up
□ scissors
□ glue sticks, paste, or tape
□ Alexander, Who Used to Be Rich Last Sunday by Judith Viorst (ISBN 0-689-71199-9)
Ask the students, "How many of you, right now, have some money at home that is your very own?" (Pause for a show of hands.) "How did you manage to get this money?" (Pause for responses such as
allowance, gift from relative, earned it recycling bottles, etc.) "Have you ever saved your money to make a special purchase? What was that item? How long did it take to save your money? Was it
hard to save?"
"Today, we are going to watch a video about a boy named Alexander and see how he managed his money. You are going to use the play money in front of you to follow along with what happens in the
story. I'd like you to work in pairs and sort your money into piles of pennies, nickels, dimes, quarters, and bills." (Allow time for this activity.) Review the names of the coins and the bills
with the students.
The focus for viewing is a specific responsibility or task(s) students are responsible for during or after watching the video to focus and engage students' viewing attention.
To give the students a specific responsibility while viewing, tell them, "Alexander is going to begin his story by describing how much money his brother has. With your partner, use the play money
to find out how much money Anthony has."
BEGIN the video Judith Viorst Stories: Alexander Who Used to Be Rich Last Sunday after the credits when you see the sign "I'm Asleep." PAUSE after "... and 18 pennies" to allow the students time
to work with their partners to count out the money Anthony had and find the total amount. You may wish to have volunteers recall the various amounts and write them on the board. Anthony had six
dollars, three quarters, one dime, seven nickels, and 18 pennies for a total of $7.38.
To refocus the students on the video say, "We're going to do the same thing and find out how much money his brother Nicholas had." RESUME the video. PAUSE after "It isn't fair because all that I
have is bus tokens." Ask the students to work with their partners to count out Nicholas' money. He had two quarters, five dimes, five nickels, 14 pennies, and four dollar bills for a total of
$5.39. Also ask students if they know what bus tokens are. If they do not know, explain that people use money to purchase tokens that allow them to ride on buses and subways. The tokens can only
be used for that purpose. Even though tokens resemble coins, people can't use them to buy other things, like candy. They are only good for riding on the bus or subway.
To refocus the students on the video ask, "How did Alexander get rich last Sunday? How rich was he?" RESUME the video. PAUSE when Alexander says, "We like money a lot. Especially me." When the
students respond that Alexander's riches were the result of a gift from his visiting grandparents, ask them if they think this is a lot of money. What would they do with $5.00?
Bring the students' attention back to the video by saying, "Let's see what Alexander does with his money. I'd like you and your partner to help Alexander keep track of his money. Put a five
dollar bill in front of you just like Alexander has. Watch carefully because I would like you to figure out how much change will be left after he makes his first purchase?" RESUME the video.
PAUSE after "Fifty cents out of a dollar." to allow time for students to use the play money to figure out the change (answer: $4.50).
Say, "Follow along with your play money and find out how much he has left after the next money transaction." RESUME the video. PAUSE after Anthony collects the 25¢ bet from Alexander and give the
students time to figure out that Alexander has $4.25 left.
To refocus on the video, say, "Alexander must like betting because he's going to make two more bets. How much money will he have left?" RESUME the video. PAUSE after he hands the money over to
Nicholas. Allow time for students to subtract the two quarters he has bet and lost and determine that he now has $3.75 left. Depending on your students' skills, you may have to talk them through
trading in a dollar for four quarters in order to pay the last debt. In the movie, he gave Nicholas a dollar and received 75¢ back. Ask the students what they think of Alexander's money
management skills so far.
Refocus them on the video with the question: "By the time his grandparents leave, how much money has Alexander spent or lost?" RESUME the video. PAUSE after the good-byes are said and the
grandparents get into their car. Ask the students to recall what happened to Alexander's money and make a list on the board: 75¢ to rent a snake, 50¢ for bad words, 5¢ down the toilet, 5¢ down a
crack in the porch floor, 50¢ for eating his brother's candy bar, 40¢ for the magic show, 25¢ for kicking, $1.25 at Kathy's yard sale. Allow time for the students to count out what has happened
with their play money and determine that Alexander has nothing left! Discuss the various strategies your students used to solve this problem. Some will have subtracted the money for each event.
Others might have added up all of the expenses and made one big subtraction.
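If it helps to check the arithmetic, the whole of Alexander's Sunday can be tallied in a few lines (working in cents to avoid rounding):

```python
# All of Alexander's Sunday, in cents: gum, the three lost bets, then the
# snake rental, fines, losses and purchases listed on the board.
expenses = [50, 25, 25, 25,             # gum and the three bets
            75, 50, 5, 5, 50, 40, 25, 125]

cents = 500                             # the $5.00 gift
for e in expenses:
    cents -= e

print(cents)  # 0 -- Alexander has nothing left
```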
Bring the students' attention back to the video by asking, "What does Alexander do to try to earn money?" RESUME the video. PAUSE after Alexander says, "Friendly's Market wasn't very friendly."
Ask the students to recall all the ways Alexander tried to earn money: pulling out a tooth, renting his toys, looking for stray change in pay phones, recycling. Ask the students how they try to
earn money? Is it easier to earn money or spend money or save money?
Ask, "As Alexander reviews how his $5 disappeared, which choices seem to bring him the most pleasure? Do you think he would make these choices again if he could relive last Sunday when he used to
be rich'?" RESUME the video. STOP when the credits start to roll. Allow time for the students to give their opinions on whether Alexander would do things differently if given another chance.
Say, "Remember in the video when Alexander's dad made him pay 50¢ for using bad language? Alexander tried to get out of it by saying he didn't have any change but his dad said he could make
change and took one of Alexander's dollars and gave him back a lot of coins that had a value of 50¢. I'd like you to work with your partner to find as many ways as you can that Alexander's dad
could have made 50¢. Make a list of all the ways you can find using quarters, dimes, nickels, and pennies."
Note to the teacher: There are 49 solutions to this problem. If that seems too overwhelming for your students, you can make the problem simpler by only allowing quarters, dimes and nickels (10
solutions). Or you could ask them to find as many ways as possible for making change for 25¢ with or without using pennies (12 solutions with pennies; 3 without pennies).
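The solution counts quoted above can be verified with a short recursive count (a standard coin-combinations enumeration, not part of the original lesson):

```python
def ways(total, coins):
    """Number of coin combinations (order ignored) summing to total cents."""
    if not coins:
        return 1 if total == 0 else 0
    c, rest = coins[0], coins[1:]
    return sum(ways(total - k * c, rest) for k in range(total // c + 1))

print(ways(50, (25, 10, 5, 1)))  # 49: change for 50 cents, all four coins
print(ways(50, (25, 10, 5)))     # 10: no pennies allowed
print(ways(25, (10, 5, 1)))      # 12: change for a quarter, with pennies
print(ways(25, (10, 5)))         # 3:  change for a quarter, no pennies
```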
As you lead the class discussion about the number of solutions, try to help the students articulate the strategies they used. Was it all trial-and-error or did some use an organized list? You can
model an organized list approach by posting the first half dozen responses as you receive them and then asking if they see any way to organize the list. One solution might be:
25¢ 10¢ 5¢ 1¢
This list starts off with the fewest number of coins needed to make 50¢ and then trades in one of the coins to find all of those combinations in descending order. Making an organized list helps
students see patterns and know when they have exhausted all of the possibilities. This makes a good problem for them to work on over the course of several days and then try to organize the list
as the culminating activity to verify that they have found all of the solutions. Many students will be surprised that there are so many ways to make change for 50¢.
If you have simplified the problem for younger students, you may want to have them cut paper coins out of paper and paste their solutions onto paper as a way of recording what they have already
done. The list idea may be too abstract for them.
Have a banker visit the class and discuss how banks can help people save their money.
Bring in a collection of coins from around the world. Locate the countries represented in an atlas or on a globe and discuss how their money is the same or different from our own. If you have
students from other countries in your class, maybe they or their parents could lead the discussion.
Math: How long would it take you to save a dollar if you saved 1¢ on the first day of the month, 2¢ on the second day, 4¢ on the third day, 8¢ on the fourth, always doubling the money from the
day before? Have the students make estimates and then find out. The answer that by the 7th day, they will have accumulated $1.27 will surprise them. How much will they save if they do this for 20
days? What patterns do they see in their lists of numbers?
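The doubling pattern is quick to tabulate:

```python
totals = {}
total = 0
for day in range(1, 21):
    total += 2 ** (day - 1)   # 1 cent, then 2, 4, 8, ... cents per day
    totals[day] = total

print(totals[7])    # 127 cents = $1.27 saved by the 7th day
print(totals[20])   # 1048575 cents = $10,485.75 after 20 days
```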
Math: The "Coin Count" activities from Group Solutions (Gems, Lawrence Hall of Science, 1992, ISBN 0-912511-81-8) involve cooperative group activities where students have to read clues, place
money in a cup and find out the total amount of money in the cup. This involves lots of critical thinking skills while giving additional practice working with money.
Math: Form small groups of four to six students. Each student writes his/her name on a piece of paper. Who has got the most expensive name in the group? Predict and then find out. A =1¢, B = 2¢,
C = 3¢,... Z = 26¢. After they make their predications, ask them to explain why they thought this. Some very good critical thinking comes out of this activity.
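A short helper makes it easy to check predictions (the names below are simply the characters from the story):

```python
def name_value(name):
    """A = 1 cent, B = 2 cents, ..., Z = 26 cents; non-letters ignored."""
    return sum(ord(ch) - ord("a") + 1 for ch in name.lower() if ch.isalpha())

for n in ["Alexander", "Anthony", "Nicholas"]:
    print(n, name_value(n), "cents")
```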
Math: In a similar vein to "Who has the most Expensive Name?" is The $1.00 Word Riddle Book by Marilyn Burns (Cuisenaire Co. of America, ISBN 0-941355-02-0) which gives riddles that have answers
that equal $1.00 using the A =1¢, B = 2¢, C = 3¢,... Z = 26¢ pattern.
Math: How long is a dollar bill? Measure the diameters of the coins.
Math: What is the value of a mile of pennies laid side-by-side?
(Note: For younger classes, you might want to use a shorter measurement-- maybe a yard or a meter.)
Science: Money fits in nicely with a unit on rocks and minerals. Silver and copper are minerals that are mined as ores and then made into alloys used in coins. Furthermore, minerals are made up
of crystals. Rocks, minerals, and crystals often form part of a fourth grade science study.
Science: Have students study money closely using a magnifying glass. What interesting things do they find?
Science/Math: Make a Venn diagram with three circles representing penny, nickel, dime, respectively. Analyze the coins and place a description of their characteristics in the proper parts of the
Venn diagram. This would be particularly appropriate after you have given the class sufficient time to study the coins under the magnifying glass.
Science: Use an eye dropper to find out how many drops of water will fit on a coin. Which coin holds the most? Does the heads side hold more, less, or the same as the tails side?
Science: Use balance scales to weigh various amounts of coins. Have students set out challenges for each other. For example, which weighs more - 10¢ in pennies or 50¢ in nickels. Predict and then
weigh them.
Science/Math: Would you rather have your height in a stack of nickels or a line of quarters laid side-by-side? How much money would you get each way? Make a graph of the class' height as measured
this way.
History: Whose pictures are on our American money? Who were these people?
History: America used to use gold for money. Study the California Gold Rush. Why did we stop using gold as money?
Language Arts: Children might enjoy comparing this video to the original book of Alexander Who Used to Be Rich Last Sunday by Judith Viorst (Aladdin Books, ISBN 0-689-71199-9). This book was
originally published in 1978. At the time, Alexander only got $1.00 from his grandparents and everything cost a great deal less! How are the book and video the same? How are they different?
Language Arts: The Go-Around Dollar by Barbara Johnston Adams (Simon & Shuster, ISBN 0-02-700031-1) is the story of a dollar bill as it travels from hand to hand. There are lots of neat facts
interwoven in the story, such as how long a bill stays in circulation, how much it weighs, how it is made.
Language Arts: Pigs Will Be Pigs by Amy Axelrod (Simon & Shuster, ISBN 0-02-765415-X) has pigs adding, subtracting, multiplying and dividing their money to satisfy their pig cravings for food.
Language Arts: "Smart" by Shel Silverstein is a poem about a young boy who thinks he is pretty clever when he swaps one dollar for two shiny quarters "'cause two is more than one!" It's all
downhill from there! Children enjoy pointing out the errors of the young boy's thinking.
Writing: If someone gave you $5.00 as a gift, what would you do with it?
Master Teacher: Linda Dodge
Lesson Plan Database
Thirteen Ed Online | {"url":"http://www.thirteen.org/edonline/nttidb/lessons/ma/fivedma.html","timestamp":"2014-04-16T07:43:29Z","content_type":null,"content_length":"19843","record_id":"<urn:uuid:1a689de0-a9eb-4cca-bdb7-22b0b02575df>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00599-ip-10-147-4-33.ec2.internal.warc.gz"} |
Notes on Bohr-Sommerfeld Quantization and the Classical Limit
Massachusetts Institute of Technology
Department of Physics
Physics 8.04 Wed Oct 11 20:51:26 EDT 1995
The basic idea behind semi-classical, or Bohr-Sommerfeld quantization, is that we may determine the allowed quantized states of a system by performing a classical analysis of the motion and then
insisting that, in order to avoid destructive interference, there are exactly an integral number of de Broglie wavelengths around the orbit or one period of the classical motion. Mathematically, if q is the coordinate of the particle (usually written "r" in three dimensions or "x" in one) and p is its momentum, then
∮ p dq = nh,   n = 1, 2, 3, …
Note that this is an approximate and not an exact condition because by looking at classical trajectories we are ignoring many details of the wave function.
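As an illustrative check not taken from the original notes, the condition can be applied numerically to a harmonic oscillator, where p(q) = sqrt(2m(E − ½mω²q²)) and the closed-orbit integral works out to 2πE/ω, so the quantized energies are E_n = nhν:

```python
import math

# Illustrative check (not from the original notes): harmonic oscillator
# with m = 1, omega = 2, trial energy E = 3, in arbitrary units.
m, w, E = 1.0, 2.0, 3.0
qmax = math.sqrt(2.0 * E / m) / w        # classical turning point

# Midpoint-rule integral of p(q) over one traversal, doubled for the loop.
N = 100000
dq = 2.0 * qmax / N
action = 0.0
for i in range(N):
    q = -qmax + (i + 0.5) * dq
    action += math.sqrt(max(0.0, 2.0 * m * (E - 0.5 * m * w * w * q * q))) * dq
action *= 2.0

expected = 2.0 * math.pi * E / w         # closed form: 2*pi*E/omega
print(action, expected)
```

Setting this action equal to nh then gives E = nhω/(2π) = nhν, the Planck result.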
Prof. Tomas Alberto Arias
Wed Oct 11 20:51:17 EDT 1995 | {"url":"http://muchomas.lassp.cornell.edu/8.04/Lecs/lec_bohr-sommerfeld/notes.html","timestamp":"2014-04-20T05:45:02Z","content_type":null,"content_length":"3713","record_id":"<urn:uuid:d504950c-7dbb-4c3f-9c3b-bfb6c64700a1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00484-ip-10-147-4-33.ec2.internal.warc.gz"} |
Studies in Logic and the Foundations of Mathematics, Vol. 81. Amsterdam etc.: North-Holland. X, 490 p. $ 90.00; Dfl. 250.00 (1987).
The first edition of this book (1975; Zbl 0354.02027) which appeared more than 10 years ago quickly became one of the standard sources in proof theory. One can often see in a research paper a
reference to details of proofs and constructions from the book. The general structure of the book is unchanged, but some additions are made reflecting new developments in proof theory. Let us list
main changes. Exposition of the completeness theorem for intuitionistic predicate calculus using Heyting algebras (and their relation to Kripke models) is added, although topoi are not mentioned at
all and only a couple of lines are devoted to Girard’s categorical constructions. The importance of first-order model theory is stressed by paying attention to the axiomatization of particular
Heyting algebras: a lot of place is devoted to the axiomatization of the segment [0,1]. The normalisation proof for arithmetic is slightly simplified. In chapter 2 even more attention than before is
devoted to provably recursive functions of arithmetic. A version of the Löb-Wainer hierarchy (in terms of Hardy functions) for $<\epsilon_0$-recursive functions is constructed using the detailed
structure of fundamental sequences for ordinals $<\epsilon_0$. Acquaintance with results of this type is useful both in the subsequent proof of Ketonen-Solovay bounds for Paris-Harrington’s version of
the Ramsey theorem independent of arithmetic and in the discussion of ordinal diagrams. Two other independence results for combinatorial statements more closely connected to ordinals are presented:
Kirby-Paris theorem on Goodstein sequences [but not their Hydra game] and Friedman’s version of Kruskal’s theorem. This additional material is included in place of the discussion of the interpolation
theorems for weak subsystems of higher order logic.
In chapter 4 (end of section 23) the general completeness theorem for infinitary logic is used to formulate a proof-theoretic equivalent of Borel determinacy and state a problem: to give a new proof
of this result of Martin by proving a corresponding cut elimination theorem.
One of the main contributions of the author was the ordinal analysis of subsystems of second-order arithmetic by means of special systems of ordinal notation which the author introduced and called
ordinal diagrams (o.d.). The proof of well-foundedness for o.d. is complicated both combinatorially and in proof-theoretic respect. In this edition this proof is changed. The schema of ordinal
analysis is also changed: instead of normalising derivations of existential theorems in subsystems of second-order arithmetic, derivations of arbitrary theorems in the corresponding system of
second-order logic are now normalised, and then the translation of arithmetic into logic is used. This makes the proof more elegant, but to obtain the best possible ordinal estimate the author has to
use the more involved treatment due to Arai dealing with arithmetic systems. Ordinal diagrams turn out to be convenient for the treatment of the generalised Kruskal’s theorem introduced by Friedman
including the proof of its independence from a system of $\Pi^1_1$-analysis.
An unusual feature is the appendix (about 100 pages) consisting of contributions of four authors (G. Kreisel, W. Pohlers, S. G. Simpson and S. Feferman) reflecting their view of proof theory and
partially compensating for the author’s reluctance to investigate connections of some of his notions (especially ordinal diagrams) with the notions used by other authors. The appendices help the
reader to form an even more complete picture of the modern state of proof theory. The 11-page postscript by the author lists some references used in the text and provides information on further
developments. This book is a very useful addition to the literature on proof theory.
03F05 Cut-elimination; normal-form theorems
03F15 Recursive ordinals; ordinal notations
03-02 Research monographs (mathematical logic)
03F35 Second- and higher-order arithmetic and fragments
03F30 First-order arithmetic and fragments
03F55 Intuitionistic mathematics
03B10 First-order logic
03B15 Higher-order logic; type theory
03B20 Subsystems of classical logic | {"url":"http://zbmath.org/?q=an%3A0609.03019","timestamp":"2014-04-17T06:57:16Z","content_type":null,"content_length":"25938","record_id":"<urn:uuid:d65aaefb-c250-46a3-a1db-869215e2cf44>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
HP COBOL
Reference Manual
7.43 TAN
The TAN function returns a numeric value that approximates the tangent of an angle or arc, expressed in radians, that is specified by the argument.
is a numeric or integer argument.
1. The type of this function is numeric.
2. The returned value is the approximate tangent of the angle specified.
COMPUTE TAN-RSLT = FUNCTION TAN (X).
X and TAN-RSLT are numeric data items. If the value of X is 3, the approximate tangent of an angle of 3 radians is moved to TAN-RSLT.
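For comparison, the same computation in Python (an illustration only, not HP COBOL syntax): the standard library's `math.tan` also expects the angle in radians.

```python
import math

tan_rslt = math.tan(3)      # angle of 3 radians
print(round(tan_rslt, 4))   # -0.1425
```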
7.44 TEST-DATE-YYYYMMDD
The TEST-DATE-YYYYMMDD function tests whether a standard date in the form YYYYMMDD is a valid date in the Gregorian calendar.
General Format
FUNCTION TEST-DATE-YYYYMMDD ( arg )
is an integer.
1. The type of this function is integer.
2. If the year is not within the range 1601 through 9999, the function returns a 1.
Otherwise, if the month is not within the range 1 through 12, the function returns a 2.
Otherwise, if the number of days is invalid for the given month, the function returns a 3.
Otherwise, the function returns a 0 to indicate the date is a valid date in the form YYYYMMDD.
IF FUNCTION TEST-DATE-YYYYMMDD (123456789) = 1
DISPLAY "correct - invalid year (12345)".
IF FUNCTION TEST-DATE-YYYYMMDD (19952020) = 2
DISPLAY "correct - invalid mm (20)".
IF FUNCTION TEST-DATE-YYYYMMDD (19950229) = 3
DISPLAY "correct - invalid dd (29)".
IF FUNCTION TEST-DATE-YYYYMMDD (20040229) = 0
DISPLAY "correct - valid YYYYMMDD".
7.45 TEST-DAY-YYYYDDD
The TEST-DAY-YYYYDDD function tests whether a Julian date in the form YYYYDDD is a valid date in the Gregorian calendar.
General Format
FUNCTION TEST-DAY-YYYYDDD ( arg )
is an integer.
1. The type of this function is integer.
2. If the year is not within the range 1601 through 9999, the function returns a 1.
Otherwise, if the number of days is invalid for the given year, the function returns a 2.
Otherwise, the function returns a 0 to indicate the date is a valid date in the form YYYYDDD.
IF FUNCTION TEST-DAY-YYYYDDD (12345678) = 1
DISPLAY "correct - invalid year (12345)".
IF FUNCTION TEST-DAY-YYYYDDD (1995366) = 2
DISPLAY "correct - invalid ddd (366)".
IF FUNCTION TEST-DAY-YYYYDDD (2004366) = 0
DISPLAY "correct - valid YYYYDDD".
7.46 UPPER-CASE
The UPPER-CASE function returns a character string that is the same length as the argument with each lowercase letter in the argument replaced by the corresponding uppercase letter.
is an alphabetic or alphanumeric argument at least one character in length.
1. The type of this function is alphanumeric.
2. The returned value is the same character string as the argument, except that each lowercase letter in the argument is replaced by the corresponding uppercase letter.
MOVE FUNCTION UPPER-CASE (STR) TO UC-STR.
If STR (an alphanumeric data item six characters in length) contains the value "Autumn" the value returned and stored in UC-STR (also an alphanumeric data item six characters in length) is
"AUTUMN"; if STR contains "FALL98" the value returned is unchanged ("FALL98").
ACCEPT NAME-FIELD.
WRITE RECORD-OUT
FROM FUNCTION UPPER-CASE(NAME-FIELD).
The value in NAME-FIELD is made all-uppercase, unless it was already all-uppercase, in which case it is unchanged. Any nonalphabetic characters remain unchanged.
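The same behaviour is easy to confirm in Python, whose `str.upper` likewise leaves non-alphabetic characters unchanged:

```python
for s in ("Autumn", "FALL98"):
    print(s.upper())   # AUTUMN, then FALL98 (unchanged)
```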
7.47 VARIANCE
The VARIANCE function returns a numeric value that approximates the variance of its arguments.
is an integer or numeric argument.
1. The type of this function is numeric.
2. The returned value is the approximation of the variance of the argument series, and is defined as the square of the standard deviation of the argument series. (For a definition of standard deviation, see the description of the STANDARD-DEVIATION function in Section 7.41.)
3. If the argument series consists of only one value, the returned value is 0.
COMPUTE RSULT = FUNCTION VARIANCE (A).
The value returned and stored in RSULT is 0, because there is only one argument.
COMPUTE RSULT = FUNCTION VARIANCE (A, B, C).
If A has the value 1, B has 2, and C has 12, the value returned and stored in RSULT is approximately 24.667. This represents the variance, which is the square of the standard deviation of these arguments; the calculation is described for the STANDARD-DEVIATION function in Section 7.41. In the above examples, A, B, C, and RSULT are numeric data items.
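The value 24.667 is the population variance (squared deviations averaged over n, not n − 1). Python's `statistics.pvariance` computes the same quantity, which makes a quick cross-check possible:

```python
import statistics

print(statistics.pvariance([1]))         # 0.0   (a single value, as in rule 3)
print(statistics.pvariance([1, 2, 12]))  # 74/3, about 24.667
```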
7.48 WHEN-COMPILED
The WHEN-COMPILED function returns the date and time the program was compiled.
1. The type of this function is alphanumeric.
2. The returned value is the date and time of compilation of the source program that contains this function. If the program is a contained program, the returned value is the compilation date and
time associated with the separately compiled program in which it is contained.
3. The returned value denotes the same time as the compilation date and time provided in the listing of the source program and in the generated object code for the source program. The representation
differs, and the precision can differ, as shown in the second example.
4. The contents of the character positions returned, numbered from left to right, are as follows:
┃Character Positions│ Contents ┃
┃1-4 │Four numeric digits of the year in the Gregorian calendar. ┃
┃5-6 │Two numeric digits of the month of the year, in the range 01 through 12. ┃
┃7-8 │Two numeric digits of the day of the month, in the range 01 through 31. ┃
┃9-10 │Two numeric digits of the hours past midnight, in the range 00 through 23. ┃
┃11-12 │Two numeric digits of the minutes past the hour, in the range 00 through 59. ┃
┃13-14 │Two numeric digits of the seconds past the minute, in the range 00 through 59. ┃
┃15-16 │Two numeric digits of the hundredths of a second past the second, in the range 00 through 99. ┃
┃17-21 │The value 00000. (Reserved for future use.) ┃
MOVE FUNCTION WHEN-COMPILED TO VERSION-STAMP.
The value returned and stored in VERSION-STAMP (an alphanumeric data item) is the date and time of the program's compilation.
199701101652313200000
This is a sample value returned by the WHEN-COMPILED function. Reading from left to right, it shows
• The year, 1997
• The month, January
• The day of the month, the 10th
• The time of day, 16:52 (4:52 P.M.)
• The seconds, 31, and the hundredths
of seconds, 32, after 16:52:31
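As an illustration only (not the compiler's code), the 21-character layout can be reproduced in Python from the example values above; the helper name is ours.

```python
from datetime import datetime

def when_compiled_layout(ts):
    """Format a timestamp in the 21-character WHEN-COMPILED layout:
    YYYYMMDDhhmmss + hundredths of a second + '00000' (reserved)."""
    hundredths = ts.microsecond // 10000
    return ts.strftime("%Y%m%d%H%M%S") + f"{hundredths:02d}" + "00000"

stamp = when_compiled_layout(datetime(1997, 1, 10, 16, 52, 31, 320000))
print(stamp)        # 199701101652313200000
print(len(stamp))   # 21
```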
The compiler listing shows this same compilation date and time, but without the hundredths of seconds.
General Education Requirements
The Department offers eight courses that fulfill the Scientific Inquiry Requirement of the General Education Requirements, PHYS 115, 120, 125, 230, 235, 345, 425 and 435; five courses that fulfill
the Quantitative Reasoning component of the Analytical Reasoning Requirement, PHYS 120, 125, 230, 235 and 360; and six courses that fulfill the Abstract Reasoning component of that requirement, PHYS
115,120, 125, 230, 235 and 360. The Department also offers occasional Earlham Seminars.
Planning Ahead
To maintain flexibility in their schedules, students who plan to major in Physics should consider beginning the introductory sequence in their first year. For students who have not previously taken
calculus, this may require that they take MATH 180 during the fall of their first year. (It is possible to major in Physics beginning in the Sophomore year, but scheduling is then rather crowded.) It
is important that students plan their programs early, after careful consultation with their academic advisers about career aims, to maximize their opportunities for off-campus study or for completing
a minor in addition to their Physics Major.
Physicists or astronomers with a doctoral degree can do research in a field of their own choice — working in industrial, academic or government laboratories. Some industrial or government
laboratories employ physicists or astronomers with a B.S. or M.S. degree in assisting capacities, and some of these help their employees in working toward higher degrees. Earlham's Physics Department
supplies information to students about career opportunities and currently active fields of specialization. Students who are preparing for doctoral graduate work in physics should plan to take PHYS
350, 355, 360, 375, 425, 435, 445, 485 and 488, in addition to MATH 180, 280, 310, 320, 350 and CS 128.
Students planning careers as high school physics teachers should plan their programs carefully in consultation with both the Education and Physics faculty. In their course of study, they should
include the introductory sequence and courses selected from PHYS 350, 355, 360, 375, 415, 425 and 445, and the necessary courses in Education.
The Major
• PHYS 125 Analytical Physics I: Mechanics
• PHYS 235 Analytical Physics II: Electricity and Magnetism, Optics and Waves
• PHYS 345 Introduction to Modern Physics
• PHYS 355 Advanced Physics Laboratory
• PHYS 375 Thermal Physics OR
PHYS 445 Introduction to Quantum Mechanics
• PHYS 480 Senior Seminar
• PHYS 488 Senior Capstone Experience
• Two courses (or more if necessary for a total of at least 6 credits) from other Physics courses numbered 300–480. Courses between 481 and 487 may be counted toward the Major with permission from the Department.
And these Mathematics courses:
• MATH 180 Calculus A
• MATH 280 Calculus B
• MATH 320 Differential Equations
• MATH 350 Multivariate Calculus
The Minor
• PHYS 125 Analytical Physics I: Mechanics
• PHYS 235 Analytical Physics II: Electricity and Magnetism, Optics and Waves
• PHYS 345 Introduction to Modern Physics
• One other Physics course numbered 300 or above
• MATH 180 Calculus A
• MATH 280 Calculus B
Professional Option Program: Engineering
Earlham's 3-2 Pre-Engineering Option provides a wonderful opportunity for students considering a career in engineering who want the experience of a broad, liberal arts education that is seldom
available to students in engineering schools. By combining three years at Earlham with two years at an engineering school, students can emphasize the liberal arts as well as the technical aspects of
their education.
The Earlham Pre-Engineering Program permits a student to complete the B.A. degree requirements at Earlham and the engineering requirements at a professional engineering school with the aim of
becoming a practicing engineer in industry, government or at a university. Typically this type of program involves three years at Earlham studying fundamental science and the liberal arts, followed
by two years of specialization at an affiliated engineering school. At the end of those five years, the student receives two degrees: a B.A. from Earlham in pre-engineering studies, and a B.S. from
the engineering program. For more information about this opportunity, contact the 3-2 Faculty Liaison in the Physics Department.
Pre-Engineering requirements in the sciences depend on the engineering program to which the student transfers, but most programs have requirements such as these:
• One year of Physics (PHYS 125, 235)
• One year of Chemistry (usually CHEM 111, 331)
• Mathematics through Differential Equations and Multivariate Calculus (MATH 180, 280, 320 and 350)
• One semester of Computer Programming (CS 128)
Some programs include additional courses such as economics (required by Columbia) or additional courses in biology, chemistry or electronics (for students with particular interests such as biomedical
or electrical engineering).
* Key
Courses that fulfill
General Education Requirements:
• (A-AP) = Arts - Applied
• (A-TH) = Arts - Theoretical/Historical
• (A-AR) = Analytical - Abstract Reasoning
• (A-QR) = Analytical - Quantitative
• (D-D) = Diversity - Domestic
• (D-I) = Diversity - International
• (D-L) = Diversity - Language
• (ES) = Earlham Seminar
• (SI) = Scientific Inquiry
• (W) = Wellness
• (WI) = Writing Intensive
• (AY) = Offered in Alternative Year
Key to Course Numbering
Courses numbered in the 100s and 200s are aimed at first- and second-year students; courses numbered in the 300s and 400s are upper-level.
The second digit of the course number for Physics courses specifies its subfield within the discipline. Courses numbered #0# are courses of general interest; #1# courses (for example PHYS 115 or 415)
are in astronomy; #2#, in mechanics; #3#, in electro-magnetism; #4#, in modern physics; #5#, laboratory-focused courses; #6#, mathematical physics; and #7#, thermal physics.
*PHYS 115 ENCOUNTERS WITH THE COSMOS (4 credits)
Explore and discover the origin and evolution of the expanding universe that surrounds us, and the processes that created the “star dust” of which we are composed. Find out what really happens when
you travel into a black hole and hear the latest discoveries from the Mars Rover! This course provides a descriptive study of the origin and evolution of the universe and the nature of the solar
system, the stars and galactic systems. Lab. (A-QR, SI)
*PHYS 120 GENERAL PHYSICS I: MATTER In MOTION (4 credits)
How can we understand the complexities of motion? What determines the arc of a basketball free-throw, or how can we model blood pressure in the humans? This course develops concepts of force,
momentum and energy and applies them to a variety of phenomena ranging from the motion of elementary particles to the motion of the planets. High school algebra and trigonometry are used. Lab. (A-AR,
A-QR, SI)
*PHYS 125 ANALYTICAL PHYSICS I: MECHANICS (5 credits)
What dictates the complexities of motion? How can we use physics to understand energy issues, or control the path of a probe launched to rendezvous with Mars? This course develops concepts of force,
momentum, energy and heat, and applies them to a variety of phenomena ranging from the motion of elementary particles to the motion of the planets. Lab. Co-requisite: MATH 180 or background in
Calculus. (A-AR, A-QR, SI)
*PHYS 150 EARLHAM SEMINAR (4 credits)
Offered for first-year students. Topics vary. (ES)
*PHYS 230 GENERAL PHYSICS II: INTANGIBLE PHYSICS — ELECTRICITY, MAGNETISM AND OPTICS (4 credits)
You can change the direction of a baseball’s motion by hitting it, but how do you curve light’s motion to form the image on your retina? How can you move a beam of electrons without touching them?
This course extends concepts like force and energy to realms that we cannot experience by touch. This course investigates the nature of electrostatics, electrical currents, magnetism, waves and
optics, as well as a few concepts from modern Physics. Lab. Prerequisite: PHYS 120. (A-AR, A-QR, SI)
*PHYS 235 ANALYTICAL PHYSICS II: ELECTROMAGNETISM AND WAVES (5 credits)
How is electricity created or lightning modeled? What is the fundamental nature of light? How can we use mirrors to create three-dimensional images? In this course, electrostatics, electromagnetism,
electric and magnetic fields, waves and optics are treated using analytical techniques of calculus and vector analysis. Lab. Prerequisite: PHYS 125. Co-requisite: MATH 280. (A-AR, A-QR, SI)
*PHYS 345 CHALLENGING CONCEPTS OF MODERN PHYSICS (4 credits)
Few ideas stretch the imagination or challenge the intuition as much as Relativity and Quantum Mechanics. In this course, you’ll investigate special relativity, quantum Physics, atomic and nuclear
Physics with elementary classical Physics as a foundation. In the study of special relativity, students will reason through the implications of Einstein’s postulate and find how the predictions of
his theory can be put to experimental tests. Elementary quantum mechanics, on the other hand, will show how scientists have sometimes had to change their conceptual framework when confronted with
phenomena that cannot fit into an earlier paradigm. Lab. Prerequisites: MATH 280 and PHYS 235. (SI)
PHYS 350 ELECTRONICS AND INSTRUMENTATION (3 credits)
This is a laboratory-oriented course dealing with analog and digital circuits. Circuit theory is developed for diodes, transistors, operational amplifiers and integrated circuits. These components
are used to construct a range of devices, including power supplies, oscillators, amplifiers and logic circuits. Laboratory work will allow students to gain an operational understanding of these basic
concepts. Skills debugging, circuit building, and reading circuit diagrams will be stressed. Lab. Prerequisites: PHYS 230 or 235. Also listed as CS 350. (AY)
PHYS 355 ADVANCED PHYSICS LABORATORY (3 credits)
Explores experimental techniques, such as programming and machining, associated with advanced undergraduate physics courses. Studying a wide range of physical phenomena, students will be exposed to a
wide variety of experimental techniques. Emphasizing individual initiative and deep investigation, students will be able to direct their work to areas or questions of particular interest. Students
develop skills in communicating scientific results in journal article format as well as through oral and poster presentations. Lab. Prerequisite: PHYS 345.
*PHYS 360 MATHEMATICAL PHYSICS (3 credits)
Applies mathematical techniques to the study of physical systems. Examines topics such as vector analysis, complex variables, Fourier series and boundary value problems. These topics are studied in
the context of modeling and understanding physical systems. Students will see how individual techniques, once developed, can be applied to very broad classes of problems. This course develops skills
in communicating scientific results in written form as well as in an oral presentation. Prerequisites: MATH 320 and 350. Also listed as MATH 360. (A-AR, A-QR)
PHYS 375 THERMAL PHYSICS (3 credits)
Examines basic concepts of thermodynamics such as internal energy, heat, work, temperature, reversibility and entropy. This course shows how the application of a few basic concepts from probability
and statistics can elucidate a wide range of phenomena such as the kinetic theory of gases, osmotic pressure and changes in equilibrium states caused by variations in pressure or temperature. Quantum
applications include Planck's theory of blackbody radiation and statistics for identical particles. Prerequisites: PHYS 345 and MATH 280. (AY)
*PHYS 425 ANALYTICAL MECHANICS (3 credits)
Examines statics and dynamics of particles, rigid bodies and continuous media, along with Lagrangian mechanics and normal coordinates. Students will extend their ability to analyze mechanical systems
through math techniques such as differential equations, Fourier series, and solutions to systems of linear equations. Approximation techniques are introduced for dealing with systems for which no
analytical solution is possible. Prerequisites: PHYS 235, MATH 320 and MATH 350. (SI) (AY)
*PHYS 435 CLASSICAL ELECTRICITY AND MAGNETISM (3 credits)
The development and application of electromagnetic field theory. This course covers material from PHYS 235 in greater detail, deepening the level of application of mathematical approaches that are
useful in a wide range of Physics subjects, such as divergence, curl and Fourier techniques. The core of the course, Maxwell’s equations, expresses the fundamental interrelationship between electric
and magnetic phenomena, as well as radiation theory and an understanding of behavior of light. Prerequisites: PHYS 235, MATH 320 and MATH 350. (SI) (AY)
PHYS 445 INTRODUCTORY QUANTUM MECHANICS (3 credits)
An introduction to the techniques, problems and interpretation of quantum mechanics. The quantum conditions, Schroedinger's equation and other formulations are applied to the rectangular potential
well, the harmonic oscillator and the hydrogen atom. Also considers perturbation theory, identical particles and multiparticle systems. Students will gain familiarity with quantum systems, and the
implications of quantum theory. Mathematical skills such as integrating Gaussian functions and partial differential equations will be developed. Prerequisites: PHYS 345, MATH 320, MATH 350 or MATH
360. (AY)
PHYS 480 SENIOR SEMINAR (3 credits)
Students and faculty meet to discuss topics of current interest in physics. These topics focus either on some area of Physics or on an area in which Physics overlaps with other disciplines. Recent
topics have included Cosmology and General Relativity, Solid-State Physics, and Computational Models in Biophysics. PHYS 480 may be counted toward the major more than once if the topic of the course
is different. Prerequisite: PHYS 375 or PHYS 445, or permission of instructor.
PHYS 481 INTERNSHIPS, FIELD STUDIES AND OTHER FIELD EXPERIENCES (1-3 credits)
PHYS 482 SPECIAL TOPICS (3 credits)
Selected topics determined by the instructor for upper-level study.
PHYS 483 TEACHING ASSISTANTS (1-3 credits)
PHYS 484 FORD/KNIGHT RESEARCH PROJECT (1-4 credits)
Collaborative research with faculty funded by the Ford/Knight Program.
PHYS 485 INDEPENDENT STUDY (1-3 credits)
An investigation of a specific problem or topic.
PHYS 486 PHYSICS RESEARCH (1-3 credits)
Qualified students engage in independent research under the direction of a faculty supervisor. The research is typically part of ongoing research projects which in recent years have included a study
of X-ray emission from active galactic nuclei, vibrational modes of drumheads and the physics of ultrasound. Offered by special arrangement.
PHYS 488 SENIOR CAPSTONE EXPERIENCE (0 credits)
Majors must successfully complete comprehensive examinations during the Senior year. Offered both semesters. | {"url":"http://www.earlham.edu/physics-and-astronomy/the-program/","timestamp":"2014-04-19T09:45:22Z","content_type":null,"content_length":"46947","record_id":"<urn:uuid:6b0bd2cc-eaa9-45cd-a8db-39f6c61c3707>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00447-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vienna, VA Algebra Tutor
Find a Vienna, VA Algebra Tutor
...I customize my techniques based upon what type of student I am tutoring (visual, verbal, etc...) and student preferences (what has worked for the individual in the past, what methods their
teachers use in the classroom, etc...) I am open to tutoring students for daily coursework or standardized ...
33 Subjects: including algebra 1, algebra 2, reading, English
...I am a Professional Geologist in the Commonwealth of Virginia. I know geology very well and have published professional papers in the subject. I am a doctoral candidate in Environmental
Science and Policy at George Mason University.
10 Subjects: including algebra 1, algebra 2, statistics, Microsoft Excel
...My graduate work was completed at the University of Maryland College Park, where I specialized in international development and quantitative analysis. I currently work as a professional
economist. Though I am located in Arlington, Virginia, I am happy to travel to meet students, particularly to...
16 Subjects: including algebra 1, algebra 2, calculus, statistics
...I can help you plan out your high school career if you are just beginning or help you tackle the college application process. I have been working with students for several years at both the
high school and college level. I have excelled in my own AP and college level courses, and I know how to teach you what you need to know to perform in the classroom.
56 Subjects: including algebra 1, algebra 2, chemistry, reading
Biology is my passion! I have an advanced degree in Molecular and Microbiology with over 5 years experience in a lab setting. I had taken a few anatomy courses during my undergrad degree and
taught labs for a pre-med anatomy class for 2 years as a graduate student.
13 Subjects: including algebra 1, algebra 2, geometry, grammar
Braingle: 'Darts' Brain Teaser
Probability puzzles require you to weigh all the possibilities and pick the most likely outcome.
Puzzle ID: #144
Category: Probability
Submitted By: duckrocket
Corrected By: sugarnspice4u7
Peter throws two darts at a dartboard, aiming for the center. The second dart lands farther from the center than the first. If Peter now throws another dart at the board, aiming for the center, what
is the probability that this third throw is also worse (i.e., farther from the center) than his first? Assume Peter`s skillfulness is constant.
Since the three darts are thrown independently, they each have a 1/3 chance of being the best throw. As long as the third dart is not the best throw, it will be worse than the first dart. Therefore
the answer is 2/3.
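One way to convince yourself is a quick Monte Carlo sketch. Uniform distances are only an illustrative assumption — any tie-free distribution of distances gives the same answer, since only the ranking of the three darts matters.

```python
import random

random.seed(1)                 # seeded only for reproducibility
kept = worse = 0
for _ in range(200_000):
    d1 = random.random()       # distance of first dart from the center
    d2 = random.random()       # second dart
    d3 = random.random()       # third dart
    if d2 > d1:                # keep only trials matching the puzzle setup
        kept += 1
        worse += d3 > d1       # third dart also farther than the first?
print(worse / kept)            # close to 2/3, about 0.667
```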
(user deleted) I could agree with the solution if all 3 darts were thrown at the same time, or if any results were unknown until all three darts had been thrown independently (or for that matter,
Dec 31, 2001 any scenario where the question being asked is "Three darts are thrown at a dartboard, what are the chances that any particular dart will not be the closest to the center?".
However, since you are running 2 independent tests, you are actually asking a different question twice (will the dart currently being thrown be closer than the 1st dart thrown?) the
odds will be 50-50 for every dart thrown. What if you threw 100 darts, or 1000? Would your odds really increase with every dart? What if you removed the 2nd dart from the board
after it was thrown? Would it decrease your odds of being farther from the center?
vvega The question and explanation are a little out of synch. The answer is correct only before any darts are thrown.
Mar 06, 2002
It is true that if we are about to throw N darts independently, then each dart has a future probability 1/N of being closest to the center, and a (N-1)/N probability of not being
closest. But this is only true before we throw any darts.
However, in the question, exactly as it is phrased, we already know the relative positions of the first two darts. The question states that the second dart is NOT the closest.
Therefore the probability that the second dart is the closest is clearly zero, not 1/3 as given in the explanation.
So I suggest that the correct answer to the question, as stated, is 1/2 not 1/3. Because there are only two darts that could potentially end up being closest – the 1st or the 3rd.
This holds true for any number of darts if we wait until just before we throw the last dart to ask our question. Suppose N=1000 and we have already thrown 999 of them, then we
already know that 998 of the thrown darts now have zero probability of being closest. With this prior knowledge, we can state that the final dart has a 50% chance of being closest,
since there are now only two possible outcomes – either the last dart will hit closer than the current closest, or it won’t.
cathalmccabe I like vega's thinking, but there is not enough info. 2/3 is definitely wrong though, 50/50 is closer
Mar 08, 2002
bighippo4 I disagree with the solution. There is no way to predict if a shot will be better than the first two.
Apr 10, 2002
cathalmccabe If you don't know "how good" the other ones were all you can say is 50/50 the last being better
Apr 17, 2002 If you know how good the others are, eg bulls eye, 0 chance of improving.
If the other two went miles away =~ 1 of improving
Bender Duck is 100% correct. The information that the second dart is farther from the bullseye than the first is irrelevent. The question could be rephrased: what's the probablity the
Jun 12, 2002 third dart won't be the closest yet? The answer to this question is (N-1)/N for any number, N, of darts thrown.
dewtell Duckrocket's solution is correct, provided that there is no chance
Jul 23, 2002 of a tie between the different darts. If there is a chance of a tie,
that will reduce the probability a bit, since the question asks for
the probability that the third dart is further than the first.
The really neat thing about this puzzle is that it works for any probability
distribution function where ties have probability zero (most continuous density
functions without singularities should satisfy this), no matter what the
shape of the underlying probability function is.
The information that the second dart is further than the first *does*
change the probability distribution of the first dart. To see this,
suppose that we were rolling fair dice instead of throwing darts.
If I tell you that the second die was higher than the first, the
probability that the first die is now showing a "1" is five times
that of the first die showing a "5" (you are looking at
five combinations, 1-2, 1-3, 1-4, 1-5, and 1-6, vs. 5-6),
and the probability of the first die showing a "6" is now zero.
Before I told you that additional information, all sides
were equally probable for the first die.
If you do this problem with dice instead of darts, you won't get
exactly 2/3rds, because of the chance of a tie, but as you
increase the sides of each die towards infinity, the limit
of the probability should be 2/3rds.
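The dice analogy above can be checked exactly by brute-force enumeration (a Python sketch using exact fractions): with six-sided dice the conditional probability that the third roll exceeds the first, given that the second did, is 11/18 ≈ 0.611 — not quite 2/3, because ties are possible — and the conditional distribution of the first die is skewed exactly as described.

```python
from fractions import Fraction
from itertools import product

SIDES = range(1, 7)

# Given "second die > first die", the first die is 5x as likely to show 1 as 5.
pairs = [(a, b) for a, b in product(SIDES, repeat=2) if b > a]
ones  = sum(1 for a, _ in pairs if a == 1)   # 5 favourable pairs
fives = sum(1 for a, _ in pairs if a == 5)   # 1 favourable pair
assert ones == 5 * fives

# Exact conditional probability that the third roll exceeds the first,
# given that the second roll exceeded the first:
triples = [t for t in product(SIDES, repeat=3) if t[1] > t[0]]
p = Fraction(sum(1 for a, _, c in triples if c > a), len(triples))
print(p)   # 11/18, which tends to 2/3 as the number of sides grows
```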
Bender The information that the second dart is farther off the mark than the first is irrelevant, as long as the question is: 'what's the probability that the third dart won't be the
Jul 29, 2002 closest yet?'. The probability would be unchanged if instead the second dart was closer than the first, and the question was to find the probability that the third dart is farther off
the mark than the second. Of course the statistics of the location of the first dart are influenced by the information that it's closer than the second dart. But the question does
not ask about the location of the first dart. As for the probability of a tie, that is clearly zero. The distance of a dart from the center of the target is a continuous variable,
not discrete like the faces of a die. For more discussion of this problem, see the probability section at rec-puzzles.org, where Duck probably got it.
dewtell The probability of a tie doesn't have to be zero, despite it being a (mostly) continuous variable.
Jul 31, 2002 For example, you could have a model where the probability of being immediately next to a boundary stripe
is non-zero, because any hit on the boundary stripe is deflected to the closest point of the target next to the stripe.
As for the relevance of the second dart, it becomes irrelevant once you paraphrase the question as you suggest.
But it is clearly relevant to the original question, which asked for the probability that the third dart is
farther than the first. Just disregarding the second dart without accounting for its effect on the first dart's probability distribution
leads to confusion such as shown in vvega's comment. You can't disregard the second dart without either explicitly or implicitly acknowledging that
the first dart's distribution has changed with the additional information.
Glenbo It says, "Assume Peter's skilfulness is constant." Then wouldn't the 3rd dart be the worst?
Nov 21, 2002
dewtell I read "Assume Peter's skillfulness is constant"
Nov 22, 2002 to mean "Assume that Peter throws each dart
with the same probability distribution function."
So if he has (say) a 10% probability of hitting the
bullseye with the first dart, he would also have
a 10% probability of hitting the bullseye with the third
dart. Likewise for any other area on the dart board.
jimbo I think there are some before and after misinterpretations. Before he throws the darts, if you ask the question which of the three throws is going to produce the closest dart, the
Mar 14, 2003 1st, 2nd or 3rd throw, then they are all equally likely. After he has thrown one dart, you are now in possession of new information. If he is an expert dart player who rarely misses
and his first dart barely makes the board, then clearly the next dart is not equally likely to be further away. I'd like to be at a tournament where the world darts champion has just
thrown the worst shot of his career and have some of these theoreticians tell me the next dart is 50/50 or 33% etc. likely to be worse. I'll take a $100,000 bet at these odds if you
are willing to put up the money. Unless you know something about the skill of the player and the average distribution of shots, you cannot make such predictions using this kind of
Mathematical probability which is based on counting the number of equally likely cases!
darthforman ??????
Apr 25, 2005
stephiesd I think you people have too much time on your hands.
Dec 09, 2005
musicmaker21113 I also think that the 2/3 solution can't be right. After all, you do need more information. For example, say the first dart landed right by the bull's eye, with just enough room for
Jan 09, 2006 one additional dart to fit on the exact center of the bull's eye. In order to fit the dart on the bulls-eye, you would need perfect accuracy to hit that tiny target. Since the first
dart did not hit, and the second dart was worse, the statement that the thrower throws with a constant degree of accuracy demonstrates that the thrower does not have perfect
accuracy. The third dart might land anywhere within a circle (or more likely an ellipse) defined by the thrower's degree of accuracy. Just looking at the surface area available to
hit would say that if the thrower is not perfectly accurate, the odds of the third dart landing in the tiny space inside the first dart is extremely low, while the odds of it
landing outside is much larger. The probability of making a better throw would then be the area of the circle of probability defined by his level of accuracy divided into the area
of the circle within the location of the first dart. In any event, it is not as simple or straightforward as presented, and more information is needed to answer the question.
blackmarket69 ILL miss YOUR dartboard
Feb 23, 2006
garul Nice problem and beautifully explained by dewtell.
Mar 24, 2006 Can't understand how the hint is supposed to help in this case?
brainjuice dunt undrstand even a lil bit!!
Mar 31, 2006
petraka i got it, but i dont get the HINT.
Apr 25, 2006
ellephat There is no correct answer.
May 13, 2006
Say the first dart was a bullseye. Then the third dart will automatically be farther from the center. But if the second dart hits the very edge of the board, and the first dart was
just barely closer, then the third dart will have a great chance to be the closest. So the answer really depends on where the first dart is specifically.
josty ? i just did not get it, it was a tough one
Aug 15, 2006
ciotog The answer to this teaser is clearly wrong, why is it allowed to remain?
Oct 25, 2006
Apr 06, 2007
btw, how did you make "wrong" bold?
Pixit THE ANSWER IS NOT WRONG!
Aug 12, 2008
Throwing three darts you can have these possible outcomes: 123, 132, 213, 231, 312, 321.
(the numbers are which dart comes closest to center, in the middle, and furthest away. So for example 213 means the second dart was best and the third dart was worst)
We know that the second dart lands farther away than the first, so we only have these left: 123, 132, 312.
Out of these there is a 2/3 chance the third dart will be worse than the first one. QED
The information that the second dart is farther from the bullseye than the first is relevant, contrary to what many of you think. Imagine we had 1000000 darts. If the first dart is better
than the next 999998 darts, how likely is it then that the last dart will be better than the first? Obviously in this case the first one was really good, probably a bull's eye, so it's
very, very unlikely we can beat it with the last one.
I have also run a computer simulation on this and it confirms the answer 2/3.
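A simulation like the one Pixit mentions is easy to reproduce. The sketch below is our own illustration, not Pixit's code: it models each throw as a uniformly random point on a unit disc, an assumption made only for concreteness, since the 2/3 answer holds for any continuous distribution shared by all three darts.

```python
import random

def throw():
    """Squared distance from the bullseye of one dart, modeled as a
    uniform point on the unit disc (rejection sampling)."""
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1.0:
            return x * x + y * y

random.seed(0)
hits = trials = 0
while trials < 100_000:
    d1, d2, d3 = throw(), throw(), throw()
    if d2 > d1:            # condition: second dart farther than the first
        trials += 1
        hits += d3 > d1    # event: third dart also farther than the first

print(hits / trials)       # comes out close to 2/3
```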
Zag24 You know you've got a great probability problem when more than half the people posting claim you have the wrong answer.
Feb 20, 2009
BTW, as Pixit explained quite clearly, the author is correct. Everyone who disagrees with them is full of baloney.
With these sorts of puzzles, if you start with a bunch of cases that have equal probability, and then you get some additional information that eliminates some of the cases, then the
remaining cases still have equal probability. (Of course, you do have to define your cases correctly so that the new information doesn't re-weight them. But that isn't hard.)
willwang123 The solution is correct.
Jun 30, 2011
Let the darts be called A, B, and C. Let us throw A, B, and C in that order. We have two possibilities (where the order is how close the dart was to the center):
What most of you don't understand is that, given that dart A is closer than dart B, it is more likely that dart A is the best throw.
spikethru4 Let's suppose that dart A landed exactly in the centre of the bullseye and dart B landed on the edge of the dartboard. Are you seriously trying to say that the events ABC, ACB and
Jul 29, 2013 CAB are all equally likely?
The answer depends upon the proximity of dart A, which is known at the time of throwing dart C. Assuming that Peter does actually hit the dartboard with his last dart, the answer is
1-d/r, where d is the distance of dart A from the centre and r the radius of the board. If we allow for the possibility that Peter misses the dartboard with equal probability, then
we have a problem, as r becomes infinite. | {"url":"http://www.braingle.com/brainteasers/teaser.php?op=2&id=144&comm=1","timestamp":"2014-04-19T04:39:51Z","content_type":null,"content_length":"52362","record_id":"<urn:uuid:8d8b623b-a9c5-46f4-97e2-5eb7e65adb48>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00188-ip-10-147-4-33.ec2.internal.warc.gz"} |
Newton's Laws of Motion
1. An object in motion will remain in motion unless acted on by an external force.
2. The acceleration of an object is directly proportional to the sum of the forces acting on the object, and inversely proportional to the mass of the object.
3. For every force, there is an equal and opposite force.
Newton's first law states that unless there are forces present, an object will keep moving the way it is currently moving. It will not change direction, and it will not slow down.
Discussion question - Does this make sense? Assume for a second that Newton was right. Gently toss a ball back and forth between yourself and another student, or between your two hands. Does this
agree with Newton's first law? Slide a book gently across a table. Does it slow down? Does this agree with Newton's first law? If Newton was right, what can you say about these two examples?
Newton's second law states that more massive objects take more of a push to get moving, and that the harder you push, the more movement you will get. This can be written mathematically as
a = F/m.
Sometimes this is rearranged and written as
F = ma.
(Note that F and a are both vectors. Not only is the acceleration proportional to the net force, it is also in the same direction as the net force.)
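A quick numeric illustration of the second law (the force and masses below are made up for the example):

```python
def acceleration(force_newtons, mass_kg):
    # Newton's second law, a = F/m.
    return force_newtons / mass_kg

# The same 10 N net force applied to two different masses:
print(acceleration(10.0, 2.0))   # 5.0 m/s^2 for a 2 kg object
print(acceleration(10.0, 10.0))  # 1.0 m/s^2 for a 10 kg object
```

More massive objects accelerate less under the same force, just as the law says.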
Discussion question - The gravitational pull of the Earth is proportional to the product of the mass of the Earth and the mass of the object that is being pulled by the Earth. If the acceleration of
an object pulled by that gravity is inversely proportional to the object's mass, what does that say about the acceleration due to gravity of any two objects of different mass near the Earth's surface?
Newton's third law states that when you push on something it pushes back.
Discussion question - Lean on a sturdy wall nearby. Note that when you lean on the wall, you are pushing on the wall. What would happen if the wall did not also push back on you?
©1994-2014 Shodor
1 btu equals watts
How much btu equals 1 watts? A watt is a measure of power, while btu is a measure of energy. Power is measured in watts or btu per hour. Energy is measured in btu or ...http://wiki.answers.com/Q/
There are 0.2931 watts in 1 BTU per hour. While a watt is a measure of power, a BTU (British Thermal Unit) is a traditional unit of energy, where 1 BTU is equal to the ...http://www.ask.com/question/
How many btu equals 1 calorie? 1 calorie = ~0.004 BTU. How many BTU in a 1 HP air conditioning system? HOW MANY BTU IN A 1 HP 2545 btu = 1 HPhttp://wiki.answers.com/Q/How_many_btu_equals_1_hp
How to Calculate BTU Output From Watts. BTUs, or British thermal units, is a measurement of energy. It measures the amount of energy that is needed to heat 1 lb. of ...http://www.ehow.com/
Put simply, one British Thermal Unit [Btu] is the amount of heat required to raise the temperature of 1 pound [ lb.] of water 1 degree Fahrenheit.http://www.simetric.co.uk/sibtu.htm
HVAC FORMULAS TON OF REFRIGERATION - The amount of heat required to melt a ton (2000 lbs.) of ice at 32°F 288,000 BTU/24 hr. 12,000 BTU/hr. APPROXIMATELY 2 inches in Hg.http://www.descoenergy.com
1 kilowatt-hour (kWh) is equivalent to: 3412 British Thermal Units (BTU) 1000 watt (W) = 1 kilowatt (kW) is equivalent to: 3412 British Thermal Units per hour (BTU/h)http://www.bnoack.com/power/
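The 0.2931 W and 3412 BTU figures quoted in these snippets both follow from the single conversion factor 1 BTU ≈ 1055.06 J. A small sketch (the function names are ours, for illustration):

```python
J_PER_BTU = 1055.05585   # International Table BTU, in joules
SECONDS_PER_HOUR = 3600.0

def btu_per_hour_to_watts(btu_per_hour):
    # Power: 1 BTU/h = 1055.056 J / 3600 s, roughly 0.2931 W.
    return btu_per_hour * J_PER_BTU / SECONDS_PER_HOUR

def kwh_to_btu(kwh):
    # Energy: 1 kWh = 3.6 million joules, roughly 3412 BTU.
    return kwh * 3.6e6 / J_PER_BTU

print(round(btu_per_hour_to_watts(1), 4))  # 0.2931
print(round(kwh_to_btu(1)))                # 3412
```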
BTU stands for British Thermal Units, a means of measuring heat and energy. One BTU equals the amount of energy required to heat one pound of water by 1 degree ...http://www.ehow.com/
Best Buy customers questions and answers for Frigidaire 11,000 BTU Portable Air Conditioner - White. Read questions and answers real customers have contributed for ...http://reviews.bestbuy.com/
T T [1] informal abbreviation for "trillion," meaning the American trillion 10^12. T [2] symbol for the Dvorak T-number, a subjective estimate of the strength of a ...http://www.unc.edu/~rowlett/
1. How Much Power Do I Need? Add up the wattage of tools, appliances and motors you want to run at the same time. Then select the generator with the RUNNING wattage ...http://www.duropower.com/ | {"url":"http://en.findeen.com/1_btu_equals_watts.html","timestamp":"2014-04-20T23:28:48Z","content_type":null,"content_length":"22960","record_id":"<urn:uuid:bf4a83c4-45a9-4ddd-8165-9beda264f9cd>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mplus Discussion >> Confirmatory LCA
Mingnan Liu posted on Thursday, November 14, 2013 - 8:39 am
Hi, I am new to Mplus and there is a not so fundamental model that I don't know how to set up. Here is the situation: I have 12 Likert scales (y1-y12), each with 5 categories (strongly
agree--strongly disagree). I want to run an LCA, and I believe y1-y4 belong to latent class variable f1, y5-y8 belong to f2, and y9-y12 belong to f3. The theory also says that y1-y12 belong to
another latent variable, f4. At the same time, y1-y12 belong to f5 as well. The difference is that for f1-f4, y1-y12 are treated as ordinal, while for f5 they are treated as nominal.
f1-f3 are correlated with each other, whereas f4 and f5 are not correlated with each other, nor with f1-f3. How should I set up the Analysis and Model? Any suggestions are appreciated!
Linda K. Muthen posted on Thursday, November 14, 2013 - 8:44 am
Do you want to use factors as latent class indicators for the LCA?
Mingnan Liu posted on Thursday, November 14, 2013 - 9:07 am
Do you mean I should do CFA first before LCA? I do want to get the conditional probability of each response category, particularly for f4 and f5. My understanding is that I will have one
beta weight for f4, since I treat the indicators as ordinal, and five for f5, since they are nominal. One thing I forgot to mention is that I need to add a constraint so that the beta weights (or conditional probabilities)
for f4 are the same across all 12 questions. Similarly, a constraint is needed for f5 so that the conditional probabilities for the five categories are the same across the 12 questions. Thank you.
Linda K. Muthen posted on Thursday, November 14, 2013 - 10:27 am
It is not clear to me what you want to do. Are f1-f5 continuous latent variables, that is, factors, or categorical latent variables, that is, latent class variables? Please note that the same
variables cannot be treated as ordinal and nominal.
Mingnan Liu posted on Thursday, November 14, 2013 - 10:59 am
Hi Linda,
My apologies for the unclear question. To put it simply, I am trying to replicate the findings from
Kieruj and Moors, 2013 "Response style behavior: question format dependent or personal style?" Quality and Quantity 47: 193-211
particularly the model they specified on p.200. As you can see, F_1i to F_3i are content relevant whereas A_i and E_i are content irrelevant. I am interested in A and E latent variables. I want to
treat the indicator variables as nominal when estimating the latent variable E because, on p.201, the the beta weights are different for each response category. This is why there is a subscript c for
the coefficients of E. The indicator variables are treated as nominal when estimating latent variables A and F1-F3. Therefore, only one beta weight for each question. Also, the beta weights for E and
A are the same for all questions, as you can see from p.201.
Did I understand their model correctly?
Thanks again!
Bengt O. Muthen posted on Friday, November 15, 2013 - 4:52 pm
I think this can be done in Mplus through Model Constraint and using a nominal link, but I don't have the time to explore how right now.
Intersection Type Disciplines in Lambda Calculus and Applicative Term Rewriting Systems
Results 1 - 10 of 35
, 1995
"... We demonstrate the pragmatic value of the principal typing property, a property more general than ML's principal type property, by studying a type system with principal typings. The type system
is based on rank 2 intersection types and is closely related to ML. Its principal typing property prov ..."
Cited by 94 (0 self)
Add to MetaCart
We demonstrate the pragmatic value of the principal typing property, a property more general than ML's principal type property, by studying a type system with principal typings. The type system is
based on rank 2 intersection types and is closely related to ML. Its principal typing property provides elegant support for separate compilation, including "smartest recompilation" and incremental
type inference, and for accurate type error messages. Moreover, it motivates a novel rule for typing recursive definitions that can type many examples of polymorphic recursion.
- In Proc. 29th Int’l Coll. Automata, Languages, and Programming, volume 2380 of LNCS , 2002
"... Let S be some type system. A typing in S for a typable term M is the collection of all of the information other than M which appears in the final judgement of a proof derivation showing that M
is typable. For example, suppose there is a derivation in S ending with the judgement A M : # meanin ..."
Cited by 86 (12 self)
Add to MetaCart
Let S be some type system. A typing in S for a typable term M is the collection of all of the information other than M which appears in the final judgement of a proof derivation showing that M is
typable. For example, suppose there is a derivation in S ending with the judgement A ⊢ M : τ, meaning that M has result type τ when assuming the types of free variables are given by A. Then (A, τ) is a typing for M.
- In Conf. Rec. POPL ’99: 26th ACM Symp. Princ. of Prog. Langs , 1999
"... Principality of typings is the property that for each typable term, there is a typing from which all other typings are obtained via some set of operations. Type inference is the problem of
finding a typing for a given term, if possible. We define an intersection type system which has principal typin ..."
Cited by 52 (17 self)
Add to MetaCart
Principality of typings is the property that for each typable term, there is a typing from which all other typings are obtained via some set of operations. Type inference is the problem of finding a
typing for a given term, if possible. We define an intersection type system which has principal typings and types exactly the strongly normalizable λ-terms. More interestingly, every finite-rank
restriction of this system (using Leivant's first notion of rank) has principal typings and also has decidable type inference. This is in contrast to System F where the finite rank restriction for
every finite rank at 3 and above has neither principal typings nor decidable type inference. This is also in contrast to earlier presentations of intersection types where the status (decidable or
undecidable) of these properties is unknown for the finiterank restrictions at 3 and above. Furthermore, the notion of principal typings for our system involves only one operation, substitution,
rather than severa...
- J. FUNCT. PROGRAMMING , 1998
"... Many polyvariant program analyses have been studied in the 1990s, including k-CFA, polymorphic splitting, and the cartesian product algorithm. The idea of polyvariance is to analyze functions
more than once and thereby obtain better precision for each call site. In this paper we present an equivalen ..."
Cited by 41 (7 self)
Add to MetaCart
Many polyvariant program analyses have been studied in the 1990s, including k-CFA, polymorphic splitting, and the cartesian product algorithm. The idea of polyvariance is to analyze functions more
than once and thereby obtain better precision for each call site. In this paper we present an equivalence theorem which relates a co-inductively defined family of polyvariant flow analyses and a
standard type system. The proof embodies a way of understanding polyvariant flow information in terms of union and intersection types, and, conversely, a way of understanding union and intersection
types in terms of polyvariant flow information. We use the theorem as basis for a new flow-type system in the spirit of the λCIL-calculus of Wells, Dimock, Muller, and Turbak, in which types are
annotated with flow information. A flow-type system is useful as an interface between a flow-analysis algorithm and a program optimizer. Derived systematically via our equivalence theorem, our flow-type
system should be a g...
- In ICFP ’97 [ICFP97 , 1997
"... We present a new framework for transforming data representations in a strongly typed intermediate language. Our method allows both value producers (sources) and value consumers (sinks) to
support multiple representations, automatically inserting any required code. Specialized representations can be ..."
Cited by 29 (13 self)
Add to MetaCart
We present a new framework for transforming data representations in a strongly typed intermediate language. Our method allows both value producers (sources) and value consumers (sinks) to support
multiple representations, automatically inserting any required code. Specialized representations can be easily chosen for particular source/sink pairs. The framework is based on these techniques: 1.
Flow annotated types encode the "flows-from" (source) and "flows-to" (sink) information of a flow graph. 2. Intersection and union types support (a) encoding precise flow information, (b) separating
flow information so that transformations can be well typed, (c) automatically reorganizing flow paths to enable multiple representations. As an instance of our framework, we provide a function
representation transformation that encompasses both closure conversion and inlining. Our framework is adaptable to data other than functions.
, 1995
"... We demonstrate an equivalence between the rank 2 fragments of the polymorphic lambda calculus (System F) and the intersection type discipline: exactly the same terms are typable in each system.
An immediate consequence is that typability in the rank 2 intersection system is DEXPTIME-complete. We int ..."
Cited by 26 (1 self)
Add to MetaCart
We demonstrate an equivalence between the rank 2 fragments of the polymorphic lambda calculus (System F) and the intersection type discipline: exactly the same terms are typable in each system. An
immediate consequence is that typability in the rank 2 intersection system is DEXPTIME-complete. We introduce a rank 2 system combining intersections and polymorphism, and prove that it types exactly
the same terms as the other rank 2 systems. The combined system suggests a new rule for typing recursive definitions. The result is a rank 2 type system with decidable type inference that can type
some interesting examples of polymorphic recursion. Finally, we discuss some applications of the type system in data representation optimizations such as unboxing and overloading.
, 2003
"... Principality of typings is the property that for each typable term, there is a typing from which all other typings are obtained via some set of operations. Type inference is the problem of
finding a typing for a given term, if possible. We define an intersection type system which has principal typ ..."
Cited by 26 (12 self)
Add to MetaCart
Principality of typings is the property that for each typable term, there is a typing from which all other typings are obtained via some set of operations. Type inference is the problem of finding a
typing for a given term, if possible. We define an intersection type system which has principal typings and types exactly the strongly normalizable λ-terms. More interestingly, every finite-rank
restriction of this system (using Leivant's first notion of rank) has principal typings and also has decidable type inference.
, 1997
"... We present a typed intermediate language # CIL for optimizing compilers for function-oriented and polymorphically typed programming languages (e.g., ML). The language # CIL is a typed lambda
calculus with product, sum, intersection, and union types as well as function types annotated with flow label ..."
Cited by 22 (13 self)
Add to MetaCart
We present a typed intermediate language # CIL for optimizing compilers for function-oriented and polymorphically typed programming languages (e.g., ML). The language # CIL is a typed lambda calculus
with product, sum, intersection, and union types as well as function types annotated with flow labels. A novel formulation of intersection and union types supports encoding flow information in the
typed program representation. This flow information can direct optimization.
- In Proc. 1999 Int’l Conf. Functional Programming , 1999
"... We investigate finite-rank intersection type systems, analyzing the complexity of their type inference problems and their relation to the problem of recognizing semantically equivalent terms.
Intersection types allow something of type T1 /\ T2 to be used in some places at type T1 and in other places ..."
Cited by 22 (9 self)
Add to MetaCart
We investigate finite-rank intersection type systems, analyzing the complexity of their type inference problems and their relation to the problem of recognizing semantically equivalent terms.
Intersection types allow something of type T1 /\ T2 to be used in some places at type T1 and in other places at type T2 . A finite-rank intersection type system bounds how deeply the /\ can appear in
type expressions. Such type systems enjoy strong normalization, subject reduction, and computable type inference, and they support a pragmatics for implementing parametric polymorphism. As a
consequence, they provide a conceptually simple and tractable alternative to the impredicative polymorphism of System F and its extensions, while typing many more programs than the Hindley-Milner
type system found in ML and Haskell. While type inference is computable at every rank, we show that its complexity grows exponentially as rank increases. Let K(0, n) = n and K(t + 1, n) = 2^K(t,n);
we prove that recognizing the pure lambda-terms of size n that are typable at rank k is complete for dtime[K(k-1, n)]. We then consider the problem of deciding whether two lambda-terms typable at
rank k have the same normal form, generalizing a well-known result of Statman from simple types to finite-rank intersection types. ...
- In: (ITRS ’04 , 2005
"... The operation of expansion on typings was introduced at the end of the 1970s by Coppo, Dezani, and Venneri for reasoning about the possible typings of a term when using intersection types. Until
recently, it has remained somewhat mysterious and unfamiliar, even though it is essential for carrying ..."
Cited by 17 (7 self)
Add to MetaCart
The operation of expansion on typings was introduced at the end of the 1970s by Coppo, Dezani, and Venneri for reasoning about the possible typings of a term when using intersection types. Until
recently, it has remained somewhat mysterious and unfamiliar, even though it is essential for carrying out compositional type inference. The fundamental idea of expansion is to be able to calculate
the effect on the final judgement of a typing derivation of inserting a use of the intersection-introduction typing rule at some (possibly deeply nested) position, without actually needing to build
the new derivation. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=87450","timestamp":"2014-04-19T02:10:17Z","content_type":null,"content_length":"38321","record_id":"<urn:uuid:758d1f74-fa40-4566-be0b-217f71c44755>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
The regression equation versus r-squared
OK, I hope I'm not beating a dead horse here, but here's another way to think of the difference between r-squared and the regression equation.
The r-squared comes from the standpoint of stepping back and looking at the distribution of wins among teams in your dataset. Some teams have over 60 wins, some teams have under 20 wins, and some
teams are in the middle. If you look at the standings, and ask yourself, "how important are differences in salary to how we got this way?", then you're asking about r-squared.
The regression equation matters more if you're interested in the future, if you care about how much you can influence wins by increasing payroll. If you ask yourself, "how much do I have to spend to
get a few extra wins?", then you want the regression equation.
The r-squared looks at the past, and asks, "was salary important to how we got to this variance in wins?". The regression equation looks to the future, and says, "can we use salary to influence wins?"
It's very possible, and very easy, to have two different answers to these two questions. Here's an example.
Suppose you're trying to see what activities 25-year-olds partake in that affect their life expectancy. You might discover that the average 25-year-old lives to 80, but you want to try to figure out
what factors influence that. You run a multiple regression, and you figure out that if the person smokes at 25, it appears to cut five years off his life expectancy. If he eats healthy, it adds four
years. If he commits suicide at 25, it cuts off 55 years (since he dies at 25 instead of 80).
Your regression equation would look something like:
life expectancy = 80 - (5 * smoker) + (4 * eats healthy) - (55 * commits suicide).
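Written as a function (same made-up coefficients as the equation above, each input 0 or 1):

```python
def life_expectancy(smoker, eats_healthy, commits_suicide):
    # The post's hypothetical coefficients; inputs are 0/1 indicators.
    return 80 - 5 * smoker + 4 * eats_healthy - 55 * commits_suicide

life_expectancy(1, 0, 0)   # 75: smoking costs five years
life_expectancy(0, 0, 1)   # 25: suicide at 25 costs the remaining 55
```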
We should all agree that committing suicide has a big effect on life expectancy, right?
Now, let's look at the r-squared. To do that, look at all the 25-year-olds in the sample (which might be several thousand). You'll see a few that live to 25, some that live to 45, a bunch that live
to 65, a larger bunch that live to 80, and some that live to 100. The distribution is probably bell-shaped.
For the r-squared, ask yourself: how much did suicide contribute to the curve looking like this? The answer: very little. There are probably very few suicides at 25, and even if you adjusted for
those, by taking those points out of the left side of the curve and moving them to the peak, the curve would still look roughly the same. Suicide is not a very big factor in making the curve look
like it does.
And so, you get a very low r-squared for suicide. Maybe it would be .01, or even less.
See the apparent contradiction?
-- suicide has a HUGE effect on lifespan.
-- r-squared for suicide vs. lifespan is very low
And, again, that's because:
-- the regression equation tells you what effect the input has on the output;
-- the r-squared tells you how important that input was in creating the distribution you see.
The regression equation tells you that having a piano drop on your head is very dangerous. The low r-squared tells you that pianos haven't historically been a major source of death.
Here's a different way to explain this, which might make more sense to gamblers:
Suppose that you had to predict the lifespan of a random 25-year-old. Obviously, the more information you have, the more accurate your estimate will be. And, imagine the amount you lose is the square
of the error in your guess. So if you guess 80, and the random person dies at 60, you lose $400 (the square of 80 minus 60).
Without any information, your best strategy is to guess the average, which we said was 80. Your average loss will be the variance, which is the square of the SD. Suppose that SD is 15. Then, your
average loss would be $225.
Now, how valuable is knowing the value of whether or not the guy committed suicide? It's probably not that valuable. Most of the time, the answer will be "no", and you're only slightly better off
than when you started (maybe you guess 80.05 now instead of 80). A tiny, tiny proportion of the time, the answer will be "yes," and you can safely guess 25 and be right on. On balance, you're a
little better off, but not much.
On average, how much less will you lose given the extra information? The answer is given by the r-squared. If the r-squared of the suicide vs. lifespan regression is .01, as estimated above, then
your loss will be reduced by 1%. Instead of losing $225, on average, you'll lose only about $222.75.
Again: the r-squared doesn't tell you that suicide is dangerous. It just tells you that, because of *some combination of dangerousness of suicide and historical frequency of suicide*, you can shave
1% off your error by taking it into account.
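If you want to put numbers on that, here's an exact two-group calculation (hypothetical rates, not data from this post): suicide ends life at exactly 25 but happens only once per thousand 25-year-olds, while everyone else averages 80 with an SD of 15. The effect is huge, yet the r-squared lands right around .01:

```python
p, sd = 0.001, 15.0                      # suicide rate; SD of everyone else
mu = p * 25 + (1 - p) * 80               # best blind guess, about 79.9

# Decompose the variance: spread within each group plus spread between
# the two group means (suicides have no spread in this toy model).
var_within = (1 - p) * sd ** 2
var_between = p * (25 - mu) ** 2 + (1 - p) * (80 - mu) ** 2
var_total = var_within + var_between

r_squared = var_between / var_total      # fraction of squared loss you save
effect = 80 - 25                         # the regression coefficient: 55 years
print(round(r_squared, 3))               # about 0.013, despite the 55-year effect
```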
Now, let's reapply this to basketball. The r-squared for salary vs. wins was .2561. The SD of wins was 14.1, so the variance was the square of that, or 199.
If you took a bet where you had to guess a random team's wins, and had to pay the square of the difference, you'd pick "41" and, on average, owe $199. But let's suppose someone tells you the team's
payroll. Now, you can adjust your guess, to predict higher if the team has a high payroll, or lower if the team has a low payroll. If you adjust your guess optimally -- by using the results of the
regression equation -- you'll cut your average loss by 25.61%. So, on average, you'd lose only 74.39% as much as before. That works out to $148.11.
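Here's that arithmetic spelled out, using the post's own numbers (the result matches the $148.11 figure up to rounding):

```python
sd_wins = 14.1
var_wins = sd_wins ** 2            # about 199: average loss guessing "41" blind
r_squared = 0.2561                 # salary-vs-wins regression, 2008-09

loss_knowing_salary = var_wins * (1 - r_squared)
print(round(var_wins), round(loss_knowing_salary, 2))   # 199 and roughly 148
```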
What Berri, Brook and Schmidt are saying, in "The Wages of Wins," is, "look, if you can only cut your losses by 25.61% by knowing salary, then money can't be that important in buying wins." But
that's wrong. What they should conclude is that "how important money is, combined with how often it's been used to buy wins," isn't that important.
And, really, if you look at the full results of the regression, it turns out that money IS important in buying wins, but that not too many teams took advantage of that fact in 2008-09.
The equation shows that every $1.6 million in additional salary will buy you a win -- so if you want to go 61-21, it should only cost you $32 million more than the league-average payroll of
$68.5 million.
That's pretty important, and so the low r-squared must be that not a lot of teams varied much in salary. If you look at the salary chart, there's a huge group bunched near the average: there are 18
teams between $62mm and $75mm, within $6.5 million of the average. Those teams are so close together that there's not much difference in their expected wins.
If you have to bet, and the random team you pick turns out to be the lowest-spending in the league, you'll reduce your estimate. You would have lost a lot of money guessing 41, so the information
that you picked a low-spending team will cut your losses a lot. If it turns out to be one of the highest-spending in the league, same thing. But if it turns out to be one of the 18 teams in the middle, the salary information won't help you much. And that's why the r-squared is only about 25% -- for many of the teams in the sample, knowing the salary doesn't help you cut your losses much.
What if we take out those 18 teams, and regress only on the remaining 12? Well, the regression equation stays almost the same -- $1.5 million per win instead of $1.6. But the r-squared increases to
.4586. Why does the r-squared increase? Because salary is much more significant a factor for those 12 teams than for the ones in the middle. Before, knowing the salary might not do you much good for
your estimate if it's one of the teams bunched in the middle. But, now, those teams are gone. Your random team is much more likely to be the Cavaliers or the Clippers, so knowing the salary is a much
bigger help, and it lets you cut your betting losses by almost half.
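The same pattern shows up in a toy league you can compute by hand (made-up payrolls and fixed luck terms, not the 2008-09 data): build wins from a true $1.6-million-per-win rule plus noise, then drop the bunched-up middle teams. The fitted slope barely moves, but the r-squared jumps:

```python
# wins = 41 + (payroll - 68.5) / 1.6 + luck; payrolls in $ millions
payrolls = [55, 58, 62, 63, 64, 65, 66, 67, 68, 69,
            70, 71, 72, 73, 74, 75, 79, 82, 90, 95]
luck = [6, -5, 4, -6, 5, -4, 6, -5, 4, -6,
        5, -4, 6, -5, 4, -6, 5, -4, 6, -5]
teams = [(p, 41 + (p - 68.5) / 1.6 + e) for p, e in zip(payrolls, luck)]

def slope_and_r2(pts):
    """Simple least-squares slope and r-squared for (payroll, wins) pairs."""
    n = len(pts)
    mx = sum(p for p, _ in pts) / n
    my = sum(w for _, w in pts) / n
    sxy = sum((p - mx) * (w - my) for p, w in pts)
    sxx = sum((p - mx) ** 2 for p, _ in pts)
    syy = sum((w - my) ** 2 for _, w in pts)
    return sxy / sxx, sxy * sxy / (sxx * syy)

slope_all, r2_all = slope_and_r2(teams)
extremes = [t for t in teams if abs(t[0] - 68.5) > 6.5]   # drop the middle teams
slope_ext, r2_ext = slope_and_r2(extremes)
# Both slopes stay close to the true 1/1.6 = 0.625 wins per $1 million,
# but the r-squared is noticeably higher when only the extremes remain.
```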
One last summary:
1. The regression equation tells you how powerful the input is in affecting output -- is it a nuclear weapon, or a pea-shooter?
2. The r-squared tells you how powerful the input is, "multiplied by" how extensively the input was historically used. That is: a nuclear weapon used once might give you the same r-squared as a
pea-shooter used a billion times.
So a low r-squared might mean
-- an input that doesn't have much effect on the output (e.g., shoe size probably doesn't affect lifespan much);
-- an input that has a big effect on output but doesn't happen much (e.g., suicide curtails 100% of lifespan but happens rarely); or
-- an input that doesn't affect output and also doesn't happen much. (e.g., fluorescent purple shoes' effect on lifespan).
In the case of the 2008-09 NBA, the regression equation shows that salary is a fairly powerful bomb. And the moderate r-squared shows that not every team uses it to its full potential.
Bottom line: salary can indeed very effectively buy wins. The r-squared is as small as it is because, in 2008-09, NBA teams differed only moderately in how they chose to vary their spending.
Labels: basketball, gambling, NBA, payroll, statistics, The Wages of Wins
Morrisville State College
Topics include: Complex fractions; Evaluation and combinations of functions, inverse functions, exponential, and logarithmic functions, including applications; General angle trigonometry in radian
measure; Graphs of basic trigonometric functions; Transformations of sine and cosine functions; Trigonometric identities and equations; Law of sines and law of cosines, including applications. (TI-83
plus or TI-84 plus required. TI-Nspire or similar calculator is not allowed.) Prerequisite: MATH 102 (C or better required) or equivalent. 3 credits (3 lecture hours), fall or spring semester. These
credits count toward the Math and/or Science (List B) requirements for graduation. Students who successfully complete MATH 103 will fulfill the SUNY General Education requirement for Mathematics.
The Arithmetic of Dynamical Systems
Joseph H. Silverman
Springer-Verlag – Graduate Texts in Mathematics 241
ISBN: 13: 978-0-387-69903-5 – 1st ed. – © 2007 – 511 + ix pages
Math. Subj. Class [2010]: 37Pxx (37P05, 37P15, 37P20, 37P30, 37P35, 37P45, 37P50)
Available from Amazon and direct from Springer.
The Arithmetic of Dynamical Systems is a graduate level text designed to provide an entry into a new field that is an amalgamation of two venerable areas of mathematics, Dynamical Systems and Number
Theory. Many of the motivating theorems and conjectures in the new subject of Arithmetic Dynamics may be viewed as the transposition of classical results in the theory of Diophantine equations to the
setting of discrete dynamical systems, especially to the iteration theory of maps on the projective line and other algebraic varieties.
1. An Introduction to Classical Dynamics
2. Dynamics Over Local Fields: Good Reduction
3. Dynamics Over Global Fields
4. Families of Dynamical Systems
5. Dynamics Over Local Fields: Bad Reduction
6. Dynamics Associated to Algebraic Groups
7. Dynamics in Dimension Greater Than One
Click on the links for the following material.
Errata List
No book is ever free from error or incapable of being improved. I would be delighted to receive comments, good or bad, and corrections from my readers. You can send mail to me at
I really need help on this stuff
We are doing vector spaces in my math class right now and I'm TOTALLY lost. Here are a few of the problems I don't get:
Let u and v be (fixed) vectors in the vector space V. Show that the set W of all linear combinations ab+bv of u and v is a subspace of V.
Prove: If the (finite) set S of vectors contains the zero vector, then S is linearly dependent.
Determine whether or not the given vectors in R^n form a basis for R^n.
v[1]=(3,-7,5,2), v[2]=(1,-1,3,4), v[3]=(7,11,3,13)
Let {v[1],v[2],...,v[k]} be the basis for the proper subspace W of the vector space V, and suppose that the vector v of V is not in W. Show that the vectors v[1],v[2],...,v[k],v are linearly independent.
Please help! Thanks!
isuckatmath wrote:Let u and v be (fixed) vectors in the vector space V. Show that the set W of all linear combinations au + bv of u and v is a subspace of V.
Show that generic elements of the set fulfill the definition of a vector space. (There should be a list of properties for vector spaces. You need to show that these linear combinations obey these properties.)
isuckatmath wrote:Prove: If the (finite) set S of vectors contains the zero vector, then S is linearly dependent.
What is the definition of a linearly independent set? If you add the zero vector to such a set, what property no longer holds?
isuckatmath wrote:Determine whether or not the given vectors in R^n form a basis for R^n.
v[1]=(3,-7,5,2), v[2]=(1,-1,3,4), v[3]=(7,11,3,13)
Since the vectors are in R^4 and you have only three vectors (so clearly they cannot form a basis), try to find a vector in R^4 which is not in the span of the set you've been given.
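If you want to double-check numerically (a verification aid I'm adding, not part of the hint): row-reduce the three vectors with exact arithmetic and look at the rank. They come out independent (rank 3), but three vectors can never span the four-dimensional space R^4.

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gauss-Jordan elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

v1, v2, v3 = (3, -7, 5, 2), (1, -1, 3, 4), (7, 11, 3, 13)
print(rank([v1, v2, v3]))   # 3: independent, but a basis of R^4 needs 4 vectors
```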
isuckatmath wrote:Let {v[1],v[2],...,v[k]} be the basis for the proper subspace W of the vector space V, and suppose that the vector v of V is not in W. Show that the vectors v[1],v[2],...,v[k],v
are linearly independent.
What is the definition of a basis? So what can you say about the vectors in the basis?
If the vector v is not in W, can you form v by a linear combination of the basis vectors? So what can you say about the set with this new vector thrown in?
Homework Help
Posted by Katherine on Monday, April 4, 2011 at 6:34pm.
The ratio of the number of postcards John had to the number of postcards Zachary had was 4:9. Zachary had 45 more postcards than John. After giving some postcards to John, Zachary had 6/7 as many
postcards as John.
(A) How many postcards did Zachary have in the beginning?
(B) How many postcards did Zachary give to John?
• math !!!! :O - Reiny, Monday, April 4, 2011 at 7:20pm
The way you define your variables is very crucial
let the number of cards John has be 4x
let the number of cards Zach has be 9x
(notice 4x : 9x = 4:9 )
Zach has 45 more than John
so 9x = 4x + 45
5x = 45
x = 9
so now we know that John has 36 and Zach has 81
so after the "give-away"
John has 36+y
Zach has 81-y
we are told that now Zach has 6/7 as many as John
81-y = (6/7)(36+y)
567 - 7y = 216 + 6y
-13y = -351
y = 27
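A quick exact-arithmetic check of those steps (added for verification; not part of the original reply):

```python
from fractions import Fraction

x = Fraction(45, 5)                     # from 9x = 4x + 45, so x = 9
john, zach = 4 * x, 9 * x               # 36 and 81 postcards to start

y = Fraction(567 - 216, 13)             # from 567 - 7y = 216 + 6y
print(zach, y)                          # 81 27: the answers to (A) and (B)
print(zach - y == Fraction(6, 7) * (john + y))   # True: 54 = (6/7) * 63
```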
Magnetohydrodynamic free convection and entropy generation in a square porous cavity.
(English) Zbl 1079.76643
Summary: The problem of entropy generation in a fluid saturated porous cavity for laminar magnetohydrodynamic natural convection heat transfer is analyzed in this paper. Heat transfer results are
also presented. Darcy's law for porous media is considered. The magnetic force is assumed to act along the direction of the gravity force. As boundary conditions of the cavity, two vertical
opposite walls are kept at constant but different temperatures and the remaining two walls are kept thermally insulated. For a range of Rayleigh number ($Ra = 1$ to $10^{4}$) and Hartmann number ($Ha = 0$ to $10$), heat transfer, overall entropy generation rate, and heat transfer irreversibility are presented in terms of dimensionless Nusselt number ($Nu$), entropy generation number ($Ns$), and Bejan
number ($Be$), respectively. Finally, parametric results are presented in terms of isothermal lines, streamlines, isentropic lines, and iso-Bejan lines.
76R10 Free convection (fluid mechanics)
76W05 Magnetohydrodynamics and electrohydrodynamics
76S05 Flows in porous media; filtration; seepage
80A20 Heat and mass transfer, heat flow
American Mathematical Society
AMS Sectional Meeting Program by Day
Current as of Tuesday, April 12, 2005 15:10:30
Program | Deadlines | Registration/Housing/Etc. | Inquiries: meet@ams.org
2003 Spring Central Section Meeting
Bloomington, IN, April 4-6, 2003
Meeting #985
Associate secretaries:
Susan J Friedlander, AMS
Saturday April 5, 2003
• Saturday April 5, 2003, 8:00 a.m.-4:00 p.m.
Meeting Registration
Conference Lounge, Indiana Memorial Union
• Saturday April 5, 2003, 8:00 a.m.-4:00 p.m.
Exhibit and Book Sale
Distinguished Alumni Room, Indiana Memorial Union
• Saturday April 5, 2003, 8:30 a.m.-11:15 a.m.
Special Session on Holomorphic Dynamics, II
Room 135, Ballantine Hall
Eric D. Bedford, Indiana University bedford@indiana.edu
Kevin M. Pilgrim, Indiana University pilgrim@indiana.edu
• Saturday April 5, 2003, 8:30 a.m.-11:15 a.m.
Special Session on Geometric Topology, II
Room 246, Ballantine Hall
Paul A. Kirk, Indiana University pkirk@indiana.edu
Charles Livingston, Indiana University livingst@indiana.edu
□ 8:30 a.m.
Bifurcations, Symmetry, and $SU(n)$ Gauge Theory.
Christopher M Herald*, University of Nevada, Reno
□ 9:30 a.m.
Normal Surface Theory, Character Varieties and Quantum Invariants.
Charles D Frohman*, The University of Iowa
□ 10:30 a.m.
A cut-and-paste approach to contact topology.
Ko Honda, USC
William H. Kazez*, UGA
Gordana Matic, UGA
• Saturday April 5, 2003, 8:30 a.m.-11:20 a.m.
Special Session on Particle Models and Their Fluid Limits, I
Redbud Room, Indiana Memorial Union
Robert T. Glassey, Indiana University glassey@indiana.edu
David C. Hoff, Indiana University hoff@indiana.edu
□ 8:30 a.m.
On A Class of Parabolic Equations with Nonlocal Boundary Conditions.
Hong-Ming Yin*, Washington State University
□ 9:00 a.m.
Cosmology, Black Holes, and Shock Waves Beyond the Hubble Length.
Joel A Smoller*, Univ. of Michigan
□ 9:30 a.m.
On Steady States in Galactic Dynamics.
Jack W Schaeffer*, Carnegie Mellon University
□ 10:00 a.m.
□ 10:30 a.m.
Initial- Boundary Value Problem in Plasmas.
Marshall Slemrod*, University of Wisconsin-Madison
□ 11:00 a.m.
Approximation of the Vlasov-Poisson-Fokker-Planck System by a Deterministic Particle Method.
Stephen Wollman*, College of Staten Island, CUNY
Ercument Ozizmir, College of Staten Island, CUNY
• Saturday April 5, 2003, 8:30 a.m.-11:20 a.m.
Special Session on Mathematical and Computational Problems in Fluid Dynamics and Geophysical Fluid Dynamics, I
Persimmon Room, Indiana Memorial Union
Roger Temam, Indiana University temam@indiana.edu
Shouhong Wang, Indiana University showang@indiana.edu
□ 8:30 a.m.
Theory and applications of initial-boundary-value problems for nonlinear wave equations.
Jerry Bona*, University of Illinois at Chicago
□ 9:00 a.m.
Decomposing Atmospheric Weather Patterns.
Paul K. Newton*, University of Southern California
□ 9:30 a.m.
Small-Rossby Number Asymptotics in Rotating Fluid Systems.
Djoko Wirosoetisno*, Indiana University
□ 10:00 a.m.
□ 10:30 a.m.
Instabilities in the Quasi Geostrophic Equation.
Susan Friedlander*, U of Illinois-Chicago
□ 11:00 a.m.
Energy budgets in Charney--Hasegawa--Mima and surface quasi-geostrophic turbulence.
Chuong V Tran*, University of Alberta
John C Bowman, University of Alberta
• Saturday April 5, 2003, 8:30 a.m.-11:10 a.m.
Special Session on Operator Algebras and Free Probability, I
Room 215, Ballantine Hall
Hari Bercovici, Indiana University bercovic@indiana.edu
Marius Dadarlat, Purdue University mdd@math.purdue.edu
□ 8:30 a.m.
Connes' Embedding Problem, Lance's WEP and the Noncommutative Stone-Weierstrass Problem.
Nate Brown*, Penn State University
□ 9:30 a.m.
Invariant subspaces for operators modeled by certain upper triangular random matrices.
Ken Dykema*, Texas A&M University
Uffe Haagerup, University of Southern Denmark
□ 10:30 a.m.
On Voiculescu's non-microstates free entropy.
Dimitri Y Shlyakhtenko*, UCLA
• Saturday April 5, 2003, 8:30 a.m.-11:15 a.m.
Special Session on Applications of Teichmüller Theory to Dynamics and Geometry, II
Room 247, Ballantine Hall
Christopher M. Judge, Indiana University cjudge@indiana.edu
Matthias Weber, Indiana University matweber@indiana.edu
□ 8:30 a.m.
Counting Problems on Translation Surfaces.
Howard Masur*, University of Illinois at Chicago
□ 9:30 a.m.
Relative isoperimetric inequality: an extension of the classical isoperimetric inequality.
Jaigyoung Choe*, Seoul National University
□ 10:30 a.m.
Infinitely Generated Veech Groups.
Pascal Hubert, Institut de Mathematiques de Luminy
Thomas A Schmidt*, Oregon State University
• Saturday April 5, 2003, 8:30 a.m.-11:20 a.m.
Special Session on Differential Geometry, II
Room 140, Ballantine Hall
Jiri Dadok, Indiana University dadok@indiana.edu
Bruce Solomon, Indiana University solomon@indiana.edu
Ji-Ping Sha, Indiana university jsha@indiana.edu
• Saturday April 5, 2003, 8:30 a.m.-11:20 a.m.
Special Session on Recent Trends in the Analysis and Computations of Functional Differential Equations
Maple Room, Indiana Memorial Union
Paul W. Eloe, University of Dayton paul.eloe@notes.udayton.edu
Qin Sheng, University of Dayton qin.sheng@notes.udayton.edu
□ 8:30 a.m.
Positive Solutions of a Three-Point Boundary Value Problem on a Time Scale.
Eric R. Kaufmann*, University of Arkansas at Little Rock
□ 9:00 a.m.
Existence Results for Impulsive Functional Differential Equations.
John M Davis*, Baylor University
□ 9:30 a.m.
A Quasilinearization Approach for Nonlinear Boundary Value Problems on Time Scales.
Elvan Akin-Bohner*, University of Missouri-Rolla
Ferhan Merdivenci Atici, Western Kentucky University
□ 10:00 a.m.
□ 10:30 a.m.
Positive Solutions for Singular Three Point Boundary Value Problems.
Parmjeet K Singh*, Baylor University
□ 11:00 a.m.
Stability properties of linear Volterra integrodifferential equations.
Muhammad Islam*, University of Dayton
Youssef Raffoul, University of Dayton
• Saturday April 5, 2003, 8:30 a.m.-11:20 a.m.
Special Session on Graph and Design Theory, II
Room 144, Ballantine Hall
Atif A. Abueida, University of Dayton Atif.Abueida@notes.udayton.edu
Mike Daven, Mount Saint Mary College daven@msmc.edu
• Saturday April 5, 2003, 8:30 a.m.-11:15 a.m.
Special Session on Representations of Infinite Dimensional Lie Algebras and Mathematical Physics, II
Room 219, Ballantine Hall
Katrina Deane Barron, University of Notre Dame kbarron@nd.edu
Rinat Kedem, University of Illinois, Urbana-Champaign rinat@math.uiuc.edu
• Saturday April 5, 2003, 8:30 a.m.-11:25 a.m.
Special Session on Graph Theory, I
Room 209, Ballantine Hall
Tao Jiang, Miami University jiangt@muohio.edu
Zevi Miller, Miami University millerz@muohio.edu
Dan Pritikin, Miami University pritikd@muohio.edu
• Saturday April 5, 2003, 8:30 a.m.-11:20 a.m.
Special Session on Extremal Combinatorics, II
Room 208, Ballantine Hall
Dhruv Mubayi, University of Illinois at Chicago mubayi@math.uic.edu
Jozef Skokan, University of Illinois at Urbana-Champaign jozef@math.uiuc.edu
• Saturday April 5, 2003, 8:30 a.m.-11:20 a.m.
Special Session on Cryptography and Computational and Algorithmic Number Theory, II
Room 205, Ballantine Hall
Joshua Holden, Rose-Hulman Institute of Technology holden@rose-hulman.edu
John Rickert, Rose-Hulman Institute of Technology john.rickert@rose-hulman.edu
Jonathan Sorenson, Butler University sorenson@butler.edu
Andreas Stein, University of Illinois at Urbana-Champaign andreas@math.uiuc.edu
□ 8:30 a.m.
NUCOMP I - Idea and Algorithm.
Renate Scheidler*, University of Calgary
□ 9:00 a.m.
NUCOMP II - Implementation and Applications.
Michael J Jacobson*, University of Calgary / CISaC
□ 9:30 a.m.
Some results concerning periodic continued fractions.
Roger Patterson, Macquarie University
Alf J van der Poorten, Macquarie University
Hugh C Williams*, University of Calgary
□ 10:00 a.m.
□ 10:30 a.m.
Tate-pairing implementations for tripartite key-agreement.
Iwan M Duursma*, Univ of Illinois at U-C
Hyang-Sook Lee, Ewha Womans University
□ 11:00 a.m.
Permutation-Sensitive Signatures.
Eric Bach*, University of Wisconsin
• Saturday April 5, 2003, 8:30 a.m.-11:20 a.m.
Special Session on Operator Algebras and Their Applications, I
Room 217, Ballantine Hall
Jerry Kaminker, Indiana University-Purdue University Indianapolis kaminker@math.iupui.edu
Ronghui Ji, Indiana University-Purdue University Indianapolis ronji@math.iupui.edu
• Saturday April 5, 2003, 8:30 a.m.-11:20 a.m.
Special Session on Stochastic Analysis with Applications, II
Oak Room, Indiana Memorial Union
Jin Ma, Purdue University majin@math.purdue.edu
Frederi Viens, Purdue University viens@stat.purdue.edu
• Saturday April 5, 2003, 9:00 a.m.-11:15 a.m.
Special Session on Harmonic Analysis in the 21st Century, II
Hoosier Room, Indiana Memorial Union
Winston C. Ou, Indiana University wou@indiana.edu
Alberto Torchinsky, Indiana University torchins@indiana.edu
□ 9:00 a.m.
Fixed Points of Holomorphic Mappings.
Steven G. Krantz*, Washington University in St. Louis
□ 10:00 a.m.
A Tauberian Theorem for Ergodic Averages, Maximal Estimates, and Spectral Decomposability for Positive Invertible Operators.
Earl R. Berkson*, University of Illinois
T. A. Gillespie, University of Edinburgh
□ 10:30 a.m.
Bellman functions with Wavelets.
Janine Wittwer*, Williams College
• Saturday April 5, 2003, 9:00 a.m.-11:15 a.m.
Special Session on Probability, II
Sassafras Room, Indiana Memorial Union
Russell D. Lyons, Indiana University rdlyons@indiana.edu
Robin A. Pemantle, Ohio State University pemantle@math.ohio-state.edu
□ 9:00 a.m.
Optimization of shape in continuum percolation.
Johan Jonasson*, Chalmers University of Technology
□ 10:00 a.m.
Percolation on Finite Cayley Graphs.
Igor Pak*, MIT
□ 10:30 a.m.
What do bootstrap percolation and integer partitions have in common?
Thomas M Liggett*, UCLA
A E Holroyd, UC Berkeley
Dan Romik, Weizmann Institute
• Saturday April 5, 2003, 9:00 a.m.-11:20 a.m.
Special Session on Codimension One Splittings of Manifolds, II
Room 236, Ballantine Hall
James F. Davis, Indiana University jfdavis@indiana.edu
Andrew A. Ranicki, University of Edinburgh aar@maths.ed.ac.uk
□ 9:00 a.m.
Twisting diagrams, codimension 1 splittings and Unil.
Ian Hambleton*, McMaster University
□ 9:30 a.m.
UNil(Z,Z,Z) and 4-manifolds.
Bjoern Jahren*, University of Oslo
Slawomir Kwasik, Tulane University
□ 10:00 a.m.
□ 10:30 a.m.
The block structure space of the infinite-dimensional real projective space.
Tibor Macko*, University of Aberdeen, Scotland, UK
□ 11:00 a.m.
Splitting Theorems for Connected Sums.
James F. Davis*, Indiana University
Francis X Connolly, University of Notre Dame
• Saturday April 5, 2003, 9:00 a.m.-11:25 a.m.
Special Session on Algebraic Topology, II
Room 214, Ballantine Hall
Randy McCarthy, University of Illinois, Urban-Champaign randy@math.uiuc.edu
Ayelet Lindenstrauss, Indiana University ayelet@math.indiana.edu
• Saturday April 5, 2003, 9:30 a.m.-11:20 a.m.
Special Session on Weak Dependence in Probability and Statistics, II
Walnut Room, Indiana Memorial Union
Richard C. Bradley, Indiana University bradleyr@indiana.edu
Lanh T. Tran, Indiana University tran@indiana.edu
• Saturday April 5, 2003, 10:00 a.m.-11:20 a.m.
Special Session on Ergodic Theory and Dynamical Systems, II
Room 006, Ballantine Hall
Roger L. Jones, DePaul University rjones@condor.depaul.edu
Ayse A. Sahin, DePaul University asahin@condor.depaul.edu
• Saturday April 5, 2003, 11:40 a.m.-12:30 p.m.
Invited Address
More rigorous results on the NK model.
Whittenberger Auditorium, Indiana Memorial Union
V. Limic, University of British Columbia
R. Pemantle*, Ohio State University
• Saturday April 5, 2003, 2:00 p.m.-2:50 p.m.
Invited Address
On the asphericity of hyperplane complements.
Room 013, Ballantine Hall
Daniel J. Allcock*, University of Texas
• Saturday April 5, 2003, 3:00 p.m.-4:50 p.m.
Special Session on Ergodic Theory and Dynamical Systems, III
Room 006, Ballantine Hall
Roger L. Jones, DePaul University rjones@condor.depaul.edu
Ayse A. Sahin, DePaul University asahin@condor.depaul.edu
• Saturday April 5, 2003, 3:00 p.m.-4:45 p.m.
Special Session on Holomorphic Dynamics, III
Room 135, Ballantine Hall
Eric D. Bedford, Indiana University bedford@indiana.edu
Kevin M. Pilgrim, Indiana University pilgrim@indiana.edu
□ 3:00 p.m.
On Geometrically Finite Branched Covering Maps.
Guizhen Cui, Academy of Mathematics and System Sciences
Yunping Jiang*, CUNY Graduate Center and Queens College
Dennis Sullivan, CUNY Graduate Center
□ 4:00 p.m.
Random Iteration and Degenerate Domains.
Linda Keen*, Lehman College, CUNY
Nikola Lakic, Lehman College, CUNY
• Saturday April 5, 2003, 3:00 p.m.-4:50 p.m.
Special Session on Weak Dependence in Probability and Statistics,III
Walnut Room, Indiana Memorial Union
Richard C. Bradley, Indiana University bradleyr@indiana.edu
Lanh T. Tran, Indiana University tran@indiana.edu
• Saturday April 5, 2003, 3:00 p.m.-4:45 p.m.
Special Session on Geometric Topology, III
Room 246, Ballantine Hall
Paul A. Kirk, Indiana University pkirk@indiana.edu
Charles Livingston, Indiana University livingst@indiana.edu
□ 3:00 p.m.
An index two subcategory of a certain cobordism category.
Patrick Gilmer*, Louisiana State University
□ 4:00 p.m.
Links of complex surface singularities.
Liviu I Nicolaescu*, University of Notre Dame
• Saturday April 5, 2003, 3:00 p.m.-4:45 p.m.
Special Session on Harmonic Analysis in the 21st Century, III
Hoosier Room, Indiana Memorial Union
Winston C. Ou, Indiana University wou@indiana.edu
Alberto Torchinsky, Indiana University torchins@indiana.edu
□ 3:00 p.m.
Lipschitz Spaces and Calderon Zygmund Operators associated to Non-doubling Measures.
A. Eduardo Gatto*, DePaul University
Jose Garcia-Cuerva, Univ. Autonoma de Madrid
□ 4:00 p.m.
$L^p$--estimates for singular integrals of even kernels.
Rodrigo Banuelos*, Purdue University
Pedro J Mendez-Hernandez, University of Utah
• Saturday April 5, 2003, 3:00 p.m.-4:50 p.m.
Special Session on Particle Models and Their Fluid Limits, II
Redbud Room, Indiana Memorial Union
Robert T. Glassey, Indiana University glassey@indiana.edu
David C. Hoff, Indiana University hoff@indiana.edu
• Saturday April 5, 2003, 3:00 p.m.-4:50 p.m.
Special Session on Probability, III
Sassafras Room, Indiana Memorial Union
Russell D. Lyons, Indiana University rdlyons@indiana.edu
Robin A. Pemantle, Ohio State University pemantle@math.ohio-state.edu
□ 3:00 p.m.
Entropy and graph homomorphisms.
David Galvin*, Microsoft Research
Prasad Tetali, Georgia Tech
□ 3:30 p.m.
Some Ideas in Scenery Reconstruction.
Heinrich Felix Matzinger*, University of Bielefeld, Germany
□ 4:30 p.m.
A threshold routing system in heavy traffic.
Vlada Limic*, British Columbia
Ruth Williams, UC San Diego
• Saturday April 5, 2003, 3:00 p.m.-4:50 p.m.
Special Session on Mathematical and Computational Problems in Fluid Dynamics and Geophysical Fluid Dynamics, II
Persimmon Room, Indiana Memorial Union
Roger Temam, Indiana University temam@indiana.edu
Shouhong Wang, Indiana University showang@indiana.edu
• Saturday April 5, 2003, 3:00 p.m.-5:10 p.m.
Special Session on Operator Algebras and Free Probability, II
Room 215, Ballantine Hall
Hari Bercovici, Indiana University bercovic@indiana.edu
Marius Dadarlat, Purdue University mdd@math.purdue.edu
□ 4:00 p.m.
Non-commutative $L_p$ spaces and their operator space structures.
Zhong-Jin Ruan*, University of Illinois
□ 4:30 p.m.
On free semigroupoid algebras.
David W. Kribs*, Purdue University
• Saturday April 5, 2003, 3:00 p.m.-4:45 p.m.
Special Session on Applications of Teichmüller Theory to Dynamics and Geometry, III
Room 247, Ballantine Hall
Christopher M. Judge, Indiana University cjudge@indiana.edu
Matthias Weber, Indiana University matweber@indiana.edu
□ 3:00 p.m.
Moduli spaces of constant mean curvature surfaces properly embedded in $R^3$.
Rob Kusner*, University of Massachusetts at Amherst
□ 4:00 p.m.
Dynamics on branched covers of Veech surfaces.
Alex Eskin*, University of Chicago
J Marklof, University of Bristol
D Witte, Oklahoma State University
• Saturday April 5, 2003, 3:00 p.m.-4:50 p.m.
Special Session on Differential Geometry, III
Room 140, Ballantine Hall
Jiri Dadok, Indiana University dadok@indiana.edu
Bruce Solomon, Indiana University solomon@indiana.edu
Ji-Ping Sha, Indiana University jsha@indiana.edu
• Saturday April 5, 2003, 3:00 p.m.-4:50 p.m.
Special Session on Recent Trend in the Analysis and Computations of Functional Differential
Maple Room, Indiana Memorial Union
Paul W. Eloe, University of Dayton paul.eloe@notes.udayton.edu
Qin Sheng, University of Dayton qin.sheng@notes.udayton.edu
• Saturday April 5, 2003, 3:00 p.m.-4:50 p.m.
Special Session on Graph and Design Theory, III
Room 144, Ballantine Hall
Atif A. Abueida, University of Dayton Atif.Abueida@notes.udayton.edu
Mike Daven, Mount Saint Mary College daven@msmc.edu
• Saturday April 5, 2003, 3:00 p.m.-4:45 p.m.
Special Session on Representations of Infinite Dimensional Lie Algebras and Mathematical Physics, III
Room 219, Ballantine Hall
Katrina Deane Barron, University of Notre Dame kbarron@nd.edu
Rinat Kedem, University of Illinois, Urbana-Champaign rinat@math.uiuc.edu
□ 3:00 p.m.
$C_2$-cofiniteness of the vertex operator algebra $V_L^+$ when $L$ is a rank one lattice.
Gaywalee Yamskulna*, SUNY at Binghamton
□ 3:30 p.m.
Affinization of commutative algebras.
Michael Roitman*, University of Michigan
□ 4:00 p.m.
Framed VOAs and frame stabilizers.
Robert L. Griess*, University of Michigan
• Saturday April 5, 2003, 3:00 p.m.-4:55 p.m.
Special Session on Graph Theory, II
Room 209, Ballantine Hall
Tao Jiang, Miami University jiangt@muohio.edu
Zevi Miller, Miami University millerz@muohio.edu
Dan Pritikin, Miami University pritikd@muohio.edu
• Saturday April 5, 2003, 3:00 p.m.-4:50 p.m.
Special Session on Extremal Combinatorics, III
Room 208, Ballantine Hall
Dhruv Mubayi, University of Illinois at Chicago mubayi@math.uic.edu
Jozef Skokan, University of Illinois at Urbana-Champaign jozef@math.uiuc.edu
• Saturday April 5, 2003, 3:00 p.m.-4:50 p.m.
Special Session on Cryptography and Computational and Algorithmic Number Theory, III
Room 205, Ballantine Hall
Joshua Holden, Rose-Hulman Institute of Technology holden@rose-hulman.edu
John Rickert, Rose-Hulman Institute of Technology john.rickert@rose-hulman.edu
Jonathan Sorenson, Butler University sorenson@butler.edu
Andreas Stein, University of Illinois at Urbana-Champaign andreas@math.uiuc.edu
• Saturday April 5, 2003, 3:00 p.m.-4:50 p.m.
Special Session on Operator Algebras and Their Applications, II
Room 217, Ballantine Hall
Jerry Kaminker, Indiana University-Purdue University Indianapolis kaminker@math.iupui.edu
Ronghui Ji, Indiana University-Purdue University Indianapolis ronji@math.iupui.edu
□ 3:00 p.m.
□ 3:30 p.m.
Euler characteristics of manifolds of bounded geometry.
John G Miller*, IUPUI
□ 4:00 p.m.
Hyperbolic groups, bounded cohomology and orbit equivalence superrigidity.
Igor Mineyev*, University of Illinois at Urbana-Champaign
□ 4:30 p.m.
Uniformly embedding discrete groups into Hilbert space.
Erik Guentner*, University of Hawaii, Manoa
Nigel Higson, Pennsylvania State University
Shmuel Weinberger, University of Chicago
• Saturday April 5, 2003, 3:00 p.m.-4:20 p.m.
Special Session on Codimension One Splittings of Manifolds, III
Room 236, Ballantine Hall
James F. Davis, Indiana University jfdavis@indiana.edu
Andrew A. Ranicki, University of Edinburgh aar@maths.ed.ac.uk
□ 3:00 p.m.
A homotopic approach to the Isomorphism Conjecture.
Daniel Juan-Pineda, Instituto de Matematicas, UNAM, Campus Morelia
Stratos Prassidis*, Canisius College
□ 3:30 p.m.
Codimension 1 splitting and noncommutative localization.
Andrew A Ranicki*, University of Edinburgh
□ 4:00 p.m.
Seifert Surfaces and Characteristic Polynomials.
Desmond Sheiham*, University of California, Riverside
• Saturday April 5, 2003, 3:00 p.m.-4:50 p.m.
Special Session on Stochastic Analysis with Applications, III
Oak Room, Indiana Memorial Union
Jin Ma, Purdue University majin@math.purdue.edu
Frederi Viens, Purdue University viens@stat.purdue.edu
□ 3:00 p.m.
Particle representations for measure-valued branching processes.
Thomas G. Kurtz*, University of Wisconsin-Madison
Eliane R. Rodrigues, National Autonomous University of Mexico
□ 4:00 p.m.
Particle approximations of Lyapunov exponents connected to Schrödinger operators and Feynman-Kac semigroups.
Pierre Del Moral*, LSP Universite Paul Sabatier Toulouse, France
□ 4:30 p.m.
On spin glasses systems and stochastic calculus.
Samy Tindel*, University of Paris 13
• Saturday April 5, 2003, 3:00 p.m.-4:45 p.m.
Special Session on Algebraic Topology, III
Room 214, Ballantine Hall
Randy McCarthy, University of Illinois, Urbana-Champaign randy@math.uiuc.edu
Ayelet Lindenstrauss, Indiana University ayelet@math.indiana.edu
□ 3:00 p.m.
$p$-compact groups and $p$-local groups.
Jesper Grodal*, University of Chicago
□ 3:30 p.m.
(Homological?) decompositions of Out F_n.
Daniel Biss*, University of Chicago
□ 4:00 p.m.
A correspondence between atomic H spaces and atomic co-H spaces.
Brayton Gray*, University of Illinois at Chicago
• Saturday April 5, 2003, 3:00 p.m.-4:10 p.m.
Session for Contributed Papers
Room 229, Ballantine Hall
• Saturday April 5, 2003, 5:10 p.m.-6:00 p.m.
Invited Address
On the motion of the interface of the two phase flows.
Room 013, Ballantine Hall
Sijue Wu*, University of Maryland, College Park/Harvard University
• Saturday April 5, 2003, 6:15 p.m.-7:30 p.m.
Reception (Hosted by the Department of Mathematics)
Alumni Hall, Indiana Memorial Union
Inquiries: meet@ams.org | {"url":"http://ams.org/meetings/sectional/2085_program_saturday.html","timestamp":"2014-04-21T00:04:31Z","content_type":null,"content_length":"101857","record_id":"<urn:uuid:a353f6fa-1be8-48df-831f-6d6c121c8924>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Maxima] unsimplified result from integrate
Barton Willis willisb at unk.edu
Wed Jan 21 05:12:37 CST 2009
I was looking into bug 2465066 "unsimplified result from integrate." The
bug is
(%i1) matchdeclare(x, symbolp)$
(%i2) tellsimpafter('integrate(f(x),x), g(x))$
(%i3) integrate(5*f(x)+7,x);
(%o3) 5*integrate(f(x),x)+7*x
(%i4) expand(%,0,0);
(%o4) 5*g(x)+7*x
In sinint (sin.lisp), I see
((let ((ans (simplify
(let ($opsubst varlist genvar stack)
(integrator exp var)))))
(if (sum-of-intsp ans)
(list '(%integrate) exp var)
Changing simplify to ($expand ... 0 0) fixes the bug. I'm not sure why
($expand ... 0 0) works and simplify doesn't. Maybe the code needs to be
fixed elsewhere. Advice? Is ($expand ... 0 0) an OK fix?
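A toy model of what may be going on (plain Python, not Maxima's actual simplifier): if a tellsimpafter-style rule only fires when a node is (re)simplified, then a result assembled from parts that were simplified *before* the rule could apply still carries the unrewritten node, and an explicit resimplification pass — roughly what expand(%,0,0) forces — picks it up.

```python
# Toy model (not Maxima's actual mechanism): a tellsimpafter-style rule
# rewrites integrate(f(x), x) into g(x), but only when a node is simplified.
RULES = {("integrate", ("f", "x"), "x"): ("g", "x")}

def simplify(expr):
    # Rebuild the tree bottom-up, applying the rewrite rules to every node
    if not isinstance(expr, tuple):
        return expr
    expr = tuple(simplify(part) for part in expr)
    return RULES.get(expr, expr)

# 5*integrate(f(x), x) + 7*x, assembled without resimplifying the whole tree,
# still carries the integrate(...) node; one simplify pass rewrites it.
raw = ("plus", ("times", 5, ("integrate", ("f", "x"), "x")), ("times", 7, "x"))
print(simplify(raw))  # ('plus', ('times', 5, ('g', 'x')), ('times', 7, 'x'))
```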
More information about the Maxima mailing list | {"url":"http://www.ma.utexas.edu/pipermail/maxima/2009/015474.html","timestamp":"2014-04-20T23:35:19Z","content_type":null,"content_length":"3222","record_id":"<urn:uuid:fea368f6-eb00-4cb6-968e-a74a7f1feac3>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00237-ip-10-147-4-33.ec2.internal.warc.gz"} |
hi, can anyone help me to understand how to find the derivatives of exponential functions?
• one year ago
• one year ago
Best Response
You've already chosen the best response.
the shortcut: if the function is of the form a^x, then the derivative is (a^x)*ln(a). So d/dx (2^x) = 2^x * ln(2). Why? The limit definition of the derivative gives d/dx a^x = lim(h->0) (a^(x+h) - a^x)/h = a^x * lim(h->0) (a^h - 1)/h, and a really straightforward simplification then shows lim(h->0) (a^h - 1)/h = ln(a).
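A quick numerical sanity check of this shortcut (not part of the thread; plain Python, with a symmetric difference quotient standing in for the limit):

```python
import math

def numerical_derivative(f, x, h=1e-6):
    # Symmetric difference quotient approximates the limit definition
    return (f(x + h) - f(x - h)) / (2 * h)

a = 2.0
x = 1.0
approx = numerical_derivative(lambda t: a ** t, x)
exact = (a ** x) * math.log(a)  # the shortcut: a^x * ln(a)
print(approx, exact)  # both about 1.3863
```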
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50784f87e4b02f109be427ab","timestamp":"2014-04-19T12:52:57Z","content_type":null,"content_length":"27897","record_id":"<urn:uuid:badb63ff-82ac-4e00-b8fe-28258ccc2276>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matrices proof
January 18th 2013, 07:25 PM #1
Nov 2012
Matrices proof
If $X = P^{-1}AP$ and $A^{3} = I$, prove that $X^{3} = I$.
I did this
$X^{3} = (P^{-1}AP)(P^{-1}AP)(P^{-1}AP) = (P^{-3}A^{3}P^{3}) = (P^{-3+3}A^{3}) = A^{3} = I$
I used the law of indices, is it right to apply this for matrices?
Please help me, I am not sure if I did it right here!
Thank you!
Re: Matrices proof
Hey Tutu.
Hint: Group the P*P^(-1) = I (I is identity matrix) together to get a lot of cancellation and then use A^3 = I to get your final result.
Re: Matrices proof
Thank you!
I thought I could not change the order within the multiplication, but so is it that I can do it in thie case? Why?
Is it
X^3 = (P^(-1)PA)(P^(-1)PA)(P^(-1)PA)
= P^(-3) (PA)^3
= I
Wait, I did this using laws of indices again, can you please explain how to do it, I'm afraid I do not understand..
Thank you ever so much!
Re: Matrices proof
your proof is incorrect.
$X^3 = (P^{-1}AP)^3 = (P^{-1}AP)(P^{-1}AP)(P^{-1}AP)$
$= P^{-1}A(PP^{-1})A(PP^{-1})AP = P^{-1}AIAIAP = P^{-1}AAAP = P^{-1}A^3P$
NOT $P^{-3}A^3P^3$.
and since $A^3 = I$,
$P^{-1}A^3P = P^{-1}IP = P^{-1}P = I$.
no "switching around of order" is required.
Re: Matrices proof
Thank you!
Does that mean that in matrices, PP^(-1) = P? I thought it equalled to I
Re: Matrices proof
Sorry, ignore the previous post. I get it now!
January 18th 2013, 07:55 PM #2
MHF Contributor
Sep 2012
January 21st 2013, 03:53 AM #3
Nov 2012
January 21st 2013, 06:08 AM #4
MHF Contributor
Mar 2011
January 21st 2013, 06:56 AM #5
Nov 2012
January 21st 2013, 06:57 AM #6
Nov 2012 | {"url":"http://mathhelpforum.com/algebra/211574-matrices-proof.html","timestamp":"2014-04-18T15:01:36Z","content_type":null,"content_length":"42710","record_id":"<urn:uuid:ccf87083-2241-4b5c-8686-809b8f65080a>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: HOW ALGEBRAIC IS ALGEBRA ?
J. ADÁMEK*, F. W. LAWVERE AND J. ROSICKÝ*
ABSTRACT. The 2-category VAR of finitary varieties is not varietal over CAT. We introduce the concept of an algebraically exact category and prove that the 2-category ALG of all algebraically exact categories is an equational hull of VAR w.r.t. all operations with rank. Every algebraically exact category K is complete, exact, and has filtered colimits which (a) commute with finite limits and (b) distribute over products; besides (c) regular epimorphisms in K are product-stable. It is not known whether (a)--(c) characterize algebraic exactness. An equational hull of VAR w.r.t. all operations is also discussed.
I. Introduction
I.1 Is algebra algebraic? The purpose of our paper is to study the non-full embedding
U : VAR → CAT
of the 2-category of all finitary varieties into the 2-quasicategory of all categories. The morphisms (1-cells) of the former are indicated by the duality between varieties and algebraic theories introduced in [ALR1]: they are the algebraically exact functors, i.e., finitary right adjoints preserving regular epimorphisms. And 2-cells are the natural transformations | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/414/3816522.html","timestamp":"2014-04-19T17:39:18Z","content_type":null,"content_length":"8395","record_id":"<urn:uuid:6e9eef19-b18f-4eae-b660-9300dc8c8628>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
Auto labeling environments in AUCTeX
If I insert an environment in AUCTeX with C-c C-e (for example equation or figure), then AUCTeX asks for a label with the auto-inserted prefix eq: or fig:.
I would like to add the theorem environment to AUCTeX's LaTeX environments. I did this by
(add-hook 'LaTeX-mode-hook
          (lambda ()
            (LaTeX-add-environments
             '("theorem" LaTeX-env-label))))
Moreover I have something like
(setq reftex-label-alist
      '(("theorem" ?t "thm:" "~\\ref{%s}" t ("theorem" "th."))))
Then when I use C-c C-e to add a theorem environment, it asks for a label for the theorem but without the auto thm: text. I need to add this manually.
Is it possible to make AUCTeX's theorem environment act the same as equation or figure, adding the auto thm: text to the label?
To clarify: if I add a theorem environment without a label and then use C-c ( to have RefTeX add a label, it does ask for a label in the form thm:.
emacs latex auctex
1 have you set reftex-plug-into-AUCTeX to t? – rvf0068 May 7 '12 at 18:13
@rvf0068: yes, it's turned on. – xen May 7 '12 at 19:49
@rvf0068 now it works but i think there are still some issues: if you type theorem and then press C-c ), you can't select them for some reason. – xD13G0x May 7 '12 at 21:22
I can't explain it, but sometimes it helps me to reload the .tex buffer, and to reset AuCTeX (C-u C-c C-n). – rvf0068 May 7 '12 at 22:35
1 Answer
Finally got it.
I was not aware that after adding something like
(setq reftex-label-alist
      '(("theorem" ?t "thm:" "~\\ref{%s}" t ("theorem" "th."))))
to my .emacs I should do
If I put this into .emacs after my RefTeX options then everything works great.
Not the answer you're looking for? Browse other questions tagged emacs latex auctex or ask your own question. | {"url":"http://stackoverflow.com/questions/10475379/auto-labeling-environments-in-auctex","timestamp":"2014-04-18T23:53:40Z","content_type":null,"content_length":"68608","record_id":"<urn:uuid:64195739-7ec7-4c1f-9e8e-9cac0303752b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00172-ip-10-147-4-33.ec2.internal.warc.gz"} |
Parametrized Surface
April 8th 2013, 03:56 PM #1
Apr 2013
Parametrized Surface
For a parametrized surface r: K (subset of R^2) -> R^3 and a parameter value (u0,v0) in K, show that there is a neighborhood N of (u0,v0) such that r(N) is the image of a projectionally
parametrized surface. A projectionally parametrized surface is one that has as its image the graph of a function defined on a region in a coordinate plane. Ex: r(x, y) = (x, y, g(x, y)) where g : R^2 -> R is continuously differentiable with bounded partial derivatives.
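The example at the end of the question, r(x, y) = (x, y, g(x, y)), is easy to play with computationally. Here is a tiny Python sketch (not part of the question), with an arbitrary smooth g chosen purely for illustration:

```python
import math

def monge_patch(g):
    # A projectionally parametrized surface: the graph of g over the (x, y) plane
    return lambda x, y: (x, y, g(x, y))

g = lambda x, y: math.sin(x) * math.cos(y)  # arbitrary smooth choice
r = monge_patch(g)
print(r(0.0, 0.0))  # (0.0, 0.0, 0.0)
```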
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/new-users/217036-parametrized-surface.html","timestamp":"2014-04-19T01:51:28Z","content_type":null,"content_length":"28950","record_id":"<urn:uuid:04cb403e-0838-4063-a936-b5d4936a8ee3>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00018-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Graphs and Models series by Bittinger, Beecher, Ellenbogen, and Penna is known for helping students “see the math” through its focus on visualization and technology. These books continue to
maintain the features that have helped students succeed for years: focus on functions, visual emphasis, side-by-side algebraic and graphical solutions, and real-data applications.
With the
Fifth Edition
, visualization is taken to a new level with technology, and students find more ongoing review. In addition, ongoing review has been added with new
Mid-Chapter Mixed Review
exercise sets and new
Study Guide summaries
to help students prepare for tests.
This package contains:
• College Algebra: Graphs and Models, Fifth Edition
Enhance your learning experience with text-specific study materials.
This title is also sold in the various packages listed below. Before purchasing one of these packages, speak with your professor about which one will help you be successful in your course.
Package ISBN-13: 9780321894878
Includes this title packaged with:
• Graphing Calculator Manual for College Algebra: Graphs and Models, 5th Edition
Judith A. Penna
• MathXL Valuepack Access Card (6-months)
. J. Pearson
$219.33 | Add to Cart
Package ISBN-13: 9780321893949
Includes this title packaged with:
• Graphing Calculator Manual for College Algebra: Graphs and Models, 5th Edition
Judith A. Penna
• Student's Solutions Manual for College Algebra: Graphs and Models, 5th Edition
Judith A. Penna
• MyMathLab -- Valuepack Access Card
. J. Pearson
$222.67 | Add to Cart
Purchase Info
ISBN-10: 0-321-78395-6
ISBN-13: 978-0-321-78395-0
Format: Alternate Binding
Digital Choices
MyLab & Mastering
MyLab & Mastering products deliver customizable content and highly personalized study paths, responsive learning tools, and real-time evaluation and diagnostics. MyLab & Mastering products help move
students toward the moment that matters most—the moment of true understanding and learning.
eTextbook
With CourseSmart eTextbooks and eResources, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs.
Once you have purchased your eTextbooks and added them to your CourseSmart bookshelf, you can access them anytime, anywhere.
Print Choices
Alternative Options
Click on the titles below to learn more about these options.
Loose Leaf Version
Books a la Carte are less-expensive, loose-leaf versions of the same textbook. | {"url":"http://www.mypearsonstore.com/bookstore/college-algebra-graphs-and-models-0321783956","timestamp":"2014-04-16T17:01:11Z","content_type":null,"content_length":"19369","record_id":"<urn:uuid:2764b468-1104-4c42-9386-d010a1d65367>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability question involving binomial distributions?
May 15th 2010, 10:23 AM
Probability question involving binomial distributions?
Each question in a 15-question multiple-choice test had 5 possible answers. Suppose you guess randomly on each answer.
a)Show the probability distribution for the number of correct answers.(In a formula using x as the variable.)
b)Verify that the formula E(X)=np for the expectation of the number of correct answers.
Steps shown would be helpful.
May 15th 2010, 10:28 AM
Showing some of your own work would be helpful also.
May 17th 2010, 09:21 AM
Quoting icy: Each question in a 15-question multiple-choice test had 5 possible answers. Suppose you guess randomly on each answer.
a) Show the probability distribution for the number of correct answers (in a formula using x as the variable). Here p = 1/5 (one answer in five is correct) and n = 15, so P(X = x) = C(15, x) (1/5)^x (4/5)^(15 - x) for x = 0, 1, ..., 15.
b) Verify that the formula E(X) = np gives the expectation of the number of correct answers. Using the moment generating function M(t) = (q + pe^t)^n with q = 1 - p:
M'(t) = n p e^t (q + p e^t)^(n-1), and setting t = 0 gives E(X) = np(q + p)^(n-1) = np.
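As a numerical check (not part of the original thread): with 5 choices per question a random guess succeeds with probability p = 1/5, so for n = 15 the mean should be np = 3. The exact mean of the binomial distribution can be computed directly in plain Python:

```python
from math import comb

n, p = 15, 1 / 5  # 15 questions, 5 choices each
pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]
mean = sum(x * pmf[x] for x in range(n + 1))
print(sum(pmf), mean, n * p)  # probabilities sum to 1; mean equals np = 3
```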
May 18th 2010, 03:15 AM
mr fantastic
I suspect that part (b) of the question requires the OP to explicitly calculate the mean using the answer to part (a), and then show that it gives the same value as the formula np. | {"url":"http://mathhelpforum.com/statistics/144843-probability-question-involving-binomial-distributions-print.html","timestamp":"2014-04-17T05:06:21Z","content_type":null,"content_length":"6907","record_id":"<urn:uuid:e4d06d03-9293-4bd3-b1c1-cceb8223ca71>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Nearest value
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
Re: st: Nearest value
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject Re: st: Nearest value
Date Sat, 29 Nov 2003 12:01:34 -0000
Ramani Gunatilaka asked
> I have a data set with consumption and other variables such
> as number of
> adults, district, sector for each household.
> I need to write a programme that requires selecting the
> particular household
> whose consumption is nearest to the mean consumption of all
> the households.
and Renzo Comolli replied
> How do you plan to handle ties (i.e. a case in which there
> is more than one
> household which is at the same minimal distance from the average
> consumption)?
> If you plan to pick one of them at random then
> . egen meanconsumption=mean(consumption)
> . generate absconsumptiondev=abs(meanconsumption-consumption)
> . sort absconsumptiondev
> . keep in 1
> -sort- already does the randomization for you among ties
> If you plan to keep all the ties
> . egen meanconsumption=mean(consumption)
> . generate absconsumptiondev=abs(meanconsumption-consumption)
> . egen mindevfromavgcons=min(absconsumptiondev)
> . keep if absconsumptiondev==mindevfromavgcons
In the same spirit, note that you don't need to
store the mean (which is clearly a constant) in
a variable.
su consumption, meanonly
produces (silently) a mean accessible immediately
thereafter as -r(mean)-, so you can then
gen absconsumptiondev = abs(consumption - r(mean))
Similarly, you don't to need to store the
minimum in a variable, as a similar approach
could be used. In this case, however,
sort absconsumption
would let you look at the first few households.
The -egen- approach really comes into its own
when you want to do something like this
within (e.g.) panels.
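For readers who want the same logic outside Stata, here is a plain-Python sketch of Renzo's "keep all ties" approach (the household names and consumption values are made up):

```python
consumption = {"hh1": 210.0, "hh2": 180.0, "hh3": 195.0, "hh4": 240.0}  # made-up data

mean = sum(consumption.values()) / len(consumption)
deviation = {hh: abs(c - mean) for hh, c in consumption.items()}
min_dev = min(deviation.values())
nearest = sorted(hh for hh, d in deviation.items() if d == min_dev)  # all ties kept
print(mean, nearest)  # 206.25 ['hh1']
```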
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2003-11/msg00829.html","timestamp":"2014-04-16T07:20:06Z","content_type":null,"content_length":"6844","record_id":"<urn:uuid:a965be5e-6a70-4057-95e2-eebe2c13534c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00400-ip-10-147-4-33.ec2.internal.warc.gz"} |
West Boylston Geometry Tutor
Find a West Boylston Geometry Tutor
...I am currently tutoring a student Trigonometry. In America, my major was Accounting. I had Financial Accounting 1 and 2, Intermediate Accounting 1 and 2, and Managerial Accounting--all As.
11 Subjects: including geometry, accounting, Chinese, algebra 1
...In Massachusetts I am certified to teach Chemistry in high school and planning to join the teaching profession very soon. I like to interact with students. I provide clear and simple
explanation of concepts which helps my students a lot.Hi everyone, I have keen interest in cooking Indian food.I...
13 Subjects: including geometry, chemistry, biology, algebra 2
...I will tutor in your home or if preferred at an agreed upon public building such as a library or your child's school with permission. I have 5 children ranging in age from 16-27 years old. I
have been there as a parent, trying to help my child feel positive about themselves and their abilities.
12 Subjects: including geometry, reading, writing, algebra 1
...I do not do programming! If you are struggling with your computer and feel guidance will help- it usually does- give me a try. I will help you become comfortable with the machine.
45 Subjects: including geometry, chemistry, English, physics
...I have taught all age groups from kindergartner to graduate/professional students during my own teaching career of almost 30 years. I have also taught students who were not able to perform
well in math and science while in primary and middle school. I am consistent and patient.
11 Subjects: including geometry, calculus, biology, Japanese
Nearby Cities With geometry Tutor
Bolton, MA geometry Tutors
Boylston geometry Tutors
Clinton, MA geometry Tutors
Holden, MA geometry Tutors
Jefferson, MA geometry Tutors
Lancaster, MA geometry Tutors
Leicester, MA geometry Tutors
Northborough geometry Tutors
Paxton, MA geometry Tutors
Princeton, MA geometry Tutors
Rutland, MA geometry Tutors
Shirley, MA geometry Tutors
Shrewsbury, MA geometry Tutors
Sterling, MA geometry Tutors
Upton, MA geometry Tutors | {"url":"http://www.purplemath.com/West_Boylston_geometry_tutors.php","timestamp":"2014-04-21T14:45:50Z","content_type":null,"content_length":"23928","record_id":"<urn:uuid:8a326bd7-82a5-4f2a-9fac-a414d75639a2>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00565-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics of Computation
ISSN 1088-6842(online) ISSN 0025-5718(print)
Asymptotic expansions of multiple integrals of rapidly oscillating functions
Authors: T. Iwaniec and A. Lutoborski
Journal: Math. Comp. 50 (1988), 215-228
MSC: Primary 41A60
MathSciNet review: 917829
Full-text PDF Free Access
Abstract | References | Similar Articles | Additional Information
Abstract: Expansions of multiple integrals
where w is a function on kth variable, g is smooth, are given in terms of negative powers of the integers
• [1] M. Abramowitz & I. A. Stegun, Handbook of Mathematical Functions, Dover, New York, 1972.
• [2] U. Banerjee, L. J. Lardy, and A. Lutoborski, Asymptotic expansions of integrals of certain rapidly oscillating functions, Math. Comp. 49 (1987), no. 179, 243–249. MR 890265 (88e:41063)
• [3] Alain Bensoussan, Jacques-Louis Lions, and George Papanicolaou, Asymptotic analysis for periodic structures, Studies in Mathematics and its Applications, vol. 5, North-Holland Publishing Co.,
Amsterdam, 1978. MR 503330 (82h:35001)
• [4] Philip J. Davis and Philip Rabinowitz, Methods of numerical integration, 2nd ed., Computer Science and Applied Mathematics, Academic Press Inc., Orlando, FL, 1984. MR 760629 (86d:65004)
• [5] J. N. Lyness, The calculation of Fourier coefficients by the Möbius inversion of the Poisson summation formula. I. Functions whose early derivatives are continuous, Math. Comp. 24 (1970),
101–135. MR 0260230 (41 #4858), http://dx.doi.org/10.1090/S0025-5718-1970-0260230-8
• [6] Hans J. Stetter, Numerical approximation of Fourier-transforms, Numer. Math. 8 (1966), 235–249. MR 0198716 (33 #6870)
• [7] Giorgio Talenti, Best constant in Sobolev inequality, Ann. Mat. Pura Appl. (4) 110 (1976), 353–372. MR 0463908 (57 #3846)
Similar Articles
Retrieve articles in Mathematics of Computation with MSC: 41A60
Retrieve articles in all journals with MSC: 41A60
Additional Information
DOI: http://dx.doi.org/10.1090/S0025-5718-1988-0917829-6
PII: S 0025-5718(1988)0917829-6
Article copyright: © Copyright 1988 American Mathematical Society | {"url":"http://www.ams.org/journals/mcom/1988-50-181/S0025-5718-1988-0917829-6/","timestamp":"2014-04-19T00:36:59Z","content_type":null,"content_length":"25020","record_id":"<urn:uuid:40dea684-397e-4420-862b-8ae9ad005e89>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00050-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Tools Discussion: Understanding Distance, Speed, and Time Relationships Using Simulation Software tool, Advantages of interactive Graphs
Discussion: Understanding Distance, Speed, and Time Relationships Using Simulation Software tool
Topic: Advantages of interactive Graphs
Related Item: http://mathforum.org/mathtools/tool/13171/
<< see all messages in this topic
next message >
Subject: Advantages of interactive Graphs
Author: Aldana
Date: Jun 26 2006
When presenting an interactive graph of two lines that represent displacement of
two objects (runner) as time changes will open to discussion and to the
presentation of the following topics:
- Rate of change, slope and uniform velocity
- How does velocity (slow or fast) relates to the steepness of the lines?
- When you have interaction of two objects (runners) you can talk about things
in common such as time an distance from a given point for both objects.
- It is easier to picture a system of equations.
- Changing parameters in a software tool saves time and it will give
the student several options to go into and grab concepts deeper.
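To make the "system of equations" point concrete, each runner's distance-time graph is a line d = start + speed·t, and the common time and distance are where the two lines intersect. A small Python sketch (the speeds and head start below are invented for illustration):

```python
# Runner A: starts at the origin, 4 m/s.  Runner B: 30 m head start, 2 m/s.
start_a, speed_a = 0.0, 4.0
start_b, speed_b = 30.0, 2.0

# d_a(t) = start_a + speed_a * t and d_b(t) = start_b + speed_b * t
# intersect where the two positions are equal:
t_meet = (start_b - start_a) / (speed_a - speed_b)
d_meet = start_a + speed_a * t_meet
print(t_meet, d_meet)  # 15.0 seconds, 60.0 meters
```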
Discussion Help | {"url":"http://mathforum.org/mathtools/discuss.html?context=tool&do=r&msg=24977","timestamp":"2014-04-19T00:35:46Z","content_type":null,"content_length":"16408","record_id":"<urn:uuid:3068da85-c705-489d-8458-85b24364c937>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00212-ip-10-147-4-33.ec2.internal.warc.gz"} |
Portability portable
Stability stable
Maintainer sven.panne@aedion.de
This module corresponds to section 2.10 (Rectangles) of the OpenGL 2.1 specs.
class Rect a whereSource
rect and rectv support efficient specification of rectangles as two corner points. Each rectangle command takes four arguments, organized either as two consecutive pairs of (x, y) coordinates, or as
two pointers to arrays, each containing an (x, y) pair. The resulting rectangle is defined in the z = 0 plane.
rect (Vertex2 x1 y1) (Vertex2 x2 y2) is exactly equivalent to the following sequence:
Graphics.Rendering.OpenGL.GL.BeginEnd.renderPrimitive Graphics.Rendering.OpenGL.GL.BeginEnd.Polygon $ do
Graphics.Rendering.OpenGL.GL.VertexSpec.vertex (Vertex2 x1 y1)
Graphics.Rendering.OpenGL.GL.VertexSpec.vertex (Vertex2 x2 y1)
Graphics.Rendering.OpenGL.GL.VertexSpec.vertex (Vertex2 x2 y2)
Graphics.Rendering.OpenGL.GL.VertexSpec.vertex (Vertex2 x1 y2)
Note that if the second vertex is above and to the right of the first vertex, the rectangle is constructed with a counterclockwise winding.
Rect Double
Rect Float
Rect Int16
Rect Int32 | {"url":"http://hackage.haskell.org/package/OpenGL-2.2.3.0/docs/Graphics-Rendering-OpenGL-GL-Rectangles.html","timestamp":"2014-04-20T19:47:04Z","content_type":null,"content_length":"7094","record_id":"<urn:uuid:ea030c7b-8a8b-45c8-951d-3fc8d473fa56>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00123-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vogl, Stefanie (2009): Tropical Cyclone Boundary-Layer Models. Dissertation, LMU München: Faculty of Physics
Hurricanes are some of the most spectacular yet deadly natural disasters. Especially in times of the widely discussed anthropogenic climate change, public interest focusses on such extreme weather
events. Nowadays, highly sophisticated numerical models are used for example for track prediction, but still there are many fundamental open questions. Among these, the question how intense a
tropical cyclone may become is of major interest. In this work a study of the two most common types of models for the hurricane boundary layer is carried out. This study reveals major deficiencies of
boundary layer models and finally leads to a reassessment of the established theory of potential intensity of hurricanes. In chapter (2), a linear model for the hurricane boundary layer is derived
from a detailed scale analysis of the full equations of motions. It is shown how analytic solutions for the model may be calculated and how these solutions may be used to appraise the integrity of
the linear approximation. Some of the results of this chapter are published in Vogl and Smith (2009). In chapter (3), a slab model is examined, which yields results for the main thermodynamic
quantities. Depending on the chosen boundary layer depth and the imposed wind profile, two different types of solution behaviour found and interpreted. Other aspects of the dynamics and
thermodynamics of the boundary layer are studied as for example the influence of shallow convection. The limitations and strengths of the slab model are discussed at the end of chapter (3). The
results are published in Smith and Vogl (2008). The results of the detailed investigation of the linear and the slab model both point out an important deficiency of hurricane boundary layer models,
namely the assumption of gradient wind balance. In chapter (4) it is shown that indeed the major deficiency of the established hurricane (P)otential (I)ntensity theory is the tacit assumption of
gradient wind balance in the boundary layer. The results of chapter (4) show a fundamental problem of the established PI theory and then point to an improved conceptual model of the hurricane inner
core region. Thus this work suggests a way forward to an urgently needed more consistent theory for the hurricane potential intensity. It is published in Smith, Montgomery and Vogl (2008).
Item Type: Thesis (Dissertation, LMU Munich)
Keywords: Tropische Wirbelstürme, planetare Grenzschicht, cyclones, planetary boundary layer
Subjects: 600 Natural sciences and mathematics
600 Natural sciences and mathematics > 530 Physics
Faculties: Faculty of Physics
Language: English
Date Accepted: 24. June 2009
1. Referee: Dameris, Martin
Persistent Identifier (URN): urn:nbn:de:bvb:19-102740
MD5 Checksum of the PDF-file: 73292ae542b52e1e08a232f28db14626
Signature of the printed copy: 0001/UMC 17891
ID Code: 10274
Deposited On: 16. Jul 2009 07:19
Last Modified: 16. Oct 2012 08:29 | {"url":"http://edoc.ub.uni-muenchen.de/10274/","timestamp":"2014-04-19T14:32:48Z","content_type":null,"content_length":"26203","record_id":"<urn:uuid:48170044-51b1-4670-9606-b5f0636716d9>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00602-ip-10-147-4-33.ec2.internal.warc.gz"} |
Initial treatment of fractions in Japanese textbooks.
Despite a long history of research and curriculum development efforts, fraction teaching and learning remains a major challenge for U.S. teachers and students. In contrast, according to the TIMSS, Japanese students appear to be very successful on problems involving fractions. Because textbooks play an important role in mathematics teaching and learning, 6 elementary school mathematics textbook series were analyzed for their treatment of fractions. The study investigated the following questions: (1) what are the specific fraction understandings the Japanese curriculum and textbooks attempt to develop and at what grade levels? (2) how do the Japanese textbooks introduce various fraction related ideas during Grade 4? and (3) what representations do the Japanese textbooks use as they introduce and develop fraction-related ideas during Grade 4?
The findings of the study raise some important questions for mathematics educators and curriculum developers in the United States: (1) Why do we introduce fractions so early in our curricula? (2) How can we intentionally support children's learning of fractions through careful selection of problems and representations? and (3) How can we help students go beyond the part-whole meaning of fractions? Is the notion of measurement-fractions potentially useful with U.S. students?
Research that looks across countries can provide a sharper picture of
what matters in instruction aimed at developing proficiency.
(National Research Council, 2001, p. 358)
Despite a long history of research and curriculum development efforts, fraction teaching and learning remains a major challenge for U.S. teachers and students. Figure 1 presents some of the related items involving fractions from the Third International Mathematics and Science Study (TIMSS). As Figure 1 indicates, U.S. students performed at or near the international average on these items, but their performance is far less than what we would like it to be. In contrast, more than 80% of Japanese students responded correctly to the same items. In fact, Japanese students outperformed their U.S. counterparts on all released items involving fractions. The Japanese curriculum introduces fractions later than typical U.S. curricula do (Watanabe, 2001a); thus, the Japanese students performed better than U.S. students in spite of an earlier and more frequent discussion of fractions in U.S. schools. This observation naturally raises the question, "How does the Japanese curriculum treat fractions?" In this paper, I will present the findings from a study that investigated the initial treatment of fractions in the Japanese national curriculum and their elementary school mathematics textbooks. The purpose of the study was to provide a detailed description of the way fractions are treated in the Japanese textbook series. It is hoped that such a description may facilitate a critical reflection on the way fractions are treated in U.S. curricular materials.
[FIGURE 1 OMITTED]
Why study a curriculum and/or textbooks?
Clearly, many factors influence how teachers teach mathematics in their classrooms. As Stigler & Hiebert (1999) noted, teaching is a cultural activity in which teachers follow their cultural scripts. As a cultural activity, no single factor will explain why teachers teach in a particular way. Nevertheless, understanding how various factors influence classroom teaching, either singularly or in combination, should provide some valuable insights for the mathematics education community in the United States.
One critical factor that influences teaching and learning of mathematics is the curriculum (Schmidt, McKnight, & Raizen, 1996). The curriculum analysis conducted within the TIMSS framework suggested that a typical U.S. curriculum is unfocused, undemanding, and incoherent (Schmidt, Houang, & Cogan, 2002). The analysis by Schmidt et al. (2002) shows that the high-performing countries' curricula tend to be more focused (fewer topics in each grade level), cohesive (logical sequencing of mathematical topics) and with higher mastery expectations (much less repetition of the topics across grades).
Of course, investigating the nature and quality of a curriculum in any country is a complicated matter. In the Second International Mathematics Study, the International Association for the Evaluation of Educational Achievement (IEA) considered three "faces" of curriculum--the intended, implemented, and attained. The intended curriculum is the curriculum established at the system level. In Japan, the Ministry of Education, Culture, Sports, Science and Technology (the Ministry hereafter) publishes the Course of Study (COS), which specifies the content goals and time allocation for each subject matter. In addition, the Ministry publishes a series of commentary books for each subject matter at each level (elementary, lower secondary, and upper secondary) to articulate the points of consideration regarding the content, instructional approach, and assessment. Through these documents, the Ministry makes public the intended curriculum.
Clearly, the intended curriculum influences the actual classroom instruction, that is, the implemented curriculum, but its influences are not always direct. Textbooks and accompanying teachers' manuals play an important role. Schmidt et al. (1996) consider these materials as a "potentially implemented curriculum" (p. 30), and their role is to bridge between the intended and implemented curricula. Shimahara and Sakai (1995) report that significant numbers of both American and Japanese elementary school teachers rely heavily on teachers' manuals as they teach mathematics. According to one Japanese college-level mathematics educator, about 70% of elementary school teachers rely on teachers' manuals when they teach mathematics lessons (Shigematsu, personal communication, April, 1997). Japanese mathematics educators sometimes lament mediocre teachers simply holding the teachers' manual and teaching directly from the book. If it is indeed the case that both American and Japanese teachers rely on teachers' manuals to conduct their mathematics lessons, at least a part of the reason for the different nature of mathematics lessons in the U.S. and in Japan might be attributable to the way teachers' manuals are organized. Watanabe's (2001b) analysis of the overall structure and contents of teachers' manuals in Japan and the United States does reveal significant differences, and further investigation along this line may provide new insights into the curricular and achievement differences, that is, the differences in intended, implemented and attained curricula, between the U.S. and Japanese students.
Students' understanding of fractions
Typically, simple fractions such as one half, one third, and one fourth are introduced as early as kindergarten in the U.S. More formal instruction on fractions, including ideas such as comparing fractions and equivalent fractions, usually takes place during the early intermediate grades, around Grades 3 or 4. Students then move on to computation with fractions starting as early as Grade 4 or 5. Despite such an early introduction and repeated treatment of fractions, many upper elementary school students' understanding of fractions leaves much to be desired. Consider the following example from Simon (2002).
In a fourth-grade class, I asked the students to use a blue rubber
band on their geoboards to make a square of a designated size, and
then to put a red rubber band around one half of the square. Most of
the students divided the square into two congruent rectangles.
However, Mary cut the square on the diagonal, making two congruent
right triangles. The students were unanimous in asserting that both
fit with my request that they show halves of the square. Further, they
were able to justify that assertion.
I then asked the question, "Is Joe's half larger; is Mary's half
larger, or are they the same size?" Approximately a third of the class
chose each option. In the subsequent discussion, students defended
their answers. However, few students changed their answers as a result
of the arguments offered.
(Simon, 2002, p. 992)
In an earlier study (Watanabe, 1995), similar questions were posed to 16 fifth graders in individual interviews. Congruent squares were cut into two equal parts in three different ways: by a vertical line, by a diagonal line, and by a slanted line that created two congruent trapezoids. (See Figure 2.) After the students verified that two copies of each shape were identical and they could be put together to form the same square, they were given one of each shape and asked, "If these were cookies and you were really hungry, which one would you pick?" All but two students initially picked one of the three to be the largest. Even after they were reminded of the initial demonstration that two copies of each shape made up congruent squares, 8 of those students maintained that the piece they selected was the largest. Simon (2002) concluded that these students had the understanding of fractions as an arrangement rather than a quantity.
[FIGURE 2 OMITTED]
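The three ways of halving a square described above produce parts of equal area even though the shapes look different; a quick sketch with a made-up side length (not from the study) makes the point:

```python
# A square of side s, cut into two equal parts in three ways; each part
# has area s * s / 2, even though the shapes look different.
s = 4.0

rectangle_half = (s / 2) * s    # vertical cut: an s/2-by-s rectangle
triangle_half = 0.5 * s * s     # diagonal cut: a right triangle

# A slanted cut from x = a on the top edge to x = s - a on the bottom edge
# gives a trapezoid with parallel sides a and s - a and height s.
a = 1.0
trapezoid_half = 0.5 * (a + (s - a)) * s

print(rectangle_half, triangle_half, trapezoid_half)                    # all 8.0
print(rectangle_half == triangle_half == trapezoid_half == s * s / 2)   # True
```

Note that the trapezoid's area works out to s²/2 no matter where the slanted cut starts, as long as it is symmetric about the center.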
Fraction teaching and learning have been a focus of research for a long time. Kieren (1980) identified 5 sub-constructs of fractions: part-whole, operator, quotient, measure, and ratio. A variety of research projects, both large and small scale, utilized these sub-constructs in their studies of teaching and learning of fractions. Probably, the most extensive study of fractions was carried out under the Rational Number Project (e.g., Behr, Harel, Post & Lesh, 1992, 1993). Other researchers have also taken advantage of the notion of fraction sub-constructs in their studies. Many studies provided detailed descriptions of the challenges students faced as they attempted to solve problems involving fractions. One consensus that seems to emerge from these studies was that children's whole number understanding interfered with their effort to make sense of fractions: for example, 1/3 is greater than 1/2 because 3 is greater than 2. Such difficulty creates a major challenge for the teaching of fractions. Two other examples of challenges students face were cited by Larson's (1980) study revealing challenges in locating a fraction on a number line and by Greer's (1987) study reporting challenges in selecting an appropriate operation when problems involved rational numbers.
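The whole-number interference described above (concluding 1/3 > 1/2 because 3 > 2) can be illustrated with exact rational arithmetic; this sketch only illustrates the misconception and is not code from any of the studies cited:

```python
from fractions import Fraction

# The whole-number reasoning "3 > 2, so 1/3 > 1/2" fails for fractions:
# a larger denominator means the whole is cut into more, hence smaller, parts.
one_half = Fraction(1, 2)
one_third = Fraction(1, 3)

print(3 > 2)                  # True for the denominators...
print(one_third > one_half)   # ...but False for the fractions themselves
print(one_half - one_third)   # 1/6, the exact difference
```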
Mack (1990, 1995) investigated children's informal understanding of fractions and how it might be utilized in formal fraction instruction. In particular, she suggested that a sequence of instruction which begins with partitioning of a whole and then expanding to include other strands might be effective. Pothier and Sawada's (1983, 1989) work shows that there is a pattern in young children's development of partitioning strategies and justifications for equality of parts. Armstrong and Larson (1995) investigated how students in fourth, sixth and eighth grades compared areas of rectangles and triangles embedded in another geometric figure. They found that although most students used direct comparison methods, explanations based on part-whole, or partitioning, increased as students became more familiar with fractions. These studies suggest the importance of partitioning activities in the beginning of fraction instruction. Unfortunately, most textbook series provide children pre-partitioned figures. As a result, children themselves do not engage in the act of partitioning, and those activities become simply counting activities for children.
More recently, Steffe, Olive, Tzur and their colleagues have embarked upon an ambitious study to articulate children's construction of fraction understanding (e.g., Olive, 1999; Steffe, 2000; Tzur, 1999, 2004). The reorganization hypothesis (Olive, 1999) offers an alternative perspective on teaching and learning of fractions. According to their findings from a teaching experiment, children's whole number concepts did not interfere with their efforts to make sense of fractions (Olive, 1999; Steffe, 2000; Tzur, 1999, 2004). In fact, the types of units and operations children constructed in their whole number sequence can facilitate their reorganization of fraction schemes. However, the nature of instruction and the types of problems used in instruction, not limited to fraction instruction but also including instruction on multiplication, division, and so on, must be carefully aligned with such a potential development of fraction understanding. For example, multiplication is often considered as simply repeated addition. Although repeated addition is a tool to calculate the product, multiplication is much more than repeated addition. Rather, students should be encouraged to understand multiplication as a way to quantify something when it is composed of several copies of identical size, and this is exactly what is emphasized in the Japanese curriculum (Watanabe, 2003). Such an understanding can become the basis of understanding fraction m/n as m times of 1/n, instead of "m out of n," which does not necessarily signify a quantity. Thompson and Saldanha (2003) noted that "we rarely observe textbooks or teachers discussing the difference between thinking of 3/5 as 'three out of five' and thinking of it as '3 one-fifths'" (p. 107).
What this brief review of research literature suggests is that the research findings have not significantly influenced the textbook treatment of fractions in the United States. In fact, in some cases, the textbook treatment of fractions goes counter to the research findings. Perhaps an in-depth study of how fractions are treated differently in another country's textbook series may serve as a catalyst to re-evaluate the way fractions are typically treated in the U.S. textbooks.
Research questions
The overall research goal was to gain a better understanding of how fractions are introduced and developed in the Japanese curriculum and textbooks. For the analysis of the textbook treatment, I
focused my analysis on Grade 4, the year when fractions are first introduced and discussed. Specifically, the study tries to answer the following questions:
* What are the specific fraction understandings the Japanese curriculum and textbooks attempt to develop and at what grade levels?
* How do the Japanese textbooks introduce and develop various fraction related ideas during Grade 4?
* What representations do the Japanese textbooks use as they introduce and develop fraction-related ideas during Grade 4?
The first question was intended to help us understand if and how the Japanese elementary mathematics curriculum and textbooks incorporated the findings from the existing research. For example, do the Japanese curriculum and textbooks treat non-unit fractions as iterations of a unit fraction? The last two questions primarily focused on the way the curriculum and textbooks might support students' learning of fractions.
The National Course of Study (Japan Society of Mathematical Education, 2000) and Commentary on the National Course of Study: Elementary School Mathematics (Ministry of Education, 1999) were included
in the analysis of the Japanese national curriculum. Because the treatment of fractions in the Japanese curriculum is completed in Grade 6, the final year of their elementary schools, only the
Commentary for elementary school mathematics was included in the analysis. These documents were the primary sources to answer the first research question although the textbooks and accompanying
teachers' manuals were also included in the analysis. The documents were analyzed first to identify the timing and the specific focus of the curricular treatment of fractions in each grade level. In
addition to noting the timing of fraction instruction, the analysis attempted to locate the fraction instruction in relationship to other relevant mathematical ideas. Those mathematical ideas are
multiplication and division operations with whole numbers, decimal numbers, and measurement.
Since the two government documents only identify and explain the specific learning expectations but not how they should be accomplished, textbooks were analyzed to answer the last two research
questions. There are six commercially published textbook series for elementary school mathematics that have been approved by the Ministry. For the textbook treatment of fractions, I focused my
analysis on how fractions are initially introduced and developed. Since this takes place, according to the national curriculum documents, in grade 4, my analysis focused on Grade 4 textbooks. The
Grade 4 pupils' books for all six series were included in the analysis. Furthermore, the teachers' manual accompanying the most widely used series was also included in the analysis.
Watanabe (2001a) reported that the Japanese textbooks are organized so that each lesson will focus on one (or a few) problem(s). Therefore, to analyze the textbooks, I have focused on the following
two specific aspects: (a) the nature of the problems, that is, is the problem contextualized or presented purely symbolically, and if problems are contextualized, what is the context, and (b) the
type of representation used, that is, does the textbook use any non-symbolic representation, and if so, what types.
Learning Goals
The Commentary specifies the learning goals with respect to fractions very explicitly. Table 1 summarizes the fraction related topics discussed in the Ministry of Education documents. As the table shows, fractions are not formally introduced in the Japanese curriculum until Grade 3. In many textbooks in the United States, simple fractions such as 1/2, 1/3 and 1/4 are included starting with Grade 1 (e.g., Clements, Jones, Moseley & Schulman, 1999). Therefore, the Japanese curricular treatment of fractions starts much later than is the case in a typical U.S. curriculum. On the other hand, fractions are prominently discussed in middle school mathematics textbooks in the United States (e.g., Larson, Boswell, Kanold, & Stiff, 1999). Therefore, the Japanese curricular treatment of fractions is much more concentrated with a clear mastery expectation by the end of Grade 6.
Prior to the study of fractions, students have completed the study of whole number multiplication (in Grade 3) and (in Grade 4) the study of whole number division, which included division by 2- or 3-digit numbers and the division algorithm relationship,
Dividend = Divisor x Quotient + Remainder.
Decimal numbers are introduced in Grade 4; however, the Ministry documents do not specify whether decimals or fractions should be discussed first. Of the 6 elementary school mathematics textbooks, only one series introduces fractions prior to discussing decimal numbers. The scope of the Grade 4 discussion of decimal numbers is limited to the first decimal place (or 1/10's place). Addition and subtraction of decimal numbers are also discussed in Grade 4. Multiplication and division of decimal numbers are discussed in Grade 5, when the Japanese COS completes the treatment of decimal numbers.
Table 2 summarizes the content of the measurement strand in the Japanese COS. As the table shows, before the introduction of fractions, the Japanese curriculum completes the study of measurements on the following attributes: length, capacity and weight. In Grade 4, the same year children begin their investigation of fractions, the area measurement is also introduced. Since the COS does not specify the order of topics within a given grade level, the order in which these topics are treated in a textbook varies. Of the six textbook series, three, including the two most widely used series, discuss the area measurement prior to the introduction of fractions, while the other three introduce fractions prior to their discussion of the area measurement. The fact that the area measurement is also a new concept in Grade 4 may have some impact on the types of models used in these textbook series, as will become clear later.
Meanings of fractions
According to the Commentary, there are five different meanings of fractions discussed in the elementary school mathematics curriculum. Those meanings are, using the fraction 2/3 as an example,
1. two parts of a whole that is partitioned into three equal parts
2. representation of measured quantities such as 2/3 l or 2/3 m
3. two times of the unit obtained by partitioning 1 into 3 equal parts
4. quotient fraction (2/3)
5. A is 2/3 of B--if we consider B as 1 (a unit), then the relative size of A is 2/3.
According to the Commentary, in Grade 4, when fractions are first introduced, the focus is on the first three meanings of fractions, while the quotient fraction becomes a focus in Grade 5. Fractions as ratio, the fifth meaning, are investigated in Grade 6 as students study proportions.
In the teachers' manuals, these five meanings are also discussed and elaborated. However, in the textbooks, the first two meanings are often combined together. In other words, many problems found in the Japanese textbooks are put in the context of measurement, where the whole is one measurement unit. Thus, the length equivalent to two of the three equally partitioned parts of 1 meter is described as "2/3 of 1 meter," and the length is denoted as 2/3 m. However, the primary role of the part-whole meaning of fraction seems to be the establishment of unit fractions, such as 1/3 (or 1/3 m). As the unit progresses, the textbooks place much more emphasis on treating a non-unit fraction as a collection of unit fractions, the third meaning of fraction in the Commentary. Thus, they will pose questions such as, "What are the lengths equivalent to two, three, or four 1/3 m?" This meaning of fractions is then used to expand the range of fractions beyond proper fractions. Diagrams similar to Figure 3 are often included in the textbooks.
The teachers' manual accompanying the most widely used elementary mathematics textbook suggests that the two main ideas about fraction concepts are (1) fractions are useful to denote the quantity less than 1 unit, and (2) fractions are numbers just like whole numbers and decimal numbers. The manual also states that the advantage of fractions is that we can flexibly establish new fractional units, but this flexibility poses a challenge of representing fractions on a number line.
[FIGURE 3 OMITTED]
Problems used in introducing and developing fraction concepts
What kinds of problems do the Japanese textbooks use to introduce and develop fraction concepts? Sugiyama, Iitaka and Itoh (2002) introduce fractions through a problem set using the context of a child measuring the circumference of a tree by wrapping a strip of paper around it. The picture of the paper strip shows that the circumference is slightly longer than 1 meter, and the question posed to students is how to express the length beyond 1 meter. Three other series use similar problems that are set in the context of linear measurement. One series (Nakahara, 2002) uses a liquid measure context instead, and one series (Hiraoka & Hashimoto, 2002) introduces fractions by asking the size of a piece of cake obtained by cutting the cake into two equal parts. Table 3 summarizes the problem contexts in the 6 textbook series.
There are two notable features of the way the Japanese textbooks introduce and develop fractions. First, of the six textbook series, five of them use opening problems that are set in a "mixed number" situation, that is, the fractional quantity investigated is a part of a quantity greater than one unit. This is true even of the one series that splits its treatment of fractions into two sections: fractions less than one and fractions greater than one. The only exception to this approach is Hiraoka and Hashimoto (2002), where the opening problem asks students how they might describe the size of a piece of cake obtained by cutting the original into two equal pieces. Problems of this nature seem to be much more common in U.S. textbooks. The use of mixed number contexts in the opening problems is consistent with the emphasis in the Commentary that fractions are useful to express those quantities that are less than one unit. Moreover, by using fractional amounts that cannot be expressed by a decimal number with one decimal place (e.g., 1/3 and 1/4), the textbooks demonstrate the flexibility of fractional units, another point emphasized by the Commentary.
Another feature of the problems used in the Japanese textbooks is that the measurement contexts used in the problems are either linear or liquid measurement. In fact, the only problems involving measurement other than length or capacity are the two opening problems from Hiraoka and Hashimoto (2002) that involved partitioning of a cake. Even in this particular textbook, of the 33 problems in the unit, 11 involved linear measurement contexts while 4 additional problems involved liquid measurement. Table 4 summarizes the frequency of various measurement problems appearing in the six textbook series analyzed.
There are several different graphical representations that can be used to model fractions. The three most common models are area models, linear models, and discrete models (see Figure 4).
Unlike most U.S. textbooks, in which area models are the most dominant graphical representation for fractions, linear models are the primary graphical representations of fractions in the Japanese Grade 4 textbooks. Although the diagrams accompanying a liquid measure problem (see Figure 5) are similar to area models, they are different in the sense that they are much more context-bound. For example, it is not appropriate to shade in the top 3 segments in Figure 4 because liquid cannot be floating inside a measuring cup.
[FIGURE 4 OMITTED]
One of the reasons for not using area models to represent fractions appears to be the fact that area measurement is introduced after the initial discussion of fractions. Although this was the case in only 3 of the six textbook series, there is also an historical factor. Under the previous National Course of Study (before the most recent revision, which went into effect in the 2003-2004 school year), fractions were introduced in Grade 3 while area measurement was introduced in Grade 4. Therefore, under the previous COS, fractions were introduced before area measurement in all textbook series. Thus, it is not surprising that textbook series, even if they now introduce area measurement prior to the introduction of fraction concepts, choose not to utilize unfamiliar representations in this particular context.
Perhaps a much more significant reason for focusing on linear models is the Japanese curriculum's effort to establish fractions as numbers through the use of the number line. Students are familiar with number lines as a representation of whole numbers and decimal numbers (except for those students who use Sawada & Okamoto (2002), which introduces fractions before decimal numbers). By representing fractions on a number line, the Japanese curriculum tries to help students view fractions as numbers. Toward this end, textbooks often include graphical representations that are very similar to a number line, like the one shown in Figure 6.
[FIGURE 5 OMITTED]
Furthermore, some textbooks will include graphical representations similar to the one shown in Figure 7 to intentionally connect the number line model with familiar representations of fractions.
[FIGURE 6 OMITTED]
These graphical representations are similar to the ones that represented the linear measurement problem contexts, thus they are familiar to children; however, they do not include a measurement unit,
emphasizing that this is a representation for numbers.
[FIGURE 7 OMITTED]
So, what do these findings tell us about the way the Japanese elementary mathematics curriculum introduces and develops fraction concepts? In terms of the timing of fraction introduction, the
Japanese curriculum definitely introduces fractions later than typical U.S. textbook series do. However, the difference in the curricular treatments of fractions is not limited to the timing of its
introduction. Perhaps more significantly, the Japanese elementary mathematics curriculum seems to progress through various fraction-related ideas with more focus and mastery expectations, as
suggested by Schmidt et al.'s (2002) analysis of the overall mathematics curriculum. Thus, after 3 years, and about 47 lessons according to the suggested pacing of one series, the Japanese curriculum
claims to have completed the study of fractions. This seems to be in stark contrast with the way fraction concepts are often developed (or not) in the U.S. textbooks. Typically, children in the U.S.
are introduced to simple unit fractions with the denominators of 2, 3 and 4 in their first exposure with fractions. Then, the textbooks expand the scope of their treatment to include non-unit
fractions and fractions with larger denominators. However, throughout this development, which may take place over a few grade levels, the meaning of fractions seems to stay constant--part of a
whole. As Thompson and Saldanha (2003) note, rarely do we see in the U.S. textbooks a treatment of non-unit fractions as collections of unit fractions--a meaning emphasized in the introductory unit
in the Japanese curriculum.
Another way Japanese textbook series are more intentional and purposeful is in their choice of representations. As discussed above, the Japanese textbooks appear to make an intentional effort to help students connect the linear representation of fractions with the number line. Furthermore, this emphasis on number lines and linear models may be one of the reasons for focusing on linear measurement as the problem context used when students were introduced to fractions. By pictorially representing the problem situations, the textbooks can naturally introduce the linear model of fractions. These linear models, then, are intentionally connected to the number line model. Moreover, representing quantities using a "tape diagram" is something students are familiar with from their earlier studies. Thus, students are introduced to a new concept within a familiar representation context.
These findings seem to raise several questions about the way fractions are treated in many U.S. textbooks. I will conclude this paper by discussing some of those questions. It is my hope that this
article will begin a serious discussion on these issues.
Why do we introduce fractions so early in our curricula? It is clear that although the Japanese students are introduced to fractions later than the U.S. students are, their achievement level is
higher at Grade 8. What do we gain by introducing fractions so early? Would U.S. students do even worse if they were introduced to fractions later? Is it possible that focusing primary grades
mathematics instruction on fewer mathematical ideas would help them develop a deeper understanding of those ideas? Could that eventually improve their learning of fractions when they do encounter them?
Why do we place so much emphasis on area models? Is the 'pizza model' really helpful for children to understand fractions as numbers? Clearly, many children (and adults) can relate very easily to
the 'pizza' or 'pie' model of fractions. However, does it make sense to focus so much of our attention on this model? How exactly is the 'pizza model' helpful for students' learning of various
fraction-related ideas? Is it possible that the benefit of familiarity is outweighed by the challenges this circular area model poses? Furthermore, one of the reasons why we introduce fractions so
early is that fractions are needed in the customary measurement system, in particular in linear and liquid measure contexts. However, if that is indeed the case, an intentional connection to linear and liquid measurement contexts seems to be much more needed in U.S. classrooms than it is in Japan. Familiarity is an important consideration, but so is the connection within mathematics.
What are the strengths and weaknesses of various fraction models? Is it always better to use multiple models, or is it more helpful if instruction focuses on one particular model? Related to the previous question, we should investigate how other models might be helpful for children learning various fraction ideas. We need to understand not only how each model might be helpful but also what students need to understand prior to using that model. How do young children who have yet to explore the concept of area make sense of this model? Whether one uses a circular region or not, when the area model is used, students will have to partition a geometric figure. What kinds of experiences with geometric shapes should children have to support their fraction learning using such models?
How can we intentionally support children's learning of fractions through careful selection of problems and representations? Number lines are something U.S. textbooks often use, but what challenges
do students face when they use number lines to represent fractions? How can fraction instruction be designed so that we can help students deal with those challenges head on? Alternatively, how
should we teach number lines with whole numbers so that children can use number lines as a tool to think about fractions as numbers?
How can we help students go beyond the part-whole meaning of fractions? Is the notion of 'measurement fractions' potentially useful with U.S. students? When number lines are used to represent fractions, there is an underlying assumption that fractions are numbers. However, when students' understanding of fractions is limited to the part-whole meaning, it is doubtful that they understand fractions as numbers. As the students in the quotation from Simon (2002) show, it is not uncommon for students to have a more qualitative understanding of fractions than a quantitative understanding. How can we organize our instruction so that we can facilitate children's development of an understanding of fractions as numbers? Could the notion of a 'measurement fraction' used in the Japanese curriculum be potentially useful? Could such an approach be helpful to support children's development of an iterative understanding of non-unit fractions, that is, a/b means a copies of 1/b, which some studies seem to suggest is beneficial (Olive, 1999; Steffe, 2002; Tzur, 1999, 2004)?
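The iterative meaning discussed here, a/b as a copies of the unit fraction 1/b, can be verified with exact rational arithmetic. A minimal sketch of mine (not from the article), using 5/3 as the example:

```python
from fractions import Fraction

# "a/b means a copies of 1/b": summing a copies of the unit fraction 1/b
# reproduces a/b exactly, including improper fractions such as 5/3.
a, b = 5, 3
unit = Fraction(1, b)
total = sum([unit] * a)      # 1/3 + 1/3 + 1/3 + 1/3 + 1/3

print(total)                 # -> 5/3
assert total == Fraction(a, b)
```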
This study was conducted to provide an in-depth description of how fractions are treated in the Japanese elementary school mathematics curriculum. Although the article started with the data from the
TIMSS showing a superior performance by the Japanese 8th graders compared to their U.S. counterparts, this study was not conducted to be an evaluative study. Rather, I hope that by understanding
deeply how fractions are introduced and developed in another country, I can raise some questions about our current practice. It is my hope that a critical reflection on our current practices will
help us improve both the quality of curricular materials and our fraction instruction, making them more informed by the existing research. A lasting improvement can only result if we engage in such
critical reflection as opposed to just copying another country's approach.
Armstrong, B., & Larson, C. (1995). Students' use of part-whole and direct comparison strategies for comparing partitioned rectangles. Journal for Research in Mathematics Education, 26, 2-19.
Behr, M., Harel, G., Post, T., & Lesh, R. (1992). Rational numbers, ratio, and proportion. In D. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 296-333). New York: Macmillan.
Behr, M., Harel, G., Post, T., & Lesh, R. (1993). Rational numbers: Toward a semantic analysis--Emphasis on the operation construct. In T. P. Carpenter, E. Fennema, & T. A. Romberg (Eds.), Rational numbers: An integration of research (pp. 49-84). Hillsdale, NJ: Erlbaum.
Clements, D. H., Jones, K. W., Moseley, L. G., & Schulman, L. (1999). Math in my world. New York: McGraw-Hill.
Greer, B. (1987). Non-conservation of multiplication and division involving decimals. Journal for Research in Mathematics Education, 18, 37-45.
Hiraoka, T., & Hashimoto, Y. (Eds.) (2002). Tanoshii sansu. Tokyo: Dainihon Tosho.
Hosokawa, T., Sugioka, T., & Nohda, N. (Eds.) (1998). Shintei sansu. Osaka: Keirinkan.
Ichimatsu, S., Okada, I., & Machida, S. (Eds.) (2002). Shogakko sansu. Tokyo: Gakko Tosho.
Japan Society of Mathematical Education (2000). Mathematics program in Japan: Elementary, lower secondary & upper secondary schools (unofficial translation of the Course of Study). Tokyo: Author.
Kieren, T. (1980). The rational number constructs: Its elements and mechanisms. In T. E. Kieren (Ed.), Recent research on number learning (pp. 125-150). Columbus: ERIC/SMEAC.
Larson, C. N. (1980). Locating proper fractions on number lines: Effect of length and equivalence. School Science and Mathematics, 80, 423-428.
Larson, R., Boswell, L., Kanold, T. D., & Stiff, L. (1999). Passport to Mathematics. Evanston: McDougal Littell.
Mack, N. K. (1990). Learning fractions with understanding: Building on informal knowledge. Journal for Research in Mathematics Education, 21, 16-32.
Mack, N. K. (1995). Confounding whole-number and fraction concepts when building on informal knowledge. Journal for Research in Mathematics Education, 26, 422-441.
Ministry of Education (1999). Commentary on the national course of study: Elementary school mathematics (in Japanese). Tokyo: Toyokan.
Nakahara, T. (Ed.) (2002). Shogaku sansu. Osaka: Osaka Shoseki.
National Research Council (2001). Adding it up: Helping children learn mathematics. J. Kilpatrick, J. Swafford, and B. Findell (Eds.). Mathematics Learning Study Committee, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.
Olive, J. (1999). From fractions to rational numbers of arithmetic: A reorganization hypothesis. Mathematical Thinking and Learning, 1, 279-314.
Pothier, Y., & Sawada, D. (1983). Partitioning: The emergence of rational number ideas in young children. Journal for Research in Mathematics Education, 14, 307-317.
Pothier, Y., & Sawada, D. (1989). Children's interpretation of equality in early fraction activities. Focus on Learning Problems in Mathematics, 11(3), 27-38.
Sawada, T., & Okamoto, D. (2002). Shogaku sansu. Tokyo: Kyoiku Shuppan.
Schmidt, W. H., Houang, R., & Cogan, L. (2002 Summer). A coherent curriculum: The case of mathematics. American Educator, 1-17.
Schmidt, W.H., McKnight, C. C., & Raizen, S. A. (1996). A splintered vision: An investigation of U.S. science and mathematics education. Boston: Kluwer.
Shimahara, N. K., & Sakai, A. (1995). Learning to teach in two cultures: Japan and the United States. New York: Garland Publishing.
Simon, M. A. (2002). Focusing on key developmental understanding in mathematics. In D. S. Mewborn and others (Eds.), Proceedings of the Twenty-Fourth Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education (PME-NA), Volume 2, 991-998.
Steffe, L. P. (2002). A new hypothesis concerning children's fractional knowledge. Journal of Mathematical Behavior, 20, 267-307.
Stigler, J., & Hiebert, J. (1999). The teaching gap: Best ideas from the world's teachers for improving education in the classroom. New York: The Free Press.
Sugiyama, Y., Iitaka, S., & Itoh, S. (Eds.) (2002). Atarashii sansu. Tokyo: Tokyo Shoseki.
Thompson, P. W., & Saldanha, L. A. (2003). Fractions and multiplicative reasoning. In J. Kilpatrick, W. G. Martin, & D. Schifter (Eds.), A research companion to Principles and Standards for School Mathematics (pp. 95-113). Reston, VA: National Council of Teachers of Mathematics.
Tzur, R. (1999). An integrated study of children's construction of improper fractions and the teacher's role in promoting that learning. Journal for Research in Mathematics Education, 30(4), 390-416.
Tzur, R. (2004). Teacher and students' joint production of a reversible fraction conception. Journal of Mathematical Behavior, 23, 92-114.
Watanabe, T. (1995). Inconsistencies among fifth grade students' understanding of multiplicative concepts. Paper presented at the annual meeting of the American Education Research Association, San Francisco, April.
Watanabe, T. (2001a). Let's eliminate fraction instruction from primary curriculum. Teaching Children Mathematics, 8, 70-72.
Watanabe, T. (2001b). Content and organization of teachers' manuals: An analysis of Japanese elementary mathematics teachers' manuals. School Science and Mathematics, 101, 194-205.
Watanabe, T. (2003). Teaching multiplication: An analysis of elementary school mathematics teachers' manuals from Japan and the United States. Elementary School Journal, 104, 111-125.
Tad Watanabe
Kennesaw State University
Table 1. Summary of fraction related topics discussed in the Ministry of
Education documents.
Grade 4 Introduction of fractions; improper fractions and mixed
numbers; comparison of fractions (with like denominators)
Grade 5 Comparison of fractions (unlike denominators); equivalent
fractions; addition and subtraction of fractions with like
denominators; fractions as quotient; relationships among
fractions, decimals & whole numbers
Grade 6 Addition and subtraction of fractions with unlike denominators;
creating equivalent fractions; multiplication and division of
Table 2. Summary of the measurement strand in the Japanese COS
Grade Content
1 Introduction of length--direct and indirect comparison, the use
of informal units
2 Linear measurement with the units of m (meter), cm (centimeter)
and mm (millimeter). Clock reading
3 Linear measurement with the unit of km. Introduction of capacity
and weight, using the units of l (liter) and g (gram),
respectively. Other units of capacity (milliliter and
deciliter) and weight (kilogram) are also touched upon.
4 Introduction of area measurement using the units of cm2 (square
centimeter). Calculating the area of squares and rectangles.
Introduction of angle measurement using the unit of degree.
5 Area of plane figures, including triangles, parallelograms, and
6 Introduction of volume, using the unit of cm3 (cubic centimeter),
and calculating the volume of rectangular prisms (cubes and
Table 3. Summary of problem contexts
A B* C D* E* F
% of measurement problems in 98% 60% 40% 38% 45% 51%
the fraction unit or units
% of problems shown with 0% 13% 21% 38% 27% 21%
number line
% of problems presented only 2% 26% 39% 23% 21% 28%
with symbols
A: Ichimatsu, Okada & Machida (2002)
B: Sugiyama, Iitaka & Itoh (2002) * Does not add up to 100% due to
rounding errors.
C: Hosokawa, Nohda, Shimizu & Funakoshi (2002)
D: Nakahara (2002) * Does not add up to 100% due to rounding errors.
E: Hiraoka and Hashimoto (2002) * 7% of problems involved area
F: Sawada (2002)
Table 4. Summary of measurement problems in the textbook series.
A B C D E F
% of measurement problems in the 98% 58% 40% 38% 45% 51%
fraction unit or units
% of linear measurement problems 72% 59% 69% 30% 64% 63%
among all measurement problems
% of liquid capacity problems 28% 41% 31% 70% 25% 37%
among all measurement problems
A: Ichimatsu, Okada & Machida (2002)
B: Sugiyama, Iitaka & Itoh (2002)
C: Hosokawa, Nohda, Shimizu & Funakoshi (2002)
D: Nakahara (2002)
E: Hiraoka and Hashimoto (2002)
F: Sawada (2002)
Reader Opinion | {"url":"http://www.thefreelibrary.com/Initial+treatment+of+fractions+in+Japanese+textbooks.-a0165312366","timestamp":"2014-04-18T03:53:17Z","content_type":null,"content_length":"101036","record_id":"<urn:uuid:8ef7d921-e9a0-45e1-9eeb-4e4c9ae3ed23>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 449
(b) If the natural abundance of Ag-107 is 51.84%, what is the natural abundance of Ag-109? % (c) If the mass of Ag-107 is 106.905, what is the mass of Ag-109? amu I really need help. Can u say the
answer then explain how you got it? Thanks:)
Oh yah ok thanks:)
(b) If the natural abundance of Ag-107 is 51.84%, what is the natural abundance of Ag-109? Thanks
so how would u do it when an element gains. like Cl?
Yah I dont think we have gotten to the second part but the first makes sense. Thanks:)
thanks. can u explain that? and how did u know the stuff? like i am confused
Predict how many electrons will most likely be gained or lost by each of the following elements. Al I know it loses but how many?
One could easily determine a single bb's mass by simply weighing it or by weighing 100 BBs and dividing the mass by 100. Why couldnt Robert Millikan use this procedure to measure the charge of an
electron? Thanks:)
When ice melts, it absorbs 0.33 kJ per gram. How much ice is required to cool a 12.0 fluid ounce drink from 75°F to 35°F, if the heat capacity of the drink is 4.18 J/g·°C? (Assume that the heat
transfer is 100% efficient.) please explain realy realy well tha...
i have another question: A 43 kg sample of water absorbs 343 kJ of heat. If the water was initially at 22.1°C, what is its final temperature? I got 1.9 but i think i didn't do it correctly
Calculate the amount of heat required to raise the temperature of a 25 g sample of water from 5°C to 10.°C. i got 523 then rounded it to two sig figs 520 but it said it was wrong
I got it wrong. can u explain it better. i dont think that is in the correct equation form??
oops. forget what I said last
how do i know what that mass of the ice is?
how do i know what that mass of the ice is?
thanks. um. i got 201. but it said it was wrong
oops here. When ice melts, it absorbs 0.33 kJ per gram. How much ice is required to cool a 12.0 fluid ounce drink from 75°F to 35°F, if the heat capacity of the drink is 4.18 J/g·°C? (Assume that the
heat transfer is 100% efficient.)
capacity of the drink is 4.18 J/g·°C? (Assume that the heat transfer is 100% efficient.)
I got it:)
If 1.68 L of water is initially at 25.0 °C, what will its temperature be after absorption of 6.8 10-2 kWh of heat? I dont know how to set it up because i have the equation q= m times c times change
in temp (the triangle and the a T is the symbol for that. I know ther is a ...
I got it:) it was 5e2
I got it wrong! would it be 5e3. cause i messed up on the amount of spaces but idk if that is the correct sig fig?
Calculate the amount of heat required to raise the temperature of a 16 g sample of water from 5°C to 12°C. Round to the correct number of sig figs. I got 5e3 and it was wrong.
just to conferm is this the question 4x^2 - 5x ?
Questions about the Crucible
1) What happens when john turns to elizabeth for advice in act 4. Also... I cant find the physical appearance for Francis Nurse. Does anyone know how he is portrayed and looks like in the play
.Question part Points Submissions 1 0/0.5 2/3 Total 0/0.5 ...In the explosion of a hydrogen-filled balloon, 0.75 g of hydrogen reacted with 6.9 g of oxygen to form how many grams of water vapor?
(Water vapor is the only product.) i already posted this but i dont think people u...
349 kJ to joules
divide 49 by -7
divide -15 by 5. so -3
i got 5.76e4 but its wrong
241 kJ to Calories how do i do this?
In the explosion of a hydrogen-filled balloon, 0.75 g of hydrogen reacted with 6.9 g of oxygen to form how many grams of water vapor? (Water vapor is the only product.) I am really confussed on how
to do this.
11+d=b b+3 +d=37 so.. plug in the top equation to the bottom and you get d+11 +d=37 so 2d=26 divide and you get d=13 plug this into the top equation and you get b=24 so answer: b=24 d=13 and you
check by adding both and they equal 37
i posted on your previous post:)
rly need help on math problem!
Directions: Use each statement as givena and draw a single conclusion using all of the statements. 1) No birds,except ostriches, are nine feet high.There are no birds in this aviary that belong to
anyone but me. No ostrich lives on mince pie. I have no birds less than nine fee...
begining algebra
(y/4) +y simplified: 5/4y
2=1/n multiply by n 2n=1 divide by 2 n=1/2
oops for 1) i got eggshells => not unselfish
use each statement as given, and draw a single conclusion using all of the statements. => means implies 1) no misers are unselfish.None but misers save eggshells. Answer i got: eggshells => unselfish
2) Every eagle can fly.Some pigs cannot fly Answer 1 got:Every eagle =&...
plug in the numbers so 20=4r then divide by 4 20/4 so 5=r
quick chemistry
A solid white substance A is heated strongly in the absence of air. It decomposes to form a new white substance B and a gas C. The gas has exactly the same properties as the product obtained when
carbon is burned in an excess of oxygen. What is solid B most likely to be? a)com...
conversion problems
conversion problems
convert 3.1 L to quarts
ok,thanks.. so i got a)810 b)50. c)6.7 is that correct?
Fill in the blanks. (a) 5.6 ft2 = ____ in2 (b) 5.6 yd2 = _____ ft2 (c) 5.6 m2 = ____ yd2 can you please explain how to do it
algebra 1
A0 h=12s b)just plug in those numbers for s:)
digital photagraphy
im not 100% but wouldn't it be Emphasis and Proportion
Chemisty (Sig Figs)
What do you mean?
Chemisty (Sig Figs)
1.20 Mm to kilometers its 1200 but when I put it in it said not the correct number of sig figs
I dont really understand your question? are you talking about 1st,2nd and 3rd person?
sig figs
how do you round 0.009934 to two sig figs? would it be .010? or would that be 3 sig figs?
oh ok. you have to put that into slope intercept form. So it would be 8y=2 (because when we subtract the 3x with the 3x it cancels out) then divide the 8 so y= 1/4 im pretty sure thats how it would
reduce down to now you just have to pull your info from that
0=6x + 8 +8y is that what the question is? it should most likely be in this format y=mx+b
I think it is beautiful
i think it would be chemical weathering
Chemistry-sig figs
ernest y did u post that on my question? it doesn't ever pertain to my question
Chemistry-sig figs
oh so is it incorrect and the answer is 1.2 or is it 1.20? does the second answer have 3 sig figs?
Chemistry-sig figs
where did you get 0.049?
Chemistry-sig figs
Each of the following numbers was supposed to be rounded to two significant figures. For each one below, decide whether it was correctly rounded or not, and for those that were incorrectly rounded,
correct them (a) 1.249 X 10 to the 3rd power. They said it rounded to 1.3 X 10 ...
Chemisty (Sig Figs)
so i am terrible at this. Round each number to 3 significant numbers. Can you explain? 1. 9845.42800 2. 1.548937675e12 3. 10.7771777425 4. 0.000045875 thanks:)
what does this mean? "Quelle est ta matiere scolaire preferee? thanks:)
I got it!
How do you say "I prefere track" interms of the sport and not the actual track. I know how to say I prefer and I know that track has the word l'athletisme(sp?) but I'm not sure what come before. Is
it faire or jour and is it du after that? Thanks:)
Do you think modernization was a positive or negative movement in the Ottoman Empire? Why?
How have guppies adapted over time to their freshwater environment? thanks:)
What kinds of cells make it difficult to tear fern leaves from a plant? thanks:)
I know that miller syndrome is autosomal recessive so would i use a karyotype or pedegree to represent that because I have to: Include a pedegree of at least 3 generations to demonstrate the
inheritance pattern. Be sure to label and explain the chart accurately so that someone...
ok thanks
What are some treat ments for Miller Syndrome? If you found a website that would be great. I already know of NINDS and FACES websites. thanks:)
ok thanks so much!
What are the treatments for Miller Syndrome. A website would be nice:) Thanks!
What are 3 ways that guppy fish interect (symbiosis,mutualism etc.) please provide a website:) thanks:)
I got that... disregard this post
I found out the answer! also where do guppies live in the world. A website would be great! thanks:)
Do guppies live in a herd, grove, alone, pairs, or colonies? Of what size? I have been trying to find this answer but its so broad and i need a little help. If you could also post the website where
you found the information the would be great! thanks:)
Stem Cell Research
Thanks so much!
Stem Cell Research
Thanks, do you know of any specific ones that just talk about opposing or just supporting?
Stem Cell Research
Hi I need to find 4 articles about stem cell research. 2 need to be ones that just support it and 2 the just oppose it. Does anyone have any link of web sites where I can find this. It can't talk
about both it has to oppose or support. Also a website would be ok but if you...
Wrld Hst
What did Margaret Cavendish believe/ talk about in her book Philosophical Fancies? thanks:)
Do prokaryote cells and eukaryote calls both go through the M phase or does just eukaryote cells? Then what does prokaryote cells go through in the cell cycle? thanks:)
Wrld Hst
Why did the Spanish empire decline in the 1600's? thanks:)
how do you draw tryptophan and i need to know what goes around the pentagon part etc. thanks
So like what are some examples because I was confused and my teacher said it had something to do with Mutualism and Parasitism.
What are interactions between living things in the tropical rainforest? thanks
Social Studies
Then did John come after that?
Social Studies
Then did John come after that?
Social Studies
Henry II of England
Social Studies
thanks, and also do you know who was king after henry because in my text book it communicates John but i looked on google and it says richard
Social Studies
How does the movie The Lion in Winter relate to World History? thanks
What chemicals are in curds? thanks
ok thanks so much!
thanks, What do you put for the hexagon?
how do you draw a the amino acid Phenylalanine? thanks:)
help please:)!!
A grocer mixes together some cashews costing 8$ per kilogram with some nuts costing 10$ per kilogram. The grocer sold 12 kg of the mixture for 8.50$ per kilogram. How many kilograms of cashews were
in the mixture? I have to find 2 equations then find the answer most likely usi...
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | Next>> | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=rebekah&page=4","timestamp":"2014-04-18T16:39:35Z","content_type":null,"content_length":"23019","record_id":"<urn:uuid:b955ba88-9943-47ff-b7a4-d5d047bef337>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00653-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical Functions
If you consider it in the purest sense of a computer language like C, C++, Pascal, Visual Basic, or Java, C# doesn't have its own built-in support for mathematics. It must borrow this functionality either from other libraries or from another language. Fortunately, all of this is particularly easy.
To perform the basic algebraic and geometric operations in C#, you can use the methods of the Math class of the .NET Framework. As seen in the previous lesson, you can also take advantage of Visual Basic's very powerful library of functions, one of the most extensive sets of functions for various areas of business mathematics.
We know different ways to declare a variable of a numeric type. Here are examples:
using System;
class Program
static int Main()
short sNumber;
int iNumber;
double dNumber;
decimal mNumber;
return 0;
One of the primary rules to observe in C# is that a variable must be initialized after it has been declared and before it is used. Here are examples of initializing variables:
using System;
class Program
static int Main()
short sNumber = 225;
int iNumber = -847779;
double dNumber = 9710.275D;
decimal mNumber = 35292742.884295M;
Console.WriteLine("Short Integer: {0}", sNumber);
Console.WriteLine("Integral Number: {0}", iNumber);
Console.WriteLine("Double-Precision: {0}", dNumber);
Console.WriteLine("Extended Precision: {0}", mNumber);
return 0;
This would produce:
Short Integer: 225
Integral Number: -847779
Double-Precision: 9710.275
Extended Precision: 35292742.884295
Press any key to continue . . .
When initializing a variable using a constant, you decide whether it is negative, 0 or positive. This is referred to as its sign. If you are getting the value of a variable some other way, you may
not know its sign. Although you can use comparison operators to find this out, the Math class provides a method to check it out for you.
To find out about the sign of a value or a numeric variable, you can call the Math.Sign() method. It is overloaded in various versions whose syntaxes are:
public static int Sign(sbyte value);
public static int Sign(short value);
public static int Sign(int value);
public static int Sign(long value);
public static int Sign(float value);
public static int Sign(double value);
public static int Sign(decimal value);
When calling this method, pass the value or the variable you want to consider, as argument. The method returns:
• -1 if the argument is negative
• 0 if the argument is 0
• 1 if the argument is positive
Here are examples of calling the method:
using System;
class Program
static int Main()
short sNumber = 225;
int iNumber = -847779;
double dNumber = 9710.275D;
decimal mNumber = 35292742.884295M;
Console.WriteLine("Number: {0} => Sign: {1}",
sNumber, Math.Sign(sNumber));
Console.WriteLine("Number: {0} => Sign: {1}",
iNumber, Math.Sign(iNumber));
Console.WriteLine("Number: {0} => Sign: {1}",
dNumber, Math.Sign(dNumber));
Console.WriteLine("Number: {0} => Sign: {1}\n",
mNumber, Math.Sign(mNumber));
return 0;
This would produce:
Number: 225 => Sign: 1
Number: -847779 => Sign: -1
Number: 9710.275 => Sign: 1
Number: 35292742.884295 => Sign: 1
Press any key to continue . . .
The Integral Side of a Floating-Point Number
As reviewed in Lesson 3, when dealing with a floating-point number, it consists of an integral side and a precision side; both are separated by a symbol which, in US English, is the period. In some
operations, you may want to get the integral side of the value. The Math class can assist you with this.
To get the integral part of a decimal number, the Math class can assist you with the Truncate() method, which is overloaded in two versions whose syntaxes are:

public static double Truncate(double d);
public static decimal Truncate(decimal d);
When calling this method, pass it a number or a variable of float, double, or decimal type. The method returns the integral side of the value. Here is an example of calling it:
using System;
class Program
static int Main()
float number = 225.75f;
Console.WriteLine("The integral part of {0} is {1}\n",
number, Math.Truncate(number));
return 0;
This would produce:
The integral part of 225.75 is 225
Press any key to continue . . .
The Minimum of Two Values
If you have two numbers, you can find the minimum of both without writing your own code. To assist you with this, the Math class is equipped with a method named Min. This method is overloaded in
various versions with each version adapted to each integral or floating-point data type. The syntaxes are:
public static byte Min(byte val1, byte val2);
public static sbyte Min(sbyte val1, sbyte val2);
public static short Min(short val1, short val2);
public static ushort Min(ushort val1, ushort val2);
public static int Min(int val1, int val2);
public static uint Min(uint val1, uint val2);
public static float Min(float val1, float val2);
public static long Min(long val1, long val2);
public static ulong Min(ulong val1, ulong val2);
public static double Min(double val1, double val2);
public static decimal Min(decimal val1, decimal val2);
Here is an example of calling the method:
using System;
class Program
static int Main()
int number1 = 8025;
int number2 = 73;
Console.WriteLine("The minimum of {0} and {1} is {2}",
number1, number2, Math.Min(number1, number2));
return 0;
This would produce:
The minimum of 8025 and 73 is 73
Press any key to continue . . .
Remember that you can use the var or the dynamic keyword to declare the variables:
using System;
class Program
static int Main()
var number1 = 8025;
dynamic number2 = 73;
Console.WriteLine("The minimum of {0} and {1} is {2}",
number1, number2, Math.Min(number1, number2));
return 0;
The Maximum of Two Values
As opposed to the minimum of two numbers, you may be interested in the higher of both. To help you find the maximum of two numbers, you can call the Max() method of the Math class. It is overloaded in various versions, one for each numeric data type. The syntaxes of this method are:
public static byte Max(byte val1, byte val2);
public static sbyte Max(sbyte val1, sbyte val2);
public static short Max(short val1, short val2);
public static ushort Max(ushort val1, ushort val2);
public static int Max(int val1, int val2);
public static uint Max(uint val1, uint val2);
public static float Max(float val1, float val2);
public static long Max(long val1, long val2);
public static ulong Max(ulong val1, ulong val2);
public static double Max(double val1, double val2);
public static decimal Max(decimal val1, decimal val2);
Here is an example of calling the method:
using System;
class Program
static int Main()
int number1 = 8025;
int number2 = 73;
Console.WriteLine("The maximum of {0} and {1} is {2}",
number1, number2, Math.Max(number1, number2));
return 0;
This would produce:
The maximum of 8025 and 73 is 8025
Press any key to continue . . .
We know how to declare variables of integral, floating-point, and string types. We also saw how to initialize the variables. If you have a program with mixed types of variables, you may be interested
in converting the value of one into another. Again, in Lesson 1, we saw how much memory space the variable of each data type required in order to hold its value. Here is a summary of what we learned:
│Data Type│Name │Memory Size│
│byte │Byte │8 bits │
│sbyte │Signed Byte │8 bits │
│char │Character │16 bits │
│short │Small Integer │16 bits │
│ushort │Unsigned Small Integer │16 bits │
│int │Signed Integer │32 bits │
│uint │Unsigned Integer │32 bits │
│float │Single-Precision Floating-Point Number │32 bits │
│double │Double-Precision Floating-Point Number │64 bits │
│long │Signed Long Integer │64 bits │
│ulong │Unsigned Long Integer │64 bits │
│decimal │Extended Precision Floating-Point Number │128 bits │
As you can see, a value held by a Byte variable can fit in the memory reserved for an int variable, which can itself be carried by a long variable. Thanks to this, you can assign a Byte value to an int variable, or an int variable to a long variable. Similarly, because the memory reserved for a double variable is larger than the one reserved for an int variable, you can assign a variable of the latter to a variable of the former. Here is an example:
using System;
class Program
static int Main()
int iNumber = 2445;
double dNumber = iNumber;
Console.WriteLine("Number = {0}", iNumber);
Console.WriteLine("Number = {0}\n", dNumber);
return 0;
This would produce:
Number = 2445
Number = 2445
Press any key to continue . . .
This characteristic is referred to as implicit conversion.
Because of memory requirements, the direct reverse of implicit conversion is not possible. Since the memory reserved for a short variable is smaller than that of an int, you cannot assign the value
of an int to a short variable. Consider the following program:
using System;
class Program
static int Main()
int iNumber = 168;
short sNumber = iNumber;
Console.WriteLine("Number = {0}", iNumber);
Console.WriteLine("Number = {0}\n", sNumber);
return 0;
This would produce the following error:
Cannot implicitly convert type 'int' to 'short'.
Value casting consists of converting a value of one type into a value of another type. For example, you may have an integer value and you may want that value in an expression that expects a short.
Value casting is also referred to as explicit conversion.
To cast a value or a variable, precede it with the desired data type in parentheses. Here is an example:
using System;
class Program
static int Main()
int iNumber = 168;
short sNumber = (short)iNumber;
Console.WriteLine("Number = {0}", iNumber);
Console.WriteLine("Number = {0}\n", sNumber);
return 0;
This would produce:
Number = 168
Number = 168
Press any key to continue . . .
When performing explicit conversion, you should pay close attention to the value that is being cast. If you want an integer value to be assigned to a short variable, the value must fit in 16 bits,
which means it must be between -32768 and 32767. Any value beyond this range would produce an unpredictable result. Consider the following program:
using System;
class Program
static int Main()
int iNumber = 680044;
short sNumber = (short)iNumber;
Console.WriteLine("Number = {0}", iNumber);
Console.WriteLine("Number = {0}\n", sNumber);
return 0;
This would produce:
Number = 680044
Number = 24684
Press any key to continue . . .
Notice that the result is not reasonable.
In Lesson 13 and Lesson 32, we will see that each C# data type, which is adapted from a .NET Framework structure, is equipped with a ToString() method that could be used to convert its value to a
String type. We didn't address the possibility of converting a value from one primitive type to another. To support the conversion of a value from one type to another, the .NET Framework provides a
class named Convert. This class is equipped with various static methods; they are so numerous that we cannot review all of them.
Remember that each primitive data type of the C# language is type-defined from a .NET Framework structure as follows:
┃C# Data Type │Name │.NET Framework Structure┃
┃bool │Boolean │Boolean ┃
┃byte │Byte │Byte ┃
┃sbyte │Signed Byte │SByte ┃
┃char │Character │Char ┃
┃short │Small Integer │Int16 ┃
┃ushort │Unsigned Small Integer │UInt16 ┃
┃int │Integer │Int32 ┃
┃uint │Unsigned Integer │UInt32 ┃
┃long │Long Integer │Int64 ┃
┃ulong │Unsigned Long Integer │UInt64 ┃
┃float │Single-Precision Floating-Point │Single ┃
┃double │Double-Precision Floating-Point │Double ┃
┃decimal │Extended Precision Floating-Point Number │Decimal ┃
┃No Explicit Type│Date/Time Value │DateTime ┃
┃string │String │String ┃
To adapt the Convert class to each C# data type, the class is equipped with a static method whose name starts with To, ends with the .NET Framework name of its structure, and takes as argument the
type that needs to be converted. Based on this, to convert a decimal number of a double type to a number of int type, you can call the ToInt32() method and pass the double variable as argument. Its
syntax is:
public static int ToInt32(double value);
Here is an example:
using System;
class Program
static int Main()
double dNumber = 34987.68D;
int iNumber = Convert.ToInt32(dNumber);
Console.WriteLine("Number: {0}", dNumber);
Console.WriteLine("Number: {0}", iNumber);
return 0;
The decimal numeric system counts from negative infinity to positive infinity. This means that numbers are usually negative or positive, depending on their position from 0, which is considered as
neutral. In some operations, the number considered will need to be only positive even if it is provided in a negative format. The absolute value of a number x is x if the number is (already)
positive. If the number is negative, its absolute value is its positive equivalent. For example, the absolute value of 12 is 12, while the absolute value of –12 is 12.
To get the absolute value of a number, the Math class is equipped with a method named Abs, which is overloaded in various versions. Their syntaxes are:
public static sbyte Abs(sbyte value);
public static short Abs(short value);
public static int Abs(int value);
public static float Abs(float value);
public static double Abs(double value);
public static long Abs(long value);
public static decimal Abs(decimal value);
This method takes as argument the value whose absolute value must be found. Here is an example:
using System;
class Program
static int Main()
int number = -6844;
Console.WriteLine("Original Value = {0}", number);
Console.WriteLine("Absolute Value = {0}\n", Math.Abs(number));
return 0;
This would produce:
Original Value = -6844
Absolute Value = 6844
Press any key to continue . . .
Consider a floating-point number such as 12.155. This number is between integer 12 and integer 13, with 13 being greater. In the same way, consider a number such as -24.06. As this number is negative, it is between –24 and –25, with –24 being greater.
In arithmetic, the ceiling of a number is the closest integer that is greater or higher than the number considered. In the first case, the ceiling of 12.155 is 13 because 13 is the closest integer
greater than or equal to 12.155. The ceiling of -24.06 is -24.
To support the finding of a ceiling, the Math class is equipped with a method named Ceiling that is overloaded with two versions whose syntaxes are:
public static double Ceiling(double a);
public static decimal Ceiling(decimal d);
This method takes as argument a floating-point number or variable whose ceiling needs to be found. Here is an example:
using System;
class Program
static int Main()
double value1 = 155.55; double value2 = -24.06;
Console.WriteLine("The ceiling of {0} is {1}",
value1, Math.Ceiling(value1));
Console.WriteLine("The ceiling of {0} is {1}\n",
value2, Math.Ceiling(value2));
return 0;
This would produce:
The ceiling of 155.55 is 156
The ceiling of -24.06 is -24
Press any key to continue . . .
Besides the Math class, the Decimal structure provides its own implementation of this method, whose syntax is:
public static decimal Ceiling(decimal d);
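As a brief sketch (the value is my own), this Decimal.Ceiling() method can be called directly on a decimal value:

```csharp
using System;

class Program
{
    static int Main()
    {
        decimal value = 155.55M;

        // Decimal.Ceiling() takes and returns a decimal value
        Console.WriteLine("The ceiling of {0} is {1}",
                          value, Decimal.Ceiling(value));
        return 0;
    }
}
```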
Consider two floating-point numbers such as 128.44 and -36.72. The number 128.44 is between 128 and 129, with 128 being the lower. The number -36.72 is between -37 and -36, with -37 being the lower. The closest integer that is lower than or equal to a number is referred to as its floor.
To assist you with finding the floor of a number, the Math class provides the Floor() method. It is overloaded in two versions whose syntaxes are:
public static double Floor(double d);
public static decimal Floor(decimal d);
The Floor() method takes the considered value as the argument and returns the greatest integer that is less than or equal to the argument. Here is an example:
using System;
class Program
static int Main()
double value1 = 1540.25;
double value2 = -360.04;
Console.WriteLine("The floor of {0} is {1}",
value1, Math.Floor(value1));
Console.WriteLine("The floor of {0} is {1}\n",
value2, Math.Floor(value2));
return 0;
This would produce:
The floor of 1540.25 is 1540
The floor of -360.04 is -361
Press any key to continue...
Instead of using the Math class, the Decimal structure also has a method to find the floor of a decimal number. Its syntax is:

public static decimal Floor(decimal d);
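As a quick sketch (the value is my own), the Decimal structure's Floor() method can be called like this:

```csharp
using System;

class Program
{
    static int Main()
    {
        decimal value = 1540.25M;

        // Decimal.Floor() takes and returns a decimal value
        Console.WriteLine("The floor of {0} is {1}",
                          value, Decimal.Floor(value));
        return 0;
    }
}
```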
The power is the value of one number or expression raised to another number. This follows the formula:
ReturnValue = x^y
To support this operation, the Math class is equipped with a method named Pow whose syntax is:
public static double Pow(double x, double y);
This method takes two arguments. The first argument, x, is used as the base number to be evaluated. The second argument, y, also called the exponent, will raise x to this value. Here is an example:
using System;
class Program
static int Main()
const double source = 25.38;
const double exp = 3.12;
double result = Math.Pow(source, exp);
Console.WriteLine("Pow({0}, {1}) = {2}\n",
source, exp, result);
return 0;
This would produce:
Pow(25.38, 3.12) = 24099.8226934415
Press any key to continue . . .
You can calculate the exponential value of a number. To support this, the Math class provides the Exp() method. Its syntax is:
public static double Exp(double d);
Here is an example of calling this method:
using System;
class Program
static int Main()
Console.WriteLine("The exponential of {0} is {1}",
709.78222656, Math.Exp(709.78222656));
return 0;
This would produce:
The exponential of 709.78222656 is 1.79681906923757E+308
Press any key to continue . . .
If the value of the argument is lower than -708.395996093 (approximately), the result rounds to 0 and qualifies as underflow. If the value of the argument is greater than 709.78222656 (approximately), the result qualifies as overflow.
To calculate the natural logarithm of a number, you can call the Math.Log() method. It is provided in two versions. The syntax of one is:
public static double Log(double d);
Here is an example:
using System;
class Program
static int Main()
double log = 12.48D;
Console.WriteLine("Log of {0} is {1}", log, Math.Log(log));
return 0;
This would produce:
Log of 12.48 is 2.52412736294128
Press any key to continue . . .
The Math.Log10() method calculates the base 10 logarithm of a number. The syntax of this method is:
public static double Log10(double d);
The number to be evaluated is passed as the argument. The method returns the base-10 logarithm using the formula:
y = log[10]x
which is equivalent to
x = 10^y
Here is an example:
using System;
class Program
static int Main()
double log10 = 12.48D;
Console.WriteLine("Log of {0} is {1}", log10, Math.Log10(log10));
return 0;
This would produce:
Log of 12.48 is 1.09621458534641
Press any key to continue . . .
The Logarithm of Any Base
The Math.Log() method provides another version whose syntax is:
public static double Log(double a, double newBase);
The variable whose logarithmic value will be calculated is passed as the first argument to the method. The second argument allows you to specify a base of your choice. The method uses the formula:
y = log[NewBase]x
This is the same as
x = NewBase^y
Here is an example of calling this method:
using System;
class Program
static int Main()
double logN = 12.48D;
Console.WriteLine("Log of {0} is {1}", logN, Math.Log(logN, 4));
return 0;
This would produce:
Log of 12.48 is 1.82077301454376
Press any key to continue . . .
You can calculate the square root of a decimal positive number. To support this, the Math class is equipped with a method named Sqrt whose syntax is:
public static double Sqrt(double d);
This method takes one argument, a positive floating-point number. After the calculation, the method returns the square root of the argument:
using System;
class Program
static int Main()
double sqrt = 8025.73D;
Console.WriteLine("The square root of {0} is {1}", sqrt, Math.Sqrt(sqrt));
return 0;
This would produce:
The square root of 8025.73 is 89.5864387058666
Press any key to continue . . .
A circle is a group or series of distinct points drawn at exactly the same distance from another point referred to as the center. The distance from the center C to one of these equidistant points is called the radius, R. The line that connects all of the points that are equidistant from the center is called the circumference of the circle. The diameter is the distance between two points of the circumference measured through the center; in other words, a diameter is double the radius.
To manage the measurements and other related operations, the circumference is divided into 360 portions. Each of these portions is called a degree. The unit used to represent the degree is the
degree, written as ˚. Therefore, a circle contains 360 degrees, that is, 360˚. For example, the arc between two points A and B of the circumference could span 15 portions of the circumference. In this case, its measurement would be represented as 15˚.
The distance between two equidistant points A and B is a round shape geometrically defined as an arc. An angle is the ratio of the distance between two points A and B of the circumference divided by the radius R. This can be written as:

Angle = AB / R

Therefore, an angle is the ratio of an arc over the radius. Because an angle is a ratio and not a “physical” measurement, which means an angle is not a dimension, it is independent of the size of a circle. This angle represents the number of portions delimited by the two points. A better unit used to measure an angle is the radian, or rad.
A cycle is a measurement of the rotation around the circle. Since the rotation is not necessarily complete, depending on the scenario, a measure is made based on the angle that was covered during the rotation. A cycle could cover part of the circle, in which case the rotation would not have been completed. A cycle could also cover the whole 360˚ of the circle and continue thereafter. A cycle is equivalent to the angle measured in radians divided by 2 * Pi.
The word п, also written as Pi, is a constant number used in various mathematical calculations. Its approximate value is 3.1415926535897932.
To support the Pi constant, the Math class is equipped with a constant named PI.
A diameter is two times the radius. In geometry, it is written as 2R. In C#, it is written as 2 * R or R * 2 (because the multiplication is symmetric). The circumference of a circle is calculated by multiplying the diameter by Pi, which is 2Rп, or 2 * R * п or 2 * R * Pi.
A full circle covers 2Rп/R radians or 2Rп/R rad, which is the same as 2п rad or 2 * Pi rad.
To perform conversions between the degree and the radian, you can use the formula:
360˚ = 2п rad which is equivalent to 1 rad = 360˚ / 2п = 57.3˚.
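This conversion rule is easy to check in code. Here is a quick sketch (in Python rather than the tutorial's C#, purely for brevity; the function names are my own):

```python
import math

def deg_to_rad(degrees):
    # 360 degrees = 2 * pi rad, so 1 degree = pi / 180 rad
    return degrees * math.pi / 180.0

def rad_to_deg(radians):
    # 1 rad = 360 / (2 * pi) degrees, approximately 57.3 degrees
    return radians * 180.0 / math.pi

print(rad_to_deg(1.0))    # ~57.2958, matching the 57.3 quoted above
print(deg_to_rad(180.0))  # ~3.14159, i.e. pi
```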
Consider the following geometric figure:
Consider AB the length of A to B, also referred to as the hypotenuse. Also consider AC the length of A to C which is the side adjacent to point A. The cosine of the angle at point A is the ratio AC/
AB. That is, the ratio of the adjacent length, AC, over the length of the hypotenuse, AB:
The returned value, the ratio, is a double-precision number between -1 and 1.
To calculate the cosine of an angle, the Math class provides the Cos() method. Its syntax is:
public static double Cos(double d);
Here is an example:
using System;

class Program
{
    static int Main()
    {
        int number = 82;

        Console.WriteLine("The cosine of {0} is {1}", number, Math.Cos(number));
        return 0;
    }
}
This would produce:
The cosine of 82 is 0.949677697882543
Press any key to continue . . .
Consider AB the length of A to B, also called the hypotenuse to point A. Also consider CB the length of C to B, which is the opposite side to point A. The sine represents the ratio of CB/AB; that is,
the ratio of the opposite side, CB over the hypotenuse AB.
To calculate the sine of a value, you can call the Sin() method of the Math class. Its syntax is:
public static double Sin(double a);
Here is an example:
using System;

class Program
{
    static int Main()
    {
        double number = 82.55;

        Console.WriteLine("The sine of {0} is {1}", number, Math.Sin(number));
        return 0;
    }
}
This would produce:
The sine of 82.55 is 0.763419622322519
Press any key to continue . . .
Consider AC the length of A to C. Also consider BC the length of B to C. The tangent is the result of BC/AC; that is, the ratio of BC over AC. To assist you with calculating the tangent of a number, the Math class is equipped with a method named Tan whose syntax is:
public static double Tan(double a);
Here is an example:
using System;

class Program
{
    static int Main()
    {
        uint number = 225;

        Console.WriteLine("The tangent of {0} is {1}", number, Math.Tan(number));
        return 0;
    }
}
This would produce:
The tangent of 225 is -2.53211499233434
Press any key to continue . . .
Consider BC the length of B to C. Also consider AC the length of A to C. The arc tangent of the ratio BC/AC gives back the measure of the angle at A. To calculate the arc tangent of a value, you can use the Math.Atan() method. Its syntax is:
public static double Atan(double d);
Here is an example:
using System;

class Program
{
    static int Main()
    {
        short number = 225;

        Console.WriteLine("The arc tangent of {0} is {1}",
                          number, Math.Atan(number));
        return 0;
    }
}
This would produce:
The arc tangent of 225 is 1.56635191161394
Press any key to continue . . . | {"url":"http://www.functionx.com/csharp/topics/math.htm","timestamp":"2014-04-19T11:56:49Z","content_type":null,"content_length":"47209","record_id":"<urn:uuid:7eefebd0-9936-4a3c-82ed-28abd445700c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
technical noise measurement question - diyAudio
Hi cuibono,
I wouldn't worry about the different noise levels you mentioned (-75dB, -87dB, ...). Trust your ears and you'll be more than satisfied.
I have done a lot, doing right now some and will possibly doing in the future some more FFTs (programming and applications using it) and it's most of the time a question of the used window type
(before you calculate the FFT itself) the used "processing unit" (i.e. an 8 bit micro controller, a fixed point DSP or, meanwhile, a floating-point DSP) and further methods (like time averaging of
the FFT bins and so on).
Since you mentioned the Zoom H2 (where I'm not familiar with) as the recording source, capable of recording with 24 bit at 96kHz (that's all I could figure out in a hurry about this little thingy), I
assume the issue will not be the ADC. It's more likely the algorithm/method of deriving the FFT from the input data (this points directly to the window application). There are certain types of
windows used prior to running the FFT, for instance Blackman, Hamming, von Hann (aka Hanning), Kaiser-Bessel, etc., just to name a few of them. These windows determine, besides other things, the height of the sidelobes - or the noise floor. Whatever your special interest is about the signal you want to put your magnifying glass on, this determines the proper selection of your window type.
As long as you don't have the choice to select the window type, or to get any info about what window type is used in a particular application, I would be very careful when interpreting such numbers.
If you're still interested / keen on reading more about FFTs and why windowing of the input data is necessary, just 'google' for "FFT" and "windowing". You should come up with a couple of pages explaining it for you.
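To make the windowing point concrete, here is a small self-contained sketch (Python; the 64-point size, the half-bin tone frequency, and the direct DFT are illustrative choices of mine, nothing to do with the Zoom H2):

```python
import cmath, math

def dft_mag(x):
    # Direct DFT magnitude; fine for small N, used here only to illustrate
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) / N for k in range(N)]

N = 64
# A sine whose frequency falls BETWEEN bins -- the worst case for leakage
sig = [math.sin(2 * math.pi * 5.5 * n / N) for n in range(N)]

# Periodic von Hann (Hanning) window
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]

rect_spec = dft_mag(sig)                                 # no window
hann_spec = dft_mag([s * w for s, w in zip(sig, hann)])  # Hann window

# Far from the tone (bin 25) the rectangular "window" leaves much more
# leakage -- a raised noise floor -- than the Hann window does.
print("leakage at bin 25: rect = %.2e, hann = %.2e"
      % (rect_spec[25], hann_spec[25]))
```

Swapping in other windows (Hamming, Blackman, Kaiser-Bessel) trades sidelobe level against main-lobe width in exactly the way described above.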
After all, my final rule would be: Never trust what you haven't cheated or manipulated yourself | {"url":"http://www.diyaudio.com/forums/equipment-tools/115248-technical-noise-measurement-question.html","timestamp":"2014-04-19T08:37:34Z","content_type":null,"content_length":"83679","record_id":"<urn:uuid:27b3b6d2-918b-4c54-a92a-3ac0abb62682>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
Minimum Growth Rate of Hamming Weight of Multiples of Primitive Polynomials
Let $F_2[x]$ denote the ring of polynomials over the field of 2 elements.
Richard Brent has a page on finding primitive trinomials in $F_2[x]$ of huge degree at http://maths.anu.edu.au/~brent/trinom.html.
My problem is different, I want to find primitive polynomials none of whose multiples--which are of course not primitive--are low weight.
Let the Hamming weight $W(f)$ of a polynomial $f \in F_2[x]$ be the number of nonzero coefficients it has, so a trinomial has Hamming weight 3. Let $P_n$ be the set of primitive polynomials in $F_2
[x]$ of degree $n$. Let $N=2^n-1.$ For $f \in P_n$ let
$W_{min}(f)=\min ( W( f(x) a(x))~:~deg(a) \in [2,2^n-n-1] ).$
For each $n\geq 3$ let $W_n=\max ( W_{min}(f): f \in P_n ).$ Is anything known about the growth rate of $W_n$?
Essentially, $W_n$ represents how good the best possible primitive polynomial of degree $n$ is. The application is to cryptosystems which use primitive LFSRs as components. If there is a low weight
multiple [lowest possible weight being 3] then there are linear parity checks between output bits that can be exploited for an attack.
There is related work [Blake, Gao, Lambert: "Construction and Distribution Properties for Irreducible Trinomials over Finite Fields", 1994 Finite Fields and Applications Conference, see citeseer]
which shows that given $n$ there are $1\leq k < m \leq 2^n-1,$ such that $\gcd(x^m+x^k+1,x^{2^n-1}+1)=h(x)$ for some $h(x) \in P_n$. In fact, experimentally, for $n$ up to 500, $m$ is not much larger
than $n$ in most cases.
I find this problem similar to the following: given n odd, find l and k at most n such that n divides 2^l + 2^k + 1. While the arithmetic is different, I would be surprised if in your problem the
growth rate is larger than log(n), and I expect more like log(log(n)). But that is a guess on my part. Start by considering x^k mod f and see how few k you need to add up to something with low
Hamming weight which is also 0 mod f. Gerhard "Ask Me About System Design" Paseman, 2011.09.07 – Gerhard Paseman Sep 8 '11 at 3:03
You are trying to compute the minimal distance of certain cyclic codes. Look first in MacWilliams and Sloane. – Felipe Voloch Sep 8 '11 at 10:21
@Felipe: Almost but not quite. The code is not cyclic, because those multiples of the generator polynomial $f(x)$, where the other factor $a(x)$ is at most linear are apparently left out. Anyway,
I'm fairly sure that you are correct in that this sequence $W_n$ is bounded. I need to think of suitable coding theoretical bound... – Jyrki Lahtonen Sep 11 '11 at 14:22
Scratch the use of the parity check matrix. Combine Felipe's idea with a counting argument similar to the scratched solution.
Assume $n\ge4.$ Let $\alpha$ be a root of $f(x)$. The set $P=\{ \alpha^i\mid n+2\lt i\lt N \}$ contains $N-n-3\ge 8$ distinct elements of the field $GF(2^n)$. Therefore it must intersect non-trivially with the set $S=\{ 1+\alpha^i\mid 0\lt i\lt 2^n-1\}=GF(2^n)\setminus\{0,1\}$. So $\alpha^j=1+\alpha^i$ for some $i,j$, $j\ge n+2$. The trinomial $p(x)=1+x^i+x^j$ is thus divisible by $f(x)$, and the degree of the quotient $a(x)=p(x)/f(x)$ is in the prescribed range. Therefore $W_{min}(f)=3$ for all primitive polynomials $f$. The case $n=3$ can be checked easily, and the same conclusion holds.
$W_{min}(f)$ is independent of $f$. So $W_n$ is just the minimal weight among all polynomials of $P_n$, which is conjectured to be at most $5$. I am not sure what's the best proved upper bound.

Let me show, for example, that if there is a trinomial $g=x^n + x^k +1 \in P_n$, then $W_{min}(f)=3$.

Let $\zeta$ be a root of $f$ in an extension field. Then $\zeta^r$ is a root of $g$ for some $r$. Define $u \equiv rn, v \equiv rk \mod N$, $u,v < N$. Then $\zeta$ is a root of $x^u+x^v+1$, so this polynomial is divisible by $f$ and $W_{min}(f)=3$.
+1: A nice idea. Combining this with a counting argument (see my answer) shows that $W_{min}(f)=3$ irrespective of whether there is a trinomial in $P_n$. – Jyrki Lahtonen Sep 13 '11 at 7:08
Really stuck please help
March 8th 2007, 01:38 PM #1
Feb 2007
Really stuck please help
Hey people,
im really stuck on this and wondered if you could help please.
Q: A heavy crate of mass m is pulled along a rough horizontal surface at a constant speed by a rope. The coefficient of friction between the crate and surface is u (substituted for mew as i dont
know how to insert that)
A) Show that T = umg/(cos(theta)+usin(theta).
B) By finding dT/d(theta) and letting dT/d(theta) = 0, show that for a minimum value of T , tan(theta) = u. ( you may assume that, for a maximum value of T, (theta) = 0).
any help would be really greatful guys as im really stuck.
Last edited by the_sensai; March 8th 2007 at 03:44 PM. Reason: forgot diagram
So where is the diagram?
Still no diagram, but I think I know what's going on here.
First of all, the Greek letter "mew" (shudders) is spelled "mu." It's not a cat!
My diagram has a crate being dragged to the right by a rope. The rope makes an angle t (in place of theta) with the horizontal.
My Free-Body Diagram on the crate has a weight (w) acting directly downward, a normal force (N) acting directly upward, a kinetic friction force (f) acting to the left, and a tension (T) acting
upward and to the right at an angle t with the horizontal. I have a +x axis to the right and a +y axis directly upward. The coefficient of kinetic friction is u.
The speed is constant, so the acceleration is 0 m/s^2. Thus Newton's 2nd in the x and y directions say:
SumFx = -f + Tcos(t) = 0
SumFy = N - w + Tsin(t) = 0
N = w - Tsin(t) = mg - Tsin(t)
f = uN = umg - uTsin(t)
-(umg - uTsin(t)) + Tcos(t) = 0
-umg + uTsin(t) + Tcos(t) = 0
T(usin(t) + cos(t)) = umg
T = umg/(usin(t) + cos(t))
For the second part, find dT/dt:
dT/dt = -umg/(usin(t) + cos(t))^2 * (ucos(t) - sin(t))
Setting dT/dt = 0 gives:
-umg(ucos(t) - sin(t))/(usin(t) + cos(t))^2 = 0
So the numerator must be 0 (as long as the denominator isn't 0 for this angle). Thus
ucos(t) - sin(t) = 0
u = tan(t)
as advertised.
I'll leave it to you to prove if this is a max or min value for T.
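For anyone following along, the closed form and the tan(theta) = u condition can be confirmed numerically. A quick Python check (u = 0.4, m = 10 kg, g = 9.8 m/s^2 are assumed example values, not from the problem):

```python
import math

u, m, g = 0.4, 10.0, 9.8

def tension(theta):
    # T = u*m*g / (cos(theta) + u*sin(theta)), from the force balance above
    return u * m * g / (math.cos(theta) + u * math.sin(theta))

# Scan angles between 0 and 90 degrees for the smallest required tension
thetas = [k * (math.pi / 2) / 10000 for k in range(1, 10000)]
best = min(thetas, key=tension)
print(math.tan(best))  # ~0.4, i.e. tan(theta) = u at the minimum
```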
Reference Request: a paper by Yoseloff about a proof of Sperner's Lemma
Dear Overflow,
Apologies in advance if I'm posting this in the bad place, but I was hoping some of you could point out to me a place where I could read online the following paper by Yoseloff, where he proves
Sperner's Lemma using Brouwer's fixed point theorem (not the other way around).
The exact reference is: M. Yoseloff, Topologic Proofs of some Combinatorial Theorems, J. Combinatorial Theory Ser., A 17 (1974), 95-111.
Thank you.
sciencedirect.com/science/article/pii/0097316574900314 – darij grinberg Jan 5 '13 at 19:31
Generally, since the Elsevier boycott bore fruit and Elsevier had to open up access to most of its mathematics journals, sciencedirect.com is a good place to find a lot of mathematical papers, if
you don't mind some of them being scanned very badly (I've even seen a few unintelligible ones). Unfortunately either Google is bad at indexing sciencedirect, or Elsevier is bad at making their
site indexable; either way you don't usually find the sciencedirect page when you google for a paper. Probably the fastest way to get to the paper is googling the name of the journal (complete,
... – darij grinberg Jan 5 '13 at 19:34
... no abbreviations), the year, the issue, the pages, and "sciencedirect". E. g., in this case: Journal of Combinatorial Theory, Series A 17 (1974), 95-111 sciencedirect – darij grinberg Jan 5
'13 at 19:35
Thanks a lot, Darij!! – Cosmin Pohoata Jan 8 '13 at 10:09
Rank of a matrix
November 30th 2010, 03:57 PM #1
Nov 2010
Rank of a matrix
Let A be an m x n matrix. Show that if B is n x p, then rank AB <= rank B.
the matrix AB will be a m x p matrix.
I think Rank A is at most n while Rank AB is at most p.
I think I'm supposed to show that p<= n somehow, but I'm not really sure.
I have no idea how to approach this problem. Hints and Help would be much appreciated.
I have no idea how to approach this problem. Hints and Help would be much appreciated.
Which definition of rank are you most comfortable with?
Are you comfortable with the idea that as a linear transformation then $\text{rnk} A=\dim A\left(V\right)$?
I think I have this.... I don't know how to use TeX. Sorry.
Let the columns of A be [A1 A2 ... An] and the columns of B be [B1 B2 ... Bp].
Then the ith column of the matrix AB will be:
ABi = [A1 A2 ... An]Bi = A1*b1i + A2*b2i + .... + An*bni
From this, I can see that the span of the columns of AB is contained within the span of the columns of A (or equals it in special cases). So:
Col AB <= Col A
Dim Col AB <= Dim Col A
Rank AB <= Rank A
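As a numerical sanity check (not part of the original thread), the inequality rank(AB) <= min(rank(A), rank(B)) can be spot-checked in Python; exact rank is computed by row reduction over the rationals, and the 4x3 and 3x5 shapes are arbitrary choices:

```python
from fractions import Fraction
import random

def rank(M):
    # Row-reduce a copy of M over the rationals and count the pivots
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                fac = A[i][c] / A[r][c]
                A[i] = [a - fac * b for a, b in zip(A[i], A[r])]
        r += 1
        if r == rows:
            break
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

random.seed(0)
for _ in range(100):
    A = [[random.randint(-2, 2) for _ in range(3)] for _ in range(4)]  # 4x3
    B = [[random.randint(-2, 2) for _ in range(5)] for _ in range(3)]  # 3x5
    AB = matmul(A, B)
    assert rank(AB) <= min(rank(A), rank(B))
print("rank(AB) <= min(rank(A), rank(B)) held in all trials")
```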
Posts by
Total # Posts: 231
Hello I need some ideas to elaborate on in an essay about the following question "1.According to Frankenstein what makes the Creature more human, his weaknesses or his strengths?Why?" Now I believe
it is his weaknesses but i dont know what Victor Frankens...
Beacon Height
what is the time for the first clock
What is a basic draft for a claim (Toulmin) about the theme. For example, "The theme of the piece is "blah-blah" " doesn't have an argument. What does?
50 x 50 x 50 x 50
maths lite,life science,geography and tourism
What career can I take
Grammar Advisor
Are these correct? I mark my answer with an X. 88. The reason to list all of the preliminary (non-procedural) information in a lesson plan is to be sure have considered each aspect. keep good records
for your future needs. be able to communicate clearly with administrators, ot...
Grammar Advisor
I just cannot figure out if these answers are correct or not. Can you please help me? 78. If the sound system hadn t failed, last night s show would have been better. This is a correct sentence using
the 1st conditional form. x This is a correct sentence using the 2n...
what are the 5 number combo in 1 to 39
Which of the following is an environmental concern of genetically engineered crops mentioned in your text? (Points : 1) Genetically modified crops lead to larger applications of toxic herbicides and
insecticides. The genetic modification of crops increases soil erosion while d...
Joan receives an annual salary of $22,768.68. What is the total amount of her gross pay each period if she is paid monthly?
Erma is paid $516.72 biweekly. What are her annual earnings?
Sheila received a 6.5% commission on a house she sold for $110,400. How much commission did she receive?
american history
Under the _______ system, Spanish conquistadors were rewarded with local villages and control over the local labor force
Explain whether the following statement is true or false: The ordered pair (1,5) is the same as the order pair (5,1).
can any one add or subtact this (m^2 - m - 4) + (m - 5)
Combining Inequalities
Some treasure has been buried at a point(x,y)on the grid, where x and y are whole numbers. Here are three clues to help you to find the treasure: Clue 1:x>2 Clue 2:x+y<8 Clue 3: 2y-x>0
Which of the following is true about an object that floats in water? (1 point) The object is less dense than frozen water. The mass of the object is less than that of water. The density of the object
is less than that of water. The volume of the water is greater than the objec...
the floor area of a rectangular storm shelter is 65 square meters, and its length is 6.5 meteres. What is the width of the storm shelter
Physical Science
An exercise spring has a spring constant of 315 N/m. How much work is required to stretch the spring 30 cm? What force is needed to stretch the spring 30 cm?
8th Grade Science
Label these reactants and products: 3Fe + 2 O2> Fe3O4
thank you!
I am moving to a new house. My old house is 3 kilometers from my new house. How many meters is the old house from the new house?
Sources of stress do not automatically cause stress-related problems. Moderators of stress, which can include our personal strengths, might mean that we can have a healthier and more productive life
in spite of life's challenges, or maybe even because of them. As the text ...
4 gr
thank u :)
4 gr
forty people are lined up to ride the ski lift to the top of the mountain. Each chair on the holds 3 people. How many chairs will it take to get everyone up the mountain?
One band has fewer than 40 speakers. They stack them in a array exactly 9 high. How many speakers could they have? Explain how you know.
Another band has 48 speakers. They stack them at least 4 high, but no taller than 6 high. What are all the different arrays they can make?
One band has 30 speakers. They stack them at least 3 high, but no taller than 6 high. What are all the different arrays they could make?
thank u :)
Rock bands often stack their speakers in an array. One teen band has 24 speakers. They stack them at least 2 high, but no taller than 8 high. What are all the different arrays they can make.
another band has 48 speakers. they stack them at 4 high, but no taller than 6 high. what are all the different arrays they can make?
another band has 30 speakers. they stack them at least 4 high, but no taller than 6 high. what are all the different arrays they could make?
social studies
Which of the following best describes the difference between a keyword and subject search? Please make selection! A keyword search uses subject headings, but a subject search does not A subject
search can retrieve false hits, but a keyword search does not A subject search uses...
The ----------- definition of gloomy is dismal or depressing.
In a raffle there are four winning tickets from 12, What is the probability from six tickets there are two winners? Thank you in advance.
LIfe orientation
Identify and descriibe one environmental problem that causes ill health
Life Orientation
Describe one environmental problem that causes ill health,accidents,crises and disasters
how many ml of a 1 M NaOH solution do you need to add to a 250 ml flask and bring it to volume with deionized water to make a 0.12 M NaOH solution
A piece of metal at 80 degrees celsius is placed in 1.2 L of water at 72 degrees celsius. The system is thermally isolated and reaches a final temperature of 75 degrees celsius. Estimate the
approximate change in entrophy for this process.
Explain how the specific heats of materials are measured using the technique of calorimetry.
Heat is added to an ideal gas at 20 degrees Celsius. If the internal energy of the gas increases by a factor of three during this process, what is the final temperature?
The target capital structure for QM Industries is 39% common stock 6% is preferred stock, and 55% debt. If the costs of common equity for the firm is 18.2%, and the cost of preferred stock is 9.4%,
the before tax cost of debt is 7.5%, and the firms tax rate is 35% what is QM...
One bag contains 2 green marbles and 4 white marbles and a second bag contains 3 green marbles and 1 white marble. If Jon randomly draws one marble from each bag what is the probability that they are
both green?
At a city park, a person throws some bread into a duck pond. Two 4.00-kg ducks and a 9.00-kg goose paddle rapidly toward the bread, as shown in our sketch (Intro 1 figure) . If the ducks swim at 1.10
m/s, and the goose swims with a speed of 1.30 m/s, the total momentum of the ...
The circumference of a sphere was measured to be 74 cm with a possible error of 0.5 cm. (a) Use differentials to estimate the maximum error in the calculated surface area. (Round your answer to the
nearest integer.) cm2 What is the relative error? (Round your answer to three d...
Find the value of constant n if the lines y=(n-2)x+12 and 3x-4y+5=0 are: a) parallel and b) perpendicular to each other. Thanks in advance.
Basic Math
How do you solve: 5/30 + 3/40 + 1/8
A diver has a mass of 75 kg takes part in a competition by having to perform 10 dives off the 10m platform above the water. Before he takes part he snacks on an energy bar (150 g) which provides 15
kj of energy. How many would the diver need to consume to meet the necessary en...
Thank you so much Reiny =]
Winnebagel Corp. currently sells 33,000 motor homes per year at $49,500 each, and 13,200 luxury motor coaches per year at $93,500 each. The company wants to introduce a new portable camper to fill
out its product line; it hopes to sell 23,100 of these campers per year at $13,...
For which salt in each of the following groups will the solubility depend on pH? Pb(OH)2 PbCl2
How many moles of carbon tetrahydride are needed to produce 3.59 moles of water
find 45 of 60
How you can use a hundred chart to subtract 12 from 46
Create the logic for a program that continuously prompts the user for a number of dollars until the user enters 0. Pass each entered amount to a conversion method that displays a breakdown of the passed amount into the fewest bills.
The combustion of 0.1619 g benzoic acid increases the temperature of a bomb calorimeter by 2.77°C. Calculate the heat capacity of this calorimeter. (The energy released by combustion of benzoic acid
is 26.42 kJ/g.) kJ/°C A 0.1510 g sample of vanillin (C8H8O3) is then b...
Consider the dissolution of CaCl2. CaCl2(s) Ca2+(aq) + 2 Cl-(aq) ΔH = -81.5 kJ A 13.0 g sample of CaCl2 is dissolved in 134 g of water, with both substances at 25.0°C. Calculate the final temperature
of the solution assuming no heat lost to the surroundings and assumi...
What is the next number in the sequence 270, 27, 2.7,0.27
A car can slow down at 5.10 m/s2 without skidding when coming to rest on a level road. What would its acceleration be if the road were inclined at 12o uphill? [Ask yourself What is the same about
the situation on the level road and on the incline?]
A car can slow down at 5.10 m/s2 without skidding when coming to rest on a level road. What would its acceleration be if the road were inclined at 12o uphill?
find the product or quotient. write the strategy you used, write model, break apart, multiplication table, inverse operations, pattern, or doubles
college physics
A 4.5 cm long insect is perpendicular to the optical axis of a thin lens; the bug is 5.4 cm away from the lens. The lens forms an inverted image of the insect which is 0.9 cm long on a screen behind
the lens. a. How far behind the lens is the screen? b. What is the lens' f...
college physics
I know you need to use geometry but i don't know how a room which is h = 9.0 feet tall and r = 12.0 feet wide. Attached to the ceiling is a piece of glass (with unknown index of refraction) which is
2.5 feet thick. A laser pointer in the bottom left corner is aimed so that...
What is the value of the underlined digit 6 in 4,600,028
4th grade math
what is the value of the underlined digit 6.... 4,600,028
true or false only animals undergo respiration
which of these are strong electrolytes, weak electrolytes and non electrolytes. H2SO4 FeCl2 HClO AgCl I put AgCl as a non H2So4 and HCl as strong FeCl2 as weak but the computer told me my answer is
wrong Can some one help please?
100 grams of glucose is mixed with 500 grams of water. 10 grams of yeast is then added to the mixture. If the reaction goes to completion, what is the mass of ethanol in the final mixture?
Is the below paragraph an evaluation? Hockenbury and Hockenbury present important information and raise some interesting questions about how daily hassles affect our lives. In a few paragraphs, they
present a great deal of information on the subject of daily hassles wh...
College Help
I am a Sophomore in High School. I have a GPA of 3.3 currently, (it's very bad) with no AP classes taken until Junior and Senior year. What are my chances of getting into UC Irvine? That's the school
I really want to get into!
Could you please tell me if the first sentence in each paragraph are topic sentences. The higher the interest rate of credit cards, mortgage or vehicles, the less cash we have in our pockets. When
the interest rate of savings is higher that means more money in your pocket. If ...
Just to clarify, 18/2 = 9, therefore it's in positions 9 and 10? Then add them up and divide by two again? This is only for an even number of data points, right?
The following data represent what 18 people said when they were asked to estimate the crowd at a public gathering: 325 450 500 500 550 575 575 600 600 650 650 700 700 700 725 750 800 900 State the
mean, mode, median Mean: 325+450+500+500+550+575+575+600+ 600+650+650+700+700+70...
Two similar triangles. 1) 5.8m, 5m long,3m high 2)?m, 28m long, 16.8m high. Find the length of VR, the direct distance of the river, to the nearest tenth.
Find the length of VR, the direct distance across the river, to the nearest tenth.
The Middle East is the spiritual homeland for all of the following world religions EXCEPT
7th gr. Algebra
what is the meaning of a relative frequency of 1?
A car moves with a speed v on a horizontal circular highway turn of radius is R = 100. Assume the height of the car s center of mass above the ground is h = 1 m, and the separation between its inner
and outer wheels (car's width) is w = 2 m. The car does not skid (sta...
Which of the following are linear? Explain a) y + -2/3x b) y=1/x c) y=x(to the second power)+8x+15
a block prism has a volume of 36 cubic units, what is the least and greatest surface area it could have
Math - Calculus
Show that the equation x^3-15x+c=0 has at most one root in the interval [-2,2].
Math - Calculus
Show that the equation x^3-15x+c=0 has at most one root in the interval [-2,2]. Perhaps Rolle's Theorem, Mean Value Theorem, or Intermediate Value Theorem hold clues? ...Other than simply using my
TI-84, I have no idea how to accomplish this.
Physics Craziness
A big olive (m = 0.40 kg) lies at the origin of an xy coordinate system, and a big Brazil nut (M = 1.2 kg) lies at the point (1.0, 2.0) m. At t = 0, a force Fo = (2.0î + 3.0ĵ) N begins to act on the olive, and a force Fn = (-3.0î - 2.0ĵ) N begins to act on the nut. In unit-vector not...
so for part b: d·sin(theta) = m·λ. When you plug in the d from part a, λ, and 90 degrees for theta (the maximum theta), you get m = 6.8. This is for one side of the slit. You must subtract the missing fringes (the 4th one): you get m = 5, so m = 10 for both sides of the screen, and add on...
A 91 kg man lying on a surface of negligible friction shoves a 75 g stone away from himself, giving it a speed of 4.0 m/s. What speed does the man acquire as a result?
Left over from the big-bang beginning of the universe, tiny black holes might still wander through the universe. If one with a mass of 3.5 × 10^11 kg (and a radius of only 1.0 × 10^-14 m) reached Earth, at what distance from your head would its gravitational pull on you match that o...
Basic Math
what is 5/30 + 3/40 + 1/8
Olivia picked 2.5 kilograms of cherries. She divided the cherries evenly among 10 small baskets. How much did each basket hold?
Can I replace the word to with from in the following sentence: Laura often lost her homework on the way to school.
Physics- Angular Momentum
Hi Miki! I think we are taking the same physics class... I'd like to possibly share insights about the material and possibly help each other out in a mutually beneficial way. If you are interested,
shoot me an email at William dot Cordoba at yaho dot com Thanks William
What two numbers have a product of 3.6 and a sum of 3.8?
Need ten names of famous men and women who are of spanish descent
how many grams of Na2CO3 must be dissolved into 155 g of water to create a solution with the molality of 8.20 mol/kg?
A manufacturer sells two products, one at a price of $3000 a unit and the other at a price of $12000 a unit. A quantity q1 of the first product and q2 of the second product are sold at a total cost
of $5000 to the manufacturer. Express the manufacturer's profit, as a funct...
6TH GRADE MATH
Repost perimeter is 30 and area 36
Math 6th grade
What is the answer the perimeter is 30 and the area is 3??
physics help
An auto race is held on a circular track. A car completes one lap in a time of 16.9 s, with an average tangential speed of 48.1 m/s. Find the following (a) the average angular speed (b) the radius of
the track i found part a which was .3716 but need help with part b please
help me physics
The femur is a bone in the leg whose minimum cross-sectional area is about 3.60 × 10^-4 m². A compressional force in excess of 6.90 × 10^4 N will fracture this bone. (a) Find the maximum stress that this bone can withstand, in N/m². (b) What is the strain that exists under a maximum-stre...
Pages: 1 | 2 | 3 | Next>> | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=William","timestamp":"2014-04-16T14:44:12Z","content_type":null,"content_length":"28309","record_id":"<urn:uuid:cc364b85-32fe-4ee1-8489-6157a2398d7f>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00166-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: [xsl] What is the shortest expression converting an xs:double to xs:integer?
RE: [xsl] What is the shortest expression converting an xs:double to xs:integer?
Subject: RE: [xsl] What is the shortest expression converting an xs:double to xs:integer?
From: "Michael Kay" <mike@xxxxxxxxxxxx>
Date: Sat, 4 Dec 2004 09:30:51 -0000
You could use
11 to (f:sqrt($pNum, 0.1E0) + 0.5) idiv 1
but it's not a great improvement.
I'm trying to remember why round() returns a double - I think it's because
of the problem of numbers that are too large for an xs:integer.
Michael Kay
> -----Original Message-----
> From: Dimtre Novatchev [mailto:dnovatchev@xxxxxxxxx]
> Sent: 04 December 2004 00:53
> To: xsl-list@xxxxxxxxxxxxxxxxxxxxxx
> Subject: [xsl] What is the shortest expression converting an
> xs:double to xs:integer?
> In my code I had to use this expression:
> (11 to xs:integer(round(f:sqrt($pNum, 0.1E0))))
> Can it be expressed in a more simple way?
> Cheers,
> Dimitre | {"url":"http://www.biglist.com/lists/lists.mulberrytech.com/xsl-list/archives/200412/msg00184.html","timestamp":"2014-04-17T22:02:30Z","content_type":null,"content_length":"4366","record_id":"<urn:uuid:28e98418-e5f1-46fb-a400-8e1f352880ff>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
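Outside XPath the same trade-off shows up in any language; here is a small Java illustration of round-then-cast versus the add-0.5-and-truncate trick (my own example, not from the thread):

```java
public class RoundDemo {
    public static void main(String[] args) {
        double x = 6.7;
        // XPath-style round() followed by a cast to an integer type...
        long viaRound = Math.round(x);        // 7
        // ...versus the "(x + 0.5) idiv 1" trick: add 0.5, then truncate.
        long viaTrunc = (long) (x + 0.5);     // 7
        System.out.println(viaRound + " " + viaTrunc);
        // The two can disagree for negative values: truncation rounds toward
        // zero, so -2.7 + 0.5 = -2.2 truncates to -2, while Math.round(-2.7)
        // gives -3.
        System.out.println(Math.round(-2.7) + " " + (long) (-2.7 + 0.5));
    }
}
```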
If 25 lines are drawn in a plane such that no two of them are parallel

Amardeep Sharma (Director; joined 13 Dec 2006; Indonesia) — 06 Dec 2007, 05:04

If 25 lines are drawn in a plane such that no two of them are parallel and no three are concurrent, then in how many points do they intersect?

A. 2300
B. 600
C. 250
D. 300
E. none of these
walker (CEO; joined 17 Nov 2007; Schools: Chicago (Booth), Class of 2011):

Let's try drawing the lines one by one:
1st line - 0 points
2nd line - 1 new point
3rd line - 2 new points + 1 old point
4th line - 3 new points + 2+1 old points
5th line - 4 new points + 3+2+1 old points
n-th line - (n-1) new points + (n-2) + ... + 3 + 2 + 1 old points
Therefore S = 1 + 2 + 3 + ... + (n-1), so
S = (n-1)n/2 = 24*25/2 = 300
marcodonzelli (VP; joined 22 Nov 2007) — Re: Permutation & combination — 06 Dec 2007, 05:27

It is very hard. I can only say that the answer must not be E, because "no two of them are parallel", so they must meet in some point.

marcodonzelli:

Can you explain your reasoning step by step, very slowly?
walker:

marcodonzelli wrote:
Can you explain your reasoning step by step, very slowly?

No problem — less than 2 min. Another way, even easier and faster: one line has 24 intersections. We have 25 lines. Therefore the number of intersection points is 24*25/2 (divided by 2 because otherwise we would count each point twice).
alexperi (joined 13 Jun 2007):

The question should state that the lines are of infinite length; otherwise the answer could be anything.
walker:

alexperi wrote:
The question should state that the lines are of infinite length; otherwise the answer could be anything.

See the difference between "line" and "line segment" in geometry.
Re: Permutation & combination — 08 Dec 2007, 03:39 (poster joined 11 Aug 2007)

I got D: since any 2 lines have exactly 1 intersection with each other, we need to find the number of ways to choose 2 out of 25: 25C2 = 25*24/2 = 300.
x2suresh (SVP; joined 07 Nov 2007; New York) — Re: Permutation & combination — 25 Aug 2008, 12:26

2 lines form 1 intersection point = 2C2
3 lines form 3 intersection points = 3C2
...
25 lines form 25C2 intersection points = 25*12 = 300.
srivas (Manager; joined 27 Oct 2008) — Re: Permutation & combination — 27 Sep 2009, 20:12

Soln: Since no three are concurrent, any point formed by two different lines is distinct.
The first line intersects each of the other 24 lines at 24 points. => statement 1
The second line intersects each of the other 23 lines at 23 points; its point with the first line has already been counted in statement 1.
The third line intersects each of the other 22 lines at 22 points,
and so on.
Thus the total number of points is
= 24 + 23 + 22 + ... + 1
= 24 * 25/2
= 300
Ans is D
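As a sanity check on the counting argument above, one can tally the sum 24 + 23 + ... + 1 directly and compare it against 25C2; a small Java sketch (class and method names are my own):

```java
public class LineIntersections {
    // Each new line crosses every earlier line once, so line k adds (k - 1) points.
    static int bySumming(int n) {
        int total = 0;
        for (int k = 2; k <= n; k++) total += k - 1;
        return total;
    }

    // Every pair of lines gives exactly one distinct point: C(n, 2).
    static int byChoosing(int n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(bySumming(25));  // 300
        System.out.println(byChoosing(25)); // 300
    }
}
```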
jeeteshsingh (Senior Manager; joined 22 Dec 2009) — Re: Permutation & combination — 16 Feb 2010, 07:41

No three lines concurrent and no two lines parallel gives us the info that every line intersects every other line, and no intersection point is common. Hence the number of intersection points = 25C2 = 300 = D.
VeritasPrepKarishma (Veritas Prep GMAT Instructor; joined 16 Oct 2010; Pune) — Re: If 25 lines are drawn in a plane such that no two of them — 24 Nov 2013, 20:42

Responding to a pm:

We need to draw lines such that they are not parallel. Why is "not parallel" important? Any two distinct lines drawn in the xy plane will either be parallel or will intersect in exactly one point. Lines can be extended infinitely on both ends, so somewhere they will intersect with each other if they are not parallel. Since any given two lines are not parallel, they must intersect at exactly one point. So every pair of two lines will intersect at exactly one point. We are also given that no three lines are concurrent, which means that no three lines intersect at the same point. So every pair of lines we select will have a unique point of intersection which it will not share with any third line. How many such unique points of intersection do we get? That depends on how many pairs of 2 lines we can select from the 25 lines.

We can select 2 lines from 25 lines in 25C2 ways, i.e. 300 ways. Each one of these pairs will give us one unique point of intersection, so we will get 300 points of intersection.

Answer (D)
Intern (joined 19 Mar 2013) — Re: If 25 lines are drawn in a plane such that no two of them — 12 Dec 2013, 22:28

The answer is easier than it seems to be. As any two lines have exactly 1 intersection point (just draw a few non-parallel lines), we simply need to find in how many ways we can choose 2 lines out of 25.

Intern (joined 03 Mar 2013) — 13 Dec 2013, 08:57

I think Walker made it solvable. Thanks.
(joined 28 Jan 2013) — Re: If 25 lines are drawn in a plane such that no two of them — 26 Feb 2014, 16:26

No 3 lines intersect at one point, and none of them are parallel. A point is created when 2 lines intersect, so how many ways can you select 2 out of 25? 25C2 = 300.
A cosmos is a “good place in which to do category theory,” including both ordinary category theory as well as enriched category theory.
The word is chosen by analogy with topos which can be regarded as “a good place to do set theory,” but there are notable differences between the two situations; a more direct categorification of a
topos is, unsurprisingly, a 2-topos. In contrast, cosmoi also include enriched category theory, while toposes do not allow non-cartesian enrichment.
There are a number of different, inequivalent, definitions of “cosmos” in the literature.
Bénabou’s definition
Jean Bénabou's original definition was that a cosmos $V$ is a complete and cocomplete closed symmetric monoidal category. This is an ideal situation for studying categories enriched over $V$.
Street’s “fibrational cosmoi”
Ross Street has taken a different tack, defining a “cosmos” to be the collection of (enriched) categories and relevant structure for doing category theory, rather than the “base” category $V$ over
which the enrichment occurs.
In his paper “Elementary cosmoi,” Street defined a (fibrational) cosmos to be a 2-category in which internal fibrations are well-behaved and representable by a structure of “presheaf objects” (later realized to be a special sort of Yoneda structure). Note that while this includes $Cat$, it does not include $V$-$Cat$ for non-cartesian $V$, since internal fibrations are poorly behaved there.
Street’s second definition
In his paper “Cauchy characterization of enriched categories,” Street instead defined a “cosmos” to be a 2-category that behaves like the 2-category $V$-$Mod$ of enriched categories and profunctors.
The precise definition: a cosmos is a 2-category (or bicategory) such that:
• Small (weak, or bi-) coproducts exist.
• Each monad admits a Kleisli construction (analogous to the exactness of a topos).
• It is locally small-cocomplete, i.e. its hom-categories have small colimits that are preserved by composition on each side.
• There exists a small “Cauchy generator”.
These hypotheses imply that it is equivalent to the bicategory of categories and profunctors enriched over some “base” bicategory. (Note the generalization from enrichment over a monoidal category to
enrichment over a bicategory.)
Defined in this way, cosmoi are closed under dualization, parametrization and localization, suitably defined. | {"url":"http://ncatlab.org/nlab/show/cosmos","timestamp":"2014-04-20T20:56:50Z","content_type":null,"content_length":"27316","record_id":"<urn:uuid:4b0b3e1b-3718-4c94-a088-04f4a27cca9d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00353-ip-10-147-4-33.ec2.internal.warc.gz"} |
Palos Park Math Tutor
Find a Palos Park Math Tutor
...I have also tutored Geometry and Calculus students. I have a degree in Mathematics from Augustana College. I am currently pursuing my Teaching Certification from North Central College.
7 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...I have over eight years of experience tutoring math. My students have ranged from middle school to college. I pride myself on being able to push past any difficulty a student is faced with.I
love this subject and am extremely competent teaching it.
21 Subjects: including SAT math, prealgebra, GRE, GMAT
...I earned a Bachelor of Music degree from the Eastman School of Music and, shortly thereafter (in 1985) became the music director of the Chicago Philharmonic Orchestra - a position that I still
hold today - and have taught various general music courses at Governors State University. I earned a Ba...
37 Subjects: including SAT math, piano, finance, public speaking
...On the contrary, simple memorization of standard formulas and/or typical solutions is the way to binding yourself with the cords of dogmatism. I believe that I have skills and abilities to
explain complex concepts in a simple and clear way and would like to help anyone, who is interested, achiev...
18 Subjects: including differential equations, algebra 1, algebra 2, calculus
...As one of these tutors it was my responsibility to assist students with questions and to periodically check in on students who were studying in the lab. Algebra 1 (or Elementary Algebra)
introduces the use of variables in equations and usually concludes with the basic concept of functions. The ...
7 Subjects: including algebra 1, algebra 2, calculus, geometry
Nearby Cities With Math Tutor
Chicago Ridge Math Tutors
Clarendon Hills Math Tutors
Countryside, IL Math Tutors
Crestwood, IL Math Tutors
Hickory Hills Math Tutors
Hodgkins, IL Math Tutors
Hometown, IL Math Tutors
Merrionette Park, IL Math Tutors
Orland Hills, IL Math Tutors
Palos Heights Math Tutors
Palos Hills Math Tutors
Posen, IL Math Tutors
Robbins, IL Math Tutors
Willow Springs, IL Math Tutors
Worth, IL Math Tutors | {"url":"http://www.purplemath.com/Palos_Park_Math_tutors.php","timestamp":"2014-04-19T12:30:08Z","content_type":null,"content_length":"23725","record_id":"<urn:uuid:2cc9599a-37ef-4f9e-a389-1f7134964f41>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00561-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: RE: Stata graphs to Latex with Graph2tex command
From Suzy <scott_788@wowway.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: Stata graphs to Latex with Graph2tex command
Date Wed, 03 May 2006 17:01:54 -0400
Yes, you are correct. I just typed "pwd" to find out where the file was (this is what the graph2tex command suggested), moved it into the correct directory, and it works fine. The graph2tex command is simple and great for LaTeX newbies. Will table output by Stata also work? If so, how exactly is it done? I do have a Word-to-TeX converter (Word2TeX), if that helps steer a recommendation.
Thanks Maarten!
Maarten Buis wrote:
I am not familiar with the -graph2tex- command, what I usually do is
export the graph to an eps file with the -graph export- command, and
in my LaTeX file use LaTeX commands like the one you have showed. I have never experienced problems with those. One thing I could imagine
is that -graph2tex- saves in the current directory (type -cd- to find
out where that is) and you try to call it from a LaTeX file located in
another directory. If you include a graphic in LaTeX without specifying
a path it assumes that it is in the same directory.
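To make the directory point concrete, here is a minimal LaTeX fragment of the kind being discussed (my own sketch; it assumes the exported file is Stata1.eps, as in the -graph2tex- example quoted later in this thread):

```latex
\documentclass{article}
\usepackage{graphicx}   % provides \includegraphics
\begin{document}
% If Stata1.eps is not in the same directory as this .tex file,
% give a path explicitly, e.g. \includegraphics{figures/Stata1.eps}
\begin{figure}[htbp]
  \centering
  \includegraphics{Stata1.eps}
  \caption{Scatter plot exported from Stata.}
\end{figure}
\end{document}
```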
----- Suzy wrote:
I am now trying to convert and insert a simple Stata graph using the graph2tex command. This is my Stata code below. I copied and pasted the tex code output provided by Stata into my text editor
(teXnicCenter). The graph was not there. I tried adding the .eps to Stata1 to see if that would work, but it did not.
scatter Yvar Xvar, sort
. graph2tex, epsfile(Stata1)
% exported graph to Stata1.eps
Maarten L. Buis
Department of Social Research Methodology Vrije Universiteit Amsterdam Boelelaan 1081 1081 HV Amsterdam The Netherlands
visiting address:
Buitenveldertselaan 3 (Metropolitan), room Z214
+31 20 5986715
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Source Code - Java Applet I
Any of today's programming languages can be used to generate 3D images of varying degrees of performance and quality. The standard Java language has various features which allow the creation of 3D applications. Even though a new generation of Java API called Java3D promises improved performance and reduced coding effort, most PCs only have a copy of standard Java installed. Few PCs have the Java3D software, and users are often unwilling to download the required megabytes of files to get the capability. This page describes an applet called gbCube which displays a rotating cube. I've also written a more generic 3D object viewer called gbViewer which can display 3D objects read in from standard 3D file formats.
gbCube Overview
If you've read my page on generating a 3D rotating cube with VB, then you'll already have a good idea of what to expect on this page. I've written a Java Applet which will display a 3D rotating cube.
I list the code below, along with discussion to explain how the code works and special issues or concerns that a programmer should be aware of.
Regardless of the language used to create a 3D graphic scene, the same basic set of steps must be accomplished. The entire sequence is called a 3D graphics pipeline and consists of the following
major steps:
• Modelling
• Rotation
• Depth Sorting (Painter's Algorithm)
• Backface culling
• Projection
• Shading
There are other steps as well, such as lighting, but this page will be limited to just those topics listed above. Future updates to the Java Applet are likely which will expand the capabilities that
are included within the applet. Here is the current version of the applet:
Whether using pure Java or Java3D, the mechanics of embedding an applet into an HTML page are the same. This section provides a discussion of the HTML code needed to embed the applet.
<h1>Java Applet Demo</h1>
<applet code=3dcubejava.class width=300 height=400>
<param name=Speed value=10>
<param name=POV value=10>
<param name=Theta value=.05>
</applet>
In this simple example a Java applet named "3dcubejava.class" is embedded in the HTML page, with a window of size 300x400 pixels reserved for the output of the applet.
Three parameters (Speed, POV, and Theta) are passed from the HTML code to the Java applet, with values of 10, 10, and 0.05 respectively.
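On the Java side these values arrive as strings via the applet's getParameter() method and must be parsed; a hedged sketch of the kind of helper one might use (the method below is my own, not taken from gbCube):

```java
public class ParamDemo {
    // Parse an applet parameter with a fallback default, since
    // getParameter() returns null when the <param> tag is absent.
    static int parseIntParam(String raw, int dflt) {
        if (raw == null) return dflt;
        try {
            return Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            return dflt;
        }
    }

    public static void main(String[] args) {
        // Inside an applet one would call: parseIntParam(getParameter("Speed"), 10)
        System.out.println(parseIntParam("10", 25));  // 10
        System.out.println(parseIntParam(null, 25));  // 25 (missing param)
    }
}
```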
gbCube - Pure Java Rotating 3D Cube
It is entirely possible to create 3D images using pure Java code - without using the recent Java3D advanced features. This section examines the source code of gbCube, a Java applet which creates a
rotating 3D cube using only built-in Java features.
Program Operational Overview
The paint() method of the applet is used to call out the various elements of the 3D graphics pipeline. The paint() method is set for a continuous loop through the use of the repaint() method. A 25ms
time delay is used to control the speed of rotation.
The paint() method is as follows:
public void paint(Graphics screen) {
   if (Continue) {
      try { Thread.sleep(Delay); } catch (InterruptedException e) {} //delay
      // ...calls to the pipeline routines (rotation, SortByDepth,
      // BackFaceCulling, projection, DrawCube) go here...
      repaint(); //schedule the next pass through the pipeline
   }
}
The screen object is shared between the various pipeline methods. It shows up as an argument for each of the methods.
The shading used by gbCube is called flat shading - all the pixels of a triangle are colored exactly the same, as provided by the Java fillPolygon method. This approach is simple to use and very fast
but is not very realistic. It does not provide color gradients nor does it take into account shadows resulting from the 3D scene's light source. Future enhancements to gbCube will include a light
source and more advanced shading algorithms, such as Phong shading.
As was noted in the 3D math page at this site, standard trigonometric calculations or matrix operations can be used to perform the calculations needed to animate a 3D scene. In the example that
follows matrices are not used. Code examples of matrix math are, however, provided elsewhere at this site.
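Since the rotation routine itself is not listed on this page, here is a sketch of the standard trigonometric approach to a Y-axis rotation (my own illustration of the technique, not the literal gbCube code):

```java
public class RotateDemo {
    // Rotate the point (x, z) about the Y axis by theta radians.
    // Standard identities: x' = x*cos(theta) + z*sin(theta)
    //                      z' = z*cos(theta) - x*sin(theta)
    static double[] rotateY(double x, double z, double theta) {
        double newX = x * Math.cos(theta) + z * Math.sin(theta);
        double newZ = z * Math.cos(theta) - x * Math.sin(theta);
        return new double[] {newX, newZ};
    }

    public static void main(String[] args) {
        // A quarter turn takes the point (50, 0) to (0, -50),
        // up to floating-point rounding.
        double[] p = rotateY(50, 0, Math.PI / 2);
        System.out.println(p[0] + ", " + p[1]);
    }
}
```

Applied once per frame to every cube vertex, this is what produces the animation; the matrix formulation the text mentions packages the same four multiplications into a 2x2 (or 3x3) matrix product.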
To be consistent with discussions elsewhere on this site, the cube is modelled using triangles. Twelve triangles are needed, 2 for each of the 6 cube faces. The cube model requires only 8 points,
with points shared by multiple triangles.
This model uses arrays to store point and triangle information. The following initialization of variables is used:
double Theta = 0.005; //angle of rotation in radians
double L = 50; //temp variable used to load Px,Py,Pz,PPx,PPy
double POV = 500; //distance from eye to display screen
double Offset = 100; //used to center the cube in the applet window
Polygon T = new Polygon(); //Polygon object used for drawing triangles
int Delay = 25; //delay between rotations in milliseconds
double[] Px = {-L,-L, L, L,-L,-L,L, L}; //real point x-coord, 8 pts
double[] Py = {-L, L, L,-L,-L, L,L,-L}; //real point y-coord, 8 pts
double[] Pz = {-L,-L,-L,-L, L, L,L, L}; //real point z-coord, 8 pts
double[] PPx = {-L,-L, L, L,-L,-L,L, L}; //projected point x-coord, 8 pts
double[] PPy = {-L, L, L,-L,-L, L,L,-L}; //projected point y-coord, 8 pts
int V1temp, V2temp, V3temp; //temp variables used in sorting
double V4temp, V5temp; //temp variables used in sorting
double oldX, oldY, oldZ; //temp variables used in rotating
int[] V1 = {0, 0, 4, 4, 7, 7, 3, 3, 2, 2, 3, 3,}; //vertex1
int[] V2 = {3, 2, 0, 1, 4, 5, 7, 6, 6, 5, 0, 4,}; //vertex2
int[] V3 = {2, 1, 1, 5, 5, 6, 6, 2, 5, 1, 4, 7,}; //vertex3
double[] V4 = new double[12]; //Average Z of all 3 vertices
double[] V5 = new double[12]; //DotProduct of Normal and POV
double CPX1,CPX2,CPX3,CPY1,CPY2,CPY3; //temp var used in Cross Product
double CPZ1,CPZ2,CPZ3,DPX,DPY,DPZ; //temp var used in Cross/Dot Product
boolean Continue = true; //controls whether painting continues
The Px, Py, and Pz arrays contain the coordinates of the 8 points. The PPx and PPy arrays contain the projections of those points onto the computer screen.
The x, y, and z coordinates of the vertices of the 12 triangles are kept in arrays V1, V2, and V3. The order of the initial point assignment is made to ensure a counter-clockwise listing. The
coordinates assume the cube is centered about 0,0,0.
The average Z value of the vertices in each triangle are kept in array V4. This value is used for sorting triangles by depth - part of the Painter's algorithm for drawing 3D scenes.
The Dot Product of the normal to each triangle and the scene's point of view is kept in array V5. This value is used to determine whether a triangle is facing the viewer and should be drawn, or facing away and does not need to be drawn.
The Continue variable is used to escape the repeat of the paint() method. It is set to false whenever the browser tells the applet to stop.
Each triangle, consisting of 3 vertices, is loaded into the Polygon object T before drawing. The variable is loaded (with vertex coordinates) and cleared each time a triangle needs to be drawn.
Initialization of the global variables and arrays is performed outside the init() method only because it was simpler to write the code that way. Normally, initialization of variables would take place
within the init() method.
An alternate approach to defining 8 common points from which all triangles are made would have been to define each triangle with its own 9 coordinates (no shared storage of point information). That approach would have used 9 arrays, one for each coordinate value making up each triangle (3 points x 3 axes). Either approach would work, but the common-point approach results in fewer calculations and faster speed.
Java provides the fillPolygon method for filling screen areas bounded by points. In this case, the x,y coordinates of each triangle vertex are incorporated into a polygon using the addPoint method,
followed by use of the fillPolygon method to render the triangles. An additional call is made to the Java method drawPolygon to draw the edges of the triangles in a different color - improving the
visibility of the cube.
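The addPoint/fillPolygon pattern can be exercised outside an applet; here is a sketch of the per-triangle assembly just described (the helper method is my own, not gbCube's):

```java
import java.awt.Polygon;

public class TriangleDemo {
    // Build a drawable triangle from three projected vertices.
    // In gbCube the result would then be passed to
    // screen.fillPolygon(t) and screen.drawPolygon(t).
    static Polygon makeTriangle(int x1, int y1, int x2, int y2, int x3, int y3) {
        Polygon t = new Polygon();
        t.addPoint(x1, y1);
        t.addPoint(x2, y2);
        t.addPoint(x3, y3);
        return t;
    }

    public static void main(String[] args) {
        Polygon t = makeTriangle(0, 0, 100, 0, 0, 100);
        System.out.println(t.npoints);          // 3
        System.out.println(t.contains(10, 10)); // true: point inside the triangle
    }
}
```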
Depth Sorting
At the end of the 3D graphics pipeline, the DrawCube subroutine draws the triangles one at a time in the order the vertices are stored in arrays V1, V2, and V3.
To improve the realism of the drawing, gbCube calculates the average z coordinate of each triangle and then sorts the triangles (arrays V1, V2, and V3) so that the objects farthest away are drawn first. The z-depth of each triangle is stored in the array V4.
This approach is called the Painter's Algorithm and ensures that the nearest objects will be in front of the farthest objects. It works well but has limitations, such as not working well for intersecting triangles. There are variations of the Painter's Algorithm which address these shortcomings, but gbCube uses only the sort routine. For a simple cube this approach works just fine.
public void SortByDepth(Graphics screen) {
   for (int w = 0; w < 12; w++) {
      V4[w] = (Pz[V1[w]] + Pz[V2[w]] + Pz[V3[w]]) / 3;
   }
   for (int g = 0; g < 11; g++) {
      for (int h = 0; h < 12; h++) {
         if (V4[g] < V4[h]) {
            V1temp = V1[g]; V2temp = V2[g]; V3temp = V3[g]; V4temp = V4[g]; V5temp = V5[g];
            V1[g] = V1[h]; V2[g] = V2[h]; V3[g] = V3[h]; V4[g] = V4[h]; V5[g] = V5[h];
            V1[h] = V1temp; V2[h] = V2temp; V3[h] = V3temp; V4[h] = V4temp; V5[h] = V5temp;
         }
      }
   }
}
The sort algorithm used here is called a bubble sort. It works well enough for a few hundred triangles to be sorted, but is not suited for more complex 3D scenes. Other sort routines can be written
which sort up to a hundred times faster. These will be included in future gbCube updates.
As has been discussed, any triangle pointing away from the point of view cannot be seen - it's on the back side of the 3D scene. Identifying such triangles, and not displaying them, is called
backface culling and typically results in eliminating the need to display about half of the triangles in the 3D scene.
public void BackFaceCulling(Graphics screen) {
   for (int w = 0; w < 12; w++) {
      // Cross Product of the triangle's first two edges
      CPX1 = Px[V2[w]] - Px[V1[w]];
      CPY1 = Py[V2[w]] - Py[V1[w]];
      CPZ1 = Pz[V2[w]] - Pz[V1[w]];
      CPX2 = Px[V3[w]] - Px[V1[w]];
      CPY2 = Py[V3[w]] - Py[V1[w]];
      CPZ2 = Pz[V3[w]] - Pz[V1[w]];
      DPX = CPY1 * CPZ2 - CPY2 * CPZ1;
      DPY = CPX2 * CPZ1 - CPX1 * CPZ2;
      DPZ = CPX1 * CPY2 - CPX2 * CPY1;
      // Dot product uses the POV vector (0, 0, POV)
      V5[w] = 0 * DPX + 0 * DPY + POV * DPZ;
   }
}
This code first calculates the cross product of the first two line segments of each triangle. Then a dot product is calculated between the resulting normal vector and the POV vector. The result for
each triangle is stored in array V5.
When the code is executed to draw the triangles, only those facing the viewer (positive dot product) will be drawn.
Another key point to notice in the source code is that the cross product equations you've seen so far assume that the vector components represent position vectors - with starting points at the origin (0,0,0). To calculate the cross product between two triangle edges you must instead use displacement vectors, calculated as the difference between the ending and starting points of the triangle line segments.
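In vector notation, what the loop computes for each triangle with vertices P1, P2, P3 is (this just restates the code above in standard cross- and dot-product form):

```latex
\mathbf{n} = (P_2 - P_1) \times (P_3 - P_1), \qquad
V_5 = \mathbf{n} \cdot (0,\, 0,\, POV) = POV \, n_z
```

A triangle is treated as front-facing exactly when V5 > 0, i.e. when the z-component of its normal points toward the point of view.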
Until now all point coordinates have been world coordinates - the position of the 3D cube in space. These point coordinates must now be projected onto the computer screen - onto a 2D surface. The
subroutine is as follows:
// calculate projection coordinates
for (int s = 0; s < 8; s++) {
    PPx[s] = Px[s] + Offset;
    PPy[s] = Py[s] + Offset;
}
Each point of the 3D scene must be displayed on the 2D computer screen. The mapping of the points from the 3D scene to the computer screen is called point projection. There are two general forms of
projection, parallel and perspective.
With parallel projection, the x-y coordinates of a 3D point simply map one-to-one to the computer screen; the z coordinates are dropped. While very simple to perform, the results are not realistic, in that objects far away appear to be the same size as objects close in.

gbCube uses parallel projection only because I haven't gotten around to adding the perspective equations. This should be added soon.

Perspective projection uses the z dimension to scale the 2D image, producing more realistic results: objects farther away appear smaller in the resulting image. This simulates real-life views of scenes with depth.
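For reference, a common form of the perspective transform (this is not gbCube's code, just one standard formulation, with d an assumed viewer distance and the convention that z increases away from the viewer):

```latex
x' = \frac{d \, x}{d + z}, \qquad y' = \frac{d \, y}{d + z}
```

Points with larger z are scaled toward the center of the screen and so appear smaller; the sign in the denominator flips if z increases toward the viewer instead.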
The final step in the 3D graphics pipeline used by gbCube is to draw the cube on the computer screen. The x-y coordinates of the three vertices in each triangle are used in the Java fillPolygon and
drawPolygon methods to display the cube. Flat shading is used (same color for all pixels within a triangle). Later versions of this applet will include improved rendering, such as Phong shading.
public void DrawCube(Graphics screen) {
    screen.clearRect(0, 0, getSize().width, getSize().height);
    for (int w = 0; w < 12; w++) {
        if (V5[w] > 0) {  // front-facing triangles only
            Polygon T = new Polygon();
            T.addPoint((int)(PPx[V1[w]] + Offset), (int)(PPy[V1[w]] + Offset));
            T.addPoint((int)(PPx[V2[w]] + Offset), (int)(PPy[V2[w]] + Offset));
            T.addPoint((int)(PPx[V3[w]] + Offset), (int)(PPy[V3[w]] + Offset));
            screen.setColor(faceColor);  // reconstructed: the listing omitted the
            screen.fillPolygon(T);       // fill/draw calls described in the text;
            screen.setColor(edgeColor);  // the color fields are placeholders
            screen.drawPolygon(T);
        }
    }
}
Note that both the drawPolygon and fillPolygon methods are used. This is done using two colors so that the edges of the cube are clear. The drawing is made using the projection x-y coordinates, not
the true coordinates of the 3D cube points. If the fillPolygon statement were commented out gbCube would display only a wire-frame image.
Following the display and shading of all triangles a 25ms time delay is introduced, with code as follows:
try { Thread.sleep(Delay); } catch (InterruptedException e) {}
Conventional Java syntax would have resulted in spreading the timer code over multiple lines, but it was visually convenient to have the code listed on a single line.
The last step in the gbCube 3D graphics pipeline is to rotate each of the eight points through an angle of rotation, in preparation for the next cycle. While separate angles of rotation for each axis
are possible, gbCube uses the same angle of rotation for each axis.
Rotation about each axis is calculated separately and successively applied to get the accumulative effect of rotation about all 3 axes.
public void RotatePoints(Graphics screen) {
    for (int w = 0; w < 8; w++) {
        // rotate about X
        oldY = Py[w]; oldZ = Pz[w];
        Py[w] = oldY * Math.cos(Theta) - oldZ * Math.sin(Theta);
        Pz[w] = oldY * Math.sin(Theta) + oldZ * Math.cos(Theta);
        // rotate about Y
        oldX = Px[w]; oldZ = Pz[w];
        Px[w] = oldZ * Math.sin(Theta) + oldX * Math.cos(Theta);
        Pz[w] = oldZ * Math.cos(Theta) - oldX * Math.sin(Theta);
        // rotate about Z
        oldX = Px[w]; oldY = Py[w];
        Px[w] = oldX * Math.cos(Theta) - oldY * Math.sin(Theta);
        Py[w] = oldX * Math.sin(Theta) + oldY * Math.cos(Theta);
    }
}
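In matrix form, the three stages above are the standard axis rotations, applied in the order X, then Y, then Z (the matrices below simply restate the code):

```latex
R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix},\quad
R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix},\quad
R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
```

Each point P is replaced by R_z(Theta) R_y(Theta) R_x(Theta) P every frame; because the same angle is reused each cycle, the cube tumbles smoothly about all three axes.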
[SciPy-Dev] Splines
Pablo Winant pablo.winant@gmail....
Thu Feb 21 04:33:33 CST 2013
It's good to see people working in that direction. I will be more than
happy to participate if I can.
As a user: I really wish I had a cubic spline interpolation with
natural boundary conditions (which imply linear extrapolation). This
is what is implemented in Matlab's griddedInterpolant and is missing
from the existing options in scipy (right?). Being able to evaluate the
derivatives is also a big advantage.
As a developer: there is the Einspline library, which has a
straightforward implementation in C. In particular, the representation
of spline objects (C structs) and the low-level API are very
straightforward and may be a good source of inspiration. The library is
currently GPL, but the author told me it could be made BSD if some m4
macros are removed.
For what it's worth, I have made a simple Cython wrapper around some of
its functions
and was considering repackaging it. I had some other plans (like writing
code for higher dimensions using some kind of templating, and updating
SIMD evaluation). If there is a more elegant alternative, I would be
happy to jump on the bandwagon.
I agree completely about the remarks on the spline format: it should be
left as simple as possible. Having it isomorphic to C structs would be a
good thing too as it permits easy experiments with extensions (for
instance PyCuda kernels)
On 21/02/2013 10:41, Pauli Virtanen wrote:
> Hi,
> Charles R Harris <charlesr.harris <at> gmail.com> writes:
>> There have been several threads on the list about splines
>> and consolidation of splines. For instance, there are several
>> uniform spline implementations for images and signal processing,
>> various low level functions in fitlib that are unexposed, and
>> perhaps useful altenatives to b-splines for some applications
>> like straight interpolation. For myself, I've started implementing
>> several functions in pure Python with an eye to converting them
>> to Cython once the interface and documentation is in place,
>> mostly for doing things that fitpack doesn't do because it is
>> very integrated, as opposed to supplying a basic toolset.
>> As part of this project, I'd like to get some feedback
>> on which functions people use most and what features they
>> would like to see. I'm not interested in the high level classes
>> at this point, either the current classes or combo functions
>> like fpcurf, but rather a collection of good lower level function
>> that could be used to implement the higher level functions
>> in a more flexible way. Thoughts?
> Great! I was going to start on this for 0.13.0, but this should
> speed things up considerably :)
> Overall, I think what we need is are (i) a well-specified spline
> format, and (ii) solid basic functions for evaluating and
> manipulating them, (iii) ditto for tensor product splines.
> How to construct the splines (interpolation, smoothing, etc.) should
> be considered as a separate problem. We can turn to FITPACK for
> smoothing splines, but it should not be used for interpolating
> splines.
> ***
> Some misc thoughts on this:
> * The spline data format should be documented and set in
> stone as a first step. Users (and future developers) will
> want to toy around with it.
> Also, the data format for tensor product N-dim splines needs
> to be set. They are what we are missing, and what people are
> constantly asking for. We don't want them turn to
> `ndimage.map_coordinates`, which is clunky to use.
> The Fitpack tck format looks like this:
> https://github.com/pv/scipy-work/commits/spline-unify
> Currently, there's also a second B-spline data format used in
> scipy.interpolate with different padding, but we may want to
> stick with the FITPACK one, it's probably as good as any.
> * Functions for splines with varying x-coordinates are needed.
> Uniform grid splines would be a nice bonus as a speed-gain
> optimization.
> * The 1-D spline routines should be able to work over an
> arbitrary axis of multidimensional data. Even better if this
> can be done without reshaping and copying the input data
> (e.g. with Numpy iterators).
> This sounds like providing strided 1-D loops for heavy lifting,
> and bolting array iterators on top.
> * Functions for integration + differentiation of splines
> as as abstract objects would be useful. For efficiency,
> evaluation of derivatives & integrals probably might need
> to be provided separately.
> * For tensor product splines, evaluation on a scattered point
> set + on a grid would be useful.
> * It will probably be easiest to start from a clean slate,
> rather than trying to reuse scipy.interpolate.
> * FITPACK should not be used as a basis for this work, no
> need to use ancient FORTRAN 77 code for simple stuff. We can
> use its routines for generating smoothing splines, though.
> * Routines for constructing interpolating splines --- I think most
> of the time people will use these for simple gridded data interpolation
> rather than smoothing. FITPACK's knot selection is nice when it works,
> but often it doesn't (or requires careful fiddling), so we should have
> something simple and robust as a default algorithm.
> * I'm not sure what to do with the various boundary conditions
> and out-of-bounds value handling. It's probably best to leave room
> for various ways to do this...
> * Scipy also has two implementations of piecewise polynomials
> --- these should be consolidated into one, too.
> Cheers,
> Pauli
Tau Topics - Accuracy of Diagnostic Laboratory Medical Tests: Specificity and Sensitivity
An acquaintance recently mentioned that "20/20 or one of those news shows just did a piece on the use of hair testing. They sent a standard poodle's hair in for testing to 4 or 5 labs and got
different results from each. I have come to the educated conclusion that hair testing is a great farce. But for some it might lead to being treated by just the thing that their body needs. But don't
think it would be due to the accuracy of testing, just luck!"
The following is information to help anyone, including medical professionals, understand why diagnostic tests have widely varying accuracy. The reasons for the levels of accuracy depend on a variety
of factors.
Results from most diagnostic tests, not just hair testing, will differ from laboratory to laboratory. There are several reasons for this, ranging from the training and skill of the lab technicians to
proper test set-up protocol (for example, when analyzing hair samples, which portion of the hair shaft the sample came from is critical), to - as 20/20/whoever was trying to point out - the actual
ability of the test to measure what it is supposed to measure.
We know of labs in Wisconsin that were given awards when their accuracy rate (that is, their rate of correctly carrying out tests from start to finish) was 97%, and I would guess that nationwide that
is considered an 'award-winning' rate of accuracy for correctly doing diagnostic tests.
But this whole issue raises a point that is key to understanding the accuracy of any diagnostic test: even with the best of training and perfect protocols, laboratory tests do not yield yes/no
answers. The accuracy of the tests are derived from statistical distributions. Because of this, the rarer a condition in the general population, the higher the probability that a laboratory test will
yield inaccurate results.
Here's a bit of explanation to help understand why I say that.
<Judy pulls out and dusts off her statistics hat, and promises that this is NOT math!>
A diagnostic test investigates the statistical relationship between test results and the presence of disease. For all diagnostic tests there are two critical components that determine its accuracy:
sensitivity and specificity.
Sensitivity is the probability that a test is positive, given that the person has the disease.
Specificity is the probability that a test is negative, given that the person does not have the disease.
Using AIDS as an example: assume for this example that both the specificity and sensitivity of the test for AIDS is 99%.
Given this, if you took an AIDS test and have a positive result, the test itself may only be 50% accurate!!!!
How can that be??? Well, it turns out that when you work with statistics you also have to take into account the incidence of something - how common it is - in the population at large. Let's assume
the incidence of AIDS in the United States is 1%, and then test 10,000 people (since I'm assuming that 1% have AIDS, that means that 100 people out of these 10,000 really have AIDS, and 9900 really
do not have AIDS). We would get the following results from our diagnostic laboratory test (each 'number' is the number of people that fall into that category):
                     Test Result    Test Result    Total Number
                     Positive       Negative       of People
Really Has AIDS         99^a            1              100
Doesn't have AIDS       99^c         9801^b           9900
Totals                 198           9802           10,000
^aIf you look across the first row you will see that the sensitivity - the probability that the test found that the patient had AIDS, given that the person really has AIDS - is 99%. (99/100 = 99%)
^bIf you look across the second row you will see that the specificity - the probability that the test didn't find AIDS, given that the person doesn't have AIDS - is also 99% (9801/9900).
^cBut here's where it gets weird - look DOWN the first column. What this says is: the probability that you have AIDS, given that you tested positive for AIDS, is 50%!!! (99/198). If you look down
this column you will clearly see that 99 of the 198 people who tested positive for AIDS in this test clearly do NOT have AIDS.
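What footnote c computes column-wise is the positive predictive value given by Bayes' rule (the symbols below are the standard epidemiological names, which the article doesn't use; p is the prevalence of the disease in the tested population):

```latex
P(\text{disease} \mid \text{positive}) =
\frac{\text{sens} \cdot p}{\text{sens} \cdot p + (1 - \text{spec})(1 - p)}
= \frac{0.99 \times 0.01}{0.99 \times 0.01 + 0.01 \times 0.99} = 0.5
```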
Bottom line: when someone tells you that a diagnostic test, done properly, is 99% accurate (meaning that both the specificity and the sensitivity = 99%), the actual 'accuracy' of the test will in
fact really depend on how common the disease you are testing for is in the population you are testing.
This example clearly points out why diagnostic tests are designed to confirm a diagnosis for rare conditions. They should not be used to go 'fishing' for a diagnosis (which unfortunately happens as
an inappropriate use of many diagnostic tests).
In real life, we don't nab 10,000 people randomly and run them through AIDS tests. People who tend to get AIDS tests are people who are at high risk for a variety of factors (lifestyle, occupation,
medical condition such as hemophilia, transfusion recipient and the like), so they are in a statistical sense a different 'population' than the general population.
Even so, keep in mind that the test is most certainly NOT 99% 'accurate' in the way all of us think about accuracy, even though both its specificity and sensitivity are 99%. You always must take into
account how common the disease you are testing for is in the population you are testing. Many, MANY tests have much lower specificity and sensitivity than the 99% I've used in this example. However,
if the condition these tests are trying to diagnose is much more common in the population, then what we think of as 'accuracy' becomes better for a given level of specificity and sensitivity as the
prevalence of the disease increases.
However, no professional should be complacent about the 'high level of accuracy' they get from laboratory diagnostic tests even for 'common' conditions. Let's quickly run through an example of a 'best case' scenario, using the assumption that we wish to screen for a new fictional bacteria (I'll call it "Pyro") which affects one in nine people in the population at any given time.
If it affects 1 in 9 people, the incidence of Pyro in the United States is 11%. Suppose we test 10,000 people (if 11% have Pyro, that means that 1111 people out of these 10,000 really have Pyro, and
8889 really do not have Pyro). We would get the following results from our diagnostic laboratory test (each 'number' is the number of people that fall into that category):
                     Test Result    Test Result    Total Number
                     Positive       Negative       of People
Really Has Pyro       1100^a           11             1111
Doesn't have Pyro       89^c         8800^b           8889
Totals                1189           8811           10,000
^aIf you look across the first row you will see that the sensitivity - the probability that the test found that the patient had Pyro, given that the person really has Pyro - is 99% (1100/1111 = 99%).
^bIf you look across the second row you will see that the specificity - the probability that the test didn't find Pyro, given that the person doesn't have Pyro - is also 99% (8800/8889).
^cNow look DOWN the first column. What this says is: the probability that your test incorrectly says someone has Pyro, when in fact they do NOT have Pyro, is 8% (89/1189).
So, in this case - where we have a condition that is widespread (1 out of 9 is pretty widespread!) - your best diagnostic test - one which has 99% sensitivity and 99% specificity - is still going to generate at best 92% 'accuracy' (in what we usually think of as accuracy).
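Both worked examples can be double-checked with a few lines of code (a sketch for illustration only; the class and method names are made up, and the formula is the same Bayes'-rule calculation as the tables above):

```java
public class Screening {
    // Positive predictive value: P(disease | positive test result).
    public static double ppv(double sensitivity, double specificity, double prevalence) {
        double truePositiveRate  = sensitivity * prevalence;                 // per person tested
        double falsePositiveRate = (1.0 - specificity) * (1.0 - prevalence);
        return truePositiveRate / (truePositiveRate + falsePositiveRate);
    }

    public static void main(String[] args) {
        System.out.println(ppv(0.99, 0.99, 0.01));      // AIDS example (1% prevalence): ~0.5
        System.out.println(ppv(0.99, 0.99, 1.0 / 9.0)); // Pyro example (1 in 9): ~0.925
    }
}
```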
Ok, I think I'll put that statistics hat away for the night...
I'd like to take credit, btw, for laying out all the above, but it was a joint effort between my husband and myself (yes, we are both boring statistical-types! <g>). It was fueled by an article we
saw many years ago by Marilyn vos Savant, where she first pointed this out and knocked our socks off! :-) | {"url":"http://www.michaelandjudystouffer.com/judy/articles/specsen.htm","timestamp":"2014-04-20T18:23:08Z","content_type":null,"content_length":"19897","record_id":"<urn:uuid:f42454d4-0ac1-4203-8cb9-628e35277013>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00145-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Single Page Fermat Theorem Proof
Replies: 2 Last Post: Sep 5, 2009 10:01 PM
Single Page Fermat Theorem Proof
Posted: Sep 3, 2009 9:37 AM
I have posted this at the geometry forum but have had no response. Maybe better placed here. Best wishes.
Date Subject Author
9/3/09 Single Page Fermat Theorem Proof ishvaaag
9/3/09 Re: Single Page Fermat Theorem Proof ishvaaag
9/5/09 Re: Single Page Fermat Theorem Proof ishvaaag | {"url":"http://mathforum.org/kb/thread.jspa?threadID=1981890","timestamp":"2014-04-16T13:58:31Z","content_type":null,"content_length":"19329","record_id":"<urn:uuid:69a7808c-658e-461f-b670-93721dce6630>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quality is dropping in recruiting
Last week, in an e-mail concerning Army recruiting for November, I wrote to Mr. M that I strongly believed that two things were going on that were a little bit weird. The first was that the November
2005 goal was significantly below the November 2004 goal, and the second piece of information that I wanted to find an answer on was the quality of recruits. October 2005, as I blogged about
last month
had 12% of recruits come into the Army as Cat. IV recruits, which is the lowest quality that is acceptable. The DOD has an internal quality assurance check of taking in no more than 4% of all
recruits as Cat. 4, so if that objective is to be met, then the Army in the first month of recruiting used up 18% of their entire allotment of Cat. 4 recruits for the fiscal year, with eleven
recruiting months left in it. So I was very curious as to what the November Cat. 4 numbers were, for I suspected that they would be high as we have a two year pattern of behavior of the Army dropping
standards to hold onto their numbers.
Matt Yglesias
did some reading so I did not have to and found the following
LA Times article
with the following two information points:
"The Army exceeded its 5,600 recruit goal by 256 for November, and the Army Reserve brought in 1,454 recruits, exceeding its target by 112. To do so, they accepted a "double digit" percentage of
recruits who scored from 16 to 30 out of a possible 99 on the military's aptitude test, said officials who spoke on the condition of anonymity."
Pulling the
official DOD press releases and then grabbing the data that has been released, we can do a quick analysis of how much this quality drop allows the Army to meet its objectives.
If we assume "double digit" percentage is exactly 10%, and we assume that the Army only recruited these individuals because they had tapped out all higher than Cat. IV prospects, and we assume that
the 4% objective was held as a firm ceiling, then the Army would have been short 100 recruits of the November objective. If the percentage of Cat. 4 was 12.5%, then the shortfall under these same
assumptions would have been 240 recruits for November, and if the Cat. 4 percentage was 15%, then the shortfall of recruits would have been 390 for the month. Quality is dropping significantly to
make numbers.
The next interesting quote is the following:
""We will be at 4% at the end of the fiscal year, that's what matters," said Lt. Col. Bryan Hilferty, a spokesman for the Army."
This claim is extraordinarily dubious. The October Cat. 4 group used up 18% of the total year's allotment of Cat. 4 recruits if the objective of 96% Cat 3 or better recruits are met for FY-06.
Assuming, and this is a very favorable assumption, that "double digit" means exactly 10% of November recruits are Cat. IV, and assuming equal distribution of recruits between active and reserve
formations, that means 586 new recruits are Cat. IV in November. The Army wants to recruit 80,000 individuals this fiscal year, which means 4% for the year is 3,200 recruits as Cat. IV.
November, at this conservative, lowballed estimate would eat up 18% of the total allowable pool, so combined with October, the Army has used up 36% of their allotment of Cat. IV recruits to pull in
13.5% of their entire recruiting goal, or roughly 3 times as many Cat. 4 recruits have been pulled in as there should be. To hold 4% for the year, the next 70,000 recruits cannot exceed 2.9% Cat.
IV, or a reduction of 70% or more in current Cat. IV rates. And that drop must happen in December or the target goals get even lower quicker. These numbers get tougher if you assume double digits
means something greater than 10%.
Rational regulation of learning dynamics by pupil-linked arousal systems

The ability to make inferences about the current state of a dynamic process requires ongoing assessments of the stability and reliability of data generated by that process. We found that these assessments, as defined by a normative model, were reflected in non-luminance-mediated changes in pupil diameter of human subjects performing a predictive-inference task. Brief changes in pupil diameter reflected assessed instabilities in a process that generated noisy data. Baseline pupil diameter reflected the reliability with which recent data indicated the current state of the data-generating process and individual differences in expectations about the rate of instabilities. Together these pupil metrics predicted the influence of new data on subsequent inferences. Moreover, a task- and luminance-independent manipulation of pupil diameter predictably altered the influence of new data. Thus, pupil-linked arousal systems can help regulate the influence of incoming data on existing beliefs in a dynamic environment.
On the Round Security of Symmetric-Key Cryptographic Primitives
- In Advances in Cryptology – Asiacrypt'08, volume 5350 of LNCS, 2008
Cited by 8 (2 self)
Abstract. Every public-key encryption scheme has to incorporate a certain amount of randomness into its ciphertexts to provide semantic security against chosen ciphertext attacks (IND-CCA). The
difference between the length of a ciphertext and the embedded message is called the ciphertext overhead. While a generic brute-force adversary running in 2^t steps gives a theoretical lower bound of t bits on the ciphertext overhead for IND-CPA security, the best known IND-CCA secure schemes demand roughly 2t bits even in the random oracle model. Is the t-bit gap essential for achieving IND-CCA
security? We close the gap by proposing an IND-CCA secure scheme whose ciphertext overhead matches the generic lower bound up to a small constant. Our scheme uses a variation of a four-round Feistel
network in the random oracle model and hence belongs to the family of OAEP-based schemes. Maybe of independent interest is a new efficient method to encrypt long messages exceeding the length of the
permutation while retaining the minimal overhead.
, 2005
Cited by 3 (0 self)
This paper studies the security against differential/linear cryptanalysis and the pseudorandomness of a class of generalized Feistel scheme with SP round function called GFSP. We consider the minimum
number of active sboxes in four, eight and sixteen consecutive rounds of GFSP, which provide the upper bound of the maximum differential/linear probabilities of 16-round GFSP scheme, in order to
evaluate the strength against differential/linear cryptanalysis. Furthermore, we point out seven rounds GFSP is not pseudorandom for nonadaptive adversary, and prove that eight rounds GFSP is
pseudorandom for any adversaries.
- Fast Software Encryption, 9th International Workshop, FSE 2002
Cited by 2 (0 self)
Abstract. Four round Feistel permutation (like DES) is super-pseudorandom if each round function is random or a secret universal hash function. A similar result is known for five round MISTY type
permutation. It seems that each round function must be at least either random or secret in both cases. In this paper, however, we show that the second round permutation g in five round MISTY type
permutation need not be cryptographic at all, i.e., no randomness nor secrecy is required. g has only to satisfy that g(x) ⊕ x ≠ g(x′) ⊕ x′ for any x ≠ x′. This is the first example such that
a non-cryptographic primitive is substituted to construct the minimum round super-pseudorandom permutation. Further we show efficient constructions of super-pseudorandom permutations by using above
mentioned g.
Cited by 1 (0 self)
Abstract. In this paper we consider the security of the Misty structure in the Luby-Rackoff model, if the inner functions are replaced by involutions without fixed point. In this context we show that the success probability in distinguishing a 4-round L-scheme from a random function is O(m^2/2^n) (where m is the number of queries and 2^n the block size) when the adversary is allowed to make adaptively chosen encryption queries. We give a similar bound in the case of the 3-round R-scheme. Finally, we show that the advantage in distinguishing a 5-round scheme from a random permutation when the adversary is allowed to make adaptively chosen encryption as well as decryption queries is also O(m^2/2^n). This is to our knowledge the first time involutions are considered in the context of the Luby-Rackoff model.

1 Introduction. Proving the security of block ciphers has been a long-standing problem, and it is not solved yet. In their seminal paper [4], M. Luby and C. Rackoff
data on lamellar spacings and transformation temperatures
I have data on lamellar spacings and transformation temperatures for pearlite transformation in a plain carbon eutectoid steel. How do I prove the data conforms with Zener's equation??
I know that S (lamellar spacing) is inversely proportional to Delta T (eutectoid temperature minus transformation temperature). I don't know the exact equation that relates S and T (transformation).
The 2nd part of the question is to find the eutectoid temperature. I think I can find that after I plot a linear graph and get the gradient.
Please Help!! | {"url":"http://www.physicsforums.com/showthread.php?p=2724882","timestamp":"2014-04-17T00:58:52Z","content_type":null,"content_length":"20148","record_id":"<urn:uuid:8fbcb8b4-7568-4a7a-9e70-b3dccc96d7a1>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00443-ip-10-147-4-33.ec2.internal.warc.gz"} |
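For what it's worth, here is one standard way to write Zener's relation (a hint under the usual assumptions, not part of the original post): the interlamellar spacing is inversely proportional to the undercooling,

```latex
S = \frac{C}{\Delta T} = \frac{C}{T_E - T}
\qquad\Longrightarrow\qquad
\frac{1}{S} = \frac{T_E}{C} - \frac{T}{C}
```

so a plot of 1/S against the transformation temperature T should be a straight line of gradient -1/C, and extrapolating that line to 1/S = 0 gives the eutectoid temperature T_E at the intercept.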
idy vs. ady question
Hey fellas,
Got a dumb question. I made the switch to IDY a long time ago and this is my go-to recipe.
2 1/2 cups KASL
1 1/8 cups water
1/8 TSP + 1/4 TSP IDY
2 1/4 TSP Sicilian Sea Salt
1 TSP Olive Oil
However I don't have time to order IDY from Penn Mac and all I have is this Red Star ADY junk I got in a jar at the supermarket.
Can someone help me figure out how much ADY to use in this recipe?
Also the flour amount in this recipe varies depending on how it absorbs at any given time.
Also, if there are any changes to my process I should note, your advice is much appreciated.
Thanks to all,
Hope everyone had a merry XMAS
MY USUAL MIX METHOD with Kitchenaid mixer when using IDY:
1) Mix Salt and Water
2) Add 3/4 of the flour + All of the IDY
3) Mix 2 min until a batter formed
4) Rest for 20 minutes
5) Mix for 7 more minutes. adding flour gradually after 5 min, after 6 min increase speed
6) Rested for 5 minutes
7) Mix another 5 minutes adding rest of flour until dough was just hardly sticking to my hands | {"url":"http://www.pizzamaking.com/forum/index.php?topic=9925.0","timestamp":"2014-04-20T21:24:13Z","content_type":null,"content_length":"34647","record_id":"<urn:uuid:8f53676a-0a2d-4c33-af22-f7029c7cfdcb>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
UMass Amherst Theoretical Physicists Find A Way To Simulate Strongly Correlated Fermions
March 19, 2012
New technique could open the door to practical superconductor applications, as well as to solving difficult ‘many-body’ problems in high-energy physics, condensed matter and ultra-cold atoms
Combining known factors in a new way, theoretical physicists Boris Svistunov and Nikolai Prokof’ev at the University of Massachusetts Amherst, with three alumni of their group, have solved an
intractable 50-year-old problem: How to simulate strongly interacting quantum systems to allow accurate predictions of their properties.
It could open the door to practical superconductor applications, as well as to solving difficult “many-body” problems in high-energy physics, condensed matter and ultra-cold atoms.
The theoretical breakthrough by Prokof’ev and Svistunov at UMass Amherst, with their alumni Kris Van Houcke now at Ghent University, Felix Werner at Ecole Normale Supérieure Paris and Evgeny Kozik
at Ecole Polytechnique, is reported in the current issue of Nature Physics. The paper also includes crucial results of an experimental validation conducted by Martin Zwierlein and colleagues at MIT.
Svistunov says, “The accompanying experiment is a breakthrough on its own because achieving a few percent accuracy has long been a dream in the field of ultra-cold atoms. We needed this confirmation
from Mother Nature.”
Van Houcke adds, “Our answers and the experimental results perfectly agree. This is important because in physics you can always make a prediction, but unless it is controlled, with narrow error bars,
you’re basically just gambling. Our new method makes accurate predictions.”
Physicists have long been able to numerically simulate statistical behavior of bosonic systems by mapping them onto polymers in four dimensions, as Richard Feynman proposed in the 1950s. “In a
bosonic liquid one typically wants to know at what temperature the superfluid phase transition occurs,” Prokof’ev explains, “and mapping onto the polymers yields an essentially exact answer.”
But simulating particle behavior in strongly interacting fermionic liquids, like strongly interacting electrons in high-temperature superconducting compounds, has been devilishly elusive, he adds.
“The polymer trick does not work here because of the notorious negative-sign problem, a hallmark of fermionic statistics.”
Apart from mapping onto the polymers, Feynman proposed yet another solution, in terms of “diagrams” now named after him. These Feynman diagrams are graphical expressions for serial expansion of
Green’s functions, a mathematical tool that describes statistical properties of each unique system. Feynman diagrams were never used for making quantitatively accurate predictions for strongly
interacting systems because people believed that evaluating and summing all of them was simply impossible, Svistunov points out. But the UMass Amherst team now has found a way to do this.
What they discovered is a trick–called Diagrammatic Monte Carlo–of sampling the Feynman series instead of calculating diagrams one by one. Especially powerful is the Bold Diagrammatic Monte Carlo
(BDMC) scheme. This deals with a partially summed Feynman series (Dyson’s development) in which the diagrams are constructed not from the bare Green’s functions of non-interacting system (usually
represented by thin lines), but from the genuine Green’s functions of the strongly interacting system being looked for (usually represented by bold lines).
“We poll a series of integrals, and the result is fed back to the series to keep improving our knowledge of the Green’s function,” says Van Houcke, who developed the BDMC code over the past three
The BDMC protocol works a bit like sampling to predict the outcome of an election but with the difference that results of polling are being constantly fed back to the “electorate,” Prokof’ev and
Svistunov add. “We repeat this with several hundred processors over several days until the solution converges. That is, the Green’s function doesn’t change anymore. And once you know the Green’s
function, you know all the basic thermodynamic properties of the system. This has never been done before.”
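As a cartoon of that "feed the answer back until it stops changing" idea (a scalar toy model of my own invention, not the actual BDMC algorithm), one can iterate a Dyson-like fixed-point equation g = g0 + g0·σ(g)·g, where the toy "self-energy" σ(g) = a·g is rebuilt from the current best guess for g at every pass:

```python
# Toy illustration (entirely hypothetical): "bold" self-consistency as a
# scalar fixed-point iteration  g = g0 + g0 * sigma(g) * g,  sigma(g) = a*g.
import math

g0, a = 1.0, 0.2
g = g0
for _ in range(200):              # poll, feed the result back, repeat
    g = g0 + g0 * (a * g) * g     # the current g appears inside its own update

# For this toy model the fixed point is also known in closed form:
exact = (1 - math.sqrt(1 - 4 * a)) / (2 * a)   # root of a*g^2 - g + 1 = 0
print(round(g, 6), round(exact, 6))            # the two agree once converged
```

The real method replaces this one-line update with Monte Carlo sampling of Feynman diagrams built from the current ("bold") Green's function, but the convergence logic is the same: iterate until the answer stops changing.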
When asked to find the zeros of the function r(x) = 5t^3 - 8t^2 + 7t + 2, which task do you complete first (Rational Root Theorem or Descartes' Rule of Signs)? Use complete sentences to explain your answer.
I've never heard of these before, but just looked them up! At first glance, I'd argue that the Rational Root Theorem should come first. It's more specific than Decartes'. By this I mean, RRT will
get you some finite amount of numbers, the factors of the coefficients of the highest power, and lowest power of x respectively. For example, if you have \[3x^3-5x^2+5x-2=0\], RRT will return 8
possible roots. That's all of the numbers in existence, narrowed down to 8 particular numbers. On the other hand Decartes will give you a maximum number for the amount of positive and negative
roots. If I understand it right, it will return "at most 3 positive roots" for the example given above. Well big whoop, right? All of infinity narrowed down to half, \(maybe\). On its own, it
doesn't really seem to be of any help whatsoever. If you do it after the other one, you can narrow 8 down to 3, so it's moderately useful there.
Then again, it's possible I don't understand these theorems at all.
hmm..its ok... can you help me with this one... What are the possible number of positive, negative, and complex zeros of f(x) = –2x^3 + 5x^2 + 6x – 4 ? Answer a.) Positive: 2 or 0; Negative: 1;
Complex: 2 or 1 b.) Positive: 2 or 0; Negative: 1; Complex: 2 or 0 c.) Positive: 1; Negative: 2 or 0; Complex 2 or 0 d.) Positive: 1; Negative: 2 or 0; Complex: 0
Well I see 2 sign changes there, which I think means there are at most 2 positive roots, meaning a) or b). After multiplying the odd coefficients by -1, there is 1 sign change, so that means
there is 1 negative root I think. Then the number of complex roots is 3-(2+1)=0 OR 3-(0+1)=2. So I think b) is the answer.
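For anyone who wants to double-check this kind of problem, here is a small sketch of both rules in Python (mine, not from the thread): Descartes' Rule just counts sign changes in the coefficients of f(x) and f(−x), and the Rational Root Theorem lists the candidates ±p/q with p dividing the constant term and q dividing the leading coefficient.

```python
from fractions import Fraction

def sign_changes(coeffs):
    """Count sign changes in a coefficient list (zero coefficients skipped)."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_root_candidates(coeffs):
    """coeffs = [a_n, ..., a_0]: candidates are +/- p/q, p | a_0 and q | a_n."""
    pos = {Fraction(p, q)
           for p in divisors(coeffs[-1]) for q in divisors(coeffs[0])}
    return sorted(pos | {-c for c in pos})

f = [-2, 5, 6, -4]                     # f(x) = -2x^3 + 5x^2 + 6x - 4
f_neg = [c * (-1) ** (len(f) - 1 - i) for i, c in enumerate(f)]  # f(-x)

print(sign_changes(f))      # -> 2: so 2 or 0 positive real zeros
print(sign_changes(f_neg))  # -> 1: so exactly 1 negative real zero

g = [3, -5, 5, -2]          # the 3x^3 - 5x^2 + 5x - 2 example mentioned above
print(len(rational_root_candidates(g)))   # -> 8 candidates, as stated
```

With 2-or-0 positive zeros, exactly 1 negative zero, and degree 3, the complex count is 3 − (2 + 1) = 0 or 3 − (0 + 1) = 2, matching choice b.).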
I figured the first one out...=) one last question...I promise!! =) and thanks a lot for the help What are the possible number of negative zeros of f(x) = –2x^7 + 2x^6 + 7x^5 + 7x^4 + 4x^3 + 4x^2 ?
Answer a.) 4, 2, or 0 b.) 2 or 0 c.) 3 or 1 d.) 7, 5, 3, or 1
Coalgebraic automata theory: basic results
"... In this paper, we present a systematic way of deriving (1) languages of (generalised) regular expressions, and (2) sound and complete axiomatizations thereof, for a wide variety of systems. This
generalizes both the results of Kleene (on regular languages and deterministic finite automata) and Miln ..."
Cited by 15 (4 self)
In this paper, we present a systematic way of deriving (1) languages of (generalised) regular expressions, and (2) sound and complete axiomatizations thereof, for a wide variety of systems. This
generalizes both the results of Kleene (on regular languages and deterministic finite automata) and Milner (on regular behaviours and finite labelled transition systems), and includes many other
systems such as Mealy and Moore machines.
"... Abstract. We present a novel coalgebraic logic for deterministic Mealy machines that is sound, complete and expressive w.r.t. bisimulation. Every finite Mealy machine corresponds to a finite
formula in the language. For the converse, we give a compositional synthesis algorithm which transforms every ..."
Cited by 9 (6 self)
Abstract. We present a novel coalgebraic logic for deterministic Mealy machines that is sound, complete and expressive w.r.t. bisimulation. Every finite Mealy machine corresponds to a finite formula
in the language. For the converse, we give a compositional synthesis algorithm which transforms every formula into a finite Mealy machine whose behaviour is exactly the set of causal functions
satisfying the formula. 1
- 24TH ANNUAL IEEE SYMPOSIUM ON LOGIC IN COMPUTER SCIENCE , 2009
"... Several dynamical systems, such as deterministic automata and labelled transition systems, can be described as coalgebras of so-called Kripke polynomial functors, built up from constants and
identities, using product, coproduct and powerset. Locally finite Kripke polynomial coalgebras can be charact ..."
Cited by 7 (7 self)
Several dynamical systems, such as deterministic automata and labelled transition systems, can be described as coalgebras of so-called Kripke polynomial functors, built up from constants and
identities, using product, coproduct and powerset. Locally finite Kripke polynomial coalgebras can be characterized up to bisimulation by a specification language that generalizes Kleene’s regular
expressions for finite automata. In this paper, we equip this specification language with an axiomatization and prove it sound and complete with respect to bisimulation, using a purely coalgebraic
argument. We demonstrate the usefulness of our framework by providing a finite equational system for (non-)deterministic finite automata, labelled transition systems with explicit termination, and
automata on guarded strings.
, 2008
"... Applications of modal logics are abundant in computer science, and a large number of structurally different modal logics have been successfully employed in a diverse spectrum of application
contexts. Coalgebraic semantics, on the other hand, provides a uniform and encompassing view on the large vari ..."
Cited by 4 (0 self)
Applications of modal logics are abundant in computer science, and a large number of structurally different modal logics have been successfully employed in a diverse spectrum of application contexts.
Coalgebraic semantics, on the other hand, provides a uniform and encompassing view on the large variety of specific logics used in particular domains. The coalgebraic approach is generic and
compositional: tools and techniques simultaneously apply to a large class of application areas and can moreover be combined in a modular way. In particular, this facilitates a pick-and-choose
approach to domain specific formalisms, applicable across the entire scope of application areas, leading to generic software tools that are easier to design, to implement, and to maintain. This paper
substantiates the authors ’ firm belief that the systematic exploitation of the coalgebraic nature of modal logic will not only have impact on the field of modal logic itself but also lead to
significant progress in a number of areas within computer science, such as knowledge representation and concurrency/mobility.
- In Areces and Goldblatt [3
"... abstract. We give a sound and complete derivation system for the valid formulas in the finitary version of Moss ’ coalgebraic logic, for coalgebras of arbitrary type. ..."
Cited by 2 (1 self)
abstract. We give a sound and complete derivation system for the valid formulas in the finitary version of Moss ’ coalgebraic logic, for coalgebras of arbitrary type.
- MFPS , 2009
"... Coalgebra develops a general theory of transition systems, parametric in a functor T; the functor T specifies the possible one-step behaviours of the system. A fundamental question in this area
is how to obtain, for an arbitrary functor T, a logic for T-coalgebras. We compare two existing proposals, ..."
Cited by 1 (1 self)
Coalgebra develops a general theory of transition systems, parametric in a functor T; the functor T specifies the possible one-step behaviours of the system. A fundamental question in this area is
how to obtain, for an arbitrary functor T, a logic for T-coalgebras. We compare two existing proposals, Moss’s coalgebraic logic and the logic of all predicate liftings, by providing one-step
translations between them, extending the results in [21] by making systematic use of Stone duality. Our main contribution then is a novel coalgebraic logic, which can be seen as an equational
axiomatization of Moss’s logic. The three logics are equivalent for a natural but restricted class of functors. We give examples showing that the logics fall apart in general. Finally, we argue that
the quest for a generic logic for T-coalgebras is still open in the general case.
, 2011
"... In this article we give an accessible introduction to stream differential equations, ie., equations that take the shape of differential equations from analysis and that are used to define
infinite streams. Furthermore we discuss a syntactic format for stream differential equations that ensures that ..."
In this article we give an accessible introduction to stream differential equations, ie., equations that take the shape of differential equations from analysis and that are used to define infinite
streams. Furthermore we discuss a syntactic format for stream differential equations that ensures that any system of equations that fits into the format has a unique solution. It turns out that the
stream functions that can be defined using our format are precisely the causal stream functions. Finally, we are going to discuss non-standard stream calculus that uses basic (co-)operations
different from the usual head and tail operations in order to define and to reason about streams and stream functions. 1
"... Abstract. Coalgebras provide a uniform framework for the study of dynamical systems, including several types of automata. The coalgebraic view on systems has recently been proved relevant by the
development of a number of expression calculi which generalize classical results by Kleene, on regular ex ..."
Abstract. Coalgebras provide a uniform framework for the study of dynamical systems, including several types of automata. The coalgebraic view on systems has recently been proved relevant by the
development of a number of expression calculi which generalize classical results by Kleene, on regular expressions, and by Kozen, on Kleene algebra. This note contains an overview of the motivation
and results of the generic framework we developed – Kleene Coalgebra – to uniformly derive the aforementioned calculi. We present an historical overview of work on regular expressions and
axiomatizations, as well a discussion of related work. We show applications of the framework to three types of probabilistic systems: simple Segala, stratified and Pnueli-Zuck. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=242238","timestamp":"2014-04-20T05:25:20Z","content_type":null,"content_length":"30227","record_id":"<urn:uuid:daa7bb01-cf27-4ecd-a053-46029eaed123>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00062-ip-10-147-4-33.ec2.internal.warc.gz"} |
An 802.11 WEP frame consists of many fields: headers, data, ICV, etc. Let's consider only data and ICV, and assume a constant IV.
ICV algorithm is an implementation of CRC32. It is calculated incrementally for every byte of data the frame has. Each step in C:
crc = crc_tbl[(crc ^ data[i]) & 0xFF] ^ ( crc >> 8 );
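That per-byte update is the standard reflected CRC-32. As a sanity check (the Python below is my own sketch, assuming the usual parameters: table built from the reflected polynomial 0xEDB88320, initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF), the loop reproduces zlib's crc32:

```python
import zlib

# Build the standard CRC-32 lookup table (reflected polynomial 0xEDB88320).
crc_tbl = []
for i in range(256):
    c = i
    for _ in range(8):
        c = (c >> 1) ^ 0xEDB88320 if c & 1 else c >> 1
    crc_tbl.append(c)

def icv(data):
    """WEP-style ICV: CRC-32 of `data`, one table step per byte."""
    crc = 0xFFFFFFFF
    for b in data:
        crc = crc_tbl[(crc ^ b) & 0xFF] ^ (crc >> 8)   # the step quoted above
    return crc ^ 0xFFFFFFFF                            # final inversion

print(icv(b"hello") == zlib.crc32(b"hello"))   # -> True
```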
ICV is stored little-endian, and the frame is XORed with the RC4 keystream. From now on, we'll represent the XOR operation with `+'.
Frame 1:
_____ DATA ___ ____ICV ___
D0 D1 D2 D3 D4 I3 I2 I1 I0
+ + + + + + + + +
K0 K1 K2 K3 K4 K5 K6 K7 K8
= = = = = = = = =
R0 R1 R2 R3 R4 R5 R6 R7 R8
Where D is data, I is ICV, K is keystream and R is what you get. If we add a data byte we get Frame 2:
_____ DATA ______ ____ICV ___
D0 D1 D2 D3 D4 D5 J3 J2 J1 J0
+ + + + + + + + + +
K0 K1 K2 K3 K4 K5 K6 K7 K8 K9
= = = = = = = = = =
S0 S1 S2 S3 S4 S5 S6 S7 S8 S9
Where J is ICV and S is what you get.
It is possible to go from Frame 2 to Frame 1 just by guessing the value of the sum I3+D5, which we will call X (one of 256 possibilities): X = I3+D5
• D0 to D4 remain the same.
• R5 = I3 + K5 = I3 + (D5+D5) + K5 = (I3+D5) + (D5+K5) = X + S5.
• R6 to R8 are computed by reversing one crc step based on the value of X. There's a correspondence between I2-I0 and J3-J1, because crc shifts them back but D5 "pushes" them forward again. They are not necessarily equal, but their difference depends only on X, which we have guessed.
• J0 depends only on X. K9 = S9 + J0. We have guessed the last message byte and the last byte of keystream.
We will guess X by trial and error. The access point discards invalid frames, and by doing so it helps us guess the value of X.
By doing this, we have found a valid frame 1 byte shorter than original one, and we have guessed one byte of keystream. This process can be induced to get the whole keystream.
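The "reverse one crc step" operation can be made concrete. It relies on a property of the CRC-32 table (assumed here, and checked at runtime in the sketch below, which is mine and not part of the original page): the top bytes of the 256 table entries are all distinct, so the top byte of the new crc value identifies which table entry was XORed in, and the step can be undone exactly:

```python
# Undo one CRC-32 step  crc' = crc_tbl[(crc ^ b) & 0xFF] ^ (crc >> 8).
# Hypothetical sketch; relies on the table's top bytes being distinct.

crc_tbl = []
for i in range(256):
    c = i
    for _ in range(8):
        c = (c >> 1) ^ 0xEDB88320 if c & 1 else c >> 1
    crc_tbl.append(c)

rev_top = {t >> 24: i for i, t in enumerate(crc_tbl)}  # top byte -> index
assert len(rev_top) == 256     # the 256 top bytes really are all distinct

def crc_step(crc, b):
    return crc_tbl[(crc ^ b) & 0xFF] ^ (crc >> 8)

def crc_unstep(crc_after, b):
    idx = rev_top[crc_after >> 24]           # which table entry was XORed in
    hi = (crc_after ^ crc_tbl[idx]) << 8     # recovers bits 8..31 of old crc
    return hi | (idx ^ b)                    # low byte: idx = (crc ^ b) & 0xFF

crc = 0xDEADBEEF
assert crc_unstep(crc_step(crc, 0x42), 0x42) == crc    # exact round trip
print("one crc step reversed")
```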
For additional detailed descriptions see: | {"url":"http://aircrack-ng.org/doku.php?id=chopchoptheory&DokuWiki=c3063b3a2e711f78916e3b520d71a6fd","timestamp":"2014-04-19T08:26:21Z","content_type":null,"content_length":"13617","record_id":"<urn:uuid:03b1efe6-73f8-45fd-ae41-e7c4757eb6ca>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plugged in to CO2
Students actually measure energy use with a Kill-a-Watt meter.
Concrete activity that helps students relate their everyday experiences to the discussion of climate change.
Comment from expert scientist: The activity needs a better primer on the difference between power and energy, taught before the exercise, since most people do not know the difference. It could be explained as follows: The scientific definition of power is simply the rate of energy use; that is, power is equal to energy per time. Many people confuse power with energy. Knowing a particular machine's power rating tells you nothing about how much energy it will use unless you know for how long it will run. The unit of energy is the joule (J), which is the work done by a force of one newton acting through a distance of one meter.
As power is simply the energy flow per unit time, it is measured in watts; one watt is equal to one joule per second, i.e. a force of one newton acting through one meter, each second.

Power: joules per second, or watts (energy / time)
Energy: joules, or watt-seconds (power x time)

Joules, or equivalently watt-seconds, are SI units (international system of units).
Energy can also be measured in watt-hours (Wh) or kilowatt-hours (kWh), which is how your electricity use at home is measured and how you get charged for your electricity consumption every month.
A 100-watt light bulb has a power rating of 100 W; left on for one hour, it will use 100 Wh of energy. In NYC electricity costs about 19 cents per kWh, so leaving your 100 W bulb on for 10 hours uses 1000 Wh, or 1 kWh, and would cost you $0.19.
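The worked example translates directly into arithmetic; a quick sketch (mine, using the 19 cents/kWh figure quoted above):

```python
def energy_kwh(power_watts, hours):
    """Energy = power x time, converted from watt-hours to kilowatt-hours."""
    return power_watts * hours / 1000.0

def cost_dollars(power_watts, hours, rate_per_kwh=0.19):
    return energy_kwh(power_watts, hours) * rate_per_kwh

print(energy_kwh(100, 1))     # -> 0.1   (100 W for 1 h = 100 Wh = 0.1 kWh)
print(energy_kwh(100, 10))    # -> 1.0   (10 h -> 1000 Wh = 1 kWh)
print(cost_dollars(100, 10))  # -> 0.19  (about 19 cents at the NYC rate)
```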
An airplane leaves an airport and flies due west 120 miles and then 150 miles in the direction S 39.17°W. How far is the plane from the airport (round to the nearest mile)?
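One way to set the problem up (my own sketch, not an answer posted in the thread): "S 39.17°W" means 39.17° west of due south, so resolve each leg into west and south components and take the magnitude of the sum; the law of cosines with the 129.17° angle between the legs gives the same number.

```python
import math

theta = math.radians(39.17)

# Leg 1: 120 mi due west.  Leg 2: 150 mi heading S 39.17 W.
west = 120 + 150 * math.sin(theta)    # westward components add
south = 150 * math.cos(theta)         # only leg 2 has a southward part
d = math.hypot(west, south)

# Cross-check: law of cosines, angle between legs = 180 - (90 - 39.17) deg.
d2 = math.sqrt(120**2 + 150**2
               - 2 * 120 * 150 * math.cos(math.radians(129.17)))

print(round(d))   # -> 244 miles (and d2 agrees with d)
```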
Unitary Matrices and Hermitian Matrices
Recall that the conjugate of a complex number a + bi is a - bi.
In this section, I'll use conjugate transpose.
Complex conjugation satisfies the following properties:
(a) If
(b) If
(c) If
The proofs are easy; just write out the complex numbers (e.g.
The conjugate of a matrix A is the matrix obtained by taking the complex conjugate of each entry of A.
You can check that if A and B are matrices and
You can prove these results by looking at individual elements of the matrices and using the properties of conjugation of numbers given above.
Definition. If A is a complex matrix, the conjugate transpose of A is the matrix obtained by conjugating each entry of A and then transposing.

Note that the conjugation and transposition can be done in either order: That is, conjugating and then transposing gives the same matrix as transposing and then conjugating.
Example. If
Since the complex conjugate of a real number is the real number itself, if B is a real matrix, then the conjugate transpose of B is just the transpose of B.
Remark. Most people call the conjugate transpose the adjoint of A --- though, unfortunately, the word "adjoint" has already been used for the transpose of the matrix of cofactors in the determinant formula for the inverse of a matrix.

Another name for it is the Hermitian conjugate --- but this has always sounded ugly to me, so I won't use this terminology.
Since this is an introduction to linear algebra, I'll usually refer to conjugate transpose, which at least has the virtue of saying what the thing is.
Proposition. Let U and V be complex matrices, and let
(d) If
Proof. I'll prove (a), (c), and (d).
For (a), I use the fact noted above that
I have
This proves (a).
For (c), I have
For (d), recall that the dot product of complex vectors
Notice that you take the complex conjugates of the components of v before multiplying!
This can be expressed as the matrix multiplication
It's a common notational abuse to write the number "
There are two points to note about this equation: why conjugate, and why transpose v? The reason for the conjugation goes back to the need for inner products to be positive definite (so that the inner product of a vector with itself is a nonnegative real number).
The reason for the transpose is that I'm using the convention that vectors are column vectors. So if u and v are n-dimensional column vectors and I want the product to be a number --- i.e. a row
vector (
Finally, why do u and v switch places in going from the left side to the right side? The reason you write
Of course, none of this makes any difference if you're dealing with real numbers. So if x and y are vectors in
Definition. A complex matrix U is unitary if its conjugate transpose is its inverse.
Notice that if U happens to be a real matrix, unitary is the complex analog of orthogonal.
By the same kind of argument I gave for orthogonal matrices,
Proposition. Let U be a unitary matrix.
(a) U preserves inner products: the inner product of Ux and Uy equals the inner product of x and y.
(b) An eigenvalue of U must have length 1.
(c) The columns of a unitary matrix form an orthonormal set.
Proof. (a)
Since U preserves inner products, it also preserves lengths of vectors, and the angles between them. For example,
(b) Suppose x is an eigenvector corresponding to the eigenvalue λ, so that Ux = λx with x nonzero.

But U preserves lengths, so the length of λx equals the length of x; since the length of λx is |λ| times the length of x, and x is nonzero, λ must have length 1.
(c) Suppose
For example, take the first row
This says that
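The three parts of the Proposition are easy to check numerically on a sample unitary matrix. (The matrix U below and the code are my own illustration, written in plain Python with the convention from these notes that the complex dot product conjugates the components of the second vector.)

```python
import cmath

s = 1 / 2 ** 0.5
U = [[s, s], [s * 1j, -s * 1j]]    # a sample 2x2 unitary matrix

def inner(u, v):   # complex dot product: conjugate the components of v
    return sum(a * b.conjugate() for a, b in zip(u, v))

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

col = lambda j: [U[0][j], U[1][j]]

# (c) the columns form an orthonormal set
assert abs(inner(col(0), col(0)) - 1) < 1e-12
assert abs(inner(col(1), col(1)) - 1) < 1e-12
assert abs(inner(col(0), col(1))) < 1e-12

# (a) U preserves inner products
x, y = [1 + 2j, -1j], [3, 1 - 1j]
assert abs(inner(matvec(U, x), matvec(U, y)) - inner(x, y)) < 1e-12

# (b) both eigenvalues have length 1 (quadratic formula on x^2 - tr*x + det)
tr = U[0][0] + U[1][1]
det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
for lam in ((tr + disc) / 2, (tr - disc) / 2):
    assert abs(abs(lam) - 1) < 1e-12
print("unitary checks pass")
```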
Example. Find c and d so that the following matrix is unitary:
I want the columns to be orthogonal, so their complex dot product should be 0. First, I'll find a vector that is orthogonal to the first column. I may ignore the factor of
This gives
I may take
So I need to divide each of a and b by
Proposition. ( Adjointness) let
Remark. If adjoint
(This definition assumes that there is such a transformation.) This explains why, in the special case of the complex inner product, the conjugate transpose is called the matrix adjoint. It also explains the term self-adjoint in the
next definition.
Corollary. ( Adjointness) let
Proof. This follows from adjointness in the complex case, because
Definition. A complex matrix A is Hermitian (or self-adjoint) if it is equal to its own conjugate transpose.
Note that a Hermitian matrix is automatically square.
For real matrices, Hermitian is the same as symmetric.
Example. Here are examples of Hermitian matrices:
It is no accident that the diagonal entries are real numbers --- see the result that follows.
Here's a table of the correspondences between the real and complex cases:

    real case             complex case
    ---------             ------------
    transpose             conjugate transpose
    symmetric matrix      Hermitian matrix
    orthogonal matrix     unitary matrix
Proposition. Let A be a Hermitian matrix.
(a) The diagonal elements of A are real numbers, and elements on opposite sides of the main diagonal are conjugates.
(b) The eigenvalues of a Hermitian matrix are real numbers.
(c) Eigenvectors of A corresponding to different eigenvalues are orthogonal.
Proof. (a) Since
But a complex number is equal to its conjugate if and only if it's a real number, so the diagonal entries of A must be real.
(b) Suppose A is Hermitian and
(c) Suppose
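These facts are easy to check on a concrete example. (The matrix, the eigenvectors, and the code below are my own illustration in plain Python, using the conjugating dot product from earlier in these notes.)

```python
# A concrete Hermitian matrix: equal to its own conjugate transpose.
A = [[2 + 0j, 1 - 1j], [1 + 1j, 3 + 0j]]
assert A[0][1] == A[1][0].conjugate()           # opposite sides are conjugates
assert A[0][0].imag == 0 and A[1][1].imag == 0  # diagonal entries are real

# Characteristic polynomial x^2 - (tr A) x + (det A) has real roots 1 and 4.
tr = (A[0][0] + A[1][1]).real                       # 5
det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]).real  # 6 - |1 - i|^2 = 4
disc = tr * tr - 4 * det                            # 9 > 0: real eigenvalues
lam1 = (tr - disc ** 0.5) / 2                       # 1.0
lam2 = (tr + disc ** 0.5) / 2                       # 4.0

# Eigenvectors (found by hand) for lam1 = 1 and lam2 = 4:
v1 = [1 - 1j, -1]
v2 = [1 - 1j, 2]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

assert matvec(A, v1) == [lam1 * c for c in v1]   # A v1 = 1 * v1
assert matvec(A, v2) == [lam2 * c for c in v2]   # A v2 = 4 * v2

# Eigenvectors for the two different eigenvalues are orthogonal:
dot = sum(a * b.conjugate() for a, b in zip(v1, v2))
assert dot == 0
print("Hermitian checks pass:", lam1, lam2)
```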
Since real symmetric matrices are Hermitian, the previous results apply to them as well. I'll restate the previous result for the case of a symmetric matrix.
Corollary. Let A be a symmetric matrix.
(a) The elements on opposite sides of the main diagonal are equal.
(b) The eigenvalues of a symmetric matrix are real numbers.
(c) Eigenvectors of A corresponding to different eigenvalues are orthogonal.
Example. Consider the symmetric matrix
The characteristic polynomial is
Note that the eigenvalues are real numbers.
Example. A
(a) Find an eigenvector corresponding to the eigenvalue 3.
Since eigenvectors for different eigenvalues of a symmetric matrix must be orthogonal, I have
So, for example,
(b) Find A.
From (a), a diagonalizing matrix and the corresponding diagonal matrix are
Note that the result is indeed symmetric.
Example. Let
Compute the characteristic polynomial of A, and show directly that the eigenvalues must be real numbers.
The discriminant is
Since this is a sum of squares, it can't be negative. Hence, the roots of the characteristic polynomial --- the eigenvalues --- must be real numbers.
Send comments about this page to: Bruce.Ikenaga@millersville.edu.
Copyright 2013 by Bruce Ikenaga | {"url":"http://www.millersville.edu/~bikenaga/linear-algebra/unitary/unitary.html","timestamp":"2014-04-18T10:40:51Z","content_type":null,"content_length":"30433","record_id":"<urn:uuid:48039039-e638-488b-b5b6-761f967b1f09>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00613-ip-10-147-4-33.ec2.internal.warc.gz"} |
Difference two squares: pos. integer solns of x^2-y^2=120
Re: Difference two squares: pos. integer solns of x^2-y^2=12
Re: Difference two squares: pos. integer solns of x^2-y^2=12
Thanks we got 3 out of the four. What are the sons.
Chasyesker wrote:Looking for a way to find all positive integers of x^2-y^2=120. Found some by trial and error but don't know how to find them all.
There is a method for this sort of Diophantine equation. You can read a paper on the method
It appears that there are
four solution pairs
Difference two squares: pos. integer solns of x^2-y^2=120
Looking for a way to find all positive integers of x^2-y^2=120. Found some by trial and error but don't know how to find them all.
Chasyesker wrote:Thanks we got 3 out of the four.
Which one (from the link) had you missed?
Chasyesker wrote:What are the sons.
Um... what? | {"url":"http://www.purplemath.com/learning/viewtopic.php?p=9168","timestamp":"2014-04-17T22:14:19Z","content_type":null,"content_length":"22249","record_id":"<urn:uuid:d0d82d5d-3780-4ee0-9b3b-804e9659af40>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00139-ip-10-147-4-33.ec2.internal.warc.gz"} |
Baruch College Department of Mathematics
The major portion of this course will explore the evolution of Mathematical ideas in the ancient period (approximately 2000 B.C. to 1200 A.D.), and conclude with a study of the re-emergence of
Mathematics in Europe between the fourteenth and sixteenth centuries. In the ancient period Mathematical contributions of Babylonian, Egyptian, Greek, Chinese, Indian and Arab Mathematicians will be
studied. Then re-emergence of Mathematics in Europe will be explored through the contributions of Leonardo Pisa (Fibonacci), Luca Pacioli, Gerolamo Cardano, and Francois Vieta.
Prequisites: MTH 3006 or MTH 3010 or department permission. | {"url":"http://www.baruch.cuny.edu/math/course_syllabi/4230.html","timestamp":"2014-04-17T09:38:30Z","content_type":null,"content_length":"9400","record_id":"<urn:uuid:03c7a668-ea6c-4508-9fb8-5caa97eb82b9>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
Integrals - first by substitution then by parts
May 17th 2006, 06:29 PM #1
Integrals - first by substitution then by parts
we're given the integral 2cos(ln(x))dx
and told to first use substitution then integration by parts, could someone kindly point me in the right direction
we're given the integral 2cos(ln(x))dx
and told to first use substitution then integration by parts, could someone kindly point me in the right direction
Right direction?
Okay, go to the Calculus thread in this forum, look for the posting by nirva, "Some calc questions". Open that. The second problem there is almost the same as yours, except that yours has "2"
before the cos(ln(x)).
If you know what to do with that "2", and you could follow the solution there of that second problem, then you should be able to get your integral.
we're given the integral 2cos(ln(x))dx
and told to first use substitution then integration by parts, could someone kindly point me in the right direction
You have,
$\int \cos (\ln x) dx=\int \frac{x\cos (\ln x)}{x}<br />$
Use substitution $u=\ln x$ then, $u'=1/x$ and $x=e^u$
$\int e^u \cos u \frac{du}{dx} dx=\int e^u \cos u du$
I am going to stop here because I realized that ticbol made a post before me and already answered this question. Do as he says.
Last edited by ThePerfectHacker; May 17th 2006 at 06:55 PM.
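As a side check on where the substitution-then-parts route lands (not part of the original thread — this assumes the SymPy library and uses illustrative variable names):

```python
# Verify the antiderivative of 2*cos(ln x) with SymPy (assumed available).
import sympy as sp

x = sp.symbols('x', positive=True)

# SymPy carries out the same substitution/parts work internally.
F = sp.integrate(2 * sp.cos(sp.log(x)), x)

# Differentiating the antiderivative must recover the integrand.
residual = sp.simplify(sp.diff(F, x) - 2 * sp.cos(sp.log(x)))
```

Carrying the parts step out by hand, ∫ e^u cos u du = e^u (sin u + cos u)/2, so the original integral is x·(sin(ln x) + cos(ln x)) + C; `residual` above comes out to 0.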
Generalized Lorenz–Mie theory for infinitely long elliptical cylinders
A generalized Lorenz–Mie theory for infinite elliptical cylinders is presented. This theory describes the interaction between arbitrary shaped beams and infinitely long cylinders having an elliptical
cross section.
© 1999 Optical Society of America
OCIS Codes
(260.0260) Physical optics : Physical optics
(290.0290) Scattering : Scattering
G. Gouesbet and L. Mees, "Generalized Lorenz–Mie theory for infinitely long elliptical cylinders," J. Opt. Soc. Am. A 16, 1333-1341 (1999)
Yield to Call (YTC)
What it is:
Yield to call is a measure of the yield of a bond if you were to hold it until the call date.
How it works/Example:
To understand yield to call, one must first understand that the price of a bond is equal to the present value of its future cash flows, as calculated by the following formula:

P = Σ (t = 1 to n) [ C/(1+r)^t ] + F/(1+r)^n

P = price of the bond
n = number of periods
C = coupon payment
r = required rate of return on this investment
F = principal at maturity
t = time period when payment is to be received
To calculate the yield to call, the investor then uses a financial calculator or software to find out what percentage rate (r) will make the present value of the bond's cash flows equal to today's
selling price.
The big distinction with yield to call, however, is that the investor assumes that the bond is called at the earliest possible date rather than held to maturity. (To run the calculations assuming the
bond is held to maturity would be to calculate the yield to maturity).
For example, say you own a Company XYZ bond with a $1,000 par value and a 5% coupon that matures in three years. Also suppose this bond is callable in two years at 105% of par. To
calculate the yield to call, you simply pretend that the bond matures in two years rather than three, and calculate the yield accordingly. You should also consider the call price (105% of $1,000, or
$1,050) as the principal at maturity (F). Thus, if this Company XYZ bond is selling for $980 today, using the formula above we can calculate that the yield to call is 4.23%.
[Use our Yield to Call (YTC) Calculator to measure your annual return if you hold a particular bond until its first call date.]
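The "financial calculator or software" step is ordinary root-finding: pick the rate r at which the discounted cash flows up to the call date equal today's price. A minimal sketch (hypothetical function names, annual coupons, a single known call date — not tied to any vendor's tool):

```python
def price_to_call(r, coupon, call_price, periods):
    """Present value of the coupons through the call date plus the call price."""
    pv = sum(coupon / (1 + r) ** t for t in range(1, periods + 1))
    return pv + call_price / (1 + r) ** periods

def yield_to_call(price, coupon, call_price, periods, lo=1e-9, hi=1.0):
    """Bisection: present value falls as r rises, so halve the bracket."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if price_to_call(mid, coupon, call_price, periods) > price:
            lo = mid  # PV still above the market price: rate must be higher
        else:
            hi = mid
    return (lo + hi) / 2
```

With the article's $980 price, $1,050 call price and two periods, `yield_to_call(980, coupon, 1050, 2)` returns whatever rate the assumed coupon and compounding convention imply; the article's exact 4.23% figure depends on conventions it does not spell out, so treat these inputs as illustrative.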
Why it Matters:
Although the yield to call calculation considers the three sources of potential return from a bond (coupon payments, capital gains, and reinvestment returns), some analysts consider it inappropriate
to assume that the investor can reinvest the coupon payments at a rate equal to the yield to call. The yield to call makes two other tenuous assumptions: it assumes the investor will hold the bond
until it is called, and it assumes the issuer will call the bond on one of the exact dates used in the analysis.
The true yield of a callable bond at any given price is usually lower than its yield to maturity because the call provisions limit the bond's potential price appreciation -- when interest rates fall,
the price of a callable bond will not go any higher than its call price. This is because the issuer should act in the best interests of the company and call the bond as soon as it is favorable to do
As a result, investors usually consider the lower of the yield to call and the yield to maturity as the more realistic indication of the return an investor will actually receive on a callable bond.
Some investors go a step further and calculate the yield to call not just for the first call date, but for all possible call dates. Then the investor compares all the calculated yields to call and
yields to maturity and relies on the lowest of them, called the yield to worst.
Lokhorst, 'Computational Meta-ethics' (2011) - Less Wrong Discussion
Besides Yudkowsky and Goertzel, the only person I know of doing serious computational meta-ethics is Dutch philosopher and computer scientist Gert-Jan Lokhorst. He has a paper forthcoming in Minds
and Machines called "Computational Meta-Ethics: Towards the Meta-Ethical Robot." I suspect it will be of interest to some.
His paper also mentions some work in formal epistemology on computational metaphysics and computational meta-modal logic. Ah, the pleasures of scholarship! (You're all tired of me harping on about
scholarship, right?) | {"url":"http://lesswrong.com/r/discussion/lw/4qs/lokhorst_computational_metaethics_2011/","timestamp":"2014-04-20T10:47:07Z","content_type":null,"content_length":"55533","record_id":"<urn:uuid:184659fc-c8bf-4b11-a194-6d37d53fa255>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00419-ip-10-147-4-33.ec2.internal.warc.gz"} |
plz help! Integration problems
September 17th 2008, 03:39 PM
plz help! Integration problems
Alright. I already put a lot of thought into some of these but I cannot get the right answer out or am stuck in the middle.
1) The integral, from 0 to pi/4, of ((sinx)^4)((cosx)^2)dx
My thought process has gotten to (1/16)integral(1-cos4x)dx -integral((1/8)((cosx)^2)((sin2x)^2)dx
What do i do from here? Or is there a different way to go at it?
2) Integral, from pi/4 to pi/2 of (cotx)^2 for this one, I got all the way to solving the integral, which i Think is -.5(cotx)^2 -ln(abs value of)sinx ...
I am stuck here, since pi/2 cant go into cotangent... what do I do?
3) The integral of 1 to radical 3 of arctan(1/x)dx. I solved this several times, and I got the integral to come out as xarctan(1/x) +.5ln(x^2 +1)
However, this is wrong, because i know that the answer SHOULD be -.468. How do i get -.468?
THANK YOU! for your helP!
September 17th 2008, 04:47 PM
1. I'll have to play with this one ... even powers of sine and cosine are a pain-in-the-...
2. $\cot^2{x} = \csc^2{x} - 1$
3. how is it negative ?
from 1 to sqrt(3), the function y = arctan(1/x) is greater than 0.
I get positive .468
September 17th 2008, 06:33 PM
Hello, 3deltat!
$3)\;\;\int^{\sqrt{3}}_1 \arctan\left(\frac{1}{x}\right)\,dx$
I got the integral to come out as: . $x\arctan\left(\frac{1}{x}\right) + \frac{1}{2}\ln(x^2 +1)$ . . . . Right!
However, this is wrong, because i know that the answer SHOULD be -0.468 . ??
How can the answer be negative? . . . The graph is above the x-axis.
| ..*
| .*:::::|
| *:::::::::|
| * |:::::::::|
|* |:::::::::|
| |:::::::::|
- * - + - - - - + -
| 1 √3
Ah, I see skeeter already beat me to it!
We have: . $x\arctan\left(\frac{1}{x}\right) + \frac{1}{2}\ln(x^2+1)\:\bigg]^{\sqrt{3}}_1$
. . $\bigg[\sqrt{3}\arctan\left(\frac{1}{\sqrt{3}}\right) + \frac{1}{2}\ln(4)\bigg] - \bigg[1\!\cdot\!\arctan(1) + \frac{1}{2}\ln(2)\bigg]$
. . $= \;\bigg[\sqrt{3}\left(\frac{\pi}{6}\right) + \ln\left(4^{\frac{1}{2}}\right)\bigg] - \bigg[\frac{\pi}{4} + \frac{1}{2}\ln(2)\bigg]$
. . $= \;\frac{\pi\sqrt{3}}{6} + \ln(2) - \frac{\pi}{4} - \frac{1}{2}\ln(2)$
. . $= \;\left(\frac{2\sqrt{3}-3}{12}\right)\pi + \frac{1}{2}\ln(2)$
. . $= \;\boxed{0.468}075109$
September 17th 2008, 06:46 PM
Chop Suey
$\int \sin^4{x}\cos^2{x}$
Here's my way:
$\frac{1}{8} \int (1-\cos{2x})^2(1+\cos{2x})$
$\frac{1}{8} \int (\underbrace{1-\cos^2{2x}}_{\sin^2{2x}})(1-\cos{2x})$
$\frac{1}{8} \int (\sin^2{2x} - \sin^2{2x}\cos{2x})$
And we're done...
Another double angle formula for the first integral and sub $u = \sin{2x}$ for second integral.
September 17th 2008, 07:09 PM
Chris L T521
Alright. I already put a lot of thought into some of these but I cannot get the right answer out or am stuck in the middle.
1) The integral, from 0 to pi/4, of ((sinx)^4)((cosx)^2)dx
My thought process has gotten to (1/16)integral(1-cos4x)dx -integral((1/8)((cosx)^2)((sin2x)^2)dx
What do i do from here? Or is there a different way to go at it?
When both are even, you need to apply these identities: $\sin^2 u=\frac{1-\cos (2u)}{2}$ and $\cos^2u=\frac{1+\cos(2u)}{2}$:
$\int_0^{\frac{\pi}{4}}\left[\sin^2x\right]^2\cos^2x\,dx=\int_0^{\frac{\pi}{4}}\left[\tfrac{1}{2}(1-\cos (2x))\right]^2\left[\tfrac{1}{2}(1+\cos(2x))\right]\,dx$$=\tfrac{1}{8}\int_0^{\frac{\pi}
{4}}\left[1-\cos (2x)-\cos^2(2x)+\cos^3(2x)\right]\,dx$
Now split up the integral:
$\tfrac{1}{8}\int_0^{\frac{\pi}{4}}\left[1-\cos (2x)-\cos^2(2x)+\cos^3(2x)\right]\,dx$$=\tfrac{1}{8}\int_0^{\frac{\pi}{4}}\left[1-\cos (2x)-\cos^2(2x)\right]\,dx + \tfrac{1}{8}\int_0^{\frac{\pi}
Let's focus on this integral:
$\tfrac{1}{8}\int_0^{\frac{\pi}{4}}\left[1-\cos (2x)-\cos^2(2x)\right]\,dx$
This is the same as saying $\tfrac{1}{8}\int_0^{\frac{\pi}{4}}\left[\tfrac{1}{2}-\cos (2x)-\tfrac{1}{2}\cos(4x)\right]\,dx$
Evaluating, we get $\tfrac{1}{8}\left.\left[\tfrac{1}{2}x-\frac{1}{2}\sin(2x)-\tfrac{1}{8}\sin(4x)\right]\right|_0^{\frac{\pi}{4}}=\tfrac{1}{8}\left[\tfrac{1}{8}\pi-\tfrac{1}{2}\right]=\frac{\pi}{64}-\frac{1}{16}$
Now let's evaluate $\tfrac{1}{8}\int_0^{\frac{\pi}{4}}\cos^3(2x)\,dx$
Break off a factor of $\cos(2x)$ and apply the identity $1-\sin^2 u = \cos^2 u$
Thus, the integral becomes $\tfrac{1}{8}\int_0^{\frac{\pi}{4}}\left[1-\sin^2(2x)\right]\cos(2x)\,dx$
Now let $z=\sin(2x)\implies \,dz=2\cos(2x)\,dx$
We can change the limits of integration as well.
The integral can now be written as $\tfrac{1}{16}\int_0^1\left[1-z^2\right]\,dz=\tfrac{1}{16}\left.\left[z-\tfrac{1}{3}z^3\right]\right|_0^1=\frac{2}{48}$
Finally, our total solution is $\frac{\pi}{64}-\frac{1}{48}=\color{red}\boxed{\frac{3\pi-4}{192}}$
Does this make sense?
September 17th 2008, 07:12 PM
Chris L T521
$\int \sin^4{x}\cos^2{x}$
Here's my way:
$\frac{1}{8} \int (1-\cos{2x})^2(1+\cos{2x})$
$\frac{1}{8} \int (\underbrace{1-\cos^2{2x}}_{\sin^2{2x}})(1-\cos{2x})$
$\frac{1}{8} \int (\sin^2{2x} - \sin^2{2x}\cos{2x})$
And we're done...
Another double angle formula for the first integral and sub $u = \sin{2x}$ for second integral.
This is a "bit" easier (Rofl)
September 18th 2008, 06:15 AM
Yep this makes more sense! Thank you for your help. =) | {"url":"http://mathhelpforum.com/calculus/49517-plz-help-integration-problems-print.html","timestamp":"2014-04-18T04:04:48Z","content_type":null,"content_length":"19957","record_id":"<urn:uuid:47f76a90-6351-4953-85e1-a61f38de2624>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
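As a numerical sanity check on the two boxed answers above (not part of the original thread — assumes the mpmath library):

```python
# Check (3*pi - 4)/192 and the 0.468... value by direct quadrature.
from mpmath import mp, quad, sin, cos, atan, sqrt, log, pi

mp.dps = 30  # work to 30 significant digits

# Problem 1: integral of sin^4(x)*cos^2(x) on [0, pi/4]
I1 = quad(lambda t: sin(t)**4 * cos(t)**2, [0, pi/4])
closed1 = (3*pi - 4) / 192

# Problem 3: integral of arctan(1/x) on [1, sqrt(3)]
I2 = quad(lambda t: atan(1/t), [1, sqrt(3)])
closed2 = (2*sqrt(3) - 3) / 12 * pi + log(2) / 2
```

Both differences come out to roughly machine-zero at this precision; `closed2` evaluates to 0.468075109..., matching the figure the asker expected (up to sign).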
Calculating VSWR, Return Loss, Reflection Coefficient, and Mismatch Loss
Spectrum Software has released Micro-Cap 11, the eleventh generation of our SPICE circuit simulator.
For users of previous Micro-Cap versions, check out the new features available in the latest version. For those of you who are new to Micro-Cap, take our features tour to see what Micro-Cap has to
Calculating VSWR, Return Loss, Reflection Coefficient, and Mismatch Loss
There are a number of calculations that are useful when simulating the transmission of a wave through a line. These calculations can be quite important in calculating the energy that arrives at the
load versus how much energy the transmitter is producing. Ideally, the load impedance should match the characteristic impedance of the transmission line so that all of the transmitted energy is
available at the load. When the load impedance does not match the characteristic impedance of the transmission line, part of the voltage will be reflected back down the line reducing the available
energy at the load.
One measurement is the reflection coefficient (Γ). The reflection coefficient measures the amplitude of the reflected wave versus the amplitude of the incident wave. The expression for calculating
the reflection coefficient is as follows:
Γ = (ZL - ZS)/(ZL + ZS)
where ZL is the load impedance and ZS is the source impedance. Since the impedances may not be explicitly known, the reflection coefficient can be measured in a similar manner to an S11 measurement
by using the wave amplitudes at the source and at the node following the source impedance. The following define statement user function can be used to measure the reflection coefficient.
.define RefCo(In,Src) Mag(2*V(In)-V(Src))
where In is the node name of the node following the source impedance and Src is the part name of the source component.
The VSWR (Voltage Standing Wave Ratio) measurement describes the voltage standing wave pattern that is present in the transmission line due to the phase addition and subtraction of the incident and
reflected waves. The ratio is defined by the maximum standing wave amplitude versus the minimum standing wave amplitude. The VSWR can be calculated from the reflection coefficient with the equation:
VSWR = (1 + Γ)/(1 - Γ)
The following define statement user function can be used to measure the VSWR.
.define VSWR(In,Src) (1+RefCo(In,Src))/(1-RefCo(In,Src))
The return loss measurement describes the ratio of the power in the reflected wave to the power in the incident wave in units of decibels. The standard output for the return loss is a positive
value, so a large return loss value actually means that the power in the reflected wave is small compared to the power in the incident wave and indicates a better impedance match. The return loss
can be calculated from the reflection coefficient with the equation:
Return Loss = -20*Log(Γ)
The following define statement user function can be used to measure the return loss.
.define RetLoss(In,Src) -20*Log(RefCo(In,Src))
The mismatch loss measurement describes the amount of power that will not be available at the load due to the reflected wave in units of decibels. It indicates the amount of power lost in the system
due to the mismatched impedances. The mismatch loss can also be calculated from the reflection coefficient with the following equation:
Mismatch Loss = -10*Log(1 - Γ²)
The following define statement user function can be used to measure the mismatch loss.
.define MismatchLoss(In,Src) -10*Log(1 - RefCo(In,Src)**2)
For the VSWR, return loss, and mismatch calculations, the In and Src parameters are defined in the same manner as they are for the reflection coefficient define statement.
If only the VSWR, return loss, or mismatch loss measurement is to be performed in the analysis, the reflection coefficient define statement must also be present to perform the calculation since it
is referenced in all three of these calculations.
A simple circuit is displayed in the figure below to demonstrate the use of these define statement user functions. The circuit consists of a voltage source, two resistors, and an ideal, lossless
transmission line. The load resistance has been set to 75 ohms to create a mismatch with the 50 ohm characteristic impedance of the transmission line. The four define statement user functions have
been entered in the schematic as grid text. Each statement must be entered as a separate grid text. They may also be entered in the Text page of the schematic to reduce the clutter in the schematic.
An AC analysis simulation is then run on the circuit. The four Y expressions plotted for the simulation are RefCo(In,V1), VSWR(In,V1), RetLoss(In,V1), and MismatchLoss(In,V1).
The AC simulation results are displayed below. Since this example circuit is entirely resistive, the AC analysis response will be constant across the entire frequency range. Note that the node name
used as the parameter within these functions does not have to be named In. It can be any name that the user chooses or even the node number of the node. V1 is the part name for the voltage source in
the schematic.
The AC simulation returns the following results for this circuit:
VSWR = 1.5
Reflection Coefficient = .2
Return Loss = 13.979dB
Mismatch Loss = .177dB
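Outside the simulator, the same four figures follow directly from the impedances. A small sketch (illustrative Python, not Micro-Cap syntax), mirroring the four .define user functions:

```python
import math

def refco(zl, zs):
    """Reflection coefficient magnitude |(ZL - ZS) / (ZL + ZS)|."""
    return abs((zl - zs) / (zl + zs))

def vswr(zl, zs):
    g = refco(zl, zs)
    return (1 + g) / (1 - g)

def return_loss_db(zl, zs):
    return -20 * math.log10(refco(zl, zs))

def mismatch_loss_db(zl, zs):
    return -10 * math.log10(1 - refco(zl, zs) ** 2)
```

For the example circuit's ZL = 75 ohms and ZS = 50 ohms, these return 0.2, 1.5, 13.979 dB and 0.177 dB — the same values the AC simulation reports.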
The define statements can also be placed in the MCAP.INC file which can be accessed through the User Definitions under the Options menu. Placing them in this file makes the functions globally
available for all circuits.
1) VOLTAGE STANDING WAVE RATIO (VSWR) / REFLECTION COEFFICIENT RETURN LOSS / MISMATCH LOSS, Granite Island Group, http://www.tscm.com/vswr.pdf | {"url":"http://www.spectrum-soft.com/news/fall2009/vswr.shtm","timestamp":"2014-04-19T19:33:27Z","content_type":null,"content_length":"17424","record_id":"<urn:uuid:8c538fd3-a033-4cbd-a3f7-182b92bb9568>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
2010 SPM Exams Tips - Physics
Paper 2
Section A:
Question 1: Form 4 (Chapter 1)
Measuring Instrument - Vernier calliper/micrometer screw gauge (reading, accuracy, zero error)
Question 2: Form 5 (Chapter 2)
Choose type of fuse
Electric circuit for safety
Question 3: Form 4 (Chapter 4)
Specific latent heat of fusion and Specific latent heat of vaporisation
Steaming of food
Question 4: Form 4 (Chapter 3)
Bernoulli principle (Aerofoil)
Question 5: Form 5 (Chapter 3)
Electromagnet (electric bell)
Question 6: Form 5 (Chapter 1)
Refraction of water
Question 7: Form 5 (Chapter 5)
Equation for alpha, beta and gamma decay (Graph of nucleon number against proton number)
Calculation for the nuclear energy
Management of radioactive substance
Question 8: Form 4 (Chapter 2)
Principle of conservation of energy (definition and calculation)
Choose spring (Most elastic)
Section B: (Modify and explain)
Question 9-Essay: Form 4 (Chapter 5)
Compare size of image formed by different object distance
Application of convex mirror. (widen the viewing angle)
Modify projector @ microscope @ telescope)
Question 10-Essay: Form 5 (Chapter 3)
Electromagnetic induction (Lenz Law and Faraday law, calculation, graph, AC generator, factors)
Design DC adaptor from AC supply (transformer and rectifier)
Section C: (Choose suitability)
Question 11-Essay: Form 4 (Chapter 5)
Archimedes’s principle (definition, calculation, float concept calculation)
Choose material for hot air balloon
Question 12-Essay: Form 5 (Chapter 4)
Semiconductor and transistor as automatic switch
Choose material for semiconductor
Paper 3
Section A:
Question 1: Form 4 (Chapter 5)
Snell Law/Refractive index (sin i against sin r) (Reading on protractor)
Question 2: Form 5 (Chapter 2)
Internal resistance (E = V + Ir)
Section B: (Practical)
1. Form 4 (Chapter 2)
Mass and Acceleration experiment (Ticker timer)
2. Form 4 (Chapter 4)
Gas Law experiment (Boyle’s Law, Charles’s Law and Pressure Law)
Specific Heat Capacity experiment (mass water increases, change in temperature decreases)
3. Form 5 (Chapter 1)
Interference of Light or Sound experiment (a increases, x decreases)
4. Form 5 (Chapter 2)
Factors that affect Resistance (length, cross-sectional area, temperature and type of material)
Ohm’s Law experiment (V = IR)
List of Experiments:
Form 4 – Chapter 2
(1) To investigate the relationship between inertia and mass
(2) To investigate the relationship between force and acceleration
(3) To investigate the relationship between mass and acceleration
Form 4 – Chapter 3
(4) To investigate the relationship between depth and pressure of liquid
(5) To investigate the relationship between immersion distance in water and weight
Form 4 – Chapter 4
(6) To investigate the relationship between mass of water and change in temperature
(7) To investigate the relationship between the pressure and the volume (Boyle’s Law)
(8) To determine the relationship between the pressure and the temperature (Pressure Law)
(9) To determine the relationship between the volume and the temperature (Charles’ Law)
Form 5 – Chapter 1
(10) To investigate the relationship between the distance between two coherent sources and the distance between two consecutive constructive interferences (light and sound)
Form 5 – Chapter 2
(11) To investigate the relationship between current and the potential difference for a wire
(12) To investigate the factors affecting resistance (length, thickness of wire and temperature)
(Paper) Statistics : Main Examination Exams in India
Sample space and events, probability measure and probability space, random variable as a measurable function, distribution function of a random variable, discrete and continuous-type random variables, probability mass function, probability density function, vector-valued random variables, marginal and conditional distributions, stochastic independence of events and of random variables, expectation and moments of a random variable, conditional expectation, convergence of a sequence of random variables in distribution, in probability, in p-th mean and almost everywhere, their criteria and interrelations, Borel-Cantelli lemma, Chebyshev's and Khinchine's weak laws of large numbers, strong law of large numbers and Kolmogorov's theorems, Glivenko-Cantelli theorem, probability generating function, characteristic function, inversion theorem, Laplace transform, related uniqueness and continuity theorems, determination of distribution by its moments. Lindeberg and Levy forms of the central limit theorem, standard discrete and continuous probability distributions, their inter-relations and limiting cases, simple properties of finite Markov chains.
Statistical Inference:
Consistency, unbiasedness, efficiency, sufficiency, minimal sufficiency, completeness, ancillary statistic, factorization theorem, exponential family of distributions and its properties, uniformly minimum variance unbiased (UMVU) estimation, Rao-Blackwell and Lehmann-Scheffe theorems, Cramer-Rao inequality for single and several-parameter families of distributions, minimum variance bound estimator and its properties, modifications and extensions of the Cramer-Rao inequality, Chapman-Robbins inequality, Bhattacharyya's bounds, estimation by methods of moments, maximum likelihood, least squares, minimum chi-square, properties of maximum likelihood and other estimators, idea of asymptotic efficiency, idea of prior and posterior distributions, Bayes estimators.
Non-randomised and randomised tests, critical function, MP tests, Neyman-Pearson lemma, UMP tests, monotone likelihood ratio, generalised Neyman-Pearson lemma, similar and unbiased tests, UMPU tests for single and several-parameter families of distributions, likelihood ratio tests and their large sample properties, chi-square goodness of fit test and its asymptotic distribution.
Confidence bounds and their relation with tests, uniformly most accurate (UMA) and UMA unbiased confidence bounds.
Kolmogorov's test for goodness of fit and its consistency, sign test and its optimality, Wilcoxon signed-ranks test and its consistency, Kolmogorov-Smirnov two-sample test, run test, Wilcoxon-Mann-Whitney test and median test, their consistency and asymptotic normality. Wald's SPRT and its properties, OC and ASN functions, Wald's fundamental identity, sequential estimation.
Linear Inference and Multivariate Analysis:
Linear statistical models, theory of least squares and analysis of variance, Gauss-Markoff theory, normal equations, least squares estimates and their precision, test of significance and interval
estimates based on least squares theory in one-way, two-way and three-way classified data, regression analysis, linear regression, curvilinear regression and orthogonal polynomials, multiple
regression, multiple and partial correlations, regression diagnostics and sensitivity analysis, calibration problems, estimation of variance and covariance components, MINQUE theory, multivariate
normal distribution, Mahalanobis-D2 and Hotellings T2 statistics and their applications and properties, discriminant analysis, canonical correlations, one-way M ANOVA, principal, component analysis,
elements of factor analysis.
Sampling Theory and Design of Experiments:
An outline of fixed-population and super-population approaches, distinctive features of finite population sampling, probability sampling designs, simple random sampling with and without replacement,
stratified random sampling, systematic sampling and its efficacy for structural populations, cluster sampling, two-stage and multi-stage sampling, ratio and regression methods of estimation involving one or more auxiliary variables, two-phase sampling, probability proportional to size sampling with and without replacement, the Hansen-Hurwitz and the Horvitz-Thompson estimators, non-negative variance estimation with reference to the Horvitz-Thompson estimator, non-sampling errors, Warner's randomised response technique for sensitive characteristics.
Fixed effects model (two-way classification), random and mixed effects models (two-way classification per cell), CRD, RBD, LSD and their analyses, incomplete block designs, concepts of orthogonality
and balance, BIBD, missing plot technique, factorial designs: 2n, 32 and 33 confounding in factorial experiments, split-plot and simple lattice designs.
I. Industrial Statistics:
Process and product control, general theory of control charts, different types of control charts for variables and attributes, X,R,s,p,np and c charts, cumulative sum chart, V-mask, single, double,
multiple and sequential sampling plans for attributes, OC, ASN, AOQ and ATI curves, concepts of producer's and consumer's risk, AQL, LTPD and AOQL, sampling plans for variables, use of Dodge-Romig and
Military Standard tables.
Concepts of reliability, maintainability and availability, reliability of series and parallel systems and other simple configurations, renewal density and renewal function, survival models
(exponential, Weibull, lognormal, Rayleigh and bath-tub), different types of redundancy and use of redundancy in reliability improvement, problems in life-testing, censored and truncated experiments
for exponential models.
II. Optimization techniques:
Different types of models in Operational Research, their construction and general methods of solution, simulation and Monte-Carlo methods, the structure and formulation of linear programming (LP)
problem, simple LP model and its graphical solution, the simplex procedure, the two-phase method and the M-technique with artificial variables, the duality theory of LP and its economic
interpretation, sensitivity analysis, transportation and assignment problems, rectangular games, two-person zero-sum games, methods of solution (graphical and algebraic).
Replacement of failing or deteriorating items, group and individual replacement policies, concept of scientific inventory management and analytical structure of inventory problems, simple models with
deterministic and stochastic demand with and without lead time, storage models with particular reference to dam type.
Homogeneous discrete-time Markov chains, transition probability matrix, classification of states and ergodic theorems, homogeneous continuous-time Markov chains, Poisson process, elements of queueing
theory, M/M/1, M/M/K, G/M/1 and M/G/1 queues.
Solution of statistical problems on computers using well known statistical software packages like SPSS.
III. Quantitative Economics and Official Statistics:
Determination of trend, seasonal and cyclical components, Box-Jenkins method, tests for stationarity of series, ARIMA models and determination of orders of autoregressive and moving average components,
Commonly used index numbers: Laspeyres', Paasche's and Fisher's ideal index numbers, chain-base index number, uses and limitations of index numbers, index number of wholesale prices, consumer price index
number, index numbers of agricultural and industrial production, tests for index numbers like proportionality test, time-reversal test, factor-reversal test, circular test and dimensional invariance.
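Fisher's ideal index listed above is the one that passes both the time-reversal and factor-reversal tests. A small sketch (all prices and quantities made up):

```python
def laspeyres(p0, p1, q0, q1):
    # Base-period quantities as weights.
    return sum(a * b for a, b in zip(p1, q0)) / sum(a * b for a, b in zip(p0, q0))

def paasche(p0, p1, q0, q1):
    # Current-period quantities as weights.
    return sum(a * b for a, b in zip(p1, q1)) / sum(a * b for a, b in zip(p0, q1))

def fisher(p0, p1, q0, q1):
    # Geometric mean of Laspeyres and Paasche.
    return (laspeyres(p0, p1, q0, q1) * paasche(p0, p1, q0, q1)) ** 0.5

p0, p1 = [2.0, 5.0, 1.0], [3.0, 4.0, 2.0]
q0, q1 = [10.0, 4.0, 20.0], [12.0, 5.0, 15.0]

# Time-reversal test: P(0,1) * P(1,0) should equal 1 for Fisher.
print(fisher(p0, p1, q0, q1) * fisher(p1, p0, q1, q0))

# Factor-reversal test: price index * quantity index should equal the value ratio.
value_ratio = sum(a * b for a, b in zip(p1, q1)) / sum(a * b for a, b in zip(p0, q0))
print(fisher(p0, p1, q0, q1) * fisher(q0, q1, p0, p1))
print(value_ratio)
```

The Laspeyres and Paasche indices on their own fail these tests; that is exactly why Fisher's index is called "ideal".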
General linear model, ordinary least squares and generalised least squares methods of estimation, problem of multicollinearity, consequences and solutions of multicollinearity, auto-correlation and
its consequences, heteroscedasticity of disturbances and its testing, tests for independence of disturbances, Zellner's seemingly unrelated regression equation model and its estimation, concept of
structure and model for simultaneous equations, problem of identification: rank and order conditions of identifiability, two-stage least squares method of estimation.
Present official statistical system in India relating to population, agriculture, industrial production, trade and prices, methods of collection of official statistics, their reliability and
limitation and the principal publications containing such statistics, various official agencies responsible for data collection and their main functions.
IV. Demography and Psychometry:
Demographic data from census, registration, NSS and other surveys, and their limitation and uses, definition, construction and uses of vital rates and ratios, measures of fertility, reproduction
rates, morbidity rate, standardized death rate, complete and abridged life tables, construction of life tables from vital statistics and census returns, uses of life tables, logistic and other
population growth curves, fitting a logistic curve, population projection, stable and quasi-stable populations, techniques in estimation of demographic parameters, morbidity and its measurement,
standard classification by cause of death, health surveys and use of hospital statistics.
Methods of standardisation of scales and tests, Z-scores, standard scores, T-scores, percentile scores, intelligence quotient and its measurement and uses, validity of test scores and its
determination, use of factor analysis and path analysis in psychometry. | {"url":"http://www.upscportal.com/civilservices/Statistics/Statistics-Main-Examination-Exams-in-India","timestamp":"2014-04-16T21:53:58Z","content_type":null,"content_length":"46742","record_id":"<urn:uuid:20d7f289-1ba5-4fc8-834b-f99336964385>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00375-ip-10-147-4-33.ec2.internal.warc.gz"} |
Show that T is surjective
November 19th 2011, 03:46 AM #1
Junior Member
Mar 2010
Show that T is surjective
Let $(X,d)$ be a compact metric space, and let $T: X \longrightarrow X$ be a continuous map satisfying the expansion property:
$d(T(x),T(y)) \geq d(x,y)$ for all $x,y \in X$.
Prove that T is surjective.
Hey I've difficulty starting this question. How do I go about doing it?
Thanks in advance.
Re: Show that T is surjective
my thought is this:
choose an open cover of ε-balls for X. since X is compact, it has a finite subcover. consider images under T of this subcover. show they form a cover of X.
(hint: for every U in our sub-cover, we have U contained in T(U) because....?)
Re: Show that T is surjective
Another way would be to prove the result by contradiction. Suppose that the point $u\in X$ is not in the range of T. The range of T is closed (by compactness) so there exists $\delta>0$ such that
$d(u,Tx)\geqslant\delta$ for all x in X.
Now consider the sequence $u,Tu,T^2u,T^3u,\ldots$. Use the non-contracting property of T to show that any two points in this sequence are at a distance at least $\delta$ apart. Therefore the
sequence cannot have a convergent subsequence, which contradicts the compactness of X.
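To spell out the separation step (a sketch, in the notation of the post above):

```latex
% Applying the expansion property m times, for 0 <= m < n:
d\bigl(T^{m}u,\,T^{n}u\bigr) \;\ge\; d\bigl(T^{m-1}u,\,T^{n-1}u\bigr)
  \;\ge\;\cdots\;\ge\; d\bigl(u,\,T^{\,n-m}u\bigr) \;\ge\; \delta,
% since T^{n-m}u lies in the range of T (as n - m >= 1) while
% d(u, Tx) >= \delta for all x. Hence the orbit (T^k u)_{k >= 0} is
% \delta-separated and admits no convergent subsequence.
```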
Re: Show that T is surjective
U is not necessarily contained in T(U). See for example X is the standard sphere, T(p)=-p is the antipodal map.
Re: Show that T is surjective
sorry deleted
Last edited by xxp9; November 19th 2011 at 06:21 PM.
Open Problems
If we are given n points in space, the Delaunay triangulation of which has t tetrahedra, how quickly can we find that triangulation? Chan, Snoeyink and Yap have an algorithm with running time O((n +
t) log^2 t), but this does not quite match the known lower bound of Omega(n log t + t).
Some more practical problems from Mac (meaning, a solution would likely involve an actual working system, although one might imagine theoretical results in these areas):
1. Hex meshing with quality approaching that which is producable by hand - i.e., by region decomposition controlled by an expert.
2. Intelligent meshing of features (if you tell the CAD system to put a certain type of feature on your object, that information should be used by the mesher).
3. Fast remeshing after a local change.
4. Mesh smoothing for high order elements with curved boundaries.
5. Problem dependent, black box meshing, in which the whole process of selecting a mesh type, performing mesh generation, applying a numerical algorithm, etc is automatically performed given some
specification of the problem to be solved.
6. Decomposition into "nice" 2-1/2D regions (generalized cylinders, meeting parallel to each other; one could then apply a planar mesh algorithm to the cylinder cross-section and cut horizontally to
get a good 3D mesh).
David Eppstein, Theory Group, Dept. Information & Computer Science, UC Irvine.
Last update: | {"url":"http://www.ics.uci.edu/~eppstein/280g/open.html","timestamp":"2014-04-21T09:37:53Z","content_type":null,"content_length":"13216","record_id":"<urn:uuid:796ba5cc-ba08-42ee-a255-faf1b1540e37>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gary, IN Statistics Tutor
Find a Gary, IN Statistics Tutor
...I also obtained my middle school math endorsement, with a solid background tutoring math, as well as teaching math. I worked as a middle school and high school substitute math instructor. I
would love to have the opportunity once more to aid in the growth of students' math skills.
16 Subjects: including statistics, reading, algebra 1, geometry
...I do pride myself on also knowing the history of when such mathematical concepts were developed and why they were developed. Since I have had many years of teaching from middle schools to
universities, I have been afforded the luxury of teaching many ages. I have taught in middle schools, high schools, community colleges, and universities.
12 Subjects: including statistics, calculus, geometry, algebra 2
...My BA in Physics and Math, as well as my MBA have helped me relate well to the bio-sciences, health-care and social sciences. The math is the same, and I have learned the "bio- terminology". I
have taught linear algebra at the college level, both at Lewis University and at Morraine Valley Community College.
11 Subjects: including statistics, physics, algebra 1, algebra 2
...I continued using prealgebra as I received my minor in mathematics at the University of Michigan. I can help make math fun and easy to learn! I have a minor in math from the University of
Michigan and have been tutoring students since I was in elementary school.
28 Subjects: including statistics, reading, English, writing
...I worked in small groups or individually with mostly 7th and 8th graders to help them keep up with the pace of the class. I also assisted them with homework problems. I am a current graduate
student in Statistics, so I have been through both introductory and advanced courses.
5 Subjects: including statistics, algebra 1, prealgebra, probability
Lester, PA Algebra 2 Tutor
Find a Lester, PA Algebra 2 Tutor
I graduated from Chestnut Hill College (Philadelphia, PA) with a degree in French Language & Literature with a minor in Art History. While there, I tutored several students in not only French but
also other subjects, particularly different math topics. I am comfortable tutoring students from the 7th grade forward.
33 Subjects: including algebra 2, English, French, physics
I have been teaching Algebra and middle school math for 4 years in Camden, NJ. My experience includes classroom teaching, after-school homework help, and one to one tutoring. I frequently work
with students far below grade level and close education gaps.
8 Subjects: including algebra 2, geometry, algebra 1, SAT math
...As a young graduate student, I had a probability professor who was a professional gambler and a world-class backgammon player. I have been fascinated with the subject ever since. The SAT is
the gold standard in college admissions testing and, in most cases, I discourage students from taking the ACT.
23 Subjects: including algebra 2, English, calculus, geometry
...Also, during the day, I stay at home with my two young daughters.I took a differential equations course in fall of 2007 at Rensselaer Polytechnic Institute. I received an A. I used these
topics in many chemical engineering courses after that.
25 Subjects: including algebra 2, chemistry, writing, geometry
I am teaching math, for over 20 years now, and was awarded four times as educator of the year. I was also mentor of the year twice. I have a variety of experience teaching not only in different
countries, but also teaching here in public school, private school, charter school, and adult continuing education school.
15 Subjects: including algebra 2, geometry, algebra 1, GED
Concepts or technique for Precalc self-study?
Go with the rigorous ones. You can easily learn precalc in class. Most students don't even use textbooks when learning precalc; it is that shallow. You will get hit hard in college if you don't know
how to write proofs, and as it is a difficult skill to develop, you would do wonders starting out now. University will ASSUME you can write basic proofs at the level of geometry or basic number
theory (see Niven).
I am not familiar with any of your books except Allendoerfer. Your free time should be spent on developing proof skills if you intend to pursue a math major. School will prepare you in terms of
calculation and technique. Other titles I recommend at your level are as follows:
How to Prove It - Velleman
Numbers: Rational and Irrational - Niven
Trigonometry - Gelfand
Geometry Revisited - Coxeter
Use the modern text. The main fault with an old text is that standard elementary topics change over time rather rapidly, and you won't get all that you need out of an old book, not to mention that
you'll learn a whole lot in excess that you'll never need because of narrow scope.
I'm not quite sure how you would make precalculus any more rigorous than it is in modern classrooms, except by making it more theoretical and less based on problem-solving. The thing is, with many
modern treatments, a theoretical derivation is often included, although not used by the teacher. I remember looking back at my calculus textbook a while ago and seeing proofs of all of the theorems,
and thinking to myself, "Where did all these proofs come from? We never did these in class!"
Yes, but he can save that for class. Modern texts tend to be a shallow coverage of a lot of topics, more suited towards engineers than mathies. I think I'd rather see him understand what a logarithm
is for instance than to have extra practice in performing calculations in it. And you can make precalc quite theoretical, studying various properties of functions such as odd/even and periodicity.
The nature of the questions in books like Gelfand for example is more akin to what you'd see in a book like Spivak. Modern precalc books assume you will be learning calculus out of stewart or | {"url":"http://www.physicsforums.com/showthread.php?t=298095","timestamp":"2014-04-16T22:15:51Z","content_type":null,"content_length":"43418","record_id":"<urn:uuid:06e35c2e-b3b6-4dd4-8243-f358dac20377>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00239-ip-10-147-4-33.ec2.internal.warc.gz"} |
Toronto Probability Seminar
Thursday, May 10, 2:10pm
Liliana Borcea
(Rice University)
Array Imaging in Random Media
In array imaging, we wish to find strong reflectors in a medium, given measurements of the time traces of the scattered echoes at a remote array of receivers. I will discuss array imaging in
cluttered media, modeled with random processes, in regimes with significant multipathing of the waves by the inhomogeneities in the clutter. In such regimes, the echoes measured at the array are
noisy and exhibit a lot of delay spread. This makes imaging difficult and the usual techniques give unreliable, statistically unstable results. I will present a coherent interferometric imaging
approach for random media, which exploits systematically the spatial and temporal coherence in the data to obtain statistically stable images. I will discuss the resolution of this method and its
statistical stability and I will illustrate its performance with numerical simulations.
Monday, April 23
Rowan Killip (UCLA)
From the cicular moment problem to random matrices
I will begin by reviewing some classical topics in analysis then segue into my recent work on random matrices.
Monday, April 16
Dan Romik (Bell Laboratories)
Gravitational allocation to Poisson points
An allocation rule for the standard Poisson point process in R^d is a translation-invariant way of allocating to the Poisson points mutually disjoint cells of volume 1 that cover almost all of R^d. I
will describe a new construction in dimensions 3 and higher of an allocation rule based on Newtonian gravitation: each Poisson point is thought of as a star of unit mass, and the cell allocated to a
star is its basin of attraction with respect to the flow induced by the total gravitational force exerted by all the stars. This allocation rule is efficient, in the sense that the distance a typical
point has to move is a random variable with exponentially decreasing tails.
The talk is based on joint work with Sourav Chatterjee, Ron Peled and Yuval Peres.
Monday, March 26, 2007, 4:10 pm
Thomas Bloom (University of Toronto):
Random Polynomials and (Pluri)-Potential Theory
I will report on results on the expected distribution of zeros of random polynomials in one and several (complex) variables. The results will involve concepts from potential and pluripotential theory.
In particular, a recent result (joint with B. Shiffman) showing that the expected distribution of the common zeros of m random Kac polynomials (i.e. polynomials with standard Gaussians as coefficients) in
m variables tends, as the degree increases, to the product of the angular measures on each of the m unit circles. This generalizes a classical result of Hammersley.
Monday, March 12
Márton Balázs (Technical University Budapest)
Order of current variance in the simple exclusion process
The simple exclusion process is one of the simplest stochastic interacting particle systems: particles try to perform nearest neighbor jumps on the integer line Z, but only succeed when the
destination site is not occupied by another particle. It is somewhat surprising that such a system shows very exotic, time^{1/3}-scaling properties when turning to these particles' current
fluctuations. Limiting distribution results have existed in this direction for the totally asymmetric case (particles only try to jump to their right neighboring site), and heavy combinatoric and
analytic tools were used to prove them.
By a joint work with T. Seppäläinen, we managed to prove this scaling (but not the limiting distribution) for the general nearest neighbor asymmetric case, with the use of purely probabilistic ideas.
I will introduce the process, define the objects we worked with in probabilistic coupling arguments, and summarize the method that led to the proof of the scaling.
(This work is related to recent results of Jeremy Quastel and Benedek Valkó.)
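The particle dynamics described in the abstract's first paragraph can be sketched in a few lines. This is a discrete-time caricature of the totally asymmetric case on a ring (the talk concerns the continuous-time process; the random-order sweep update here is a simplification):

```python
import random

def tasep_sweep(occ, rng, p=1.0):
    """One random-order sweep of a totally asymmetric simple exclusion
    process on a ring: a particle at site i tries to jump to i+1 with
    probability p, but succeeds only if the destination is empty."""
    n = len(occ)
    for i in rng.sample(range(n), n):
        j = (i + 1) % n
        if occ[i] == 1 and occ[j] == 0 and rng.random() < p:
            occ[i], occ[j] = 0, 1

rng = random.Random(2)
n = 100
occ = [1 if i < n // 2 else 0 for i in range(n)]   # step initial condition
for _ in range(50):
    tasep_sweep(occ, rng)
print(sum(occ))   # prints 50: the dynamics conserve the number of particles
```

Tracking the net number of jumps across a fixed bond over time would give the "current" whose t^{1/3} fluctuations the abstract discusses.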
Thursday, March 8, 2007, 4:10 pm,
Alan Hammond (Courant Institute)
Resonances in the cycle rooted spanning forest on a two-dimensional torus
Consider an n by m discrete torus with a directed graph structure, in which one edge, pointing north or east with probability one-half, independently, emanates from each vertex. The behaviour of the
cycle structure of this graph depends sensitively on the aspect ratio m/n of the torus. The expected total number of edges contained in cycles, for example, is peaked when m/n is close to a small
rational. This work, joint with Rick Kenyon, complements an earlier paper of Kenyon and Wilson, that analyses resonance among paths in a model that is equivalent to a honeycomb dimer model on a
discrete torus.
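The model in the abstract's first sentence is easy to simulate directly (the torus size and seed below are arbitrary choices):

```python
import random

def cycle_edge_count(n, m, rng):
    """Each vertex of an n-by-m torus gets one outgoing edge, pointing
    east or north with probability 1/2; return the number of edges that
    lie on cycles of the resulting functional graph."""
    succ = {}
    for x in range(n):
        for y in range(m):
            succ[(x, y)] = ((x + 1) % n, y) if rng.random() < 0.5 else (x, (y + 1) % m)
    # Iterating the image of succ at least #vertices times leaves exactly
    # the cycle vertices; each of them contributes one cycle edge.
    cur = set(succ)
    for _ in range(n * m):
        cur = {succ[v] for v in cur}
    return len(cur)

print(cycle_edge_count(12, 12, random.Random(3)))
```

Averaging this count over many samples while varying the aspect ratio m/n is one way to observe the resonance near small rationals that the abstract describes.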
Monday, February 26, 2007, 4:10 pm
Elena Kosygina (Baruch College and the CUNY Graduate Center)
Stochastic Homogenization of Hamilton-Jacobi-Bellman Equations
We consider a homogenization problem for Hamilton-Jacobi-Bellman equations in a stationary ergodic random media. After a brief review of the standard approach for periodic Hamiltonians, we shall
discuss the difficulties and current methods of stochastic homogenization for such equations and explain the connection with large deviations for diffusions in a random medium. This is a joint work
with F. Rezakhanlou and S.R.S. Varadhan.
Monday, February 12, 2007, 4:10 pm
Jeremy Quastel (University of Toronto)
White Noise and the Korteweg-de Vries Equation
In joint work with Benedek Valko (Toronto) we found that Gaussian white noise is an invariant measure for KdV on the circle. In this talk we will describe the relevant concepts, what the result means
both mathematically and physically, and give some ideas of the proof. (The preprint may be downloaded from here
Monday, February 5, 2007, 4:10 pm
Manjunath Krishnapur (University of Toronto)
Zeros of random analytic functions and Determinantal point processes
On each of the plane, the sphere and the unit disk, there is exactly a one-parameter family of Gaussian analytic functions whose zeros have isometry-invariant distributions (Sodin). Of these there is
only one whose zero set is a determinantal point process (Peres-Virag). By using Gaussian analytic functions as building blocks, we construct many non-Gaussian random analytic functions with
invariant zero sets. We pick out certain candidates among these, whose zero sets may be expected to be determinantal. We prove that this is indeed the case for a family of random polynomials on the
sphere, and partially prove the same for a family of random analytic functions on the unit disk. No prior knowledge of determinantal point processes or random analytic functions is necessary. These
results are from my thesis.
Monday, January 29, 2007, 14:10
Bálint Virág (University of Toronto)
Scaling Limits of Random Matrices
Recently, it has become clear that the sine and Airy point processes arising from random matrix eigenvalues play a fundamental role in probability theory, partly due to their connection to Riemann
zeta zeros and random permutations. I will describe recent work on the Stochastic Airy and Stochastic sine differential equations, which are shown to describe these point processes and can be thought
of as scaling limits of random matrices. This new approach resolves some open problems, e.g. it generalizes these point processes for all values of the parameter beta.
Wednesday, December 6, 2006, 15:10
Dimitris Cheliotis (University of Toronto)
Patterns for the 1-dimensional random walk in the random environment - a functional LIL
We start with a one dimensional random walk (or diffusion) in a Wiener-like environment. We look at its graph at different, increasing scales natural for it. What are the patterns that appear
repeatedly? We characterize them through a functional law of the iterated logarithm analogous to Strassen's result for Brownian motion and simple random walk.
The talk is based on joint work with Balint Virag.
Monday, November 27, 2006, 4:10 pm
Antal Járai (Carleton University)
Random walk on the incipient infinite cluster for oriented percolation in high dimensions
We consider simple random walk on the incipient infinite cluster for the spread-out model of oriented percolation in d+1 dimensions. For d > 6, we obtain bounds on exit times, transition
probabilities, and the range of the random walk, which establish that the spectral dimension of the incipient infinite cluster is 4/3, and thereby prove a version of the Alexander-Orbach conjecture
in this setting. The proof divides into two parts. One part establishes general estimates for simple random walk on an arbitrary infinite random graph, given suitable bounds on volume and effective
resistance for the random graph. A second part then provides these bounds on volume and effective resistance for the incipient infinite cluster in dimensions d > 6, by extending results about
critical oriented percolation obtained previously via the lace expansion.
Monday, November 20, 2006, 4:30 pm
Alexander Holroyd (University of British Columbia)
Bootstrap Percolation - a case study in theory versus experiment
Cellular automata arise naturally in the study of physical systems, and exhibit a seemingly limitless range of intriguing behaviour. Such models lend themselves naturally to computer simulation, but
rigorous analysis can be notoriously difficult, and can yield highly unexpected results. Bootstrap percolation is a very simple model for nucleation and growth which turns out to hold many surprises.
Sites in a square grid are initially declared "infected" independently with some fixed probability. Subsequently, healthy sites become infected if they have at least two infected neighbours, while
infected sites remain infected forever. The model undergoes a phase transition at a certain threshold whose asymptotic value differs from numerical predictions by more than a factor of two! This
discrepancy points to a previously unsuspected phenomenon called "crossover", and leads to further intriguing questions.
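The growth rule described above can be simulated directly. A minimal sketch on a finite grid with free boundary (the boundary convention is an assumption; the abstract concerns the square grid):

```python
import random

def bootstrap_final_density(n, p, rng):
    """2-neighbour bootstrap percolation on an n-by-n grid: sites start
    infected independently with probability p; a healthy site becomes
    infected once at least two orthogonal neighbours are infected.
    Returns the final fraction of infected sites."""
    inf = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    changed = True
    while changed:                      # iterate sweeps until no site changes
        changed = False
        for i in range(n):
            for j in range(n):
                if not inf[i][j]:
                    k = sum(inf[i + di][j + dj]
                            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= i + di < n and 0 <= j + dj < n)
                    if k >= 2:
                        inf[i][j] = True
                        changed = True
    return sum(map(sum, inf)) / (n * n)

print(bootstrap_final_density(40, 0.1, random.Random(4)))
```

Sweeping p for fixed n (or n for fixed p) shows the sharp transition between near-empty and fully infected final configurations; the abstract's point is that the asymptotic location of this threshold is far from what finite-size simulations suggest.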
Monday, November 13, 2006, 4:10 pm
Balázs Szegedy
(University of Toronto)
Limits of discrete structures and group invariant measures
An important branch of statistics studies networks (structures) that grow randomly according to some law. A natural question is whether there is a natural limit object for the process. We present a
group theoretic approach to this problem.
Monday, October 30, 2006, 4:10 pm
Bálint Tóth (Technical University Budapest)
Tagged particle diffusion in 1d Rayleigh-gas - old and new results
I will consider the M -> 0 limit for tagged particle diffusion in a 1-dimensional Rayleigh-gas, studied originally by Sinai and Soloveichik (1986) and by Szász and Tóth (1986). In this
limit we derive a new type of model for tagged paricle diffusion, with Calogero-Moser-Sutherland (i.e. inverse quadratic) interaction potential between the two central particles. Computer simulations
on this new model reproduce exactly the numerical value of the limiting variance obtained by Boldrighini, Frigio and Tognetti (2002). I will also present new bounds on the limiting variance of tagged
particle diffusion in (variants of) 1D Rayleigh gas which improve some bounds of Szász, Tóth (1986). The talk will be based on joint work of the following three authors: Péter Bálint, Bálint Tóth,
Péter Tóth.
Friday, October 27, 2006, 2:10pm
Bernard Shiffman (John Hopkins University)
Complex zeros of random multivariable polynomial systems
I will discuss the distribution of zeros of systems of independent Gaussian random polynomials in n complex variables. Results on the distribution of the number N(U) of zeros in a complex domain U of
a random polynomial of one complex variable were given in recent papers of Sodin-Tsirelson and Forrester-Honner. They showed that the variance of N(U) grows like the square root of the degree d, and
thus the number of zeros in U is "self-averaging" in the sense that its fluctuations are of smaller order than its typical values. A natural question is whether self-averaging occurs for zeros of
systems of n independent Gaussian random polynomials of n complex variables. To answer this question, I will give asymptotic formulas for the variance of the number of simultaneous zeros in a domain
U in C^n as the degree d of the polynomials goes to infinity. I will explain how "correlation currents" for zeros and complex potential theory are used to compute variances for complex zeros. This
talk involves joint work with Steve Zelditch.
Monday, October 16, 2006, 4:10 pm
Vladimir Vinogradov (Ohio University)
On Local Approximations For Two Classes of Distributions
We derive local approximations along with estimates of the remainders for two classes of integer-valued variables. One of them is comprised of Pólya-Aeppli distributions, while members of the other
class are the convolutions of a zero-modified geometric law. We also derive the closed-form representation for the probability function of the latter convolutions and investigate its properties. This
provides the distribution theory foundation for the studies on branching diffusions. Our techniques involve a Poisson mixture representation, Laplace's method and upper estimates in the local Poisson
theorem. The parallels with Gnedenko's method of accompanying infinitely divisible laws are established.
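The Poisson mixture representation mentioned above gives a direct sampler for the Pólya-Aeppli law as a compound Poisson sum of geometric summands. The parameterization below (Poisson(lam) many Geometric summands on {1, 2, ...} with failure parameter theta) is one common convention and is an assumption here:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplication method (fine for small lam).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def polya_aeppli(lam, theta, rng):
    """One draw from the Polya-Aeppli law via its compound-Poisson
    (Poisson mixture of geometrics) representation."""
    total = 0
    for _ in range(poisson(lam, rng)):
        g = 1
        while rng.random() < theta:   # geometric summand on {1, 2, ...}
            g += 1
        total += g
    return total

rng = random.Random(7)
sample = [polya_aeppli(2.0, 0.3, rng) for _ in range(10000)]
print(sum(sample) / len(sample))   # should be near lam / (1 - theta)
```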
Monday, October 2, 2006, 4:10 pm,
Omer Angel (University of Toronto)
Invasion Percolation on Trees
We consider the invasion percolation cluster (IPC) in a regular tree. We calculate the scaling limit of $r$-point functions, the volume at a given level and up to a level. While the power laws
governing the IPC are the same as for the incipient infinite cluster (IIC), the scaling functions differ. We also show that the IPC stochastically dominates the IIC. Given time I will discuss the
continuum scaling limit of the IPC.
Monday, September 25, 2006, 4:10 pm,
Paul Federbush (Ann Arbor)
A random walk on the permutation group, some formal long-time asymptotic expansions
We consider the group of permutations of the vertices of a lattice. A random walk is generated by unit steps that each interchange two nearest neighbor vertices of the lattice. We study the heat
equation on the permutation group, using the Laplacian associated to the random walk. At t = 0 we take as initial conditions a probability distribution concentrated at the identity. A natural
conjecture for the probability distribution at long times is that it is "approximately" a product of Gaussian distributions for each vertex. That is, each vertex diffuses independently of the others.
We obtain some formal asymptotic results in this direction. The problem arises in certain ways of treating the Heisenberg model in statistical mechanics.
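The random walk described above (the interchange process) is easy to simulate; here on a cycle of n sites, which is an assumption — the abstract only says "a lattice":

```python
import random

def interchange_walk(n, steps, rng):
    """Random walk on permutations generated by nearest-neighbour
    transpositions: at each step pick a uniform edge (i, i+1) of a
    cycle of n sites and swap the labels at its endpoints.
    Returns the final site of each label."""
    pos = list(range(n))    # pos[k] = current site of label k
    site = list(range(n))   # site[i] = label currently at site i
    for _ in range(steps):
        i = rng.randrange(n)
        j = (i + 1) % n
        a, b = site[i], site[j]
        site[i], site[j] = b, a
        pos[a], pos[b] = j, i
    return pos

pos = interchange_walk(50, 2000, random.Random(5))
print(sorted(pos) == list(range(50)))   # prints True: still a permutation
```

Collecting the displacements pos[k] - k over many runs is one way to probe the conjecture that each vertex diffuses approximately independently with a Gaussian profile.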
Monday, September 18, 2006, 4:10 pm,
Siva Athreya (Indian Statistical Institute, Bangalore)
Age-Dependent Superprocesses
In this talk I will discuss an age dependent branching particle system and its rescaled limit the super-process. The above systems are non-local in nature (i.e. the position of the offspring is not
the same as that of the parent) and some specific difficulties arise in this setting. We shall begin with a review of the literature, discuss the above difficulties and present some new observations.
Tuesday, September 5, 2006, 4:10pm
Wilfrid Kendall (Warwick)
Coupling all the Lévy stochastic areas of multidimensional Brownian motion
I will talk about how to construct a successful co-adapted coupling of two copies of an n-dimensional Brownian motion (B1, ... , Bn) while simultaneously coupling all corresponding copies of Lévy
stochastic areas.
Posts by
Total # Posts: 31
Explain why a rectangular prism composed of 2 unit cubes has 6 faces. How do its dimensions compare to a unit cube's?
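A quick way to check the comparison is to compute the dimensions and surface area directly — a Python sketch of my own, not part of the original question:

```python
def box_surface_area(l, w, h):
    return 2 * (l*w + l*h + w*h)

# A unit cube is 1 x 1 x 1; stacking two of them gives a 1 x 1 x 2 prism.
# Both solids have 6 faces, but two of the prism's faces are 1 x 2 rectangles.
cube = box_surface_area(1, 1, 1)    # 6 square units
prism = box_surface_area(1, 1, 2)   # 10, not 12: gluing hides 2 unit faces
```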
easy one
OK, so Jack has a bike that he got for 50 dollars less than the original price. How much did he pay for the bike? Write an equation and solve it.
Summer Creek
An isosceles triangle has legs that are each x inches long a base that is y inches long. The perimeter of this triangle is 38 inches. The base is 8 inches shorter than the length of a leg.
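Letting each leg be x and the base be y, the conditions give 2x + y = 38 and y = x - 8. Substituting gives 3x = 46, which comes out non-integer; a quick check with exact fractions (my own sketch):

```python
from fractions import Fraction

# 2x + (x - 8) = 38  ->  3x = 46
x = Fraction(46, 3)   # leg length, about 15.33 in
y = x - 8             # base length, about 7.33 in
assert 2 * x + y == 38
```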
a train starts from rest and accelerates uniformly until it has traveled 3.7 km and acquired a velocity of 30 m/s. The train then moves at a constant velocity of 30 m/s for 410 s. The train then slows down uniformly at 0.065 m/s^2 until it reaches a halt. What distance does t...
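The total distance splits into three phases; here is a small Python check, assuming the usual constant-acceleration relation v^2 = 2ad for the braking phase:

```python
v = 30.0              # m/s, cruising speed
d1 = 3700.0           # m, given for the acceleration phase
d2 = v * 410          # m, constant-velocity phase: 12,300 m
a = 0.065             # m/s^2, deceleration
d3 = v**2 / (2 * a)   # m, braking distance: about 6,923 m
total = d1 + d2 + d3  # about 22,923 m, i.e. roughly 22.9 km
```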
US History
Why did President Reagan aim to reduce government regulations?
US History
What theory was President Reagan's economic program based on?
US History
What were the effects of Reagans increased spending on US defense to counter Soviet threats?
Algebra 1
How exactly do you find/solve perfect square trinomials and factor them?
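A perfect square trinomial has the pattern a^2 + 2ab + b^2 = (a + b)^2: check that the first and last terms are squares and the middle term is twice the product of their roots. A numeric spot-check on an example of my own choosing:

```python
# Example: x^2 + 6x + 9. First and last terms are squares (x^2 and 3^2),
# and the middle term is 2 * x * 3, so it factors as (x + 3)^2.
for x in range(-5, 6):
    assert x**2 + 6*x + 9 == (x + 3)**2
```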
accounting please help
Phil Phoenix is paid monthly. For the month of January of the current year, he earned a total of $8,288. The FICA tax rate for social security is 6.2% and the FICA tax rate for Medicare is 1.45%. The
FUTA tax rate is 0.8%, and the SUTA tax rate is 5.4%. Both unemployment taxes...
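The FICA amounts follow directly from the rates on the gross pay. Note that FUTA and SUTA typically apply only up to a wage base; since the question is cut off, the sketch below assumes the common $7,000 base:

```python
gross = 8288.00
social_security = gross * 0.062    # 513.856 -> $513.86
medicare = gross * 0.0145          # 120.176 -> $120.18
# Assumption: FUTA/SUTA apply only to the first $7,000 of wages.
taxable = min(gross, 7000.00)
futa = taxable * 0.008             # $56.00
suta = taxable * 0.054             # $378.00
```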
5th Grade math
solve graphically, giving real roots to the nearest tenth: x^2 - 4x + 3 = 0
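The quadratic factors as (x - 1)(x - 3), so the graph crosses the x-axis at 1.0 and 3.0; a quadratic-formula check in Python:

```python
import math

a, b, c = 1, -4, 3
disc = b*b - 4*a*c                    # 16 - 12 = 4
r1 = (-b - math.sqrt(disc)) / (2*a)   # 1.0
r2 = (-b + math.sqrt(disc)) / (2*a)   # 3.0
```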
Katie is 3/5 of the way to Brianna's house. Larry is 7/10 of the way to Brianna's house. How much closer to Brianna's house is Larry?
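Putting both fractions over a common denominator of 10 answers this; exact arithmetic in Python:

```python
from fractions import Fraction

larry = Fraction(7, 10)
katie = Fraction(3, 5)      # = 6/10
closer_by = larry - katie   # 1/10: Larry is 1/10 closer
```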
Pre- Algebra
Can someone please explain how to do this problem: 7n-5=10n+13
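Collecting the n terms on one side solves it; a sketch with the check built in:

```python
# 7n - 5 = 10n + 13
# subtract 7n from both sides:  -5 = 3n + 13
# subtract 13 from both sides: -18 = 3n
n = -18 / 3                   # n = -6
assert 7*n - 5 == 10*n + 13   # both sides equal -47
```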
find the value of -18 + |-12|; show steps. find the value of -|23 - 14|; show steps.
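The bars mean absolute value, so evaluate inside them first; a quick check:

```python
a = -18 + abs(-12)   # |-12| = 12, so -18 + 12 = -6
b = -abs(23 - 14)    # |9| = 9, negated: -9
```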
An oceanographic research vessel comes across a volcano under the sea. The top of the volcano is 450 ft below the surface and the sea floor is at -6,700 ft relative to sea level. What is the height of the volcano? Show what you are doing to get your answer, and explain.
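The height is the difference of the two elevations; in Python:

```python
top = -450                 # ft relative to sea level
sea_floor = -6700          # ft relative to sea level
height = top - sea_floor   # -450 - (-6700) = 6250 ft
```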
which number in each pair is greater? how do you know? -31 and -42; -18 and 0
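On a number line the greater number is the one farther to the right; a quick check:

```python
print(max(-31, -42))   # -31: it lies to the right of -42
print(max(-18, 0))     # 0: any negative number is less than 0
```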
Ice scrapes and loosens rock particles. What is this an example of?
ok thanks soo much!
the prob is 19[7+w+(-2)]
i don't understand how this is supposed to be laid out and how to do it in order... can someone write out the problem so i understand how to answer it, thanks!
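For 19[7 + w + (-2)], combine the constants inside the brackets first, then distribute; a numeric check over a few values of w:

```python
# 19[7 + w + (-2)] = 19(w + 5) = 19w + 95
for w in range(-3, 4):
    assert 19 * (7 + w + (-2)) == 19*w + 95
```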
yup, that's right!!!
What is the critical z used to form an 85% confidence interval?
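An 85% interval leaves 15% split between the two tails, so the critical value is the standard normal quantile at 0.925; a check using Python 3.8+'s statistics module:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf(1 - 0.15 / 2)   # area 0.925 to the left
print(round(z, 2))                       # 1.44
```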
The new Twinkle bulb is being developed to last more than 1000 hours. A random sample of 100 of these new bulbs is selected from the production line. It was found that 48 lasted more than 1000 hours.
Find the point estimate for the population proportion, the margin of error fo...
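The point estimate is just the sample proportion. The margin of error depends on the confidence level, which is cut off above, so the sketch assumes 95% (z* = 1.96):

```python
import math

n, successes = 100, 48
p_hat = successes / n                            # 0.48
me = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)   # about 0.098 at 95%
```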
When using the substitution method to solve a nonlinear system of equations, you should first see if you can _____ one variable in one of the equations in the system.
criminal Justice
Punishment and sentencing have gone through various phases throughout the history of Western civilization. Discuss the concept of and rationale behind criminal punishment.
critical thinking
How does the writing process you read about in this class differ from the process you have used in the past? Which step in the writing process is easiest for you to complete? Which step is the
most difficult? How might you overcome this obstacle to become ...
math word problem
If there are 36 different flavors of ice cream and two types of cones are available, how many choices for a single scoop of ice cream on some type of cone do you have? Do I multiply 2 cones x 36 flavors = 72 single-scoop choices?
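Yes: by the multiplication (counting) principle, each flavor pairs with each cone type:

```python
flavors, cones = 36, 2
choices = flavors * cones   # 72 single-scoop combinations
```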
Thank you but I still don't understand
Apply the properties discussed in this section to simplify each of the following as much as possible. Show all work. I don't know how to do these problems: 4x - 2 - 3x and 4x - (2 - 3x)
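Both expressions simplify by combining like terms; the second also needs the minus sign distributed across the parentheses. A numeric spot-check:

```python
# 4x - 2 - 3x    -> (4 - 3)x - 2 = x - 2
# 4x - (2 - 3x)  -> 4x - 2 + 3x  = 7x - 2
for x in range(-3, 4):
    assert 4*x - 2 - 3*x == x - 2
    assert 4*x - (2 - 3*x) == 7*x - 2
```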
How do I find the k, c, V.S., and P.S. when I have a chart of monthly temps for a year?