Wadsworth, IL Algebra Tutor
Find a Wadsworth, IL Algebra Tutor
...I hope that I can be of service to anyone who requires aid with these disciplines. They offer life-long skills by teaching problem-solving techniques and numerical tools. I have a background in peer tutoring from my school years, helping in both Physics and Math. This was my major in college.
16 Subjects: including algebra 2, algebra 1, chemistry, calculus
I have spent the past 8 years working with students of all ages and abilities, most notably students with special learning needs. My experience ranges from teaching students with severe developmental delays to working with students of all abilities in a co-taught setting (one general ed...
16 Subjects: including algebra 1, reading, English, writing
...Since I obtained degrees in both Spanish Languages and Literature and Chemistry with an Emphasis in Biochemistry, I have explored various complex topics, which gave me insights that help students be successful. I also minored in Asian Studies. After graduating from Loyola University, I began tutoring in ACT Math/Science at Huntington Learning Center in Elgin.
26 Subjects: including algebra 1, algebra 2, chemistry, English
...I have had great success with my students improving their ACT and SAT math scores, some even achieving a perfect score. I have worked with middle school students who have tested into Calculus or beyond as freshmen. I have a good track record of students scoring 5 on the AP Calculus exam.
24 Subjects: including algebra 1, algebra 2, calculus, precalculus
...I volunteer with my local church in many different outreach programs, and the Bible serves as my number one source of guidance and direction. I have worked for 8 years as the Program
Administrator of an educational/vocational outreach program for a non-profit organization. It is my responsibility to counsel and consult with individuals on their desired educational and career
46 Subjects: including algebra 1, English, writing, reading
Related Wadsworth, IL Tutors
Wadsworth, IL Accounting Tutors
Wadsworth, IL ACT Tutors
Wadsworth, IL Algebra Tutors
Wadsworth, IL Algebra 2 Tutors
Wadsworth, IL Calculus Tutors
Wadsworth, IL Geometry Tutors
Wadsworth, IL Math Tutors
Wadsworth, IL Prealgebra Tutors
Wadsworth, IL Precalculus Tutors
Wadsworth, IL SAT Tutors
Wadsworth, IL SAT Math Tutors
Wadsworth, IL Science Tutors
Wadsworth, IL Statistics Tutors
Wadsworth, IL Trigonometry Tutors
How many circles of diameter d on average in area of size A?
January 31st 2013, 04:43 AM #1
May 2012
the Netherlands
How many circles of diameter d on average in area of size A?
I have an infinitely large plane. I am trying to calculate how many circles with a certain diameter d fit into any square of size SxS on the plane. I think this should be the maximum density.
So the question is NOT "How many circles can I fit into square of size x?", as it does not matter if the circles overlap the boundaries of the square.
(The setting is: how do I calculate how many people, who try to keep distance d between their cores (modelled as circles), can fit into an area of size 1? What is the maximum density?)
I have looked into circle packing but that seems to calculate slightly different things.
I think it's either
2 / sqrt(3 * d * d)
or
2 / (sqrt(3) * d * d).
Any help (even a search word for google!) would be appreciated!
Re: How many circles of diameter d on average in area of size A?
The densest packing is indeed the circle packing problem, and the ratio of circles to total area is $\frac {\pi}{\sqrt {12}}$
Both of your two alternative choices have a problem with dimensions. The first has units of 1/length, and the second 1/length^2, whereas the answer should be dimensionless.
Re: How many circles of diameter d on average in area of size A?
I don't understand.
The question is: how many circles with diameter d can fit into an area of 1 x 1? Shouldn't the answer always contain d?
circle packing example.bmp
In this situation, how many circles (so that could be 1 + 1 + 1 ... + .5 + .3 + .7) with diameter d fit into the red square of size 1 x 1?
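For what it's worth, hexagonal (densest) packing gives a concrete formula for both quantities being discussed here: the number of circle centers per unit area, which does depend on d, and the dimensionless covered-area fraction π/√12. A quick sketch in Python (my own check, not from the thread):

```python
import math

def circles_per_unit_area(d):
    # In a hexagonal packing, each circle center "owns" a hexagonal
    # cell of area (sqrt(3)/2) * d**2, so the density of centers is:
    return 2.0 / (math.sqrt(3) * d * d)

d = 0.1
n = circles_per_unit_area(d)            # circles per unit area (depends on d)
fraction = n * math.pi * (d / 2) ** 2   # covered area fraction (dimensionless)
print(n)         # ≈ 115.47
print(fraction)  # ≈ 0.9069, i.e. pi / sqrt(12)
```

So of the two candidate formulas above, 2 / (sqrt(3) * d * d) is the one with the right units for "circles per unit area".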
Calculus Please Help??
Posted by Robin on Friday, September 27, 2013 at 8:11pm.
Evaluate the integral (3x^2-16x-19)/(x^3-4x^2-3x+18)
• Calculus Please Help?? - bobpursley, Friday, September 27, 2013 at 8:14pm
Have you done the method of partial fractions yet?
• Calculus Please Help?? - Robin, Friday, September 27, 2013 at 8:24pm
Yes, I know how to get variables. I got b=-8, c=1, and a=2, but I don't know how to get an integral as an answer. I need help integrating 2 other problems I will post as well, if you can help me.
• Calculus Please Help?? - Robin, Friday, September 27, 2013 at 8:28pm
1. (-5x^2+10x-12)/(x-5)(x^2+4)
2. (-8x-28)/((x-2)(x+9))
I know how to integrate; I just don't know how to get the integral after finding a, b, and c, and how to proceed from that point.
• Calculus Please Help?? - Reiny, Friday, September 27, 2013 at 9:03pm
So you were able to use partial fractions to decompose it to
-8/(x-3)^2 + 2/(x-3) + 1/(x+2)
that was the hardest part, the rest is easy
isn't the integral of -8/(x-3)^2 equal to 8/(x-3) ?
as to the others, you should recognize the pattern of the derivative of a log function
recall that if y = ln (u)
then dy/dx = (du/dx) / u
so if we integrate
-8/(x-3)^2 + 1/(x+2) + 2/(x-3)
we get
8/(x-3) + ln(x+2) + 2ln(x-3) + constant
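A quick way to double-check a decomposition like this (my own verification, not part of the original reply) is to compare the original integrand with the partial-fraction sum at a few sample points:

```python
# Numeric sanity check of the decomposition above, standard library only:
# the original integrand should match the sum of the partial fractions.

def original(x):
    return (3*x**2 - 16*x - 19) / (x**3 - 4*x**2 - 3*x + 18)

def decomposed(x):
    return -8/(x - 3)**2 + 2/(x - 3) + 1/(x + 2)

for x in (0.5, 1.0, 4.0, 10.0):     # avoid the poles at x = 3 and x = -2
    assert abs(original(x) - decomposed(x)) < 1e-12
print("decomposition checks out")
```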
• Calculus Please Help?? - Reiny, Friday, September 27, 2013 at 9:09pm
For your 2nd part,
did you get the partial fraction breakdown of
-2x/(x^2+4) - 3/(x-5) from (-5x^2 + 10x - 12)/((x-5)(x^2+4)) ?
then your integral would be
-ln(x^2+4) - 3ln(x-5) + a constant
let me know what you get for your last question.
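The same numeric spot-check works for the second decomposition (again my own verification, not from the thread):

```python
# Compare the original integrand with its partial-fraction form
# at a few sample points away from the pole.

def original2(x):
    return (-5*x**2 + 10*x - 12) / ((x - 5) * (x**2 + 4))

def decomposed2(x):
    return -2*x / (x**2 + 4) - 3 / (x - 5)

for x in (0.0, 1.0, 2.0, 7.0):      # avoid the pole at x = 5
    assert abs(original2(x) - decomposed2(x)) < 1e-12
print("second decomposition checks out")
```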
The Purplemath Forums
A dog food manufacturer wants to advertise its products. A magazine charges $60 per ad and requires a minimum of three ads. A radio station charges $150 per commercial minute and requires at least 4 minutes.
Each magazine ad reaches 12,000 people while each commercial minute reaches 16,000 people. At most $900 can be spent on advertising.
1. Let "a" represent the number of magazine ads and "m" represent the number of commercial minutes. Write a system of inequalities that represents the advertising plan for the company.
2. How many ads and commercial minutes should be purchased to reach the most people? How many people would this be?
Having a problem for some reason with the first part of this, the system of inequalities, and it is messing with my mind!!!
Help please.
syraboy wrote: A dog food manufacturer wants to advertise its products. A magazine charges $60 per ad and requires a minimum of three ads. A radio station charges $150 per commercial minute and requires at least 4 minutes. Each magazine ad reaches 12,000 people while each commercial minute reaches 16,000 people. At most $900 can be spent on advertising.
1. Let "a" represent the number of magazine ads and "m" represent the number of commercial minutes. Write a system of inequalities that represents the advertising plan for the company.
Having a problem for some reason with the first part of this, the system of inequalities, and it is messing with my mind!
What inequality symbol is implied by "at least"? Given that "a" stands for "ads", what inequality is implied by "at least three ads"? Given that "m" stands for "minutes", what inequality is implied by "at least four minutes"? What expression would represent the total cost of "a" ads and "m" minutes? Given that expression, what inequality is implied by "at most $900"?
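Once the inequalities are set up (a ≥ 3, m ≥ 4, 60a + 150m ≤ 900, maximizing 12,000a + 16,000m), the graphical solution can be sanity-checked by brute force. This sketch is not part of the thread and assumes whole numbers of ads and minutes:

```python
# Brute-force check: a = magazine ads (>= 3), m = commercial minutes (>= 4),
# cost 60a + 150m <= 900, reach = 12000a + 16000m.
best = None
for a in range(3, 16):          # 60 * 16 = 960 > 900, so a <= 15
    for m in range(4, 7):       # 150 * 6 = 900 already uses the whole budget
        if 60 * a + 150 * m <= 900:
            reach = 12000 * a + 16000 * m
            if best is None or reach > best[0]:
                best = (reach, a, m)
print(best)   # (124000, 5, 4): five ads and four minutes reach 124,000 people
```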
Translating Word Problems
Tech Briefs
The Subsonic Potential-based Fluid Element in ADINA
Acoustic fluid elements are frequently used to model water in pressure vessels, tanks, etc. These elements model the mass of the water, and also wave propagation in the water. The acoustic fluid
elements are computationally very effective, since the acoustic fluid elements are linear.
One effect that is not contained in the acoustic fluid elements is the Bernoulli effect (½ρv^2 term in the Bernoulli equation). Therefore the acoustic fluid elements should not be used in regions
where this effect is important.
The subsonic potential-based fluid elements of ADINA can be used when the Bernoulli effect needs to be accounted for. These elements are similar to the acoustic fluid elements, except that this
effect is included. Since the Bernoulli effect is nonlinear, the subsonic potential-based fluid elements are nonlinear.
Discharge of water from a tank
As a simple illustrative example of a problem in which the Bernoulli effect is important, we consider the discharge of water from a tank, as shown in Figure 1 below:
Figure 1 Discharge of water from a tank. (a) Schematic (b) Mesh
This type of problem can easily be solved using the subsonic potential-based elements. A free surface potential-interface is placed at the top of the tank, and an inlet-outlet potential-interface is
placed at the valve. The outlet pressure is specified at the valve.
In the first run, the outlet pressure is set to the hydrostatic pressure and the gravity load is applied, all in one static load step. The pressure in the fluid is the expected hydrostatic pressure.
In the second (restart) run, the outlet pressure is suddenly lowered to zero and a dynamic analysis is performed. Figure 2 shows the results.
Figure 2 Discharge of water from a tank: Results
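As a rough back-of-envelope check on this kind of result (my own sketch, not taken from the tech brief): once the outlet pressure drops to zero, the quasi-steady Bernoulli equation predicts the classic Torricelli outflow speed v = sqrt(2gh) for a water column of height h above the valve:

```python
import math

def torricelli_speed(h, g=9.80665):
    # Quasi-steady Bernoulli between the free surface and the outlet:
    # p/rho + v**2/2 + g*z = const  ->  v = sqrt(2 * g * h)
    return math.sqrt(2 * g * h)

print(torricelli_speed(2.0))   # ≈ 6.26 m/s for a 2 m water column
```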
HDR blowdown experiment
As a practical example of a problem in which the Bernoulli effect is important, we consider the HDR blowdown experiment V31.1. An important problem in the analysis of light water nuclear reactors is
to compute the response of the core barrel and pressure vessel resulting from the loss of coolant in a pressurized water reactor. The HDR (Heissdampfreaktor) safety project in Germany was developed
to provide experimental verification for computer programs used in this type of analysis.
Figure 3 below shows a diagram of the FSI model used to simulate the HDR blowdown experiment.
Figure 3 HDR blowdown experiment: FSI model
Subsonic potential-based elements are used to model the fluid, and shell and solid elements are used to model the structure. In the first run, the fluid internal pressure is applied to the model in
one static load step. In the second (restart) run, the pressure at the pipe outlet is lowered to simulate a pipe break, and a dynamic analysis is performed.
The animation at the top of this page shows the analysis results. The left-hand side shows the pressure in the fluid and the right-hand side shows the magnified deformations of the structure. A very good comparison with experimental data is observed; see the reference.
Clearly the subsonic potential-based fluid element in ADINA is very effective in this type of analysis.
Fluid structure interaction, nuclear power plant, blowdown experiment, pipe break analysis, HDR vessel, subsonic potential-based fluid element, Bernoulli effect, acoustic fluid
• T. Sussman, J. Sundqvist, "Fluid-structure interaction analysis with a subsonic potential-based fluid formulation", Computers and Structures, 81 (2003), 949-962.
Recursive Problem.
07-07-2005 #1
Registered User
Join Date
Jul 2005
Recursive Problem.
Hello. I am trying to write a recursive program that will check partitions of an array to see if they add up to a certain number. A person gives the target number, the length of the array, and
the numbers in the array, and then the program will check combinations of the numbers in the array and print out how many possible combinations give the desired target number. For example, in a
set of 1, 4, and 5, with the target number 5, it will give back 2 solutions, because a partition of 1 and 4 adds up to 5, and a partition of 5 adds up to 5.
This is the code I have so far, but it's not giving the proper result, and I believe it has to do with the fact that I have it stop when the size of the array equals 0, which seems to force it to
quit searching before it has checked all the combinations. However, I don't know what condition to put in place in order to get it to stop at the right place and give the desired result. I've
tried doing many things but nothing seems to work and I would appreciate it if someone could help me figure out how to get this code finished.
Thanks for any help anyone can give me, and if no help can be given then I thank you for your time.
#include <stdio.h>

int NumberofPartitions (int *set, int size, int result, int parts);

int main ()
{
    int target;
    int n;
    int number;
    int length;
    int array[100];
    int *start = &array[0];
    int partitions = 0;

    printf("Enter target number: ");
    scanf("%d", &target);
    printf("Enter array length: ");
    scanf("%d", &length);
    printf("Enter numbers for the array: ");
    for (n = 0; n < length; n++) {
        scanf("%d", &number);
        array[n] = number;
    }

    partitions = NumberofPartitions(start, length, target, partitions);
    printf("Number of partitions equals %d.\n", partitions);
    return 0;
}

int NumberofPartitions(int *set, int size, int result, int parts)
{
    int guide;

    if (size == 0) {
        return parts;
    }
    else if (*set - result == 0) {
        parts = parts + 1;
    }
    else {
        guide = *set;
        size = size - 1;
        NumberofPartitions(set, size, result, parts);
        NumberofPartitions(set, size, (result - guide), parts);
    }
}
I'm not clear on exactly what you're trying to achieve, but if size is non-zero the function does what a friend of mine describes as "dropping off the end" --- there is no return statement in the
other cases.
One of the effects of that is: if you call NumberOfPartitions() from other code (eg in main()), it will always return the the value of parts.
Well, I'm not sure if it needs a return statement in the other cases. See NumberOfPartitions is supposed to be a recursive function.
For example, let's say the array given by the user is 1, 3, 4, 5 and the target number is 5. The NumberOfPartitions function is given the target number 5, the length 4, and the array of 1, 3, 4,
5 and it's supposed to find how many number combinations in the array create 5.
So the program starts out by assuming that 1, the first number in the array, is either part of the solution or not. So the pointer to the first number in the array is moved to the second number
in the array, and the function calls itself with the array 3, 4, 5, the length 3, and the target number 5 and also calls itself with the array 3, 4, 5, the length 3, and the target number 4. This
is because 1 is either part of a solution or not.
The function continues in this fashion until the array is "empty". The way it moves through the array and finds the solutions should look like a binary tree pattern.
Like, if we start with array 1 4 5 and 5 is the target number then the function should go through the array like this
(1 4 5 | 5)
    take 1 -> (4 5 | 4)
        take 4 -> (5 | 0)    leaves: -5, 0   <- solution
        skip 4 -> (5 | 4)    leaves: -1, 4
    skip 1 -> (4 5 | 5)
        take 4 -> (5 | 1)    leaves: -4, 1
        skip 4 -> (5 | 5)    leaves: 0, 5    <- solution
And end up with two solutions.
What I want is for the function to raise parts by 1 each time it finds a solution, and then when it goes through all the possibilites in the array, it unwinds and returns the parts. In the
example above, it would bring back 2.
I thought that if I kept track of the array length as it went through the function and subtracted 1 from it each time that it would unwind when the array length equals 0 and bring back the
answer, but it's unwinding too early, probably because it unwinds as soon as it hits 0 instead of going through all the possibilities when it reaches 0, and thus it's not giving the right answer.
I don't know how to fix this all up to make it do what I want it to do.
So, anyone want to take a shot at it? I'm still trying variations. Any help would be appreciated.
Never mind, I solved the problem myself. It turned out that the solution was to do this:
#include <stdio.h>

int NumberofPartitions (int *set, int size, int result, int parts);

int main ()
{
    int target;
    int n;
    int number;
    int length;
    int array[100];
    int *start = &array[0];
    int partitions = 0;

    printf("Enter target number: ");
    scanf("%d", &target);
    printf("Enter array length: ");
    scanf("%d", &length);
    printf("Enter numbers for the array: ");
    for (n = 0; n < length; n++) {
        scanf("%d", &number);
        array[n] = number;
    }

    partitions = NumberofPartitions(start, length, target, partitions);
    printf("Number of partitions equals %d.\n", partitions);
    return 0;
}

int NumberofPartitions(int *set, int size, int result, int parts)
{
    int guide;

    if (size == 0) {                    /* no numbers left to try */
        return parts;
    }
    else if (*set - result == 0) {      /* current number completes a partition */
        return parts + 1;
    }
    else {
        guide = *set;
        size = size - 1;
        /* try the rest of the array both without and with the current
           number (set + 1 moves on to the next element) */
        return parts + NumberofPartitions(set + 1, size, result, parts)
                     + NumberofPartitions(set + 1, size, (result - guide), parts);
    }
}
And it works fine now. It seems so obvious now that I think about it.
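For anyone wanting an independent check of the expected counts, here is a brute-force cross-check in Python (not part of the original thread); it counts every subset that sums to the target, which matches the examples discussed above:

```python
# Count subsets of `values` that sum to `target` by brute force
# over all combinations of every size.
from itertools import combinations

def count_subsets(values, target):
    return sum(1 for r in range(1, len(values) + 1)
                 for combo in combinations(values, r)
                 if sum(combo) == target)

print(count_subsets([1, 4, 5], 5))      # 2, matching the tree example above
print(count_subsets([1, 3, 4, 5], 5))   # subsets {1, 4} and {5} -> 2
```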
Who is the singer, and what is the title of this song, "lom pan long lai"? : Music & Celebrities
Moderator: Moderators
A Chinese song.
"Lom pan, long lai": that's all I remember of its lyrics (sung in the first verse).
It's an old song, but to this day I don't know its title or original singer.
If anyone knows, please share the information.
Thank you...
Posts: 642
Joined: Mon Dec 12, 2005 12:35 pm
Location: Sabah
Isn't that lou man kiong?
If I'm not mistaken, the original singer of that song is "Frances Yip" from Hong Kong.
This song is the theme song of the 1980 TVB series "Shanghai Bund".
Yes.. this is the theme song for the drama series The Bund, which was so popular in 1980, starring Chow Yun Fatt as Hui Man Keung..
This drama was great... makes me want to watch it again..
Huh... hearing this song reminds me of my kampung days.. hihihi. Yes tuan bilis, hopefully RTM will rebroadcast this drama. The film was really enjoyable.
Wow, thanks everybody!!!
After almost 30 years, I finally know the details of this song:
Song title: Shanghai Beach (上海灘)
Language: Cantonese
Melody and musical arrangement: Joseph Koo Ka-Fai (b. 1933)
Lyrics: James "Uncle Jim" Wong Jim (1940 – 2004)
Original singer: Frances Yip (actually I had already heard her songs back in the 1970s; she once sang "Rasa Sayang"!)
Lyrics and their translation:
It is the theme song of the classic Hong Kong period TV drama The Bund, first broadcast in 1980 with an original run of 25 episodes.
In 1996, in the film Shanghai Grand, the song was performed again by Andy Lau.
In the early 1980s I often heard this song performed at festive gatherings...
Studio version by Frances Yip
Andy Lau's version
When I was a kid I watched a bit of that show.. but I only vaguely remember it. What's for sure is Chow Yun Fatt was in it.
Such a great drama....
I don't really like it... but I've often heard it la...
I do like those old Chinese songs from back in the day... quite a lot of them, but... because of the language difference... I don't remember the titles.
Sorry if this is off topic...
Isn't it Andy Lau??
[spoiler]Torque, also called moment or moment of force (see the terminology below), is the tendency of a force to rotate an object about an axis,[1] fulcrum, or pivot. Just as a force is a push or a
pull, a torque can be thought of as a twist.
Loosely speaking, torque is a measure of the turning force on an object such as a bolt or a flywheel. For example, pushing or pulling the handle of a wrench connected to a nut or bolt produces a
torque (turning force) that loosens or tightens the nut or bolt.
The terminology for this concept is not straightforward: In the US, in physics it is usually called "torque" and in mechanical engineering it is called "moment".[2] However outside the US this
varies. In the UK for instance, most physicists will use the term "moment". In mechanical engineering, the term "torque" means something different,[3] described below. In this article the word
"torque" is always used to mean the same as "moment".
The symbol for torque is typically τ, the Greek letter tau. When it is called moment, it is commonly denoted M.
The magnitude of torque depends on three quantities: the force applied, the length of the lever arm[4] connecting the axis to the point of force application, and the angle between the force vector
and the lever arm. In symbols:
$\boldsymbol{\tau} = \mathbf{r}\times \mathbf{F}$
$\tau = rF\sin \theta$
τ is the torque vector and τ is the magnitude of the torque,
r is the displacement vector (a vector from the point from which torque is measured to the point where force is applied), and r is the length (or magnitude) of the lever arm vector,
F is the force vector, and F is the magnitude of the force,
× denotes the cross product,
θ is the angle between the force vector and the lever arm vector.
The length of the lever arm is particularly important; choosing this length appropriately lies behind the operation of levers, pulleys, gears, and most other simple machines involving a mechanical advantage.
The SI unit for torque is the newton metre (N·m). For more on the units of torque, see below.
See also: Couple (mechanics)
In mechanical engineering (unlike physics), the terms "torque" and "moment" are not interchangeable. "Moment" is the general term for the tendency of one or more applied forces to rotate an object
about an axis (the concept which in physics is called torque).[3] "Torque" is a special case of this: If the applied force vectors add to zero (i.e., their "resultant" is zero), then the forces are
called a "couple" and their moment is called a "torque".[3]
For example, a rotational force down a shaft, such as a turning screw-driver, forms a couple, so the resulting moment is called a "torque". By contrast, a lateral force on a beam produces a moment
(called a bending moment), but since the net force is nonzero, this bending moment is not called a "torque".
This article follows physics terminology by calling all moments by the term "torque", whether or not they are associated with a couple.
The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The rotational analogues of force, mass, and acceleration are torque, moment of inertia, and
angular acceleration, respectively.
Definition and relation to angular momentum
A particle is located at position r relative to its axis of rotation. When a force F is applied to the particle, only the perpendicular component F⊥ produces a torque. This torque τ = r × F has
magnitude τ = |r||F⊥| = |r||F|sinθ and is directed outward from the page.
Torque is defined about a point, not necessarily about an axis, as is noted in several books.
A force applied at a right angle to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. A force of three newtons applied two metres from the
fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. The direction of the torque can be determined by using the right hand grip rule: if the
fingers of the right hand curl in the direction of rotation and the thumb points along the axis of rotation, then the thumb also points in the direction of the torque.[5]
More generally, the torque on a particle (which has the position r in some reference frame) can be defined as the cross product:
$\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F},$
where r is the particle's position vector relative to the fulcrum, and F is the force acting on the particle. The magnitude τ of the torque is given by
$\tau = rF\sin\theta,$
where r is the distance from the axis of rotation to the particle, F is the magnitude of the force applied, and θ is the angle between the position and force vectors. Alternatively,
$\tau = rF_{\perp},$
where F⊥ is the amount of force directed perpendicularly to the position of the particle. Any force directed parallel to the particle's position vector does not produce a torque.[6]
It follows from the properties of the cross product that the torque vector is perpendicular to both the position and force vectors. It points along the axis of rotation, and its direction is
determined by the right-hand rule.[6]
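As a small numeric illustration of this cross-product definition (my own sketch, not part of the original article text):

```python
# Torque as tau = r x F, with magnitude |r||F| sin(theta); here the
# force is perpendicular to the lever arm, so |tau| = r * F.

def cross(r, F):
    return (r[1]*F[2] - r[2]*F[1],
            r[2]*F[0] - r[0]*F[2],
            r[0]*F[1] - r[1]*F[0])

r = (2.0, 0.0, 0.0)         # lever arm: 2 m along x
F = (0.0, 3.0, 0.0)         # force: 3 N along y (perpendicular to r)
tau = cross(r, F)
print(tau)                  # (0.0, 0.0, 6.0): 6 N·m about the z-axis
```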
The unbalanced torque on a body along axis of rotation determines the rate of change of the body's angular momentum,
$\boldsymbol{\tau} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}$
where L is the angular momentum vector and t is time. If multiple torques are acting on the body, it is instead the net torque which determines the rate of change of the angular momentum:
$\boldsymbol{\tau}_1 + \cdots + \boldsymbol{\tau}_n = \boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}.$
For rotation about a fixed axis,
$\mathbf{L} = I\boldsymbol{\omega},$
where I is the moment of inertia and ω is the angular velocity. It follows that
$\boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t} = \frac{\mathrm{d}(I\boldsymbol{\omega})}{\mathrm{d}t} = I\frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t} = I\boldsymbol{\alpha},$
where α is the angular acceleration of the body, measured in rad·s−2. This equation has the limitation that the torque equation must be written about the instantaneous axis of rotation or the center of mass, whatever the type of motion (pure translation, pure rotation, or mixed motion), with I the moment of inertia about that same point. If the body is in translational equilibrium, then the torque equation is the same about all points in the plane of motion.
Proof of the equivalence of definitions for a fixed instantaneous centre of rotation
The definition of angular momentum for a single particle is:
$\mathbf{L} = \mathbf{r} \times \mathbf{p}$
where "×" indicates the vector cross product and p is the particle's linear momentum. The time-derivative of this is:
$\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \frac{d\mathbf{p}}{dt} + \frac{d\mathbf{r}}{dt} \times \mathbf{p}.$
This result can easily be proven by splitting the vectors into components and applying the product rule. Now using the definitions of velocity v = dr/dt, acceleration a = dv/dt and linear momentum p = mv (if mass is constant),
$\frac{d\mathbf{L}}{dt} = \mathbf{r} \times m \frac{d\mathbf{v}}{dt} + \mathbf{v} \times m\mathbf{v}.$
The cross product of any vector with itself is zero, so the second term vanishes. Hence with the definition of force F = ma (Newton's 2nd law),
$\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}.$
Then by definition, torque τ = r × F.
If multiple forces are applied, Newton's second law instead reads Fnet = ma, and it follows that
$\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}_{\mathrm{net}} = \boldsymbol{\tau}_{\mathrm{net}}.$
The proof relies on the assumption that mass is constant; this is valid only in non-relativistic systems in which no mass is being ejected.
Torque has dimensions of force times distance. Official SI literature suggests using the unit newton metre (N·m) or the unit joule per radian.[7] The unit newton metre is properly denoted N·m or N m,[8] which avoids ambiguity with mN, millinewtons.
The joule, which is the SI unit for energy or work, is dimensionally equivalent to a newton metre, but it is not used for torque. Energy and torque are entirely different concepts, so the practice of
using different unit names for them helps avoid mistakes and misunderstandings.[7] The dimensional equivalence of these units, of course, is not simply a coincidence: A torque of 1 N·m applied
through a full revolution will require an energy of exactly 2π joules. Mathematically,
E = \tau \theta
where E is the energy, τ is the magnitude of the torque, and θ is the angle moved (in radians). This equation motivates the alternative unit name joule per radian.[7]
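A one-line check of E = τθ for the full-revolution case mentioned above:

```python
# Work done by a constant torque through an angle: E = tau * theta.
import math

def work_from_torque(tau, theta):
    """Energy in joules for torque in N*m and angle in radians."""
    return tau * theta

E = work_from_torque(tau=1.0, theta=2 * math.pi)  # one full revolution
print(E)  # 2*pi joules, about 6.283
```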
In Imperial (British) units, "pound-force-feet" (lbf·ft), "foot-pounds-force", "inch-pounds-force", and "ounce-force-inches" (ozf·in) are used; other non-SI units of torque include "metre-kilograms-force". For all these units, the word "force" is often left out,[9] for example abbreviating "pound-force-foot" to simply "pound-foot" (in which case it is implicit that the "pound" is pound-force and not pound-mass).
Sometimes one may see torque given in units that do not dimensionally make sense, for example g·cm. In these units, "g" should be understood as the force exerted by the weight of one gram at the surface of the Earth, where the standard acceleration of gravity is approximately 9.80665 m/s².
Special cases and other facts
Moment arm formula
Moment arm diagram
A very useful special case, often given as the definition of torque in fields other than physics, is as follows:
|\tau| = (\textrm{moment\ arm}) (\textrm{force}).
The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of
the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the
distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque, arising from a perpendicular force:
|\tau| = (\textrm{distance\ to\ centre}) (\textrm{force}).
For example, if a person applies a force of 10 N at the end of a spanner (wrench) that is 0.5 m long, the torque will be 5 N·m, assuming the force is applied perpendicular to the spanner.
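The spanner example can be sketched with τ = rF sin θ; the 30-degree case below is an added illustration, not part of the original example:

```python
# Torque magnitude tau = r * F * sin(theta), applied to the 0.5 m spanner.
import math

def torque_magnitude(r, force, theta):
    """Magnitude of torque for lever arm r (m), force (N), angle theta (rad)."""
    return r * force * math.sin(theta)

t_perp = torque_magnitude(0.5, 10.0, math.pi / 2)       # force perpendicular
t_30 = torque_magnitude(0.5, 10.0, math.radians(30))    # same force at 30 degrees
print(t_perp, t_30)  # 5.0 N*m and ~2.5 N*m
```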
The torque caused by the two opposing forces Fg and −Fg causes a change in the angular momentum L in the direction of that torque. This causes the top to precess.
Static equilibrium
For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and
vertical forces, the sum-of-forces requirement gives two equations, ΣH = 0 and ΣV = 0, and the torque requirement gives a third, Στ = 0. That is, to solve statically determinate equilibrium problems in two dimensions, we use three equations.
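As a sketch of the three-equation procedure, consider a hypothetical simply supported beam (all numbers are illustrative):

```python
# Solving a statically determinate 2-D problem with the three equations
# sum(H) = 0, sum(V) = 0, sum(tau) = 0: a 2 m beam on two end supports
# carrying a 100 N load 0.5 m from the left end (hypothetical numbers).

span = 2.0        # m, distance between the supports
load = 100.0      # N, downward
load_pos = 0.5    # m, measured from the left support

# sum(tau) = 0 about the left support: R_right * span - load * load_pos = 0
R_right = load * load_pos / span
# sum(V) = 0: R_left + R_right - load = 0
R_left = load - R_right
# sum(H) = 0 is satisfied trivially here (no horizontal forces act)

print(R_left, R_right)  # 75.0 and 25.0 newtons
```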
Net force versus torque
When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same
regardless of your point of reference. If the net force \mathbf{F} is not zero, and \boldsymbol{\tau}_1 is the torque measured from \mathbf{r}_1, then the torque measured from \mathbf{r}_2 is
\boldsymbol{\tau}_2 = \boldsymbol{\tau}_1 + (\mathbf{r}_1 - \mathbf{r}_2) \times \mathbf{F}.
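A numerical check of the reference-point formula, with an arbitrarily chosen force and reference points:

```python
# Check of tau2 = tau1 + (r1 - r2) x F for a single applied force.
# The force and reference points are arbitrary illustrative values.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return tuple(ai - bi for ai, bi in zip(a, b))

p = (2.0, 1.0, 0.0)    # point where the force is applied
F = (0.0, 4.0, 0.0)    # the force
r1 = (0.0, 0.0, 0.0)   # first reference point
r2 = (1.0, -1.0, 0.0)  # second reference point

tau1 = cross(sub(p, r1), F)
tau2 = cross(sub(p, r2), F)
shifted = tuple(t + d for t, d in zip(tau1, cross(sub(r1, r2), F)))
print(tau2, shifted)  # identical vectors
```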
Machine torque
Torque curve of a motorcycle ("BMW K 1200 R 2005"). The horizontal axis is the speed (in rpm) that the crankshaft is turning, and the vertical axis is the torque (in newton metres) that the engine is
capable of providing at that speed.
Torque is part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by the rotational speed of its axis. Internal-combustion engines produce
useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). The varying torque output over that range can be measured with a dynamometer, and
shown as a torque curve. The peak of that torque curve occurs somewhat below the overall power peak. The torque peak cannot, by definition, appear at higher rpm than the power peak.
Understanding the relationship between torque, power and engine speed is vital in automotive engineering, concerned as it is with transmitting power from the engine through the drive train to the
wheels. Power is a function of torque and engine speed. The gearing of the drive train must be chosen appropriately to make the most of the motor's torque characteristics. Power at the drive wheels
is equal to engine power less mechanical losses regardless of any gearing between the engine and drive wheels.
Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints).
Reciprocating steam engines can start heavy loads from zero RPM without a clutch.
Relationship between torque, power and energy
If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through a rotational distance, it is doing work. Mathematically, for rotation
about a fixed axis through the center of mass,
W = \int_{\theta_1}^{\theta_2} \tau\ \mathrm{d}\theta,
where W is work, τ is torque, and θ1 and θ2 represent (respectively) the initial and final angular positions of the body.[10] It follows from the work-energy theorem that W also represents the change
in the rotational kinetic energy Krot of the body, given by
K_{\mathrm{rot}} = \tfrac{1}{2}I\omega^2,
where I is the moment of inertia of the body and ω is its angular speed.[10]
Power is the work per unit time, given by
P = \boldsymbol{\tau} \cdot \boldsymbol{\omega},
where P is power, τ is torque, ω is the angular velocity, and · represents the scalar product.
Mathematically, the equation may be rearranged to compute torque for a given power output. Note that the power injected by the torque depends only on the instantaneous angular speed – not on whether
the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on the
instantaneous speed – not on the resulting acceleration, if any).
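A minimal sketch of P = τ · ω with hypothetical values, taking both vectors along the rotation axis:

```python
# Power as the scalar product P = tau . omega (hypothetical values).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

tau = (0.0, 0.0, 50.0)     # N*m, torque along the rotation axis
omega = (0.0, 0.0, 10.0)   # rad/s, angular velocity along the same axis
P = dot(tau, omega)
print(P)  # 500.0 watts
```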
In practice, this relationship can be observed in power stations which are connected to a large electrical power grid. In such an arrangement, the generator's angular speed is fixed by the grid's
frequency, and the power output of the plant is determined by the torque applied to the generator's axis of rotation.
Consistent units must be used. For metric SI units power is watts, torque is newton metres and angular speed is radians per second (not rpm and not revolutions per second).
Also, the unit newton metre is dimensionally equivalent to the joule, which is the unit of energy. However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned
to a scalar.
Conversion to other units
A conversion factor may be necessary when using different units of power, torque, or angular speed. For example, if rotational speed (revolutions per time) is used in place of angular speed (radians
per time), we multiply by a factor of 2π radians per revolution.
\mbox{power} = \mbox{torque} \times 2 \pi \times \mbox{rotational speed}
Adding units:
\mbox{power (W)} = \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rps)}
Expressing the rotational speed in revolutions per minute (dividing by 60 seconds per minute) and the power in kilowatts (dividing by 1000 watts per kilowatt) gives the following.
\mbox{power (kW)} = \frac{ \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rpm)}} {60000}
where rotational speed is in revolutions per minute (rpm).
Some people (e.g. American automotive engineers) use horsepower (imperial mechanical) for power, foot-pounds (lbf·ft) for torque and rpm for rotational speed. This results in the formula changing to:
\mbox{power (hp)} = \frac{ \mbox{torque(lbf}\cdot\mbox{ft)} \times 2 \pi \times \mbox{rotational speed (rpm)} }{33000}.
The constant (33,000 ft·lbf/min) changes with the definition of the horsepower; for example, using metric horsepower, it becomes approximately 32,550.
Use of other units (e.g. BTU/h for power) would require a different custom conversion factor.
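The SI conversion formulas above can be sketched as small helper functions (the 200 N·m at 3000 rpm operating point is hypothetical):

```python
# Converting torque and rotational speed to power, per the formulas above.
import math

def power_watts(torque_nm, rps):
    """power (W) = torque (N*m) * 2*pi * rotational speed (rev/s)."""
    return torque_nm * 2 * math.pi * rps

def power_kw(torque_nm, rpm):
    """power (kW) = torque (N*m) * 2*pi * rpm / 60000."""
    return torque_nm * 2 * math.pi * rpm / 60000

# 200 N*m at 3000 rpm (i.e. 50 rev/s), a hypothetical operating point:
w = power_watts(200, 50)
kw = power_kw(200, 3000)
print(w, kw)  # ~62832 W, ~62.8 kW
```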
Derivation
For a rotating object, the linear speed at the circumference is the product of the radius and the angular speed: linear speed = radius × angular speed. By definition, linear distance = linear speed × time = radius × angular speed × time.
By the definition of torque: torque=force × radius. We can rearrange this to determine force=torque ÷ radius. These two values can be substituted into the definition of power:
\mbox{power} = \frac{\mbox{force} \times \mbox{linear distance}}{\mbox{time}} = \frac{\left(\frac{\mbox{torque}}{r}\right) \times (r \times \mbox{angular speed} \times t)}{t} = \mbox{torque} \times \mbox{angular speed}.
The radius r and time t have dropped out of the equation. However, the angular speed must be in radians per unit time, by the direct relationship between linear speed and angular speed assumed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit time, the linear speed and distance in the above derivation must be multiplied by 2π, giving:
\mbox{power}=\mbox{torque} \times 2 \pi \times \mbox{rotational speed}. \,
If torque is in lbf·ft and rotational speed in revolutions per minute, the above equation gives power in ft·lbf/min. The horsepower form of the equation is then derived by applying the conversion
factor 33000 ft·lbf/min per horsepower:
\mbox{power} = \mbox{torque} \times 2\pi \times \mbox{rotational speed} \cdot \frac{\mbox{ft}\cdot\mbox{lbf}}{\mbox{min}} \times \frac{\mbox{horsepower}}{33000 \cdot \frac{\mbox{ft}\cdot\mbox{lbf}}{\mbox{min}}} \approx \frac{\mbox{torque} \times \mbox{RPM}}{5252}
because 5252.113122... = \frac {33000} {2 \pi}. \,
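A quick check of the horsepower constant and the resulting rule of thumb (the 300 lbf·ft figure is an arbitrary example):

```python
# The horsepower form: power (hp) ~= torque (lbf*ft) * rpm / 5252.
import math

def power_hp(torque_lbft, rpm):
    """Power in mechanical horsepower from torque in lbf*ft and speed in rpm."""
    return torque_lbft * 2 * math.pi * rpm / 33000

c = 33000 / (2 * math.pi)     # ~5252.113, the constant in the denominator
hp = power_hp(300, 5252)      # torque and power curves numerically cross near 5252 rpm
print(c, hp)
```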
Principle of moments
The Principle of Moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name) states that the sum of torques due to several forces applied to a single
point is equal to the torque due to the sum (resultant) of the forces. Mathematically, this follows from:
(\mathbf{r}\times\mathbf{F}_1) + (\mathbf{r}\times\mathbf{F}_2) + \cdots = \mathbf{r}\times(\mathbf{F}_1+\mathbf{F}_2 + \cdots).
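The principle of moments is simply the distributivity of the cross product over addition, which a numerical check illustrates (the forces below are arbitrary):

```python
# Numeric check of the principle of moments (Varignon's theorem):
# the sum of r x F_i equals r x (sum of F_i). Arbitrary example forces.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

r = (1.0, 2.0, 0.0)
forces = [(3.0, 0.0, 0.0), (0.0, -1.0, 2.0), (1.0, 1.0, 1.0)]

sum_of_torques = tuple(sum(cross(r, F)[i] for F in forces) for i in range(3))
resultant = tuple(sum(F[i] for F in forces) for i in range(3))
torque_of_resultant = cross(r, resultant)
print(sum_of_torques, torque_of_resultant)  # identical vectors
```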
Torque multiplier
A torque multiplier is a gearbox that works on the principle of epicyclic gearing: the torque applied at the input is multiplied by the gear ratio and transmitted to the output, so a large torque can be delivered with minimal effort.
Source: Wikipedia
Guntap...
If you follow this explanation, your head will spin just trying to think it through... put simply, torque is the engine's strength to pull the body of the car along.
Torque, also called moment or moment of force (see the terminology below), is the tendency of a force to rotate an object about an axis,[1] fulcrum, or pivot. Just as a force is a push or a pull, a
torque can be thought of as a twist.
Loosely speaking, torque is a measure of the turning force on an object such as a bolt or a flywheel. For example, pushing or pulling the handle of a wrench connected to a nut or bolt produces a
torque (turning force) that loosens or tightens the nut or bolt.
The terminology for this concept is not straightforward: In the US, in physics it is usually called "torque" and in mechanical engineering it is called "moment".[2] However outside the US this
varies. In the UK for instance, most physicists will use the term "moment". In mechanical engineering, the term "torque" means something different,[3] described below. In this article the word
"torque" is always used to mean the same as "moment".
The symbol for torque is typically τ, the Greek letter tau. When it is called moment, it is commonly denoted M.
The magnitude of torque depends on three quantities: the force applied, the length of the lever arm[4] connecting the axis to the point of force application, and the angle between the force vector
and the lever arm. In symbols:
\boldsymbol \tau = \mathbf{r}\times \mathbf{F}\,\!
\tau = rF\sin \theta\,\!
τ is the torque vector and τ is the magnitude of the torque,
r is the displacement vector (a vector from the point from which torque is measured to the point where force is applied), and r is the length (or magnitude) of the lever arm vector,
F is the force vector, and F is the magnitude of the force,
× denotes the cross product,
θ is the angle between the force vector and the lever arm vector.
The length of the lever arm is particularly important; choosing this length appropriately lies behind the operation of levers, pulleys, gears, and most other simple machines involving a mechanical
The SI unit for torque is the newton metre (N·m). For more on the units of torque, see below.
1 Terminology
2 History
3 Definition and relation to angular momentum
3.1 Proof of the equivalence of definitions for a fixed instantaneous centre of rotation
4 Units
5 Special cases and other facts
5.1 Moment arm formula
5.2 Static equilibrium
5.3 Net force versus torque
6 Machine torque
7 Relationship between torque, power and energy
7.1 Conversion to other units
7.2 Derivation
8 Principle of moments
9 Torque multiplier
10 See also
11 References
12 External links
See also: Couple (mechanics)
In mechanical engineering (unlike physics), the terms "torque" and "moment" are not interchangeable. "Moment" is the general term for the tendency of one or more applied forces to rotate an object
about an axis (the concept which in physics is called torque).[3] "Torque" is a special case of this: If the applied force vectors add to zero (i.e., their "resultant" is zero), then the forces are
called a "couple" and their moment is called a "torque".[3]
For example, a rotational force down a shaft, such as a turning screw-driver, forms a couple, so the resulting moment is called a "torque". By contrast, a lateral force on a beam produces a moment
(called a bending moment), but since the net force is nonzero, this bending moment is not called a "torque".
This article follows physics terminology by calling all moments by the term "torque", whether or not they are associated with a couple.
The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The rotational analogues of force, mass, and acceleration are torque, moment of inertia, and
angular acceleration, respectively.
Definition and relation to angular momentum
A particle is located at position r relative to its axis of rotation. When a force F is applied to the particle, only the perpendicular component F⊥ produces a torque. This torque τ = r × F has
magnitude τ = |r||F⊥| = |r||F|sinθ and is directed outward from the page.
Torque is defined about a point not specifically about axis as mentioned in several books.
A force applied at a right angle to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. A force of three newtons applied two metres from the
fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. The direction of the torque can be determined by using the right hand grip rule: if the
fingers of the right hand curl in the direction of rotation and the thumb points along the axis of rotation, then the thumb also points in the direction of the torque.[5]
More generally, the torque on a particle (which has the position r in some reference frame) can be defined as the cross product:
\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F},
where r is the particle's position vector relative to the fulcrum, and F is the force acting on the particle. The magnitude τ of the torque is given by
\tau = rF\sin\theta,\!
where r is the distance from the axis of rotation to the particle, F is the magnitude of the force applied, and θ is the angle between the position and force vectors. Alternatively,
\tau = rF_{\perp},
where F⊥ is the amount of force directed perpendicularly to the position of the particle. Any force directed parallel to the particle's position vector does not produce a torque.[6]
It follows from the properties of the cross product that the torque vector is perpendicular to both the position and force vectors. It points along the axis of rotation, and its direction is
determined by the right-hand rule.[6]
The unbalanced torque on a body along axis of rotation determines the rate of change of the body's angular momentum,
\boldsymbol{\tau} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}
where L is the angular momentum vector and t is time. If multiple torques are acting on the body, it is instead the net torque which determines the rate of change of the angular momentum:
\boldsymbol{\tau}_1 + \cdots + \boldsymbol{\tau}_n = \boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}.
For rotation about a fixed axis,
\mathbf{L} = I\boldsymbol{\omega},
where I is the moment of inertia and ω is the angular velocity. It follows that
\boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t} = \frac{\mathrm{d}(I\boldsymbol{\omega})}{\mathrm{d}t} = I\frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t} = I\boldsymbol
where α is the angular acceleration of the body, measured in rad·s−2.This equation has limitation that torque equation is to be only written about instantaneous axis of rotation or center of mass for
any type of motion-either motion is pure translation,pure rotation or mixed motion.I=Moment of inertia about point about which torque is written(either about instantaneous axis of rotation or center
of mass only). If body is in translatory equilibrium then torque equation is same about all points in plane of motion.
Proof of the equivalence of definitions for a fixed instantaneous centre of rotation
The definition of angular momentum for a single particle is:
\mathbf{L} = \mathbf{r} \times \mathbf{p}
where "×" indicates the vector cross product and p is the particle's linear momentum. The time-derivative of this is:
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \frac{d\mathbf{p}}{dt} + \frac{d\mathbf{r}}{dt} \times \mathbf{p}.
This result can easily be proven by splitting the vectors into components and applying the product rule. Now using the definitions of velocity v = dr/dt, acceleration a = dv/dt and linear momentum p
= mv (if mass is constant),
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times m \frac{d\mathbf{v}}{dt} + \mathbf{v} \times m\mathbf{v}.
The cross product of any vector with itself is zero, so the second term vanishes. Hence with the definition of force F = ma (Newton's 2nd law),
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}.
Then by definition, torque τ = r × F.
If multiple forces are applied, Newton's second law instead reads Fnet = ma, and it follows that
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}_{\mathrm{net}} = \boldsymbol{\tau}_{\mathrm{net}}.
The proof relies on the assumption that mass is constant; this is valid only in non-relativistic systems in which no mass is being ejected.
Torque has dimensions of force times distance. Official SI literature suggests using the unit newton metre (N·m) or the unit joule per radian.[7] The unit newton metre is properly denoted N·m or N m.
[8] This avoids ambiguity with mN, millinewtons.
The joule, which is the SI unit for energy or work, is dimensionally equivalent to a newton metre, but it is not used for torque. Energy and torque are entirely different concepts, so the practice of
using different unit names for them helps avoid mistakes and misunderstandings.[7] The dimensional equivalence of these units, of course, is not simply a coincidence: A torque of 1 N·m applied
through a full revolution will require an energy of exactly 2π joules. Mathematically,
E= \tau \theta\
where E is the energy, τ is magnitude of the torque, and θ is the angle moved (in radians). This equation motivates the alternate unit name joules per radian.[7]
In British unit, "pound-force-feet" (lbf x ft), "foot-pounds-force", "inch-pounds-force", "ounce-force-inches" (oz x in) are used, and other non-SI units of torque includes "metre-kilograms-force".
For all these units, the word "force" is often left out,[9] for example abbreviating "pound-force-foot" to simply "pound-foot" (in this case, it would be implicit that the "pound" is pound-force and
not pound-mass).
Sometimes one may see torque given units that don't dimensionally make sense. For example: g x cm . In these units, g should be understood as the force given by the weight of 1 gram at the surface of
the earth. The surface of the earth is understood to have an average acceleration of gravity (approx. 9.80665 m/sec2).
Special cases and other facts
Moment arm formula
Moment arm diagram
A very useful special case, often given as the definition of torque in fields other than physics, is as follows:
|\tau| = (\textrm{moment\ arm}) (\textrm{force}).
The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of
the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the
distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque, arising from a perpendicular force:
|\tau| = (\textrm{distance\ to\ centre}) (\textrm{force}).
For example, if a person places a force of 10 N on a spanner (wrench) which is 0.5 m long, the torque will be 5 N m, assuming that the person pulls the spanner by applying force perpendicular to the
The torque caused by the two opposing forces Fg and −Fg causes a change in the angular momentum L in the direction of that torque. This causes the top to precess.
Static equilibrium
For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and
vertical forces, the sum of the forces requirement is two equations: ΣH = 0 and ΣV = 0, and the torque a third equation: Στ = 0. That is, to solve statically determinate equilibrium problems in
two-dimensions, we use three equations.
Net force versus torque
When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same
regardless of your point of reference. If the net force \mathbf{F} is not zero, and \mathbf{\tau}_1 is the torque measured from \mathbf{r}_1, then the torque measured from \mathbf{r}_2 is \mathbf{\
tau}_2 = \mathbf{\tau}_1 + (\mathbf{r}_1 - \mathbf{r}_2) \times \mathbf{F}
Machine torque
Torque curve of a motorcycle ("BMW K 1200 R 2005"). The horizontal axis is the speed (in rpm) that the crankshaft is turning, and the vertical axis is the torque (in Newton metres) that the engine is
capable of providing at that speed.
Torque is part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by its rotational speed of the axis. Internal-combustion engines produce
useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). The varying torque output over that range can be measured with a dynamometer, and
shown as a torque curve. The peak of that torque curve occurs somewhat below the overall power peak. The torque peak cannot, by definition, appear at higher rpm than the power peak.
Understanding the relationship between torque, power and engine speed is vital in automotive engineering, concerned as it is with transmitting power from the engine through the drive train to the
wheels. Power is a function of torque and engine speed. The gearing of the drive train must be chosen appropriately to make the most of the motor's torque characteristics. Power at the drive wheels
is equal to engine power less mechanical losses regardless of any gearing between the engine and drive wheels.
Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints).
Reciprocating steam engines can start heavy loads from zero RPM without a clutch.
Relationship between torque, power and energy
If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through a rotational distance, it is doing work. Mathematically, for rotation
about a fixed axis through the center of mass,
W = \int_{\theta_1}^{\theta_2} \tau\ \mathrm{d}\theta,
where W is work, τ is torque, and θ1 and θ2 represent (respectively) the initial and final angular positions of the body.[10] It follows from the work-energy theorem that W also represents the change
in the rotational kinetic energy Krot of the body, given by
K_{\mathrm{rot}} = \tfrac{1}{2}I\omega^2,
where I is the moment of inertia of the body and ω is its angular speed.[10]
Power is the work per unit time, given by
P = \boldsymbol{\tau} \cdot \boldsymbol{\omega},
where P is power, τ is torque, ω is the angular velocity, and · represents the scalar product.
Mathematically, the equation may be rearranged to compute torque for a given power output. Note that the power injected by the torque depends only on the instantaneous angular speed – not on whether
the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on the
instantaneous speed – not on the resulting acceleration, if any).
In practice, this relationship can be observed in power stations which are connected to a large electrical power grid. In such an arrangement, the generator's angular speed is fixed by the grid's
frequency, and the power output of the plant is determined by the torque applied to the generator's axis of rotation.
Consistent units must be used. For metric SI units power is watts, torque is newton metres and angular speed is radians per second (not rpm and not revolutions per second).
Also, the unit newton metre is dimensionally equivalent to the joule, which is the unit of energy. However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned
to a scalar.
Conversion to other units
A conversion factor may be necessary when using different units of power, torque, or angular speed. For example, if rotational speed (revolutions per time) is used in place of angular speed (radians
per time), we multiply by a factor of 2π radians per revolution.
\mbox{power} = \mbox{torque} \times 2 \pi \times \mbox{rotational speed}
Adding units:
\mbox{power (W)} = \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rps)}
Dividing on the left by 60 seconds per minute and by 1000 watts per kilowatt gives us the following.
\mbox{power (kW)} = \frac{ \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rpm)}} {60000}
where rotational speed is in revolutions per minute (rpm).
Some people (e.g. American automotive engineers) use horsepower (imperial mechanical) for power, foot-pounds (lbf·ft) for torque and rpm for rotational speed. This results in the formula changing to:
\mbox{power (hp)} = \frac{ \mbox{torque(lbf}\cdot\mbox{ft)} \times 2 \pi \times \mbox{rotational speed (rpm)} }{33000}.
The constant below in, ft·lbf/min, changes with the definition of the horsepower; for example, using metric horsepower, it becomes ~32,550.
Use of other units (e.g. BTU/h for power) would require a different custom conversion factor.
For a rotating object, the linear distance covered at the circumference in a radian of rotation is the product of the radius with the angular speed. That is: linear speed = radius × angular speed. By
definition, linear distance=linear speed × time=radius × angular speed × time.
By the definition of torque: torque=force × radius. We can rearrange this to determine force=torque ÷ radius. These two values can be substituted into the definition of power:
\mbox{power} = \frac{\mbox{force} \times \mbox{linear distance}}{\mbox{time}}=\frac{\left(\frac{\mbox{torque}}{\displaystyle{r}}\right) \times (r \times \mbox{angular speed} \times t)} {t} = \mbox
{torque} \times \mbox{angular speed}.
The radius r and time t have dropped out of the equation. However angular speed must be in radians, by the assumed direct relationship between linear speed and angular speed at the beginning of the
derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation to give:
\mbox{power}=\mbox{torque} \times 2 \pi \times \mbox{rotational speed}. \,
If torque is in lbf·ft and rotational speed in revolutions per minute, the above equation gives power in ft·lbf/min. The horsepower form of the equation is then derived by applying the conversion
factor 33000 ft·lbf/min per horsepower:
\mbox{power} = \mbox{torque } \times\ 2 \pi\ \times \mbox{ rotational speed} \cdot \frac{\mbox{ft}\cdot\mbox{lbf}}{\mbox{min}} \times \frac{\mbox{horsepower}}{33000 \cdot \frac{\mbox{ft }\cdot\mbox{
lbf}}{\mbox{min}} } \approx \frac {\mbox{torque} \times \mbox{RPM}}{5252}
because 5252.113122... = \frac {33000} {2 \pi}. \,
Principle of moments
The Principle of Moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name) states that the sum of torques due to several forces applied to a single
point is equal to the torque due to the sum (resultant) of the forces. Mathematically, this follows from:
(\mathbf{r}\times\mathbf{F}_1) + (\mathbf{r}\times\mathbf{F}_2) + \cdots = \mathbf{r}\times(\mathbf{F}_1+\mathbf{F}_2 + \cdots).
Torque multiplier
A torque multiplier is a gear box, which works on the principle of epicyclic gearing. The given load at the input gets multiplied as per the multiplication factor and transmitted to the output,
thereby achieving greater load with minimal effort.
sumber wikipediaGuntap...
kalau ikut ni penerangan pening jg tu kepala mo pikir...sinang cerita torque kekuatan tu injin tarik tu badan kereta
Torque, also called moment or moment of force (see the terminology below), is the tendency of a force to rotate an object about an axis,[1] fulcrum, or pivot. Just as a force is a push or a pull, a
torque can be thought of as a twist.
Loosely speaking, torque is a measure of the turning force on an object such as a bolt or a flywheel. For example, pushing or pulling the handle of a wrench connected to a nut or bolt produces a
torque (turning force) that loosens or tightens the nut or bolt.
The terminology for this concept is not straightforward: In the US, in physics it is usually called "torque" and in mechanical engineering it is called "moment".[2] However outside the US this
varies. In the UK for instance, most physicists will use the term "moment". In mechanical engineering, the term "torque" means something different,[3] described below. In this article the word
"torque" is always used to mean the same as "moment".
The symbol for torque is typically τ, the Greek letter tau. When it is called moment, it is commonly denoted M.
The magnitude of torque depends on three quantities: the force applied, the length of the lever arm[4] connecting the axis to the point of force application, and the angle between the force vector
and the lever arm. In symbols:
\boldsymbol \tau = \mathbf{r}\times \mathbf{F}\,\!
\tau = rF\sin \theta\,\!
τ is the torque vector and τ is the magnitude of the torque,
r is the displacement vector (a vector from the point from which torque is measured to the point where force is applied), and r is the length (or magnitude) of the lever arm vector,
F is the force vector, and F is the magnitude of the force,
× denotes the cross product,
θ is the angle between the force vector and the lever arm vector.
The length of the lever arm is particularly important; choosing this length appropriately lies behind the operation of levers, pulleys, gears, and most other simple machines involving a mechanical advantage.
The SI unit for torque is the newton metre (N·m). For more on the units of torque, see below.
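The two formulas above can be checked against each other numerically. A Python sketch (NumPy assumed) with a 0.5 m lever arm and a perpendicular 10 N force:

```python
import numpy as np

r = np.array([0.5, 0.0, 0.0])   # lever arm: 0.5 m along x
F = np.array([0.0, 10.0, 0.0])  # 10 N force along y (perpendicular to r)

tau = np.cross(r, F)            # torque vector, in N·m
tau_mag = np.linalg.norm(tau)

# Magnitude agrees with r*F*sin(theta); here theta = 90 degrees, so sin(theta) = 1.
theta = np.arccos(r @ F / (np.linalg.norm(r) * np.linalg.norm(F)))
assert np.isclose(tau_mag, np.linalg.norm(r) * np.linalg.norm(F) * np.sin(theta))
assert np.allclose(tau, [0.0, 0.0, 5.0])  # points along z by the right-hand rule
```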
1 Terminology
2 History
3 Definition and relation to angular momentum
3.1 Proof of the equivalence of definitions for a fixed instantaneous centre of rotation
4 Units
5 Special cases and other facts
5.1 Moment arm formula
5.2 Static equilibrium
5.3 Net force versus torque
6 Machine torque
7 Relationship between torque, power and energy
7.1 Conversion to other units
7.2 Derivation
8 Principle of moments
9 Torque multiplier
10 See also
11 References
12 External links
See also: Couple (mechanics)
In mechanical engineering (unlike physics), the terms "torque" and "moment" are not interchangeable. "Moment" is the general term for the tendency of one or more applied forces to rotate an object
about an axis (the concept which in physics is called torque).[3] "Torque" is a special case of this: If the applied force vectors add to zero (i.e., their "resultant" is zero), then the forces are
called a "couple" and their moment is called a "torque".[3]
For example, a rotational force down a shaft, such as a turning screw-driver, forms a couple, so the resulting moment is called a "torque". By contrast, a lateral force on a beam produces a moment
(called a bending moment), but since the net force is nonzero, this bending moment is not called a "torque".
This article follows physics terminology by calling all moments by the term "torque", whether or not they are associated with a couple.
The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The rotational analogues of force, mass, and acceleration are torque, moment of inertia, and
angular acceleration, respectively.
Definition and relation to angular momentum
A particle is located at position r relative to its axis of rotation. When a force F is applied to the particle, only the perpendicular component F⊥ produces a torque. This torque τ = r × F has
magnitude τ = |r||F⊥| = |r||F|sinθ and is directed outward from the page.
Torque is defined about a point, not necessarily about an axis, as noted in several books.
A force applied at a right angle to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. A force of three newtons applied two metres from the
fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. The direction of the torque can be determined by using the right hand grip rule: if the
fingers of the right hand curl in the direction of rotation and the thumb points along the axis of rotation, then the thumb also points in the direction of the torque.[5]
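The lever example above reduces to simple arithmetic; a quick check in plain Python:

```python
# Equal torques from different force/lever-arm combinations.
tau_a = 3.0 * 2.0   # 3 N applied 2 m from the fulcrum
tau_b = 1.0 * 6.0   # 1 N applied 6 m from the fulcrum
assert tau_a == tau_b == 6.0  # both are 6 N·m
```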
More generally, the torque on a particle (which has the position r in some reference frame) can be defined as the cross product:
\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F},
where r is the particle's position vector relative to the fulcrum, and F is the force acting on the particle. The magnitude τ of the torque is given by
\tau = rF\sin\theta,\!
where r is the distance from the axis of rotation to the particle, F is the magnitude of the force applied, and θ is the angle between the position and force vectors. Alternatively,
\tau = rF_{\perp},
where F⊥ is the amount of force directed perpendicularly to the position of the particle. Any force directed parallel to the particle's position vector does not produce a torque.[6]
It follows from the properties of the cross product that the torque vector is perpendicular to both the position and force vectors. It points along the axis of rotation, and its direction is
determined by the right-hand rule.[6]
The unbalanced torque on a body along its axis of rotation determines the rate of change of the body's angular momentum,
\boldsymbol{\tau} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}
where L is the angular momentum vector and t is time. If multiple torques are acting on the body, it is instead the net torque which determines the rate of change of the angular momentum:
\boldsymbol{\tau}_1 + \cdots + \boldsymbol{\tau}_n = \boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}.
For rotation about a fixed axis,
\mathbf{L} = I\boldsymbol{\omega},
where I is the moment of inertia and ω is the angular velocity. It follows that
\boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t} = \frac{\mathrm{d}(I\boldsymbol{\omega})}{\mathrm{d}t} = I\frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t} = I\boldsymbol{\alpha},
where α is the angular acceleration of the body, measured in rad·s−2. This equation has a limitation: it may only be written about the instantaneous axis of rotation or the centre of mass, whatever the type of motion (pure translation, pure rotation, or mixed motion), with I the moment of inertia about the point about which the torque is taken. If the body is in translational equilibrium, then the torque equation is the same about all points in the plane of motion.
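As a worked example of τ_net = Iα, consider spinning up a uniform disc with a constant torque (a hypothetical scenario; the numbers are illustrative):

```python
import math

m = 2.0      # kg, disc mass
R = 0.3      # m, disc radius
I = 0.5 * m * R**2   # moment of inertia of a uniform disc about its axis
tau = 0.9            # N·m, applied torque

alpha = tau / I      # rad/s^2, from tau_net = I * alpha
assert math.isclose(alpha, 10.0)

# Angular speed after 2 s, starting from rest: omega = alpha * t
omega = alpha * 2.0
assert math.isclose(omega, 20.0)
```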
Proof of the equivalence of definitions for a fixed instantaneous centre of rotation
The definition of angular momentum for a single particle is:
\mathbf{L} = \mathbf{r} \times \mathbf{p}
where "×" indicates the vector cross product and p is the particle's linear momentum. The time-derivative of this is:
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \frac{d\mathbf{p}}{dt} + \frac{d\mathbf{r}}{dt} \times \mathbf{p}.
This result can easily be proven by splitting the vectors into components and applying the product rule. Now using the definitions of velocity v = dr/dt, acceleration a = dv/dt and linear momentum p
= mv (if mass is constant),
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times m \frac{d\mathbf{v}}{dt} + \mathbf{v} \times m\mathbf{v}.
The cross product of any vector with itself is zero, so the second term vanishes. Hence with the definition of force F = ma (Newton's 2nd law),
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}.
Then by definition, torque τ = r × F.
If multiple forces are applied, Newton's second law instead reads Fnet = ma, and it follows that
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}_{\mathrm{net}} = \boldsymbol{\tau}_{\mathrm{net}}.
The proof relies on the assumption that mass is constant; this is valid only in non-relativistic systems in which no mass is being ejected.
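The result dL/dt = r × F can also be checked numerically for a particle under a constant force, using a central finite difference of L = r × p (a Python sketch, NumPy assumed; the initial conditions are arbitrary):

```python
import numpy as np

m = 1.5                          # kg
r0 = np.array([1.0, 0.0, 0.0])   # initial position
v0 = np.array([0.0, 2.0, 0.0])   # initial velocity
F = np.array([0.0, 0.0, 3.0])    # constant force
a = F / m

def state(t):
    """Position and linear momentum at time t under constant acceleration."""
    r = r0 + v0 * t + 0.5 * a * t**2
    v = v0 + a * t
    return r, m * v

t, h = 0.7, 1e-6
L_plus = np.cross(*state(t + h))   # L = r x p
L_minus = np.cross(*state(t - h))
dL_dt = (L_plus - L_minus) / (2 * h)

r_t, _ = state(t)
assert np.allclose(dL_dt, np.cross(r_t, F), atol=1e-4)  # dL/dt = r x F
```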
Torque has dimensions of force times distance. Official SI literature suggests using the unit newton metre (N·m) or the unit joule per radian.[7] The unit newton metre is properly denoted N·m or N m.
[8] This avoids ambiguity with mN, millinewtons.
The joule, which is the SI unit for energy or work, is dimensionally equivalent to a newton metre, but it is not used for torque. Energy and torque are entirely different concepts, so the practice of
using different unit names for them helps avoid mistakes and misunderstandings.[7] The dimensional equivalence of these units, of course, is not simply a coincidence: A torque of 1 N·m applied
through a full revolution will require an energy of exactly 2π joules. Mathematically,
E = \tau \theta,
where E is the energy, τ is magnitude of the torque, and θ is the angle moved (in radians). This equation motivates the alternate unit name joules per radian.[7]
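The "2π joules per revolution" statement follows directly from E = τθ; a one-line check in Python:

```python
import math

# Work done by a constant 1 N·m torque over one full revolution: E = tau * theta.
tau = 1.0                 # N·m
theta = 2 * math.pi       # one revolution, in radians
E = tau * theta
assert math.isclose(E, 2 * math.pi)  # exactly 2*pi joules, as stated above
```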
In British units, "pound-force-feet" (lbf·ft), "foot-pounds-force", "inch-pounds-force", and "ounce-force-inches" (ozf·in) are used; other non-SI units of torque include "metre-kilograms-force". For all these units, the word "force" is often left out,[9] for example abbreviating "pound-force-foot" to simply "pound-foot" (in this case, it is implicit that the "pound" is pound-force and not pound-mass).
Sometimes torque is given in units that do not make sense dimensionally, for example g·cm. In these units, g should be understood as the force exerted by the weight of one gram at the surface of the Earth, where the standard acceleration of gravity is approximately 9.80665 m/s².
Special cases and other facts
Moment arm formula
Moment arm diagram
A very useful special case, often given as the definition of torque in fields other than physics, is as follows:
|\tau| = (\textrm{moment\ arm}) (\textrm{force}).
The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of
the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the
distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque, arising from a perpendicular force:
|\tau| = (\textrm{distance\ to\ centre}) (\textrm{force}).
For example, if a person applies a force of 10 N to a spanner (wrench) which is 0.5 m long, the torque will be 5 N·m, assuming that the person pulls the spanner by applying the force perpendicular to the spanner.
The torque caused by the two opposing forces Fg and −Fg causes a change in the angular momentum L in the direction of that torque. This causes the top to precess.
Static equilibrium
For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and vertical forces, the force requirement gives two equations, ΣH = 0 and ΣV = 0, and the torque requirement gives a third, Στ = 0. That is, to solve statically determinate equilibrium problems in two dimensions, three equations are used.
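As a worked example of these equations, consider the support reactions of a simply supported beam (a hypothetical setup: a 4 m span carrying a 100 N load 1 m from the left support):

```python
import math

span = 4.0   # m, distance between the two supports
W = 100.0    # N, applied load
x = 1.0      # m, distance of the load from the left support

# Sum of torques about the left support: R_right * span - W * x = 0
R_right = W * x / span
# Sum of vertical forces: R_left + R_right - W = 0
R_left = W - R_right

assert math.isclose(R_right, 25.0)
assert math.isclose(R_left, 75.0)
# Consistency check: the net torque about the right support is also zero.
assert math.isclose(R_left * span - W * (span - x), 0.0)
```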
Net force versus torque
When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same
regardless of your point of reference. If the net force \mathbf{F} is not zero, and \boldsymbol{\tau}_1 is the torque measured from \mathbf{r}_1, then the torque measured from \mathbf{r}_2 is
\boldsymbol{\tau}_2 = \boldsymbol{\tau}_1 + (\mathbf{r}_1 - \mathbf{r}_2) \times \mathbf{F}.
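The point-shift formula can be verified numerically (a Python sketch, NumPy assumed; the vectors are arbitrary):

```python
import numpy as np

r_app = np.array([2.0, 1.0, 0.0])   # where the force is applied
F = np.array([0.0, 5.0, 0.0])       # nonzero net force

r1 = np.array([0.0, 0.0, 0.0])      # first reference point
r2 = np.array([1.0, -1.0, 0.0])     # second reference point

tau1 = np.cross(r_app - r1, F)
tau2 = np.cross(r_app - r2, F)

# The two measurements differ by exactly (r1 - r2) x F.
assert np.allclose(tau2, tau1 + np.cross(r1 - r2, F))
```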
Machine torque
Torque curve of a motorcycle ("BMW K 1200 R 2005"). The horizontal axis is the speed (in rpm) that the crankshaft is turning, and the vertical axis is the torque (in Newton metres) that the engine is
capable of providing at that speed.
Torque is part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by the rotational speed of its axis. Internal-combustion engines produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). The varying torque output over that range can be measured with a dynamometer and shown as a torque curve. The peak of that torque curve occurs somewhat below the overall power peak; the torque peak cannot, by definition, appear at a higher rpm than the power peak.
Understanding the relationship between torque, power and engine speed is vital in automotive engineering, concerned as it is with transmitting power from the engine through the drive train to the
wheels. Power is a function of torque and engine speed. The gearing of the drive train must be chosen appropriately to make the most of the motor's torque characteristics. Power at the drive wheels
is equal to engine power less mechanical losses regardless of any gearing between the engine and drive wheels.
Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints).
Reciprocating steam engines can start heavy loads from zero RPM without a clutch.
Relationship between torque, power and energy
If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through a rotational distance, it is doing work. Mathematically, for rotation
about a fixed axis through the center of mass,
W = \int_{\theta_1}^{\theta_2} \tau\ \mathrm{d}\theta,
where W is work, τ is torque, and θ1 and θ2 represent (respectively) the initial and final angular positions of the body.[10] It follows from the work-energy theorem that W also represents the change
in the rotational kinetic energy Krot of the body, given by
K_{\mathrm{rot}} = \tfrac{1}{2}I\omega^2,
where I is the moment of inertia of the body and ω is its angular speed.[10]
Power is the work per unit time, given by
P = \boldsymbol{\tau} \cdot \boldsymbol{\omega},
where P is power, τ is torque, ω is the angular velocity, and · represents the scalar product.
Mathematically, the equation may be rearranged to compute torque for a given power output. Note that the power injected by the torque depends only on the instantaneous angular speed – not on whether
the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on the
instantaneous speed – not on the resulting acceleration, if any).
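For the common case where the torque and angular velocity vectors are aligned, P = τ·ω reduces to a scalar product; a quick Python computation (illustrative values):

```python
import math

tau = 200.0                      # N·m
rpm = 3000.0
omega = rpm * 2 * math.pi / 60   # convert rpm to rad/s
P = tau * omega                  # watts

assert math.isclose(P, 200.0 * 3000.0 * 2 * math.pi / 60)
assert round(P / 1000, 1) == 62.8  # about 62.8 kW
```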
In practice, this relationship can be observed in power stations which are connected to a large electrical power grid. In such an arrangement, the generator's angular speed is fixed by the grid's
frequency, and the power output of the plant is determined by the torque applied to the generator's axis of rotation.
Consistent units must be used. For metric SI units power is watts, torque is newton metres and angular speed is radians per second (not rpm and not revolutions per second).
Also, the unit newton metre is dimensionally equivalent to the joule, which is the unit of energy. However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned
to a scalar.
Conversion to other units
A conversion factor may be necessary when using different units of power, torque, or angular speed. For example, if rotational speed (revolutions per time) is used in place of angular speed (radians
per time), we multiply by a factor of 2π radians per revolution.
\mbox{power} = \mbox{torque} \times 2 \pi \times \mbox{rotational speed}
Adding units:
\mbox{power (W)} = \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rps)}
Dividing on the left by 60 seconds per minute and by 1000 watts per kilowatt gives us the following.
\mbox{power (kW)} = \frac{ \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rpm)}} {60000}
where rotational speed is in revolutions per minute (rpm).
Some people (e.g. American automotive engineers) use horsepower (imperial mechanical) for power, foot-pounds (lbf·ft) for torque and rpm for rotational speed. This results in the formula changing to:
\mbox{power (hp)} = \frac{ \mbox{torque(lbf}\cdot\mbox{ft)} \times 2 \pi \times \mbox{rotational speed (rpm)} }{33000}.
The constant 33,000 ft·lbf/min changes with the definition of the horsepower; for example, using metric horsepower, it becomes approximately 32,550.
Use of other units (e.g. BTU/h for power) would require a different custom conversion factor.
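The two conversion formulas above can be wrapped as small helper functions (a Python sketch; the function names are illustrative):

```python
import math

def power_kw(torque_nm, rpm):
    """power (kW) = torque (N·m) * 2*pi * rpm / 60000"""
    return torque_nm * 2 * math.pi * rpm / 60000

def power_hp(torque_lbf_ft, rpm):
    """power (hp) = torque (lbf·ft) * 2*pi * rpm / 33000"""
    return torque_lbf_ft * 2 * math.pi * rpm / 33000

# 100 N·m at 3000 rpm is 10*pi kW, about 31.4 kW.
assert math.isclose(power_kw(100, 3000), 10 * math.pi)
# 100 lbf·ft at 5252 rpm is almost exactly 100 hp: 5252 rpm is the well-known
# crossover speed at which torque (lbf·ft) and power (hp) curves intersect.
assert round(power_hp(100, 5252), 1) == 100.0
```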
For a rotating object, the linear distance covered at the circumference in a radian of rotation is the product of the radius with the angular speed. That is: linear speed = radius × angular speed. By
definition, linear distance=linear speed × time=radius × angular speed × time.
By the definition of torque: torque=force × radius. We can rearrange this to determine force=torque ÷ radius. These two values can be substituted into the definition of power:
\mbox{power} = \frac{\mbox{force} \times \mbox{linear distance}}{\mbox{time}}=\frac{\left(\frac{\mbox{torque}}{\displaystyle{r}}\right) \times (r \times \mbox{angular speed} \times t)} {t} = \mbox{torque} \times \mbox{angular speed}.
The radius r and time t have dropped out of the equation. However, angular speed must be in radians per unit time, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation.[/spoiler]
Ever since BN-UMNO started going on about sex (sodomy, sex videos, and most recently a sex audio recording involving Mat Sabu), to the point that foreign countries have dubbed Malaysia a "Sex Nation", my head has been dirtied by BN-UMNO's filth.
SF Leaders
Posts: 35060
Joined: Sat Apr 05, 2008 2:30 am
Location: KOTA KINABALU-RANAU
orangaslisabah wrote: Isn't that Andy Lau??
[spoiler]
For a rotating object, the linear distance covered at the circumference in a radian of rotation is the product of the radius with the angular speed. That is: linear speed = radius × angular
speed. By definition, linear distance=linear speed × time=radius × angular speed × time.
By the definition of torque: torque=force × radius. We can rearrange this to determine force=torque ÷ radius. These two values can be substituted into the definition of power:
\mbox{power} = \frac{\mbox{force} \times \mbox{linear distance}}{\mbox{time}}=\frac{\left(\frac{\mbox{torque}}{\displaystyle{r}}\right) \times (r \times \mbox{angular speed} \times t)} {t} = \
mbox{torque} \times \mbox{angular speed}.
The radius r and time t have dropped out of the equation. However angular speed must be in radians, by the assumed direct relationship between linear speed and angular speed at the beginning of
the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation to give:
\mbox{power}=\mbox{torque} \times 2 \pi \times \mbox{rotational speed}. \,
If torque is in lbf·ft and rotational speed in revolutions per minute, the above equation gives power in ft·lbf/min. The horsepower form of the equation is then derived by applying the conversion
factor 33000 ft·lbf/min per horsepower:
\mbox{power} = \mbox{torque } \times\ 2 \pi\ \times \mbox{ rotational speed} \cdot \frac{\mbox{ft}\cdot\mbox{lbf}}{\mbox{min}} \times \frac{\mbox{horsepower}}{33000 \cdot \frac{\mbox{ft }\cdot\
mbox{ lbf}}{\mbox{min}} } \approx \frac {\mbox{torque} \times \mbox{RPM}}{5252}
because 5252.113122... = \frac {33000} {2 \pi}. \,
Principle of moments
The Principle of Moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name) states that the sum of torques due to several forces applied to a
single point is equal to the torque due to the sum (resultant) of the forces. Mathematically, this follows from:
(\mathbf{r}\times\mathbf{F}_1) + (\mathbf{r}\times\mathbf{F}_2) + \cdots = \mathbf{r}\times(\mathbf{F}_1+\mathbf{F}_2 + \cdots).
Torque multiplier
A torque multiplier is a gear box, which works on the principle of epicyclic gearing. The given load at the input gets multiplied as per the multiplication factor and transmitted to the output,
thereby achieving greater load with minimal effort.
sumber wikipediaGuntap...
kalau ikut ni penerangan pening jg tu kepala mo pikir...sinang cerita torque kekuatan tu injin tarik tu badan kereta
Torque, also called moment or moment of force (see the terminology below), is the tendency of a force to rotate an object about an axis,[1] fulcrum, or pivot. Just as a force is a push or a pull,
a torque can be thought of as a twist.
Loosely speaking, torque is a measure of the turning force on an object such as a bolt or a flywheel. For example, pushing or pulling the handle of a wrench connected to a nut or bolt produces a
torque (turning force) that loosens or tightens the nut or bolt.
The terminology for this concept is not straightforward: in the US, it is usually called "torque" in physics and "moment" in mechanical engineering.[2] Outside the US, however, this varies. In the UK, for instance, most physicists use the term "moment". In UK mechanical engineering, the term "torque" means something different,[3] as described below. In this article, the word "torque" is always used to mean the same as "moment".
The symbol for torque is typically τ, the Greek letter tau. When it is called moment, it is commonly denoted M.
The magnitude of torque depends on three quantities: the force applied, the length of the lever arm[4] connecting the axis to the point of force application, and the angle between the force
vector and the lever arm. In symbols:
\boldsymbol \tau = \mathbf{r}\times \mathbf{F}\,\!
\tau = rF\sin \theta\,\!
τ is the torque vector and τ is the magnitude of the torque,
r is the displacement vector (a vector from the point from which torque is measured to the point where force is applied), and r is the length (or magnitude) of the lever arm vector,
F is the force vector, and F is the magnitude of the force,
× denotes the cross product,
θ is the angle between the force vector and the lever arm vector.
The length of the lever arm is particularly important; choosing this length appropriately lies behind the operation of levers, pulleys, gears, and most other simple machines involving a
mechanical advantage.
The SI unit for torque is the newton metre (N·m). For more on the units of torque, see below.
1 Terminology
2 History
3 Definition and relation to angular momentum
3.1 Proof of the equivalence of definitions for a fixed instantaneous centre of rotation
4 Units
5 Special cases and other facts
5.1 Moment arm formula
5.2 Static equilibrium
5.3 Net force versus torque
6 Machine torque
7 Relationship between torque, power and energy
7.1 Conversion to other units
7.2 Derivation
8 Principle of moments
9 Torque multiplier
10 See also
11 References
12 External links
Terminology
See also: Couple (mechanics)
In mechanical engineering (unlike physics), the terms "torque" and "moment" are not interchangeable. "Moment" is the general term for the tendency of one or more applied forces to rotate an
object about an axis (the concept which in physics is called torque).[3] "Torque" is a special case of this: If the applied force vectors add to zero (i.e., their "resultant" is zero), then the
forces are called a "couple" and their moment is called a "torque".[3]
For example, a rotational force down a shaft, such as a turning screw-driver, forms a couple, so the resulting moment is called a "torque". By contrast, a lateral force on a beam produces a
moment (called a bending moment), but since the net force is nonzero, this bending moment is not called a "torque".
This article follows physics terminology by calling all moments by the term "torque", whether or not they are associated with a couple.
History
The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The rotational analogues of force, mass, and acceleration are torque, moment of inertia, and angular acceleration, respectively.
Definition and relation to angular momentum
A particle is located at position r relative to its axis of rotation. When a force F is applied to the particle, only the perpendicular component F⊥ produces a torque. This torque τ = r × F has
magnitude τ = |r||F⊥| = |r||F|sinθ and is directed outward from the page.
Torque is defined about a point, not necessarily about an axis, even though several books state the definition in terms of an axis.
A force applied at a right angle to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. A force of three newtons applied two metres from the
fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. The direction of the torque can be determined by using the right hand grip rule: if the
fingers of the right hand curl in the direction of rotation and the thumb points along the axis of rotation, then the thumb also points in the direction of the torque.[5]
More generally, the torque on a particle (which has the position r in some reference frame) can be defined as the cross product:
\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F},
where r is the particle's position vector relative to the fulcrum, and F is the force acting on the particle. The magnitude τ of the torque is given by
\tau = rF\sin\theta,\!
where r is the distance from the axis of rotation to the particle, F is the magnitude of the force applied, and θ is the angle between the position and force vectors. Alternatively,
\tau = rF_{\perp},
where F⊥ is the amount of force directed perpendicularly to the position of the particle. Any force directed parallel to the particle's position vector does not produce a torque.[6]
It follows from the properties of the cross product that the torque vector is perpendicular to both the position and force vectors. It points along the axis of rotation, and its direction is
determined by the right-hand rule.[6]
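The cross-product definition τ = r × F, and the equivalent magnitude formula τ = rF sin θ, can be checked numerically. The following is a minimal sketch with plain Python lists; the helper names `cross` and `magnitude` are illustrative, not from the original text:

```python
import math

def cross(a, b):
    """3D cross product a x b."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def magnitude(v):
    """Euclidean length of a 3D vector."""
    return math.sqrt(sum(c*c for c in v))

# A 10 N force along +y applied 0.5 m along +x from the pivot:
r = [0.5, 0.0, 0.0]
F = [0.0, 10.0, 0.0]
tau = cross(r, F)   # points along +z, consistent with the right-hand rule
# Here theta = 90 degrees, so |tau| = r F sin(theta) = 0.5 * 10 = 5 N·m
```

The resulting torque vector lies along the z axis and is perpendicular to both r and F, as the text states.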
The unbalanced torque on a body about its axis of rotation determines the rate of change of the body's angular momentum,
\boldsymbol{\tau} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}
where L is the angular momentum vector and t is time. If multiple torques are acting on the body, it is instead the net torque which determines the rate of change of the angular momentum:
\boldsymbol{\tau}_1 + \cdots + \boldsymbol{\tau}_n = \boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}.
For rotation about a fixed axis,
\mathbf{L} = I\boldsymbol{\omega},
where I is the moment of inertia and ω is the angular velocity. It follows that
\boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t} = \frac{\mathrm{d}(I\boldsymbol{\omega})}{\mathrm{d}t} = I\frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t} = I\boldsymbol{\alpha},
where α is the angular acceleration of the body, measured in rad·s−2. This equation has the limitation that the torque and the moment of inertia must be taken about the instantaneous axis of rotation or about the center of mass, whatever the type of motion (pure translation, pure rotation, or a mixture of the two). If the body is in translational equilibrium, the torque equation takes the same form about every point in the plane of motion.
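The fixed-axis relation τ_net = Iα above can be sketched numerically. This is a minimal illustration with made-up numbers for a hypothetical solid disc (the names and values are not from the original text):

```python
# Rotational form of Newton's second law about a fixed axis: tau_net = I * alpha.
# Illustrative numbers only (a hypothetical solid disc).
m, R = 2.0, 0.1          # mass in kg, radius in m
I = 0.5 * m * R**2       # moment of inertia of a solid disc about its axis
tau_net = 0.05           # applied net torque, N·m
alpha = tau_net / I      # angular acceleration, rad/s^2
```

With these numbers the disc accelerates at 5 rad/s², illustrating how a small torque produces a large angular acceleration when the moment of inertia is small.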
Proof of the equivalence of definitions for a fixed instantaneous centre of rotation
The definition of angular momentum for a single particle is:
\mathbf{L} = \mathbf{r} \times \mathbf{p}
where "×" indicates the vector cross product and p is the particle's linear momentum. The time-derivative of this is:
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \frac{d\mathbf{p}}{dt} + \frac{d\mathbf{r}}{dt} \times \mathbf{p}.
This result can easily be proven by splitting the vectors into components and applying the product rule. Now using the definitions of velocity v = dr/dt, acceleration a = dv/dt and linear
momentum p = mv (if mass is constant),
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times m \frac{d\mathbf{v}}{dt} + \mathbf{v} \times m\mathbf{v}.
The cross product of any vector with itself is zero, so the second term vanishes. Hence with the definition of force F = ma (Newton's 2nd law),
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}.
Then by definition, torque τ = r × F.
If multiple forces are applied, Newton's second law instead reads Fnet = ma, and it follows that
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}_{\mathrm{net}} = \boldsymbol{\tau}_{\mathrm{net}}.
The proof relies on the assumption that mass is constant; this is valid only in non-relativistic systems in which no mass is being ejected.
Units
Torque has dimensions of force times distance. Official SI literature suggests using the unit newton metre (N·m) or the unit joule per radian.[7] The unit newton metre is properly denoted N·m or N m.[8] This notation avoids ambiguity with mN, millinewtons.
The joule, which is the SI unit for energy or work, is dimensionally equivalent to a newton metre, but it is not used for torque. Energy and torque are entirely different concepts, so the
practice of using different unit names for them helps avoid mistakes and misunderstandings.[7] The dimensional equivalence of these units, of course, is not simply a coincidence: A torque of 1
N·m applied through a full revolution will require an energy of exactly 2π joules. Mathematically,
E = \tau\theta,
where E is the energy, τ is magnitude of the torque, and θ is the angle moved (in radians). This equation motivates the alternate unit name joules per radian.[7]
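The claim that a torque of 1 N·m applied through one full revolution does 2π joules of work follows directly from E = τθ, as this one-line check sketches:

```python
import math

# Energy from a constant torque acting through an angle: E = tau * theta,
# with theta in radians.
tau = 1.0             # torque, N·m
theta = 2 * math.pi   # one full revolution, in radians
E = tau * theta       # work done, in joules: exactly 2*pi J
```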
In Imperial units, "pound-force-feet" (lbf·ft), "foot-pounds-force", "inch-pounds-force", and "ounce-force-inches" (ozf·in) are used; other non-SI units of torque include the "metre-kilogram-force". For all these units, the word "force" is often left out,[9] for example abbreviating "pound-force-foot" to simply "pound-foot" (in this case, it is implicit that the "pound" is pound-force and not pound-mass).
Torque is sometimes quoted in units that are not dimensionally consistent, for example g·cm. In these units, "g" should be understood as the force exerted by the weight of one gram at the surface of the Earth, taking the standard acceleration of gravity to be approximately 9.80665 m/s².
Special cases and other facts
Moment arm formula
Moment arm diagram
A very useful special case, often given as the definition of torque in fields other than physics, is as follows:
|\tau| = (\textrm{moment\ arm}) (\textrm{force}).
The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction
of the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the
distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque, arising from a perpendicular force:
|\tau| = (\textrm{distance\ to\ centre}) (\textrm{force}).
For example, if a person applies a force of 10 N to the end of a spanner (wrench) 0.5 m long, the torque will be 5 N·m, assuming the force is applied perpendicular to the spanner.
The torque caused by the two opposing forces Fg and −Fg causes a change in the angular momentum L in the direction of that torque. This causes the top to precess.
Static equilibrium
For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and vertical forces, the sum-of-forces requirement gives two equations, ΣH = 0 and ΣV = 0, and the torque requirement gives a third, Στ = 0. That is, statically determinate equilibrium problems in two dimensions are solved with three equations.
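These equilibrium conditions can be applied to a simple worked example: a weightless beam on two supports carrying a single point load. The numbers and variable names below are made up for illustration:

```python
# 2D statics sketch: a weightless beam on supports A (x = 0) and B (x = L),
# loaded with a single downward point force W at x = x_load.
# Unknown reactions R_A and R_B follow from sum(V) = 0 and sum(torque) = 0.
L = 4.0        # beam length, m
W = 100.0      # downward load, N
x_load = 1.0   # position of the load, m from support A

# Taking torques about support A: R_B * L - W * x_load = 0
R_B = W * x_load / L
# Vertical force balance: R_A + R_B - W = 0
R_A = W - R_B
```

As expected, the support nearer the load (A) carries the larger share of it.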
Net force versus torque
When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same regardless of the point of reference. If the net force \mathbf{F} is not zero, and \boldsymbol{\tau}_1 is the torque measured from \mathbf{r}_1, then the torque measured from \mathbf{r}_2 is
\boldsymbol{\tau}_2 = \boldsymbol{\tau}_1 + (\mathbf{r}_1 - \mathbf{r}_2) \times \mathbf{F}.
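The reference-point shift formula τ₂ = τ₁ + (r₁ − r₂) × F can be verified numerically for a single force. This is a plain-Python sketch with arbitrary example vectors:

```python
# Verify tau2 = tau1 + (r1 - r2) x F for one force F applied at point p.
def cross(a, b):
    """3D cross product a x b."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

F = [0.0, 3.0, 0.0]    # a single applied force
p = [2.0, 0.0, 0.0]    # its point of application
r1 = [0.0, 0.0, 0.0]   # first reference point
r2 = [1.0, 1.0, 0.0]   # second reference point

tau1 = cross(sub(p, r1), F)             # torque measured from r1
tau2 = cross(sub(p, r2), F)             # torque measured from r2
shifted = [a + b for a, b in zip(tau1, cross(sub(r1, r2), F))]
# 'shifted' equals tau2 component by component
```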
Machine torque
Torque curve of a motorcycle ("BMW K 1200 R 2005"). The horizontal axis is the speed (in rpm) at which the crankshaft is turning, and the vertical axis is the torque (in newton metres) that the engine is capable of providing at that speed.
Torque is part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by the rotational speed of its axis. Internal-combustion engines produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). The varying torque output over that range can be measured with a dynamometer, and shown as a torque curve. The peak of that torque curve occurs somewhat below the overall power peak. The torque peak cannot, by definition, appear at a higher rpm than the power peak.
Understanding the relationship between torque, power and engine speed is vital in automotive engineering, concerned as it is with transmitting power from the engine through the drive train to the
wheels. Power is a function of torque and engine speed. The gearing of the drive train must be chosen appropriately to make the most of the motor's torque characteristics. Power at the drive
wheels is equal to engine power less mechanical losses regardless of any gearing between the engine and drive wheels.
Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints).
Reciprocating steam engines can start heavy loads from zero RPM without a clutch.
Relationship between torque, power and energy
If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through a rotational distance, it is doing work. Mathematically, for rotation
about a fixed axis through the center of mass,
W = \int_{\theta_1}^{\theta_2} \tau\ \mathrm{d}\theta,
where W is work, τ is torque, and θ1 and θ2 represent (respectively) the initial and final angular positions of the body.[10] It follows from the work-energy theorem that W also represents the
change in the rotational kinetic energy Krot of the body, given by
K_{\mathrm{rot}} = \tfrac{1}{2}I\omega^2,
where I is the moment of inertia of the body and ω is its angular speed.[10]
Power is the work per unit time, given by
P = \boldsymbol{\tau} \cdot \boldsymbol{\omega},
where P is power, τ is torque, ω is the angular velocity, and · represents the scalar product.
Mathematically, the equation may be rearranged to compute torque for a given power output. Note that the power injected by the torque depends only on the instantaneous angular speed – not on
whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on
the instantaneous speed – not on the resulting acceleration, if any).
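The relation P = τ·ω (with ω in radians per second) can be sketched with made-up numbers, including the rpm-to-rad/s conversion that the following subsection discusses:

```python
import math

# Power delivered by a torque about a fixed axis: P = tau * omega,
# where omega must be in rad/s. Illustrative numbers only.
tau = 50.0                        # torque, N·m
rpm = 1200.0                      # rotational speed, rev/min
omega = rpm * 2 * math.pi / 60    # convert rev/min to rad/s
P = tau * omega                   # power, in watts (about 6.28 kW here)
```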
In practice, this relationship can be observed in power stations which are connected to a large electrical power grid. In such an arrangement, the generator's angular speed is fixed by the grid's
frequency, and the power output of the plant is determined by the torque applied to the generator's axis of rotation.
Consistent units must be used. For metric SI units power is watts, torque is newton metres and angular speed is radians per second (not rpm and not revolutions per second).
Also, the unit newton metre is dimensionally equivalent to the joule, which is the unit of energy. However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is
assigned to a scalar.
Conversion to other units
A conversion factor may be necessary when using different units of power, torque, or angular speed. For example, if rotational speed (revolutions per time) is used in place of angular speed
(radians per time), we multiply by a factor of 2π radians per revolution.
\mbox{power} = \mbox{torque} \times 2 \pi \times \mbox{rotational speed}
Adding units:
\mbox{power (W)} = \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rps)}
Converting rotational speed to revolutions per minute (dividing by 60 seconds per minute) and power to kilowatts (dividing by 1000 watts per kilowatt) gives us the following.
\mbox{power (kW)} = \frac{ \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rpm)}} {60000}
where rotational speed is in revolutions per minute (rpm).
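The kilowatt formula above can be wrapped in a small conversion helper; the function name `power_kw` is just an illustrative choice:

```python
import math

def power_kw(torque_nm, rpm):
    """Power in kW from torque in N·m and rotational speed in rev/min:
    power (kW) = torque * 2*pi * rpm / 60000."""
    return torque_nm * 2 * math.pi * rpm / 60000

# Example: 200 N·m at 3000 rpm is roughly 62.8 kW.
p = power_kw(200.0, 3000.0)
```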
Some people (e.g., American automotive engineers) use horsepower (imperial mechanical) for power, foot-pounds (lbf·ft) for torque, and rpm for rotational speed. This changes the formula to:
\mbox{power (hp)} = \frac{ \mbox{torque(lbf}\cdot\mbox{ft)} \times 2 \pi \times \mbox{rotational speed (rpm)} }{33000}.
The constant 33,000 ft·lbf/min changes with the definition of the horsepower; for example, using metric horsepower, it becomes approximately 32,550.
Use of other units (e.g. BTU/h for power) would require a different custom conversion factor.
Derivation
For a rotating object, the linear distance covered at the circumference per unit time is the product of the radius and the angular speed. That is: linear speed = radius × angular speed. By definition, linear distance = linear speed × time = radius × angular speed × time.
By the definition of torque: torque=force × radius. We can rearrange this to determine force=torque ÷ radius. These two values can be substituted into the definition of power:
\mbox{power} = \frac{\mbox{force} \times \mbox{linear distance}}{\mbox{time}} = \frac{\left(\frac{\mbox{torque}}{\displaystyle{r}}\right) \times (r \times \mbox{angular speed} \times t)}{t} = \mbox{torque} \times \mbox{angular speed}.
The radius r and time t have dropped out of the equation. However, angular speed must be in radians per unit time, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit time, the linear speed and distance are increased proportionately by 2π in the above derivation to give:
\mbox{power}=\mbox{torque} \times 2 \pi \times \mbox{rotational speed}. \,
If torque is in lbf·ft and rotational speed in revolutions per minute, the above equation gives power in ft·lbf/min. The horsepower form of the equation is then derived by applying the conversion
factor 33000 ft·lbf/min per horsepower:
\mbox{power} = \mbox{torque} \times 2 \pi \times \mbox{rotational speed} \cdot \frac{\mbox{ft}\cdot\mbox{lbf}}{\mbox{min}} \times \frac{\mbox{horsepower}}{33000 \cdot \frac{\mbox{ft}\cdot\mbox{lbf}}{\mbox{min}}} \approx \frac{\mbox{torque} \times \mbox{RPM}}{5252}
because 5252.113122... = \frac {33000} {2 \pi}. \,
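The horsepower form, including the familiar 5252 crossover point (the rpm at which torque in lbf·ft and power in hp are numerically equal), can be sketched as follows; `power_hp` is an illustrative helper name:

```python
import math

def power_hp(torque_lbft, rpm):
    """Power in imperial mechanical horsepower from torque in lbf·ft and rpm:
    power (hp) = torque * 2*pi * rpm / 33000, i.e. about torque*rpm/5252."""
    return torque_lbft * 2 * math.pi * rpm / 33000

# Near 5252 rpm the numeric values of torque (lbf·ft) and power (hp) coincide:
p = power_hp(300.0, 5252.0)   # close to 300 hp
```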
Principle of moments
The Principle of Moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name) states that the sum of torques due to several forces applied to a
single point is equal to the torque due to the sum (resultant) of the forces. Mathematically, this follows from:
(\mathbf{r}\times\mathbf{F}_1) + (\mathbf{r}\times\mathbf{F}_2) + \cdots = \mathbf{r}\times(\mathbf{F}_1+\mathbf{F}_2 + \cdots).
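Varignon's theorem is just the distributivity of the cross product over addition, which a quick numeric check illustrates (arbitrary example vectors, plain Python):

```python
# Varignon: the sum of r x F_i over forces applied at one point
# equals r x (sum of F_i).
def cross(a, b):
    """3D cross product a x b."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

r  = [1.0, 2.0, 0.0]
F1 = [3.0, 0.0, 1.0]
F2 = [-1.0, 4.0, 0.0]

# Sum of the individual torques:
lhs = [a + b for a, b in zip(cross(r, F1), cross(r, F2))]
# Torque of the resultant force:
rhs = cross(r, [a + b for a, b in zip(F1, F2)])
# lhs and rhs agree component by component
```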
Torque multiplier
A torque multiplier is a gearbox that works on the principle of epicyclic gearing. The torque applied at the input is multiplied by the gear ratio and transmitted to the output, so that a large torque can be delivered with minimal effort.
Source: Wikipedia. Guntap...
If you try to follow that explanation, it will make your head spin just thinking about it... put simply, torque is the strength of the engine pulling the body of the car along.
Torque, also called moment or moment of force (see the terminology below), is the tendency of a force to rotate an object about an axis,[1] fulcrum, or pivot. Just as a force is a push or a pull,
a torque can be thought of as a twist.
Loosely speaking, torque is a measure of the turning force on an object such as a bolt or a flywheel. For example, pushing or pulling the handle of a wrench connected to a nut or bolt produces a
torque (turning force) that loosens or tightens the nut or bolt.
The terminology for this concept is not straightforward: In the US, in physics it is usually called "torque" and in mechanical engineering it is called "moment".[2] However outside the US this
varies. In the UK for instance, most physicists will use the term "moment". In mechanical engineering, the term "torque" means something different,[3] described below. In this article the word
"torque" is always used to mean the same as "moment".
The symbol for torque is typically τ, the Greek letter tau. When it is called moment, it is commonly denoted M.
The magnitude of torque depends on three quantities: the force applied, the length of the lever arm[4] connecting the axis to the point of force application, and the angle between the force
vector and the lever arm. In symbols:
\boldsymbol \tau = \mathbf{r}\times \mathbf{F}\,\!
\tau = rF\sin \theta\,\!
τ is the torque vector and τ is the magnitude of the torque,
r is the displacement vector (a vector from the point from which torque is measured to the point where force is applied), and r is the length (or magnitude) of the lever arm vector,
F is the force vector, and F is the magnitude of the force,
× denotes the cross product,
θ is the angle between the force vector and the lever arm vector.
The length of the lever arm is particularly important; choosing this length appropriately lies behind the operation of levers, pulleys, gears, and most other simple machines involving a
mechanical advantage.
The SI unit for torque is the newton metre (N·m). For more on the units of torque, see below.
1 Terminology
2 History
3 Definition and relation to angular momentum
3.1 Proof of the equivalence of definitions for a fixed instantaneous centre of rotation
4 Units
5 Special cases and other facts
5.1 Moment arm formula
5.2 Static equilibrium
5.3 Net force versus torque
6 Machine torque
7 Relationship between torque, power and energy
7.1 Conversion to other units
7.2 Derivation
8 Principle of moments
9 Torque multiplier
10 See also
11 References
12 External links
See also: Couple (mechanics)
In mechanical engineering (unlike physics), the terms "torque" and "moment" are not interchangeable. "Moment" is the general term for the tendency of one or more applied forces to rotate an
object about an axis (the concept which in physics is called torque).[3] "Torque" is a special case of this: If the applied force vectors add to zero (i.e., their "resultant" is zero), then the
forces are called a "couple" and their moment is called a "torque".[3]
For example, a rotational force down a shaft, such as a turning screw-driver, forms a couple, so the resulting moment is called a "torque". By contrast, a lateral force on a beam produces a
moment (called a bending moment), but since the net force is nonzero, this bending moment is not called a "torque".
This article follows physics terminology by calling all moments by the term "torque", whether or not they are associated with a couple.
The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The rotational analogues of force, mass, and acceleration are torque, moment of inertia,
and angular acceleration, respectively.
Definition and relation to angular momentum
A particle is located at position r relative to its axis of rotation. When a force F is applied to the particle, only the perpendicular component F⊥ produces a torque. This torque τ = r × F has
magnitude τ = |r||F⊥| = |r||F|sinθ and is directed outward from the page.
Torque is defined about a point not specifically about axis as mentioned in several books.
A force applied at a right angle to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. A force of three newtons applied two metres from the
fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. The direction of the torque can be determined by using the right hand grip rule: if the
fingers of the right hand curl in the direction of rotation and the thumb points along the axis of rotation, then the thumb also points in the direction of the torque.[5]
More generally, the torque on a particle (which has the position r in some reference frame) can be defined as the cross product:
\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F},
where r is the particle's position vector relative to the fulcrum, and F is the force acting on the particle. The magnitude τ of the torque is given by
\tau = rF\sin\theta,\!
where r is the distance from the axis of rotation to the particle, F is the magnitude of the force applied, and θ is the angle between the position and force vectors. Alternatively,
\tau = rF_{\perp},
where F⊥ is the amount of force directed perpendicularly to the position of the particle. Any force directed parallel to the particle's position vector does not produce a torque.[6]
It follows from the properties of the cross product that the torque vector is perpendicular to both the position and force vectors. It points along the axis of rotation, and its direction is
determined by the right-hand rule.[6]
The unbalanced torque on a body along axis of rotation determines the rate of change of the body's angular momentum,
\boldsymbol{\tau} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}
where L is the angular momentum vector and t is time. If multiple torques are acting on the body, it is instead the net torque which determines the rate of change of the angular momentum:
\boldsymbol{\tau}_1 + \cdots + \boldsymbol{\tau}_n = \boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t}.
For rotation about a fixed axis,
\mathbf{L} = I\boldsymbol{\omega},
where I is the moment of inertia and ω is the angular velocity. It follows that
\boldsymbol{\tau}_{\mathrm{net}} = \frac{\mathrm{d}\mathbf{L}}{\mathrm{d}t} = \frac{\mathrm{d}(I\boldsymbol{\omega})}{\mathrm{d}t} = I\frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t} = I\
where α is the angular acceleration of the body, measured in rad·s−2.This equation has limitation that torque equation is to be only written about instantaneous axis of rotation or center of mass
for any type of motion-either motion is pure translation,pure rotation or mixed motion.I=Moment of inertia about point about which torque is written(either about instantaneous axis of rotation or
center of mass only). If body is in translatory equilibrium then torque equation is same about all points in plane of motion.
Proof of the equivalence of definitions for a fixed instantaneous centre of rotation
The definition of angular momentum for a single particle is:
\mathbf{L} = \mathbf{r} \times \mathbf{p}
where "×" indicates the vector cross product and p is the particle's linear momentum. The time-derivative of this is:
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \frac{d\mathbf{p}}{dt} + \frac{d\mathbf{r}}{dt} \times \mathbf{p}.
This result can easily be proven by splitting the vectors into components and applying the product rule. Now using the definitions of velocity v = dr/dt, acceleration a = dv/dt and linear
momentum p = mv (if mass is constant),
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times m \frac{d\mathbf{v}}{dt} + \mathbf{v} \times m\mathbf{v}.
The cross product of any vector with itself is zero, so the second term vanishes. Hence with the definition of force F = ma (Newton's 2nd law),
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}.
Then by definition, torque τ = r × F.
If multiple forces are applied, Newton's second law instead reads Fnet = ma, and it follows that
\frac{d\mathbf{L}}{dt} = \mathbf{r} \times \mathbf{F}_{\mathrm{net}} = \boldsymbol{\tau}_{\mathrm{net}}.
The proof relies on the assumption that mass is constant; this is valid only in non-relativistic systems in which no mass is being ejected.
Torque has dimensions of force times distance. Official SI literature suggests using the unit newton metre (N·m) or the unit joule per radian.[7] The unit newton metre is properly denoted N·m or
N m.[8] This avoids ambiguity with mN, millinewtons.
The joule, which is the SI unit for energy or work, is dimensionally equivalent to a newton metre, but it is not used for torque. Energy and torque are entirely different concepts, so the
practice of using different unit names for them helps avoid mistakes and misunderstandings.[7] The dimensional equivalence of these units, of course, is not simply a coincidence: A torque of 1
N·m applied through a full revolution will require an energy of exactly 2π joules. Mathematically,
E = \tau \theta,
where E is the energy, τ is magnitude of the torque, and θ is the angle moved (in radians). This equation motivates the alternate unit name joules per radian.[7]
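The relation E = τθ can be checked numerically; a minimal sketch, using the 1 N·m, one-full-revolution example from the text:

```python
import math

# Work done by a constant torque acting through an angle theta (in radians): E = tau * theta.
def torque_work(tau, theta):
    return tau * theta

# A torque of 1 N*m applied through one full revolution (2*pi rad) does 2*pi joules of work.
energy = torque_work(1.0, 2 * math.pi)  # ~6.283 J
```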
In British units, "pound-force-feet" (lbf·ft), "foot-pounds-force", "inch-pounds-force", and "ounce-force-inches" (ozf·in) are used, and other non-SI units of torque include "metre-kilograms-force". For all these units, the word "force" is often left out,[9] for example abbreviating "pound-force-foot" to simply "pound-foot" (in this case, it is implicit that the "pound" is pound-force and not pound-mass).
Sometimes one may see torque given in units that do not make dimensional sense, for example g·cm. In these units, g should be understood as the force given by the weight of 1 gram at the surface of the earth, taking the standard acceleration of gravity (approximately 9.80665 m/s2).
Special cases and other facts
Moment arm formula
Moment arm diagram
A very useful special case, often given as the definition of torque in fields other than physics, is as follows:
|\tau| = (\textrm{moment\ arm}) (\textrm{force}).
The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction
of the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the
distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque, arising from a perpendicular force:
|\tau| = (\textrm{distance\ to\ centre}) (\textrm{force}).
For example, if a person places a force of 10 N on a spanner (wrench) which is 0.5 m long, the torque will be 5 N m, assuming that the person pulls the spanner by applying force perpendicular to
the spanner.
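The moment-arm calculation can be sketched in code; the spanner example from the text appears below, and the general form |τ| = |r||F|·sin θ (of which the perpendicular case θ = 90° is a special case) is assumed:

```python
import math

# |tau| = |r| * |F| * sin(angle between r and F); the moment arm is |r| * sin(angle).
def torque_magnitude(r, force, angle_rad):
    return r * force * math.sin(angle_rad)

# The spanner example: 10 N applied perpendicular to a 0.5 m spanner gives 5 N*m.
tau = torque_magnitude(0.5, 10.0, math.pi / 2)  # 5.0 N*m
```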
The torque caused by the two opposing forces Fg and −Fg causes a change in the angular momentum L in the direction of that torque. This causes the top to precess.
Static equilibrium
For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal
and vertical forces, the sum of the forces requirement is two equations: ΣH = 0 and ΣV = 0, and the torque a third equation: Στ = 0. That is, to solve statically determinate equilibrium problems
in two-dimensions, we use three equations.
Net force versus torque
When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same
regardless of your point of reference. If the net force \mathbf{F} is not zero, and \boldsymbol{\tau}_1 is the torque measured from \mathbf{r}_1, then the torque measured from \mathbf{r}_2 is
\boldsymbol{\tau}_2 = \boldsymbol{\tau}_1 + (\mathbf{r}_1 - \mathbf{r}_2) \times \mathbf{F}.
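The reference-point formula τ2 = τ1 + (r1 − r2) × F can be verified numerically for a single applied force; the vectors below are illustrative assumptions, not values from the text:

```python
# Minimal 3-vector helpers (pure Python, no external libraries).
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

# A single force F applied at point p; torque measured about r1 and about r2.
F  = (0.0, 0.0, 3.0)
p  = (2.0, 1.0, 0.0)
r1 = (0.0, 0.0, 0.0)
r2 = (1.0, -1.0, 0.0)

tau1 = cross(sub(p, r1), F)
tau2 = cross(sub(p, r2), F)

# When the net force is just F, tau2 should equal tau1 + (r1 - r2) x F.
predicted = add(tau1, cross(sub(r1, r2), F))
```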
Machine torque
Torque curve of a motorcycle ("BMW K 1200 R 2005"). The horizontal axis is the speed (in rpm) at which the crankshaft is turning, and the vertical axis is the torque (in newton metres) that the engine is capable of providing at that speed.
Torque is part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by the rotational speed of its axis. Internal-combustion engines
produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). The varying torque output over that range can be measured with a
dynamometer, and shown as a torque curve. The peak of that torque curve occurs somewhat below the overall power peak. The torque peak cannot, by definition, appear at a higher rpm than the power peak.
Understanding the relationship between torque, power and engine speed is vital in automotive engineering, concerned as it is with transmitting power from the engine through the drive train to the
wheels. Power is a function of torque and engine speed. The gearing of the drive train must be chosen appropriately to make the most of the motor's torque characteristics. Power at the drive
wheels is equal to engine power less mechanical losses regardless of any gearing between the engine and drive wheels.
Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints).
Reciprocating steam engines can start heavy loads from zero RPM without a clutch.
Relationship between torque, power and energy
If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through a rotational distance, it is doing work. Mathematically, for rotation
about a fixed axis through the center of mass,
W = \int_{\theta_1}^{\theta_2} \tau\ \mathrm{d}\theta,
where W is work, τ is torque, and θ1 and θ2 represent (respectively) the initial and final angular positions of the body.[10] It follows from the work-energy theorem that W also represents the
change in the rotational kinetic energy Krot of the body, given by
K_{\mathrm{rot}} = \tfrac{1}{2}I\omega^2,
where I is the moment of inertia of the body and ω is its angular speed.[10]
Power is the work per unit time, given by
P = \boldsymbol{\tau} \cdot \boldsymbol{\omega},
where P is power, τ is torque, ω is the angular velocity, and · represents the scalar product.
Mathematically, the equation may be rearranged to compute torque for a given power output. Note that the power injected by the torque depends only on the instantaneous angular speed – not on
whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on
the instantaneous speed – not on the resulting acceleration, if any).
In practice, this relationship can be observed in power stations which are connected to a large electrical power grid. In such an arrangement, the generator's angular speed is fixed by the grid's
frequency, and the power output of the plant is determined by the torque applied to the generator's axis of rotation.
Consistent units must be used. For metric SI units, power is in watts, torque in newton metres, and angular speed in radians per second (not rpm and not revolutions per second).
Also, the unit newton metre is dimensionally equivalent to the joule, which is the unit of energy. However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is
assigned to a scalar.
Conversion to other units
A conversion factor may be necessary when using different units of power, torque, or angular speed. For example, if rotational speed (revolutions per time) is used in place of angular speed
(radians per time), we multiply by a factor of 2π radians per revolution.
\mbox{power} = \mbox{torque} \times 2 \pi \times \mbox{rotational speed}
Adding units:
\mbox{power (W)} = \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rps)}
Dividing by 60 seconds per minute (to use rpm) and by 1000 watts per kilowatt gives us the following.
\mbox{power (kW)} = \frac{ \mbox{torque (N}\cdot\mbox{m)} \times 2 \pi \times \mbox{rotational speed (rpm)}} {60000}
where rotational speed is in revolutions per minute (rpm).
Some people (e.g. American automotive engineers) use horsepower (imperial mechanical) for power, foot-pounds (lbf·ft) for torque and rpm for rotational speed. This results in the formula changing to
\mbox{power (hp)} = \frac{ \mbox{torque(lbf}\cdot\mbox{ft)} \times 2 \pi \times \mbox{rotational speed (rpm)} }{33000}.
The constant 33,000 ft·lbf/min changes with the definition of the horsepower; for example, using metric horsepower, it becomes ~32,550.
Use of other units (e.g. BTU/h for power) would require a different custom conversion factor.
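Both conversion formulas above can be sketched directly; the example values (100 N·m at 3000 rpm, and the well-known crossover near 5252 rpm where torque in lbf·ft and power in hp are numerically about equal, since 33000/2π ≈ 5252) are illustrative:

```python
import math

def power_kw(torque_nm, rpm):
    """Power in kW from torque in N*m and speed in rpm (60000 folds in 60 s/min and 1000 W/kW)."""
    return torque_nm * 2 * math.pi * rpm / 60000.0

def power_hp(torque_lbf_ft, rpm):
    """Power in imperial mechanical hp from torque in lbf*ft and rpm (33000 ft*lbf/min per hp)."""
    return torque_lbf_ft * 2 * math.pi * rpm / 33000.0

p_kw = power_kw(100.0, 3000.0)   # ~31.4 kW
p_hp = power_hp(100.0, 5252.0)   # ~100 hp
```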
For a rotating object, the linear distance covered at the circumference in a radian of rotation is the product of the radius with the angular speed. That is: linear speed = radius × angular
speed. By definition, linear distance=linear speed × time=radius × angular speed × time.
By the definition of torque: torque=force × radius. We can rearrange this to determine force=torque ÷ radius. These two values can be substituted into the definition of power:
\mbox{power} = \frac{\mbox{force} \times \mbox{linear distance}}{\mbox{time}}=\frac{\left(\frac{\mbox{torque}}{\displaystyle{r}}\right) \times (r \times \mbox{angular speed} \times t)} {t} = \mbox{torque} \times \mbox{angular speed}.
The radius r and time t have dropped out of the equation. However, angular speed must be in radians, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation, which gives the factor of 2π in the conversion formulas above.
That's Andy Lau, because there is a "Lau" at the end of it too .. long lau
~~You can complain because roses have thorns, or you can rejoice because thorns have roses~~
It's not that Liew, is it?
Ever since BN-UMNO started telling stories about sex: sodomy, sex videos, and most recently sex audio (Mat Sabu), to the point that foreign countries call Malaysia a Sex Nation; my mind has been dirtied by BN-UMNO's filth
Bukit_Padang_Roller wrote: a Chinese guy, or Joshua?
orangaslisabah wrote: It's not that Liew, is it?
Oops.. not that one
| {"url":"http://www.sabahforum.com/forum/music-celebrities/topic13386.html","timestamp":"2014-04-16T07:14:00Z","content_type":null,"content_length":"208583","record_id":"<urn:uuid:e61e6c34-004c-4e8b-93d8-187f197dc3c7>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00463-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kinesiology Problem set 3
1)A small plane travels from Austin to San Antonio (80 miles) with a velocity of 55 mi/hr. How long does this trip take? 1.45 hr
2)A bicyclist cycles from Rohnert Park to the Sonoma/ Marin county line (15 miles) with a velocity of 30 mi/hr. How long does the trip take? .5 hours
3) A runner runs around a lake (60 meters in circumference) at a rate of 4 meters/sec. How long does the trip take? 15 sec
4) How much average acceleration is necessary to slow a luge racer down from 55 mi/hr to 35 mi/hr in 20 sec? –3600 mi/hr ^2
5) How much average acceleration is necessary to slow down a baseball from 60 to 30 mi/hr in 15 sec? –7200 mi/hr ^2
6) How much average acceleration is necessary to slow down a bicyclist from 30 to 20 kilometers/sec in 40 sec? –0.25 km/s^2
7) How much average acceleration is necessary for a speed boat to reach 60 mi/hr from a standing start in one minute? 3600 mi/hr^2
8) How much average acceleration is necessary for a horse to reach a gallop 20 mi/hr from a trot 5 mi/hr in 5 sec? 10,800 mi/hr^2
9) How much average acceleration is necessary for a swimmer to slow to 2 meters/sec from 3 meters/sec in 20 sec? –0.05 m/s^2
10) A baseball is dropped from an airplane from height of 555 ft. How long will it take for the baseball to reach the ground? 5.87 sec
11) A crazy diver dives from a bridge (100 meters). How long will he take to reach the ground? 4.52 sec
12) How long does it take for a penny dropped from the statue of liberty (200 meters) to reach the ground? 6.38 sec
13) If a ball is dropped from a height of 100 ft, how fast is it going when it hits the ground? 80.24 ft/sec
14) If an apple is dropped from an airplane (40,000 ft), how fast is it going when it hits the ground? 1604.99 ft/sec downward
15) If a pencil drops to the ground from a height of 2 meters how fast is it going when it hits the ground? 6.26 m/sec | {"url":"http://sonoma.edu/users/b/boda/kin350/Problemset3.htm","timestamp":"2014-04-20T18:52:42Z","content_type":null,"content_length":"6485","record_id":"<urn:uuid:9c5ca21a-844f-4649-ac8c-bce054cdb8f3>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00110-ip-10-147-4-33.ec2.internal.warc.gz"} |
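The drop problems above follow from the constant-acceleration relations t = √(2h/g) and v = √(2gh); a minimal sketch, assuming g = 32.2 ft/s² for the foot-based problems and 9.8 m/s² for the metric ones:

```python
import math

def fall_time(height, g):
    """Time to fall from rest through `height` under constant gravity g (consistent units)."""
    return math.sqrt(2 * height / g)

def impact_speed(height, g):
    """Speed after falling from rest through `height`."""
    return math.sqrt(2 * g * height)

G_FT = 32.2   # ft/s^2
G_M = 9.8     # m/s^2

t_baseball = fall_time(555.0, G_FT)    # ~5.87 s  (problem 10)
v_ball = impact_speed(100.0, G_FT)     # ~80.25 ft/s (problem 13)
t_penny = fall_time(200.0, G_M)        # ~6.39 s  (problem 12)
```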
Little Elm Science Tutor
Find a Little Elm Science Tutor
...I am patient and will help students not only improve their grades, but will call them to excel in their studies. I teach them how to take notes, take tests, and manage their time. I have
coached youth league Basketball for 2 years and also played on Intramural basketball teams while in college.
18 Subjects: including anatomy, ASVAB, psychology, English
...The course includes units: Operation of Rational Numbers,Proportions and Percent, Algebraic reasoning, Transformation and dilation, 2d and Pythagorean Theorem, 3D wrap and filling, Data
Analysis, Probability, Solving one Variable equations. I am a Texas state certified teacher (math 4-12), I tea...
20 Subjects: including physics, biology, statistics, calculus
I am the official chemistry tutor for North Central Texas College and can accurately be described as a bit of a chemistry nerd. I have a tremendous amount of success explaining the concepts of
chemistry to my students and I love seeing their faces light up when the subject finally clicks. We've all struggled with a subject at one time or another, so don't be ashamed to ask for help.
2 Subjects: including organic chemistry, chemistry
...I am certified to teach in the U.K., FL and TX (fingerprint and FBI checks done). I am currently employed by Mckinney ISD as a substitute teacher, teaching from elementary to high school. I
have a degree in Chemistry BSc.Hons, with a minor in Biology and I have a post graduate certificate in edu...
4 Subjects: including biochemistry, physical science, biology, chemistry
My name is Manvi Raghupatruni, and I live in Southlake, TX. I am an adjunct professor of biology at Tarrant County College and am also a biology tutor. I love teaching and helping students excel
in biology.
28 Subjects: including chemistry, physics, English, biochemistry | {"url":"http://www.purplemath.com/little_elm_science_tutors.php","timestamp":"2014-04-16T16:47:26Z","content_type":null,"content_length":"23900","record_id":"<urn:uuid:d82aa72d-3d2b-41c8-87f5-cd5756bd13ce>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00460-ip-10-147-4-33.ec2.internal.warc.gz"} |
An elementary question about adjunctions between presheaf categories preserving pullbacks.
A functor $C \to D$ between categories induces a morphism of presheaf categories $Pre(D) \to Pre(C)$. This functor has a left adjoint given by left Kan extension and I am interested in knowing when
this left adjoint preserves pull-back squares.
I'm interested in any conditions that make this happen, but I am particularly interested in a special case. Let me say a little more about the context I am working in, and why I am interested. In the
situation that this came up $C$ is a "lluf" subcategory of $D$, that is $C$ has the same objects as $D$ and the functor $C \to D$ is faithful. In that case it is good to call the functor $U:Pre(D) \
to Pre(C)$ the forgetful functor. It automatically preserves limits and colimits. Let L be its left-adjoint.
Since C has the same objects as D, this forgetful functor is also conservative, meaning that it reflects isomorphisms. So by general non-sense (specifically Beck's monadicity theorem) this is a
monadic adjunction. This means that $Pre(D) = T-alg$ is the category of T-algebras in $Pre(C)$ where T is the monad $T= UL$.
I'm trying to understand conditions under which this monad is cartesian in the sense described at the n-lab. This means, among other things, that the monad T is supposed to send fiber products to
fiber products. This is equivalent to having L send fiber products to fiber products. I want to understand when this happens. Does it always happen? Are there reasonable conditions on C or D that
ensure that this happens?
Notice that I am not asking for L to be "left-exact", i.e. to commute with all finite limits. This property is generally much too strong. In particular L will not usually preserve terminal objects.
This means it doesn't preserve products, but should instead send products to fiber products over $L(1)$.
Here is an example. Let $C = pt$ be the singleton category and $D = G$ be the one object category with morphisms a group G. There is a unique inclusion $C \to D$ which is obviously faithful. The
forgetful functor $$U:Pre(D) \to Pre(C)$$ sends a G-set to its underlying set. The left adjoint L sends a set S to the free G-set $S \times G$. This doesn't preserve terminal objects, but it does
commute with fiber products. What is more, the monad $T=UL$ is a classic example of a cartesian monad in the n-lab sense.
I've played around with this, but can't seem to get it to work. I feel like this is going to be a well known result or there is going to be a counter example which sheds light on the situation.
Question: In the context I described above (where $C \to D$ is lluf), does the left adjoint $$L: Pre(C) \to Pre(D)$$ always commute with fiber products? If not what is a counter example, and are
there conditions one can place on C and D to ensure that L does commute with fiber products?
ct.category-theory monads adjoint-functors
Here's an idea for an approach; I don't have time to work it through now, but maybe someone else can? First: "preserves pullbacks"="preserves finite connected limits". Now, the left adjoint $L = f
\otimes -$ preserves finite limits iff the original functor $f$ is flat, i.e. "has (co?)filtered (co?)commas". Now, "filtered"="has a cocone under every finite diagram". What if we replace this
with "…every finite connected diagram"? Will this condition be equivalent to $f \otimes -$ preserving finite connected limits? I suspect this won't quite work, but that something similar will. –
Peter LeFanu Lumsdaine Aug 3 '10 at 22:36
2 Answers
$\newcommand{\C}{\mathbf{C}} \newcommand{\D}{\mathbf{D}} \newcommand{\Lan}{\mathrm{Lan}} \newcommand{\yon}{\mathbf{y}} \newcommand{\CC}{[\C^\mathrm{op},\mathbf{Set}]} \newcommand{\DD}
{[\D^\mathrm{op},\mathbf{Set}]}$ Expanding on my comment above:
Define: a category is semi-filtered iff every pair of arrows $x \leftarrow z \rightarrow y$ can be completed to a commutative square, and every parallel pair of arrows $f,g \colon x \
to y$ have some $h : y \to w$ with $hf = hg$; equivalently, if every finite connected diagram has some co-cone under it. (Afaik, this isn't standard terminology; I don't recall seeing
this property discussed, but it almost certainly has been.)
It's filtered if moreover it's non-empty and every pair of objects $x,y$ is connected by some $x \to w \rightarrow y$; equivalently, if every finite diagram has some co-cone under it.
Answer: for $f \colon \C \to \D$, the left Kan extension $f_* \colon \CC \to \DD$ will preserve pullbacks (equivalently, all connected finite limits) exactly if the opposite of each of its comma categories $(f \downarrow d)$ is semi-filtered.
This is a close variant of the standard lemma (see e.g. Mac Lane and Moerdijk Sheaves in Geometry and Logic) that $f_*$ preserves pullbacks and the terminal object (equivalently, all
finite limits) iff the opposite of every $(f \downarrow d)$ is filtered, i.e. if $f$ is flat.
Proof: the values of $f_*$ can be computed as colimits over the opposites of comma categories (see this answer). But in $\mathbf{Set}$, finite limits commute with filtered colimits;
and similarly, pullbacks commute with semi-filtered colimits.
(The first of these facts is standard. The second follows because in a semi-filtered category, each connected component is filtered; so a semi-filtered colimit is a coproduct of
filtered colimits; and pullbacks commute with both coproducts and filtered colimits.)
Mac Lane (CWM exercise IX.2.2) calls your property 'pseudo-filteredness' and attributes it to Verdier. – Finn Lawler Aug 6 '10 at 20:16
Here's another way of getting to the same answer as Peter's. A functor $F\colon A\to B$ preserves pullbacks if and only if the induced functor $F/1 \colon A \to B/F1$ preserves all finite
limits, where 1 is the terminal object of A. When F is left Kan extension $L\colon Psh(C) \to Psh(D)$ along a functor $f\colon C\to D$, it's not hard to check that Psh(D)/L1 is equivalent to
presheaves on the opposite of the category el(L1) of elements of L1 (this is true with any presheaf replacing L1), and that L/1 is left Kan extension along the induced functor $f'\colon C\to
el(L1)^{op}$. Thus, we need to know when that functor is flat.
Now an object of $el(L1)^{op}$ is an object $d\in D$ together with a connected component, call it X, of the comma category $(d\downarrow f)$. And a morphism in $el(L1)^{op}$ is a morphism
$d_1\to d_2$ such that the induced functor $(d_2\downarrow f) \to (d_1\downarrow f)$ maps $X_2$ to $X_1$. You can then check that the comma category $((d,X)\downarrow f')$ is precisely the
connected component X of the comma category $(d\downarrow f)$. Therefore, since $f'$ is flat just when all categories $((d,X)\downarrow f')$ are cofiltered, we conclude that left Kan extension along f preserves pullbacks iff all connected components of all comma categories $(d\downarrow f)$ are cofiltered, i.e. if all $(d\downarrow f)$ are "semi-filtered" in Peter's terminology.
Edit: you also asked for a specific counterexample when $f\colon C\to D$ is the inclusion of a lluf subcategory. Let D be the walking commutative square generated by arrows $a\to b$, $a\to
c$, $b\to d$, and $c\to d$, and let C be its lluf subcategory containing the identities and the arrows $b\to d$ and $c\to d$. Then the comma category $(a\downarrow f)$ has two connected
components, one of which is not semi-cofiltered.
| {"url":"http://mathoverflow.net/questions/34433/an-elementary-question-about-adjunctions-between-presheaf-categories-preserving?sort=newest","timestamp":"2014-04-21T15:27:50Z","content_type":null,"content_length":"62326","record_id":"<urn:uuid:fe20f547-1408-4e26-b325-ae3b85f06c67>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
Jane’s Exercises III
Re: Jane’s Exercises III
Hi Jane;
So good to see you!
A crazy man wrote:
As usual the clever attempt is wrong. You see math and cleverness just don't go together.
Yes, one is quite wrong, or maybe both!
One is so clever
One is straight out of the book
Omnipresent traps
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=156987","timestamp":"2014-04-20T23:40:18Z","content_type":null,"content_length":"37059","record_id":"<urn:uuid:f33b0846-d0ab-46b3-a92a-2f379c8b9571>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00651-ip-10-147-4-33.ec2.internal.warc.gz"} |
vector spaces & subspaces
July 22nd 2010, 03:47 PM #1
Junior Member
Jul 2010
vector spaces & subspaces
Determine whether W is a subspace of V
$W = \left\{ a + bx + cx^2 : abc=0 \right\}$
my solution:
First step: W is nonempty because it contains the zero polynomial (a=b=c=0)
Second step: Let
$p(x) = a + bx + cx^2$
$q(x) = d + ex + fx^2$
$p(x) + q(x) = \left( a+d \right) + \left( b+e \right)x + \left( c + f \right)x^2$
So p(x) + q(x) is also in W (because it has the right form). Similarly, if k is a scalar, then
$kp(x) = ka + kbx + kcx^2$
so kp(x) is in W.
Thus, W is a nonempty subset of $p_2$ that is closed under addition and scalar multiplication. therefore, W is a subspace of $p_2$
Was just wondering if this was all good, because the initial condition of the set stated above (a*b*c=0) confused me a bit
p(x) + q(x) will be in W if and only if (a+d)(b + e)(c + f) = 0. W contains all polynomials of second degree such that the product of the coefficients is equal to zero. Similarly, for kp(x) to be
in W, we must have (ka)(kb)(kc) = 0. You have to check this before you can go on and say that p(x) + q(x) and kp(x) are in W.
Hope this helps!
Determine whether W is a subspace of V
$W = \left\{ a + bx + cx^2 : abc=0 \right\}$
my solution:
First step: W is nonempty because it contains the zero polynomial (a=b=c=0)
Second step: Let
$p(x) = a + bx + cx^2$
$q(x) = d + ex + fx^2$
$p(x) + q(x) = \left( a+d \right) + \left( b+e \right)x + \left( c + f \right)x^2$
So p(x) + q(x) is also in W (because it has the right form).
What do you mean "it has the right form"? What "form" are you talking about? Surely it is in V because it is a quadratic, but to be in W it must also have the product of its coefficients equal to 0 (which is the same as saying at least one of its coefficients is 0). Consider $u= x^2+ 2$, $v= x^2+ 3x$. Are they both in W? What is their sum? Is it in W?
Similarly, if k is a scalar, then
$kp(x) = ka + kbx + kcx^2$
so kp(x) is in W.
Thus, W is a nonempty subset of $p_2$ that is closed under addition and scalar multiplication. therefore, W is a subspace of $p_2$
Was just wondering if this was all good, because the initial condition of the set stated above (a*b*c=0) confused me a bit
Ahh it's all clear now, thanks for the help guys, much appreciated!!
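The counterexample in this thread can be checked mechanically; a small sketch, representing each polynomial a + bx + cx² by its coefficient triple (a, b, c):

```python
# A polynomial a + b*x + c*x^2 lies in W exactly when a*b*c == 0.
def in_W(p):
    a, b, c = p
    return a * b * c == 0

u = (2, 0, 1)   # x^2 + 2   (b = 0, so u is in W)
v = (0, 3, 1)   # x^2 + 3x  (a = 0, so v is in W)
s = tuple(x + y for x, y in zip(u, v))  # 2 + 3x + 2x^2

# The sum has coefficient product 2*3*2 = 12 != 0, so W is not closed under addition.
closed = in_W(s)  # False
```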
| {"url":"http://mathhelpforum.com/advanced-algebra/151737-vector-spaces-subspaces.html","timestamp":"2014-04-19T21:17:56Z","content_type":null,"content_length":"43076","record_id":"<urn:uuid:d8f9a9e3-6ea8-49ce-b428-e37d99c6b25e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
Appendix C.
Community-Based Mass Prophylaxis
Public Health Emergency Preparedness
This resource was part of AHRQ's Public Health Emergency Preparedness program, which was discontinued on June 30, 2011, in a realignment of Federal efforts.
This information is for reference purposes only. It was current when produced and may now be outdated. Archive material is no longer maintained, and some links may not work. Persons with disabilities
having difficulty accessing this information should contact us at: https://info.ahrq.gov. Let us know the nature of the problem, the Web address of what you want, and your contact information.
Please go to www.ahrq.gov for current information.
Appendix C. Technical Appendix: Modeling DVC Operations
Note: In the following discussion, the term "DVC patient flow rate" refers to the planning concept of an average rate of patient processing over the duration of the mass prophylaxis response, not as
an actual measure of the unpredictable rate at which patients may show up at DVCs in the aftermath of a bioterrorist attack. The equations and spreadsheet models presented here use this average
patient flow rate for 2 reasons. First, there is no good data to guide prediction of patient surge arrivals at DVCs, so any model that tried to estimate surge arrivals would be inherently prone to
error. Second, it is likely that, with appropriate use of law enforcement and public information campaigns, planners could maintain constant patient flow rates at their DVCs by controlling entry.
The goal of mass prophylaxis planning is to ensure that dispensing of necessary antibiotics, vaccines, or other medical supplies to target populations occurs within a designated time frame. In
certain cases, the time frame for response will be fixed (e.g., in a widespread smallpox attack wherein vaccination of all potential contacts should take place within 4 days of exposure). However,
most other factors in the response scenario will either be variable (e.g., population affected) or under planners' control (e.g., number of DVC sites, number of staff, and station process times).
A. Modeling Approach
The spreadsheet programs included with this Planning Guide allow planners to model DVC activities based on 2 assumptions. First, all actions in the DVC are considered deterministic, rather than
stochastic, processes. While this eliminates naturalistic variability from elements like patient interarrival time and station processing times, it greatly enhances the simplicity and
understandability of model estimates. Second, these spreadsheet programs give results for DVCs at what is called "steady-state operation." The definition of steady-state in this setting is that
queues occurring at any station in any given DVC in the system do not experience a net increase in length over the course of the prophylaxis campaign. Another way of saying this is that the rate of
arrivals equals the rate of departures from the DVC as a whole and from every station in the DVC, as shown in the figure:
Average Entering (PT/Min) = Average Exiting (PT/Min)
The first step in modeling DVC activities is to determine this average patient flow.
1. Determining Campaign, DVC, and Station Flow
Under the deterministic steady-state assumption, individuals arrive at each DVC station at a constant rate throughout operation of the prophylaxis campaign. The flow at these stations can be
calculated from features of the campaign as a whole (i.e., overall processing rate across a community), the number of DVCs, and the DVC patient flow plan. For ease in calculations and to avoid
errors, these flows should all be in the same unit of time (e.g., per minute).
a. Average Campaign Flow
Average campaign flow represents the total number of individuals processed per unit of time across the entire affected community. It is a function of the total population in the target community
(e.g., town population) and the length of the prophylaxis campaign. Algebraically,
i) R[Campaign] = Pop ÷ T
Where: R[Campaign] = Average campaign flow (or rate)
Pop = Total size of population (or number of patients)
T = Length of Time for campaign
This calculation will give a campaign flow rate of patients per unit time of campaign. T can be days, hours, or minutes. To set T at minutes, first determine how many hours per day the campaign
will be operating (e.g., the DVCs will be open 24 hours per day). The equation for T in terms of minutes becomes:
ii) T = D × H × M
Where: D = Length of campaign in days
H = Hours of operation per day
M = Minutes of operation per hour
Combining i) and ii) gives a calculation of campaign flow in terms of Patients per Minute, as follows:
iii) R[Campaign] = Pop ÷ (D × H × M)
For example, a campaign targeting 10,000 people over 5 days, operating at 8 hours per day will have an average flow of 10,000÷(5×8×60) = 4.17 pts/min.
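This arithmetic can be sketched in a few lines (Python used purely for illustration; the function name and the minutes-per-hour default are ours, not the Planning Guide's):

```python
def campaign_flow(pop, days, hours_per_day, minutes_per_hour=60):
    """Equation iii): average campaign flow in patients per minute."""
    return pop / (days * hours_per_day * minutes_per_hour)

# Worked example from the text: 10,000 people, 5 days, 8 hours/day.
rate = campaign_flow(10_000, 5, 8)
print(round(rate, 2))  # 4.17 patients per minute
```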
Assuming R[Campaign] is fixed and constant, it becomes the variable to which all staffing calculations are ultimately tethered. Consequently, changes in staff per DVC, number of DVCs, or station process times, for example, will necessarily cause changes in the others such that the campaign flow remains constant.
b. Average DVC Flow
The average DVC flow (denoted as R[DVC]) is a measure of the total patients per unit of time each DVC in a campaign can process. Three methods of determining the average DVC flow include a)
User-defined; b) Briefing-defined; c) DVC number-defined.
1. User-defined DVC Flow
The average number of patients per unit of time processed can be based on past experiences or live exercises (denoted as R[DVC-UD]). However, as explained in more detail below in Section 3:
Staffing Calculations, the number of staff per station and per DVC is directly proportional to this flow. Consequently, a higher flow (i.e., larger number of patients processed per unit time)
demands a larger number of working staff. Spatial constraints (i.e., number of staff a given DVC can accommodate) may not allow for this number and thus the DVC flow may need to be decreased.
2. Briefing-defined
On-site briefings to ensure patient education and consent may be required by Federal, State, or local regulations (e.g., as currently required for all Investigational New Drug (IND)
protocols). Because of both their duration (i.e., briefings likely will have the longest process time of all DVC stations) and their scope (i.e., all patients will have to be briefed),
briefings will determine the patient flow for each DVC. Regardless of their placement within a DVC flow plan, the briefing will impact other stations both upstream and downstream. Upstream
stations should be capable of achieving the briefing flow in order to fill the briefing space to capacity (and thus prevent wasted space and materials). At the same time, upstream stations
should not operate faster than the briefing flow as this will produce queues of increasing length outside of the briefing area. Downstream stations should also be capable of achieving the
briefing flow to prevent queues of increasing length.
Consequently, planners creating DVCs with on-site briefings should determine their DVC flow by equating it to the briefing flow (denoted as R[DVC-BD]). The briefing flow is a function of 2
characteristics: the number of patients simultaneously briefed (the product of number of briefing rooms and room capacity) and the length of the briefing. Algebraically,
iv) R[DVC] = R[DVC-BD] = (N[Rooms] × N[Patients per room] ) ÷ T[Briefing]
Where: N[Rooms] = Number of briefing rooms
N[Patients per room] = Capacity of each room
T[Briefing] = Length of each briefing (in minutes for R[DVC-BD] to be equal to patients per minute)
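Equation iv) can be sketched as follows (the room count, capacity, and briefing length below are illustrative assumptions, not values from the guide):

```python
def briefing_flow(n_rooms, patients_per_room, briefing_minutes):
    """Equation iv): briefing-defined DVC flow, in patients per minute."""
    return (n_rooms * patients_per_room) / briefing_minutes

# Illustrative values: 2 briefing rooms of 50 seats each, 20-minute briefing.
print(briefing_flow(2, 50, 20))  # 5.0 patients per minute
```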
3. DVC Number-defined DVC Flow
In certain cases, planners may decide on a maximum number of DVCs in their campaign prior to calculating patient flow. The average DVC flow can be calculated as follows:
v) R[DVC] = R[DVC-ND] = R[Campaign] ÷ N[DVC]
Where: R[DVC-ND] = Average DVC patient flow using the number-defined method
N[DVC] = Maximum number of DVCs within the campaign.
Combining equation iii) with v) will allow calculation of DVC flow in patients per minute as follows:
vi) R[DVC] = R[DVC-ND] = Pop ÷ (D × H × M × N[DVC])
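A minimal sketch of equation vi), reusing the earlier campaign example and an assumed DVC count of 4 (our illustration, not a guide recommendation):

```python
def dvc_flow_number_defined(pop, days, hours_per_day, n_dvc, minutes_per_hour=60):
    """Equation vi): per-DVC flow when the number of DVCs is fixed first."""
    return pop / (days * hours_per_day * minutes_per_hour * n_dvc)

# 10,000 people, 5 days, 8 h/day, split across 4 DVCs.
print(round(dvc_flow_number_defined(10_000, 5, 8, 4), 2))  # 1.04 patients per minute
```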
c. Average Station-Specific Flow
The station-specific flow is a function of two variables: the average DVC flow and the proportion of that flow that arrives at the station of interest. This proportion is determined by features of the DVC patient flow plan. The DVC flow plan determines the paths that patients may travel. The proportion of patients who take a given path is determined by calculating the percentage taking that
path at each branch point along the way (percentages which must be assigned by planners). These station-specific probabilities are then multiplied by the overall patient flow for the DVC (R[DVC])
as calculated above. Algebraically, for any station i, within a DVC pathway containing a total of I sequentially numbered stations, the corresponding station-specific flow (R[Si]) can be
calculated as follows:
vii) R[Si] = R[DVC] × Π[i = 1 to I] (Pi)
Where: i = Sequential number of station located within flow path of DVC
I = Total number of stations within flow path containing this station
Pi = Proportion of patients entering into station i
The following example demonstrates this method. This diagram represents a simple DVC layout. Circles represent individual stations within the DVC and the station of interest is highlighted.
To calculate the station-specific average flow of Station 4, first identify the pathway a patient would follow from entrance into the DVC to reach the station (represented by the thick lines) and
multiply the corresponding estimated probabilities. Finally, multiply this result by the average DVC flow. By example:
Assume: R[DVC] = 10 pts/min
p1 = 1.0 User-defined value
p2 = 0.8 User-defined value
p3 = 0.8 User-defined value
p4 = 0.5 User-defined value
Then: R[S4] = (p4 × p3 × p2 × p1) × R[DVC] or
= (0.5 × 0.8 × 0.8 × 1.0) × 10 pts/min
= 3.2 pts/min
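The worked example above can be reproduced directly (Python for illustration; the helper name is ours):

```python
from math import prod

def station_flow(r_dvc, path_probabilities):
    """Equation vii): multiply the branch probabilities along the path, then scale by R[DVC]."""
    return r_dvc * prod(path_probabilities)

# Reproduces the worked example for Station 4.
r_s4 = station_flow(10, [1.0, 0.8, 0.8, 0.5])
print(round(r_s4, 1))  # 3.2 patients per minute
```

For a station with multiple entrance pathways, sum the per-pathway products first, e.g. `r_dvc * (prod(path_a) + prod(path_b))`.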
Certain stations may have multiple pathways of entrance. In such cases, the products of the chains of probabilities along each associated pathway should be added, and this total then multiplied by R[DVC].
2. Determining the Number of DVCs
The total number of DVCs must be sufficient to process the total population within the given time frame or the campaign will not be a success. Consequently, the most direct method of calculating the
total number of DVCs is to divide the average campaign flow by the average DVC flow, as follows:
viii) N[DVC] = R[Campaign] ÷ R[DVC]
The total number of DVCs within a campaign is inversely proportional to the average flow of each DVC. Decreasing the average DVC flow will increase the number of necessary DVCs. Decreasing the number
of DVCs (e.g., because of resource limitations) will increase the necessary average DVC flow to process a population within the given time frame of the campaign. Fixing the number of DVCs or the DVC
flow rate (e.g., by mandating that all DVCs must operate at 100 patients per minute) will force a change in the overall campaign flow and therefore in the overall time needed to complete the
prophylaxis campaign.
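Equation viii) in code form. Rounding up to a whole number of sites is our addition (the guide does not state it, but a fractional DVC must become a whole one in practice):

```python
import math

def n_dvcs(r_campaign, r_dvc):
    """Equation viii): number of DVCs, rounded up to whole sites."""
    return math.ceil(r_campaign / r_dvc)

# 4.17 patients/min campaign flow, each DVC handling 1 patient/min.
print(n_dvcs(4.17, 1.0))  # 5
```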
3. Staffing Calculations
The number of staff required for a prophylaxis campaign can be calculated for each station within a DVC, for the DVC as a whole, and for the campaign in total. The number of staff is a function of
patient flow, average process time, and the ratio of staff to patient. Under the deterministic representation of a steady-state (where queues, if existent, are constant in length), staff can be
calculated using the following general formula:
ix) S = R × T × I
Where: S = Staff
R = Entering patient flow
T = Process time
I = Ratio of staff to patients
Calculating staff then becomes a matter of plugging in the appropriate R as explained in Section 1, ensuring the unit of time measure for T and R are consistent, and determining the ratio of staff to
patients for the activity.
a. Station-specific staffing
Two factors determine the optimal number of staff at a DVC station: patient flow (the average number of patients arriving at a station per unit time) and the station-specific processing time (the
time needed to process the average patient at that station). When a DVC is running at steady-state operation, staff activities and patient arrivals are balanced so that no new bottlenecks or
queues form. (Note: a system that is functioning at steady-state can have queues, but they do not get any longer during the steady-state operation.) A simple formula shows how these 2 factors
determine the optimal number of staff for each station under steady-state operation:
x) S[Station] = R[Station] × T[Station] × I[Station]
Where S[Station] = Staff at station
R[Station] = Patient flow arriving at station (patients per minute)
T[Station] = Processing time for station
I[Station] = Staff-to-Patient ratio at station (e.g., I=1 if one staff member is required for the entire duration of processing of each patient)
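Equation x) as a sketch; the ceiling (whole staff members) is our addition, and the flow and process-time values are illustrative assumptions:

```python
import math

def station_staff(r_station, t_station, staff_ratio=1.0):
    """Equation x): staff needed at one station, rounded up to whole people."""
    return math.ceil(r_station * t_station * staff_ratio)

# Illustrative values: 3.2 patients/min arriving, 2-minute process, 1 staff per patient.
print(station_staff(3.2, 2.0))  # 7
```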
b. Total DVC staffing
The total number of staff needed to run a DVC is the sum of the number of staff needed at each station:
xi) S[DVC] = Σ S[Station].
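Equation xi) is then a simple sum over stations. The flows, times, and ratios below are illustrative assumptions, not guide values:

```python
# Each tuple: (patients/min arriving, process minutes, staff-to-patient ratio).
stations = [
    (3.2, 2.0, 1.0),
    (1.6, 5.0, 1.0),
    (3.2, 0.5, 1.0),
]
# Equation xi): total DVC staff is the sum of station staff.
total_staff = sum(r * t * i for r, t, i in stations)
print(round(total_staff, 1))  # 16.0 (round each station up to whole people in practice)
```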
B. Definitions of DVC Efficiency
Two measures reflect the efficiency of DVC design: bottlenecks and staff utilization. If more patients arrive than can be processed by DVC staff, a bottleneck will occur at one or more of the
stations inside the DVC. A bottleneck at a single station can decrease efficiency of the entire DVC by reducing processing rates at other stations in 1 of 2 ways: long lines at one station may
interfere with operations at other stations (e.g., by blocking access), and staff may be shifted to the affected station, thereby compromising efficiency of other areas. To solve bottlenecks, DVC
managers may need to increase the total number of DVC staff or decrease processing times (e.g., by shortening forms or protocols).
If the DVC plan overestimates either the need for staff at a given station or the need for entire DVCs to achieve community-wide prophylaxis, waste in the form of staff underutilization or excess
"down-time" will occur. As noted, in a large-scale mass prophylaxis operation staff will be one of the resources in shortest supply. In that case, inefficient use of staff at one station or DVC can
be expected to decrease the efficiency of some other aspect of the prophylaxis campaign. In plain language, if staff at a DVC or station find themselves idle during a large-scale event, the DVC plan
that assigned them to that station needs reevaluation.
Current as of August 2004
AHRQ Publication No. 04-0044
The information on this page is archived and provided for reference purposes only.
Patent US4512195 - Ultrasonic living body tissue characterization method
This invention relates to a method of measuring various characteristic values such as the β of a medium; the medium being, for example, living body tissue. In performing the measurement, an
attenuation constant which is proportional to frequency (the proportional constant being β) and a reflection coefficient which is a function of frequency and determined by various living body tissue
characteristic values are normally used. More specifically, the invention relates to a method of obtaining various characteristic values of the medium by regressing measured power spectrum values to a theoretical or experimental equation.
It is experimentally known that the attenuation constant in transmission of an ultrasonic wave is proportional to frequency f, and that the proportionality constant β indicates a tissue characteristic. It is
also experimentally known that a reflection coefficient is proportional to the nth power of the frequency of the ultrasonic waves, and that the exponent n indicates another tissue characteristic.
The inventors of this invention, Dr. Ueda et al., theoretically indicate that the reflection coefficient is expressed in the form of
b f^4 e^(−(df)^2)
Here, b and d have values depending on the tissue.
When the reflection coefficient is expressed as a function of frequency f, as explained above, there is no generally established method of obtaining a living body tissue characteristic from the profile of the power spectrum. Dr. Miwa et al., inventors of this invention, have presented a patent application disclosing a method for obtaining values of n and β via an energy ratio method when the reflection coefficient is expressed as a function of the nth power of frequency f (U.S. Ser. No. 372,547, now U.S. Pat. No. 4,452,082; U.S. Ser. No. 269,861, now U.S. Pat. No. 4,414,850).
This method is effective, but because it relies on at least three narrow-band energy values, an error may be caused by local unevenness of the ultrasonic spectrum, known as scalloping.
Accordingly, this method has the following disadvantages: calculations are necessary for different sets of three frequencies in the effective frequency band, and a statistical averaging process must be performed on the sets of obtained values for n and β. Thus, many calculations are required.
It is an object of this invention, when deriving a living body tissue transfer function and a tissue characteristic from the received signal of reflected ultrasonic waves, to provide:
1. A method of looking at a shape of a spectrum to avoid the effect of power discontinuities at the interface of tissue regions;
2. A method which assures easy regression from measured tissue transfer function to the tissue model function obtained from theory and experiments, the tissue model function being expressed as a
product of an exponential function and a non-exponential function; and
3. A method where the parameters involved in the function are determined by such regression and living body tissue characteristic values are obtained.
In this invention, the frequency response spectrum of a living body tissue transfer function is normalized; the important factor of the spectrum being its shape, not its absolute value. Moreover, when a function derived for a tissue model from theory or experiment is expressed as a product of an exponential function and a non-exponential function, parameters relating to the living body tissue characteristic can be obtained by regressing the measured function, obtained by first dividing the measured tissue transfer function by the non-exponential function and then applying a logarithmic operation.
FIG. 1 is a schematic cross-sectional view of a living body tissue structure along the Z axis;
FIG. 2 is a block diagram of an embodiment of this invention;
FIG. 3 illustrates waveforms at respective points of FIG. 2; and
FIGS. 4a-c illustrate the relation of frequency characteristics.
In FIG. 1 an ultrasonic wave pulse (center frequency fo, bandwidth Ω) is transmitted from a transducer 1, through the surface of a body and into a deep region of a living body along a measuring line
(i.e., in the direction Z). This pulse travels within the living body at the sound velocity C, and any reflected waves travel in the reverse direction at the sound velocity C. These reflected waves
are then received by the transducer 1.
FIG. 1 schematically represents living body tissue. In FIG. 1 it is desired to measure the characteristics of the tissue at the depth z. The living body comprises i different kinds of tissue regions from the surface to the depth z. The sound velocity along z is almost constant.
A sound pulse transmitted from the surface of living body 2, namely from the position of z=0, is attenuated as it travels deeper. For each region i, the attenuation constant is αi. The attenuation
constant is proportional to a frequency f in each region i. If a proportionality constant is chosen to be βi,
Where ai is a constant and βi is a parameter indicating a characteristic of the tissue at region i and is called an attenuation slope.
Associated with each region is an acoustic impedance due to macro tissue nature, and a power reflection coefficient r(f) due to random micro tissue structure, which is a function of frequency.
At the interface of each region, there is a large change in the acoustic impedance. Additionally, the surface of each region is often specular. Consequently, when the pulse passes from the region i-1 to the region i, a small amount of transmission loss occurs. The transmission factor, called the transmissivity, is denoted by τi.
In the same way, the reflected waves from the depth z pass from the region i to region i-1 back towards the transducer 1. In this case the transmissivity is denoted as τ'i. The τi and τ'i are not
considered to vary with frequency.
The transducer 1 begins to receive, shortly after transmitting an ultrasonic wave pulse, a continuous series of waves reflected from every region within the body. Since the reflected wave
corresponding to the depth z is received at the time t = 2z/C, the tissue characteristic at the depth z can be obtained by analyzing the reflected waveform during a certain narrow time window around this time.
A power spectrum Er(f) of the received signal reflected from the depth z can be easily obtained using a known frequency analyzer such as an FFT (Fast Fourier Transform) analyzer. This power spectrum Er(f) is
given as the square of the product of a transfer function of the measuring system consisting of the frequency characteristic of transducer, the frequency characteristic of the beam convergence at the
depth z, the transfer function of electronic circuit etc. and the transfer function of the living body tissue.
Here, a standard reflector is placed at the depth equivalent to z within non-attenuative homogeneous medium such as water and a power spectrum Eo(f) of the received wave is obtained. Here, Eo(f) is
considered to be the square of the transfer function of the measuring system. When Er(f) is divided by Eo(f), the normalized power spectrum results, and a square of the intrinsic transfer function of
a living body tissue can be obtained. An actually measured transfer function of a living body tissue is indicated as R(f).
R(f) = Er(f)/Eo(f) (1)
On the other hand, from the above explained tissue model, the measured transfer function R(f) must be expressed by the following equation. ##EQU1## r (f): power reflection coefficient k: a constant
not depending on f
li: path length within the region i
The symbols mean accumulation, namely ##EQU2##
Explained hereunder is the method of obtaining a tissue characteristic by comparing R(f) of equation (1) and R(f) of equation (2).
A transducer is only capable of obtaining a portion of R(f), within the effective bandwidth of the transducer. Therefore, the function R(f) covering a sufficiently wide frequency range must be obtained by utilizing a plurality of transducers of different frequencies and then combining these spectra.
Because tissues actually comprise a collection of cells or muscle fibers, when actually measuring the power spectra, the measurements show local unevenness. This is because of the interference of reflected waves from closely but randomly located scatterers, such as the cells or fibers within the tissue, or tissue walls. The unevenness is called spectrum scalloping, and results in a large error in the measurement of the spectrum shape. In order to prevent such error, it is preferable to measure the power spectra of regions adjacent to the measuring area (for example, the areas in front of, behind, to the right, left, above and below the desired region) and/or to repeat the measurements several times. This spatial and/or temporal statistical averaging of the spectra can remove the scalloping.
Equation (2) can be normalized as indicated below with the R(fo) at a particular frequency fo (for example, a frequency fm which gives the maximum value of equation (1), etc.) in equation (2). ##EQU3##
In equation (3), the factors k, τi, and τ'i are eliminated. Elimination of the unknown factors τi and τ'i is a very important merit of this invention.
Since both fo and R(fo) can actually be measured, the measured power spectrum P(f) can be normalized by R(fo) as indicated below.
P(f) = R(f)/R(fo) (4)
Therefore, the parameters involved in Q(f) can be determined by regressing P(f) to Q(f) without reference to τi and τ'i.
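Equations (1) and (4) are simple element-wise operations on sampled spectra; a minimal sketch (Python for illustration, with toy spectrum values that are our assumptions):

```python
def tissue_transfer(Er, Eo):
    """Equation (1): measured tissue transfer function R(f) = Er(f)/Eo(f), sample by sample."""
    return [er / eo for er, eo in zip(Er, Eo)]

def normalize(R, fo_index):
    """Equation (4): P(f) = R(f)/R(fo) for a chosen reference sample fo."""
    return [r / R[fo_index] for r in R]

# Toy spectra: the measuring-system response Eo divides out,
# and normalizing makes P equal 1.0 at the reference frequency.
R = tissue_transfer([2.0, 4.0, 3.0], [1.0, 1.0, 2.0])
P = normalize(R, fo_index=1)
print(P)  # [0.5, 1.0, 0.375]
```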
The power reflection function r(f) can be either an experimentally or a theoretically derived equation. For example, the following experimental equation can be used.
r(f) = a·f^n (5)
Where a is a constant, and n is a constant depending on tissue. Or for example, r(f) can be expressed by the following theoretical equation by Ueda et al., ##EQU4## where,
b' is a function of frequency, the transducer size and its radius of curvature;
b is a constant depending on the macro nature of a heterogeneous tissue which is the same as the average microminiature structure of the tissue being measured;
σ[z] is a self-correlational distance factor, measured in the direction that the ultrasonic pulse traverses. Because each cell or fiber has a different size, this factor is equivalent to the mean self-correlational length measured in the direction that the ultrasonic pulse traverses; and
c is the velocity of sound.
Equations (5) and (6) appear significantly different, but these equations provide almost identical results in a certain practically used frequency range, such as the conventional range of 1–7 MHz.
When equation (5) or (6) is substituted into equation (3), Q[5] and Q[6] are respectively obtained as follows. ##EQU5##
As seen in equations (7) and (8), Q is a product of an exponential function B(f) and a non-exponential function A(f). When both equations are divided by the respective non-exponential functions, the result can be expressed by the following logarithmic expressions. ##EQU6##
In equations (9) and (10), the actually measured P is substituted in place of Q[5] and Q[6], and the left sides are plotted as functions of frequency. Such plots are regressed to the right-side functions of frequency by means such as the least-squared-error method. Thereby, the parameters n, Σβili, and σ[z] can be obtained.
Equation (9) contains the unknown value n on the left side, but the value of n that most closely makes the right side a linear function of f can be determined numerically by assuming a plurality of values of n.
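That numerical search can be sketched as follows. This is our illustration of the idea in equation (9), not the patent's implementation: for each candidate n we remove the n·ln f term from the log-spectrum and keep the n whose residual from a straight-line fit is smallest (the synthetic spectrum below is an assumed test signal):

```python
import math

def linearity_residual(xs, ys):
    """Sum of squared residuals of the least-squares line through (xs, ys)."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def estimate_n(freqs, p_vals, candidates):
    """Pick the n for which ln P(f) - n ln f is most nearly linear in f."""
    return min(candidates,
               key=lambda n: linearity_residual(
                   freqs,
                   [math.log(p) - n * math.log(f) for f, p in zip(freqs, p_vals)]))

# Synthetic spectrum with true n = 4 and a linear-in-f attenuation term.
freqs = [1.0 + 0.25 * k for k in range(20)]
p_vals = [f ** 4 * math.exp(-0.6 * f) for f in freqs]
print(estimate_n(freqs, p_vals, [2, 3, 4, 5, 6]))  # 4
```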
A value of βi for each i can be obtained from a difference of ##EQU7## by obtaining a value of ##EQU8## at the depth z. When β is expressed as a continuous function of z, its line integration is
expressed as ##EQU9## Therefore, a value of β(z) can be obtained by differentiation of the line integral, or from the line integrations in all directions by the algorithm used in X-ray CT (Computer Tomography).
Distributions of values for n, β, and σ[z] can be obtained along each measuring line, and a two-dimensional distribution of n, β, and σ[z] can be obtained by scanning the measuring lines in a plane.
Explained above is the principle of this invention. In actual implementation of this invention, time gating of echo signal corresponding to the sampling points at a certain depth z in a body is a
well-known technology in Doppler measurement. The gated waveforms are Fourier-analyzed by, for example, the DFT (Discrete Fourier Transform), which is an easy means for obtaining the power spectrum of
the gated waveforms. The processing of the spectrum to obtain the tissue characteristics can be executed by a computer or by specially constructed computing hardware. An outline of an illustrative
system will be explained with reference to FIG. 2.
An ultrasonic transmitter synchronizing signal Pd, shown in FIG. 3 (A) is transmitted to a driving circuit 2, from a timing control circuit 1. A transducer 3, is driven by a pulse having a sufficient
power necessary for the transducer to transmit an ultrasonic wave. Thereby an ultrasonic wave is transmitted into living body tissues (or into water containing a standard reflector 4).
A reflected wave from the living body tissue (or from the standard reflector 4) is received by the transducer 3. This reflected wave is then amplified to an adequate level by a receiving circuit 5.
The amplified signal is then sent to a data collecting circuit 6 as the received signal Vr shown in FIG. 3 (B).
The timing control circuit 1 sends a gate pulse Pg, shown in FIG. 3(C), to the data collecting circuit 6, delayed by the time T1 from Pd. The delay corresponds to the distance from the surface of the transducer to the reflecting area to be measured; thus, the reflected signal from the desired measuring region is collected as the data.
The width τ of Pg is determined corresponding to the range of depths to be measured. The data collected corresponds, for example, to the waveform shown in FIG. 3(D).
The collected data is sent to a frequency analyzer 7 and a result of the frequency analysis is sent to a data memory 8. The results of frequency analysis are the reflected wave spectrum from a living
body tissue as shown in, for example, FIG. 4(A), and the reflected wave spectrum from the standard reflector as shown in FIG. 4(B).
An arithmetic operation circuit 9 performs various calculations as described above using the results of the frequency analysis stored in the data memory 8. More specifically, a spectrum such as shown
in FIG. 4(A) is divided by a spectrum such as shown in FIG. 4(B). Thereby a tissue transfer function R(f) (corresponding to equation (1)) shown in FIG. 4(C) is obtained. Thus, the frequency f[m] at
which equation (3) is a maximum can be obtained, and moreover the left sides of the equations (9) and (10) can be calculated. Thereafter, regression calculations are carried out and values of n,
Σβili and σ[z] are determined. Values of βi can also be obtained from the calculated value of Σβili at each depth z.
The arithmetic operation circuit 9 may take any kind of structure so long as it can realize the above calculations. For example, a micro-computer consisting of a microprocessor, RAM, ROM, I/O port
etc. may be used.
The present invention achieves the following results:
1. The effect of the measuring system can be eliminated by extracting a living body tissue transfer function, normalizing the spectrum with that of a standard reflector; and the effect of discontinuous transmission can be eliminated by normalizing the tissue transfer function again with its value at a certain frequency;
2. Regression calculations can be realized easily by dividing the measured tissue transfer function by a non-exponential portion of the tissue model function and then rearranging the expression
using logarithmic operators; and
3. The tissue characteristic can be obtained from the parameters thus obtained.
East Orange Geometry Tutor
Find an East Orange Geometry Tutor
...The student will only reflect the enthusiasm and seriousness that the tutor radiates. I like to get my students to gain a vested interest in their education and to take ownership of it. I have
taught in very challenging environments and have been able to get results.
13 Subjects: including geometry, reading, English, algebra 1
I am a highly motivated, passionate math teacher who has taught in high performing schools in four states and two countries. I have previously taught all grades from 5th to 10th and am extremely
comfortable teaching all types of math to all level learners. I am a results driven educator who motivates and educates in a fun, focused atmosphere.
7 Subjects: including geometry, accounting, algebra 1, algebra 2
...Having spent the last five years in the entertainment industry (my sister is an actress), I am very familiar with filmmaking, the production process and show business in general. I am an
award-winning filmmaker; I have won second prize in the nation (twice) in C-SPAN's StudentCam Video Documenta...
43 Subjects: including geometry, English, reading, algebra 1
...A strong vocabulary is important to do well in any subject. I have taken a full year of Rhetoric in college to be the best writer possible. These skills I learned in school and in the
professional world about good persuasive writing and grammar I teach to children and adults.
39 Subjects: including geometry, English, reading, GRE
...I went to elementary and middle school there. I learned a lot of the basics that a second language learner would need to learn after I started living in the U.S. I encountered non-Turkish
speakers and was requested to teach them the language, so I taught them grammar and daily words.
34 Subjects: including geometry, reading, English, algebra 1
Nearby Cities With geometry Tutor
Ampere, NJ geometry Tutors
Belleville, NJ geometry Tutors
Bloomfield, NJ geometry Tutors
Doddtown, NJ geometry Tutors
Harrison, NJ geometry Tutors
Irvington, NJ geometry Tutors
Kearny, NJ geometry Tutors
Montclair, NJ geometry Tutors
Newark, NJ geometry Tutors
Orange, NJ geometry Tutors
South Kearny, NJ geometry Tutors
South Orange geometry Tutors
Union Center, NJ geometry Tutors
Union, NJ geometry Tutors
West Orange geometry Tutors
Pneumatic Resistive Tube
Pneumatic pipe accounting for pressure loss and added heat due to flow resistance
The Pneumatic Resistive Tube block models the loss in pressure and heating due to viscous friction along a short stretch of pipe with circular cross section. Use this block with the Constant Volume
Pneumatic Chamber block to build a model of a pneumatic transmission line.
The tube is simulated according to the following equations:
p[i], p[o] Absolute pressures at the tube inlet and outlet, respectively. The inlet and outlet change depending on flow direction. For positive flow (G > 0), p[i] = p[A], otherwise p[i] = p[B].
T[i], T[o] Absolute gas temperatures at the tube inlet and outlet, respectively
G Mass flow rate
μ Gas viscosity
f Friction factor for turbulent flow
D Tube internal diameter
A Tube cross-sectional area
L Tube length
Re Reynolds number
The friction factor for turbulent flow is approximated by the Haaland function:
1/√f = −1.8 × log10[ 6.9/Re + (e/(3.7·D))^1.11 ]
where e is the surface roughness for the pipe material.
The Reynolds number is defined as:
Re = (ρ · v · D) ÷ μ
where ρ is the gas density and v is the gas velocity. Gas velocity is related to mass flow rate by
v = G ÷ (ρ · A)
For flows between Re[lam] and Re[turb], a linear blend is implemented between the flow predicted by the two equations.
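These relations can be checked numerically. The sketch below (Python, our illustration) combines the standard Reynolds-number and Haaland friction-factor formulas with the block's default diameter and roughness; the gas viscosity and mass flow rate are assumed operating values, not block parameters:

```python
import math

def reynolds(G, D, A, mu):
    """Re = rho*v*D/mu with v = G/(rho*A); density cancels, giving Re = G*D/(A*mu)."""
    return G * D / (A * mu)

def haaland_friction(Re, e, D):
    """Darcy friction factor from the Haaland correlation for turbulent flow."""
    return (-1.8 * math.log10(6.9 / Re + (e / (3.7 * D)) ** 1.11)) ** -2

D = 0.01                     # m, internal diameter (block default)
A = math.pi * D ** 2 / 4     # m^2, cross-sectional area
e = 1.5e-5                   # m, roughness for drawn tubing (block default)
mu = 1.8e-5                  # Pa*s, assumed air viscosity
G = 0.005                    # kg/s, assumed mass flow rate

Re = reynolds(G, D, A, mu)
print(round(Re))             # ~35000: well past Re_turb = 4000, so turbulent
print(round(haaland_friction(Re, e, D), 3))  # ~0.026
```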
In a real pipe, loss in kinetic energy due to friction is turned into added heat energy. However, the amount of heat is very small, and is neglected in the Pneumatic Resistive Tube block. Therefore,
q[i] = q[o], where q[i] and q[o] are the input and output heat flows, respectively.
Basic Assumptions and Limitations
● The gas is ideal.
● The pipe has a circular cross section.
● The process is adiabatic, that is, there is no heat transfer with the environment.
● Gravitational effects can be neglected.
● The flow resistance adds no net heat to the flow.
Dialog Box and Parameters
Parameters Tab
Internal diameter of the tube. The default value is 0.01 m.
Tube geometrical length. The default value is 10 m.
This parameter represents total equivalent length of all local resistances associated with the tube. You can account for the pressure loss caused by local resistances, such as bends, fittings,
armature, inlet/outlet losses, and so on, by adding to the pipe geometrical length an aggregate equivalent length of all the local resistances. The default value is 0.
Roughness height on the tube internal surface. The parameter is typically provided in data sheets or manufacturer catalogs. The default value is 1.5e-5 m, which corresponds to drawn tubing.
Specifies the Reynolds number at which the laminar flow regime is assumed to start converting into turbulent flow. Mathematically, this value is the maximum Reynolds number at fully developed
laminar flow. The default value is 2000.
Specifies the Reynolds number at which the turbulent flow regime is assumed to be fully developed. Mathematically, this value is the minimum Reynolds number at turbulent flow. The default value
is 4000.
Variables Tab
Use the Variables tab to set the priority and initial target values for the block variables prior to simulation. For more information, see Set Priority and Initial Target for Block Variables.
The block has the following ports:
See Also | {"url":"http://www.mathworks.se/help/physmod/simscape/ref/pneumaticresistivetube.html?nocookie=true","timestamp":"2014-04-24T13:58:26Z","content_type":null,"content_length":"42483","record_id":"<urn:uuid:4ecb0d66-e141-4972-8779-0efefd176abe>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00082-ip-10-147-4-33.ec2.internal.warc.gz"} |
As I mentioned in an earlier post, OpenGL ES 2.x replaces the fixed function pipeline of OpenGL ES 1.x with a programmable pipeline. This means that you have to write a vertex shader program and a
fragment shader program that perform the operations that the fixed function pipeline used to do. What you gain is an unprecedented level of flexibility as you are no longer restricted to hardwired
calculations in the GPU.
The vertex shader processes polygon vertices and the fragment shader processes rasterized pixels. There are lots of tutorials and documentation on the basics of shaders, so I won't go into all the
details. Suffice it to say that the vertex shader program is run once for each vertex in the polygon and the fragment shader is run for each polygon pixel to be drawn. Instead, I will show you a
couple of examples of extremely basic vertex shaders.
Each polygon vertex has a position in 3D space. One of the primary responsibilities of the vertex shader is to project the vertex position into 2D space. By default, the OpenGL 2D coordinate system
of the iPhone looks like this:
This is an orthographic projection, which means there is no sense of perspective and the depth component of the vertex position is more or less ignored. For a (non-degenerate) polygon to be visible
in the default coordinate system, at least one of its vertices would have to have an XY position within the range shown in the image.
The following vertex shader simply copies the position attribute of the vertex to the gl_Position variable, which is the predefined vertex position that will be used by the hardware when the polygon
is rasterized and drawn.
attribute vec4 position;
void main()
{
    gl_Position = position;
}
This is probably the simplest possible vertex shader you can write without hardcoding the position. However, you rarely want to use the default coordinate system, because it has an awkward
aspect ratio and no perspective projection.
Let's say we want to create a 2D game where the coordinate system matches the resolution of the iPhone screen, which is 320x480 pixels. We wouldn't have to worry about perspective in this case, since
we're not going to do 3D, so it's ok to still have an orthographic projection. Basically, we want a coordinate system that is more common when dealing with 2D pixel graphics in general, where the
upper left corner is the origin. Here's an illustration of the coordinate system we want:
To accomplish this, we would have to create a view projection matrix that could be used to project the vertex position from our coordinate system to the default coordinate system. To perform the
projection transformation in the vertex shader, we simply multiply the vertex position with the view projection matrix:
attribute vec4 position;
uniform mat4 viewProjectionMatrix;
void main()
{
    gl_Position = viewProjectionMatrix * position;
}
As you may have noticed, the position is represented by a vector with 4 components instead of 3. This is because the vector has to be expressed in homogeneous coordinates for affine matrix
transformations to be possible. If this makes no sense to you, it is safe (most of the time) to just accept that there has to be a 1 in the fourth component of the vector.
The last remaining piece of the puzzle is to create the view projection matrix, so that it can be fed to the vertex shader and bound to the viewProjectionMatrix variable. Those of you who have a
perverted math fetish may go ahead and calculate the transformation matrix manually, but the rest of us will use a preexisting math library function instead. I highly recommend the free GLGX library
for matrix and vector operations. It was created specifically for use with OpenGL. Moreover, GLGX is heavily inspired by the DirectX utility library D3DX. For almost every function in D3DX, there is
a corresponding GLGX function.
Using GLGX, you would create the view projection matrix by using the following function call:
GLGXMatrixOrthoOffCenter2D(&viewProjectionMatrix, 0.0, 320.0, 480.0, 0.0);
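For reference, here is a plain-Python sketch of the matrix such a call presumably builds, assuming the standard OpenGL orthographic formula and a (left, right, bottom, top) argument order analogous to D3DXMatrixOrthoOffCenter; GLGX's actual conventions (including row- versus column-major storage) may differ, and the function names below are my own.

```python
def ortho_off_center_2d(left, right, bottom, top):
    """Row-major 4x4 matrix mapping [left,right] x [bottom,top] to [-1,1]^2."""
    return [
        [2.0 / (right - left), 0.0, 0.0, -(right + left) / (right - left)],
        [0.0, 2.0 / (top - bottom), 0.0, -(top + bottom) / (top - bottom)],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform(m, v):
    """Apply a row-major 4x4 matrix to a homogeneous column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# The call from the post: left = 0, right = 320, bottom = 480, top = 0,
# so the screen's upper-left corner maps to (-1, 1) and the lower-right
# corner maps to (1, -1) in the default coordinate system.
m = ortho_off_center_2d(0.0, 320.0, 480.0, 0.0)
```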
I'm too lazy to go into how to bind the C matrix to the matrix variable in the vertex shader, so I'll leave it as an exercise to the reader for now. | {"url":"http://memfrag.se/blog/tag/GLSL","timestamp":"2014-04-17T01:08:16Z","content_type":null,"content_length":"21065","record_id":"<urn:uuid:174c2fdf-ef30-41eb-8424-a8f9d85fccab>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00251-ip-10-147-4-33.ec2.internal.warc.gz"} |
a union b intersection c proof
Posted by jigeherm@nokiamail.com on
Space with Connected Intersection has Connected Union/Corollary.
Feb 25, 2013. Union and Intersections Let A= {x|x is a prime factor of 720} Let B= {x|x is a prime factor of 330} Find a) A intersect B b) A union B c) A - B i don't.
Answer to Prove the following set for A and B so it's read somet. More ?.
PROVE THAT A-(B UNION C)= (A-B) INTERSECTION (A-C). View the answer. Modern Algebra, [ Set Theory (X): Laws of Algebra of Sets: De Morgan's Laws.
definitions. Theorem 9 Given any three sets A, B and C; the following equality holds:. conjunction to prove distributivity of union over intersection. This works be.
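The scraped snippets above all concern the same few identities. A quick concrete check in Python (illustrative only; verifying finitely many examples proves nothing) at least confirms the statements being asked about:

```python
# Sample sets for a quick sanity check of the identities discussed above.
A = {1, 2, 3, 4}
B = {3, 4, 5}
C = {4, 5, 6, 7}

# Distributivity of intersection over union: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
assert A & (B | C) == (A & B) | (A & C)

# Distributivity of union over intersection: A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
assert A | (B & C) == (A | B) & (A | C)

# De Morgan-style difference law: A - (B ∪ C) = (A - B) ∩ (A - C)
assert A - (B | C) == (A - B) & (A - C)
```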
What is b intersect c on a venn diagram.
Sets, Functions, Relations - School of Computer Science and IT.
Jul 13, 2005. (defthm distributivity-of-intersection (A B C) (set A) (set B) (set C) (eq (intersect A ( union B C)) (union (intersect A B) (intersect A C)))).
Mar 2, 2012. I forgot to mention that both A and B should intersect C. ? user22705 Mar 2 '12. Proof that every open set in $\mathbb{R}^n$ is the union of ?.
forall A B:Ensemble U, Intersection U A B = Intersection U B A. Theorem Distributivity : forall A B C:Ensemble U, Intersection U A (Union U B C) = · Union U.
Oct 20, 2010. A intersection complement of B union C. 1 Answer. PROVE THAT-A intersection( B union C)=(A intersection B)UNION (A intersection C).
Space with Connected Intersection has Connected Union - ProofWiki.
Dec 8, 2012. Space with Connected Intersection has Connected Union. Suppose that, for all $B, C \in \mathcal A$, the intersection $B \cap C$ is non-empty.
Is a and b a subset of a and b? empty set. Prove if a union c equals b union c and a intersect c equals b intersect c then a equals b? is in A or x is in C. since x is.
Jul 25, 2011. I have 4 variables of integer a, b, c, d which could define 2 intervals: [min(a, b), max(a, b)] .. unions/intersections · Union and Intersection Proof.
Feb 28, 2013. Contents. 1 Theorem; 2 Proof; 3 Also see; 4 Sources. $A \cup \left({B \cup C}\ right) = \left({A \cup B}\right) \cup C$ .. Unions and Intersections.
Feb 27, 2013. I'm trying to prove that if $A \cap B = A \cap C$ then $A \cap .. Related. Intersection and union of 2 variable intervals with integers and indexes.
proof involving unions, intersections and complements.
Anyone understand union and intersection problems for discrete.
Largest family without A union B contained in C intersect D.
Simplify the expression: (B union C) intersection (B union NOT-C) intersection ( NOT-B union C) 3. The attempt at a solution. I have no clue how.
Categories: None | {"url":"http://juxashiv.webs.com/apps/blog/show/24587511-a-union-b-intersection-c-proof","timestamp":"2014-04-19T17:31:12Z","content_type":null,"content_length":"34937","record_id":"<urn:uuid:4aaf4827-befa-40fc-92f0-30e14a82e447>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00641-ip-10-147-4-33.ec2.internal.warc.gz"} |
Black Mountain Lyrics
Lyrics Depot is your source of lyrics to Black Mountain by Isobel Campbell. Please check back for more Isobel Campbell lyrics.
Artist: Isobel Campbell
Album: Ballad of the Broken Seas
I met a man whom I'll never doubt
Flew into the sun, our bodies on fire
Betrothed to a mate, pray it is not so
Invoke father time, then he let her go
Lie true lie true lie true lie true
Lie true lie true lie true lie true
Lie true lie true lie true lie true
Lie true lie true lie true lie true
I know a dog much blacker than night
Who lives in a house where nothing is right
All sorrow and loving riches at bay
All watching a few to the ???
Lie true lie true lie true lie true
Lie true lie true lie true lie true
Lie true lie true lie true lie true
Lie true lie true lie true lie true
Some dream of coins or some beauty's find
But I want the truth while I serve my time
Most complex and pain, I'd share it with you
The moon looks bright, so come say I'll do
Lie true lie true lie true lie true
Lie true lie true lie true lie true
Lie true lie true lie true lie true
Lie true lie true lie true lie true
Isobel Campbell Lyrics
Isobel Campbell Ballad of the Broken Seas Lyrics
More Isobel Campbell Music Lyrics:
Isobel Campbell - Cachel Wood Lyrics
Isobel Campbell - Honey Child What Can I Do? Lyrics
Isobel Campbell - Love For Tomorrow Lyrics
Isobel Campbell - Salvation Lyrics
Isobel Campbell - Time Is Just The Same Lyrics
Isobel Campbell - Who Built The Road Lyrics
Isobel Campbell - Why Does My Head Hurt So Lyrics
Isobel Campbell - Willow's Song Lyrics | {"url":"http://www.lyricsdepot.com/isobel-campbell/black-mountain.html","timestamp":"2014-04-21T07:35:06Z","content_type":null,"content_length":"10160","record_id":"<urn:uuid:ec760a9c-4e24-4b58-a452-dee3f02c9167>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00176-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hammond, IN Science Tutor
Find a Hammond, IN Science Tutor
...I have also tutored for the Dupage Literacy Program. I am a very patient person and am passionate about teaching. I will go the extra mile to make sure the student thoroughly understands the
23 Subjects: including physical science, genetics, algebra 1, algebra 2
...I have ten years of experience learning German. I have lived abroad in Germany and am currently obtaining my masters degree in German literature at UIC. In addition to specializing in German,
I also have a degree in psychology, specifically specializing in the human body and brain.
4 Subjects: including physiology, anatomy, psychology, German
...I also have a knack for picking up foreign languages fairly quickly and I can read and write in Spanish and I have some experience with French, Arabic, and Japanese as well. From January 2009
until the present I have been working as a technician in the Media Services/Production department at Sai...
12 Subjects: including anthropology, reading, Spanish, English
...My records are on file for you to check. I also went to Loyola University of Chicago under the Pre-Med Program. I graduated with a Biology/Chemistry Degree with almost a full Math Degree.
27 Subjects: including microbiology, chemistry, elementary math, precalculus
...He went from struggling to earn passing marks in mathematics the previous year to consistently receiving A/B marks on progress tests required by his former school district. I've also tutored
in ACT/SAT prep, pre-calc/calc, biology, physics, chemistry, and English.As a first-generation college ap...
45 Subjects: including ACT Science, biology, nutrition, microbiology
Related Hammond, IN Tutors
Hammond, IN Accounting Tutors
Hammond, IN ACT Tutors
Hammond, IN Algebra Tutors
Hammond, IN Algebra 2 Tutors
Hammond, IN Calculus Tutors
Hammond, IN Geometry Tutors
Hammond, IN Math Tutors
Hammond, IN Prealgebra Tutors
Hammond, IN Precalculus Tutors
Hammond, IN SAT Tutors
Hammond, IN SAT Math Tutors
Hammond, IN Science Tutors
Hammond, IN Statistics Tutors
Hammond, IN Trigonometry Tutors | {"url":"http://www.purplemath.com/Hammond_IN_Science_tutors.php","timestamp":"2014-04-19T07:20:10Z","content_type":null,"content_length":"23694","record_id":"<urn:uuid:c01b4f42-9318-40aa-b135-cd529748d967>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00257-ip-10-147-4-33.ec2.internal.warc.gz"} |
$\psi$ class in $\overline{M}_{0,n}$
Basic question, but I found no reference.
Is the $\psi$ class the only one which is not a boundary class in the Picard group of the Deligne-Mumford compactification of $\mathcal{M}_{0,n}$? Or can it be expressed in terms of boundary
divisors? If yes what is its expression?
ag.algebraic-geometry picard-group moduli-spaces algebraic-curves
Unless you mean something else when you write $\psi$ class, it is expressible in terms of boundary divisors.
That is, if $\psi_i$ is the $i$-th cotangent bundle, then you can write it in terms of boundary divisors. One reference for this is the tome "Mirror Symmetry" by Hori, Katz, Klemm,
et al. on p. 513, the comparison lemma.
The idea is that you can consider the forgetful maps $\pi$ from $M_{0,n}$ to $M_{0,n-1}$ and look at the divisor $$ \psi_i - \pi^*\psi_i $$ (where the $\psi$ classes are, by abuse of notation, living on different spaces). This is expressible in terms of boundary divisors, and so we can inductively write out the $\psi$ classes on any $M_{0,n}$.
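To make the inductive step explicit: for the map $\pi$ forgetting the $n$-th marked point, one common form of the comparison (conventions for labelling boundary divisors vary between references) is
$$ \psi_i - \pi^*\psi_i = \delta_{0,\{i,n\}}, $$
where $\delta_{0,\{i,n\}}$ is the boundary divisor whose generic point is a nodal curve with a component carrying exactly the marked points $i$ and $n$. Since $\overline{M}_{0,3}$ is a point and every $\psi$ class on it vanishes, iterating the comparison expresses $\psi_i$ on $\overline{M}_{0,n}$ as a sum of boundary divisors.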
Thank you Simon, excellent and precise answer. – IMeasy Feb 7 '12 at 22:01
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry picard-group moduli-spaces algebraic-curves or ask your own question. | {"url":"http://mathoverflow.net/questions/87669/psi-class-in-overlinem-0-n?sort=oldest","timestamp":"2014-04-18T13:55:02Z","content_type":null,"content_length":"51203","record_id":"<urn:uuid:09b81ba1-9bab-4448-a506-1ee71362981e>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
Draw the triangle, and then draw it again rotated so that B is in the same corner that C was. Now since ICA < IBA and ICB < IBC, ICA + ICB < IBA + IBC. Since this is true, the angle at C is less than
the angle at B. I forget the name of the theorem, but if you have such a triangle, the side with a greater angle has a smaller diagonal (BB' and CC'). So since angle C < B, CC' > BB'.
I don't quite understand the wording in 2. | {"url":"http://www.mathisfunforum.com/post.php?tid=1753&qid=16265","timestamp":"2014-04-19T05:03:03Z","content_type":null,"content_length":"21439","record_id":"<urn:uuid:fb330cc8-a49e-4a4a-aea5-07628610766b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
Diffusion Labs is an enterprise whose mission is to research, analyze, and abstract complex systems; advance knowledge and understanding of systems theory; and develop fuzzy expert systems for a
range of social and industrial applications.
The vision of this enterprise is to promote soft computing solutions which increase the efficiency, knowledgeability, and functionality of intelligent agents, equipping them with a means to improve
the quality of life for whom those agents serve.
The core disciplines exercised by this enterprise include thermodynamics, electrodynamics, quantum mechanics, computational intelligence, systems theory, information theory, and systems biology.
Computational Intelligence
Diffusion Labs is a member of the IEEE, and as such encourages professionals to learn more about computational intelligence by subscribing to the IEEE Computational Intelligence Magazine.
Artificial Intelligence 2.0
Computational intelligence (CI) can be thought of as sub-symbolic artificial intelligence (AI). While AI focuses on the high level symbolism of information, CI focuses on how units of information are
related to one another, in addition to the collective value they represent. This is a useful departure from classical AI, as it provides a more scientific framework for investigation.
Neural Networks
This view of intelligence is derived primarily from neural networks where a cognitive state is represented as a multidimensional vector of neural unit activation (probability potential) values, and
knowledge is represented as a matrix of neural unit connection strength values. The concept of the neural network originated as an effort to explain biological cognition, and has been successful in
modeling many natural connectionist functions. Neural networks, in contrast to the circuits of most electronic hardware, exhibit what is referred to as neuroplasticity: an ability to undergo
autonomous adaptation and modification, or more simply, learn.
Hebbian Learning
One popular mechanistic convention by which neural networks learn is known as Hebbian-style learning. Hebb's rule, introduced by Donald Hebb in 1949, attempts to describe how the strength of natural and artificial neural unit connections increases in response to simultaneous activation or potentiation. It forms the basis for several more modern theories of associative learning and neuroplasticity
models such as Bienenstock-Cooper-Munro (BCM) theory and the Generalized Hebbian Algorithm (GHA).
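The basic update, delta_w = eta * x * y, can be sketched in a few lines of Python; the names and the learning-rate value below are illustrative, not taken from any particular model named above.

```python
def hebbian_update(w, x, y, eta=0.1):
    """One step of plain Hebb's rule: a connection is strengthened in
    proportion to the simultaneous activity of its two endpoints
    (delta_w = eta * x * y)."""
    return [wi + eta * xi * y for wi, xi in zip(w, x)]

# Two presynaptic units feeding one postsynaptic unit with activation y.
w = [0.0, 0.0]
x = [1.0, 0.0]   # only the first input unit fires
y = 1.0          # the output unit fires at the same time
w = hebbian_update(w, x, y)   # only the co-active connection grows
```

Plain Hebbian growth is unbounded; stabilized variants (for example Oja's rule, or the BCM theory mentioned above) add normalization or a sliding threshold.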
Fuzzy Systems
These mechanisms for learning are prominent in fuzzy systems, where continuous truth values are defined by input variables mapped to fuzzy set membership functions. Traditional control systems are
generally rigid and myopic, requiring a rigorous framework to map their input variables to their output. When a rule base is used in combination with fuzzy sets, fuzzy systems can make highly
effective decisions given incomplete or delayed input values through the use of inference.
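As a sketch of how an input variable maps to a continuous truth value, here is a generic triangular membership function (a textbook construction; the "warm" set and its breakpoints are made-up examples, not the implementation described above):

```python
def triangular(x, a, b, c):
    """Degree of membership in a fuzzy set shaped as a triangle that
    rises from zero at a to a peak of 1 at b and falls to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# "Warm" temperatures: fully true at 22 degrees, fading out by 15 and 30.
def warm(t):
    return triangular(t, 15.0, 22.0, 30.0)
```

A rule base then combines such degrees of membership (for example with min/max operators) to infer an output even when some inputs are imprecise.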
Bayesian Inference
The inductive logic of choice for many has its origins in Bayes' theorem, which relates the conditional and marginal probabilities of two random events. Bayesian statistics is sometimes used in machine learning, where a hypothesis is assigned a posterior probability proportional to the product of its prior probability and the likelihood of the observed evidence. As a result, Bayesian networks are often used to represent a belief system that can be queried for knowledge.
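For a binary hypothesis, this posterior update is a one-liner. The numbers below are hypothetical (a 1% base rate tested with 90% sensitivity and a 5% false-positive rate), chosen only to illustrate how a strong likelihood is tempered by a weak prior:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem for a binary hypothesis H and evidence E:
    P(H|E) = P(E|H) P(H) / (P(E|H) P(H) + P(E|~H) P(~H))."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / evidence

# Hypothetical: 1% base rate, 90% sensitivity, 5% false-positive rate.
p = posterior(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.05)
# Despite the positive evidence, the posterior is only about 15%.
```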
Evolutionary Computation
Perhaps the most promising strategy for solving problems through system optimization involves evolutionary techniques. These techniques are typically metaheuristic and heavily iterative. Some
techniques such as evolutionary algorithms deal primarily with highly interconnected domains, while others like swarm intelligence, self-organization, and cultural algorithms deal with decentralized
or fragmented domains. Evolutionary techniques have already proven very effective, and sometimes have even outperformed traditional expert systems by large margins.
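As a minimal concrete instance of such a technique, here is a toy (1+1) evolution strategy in Python, a generic sketch rather than any specific algorithm referenced above:

```python
import random

def one_plus_one_es(fitness, x0, sigma=0.5, steps=200, seed=0):
    """Minimal (1+1) evolution strategy: repeatedly mutate the current
    solution with Gaussian noise and keep the child only if it is at
    least as fit. A toy sketch, not a production optimizer."""
    rng = random.Random(seed)
    x, fx = x0, fitness(x0)
    for _ in range(steps):
        child = x + rng.gauss(0.0, sigma)
        fc = fitness(child)
        if fc >= fx:          # selection: keep the fitter candidate
            x, fx = child, fc
    return x, fx

# Maximize a simple unimodal function whose optimum is at x = 3.
best_x, best_f = one_plus_one_es(lambda x: -(x - 3.0) ** 2, x0=0.0)
```

Real evolutionary algorithms add populations, crossover, and adaptive mutation rates, but the mutate-evaluate-select loop above is the core of the whole family.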
IEEE Computational Intelligence Society
Artificial life: organization, adaptation and complexity from the bottom up
Expert Systems
Contact Us
Postal Mail
Diffusion Labs
PO Box 15177
Panama City, FL 32406-5177
(850) 913-7084
e.g. jacob@gmail.com | {"url":"http://diffusionlabs.com/","timestamp":"2014-04-20T08:15:15Z","content_type":null,"content_length":"9068","record_id":"<urn:uuid:02137f7e-97a0-4625-98e6-7b124799e9f1>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00618-ip-10-147-4-33.ec2.internal.warc.gz"} |
Enlightenment Science and Its Fall
Spring 2006 Vol. 1, No. 1
This article is from TOS Vol. 1, No. 1. The full contents of the issue are listed here.
Like the lives of individuals, history has its key moments. In Ayn Rand's novel The Fountainhead, the hero, Howard Roark, is asked: “When you look back, does it seem to you that all your days rolled
forward evenly, like a sort of typing exercise, all alike? Or were there stops—points reached—and then the typing rolled on again?” Roark answers: “There were stops.”1 Even on the grand scale of
history we can see such stops, that is, points at which one period ends, the direction changes, and another period begins.
One historian, Alan Cromer, refers to the extraordinary transition that took place at the end of the 18th century and writes: “In many ways, the decade of the 1780s divides the past from the
present.”2 That is a profound truth, in a much deeper sense than Cromer explains. I will argue that this was the point at which the Enlightenment reached its zenith—and at which it was brought down
by its one failure.
The Enlightenment is the century between two major figures: Isaac Newton and Immanuel Kant. In The Ominous Parallels, Leonard Peikoff wrote that this period is the only time in modern history that
“an authentic respect for reason became the hallmark of an entire culture.”3 Man's nature, most thinkers of the period agreed, was clear from his accomplishments, particularly those in science. He is
not the helpless animal described by the skeptics or the depraved animal described by the mystics; he is, as Aristotle said long ago, the rational animal. As such, he is a being with unlimited
potential for gaining knowledge and acting to achieve his spiritual and physical well-being. In 1750, French economist and statesman Anne Robert Jacques Turgot expressed the attitude that dominated
the era: “At last all clouds are dissipated. What a glorious light is cast on all sides! What a crowd of great men on all paths of knowledge! What perfection of human reason!”4
In physical science, the crowd of great men knew who had cleared their path. “The physics of the eighteenth century,” writes one historian, “provides an example of the profound influence exerted by
the work of a single man, Isaac Newton, to a degree that is unique in the development of modern science.”5 Newton had blazed the trail with two works of genius. In the Principia, he had presented the
universal laws of motion and gravitation and thus opened men's eyes to the extraordinary power of mathematics. In his Optics, he had provided a tour de force demonstration of how to ask questions of
nature and obtain the answers by systematic experimentation. The Enlightenment made the most of both lessons.
By far the greatest contribution to mathematical physics was made by Leonhard Euler. As one physicist puts it: “All branches of mathematics abound with Euler's theorems, Euler's coefficients, Euler's
methods, Euler's proofs, Euler's constant, Euler's integrals, Euler's functions, and Euler's everything else.”6 He wrote exhaustive treatises on differential and integral calculus, and he presented
the first modern treatment of methods for solving differential equations. He made fundamental contributions to several new areas of mathematics, including the theory of functions of complex
variables, the theory of special functions, and the calculus of variations. Furthermore, he invented a great deal of the notation used today in mathematics, including the ubiquitous modern symbols
for summations, finite differences, pi, the square root of minus one, the base of natural logarithms, and trigonometric functions.
Above all, Euler did for calculus what Euclid did for geometry. Neither man was the original discoverer, but each made an enormous contribution to his respective field and then presented the theory
systematically. The comparison also highlights an interesting difference between the two men. Euclid was more the theorist concerned with logical foundations, whereas Euler was more the “practical”
thinker so typical of the Enlightenment. Euler never lost sight of applications to the physical world, and such applications motivated his major innovations in mathematics.
Both logical rigor and applications are crucial. Without the first, we cannot be certain that our statements are true; without the second, it does not matter whether or not they are true. Throughout
most of history, however, mathematicians have committed the Platonic error of denigrating applications. Newton's influence caused a profound (and unfortunately temporary) change of attitude. During
the Enlightenment, mathematicians celebrated the indispensable role of their science in understanding the physical world. . . .
Acknowledgment: The author wishes to acknowledge the generous support of the Ayn Rand Institute.
1 Ayn Rand, The Fountainhead (New York: Signet, 1993), pp. 542–543.
2 Alan Cromer, Uncommon Sense: The Heretical Nature of Science (New York: Oxford University Press, 1993), p. 5.
3 Leonard Peikoff, The Ominous Parallels (New York: Stein and Day, 1982), p. 102.
4 Richard Panek, Seeing and Believing (New York: Viking Penguin, 1998), pp. 102–103.
5 I. Bernard Cohen, Franklin and Newton (Baltimore: J. H. Furst Company, 1956), quoted from Preface, vii.
6 Petr Beckmann, A History of Pi (New York: St. Martin's Press, 1971), p. 148. | {"url":"http://www.theobjectivestandard.com/issues/2006-spring/enlightenment-science.asp","timestamp":"2014-04-19T07:31:39Z","content_type":null,"content_length":"25075","record_id":"<urn:uuid:685530b4-8f22-407a-977a-9fcd0b88d336>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00148-ip-10-147-4-33.ec2.internal.warc.gz"} |
The n-Category Café
August 30, 2011
The Genius In My Basement
Posted by John Baez
Simon J. Norton was a math prodigy as a child. He went to Cambridge for grad school. Together with his advisor John Conway, he did some amazing work on group theory. In 1979, based on an idea from
John McKay, they conjectured an astounding relation between the Monster group and the modular $j$-function. Conway dubbed this “Monstrous Moonshine”. The proof turned out to involve ideas from string
theory, but the full implications are yet to be understood.
But in 1985 — when some mathematicians claim he suffered a “catastrophic intellectual collapse” — Simon took to collecting thousands of bus and train timetables. What happened to him? What is he
doing now?
Posted at 4:11 AM UTC |
Followups (14)
August 29, 2011
Hadwiger’s Theorem, Part 2
Posted by Tom Leinster
Two months ago, I told you about Hadwiger’s theorem. It’s a theorem about Euclidean space, classifying all the ways of measuring the size of convex subsets. For example, here are three ways of
measuring the size of convex subsets of the plane:
• Area. This is a $2$-dimensional measure, in the sense that if you scale up a set by a factor of $t$ then its area increases by a factor of $t^2$.
• Perimeter. This is a $1$-dimensional measure, in the same obvious sense.
• Euler characteristic. This takes value $1$ on nonempty convex sets and $0$ on the empty set. It’s a $0$-dimensional measure, since if you scale up a set by a factor of $t$ then its Euler
characteristic increases by a factor of $t^0 = 1$: it doesn’t change at all.
Hadwiger’s theorem in two dimensions says that these are essentially the only ways of measuring the size of convex subsets of the plane.
But Hadwiger’s theorem is all about measurement in Euclidean space. There are many other interesting metric spaces in the world! Today I’ll tell you about the quest to imitate Hadwiger in an
arbitrary metric space.
Posted at 4:16 AM UTC |
Followups (6)
August 26, 2011
Mixed Volume
Posted by Tom Leinster
Take $2$ convex bodies in $\mathbb{R}^2$, or $3$ convex bodies in $\mathbb{R}^3$, or, more generally, $n$ convex bodies in $\mathbb{R}^n$. Mixed volume assigns to each such family a single real number.
The mixed volume of convex bodies $A_1, \ldots, A_n$ is written as $V(A_1, \ldots, A_n) \in \mathbb{R}$, and it’s uniquely characterized by the following three properties:
1. Volume: $V(A, \ldots, A) = Vol(A)$, for any convex body $A$
2. Symmetry: $V$ is symmetric in its arguments
3. Multiadditivity: $V(A_1 + \tilde{A}_1, A_2, \ldots, A_n)$ equals $V(A_1, A_2, \ldots, A_n) + V(\tilde{A}_1, A_2, \ldots, A_n)$ for any convex bodies $A_i$ and $\tilde{A}_1$, where $+$ denotes
Minkowski sum.
This looks rather like the characterization of determinant: $det$ is unique satisfying $det(I) = 1$, antisymmetry, and multilinearity. One difference is that we have symmetry rather than
antisymmetry. But a much bigger difference is that where determinant assigns a number to $n$vectors in $\mathbb{R}^n$, mixed volume assigns a number to $n$convex bodies in $\mathbb{R}^n$.
But maybe you don’t find the unique characterization satisfying. What is mixed volume?
Posted at 12:54 AM UTC |
Followups (60)
August 23, 2011
The Set-Theoretic Multiverse
Posted by David Corfield
There’s an interesting paper out today on the ArXiv – Joel Hamkins’ The set-theoretic multiverse.
The multiverse view in set theory, introduced and argued for in this article, is the view that there are many distinct concepts of set, each instantiated in a corresponding set-theoretic
universe. The universe view, in contrast, asserts that there is an absolute background set concept, with a corresponding absolute set-theoretic universe in which every set-theoretic question has
a definite answer. The multiverse position, I argue, explains our experience with the enormous diversity of set-theoretic possibilities, a phenomenon that challenges the universe view. In
particular, I argue that the continuum hypothesis is settled on the multiverse view by our extensive knowledge about how it behaves in the multiverse, and as a result it can no longer be settled
in the manner formerly hoped for.
So set theorists’ experience of dealing with various models of set theory, and the small modifications needed to generate models either satisfying or not satisfying the continuum hypothesis, tells
them that settling its truth or falsity by devising an evident axiom from which either it or its negation follows is not a live option. Gödel’s platonism was wrong then, yet Hamkins retains a form of
The multiverse view is one of higher-order realism – Platonism about universes – and I defend it as a realist position asserting actual existence of the alternative set theoretic universes into
which our mathematical tools have allowed us to glimpse. The multiverse view, therefore, does not reduce via proof to a brand of formalism. In particular, we may prefer some of the universes in
the multiverse to others, and there is no obligation to consider them all as somehow equal.
Posted at 3:18 PM UTC |
Followups (27)
August 21, 2011
All Job Ads Should Be Like This
Posted by Tom Leinster
I wouldn’t normally post a job ad here, but if you read this one carefully, you’ll see why I made an exception. It’s from Neil Ghani at the University of Strathclyde, which is in Glasgow city centre.
6 month postdoc position
Mathematically Structured Programming Group, University of Strathclyde
We have the potential to apply for funds for a 6 month post doctoral position. The idea is that the successful candidate would spend those 6 months writing a full scale grant to fund themselves
for the next 3 years. The postdoctoral position would be within the Mathematically Structured Programming group at the University of Strathclyde whose research focusses on category theory, type
theory and functional programming. Current staff include Neil Ghani, Patricia Johann, Conor McBride, Peter Hancock, Robert Atkey and 6 PhD students. The candidate we are looking for should be
highly self motivated and appreciate that without beauty, we are lost.
Unfortunately, the deadline is extremely short and so any interested candidates should contact me immediately. I can then tell you more about what we would need to do.
Professor Neil Ghani (ng#cis.strath.ac.uk, with obvious change)
Posted at 9:14 PM UTC |
Followups (28)
August 20, 2011
Fixed Point Indices for Groupoids
Posted by Mike Shulman
The fixed point index is a rare example of a concept in homotopy theory that is much easier to motivate, as far as I can tell, when you think of $\infty$-groupoids as presented by topological spaces.
It is possible, however, to define it in purely categorical language (and quite simply, too, without referring to any complicated technology). I want to pose this as a puzzle: can you define the
fixed-point index this way (just for 1-groupoids, to make it easy) — and better, can you motivate it?
Posted at 12:58 AM UTC |
Followups (12)
August 18, 2011
Fields Institute Workshop on Category Theoretic Methods in Representation Theory
Posted by Alexander Hoffnung
We’re having a workshop this fall:
Sabin Cautis (Columbia)
James Dolan (UC Riverside)
Ben Elias (M.I.T.)
Joel Kamnitzer (Toronto)
Aaron Lauda (Southern California)
Anthony Licata (Institute for Advanced Study)
Marco Mackaay (Universidade do Algarve)
Volodymyr Mazorchuk (Uppsala)
Kevin McGerty (Imperial College London)
Raphaël Rouquier (Oxford)
Catharina Stroppel (Bonn)
Pedro Vaz (University of Zurich and IST Lisbon)
Ben Webster (Northeastern)
Geordie Williamson (Oxford)
Oded Yacobi (Toronto)
Alexander Hoffnung (Ottawa)
Alistair Savage (Ottawa)
Financial Support
There is some funding available for graduate students and postdoctoral fellows. Those interested should complete the funding application available on the workshop website (found below). The deadline
for applying for support is September 1, 2011.
Posted at 6:54 PM UTC |
Followups (2)
August 17, 2011
Klein 2-Geometry XII
Posted by John Baez
Back in May 2006, David Corfield wrote a blog entry called Klein 2-geometry, saying:
As a small experiment in collective, public thinking, I’m going to devote a post to the attempt to categorify Kleinian geometry, and update the date so it doesn’t slip off the radar of ‘Previous
His question was:
What prevents an Erlangen program for 2-groups?
The Erlangen program is, of course, Felix Klein’s plan to study highly symmetrical spaces by thinking of them as quotient spaces $G/H$ where $G$ is a group and $H$ a subgroup. If you’ve heard of
this program but never really read about it, you might like his recent review article:
• Felix Klein, A comparative review of recent researches in geometry, arXiv:0807.3161.
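Two standard examples of the $G/H$ pattern (familiar ones, not drawn from Klein's article itself):

```latex
% Spherical geometry: the 2-sphere as rotations of R^3 modulo the
% stabilizer of a chosen point (rotations about the axis through it):
S^2 \;\cong\; \mathrm{SO}(3)/\mathrm{SO}(2)
% Euclidean plane geometry: the plane as its full isometry group
% modulo the stabilizer of a point:
\mathbb{E}^2 \;\cong\; \mathrm{Isom}(\mathbb{E}^2)/\mathrm{O}(2)
```

Categorifying this would replace $G$ by a 2-group and $H$ by a sub-2-group.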
Generalizing this idea to 2-groups (or beyond) seemed like a great idea, and David’s original post helped trigger the formation of the n-Category Café. The discussion went on and on, all the way to
Klein 2-Geometry XI. However, it never developed to the height of magnificence that I’d hoped, mainly because of the lack of a clear goal.
But then this summer I went to Erlangen, and talked to Derek Wise…
Posted at 7:57 AM UTC |
Followups (42)
August 15, 2011
The Strangest Numbers in String Theory
Posted by John Baez
Here’s a really easy introduction to normed division algebras, particularly the octonions, and their role in string theory. You basically just need to have gone to high school:
Posted at 2:52 PM UTC |
Followups (12)
August 13, 2011
Geometries: Diffeomorphism Classes vs Quilts
Posted by John Baez
What follows is a guest post by Greg Weeks. If your memory extends back before the formation of this blog to the glory days of sci.physics.research, you should remember Greg.
Posted at 8:27 AM UTC |
Followups (91)
August 6, 2011
AKSZ-Models in Higher Chern-Weil Theory
Posted by Urs Schreiber
We would like to ask for comments on an early version of an article that we are writing:
Domenico Fiorenza, Chris Rogers, U.S., A higher Chern-Weil derivation of AKSZ $\sigma$-models (pdf)
but before I say what this is about (below the fold), here is some background meant to put our theorem into perspective.
In the previous entry I gave a rough indication of the original definition of the class of topological sigma-model quantum field theories called AKSZ models.
This class coincides in dimension 2 with the class of Poisson sigma-models – which in turn contains the A-model and the B-model – and in dimension 3 with the class of Courant sigma-models – which in
turn contains the class of ordinary Chern-Simons theory as the special case where the base of target space is the point.
Therefore it is clear that the AKSZ models are some noteworthy type of generalization of Chern-Simons theory. Here I want to discuss a precise sense in which this is true systematically and give an
alternative definition of the AKSZ models that identifies them as a canonical construction in abstract higher Chern-Weil theory. In fact, the claim is that the action functional that defines the AKSZ
models is precisely the value of the higher Chern-Weil homomorphism with values in "secondary characteristic classes" and applied to a binary and non-degenerate invariant polynomial on any $L_\infty$-algebroid.
This in turn shows that the class of AKSZ models itself is only a special case of something more general which exists on very general abstract grounds, and which we call infinity-Chern-Simons theory
: this is defined for every invariant polynomial on every $L_\infty$-algebroid. Aspects of this I had mentioned before: this larger class contains of course higher dimensional abelian Chern-Simons
theories (these come from the canonical invariant polynomial on line Lie n-algebras) but for instance also the class of infinity-Dijkgraaf-Witten theories with sub-classes such as ordinary
Dijkgraaf-Witten theory and the Yetter models, and also for instance higher Chern-Simons supergravity.
Therefore all these topological $\sigma$-models (and many more that haven’t been given names yet) are incarnations of one single phenomenon: the higher Chern-Weil homomorphism. This exists on
entirely abstract grounds in every cohesive ∞-topos. Therefore, in a sense, all these types of $\sigma$-models have an existence from “first principles”.
This is maybe noteworthy, since many of these topological QFTs (maybe all of them?) play a role in the description of genuine physics via the holographic principle: for instance the 2d Poisson $\sigma$-model as well as the A-model holographically encode ordinary quantum mechanics of particles (= 1-dimensional non-topological QFT), then 3-dimensional Chern-Simons theory holographically
encodes the quantum mechanics of non-topological strings and generally higher dimensional Chern-Simons theory in dimension $D = 4k+3$ (for $k \in \mathbb{N}$) holographically encodes self-dual higher
gauge theory in dimension $d = 4k+2$ (at least in the abelian case), such as the effective type II-superstring QFT in $d = 10$ – which in turn is famously thought to have vacua that look like the
standard model of observed particle physics.
Due to all these relations it should be interesting to see that and how AKSZ $\sigma$-models are a special class of $\infty$-Chern-Simons theories, too. This I have tried to work out with Domenico
Fiorenza and Chris Rogers. We now have an early writeup and would be glad to hear whatever comments you might have:
A higher Chern-Weil derivation of AKSZ $\sigma$-models (pdf)
The essence of our main theorem is easily stated. See below.
Posted at 10:29 AM UTC |
Followups (10)
August 5, 2011
AKSZ Sigma-Models
Posted by Urs Schreiber
This is a continuation of the series of posts on sigma-model quantum field theories. It had started as a series of comments in
and continued in
String Topology Operations as a Sigma-Model.
Here I indicate the original definition of the class of models called AKSZ sigma-models (see there for a hyperlinked version of the following text).
In a previous post on exposition of higher gauge theories as sigma-models I had discussed how ordinary Chern-Simons theory is a $\sigma$-model. Indeed this is also a special case of the class of AKSZ models.
In a followup post I will explain that AKSZ sigma-models are characterized as precisely those ∞-Chern-Simons theories that are induced from invariant polynomials which are both binary and
non-degenerate. (Which is incidentally precisely the case in which all diffeomorphisms of the worldvolume can be absorbed into gauge transformations.)
Posted at 1:02 AM UTC |
Followups (5) | {"url":"http://golem.ph.utexas.edu/category/2011/08/index.shtml","timestamp":"2014-04-19T19:33:31Z","content_type":null,"content_length":"92784","record_id":"<urn:uuid:7aa0afcc-4b69-4311-a450-d3093c8a87ae>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gyro 3d rotational displacement using 2D polar coordinates? - Arduino Forum
I am using the ITG-3200 gyroscope and have it calibrated very well with very little drift. I'm building a toy prototype, so pinpoint accuracy is not a huge issue. The gyro is in the center of a ball.
I need to track the position of a single point (P) on the shell of a plastic sphere to trigger programmed actions when it is rotated certain directions relative to the user. I think spherical
coordinates are the way to go, because they allow me to know how far it has moved from the starting position and use that as the offset for calculations dealing with the current position. After some
research I am still fuzzy on the exact process of turning the gyros digital x, y, and z angular rate into a 2d projection of my point in spherical coordinates.
The whole point of this is to remove the frame of reference of the rotating object and instead, approximate the user's global frame of reference (from the initial conditions) close enough to allow
rotations to be mapped relative to the user's body (not tied to the axes of the sphere). This would mean directions like rotate ball left, rotate ball right, rotate forward, backwards, etc. could be
followed without reference to the accumulated rotational condition of the sphere.
In short, I want to integrate the speed of rotation to keep track of the 3D rotational displacement using 2D polar coordinates (dropping the radius because it stays constant).
This is what I think should work. (Any help or conceptual guidance will be appreciated)
A.) Start with x, y, z angular rate
B.) Integrate over time to get angular displacement for each axis (some small error will be present)
C.) Calculate the single compound angle w and its arbitrary rotation matrix (and axis)
D.) multiply the coordinates of initial point P with the resultant rotation matrix of step C
E.) Convert the displaced point from Cartesian coords to polar coords
F.) Use the Azimuth and Zenith of the polar mapping as if it were an x-y plane, effectively projection mapping the point on the sphere into a two dimensional array for driving various actions (like
blink this light and make this sound when the point P enters polar region x, y)
I learned projective geometry in art school, but never did calculus. This might be a major over-complication. It seems like more work than I probably need to do. Or it might yet be too simple. I'm going to continue hashing it out, but feel free to advise if you are familiar with this math.
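Since the question is turning steps A-F into code, here is a minimal Python sketch of just the math (my own illustration, not tested on hardware: the gyro read, calibration, and the eventual Arduino C port are left out, and the sample values below are made up):

```python
import math

def rotate(p, axis, angle):
    """Rodrigues' formula: rotate point p about the unit vector `axis` (step D)."""
    ux, uy, uz = axis
    c, s = math.cos(angle), math.sin(angle)
    dot = ux*p[0] + uy*p[1] + uz*p[2]
    cross = (uy*p[2] - uz*p[1], uz*p[0] - ux*p[2], ux*p[1] - uy*p[0])
    return tuple(p[i]*c + cross[i]*s + (ux, uy, uz)[i]*dot*(1 - c)
                 for i in range(3))

def step(p, wx, wy, wz, dt):
    """One gyro sample, steps B-E: (wx, wy, wz) are angular rates in rad/s."""
    w = math.sqrt(wx*wx + wy*wy + wz*wz)          # B/C: compound rotation rate
    if w > 1e-12:
        p = rotate(p, (wx/w, wy/w, wz/w), w*dt)   # C/D: axis-angle rotation
    az = math.atan2(p[1], p[0])                   # E: azimuth
    zen = math.acos(max(-1.0, min(1.0, p[2])))    # E: zenith (unit-radius sphere)
    return p, az, zen

p = (1.0, 0.0, 0.0)                               # A: point P on the shell
p, az, zen = step(p, 0.0, 0.0, math.pi/2, dt=1.0) # 90 deg/s about z for 1 s
print(round(az, 6), round(zen, 6))                # 1.570796 1.570796
```

Step F is then just binning (az, zen) into regions. One caveat on step B as stated: summing the three per-axis angles separately and rotating once at the end does not work in 3D, because finite rotations do not commute; applying a small axis-angle rotation every sample, as above, sidesteps that.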
I have read the somewhat dated primer document here:
but turning those equations into Arduino code is boggling my brain. Thanks. | {"url":"http://forum.arduino.cc/index.php?topic=91317.msg685689","timestamp":"2014-04-17T13:18:56Z","content_type":null,"content_length":"43657","record_id":"<urn:uuid:0e401966-62ae-448d-8afc-f1428e1f0225>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00268-ip-10-147-4-33.ec2.internal.warc.gz"} |
I. Gayte Delgado
F. Guillén González
F.P. Marques Lopes
M. A. Rojas Medar
Optimal Control and Partial Differential Equations
In this work, some types of optimal control problems with equality constraints given by Partial Differential Equations (PDE) and convex inequality constraints are considered, obtaining their corresponding first-order necessary optimality conditions by means of the Dubovitskii-Milyutin (DM) method. Firstly, we consider problems with one objective functional (or scalar problems) but non-well-posed equality constraints, where existence and uniqueness of the state as a function of the control fail (either one has existence but not uniqueness of the state, or one does not have existence of the state for every control). In both cases, the classical Lions argument (re-writing the problem as an optimal control problem for the control alone, without equality constraints; see for instance [14]) cannot be applied. Afterwards, we consider multiobjective problems (or vectorial problems) under three different concepts of solution: Pareto, Nash and Stackelberg. In all cases, an adequate abstract DM method is developed, followed by an example.
Copy of the file:
rp41-04.pdf (Adobe PDF)
rp41-04.pdf.gz (gzipped PDF)
October 07, 2004
Back to the index of Relatórios de Pesquisa (Research Reports) | {"url":"http://www.ime.unicamp.br/rel_pesq/2004/rp41-04.html","timestamp":"2014-04-20T04:14:10Z","content_type":null,"content_length":"2962","record_id":"<urn:uuid:e40d0042-e506-4aa8-b928-cabf1f2db803>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Ixquick
I believe that TOR is safe only if you're very, very careful.
Often, there can be client-side apps that run on your PC to get the original IP and send it back to the network. That was done by some agency recently as a proof of concept to illustrate that it is possible to get the IP.
While you're on TOR you should block plugins like Flash and especially Java. You should not run Word documents, PDF files, or executables downloaded from the internet while on TOR.
Can you really be anonymous online?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Ixquick
They do not need to inject a rootkit in. Micro$oft already supplies the backdoors and they are built in.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Ixquick
But I'm using GNU/Linux
Re: Ixquick
I know, I meant for Windows. But if they do it, what makes you think others do not?
Re: Ixquick
Because they're open source. If you doubt it, you can compile your OS yourself
Re: Ixquick
Have you heard of the fiasco of Orbit Downloader? I mean, has anyone really gone through every line of Linux code?
Re: Ixquick
No, what about the orbit downloader?
Maybe not, but the point is that you're allowed to
Re: Ixquick
Orbit has been the leading downloader for a number of years now. People rave about it. The truth is that it uses its host machines to crash sites. It also plants trojans on the host machine. A well-known plugin for FF spies on your machine. It too is open source and free.
Re: Ixquick
It is giving the users a chance to check the source and compile it themselves. If the users do not do that, they're losing an opportunity. What can I do about it?
What plugin?
Some good GNU/Linux user wrote:
To have more GNU/Linux users is not the answer.
The answer is to have less clueless people
Last edited by Agnishom (2013-11-09 12:19:15)
Re: Ixquick
Compiling it yourself is not the answer. You have to check each line of code in whatever language it is written in. Too big a task for most people. Noscript is open source but tried to pull a fast one. So did Sony. For every one that is caught, 10 get away.
Re: Ixquick
You should still agree that the risk of malicious code being there is reduced in open-source projects.
Re: Ixquick
There is one more piece of software, i.e., "Hide IP Easy", to hide your IP address.
But then the ultimate truth is that it is nearly impossible to really make our connections 100% secure.
friendship is tan 90°.
Re: Ixquick
The problem is that the proxy knows your IP, so how secure is it really?
Re: Ixquick
Whose proxy?
I am new to these terms, please tell me.
Re: Ixquick
A proxy or proxy server is basically another computer which serves as a hub through which internet requests are processed.
It acts as an intermediary between you and the sites you wish to go to. It provides another IP address. As a security measure it is very overrated.
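As a concrete illustration of that description (a sketch only: the proxy address below is a made-up placeholder, and no request is actually sent), Python's standard library lets you route HTTP traffic through a proxy:

```python
import urllib.request

# Requests made through this opener are relayed via the proxy host, so the
# destination site sees the proxy's IP. The proxy itself, however, still
# sees yours, which is the point made later in this thread.
proxy = urllib.request.ProxyHandler({
    "http":  "http://127.0.0.1:8080",   # placeholder proxy address
    "https": "http://127.0.0.1:8080",
})
opener = urllib.request.build_opener(proxy)
# opener.open("http://example.com")     # would now travel via the proxy
print(type(opener).__name__)            # OpenerDirector
```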
Re: Ixquick
The software named "Hide IP Easy" or something actually uses such a proxy.
Re: Ixquick
Ok, I got it now.
So that's what they make use of.
We cannot actually call that secure then, right?
Re: Ixquick
Not really; they know your IP, and who is to say they do not track it?
Re: Ixquick
Exactly, that's what I was saying.
Re: Ixquick
Yesterday I got a look at all the information FF is collecting about me, even though I have all the surveillance turned off. They say they are not giving it to anyone...
Re: Ixquick
How did you look at that?
Re: Ixquick
By clicking FF->Help->Firefox Health Report.
Re: Ixquick
lol! this is what they do!!
Re: Ixquick
Another one is the antivirus.
Re: Ixquick
What is the big deal with the Firefox Health Report?
'Who are you to judge everything?' -Alokananda | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=19766&p=4","timestamp":"2014-04-19T07:21:08Z","content_type":null,"content_length":"37994","record_id":"<urn:uuid:7eb42909-bb48-4821-9448-5939b8e5a626>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00041-ip-10-147-4-33.ec2.internal.warc.gz"} |
Anybody know how to understand the physical principles behind a given numerical in physics?
• one year ago
I think you'll have to be more specific...
That is, I am unable to solve Physics numericals. I can't find what principle or concept is behind that problem. I want to be an expert problem solver... Please help me. Thanks.
I'm not sure how to help you, I don't know exactly what you want help with. If you have any examples of questions, that'd help. Otherwise, I don't know what kind of problems or concepts you need
help with.
I will talk about it in detail this evening. Thanks.
Every time, when you begin solving a problem, first of all find out the prerequisites of the problem, then look at what principle your question is based on; finally, try to apply the theoretical statement of the principle mathematically. Let's say work done on a body is stored in the form of potential energy; that means W = P.E. = mgh. This way you can solve your questions, and always try to break your questions into smaller parts and then solve each part one by one.
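The mgh rule above, worked through with made-up numbers (m, g and h below are assumptions for illustration, with g rounded to 10 m/s^2):

```python
# Lifting a 2 kg body through 5 m against gravity:
m, g, h = 2, 10, 5       # kg, m/s^2, m (assumed values)
W = m * g * h            # work done on the body, in joules
PE = m * g * h           # potential energy stored
print(W, PE, W == PE)    # 100 100 True
```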
@ghazi thanks
:D YW
can we chat?
sure , message me :)
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50cd56fde4b0031882dc43d7","timestamp":"2014-04-20T06:31:50Z","content_type":null,"content_length":"49509","record_id":"<urn:uuid:bd4d458b-c6e6-4e7c-ab99-d64610bc75d1>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00088-ip-10-147-4-33.ec2.internal.warc.gz"} |
Laguna Hills Calculus Tutor
...I have also tutored different math subjects privately in the past. I graduated from MIT with a degree in electrical engineering. I currently work as an applications engineer.
11 Subjects: including calculus, physics, geometry, algebra 1
...I'm nearly done with my undergraduate degree in Chemistry, but I've been tutoring students and colleagues since high school. My favorite part of tutoring, by far, is the reward of seeing
someone succeed in an area they were struggling with. When I walk in and a student shows me they got an A for the first time on a Chemistry test, it confirms my work and renews my love of the
9 Subjects: including calculus, chemistry, algebra 1, algebra 2
Greetings potential students! I graduated from the University of California, Irvine in 2013 with a B.S. in biological sciences and received a 3.71 undergraduate GPA. My main teaching interests are
chemistry and biology (Honors & AP), although I am able to teach physics as well.
11 Subjects: including calculus, chemistry, physics, biology
...Additional Information: 2005 National Merit Scholar from Indiana AP Calculus AB...5 AP Biology...4 SAT Verbal...770 SAT Math...750 SAT II Writing...740 SAT II Biology...770 SAT II Math level
2...800 GRE Verbal...710 (98th percentile) GRE Quantitative...790 (92nd percentile) GRE Writing...4.5 (63r...
6 Subjects: including calculus, physics, trigonometry, precalculus
...The CBEST tests basic Mathematical knowledge like multiplication in word problem format. I know how to take a student and have them be fluent in their Mathematical skills and pass the test. I
have an Electrical Engineering degree (BSEE) from University of California Irvine.
28 Subjects: including calculus, chemistry, physics, geometry | {"url":"http://www.purplemath.com/Laguna_Hills_calculus_tutors.php","timestamp":"2014-04-18T14:15:05Z","content_type":null,"content_length":"24100","record_id":"<urn:uuid:f5af0609-3305-4e99-b04e-cb8bb0aabd35>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00278-ip-10-147-4-33.ec2.internal.warc.gz"} |
1) a = b
2) ab = a^2
3) ab - b^2 = a^2 - b^2
4) b(a - b) = (a + b)(a - b)
5) b = a + b
Remembering the first step (a = b), step 5 gives b = 2b. So, 1 = 2. In this case, in my opinion, the wrong step is the third, because you can't subtract b^2 from both sides.
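One way to probe the derivation (my own check, using a = b = 1; any equal pair behaves the same) is to test each line with exact rational arithmetic and see where equality first fails:

```python
from fractions import Fraction

a = b = Fraction(1)   # assumed concrete values with a == b

steps = [
    ("1) a = b",                 a == b),
    ("2) ab = a^2",              a*b == a**2),
    ("3) ab - b^2 = a^2 - b^2",  a*b - b**2 == a**2 - b**2),
    ("4) b(a-b) = (a+b)(a-b)",   b*(a - b) == (a + b)*(a - b)),
    ("5) b = a + b",             b == a + b),
]
for label, holds in steps:
    print(label, holds)   # steps 1-4 print True, step 5 prints False

# Equality is lost between steps 4 and 5, where both sides are divided
# by (a - b); with a = b that divisor is zero:
print("a - b =", a - b)   # a - b = 0
```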
• one year ago
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5083f885e4b0dab2a5ec361c","timestamp":"2014-04-20T10:56:26Z","content_type":null,"content_length":"44412","record_id":"<urn:uuid:ff1411f0-c580-4bf1-8b62-00c596dd4d96>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
How Special Relativity Thwarts Eternalism (And More)
Image Credit: John D. Norton
INTRODUCTION. The image above illustrates a well-known argument due to Rietdijk and Putnam, which says that Special Relativity implies Eternalism (also called the “Block Universe” view). I recommend
John Norton’s exposition if you’re not already familiar with the argument.
Norton has pointed out that the Rietdijk-Putnam argument requires assumptions that are not implied by Special Relativity (I review this in Part I.) But the plight of the Eternalist is worse than
that. After observing that observer-dependence is an essential part of the Eternalist claim (Part II), I’ll show that Special Relativity actually implies that there is no fact of the matter about
Eternalism (or the “determinate present” itself, for that matter), since observers disagree about the Eternalist claim (Part III).
PART I: SR Does Not Imply Eternalism. Eternalism is the view that all events in the past and future have a kind of Platonic or “determinate” existence. But Norton points out there are two hidden
assumptions in the Rietdijk-Putnam argument for this view. They are:
1. IF two events lie on the same hypersurface of simultaneity, THEN they are equally determinate; and
2. for all events e[1], e[2], e[3]: IF e[1] and e[2] are equally determinate and e[2] and e[3] are equally determinate, THEN e[1] and e[3] are equally determinate (transitivity of determinateness).
These assumptions seem to be independent of the theory of Special Relativity (SR). Therefore: SR does not imply eternalism.
PART II: Determinateness is Observer-Dependent. Notice that assumptions (1) and (2) only allow one to say that two events are equally determinate. There’s not yet a way to say that any event actually
is determinate. So if the case for Eternalism is to be made, a third assumption needed, that
3. There exists at least one event that is determinate.
How might one establish (3)? The only way to do it is through an observer. If a property of spacetime (like “determinateness”) isn’t in principle accessible by some observer, then we have good reason
to suspect that it’s meaningless.
Fortunately, a precise kind of “observer-dependence” is already built into the Eternalist view, and into the view of (her nemesis) the Presentist. Both agree that an observer’s experience of the
event “the present now” is required to first establish that an event is determinate. Therefore: both the Eternalist and the Presentist have only established the ability to say that an event is
determinate for some observer.
In order to establish the Eternalist claim (that Minkowski spacetime is factually determinate), one would have to claim that spacetime in Special Relativity is determinate for all observers.
PART III: SR Thwarts Eternalism. Minkowski spacetime (the spacetime of Special Relativity) is not determinate for every observer. In fact, for every event e in Minkowski spacetime, there are
observers who disagree about whether or not e is determinate. For example, consider Alice:
(In this spacetime diagram, the vertical axis is time, the horizontal axis is space, and c = 1.) Alice accelerates uniformly from v = -c to +c over the course of her lifetime. Her hypersurfaces of
simultaneity are indicated in red. But none of Alice’s hypersurfaces of simultaneity intersect any event in regions I or II. So Alice will claim that regions I and II are indeterminate.
On the other hand, suppose Bob travels with constant velocity for all of time. Bob’s hypersurfaces of simultaneity will collectively cover all of Minkowski spacetime. So Bob will conclude (by way of
assumptions 1-3 above) that all events (and thus also those in regions I and II) are determinate.
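The contrast between Alice and Bob can be checked numerically. The sketch below is my own illustration (not from the post), and it assumes Alice is the standard uniformly accelerated Rindler observer, whose surfaces of simultaneity (with c = 1) are the lines t = tanh(aτ)·x through the origin:

```python
import numpy as np

# Alice modeled as the standard uniformly accelerated (Rindler) observer
# with c = 1 and proper acceleration a: her surface of simultaneity at
# proper time tau is the straight line t = tanh(a*tau) * x, which always
# passes through the origin with slope strictly between -1 and +1.

a = 1.0
t_event, x_event = 2.0, 1.0            # an event with |t| > |x|
taus = np.linspace(-20.0, 20.0, 400001)
slopes = np.tanh(a * taus)

# vertical distance from the event to each simultaneity line t = slope * x
misses = np.abs(slopes * x_event - t_event)

# the event never lies on any of Alice's simultaneity surfaces:
assert misses.min() >= t_event - x_event   # gap never smaller than |t| - |x|
```

Because each such line has slope strictly between -1 and +1, no event with |t| > |x| ever lies on one of Alice's simultaneity surfaces, whereas an inertial observer's parallel surfaces sweep out all of spacetime.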
CONCLUSION. Determinateness cannot be an objective property of spacetime, because observers disagree whether or not a given event is determinate. As a result, there is no fact of the matter about
Eternalism, either.
This conclusion falls straight out of the assumptions (1-3) above, which were adopted by the Eternalist in the Rietdijk-Putnam argument. So insofar as one accepts these assumptions, Special
Relativity not only thwarts Eternalism: it implies that the very notion of a determinate event is bogus.
Soul Physics is authored by Bryan W. Roberts. Thanks for subscribing.
Want more Soul Physics? Try the Soul Physics Tweet.
6 thoughts on “How Special Relativity Thwarts Eternalism (And More)”
1. I’m a bit confused–how does determinateness relate to the notion of being determined (or determinism)?
2. Justin: yes, you won’t want to confuse those two concepts. Here’s the difference.
Determinism is a mathematical property, which is (or is not) possessed by a theory. (See Earman (1986) for more.)
Determinateness is a (vague, Platonic) property that some suggest is possessed by the world.
Here’s a way to pull the two ideas apart. In order to ask whether or not determinism holds, you will typically need an equation of motion and an initial state. If these two ingredients guarantee
a unique way that the initial state can evolve in the future and past (a unique path through phase-space), then this system is deterministic.
Notice that there is no question yet about what’s real and what isn’t. This was a question about the uniqueness of solutions to a differential equation.
You don’t need any of that mathy stuff to talk about determinateness. In fact, you can (presumably) ask if an empty universe with no dynamical field equations has determinate spacetime points.
The reason is, determinateness is supposed to be a question about what is real and what isn’t at various points in time and space. And I argue that the very idea is whacky.
Hope that helps! -B.
3. Just to check I’ve understood the argument in part III correctly: there’s a frame of reference (namely, Alice’s) according to which some events (those in regions I or II) are not simultaneous
with any moment whatsoever, and therefore not determinate/real. Is that it?
But why does it matter if “there are observers who disagree about whether or not e is determinate”? You earlier talk about how a property must be “in principle accessible by some observer“. So
why not stick with that, and say that an event is determinate/real so long as it fits within some frame of reference or other? Why require all? That stricter requirement seems unmotivated to me.
Alice should simply recognize that there are blocks of spacetime which are inaccessible, in terms of simultaneity planes, from her frame of reference.
4. Richard: you’ve got the idea, and your objection is well taken. My argument does depend on this claim:
(*) The only factual properties of spacetime are those that are agreed upon by all observers.
An objector could consistently discharge that assumption, and indeed avoid my conclusion. However, I claim that physics in practice assumes (*). For example:
(1) The existence of an electric field depends on the motion of an observer, as in the famous example of a conductor around a magnet. 19th century ether-theorists assumed that there was one
correct observer (namely, the ether rest-frame), and thus a fact of the matter about the field’s existence. But the 20th century saw the wide adoption of (*), and thus the rejection of the
electric field by itself as an objective property of spacetime.
(2) More recently, QFT-interpreters have noticed that Unruh radiation consists in particles that exist for accelerating observers (like Alice), but not for inertial observers (like Bob). My
assumption (*) is often explicitly adopted to argue that “particle number” is not a fundamental property in QFT (for example, see Wald 1994, p. 116).
Rather than list more examples, let me concede that (*) might still be wrong. In this case, I am willing to fall back to a more modest claim: Special Relativity together with modern physics
practice thwarts eternalism.
Thanks for the note! -Bryan
5. Bryan, it seems this extra assumption ends up being problematic since it confuses what all observers would agree upon ontologically with what they’d agree upon epistemologically. That is, you are
in effect saying that epistemology (what they can measure) entails the ontology. I don’t think everyone would agree.
Of course this ends up being a problem with uses of SR against presentism since the epistemology vs. realism issue is key. And the skeptic can always point to the epistemological issues.
6. Nice post, Bryan. It reminds me of Einstein’s Galilean relativistic dream, tweaked a bit.
Imagine three cowbells, tuned to notes A, B, and C, hung a hundred feet apart in a straight line in the order A-B-C;
Einstein stands in the middle and presses a button to simultaneously send an electrical signal through three equal length electric wires to sound the bells simultaneously.
Assistant at A records the sequence ABC,
Assistant at C records the sequence CBA,
Einstein records B followed by an AC chord.
Each of these three observations is repeatable and factual, yet contradicts the other two. All three event sequences are simultaneously true!
Therefore, one must conclude that:
1) All events are relative to space-time locality
2) There are an infinite number of objectively inconsistent histories of the world, each originating in its here-now.
Steve Gabor
What Does The 10th Dimension Look Like?
Do more than three dimensions exist? Yes. No. Maybe. Read the article!
The 1st Dimension
We start with a point. Then we place another point somewhere else. Now we connect these two points. We have our first dimension, a line.
The 2nd Dimension
If we take our 1 dimensional line and then place another 1 dimensional line across our first line, we’ll have our second dimension. It has length and width, but no depth.
An easier way to represent this might be to make a 1 dimensional line branch out of our other 1 dimensional line. (Like a branch.) We will call this a split.
The 3rd Dimension
We can imagine the third dimension easily because we’re always in the third dimension. But let’s take a different approach.
Remember the 1 dimensional line with a branch sticking out of it? Yes, our split. Imagine a 2 dimensional ant walking along the branch that sticks out. Now, if we fold that line and connect it to the
first line, it is taking the ant from one place and transporting it to another. Our 3rd dimension is a fold.
The 4th Dimension
If we were to think of ourselves as we were one minute ago and then think of ourselves as we are now and draw a line between our one minute ago selves to our right now selves we would be drawing a
line in the fourth dimension. We could call this Duration (Not time). The fourth dimension is yet again a line.
The 5th Dimension
To us it feels like Duration is going in a straight line. However, if we were to draw a branch from our lifespan line (Birth-Death), it would be like all the different futures. From one branch you are
a doctor, in another branch you would be a millionaire, and so on. So the 5th dimension is a split in the 4th dimension.
In simpler terms the 5th dimension is all the different outcomes of whatever object.
The 6th Dimension
Let’s consider ourselves in one of the outcomes of the 4th dimension. In this outcome I am an average man. What if I want to change my current state of being (be a rich, rich man!)? One might say
travel back in time, give your young self an invention, then travel back to the future. What if you could just take a shortcut and jump from your average self to your millionaire self? That would be
the 6th dimension. A fold in the 5th dimension.
The 7th Dimension
In the seventh dimension we will be treating everything in the 6th dimension as a single point. To get the big picture, imagine, from the beginning of the universe (Big Bang) and we drew a line to
each of the possible outcomes or deaths of the universe, and we considered that as a single point. We can call this point infinity. So a 7th dimensional line would be one infinity connected to
another infinity by a line. (One universe and its outcomes to another universe and its outcomes.)
The 8th Dimension
If we were to branch off from that line between the two infinities into another infinity we would be in the 8th dimension, a split.
The 9th Dimension
Now to go from one 8th dimensional branch to another, we would fold one line into the other, letting us travel from one line to another. So the 9th dimension is a fold just
like the 3rd and 6th dimension.
The 10th Dimension – Dun dun duuunnnnnn!
If we take all the possible universes and all their possible time lines and treat that as a single point, then we’ve got a point in the 10th dimension. Now to continue the cycle we’ll need another
point to connect this point to. But there’s no place left to go! We’re screwed.
Powers of Matrices, Putzer's Method
Date: 03/09/97 at 17:30:34
From: Dina Pesenson
Subject: Matrices and exponents
I'm a junior in college. In my elementary differential equations
course, we received an interesting extra credit problem. I was
wondering if you could tell me where to look for help. Here's the matrix:
3 0 0 -1 -2
1 3 -1 0 -3
2 0 2 -2 -3
1 -1 1 0 0
I'm not supposed to use the Taylor expansion to approximate. The only
other way I found so far is by diagonalizing the matrix. The problem
is that when you try to find the eigenvalues, you get 2,2,2,2,2 (I
used Mathematica to find them). This means that I cannot find 5
linearly independent eigenvectors, which in turn means I cannot find
an inverse to the matrix consisting of eigenvectors. Therefore, I
cannot diagonalize the original matrix. I also tried to do the entire
problem on Mathematica, but it wouldn't solve it - the most it offers
is the Taylor approximation. In our class, we learned how to deal with
eigenvalues of multiplicity 2 only. Where could I find more
information on how to work with eigenvalues of higher multiplicity in
a case like this? What would you do about this?
Date: 03/10/97 at 14:54:16
From: Doctor Pete
Subject: Re: Matrices and exponents
Look for the following textbook in your university library:
Apostol, Tom M. _Calculus_, Vol II, 2nd Ed. pgs. 205-211
In case you cannot find this book (it is excellent), I will briefly
describe what the above pages discuss, which is Putzer's method for
calculating e^(tA), where t is a constant and A is an n x n matrix.
In particular, a special case of Putzer's method applies, for, as you
have noted, the matrix in question has equal eigenvalues.
First, the Cayley-Hamilton theorem states:
Let A be an n x n matrix and let f(L) = det(LI-A)
= L^n + c[n-1]L^(n-1) + c[n-2]L^(n-2) + ... + c[1]L + c[0] be its
characteristic polynomial. Then f(A) = 0.
I hope you are familiar with this statement, and better yet, its
proof - though we won't go into it. The basic idea we want to get
out of this is that the (n)th power of any n x n matrix A is
expressible as a linear combination of lower powers I, A, A^2, ... ,
A^(n-1). Then it immediately follows that A^(n+1), A^(n+2), and in
general, all higher powers of A are also expressible as linear
combinations of these lower powers. Since e^(tA) has a convergent
Taylor series expansion as an infinite sum of powers of A, it is
reasonable to expect that we can reexpress this infinite sum as a
finite sum over powers of 0 to (n-1). That is, we may have:
e^(tA) = Sum[q[k,t]A^k,{k,0,n-1}]
(I am using a "pseudo-Mathematica" notation here) where q[k,t] are
scalar coefficients which depend on t. Putzer's method, then, is
really a theorem which demonstrates that q[k,t] exists, and finds it.
In its general form, Putzer's method is outlined as follows: let
L[1], L[2], ... L[n] be the eigenvalues of the n x n matrix A. Define
a sequence of polynomials in A:
P[0,A] = I,
P[k,A] = Product[A-L[m]I,{m,1,k}],
for k = 1, 2, ..., n. Then:
e^(tA) = Sum[r[k+1,t]P[k,A],{k,0,n-1}]
where the scalar coefficients r[1,t], ..., r[n,t] are determined
recursively from the system of linear differential equations:
r'[1,t] = L[1]r[1,t], r[1,0] = 1,
r'[k+1,t] = L[k+1]r[k+1,t]+r[k,t], r[k+1,0] = 0,
k = 1, 2, ..., n-1. I will not prove this; try to prove it yourself
(if you can, you'd probably get more than extra credit!), or refer to Apostol's text.
Now, this is not very convenient to work with; however, there is a
special case of the above, when L[1] = L[2] = ... = L[n] = L; i.e.,
all the eigenvalues of A are equal. If we call this common eigenvalue
L, then we have:
e^(tA) = e^(Lt) Sum[t^k/k! (A-LI)^k, {k,0,n-1}].
The proof of this is not too difficult--observe that the matrices LtI
and t(A-LI) commute, and apply the Cayley-Hamilton theorem.
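The special case above is easy to try out numerically. Below is a small NumPy sketch of my own (illustrative only, and using a 3x3 example matrix with a single repeated eigenvalue rather than the 5x5 matrix from the question):

```python
import numpy as np
from math import factorial

def expm_equal_eigs(A, L, t=1.0):
    """e^(tA) via the special case of Putzer's method when every eigenvalue
    of A equals L:
        e^(tA) = e^(Lt) * sum_{k=0}^{n-1} (t^k / k!) (A - L I)^k
    The sum terminates because (A - L I) is nilpotent by Cayley-Hamilton."""
    n = A.shape[0]
    N = A - L * np.eye(n)
    total = np.zeros((n, n))
    term = np.eye(n)                    # (A - L I)^0
    for k in range(n):
        total += (t**k / factorial(k)) * term
        term = term @ N
    return np.exp(L * t) * total

# Illustrative 3x3 matrix whose only eigenvalue is 2, with multiplicity 3:
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
E = expm_equal_eigs(A, L=2.0, t=1.0)
```

For this Jordan-block example one can check by hand that e^A = e^2 (I + N + N^2/2), which is what the loop computes.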
Of course, in your example, t=1, L=2. Since I've essentially told you
the answer, I would strongly suggest you find out more about Putzer's
method, and take a whack at proving it. It isn't extraordinarily
difficult, but it's a bit computational. I also highly recommend
Apostol's text, both volumes - it is calculus done right.
Best wishes,
-Doctor Pete, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Check solution: find the probability density function of Y=logX
I think I have a good answer to this one, but I'd like to make sure it's correct.
If $X$ has exponential distribution with parameter $\lambda$, find the probability density function of $Y=\log X$.
Here is what I did.
$F_Y(y) = P \{ Y \leq y \} = P \{\log X \leq y \} = P \{ X \leq e^y \} = \int_{0}^{e^y} \lambda e^{- \lambda x} dx$
Let $u=\lambda x$, $du=\lambda dx$
$\int_{0}^{e^y} \lambda e^{- \lambda x} dx = \int_{0}^{\lambda e^y} e^{-u} du = -e^{-u} \Big |_0^{\lambda e^y} = -e^{-\lambda e^y} + 1 = 1 - e^{-\lambda e^y}$
$\frac {d F_Y (y) } {dy} = p(y) = \lambda e^y e^{-\lambda e^y}$
Does this look solid?
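As a sanity check on the derivation (my own addition, not part of the original post), one can sample X from an Exponential(λ) distribution and compare the empirical CDF of Y = log X against F_Y(y) = 1 - e^{-λ e^y}; note the λ, which is easy to drop when substituting back:

```python
import numpy as np

lam = 2.0                                            # an arbitrary rate parameter
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0 / lam, size=200_000)   # X ~ Exp(lam)
y = np.log(x)

# The CDF of Y = log X should be F_Y(y) = 1 - exp(-lam * e^y),
# with density p(y) = lam * e^y * exp(-lam * e^y).
for y0 in (-1.0, 0.0, 0.5):
    empirical = np.mean(y <= y0)
    analytic = 1.0 - np.exp(-lam * np.exp(y0))
    assert abs(empirical - analytic) < 0.01
```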
The Thousands Game
Copyright © University of Cambridge. All rights reserved.
Why do this problem?
This problem can be used when introducing or revising numbers in the thousands. Children's understanding of place value will be reinforced and discussion will give plenty of opportunities to emphasise appropriate vocabulary, not only on place value but also on odd and even numbers. The game described can transform what could be a tedious task into an engaging activity.
Possible approach
You could start by playing the game described in the problem with the whole group. You will need a set of digit cards.
This sheet of digit cards can be printed out, preferably onto card. A bag is not necessary, but does add a little drama into the activity! The important thing is that the cards should be picked
unseen. This simple interactivity can be used for displaying the digit cards when they have been chosen. It should be noted that the cards (and bag) will still be required.
After this learners could work in pairs on the game.
This sheet provides two "boards" for playing the game with the digit cards provided. Then they could go on to the actual problem from this sheet which gives the questions asked (but without the
At the end of the lesson the group can gather together to discuss, not only place value and comparing and ordering numbers, but also odd and even numbers. There should be plenty of opportunities to
emphasise the appropriate vocabulary for the work they have been doing.
Key questions
Which digit is most important if you are making the largest/smallest number possible?
To make the highest possible number, where would it be best to put the highest/lowest digit card?
If you want to make the lowest number, where would it be best to put the lowest/highest digit card?
What makes a number odd/even?
What kind of number will the units digit need to be to make an even number? What about an odd number?
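The place-value strategy behind these key questions can be sketched in a few lines of Python (an illustration of my own, not part of the NRICH materials):

```python
# Put the biggest digit in the biggest place to make the largest number,
# and the smallest digits in the biggest places to make the smallest one
# (avoiding a leading zero so the number keeps all its places).

def largest_number(digits):
    return int("".join(str(d) for d in sorted(digits, reverse=True)))

def smallest_number(digits):
    d = sorted(digits)
    if d[0] == 0:                       # avoid a leading zero
        i = next(k for k, v in enumerate(d) if v != 0)
        d[0], d[i] = d[i], d[0]
    return int("".join(str(v) for v in d))

assert largest_number([4, 0, 7, 2]) == 7420
assert smallest_number([4, 0, 7, 2]) == 2047
```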
Possible extension
Learners could play an alternative version of the game in which two players take turns in taking a digit card (unseen) and placing it on their board before taking the next card. This requires
considerable thought and understanding. Children will enjoy playing Nice and Nasty after having a go at this activity.
Possible support
Some children find place value difficult and even alarming. They could start with a similar activity using only three-digit numbers or even just two. Reading the numbers out loud may help turn what
seems to them just a jumble of digits into something meaningful. | {"url":"http://nrich.maths.org/2646/note?nomenu=1","timestamp":"2014-04-19T02:57:22Z","content_type":null,"content_length":"9237","record_id":"<urn:uuid:53b963b5-b1db-4f45-9b8b-86c77b2324f7>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00091-ip-10-147-4-33.ec2.internal.warc.gz"} |
This collection of 103 individual sets of math problems derives from images and data generated by NASA remote sensing technology. Whether used as a challenge activity, enrichment activity and/or a formative assessment, the problems allow students to engage in authentic applications of math. Each set consists of one page of math problems (one to six problems per page) and an accompanying answer key. Based on complexity, the problem sets are designated for two grade level groups: 6-8 and 9-12. Also included is an introduction to remote sensing, a matrix aligning the problem sets to specific math topics, and four problems for beginners (grades 3-5).
This is a collection of mathematical problems about transits in the solar system. Learners can work problems created to be authentic glimpses of modern science and engineering issues, often involving
actual research data.
In this problem set, students calculate precisely how much carbon dioxide is in a gallon of gasoline. A student worksheet provides step-by-step instructions as students calculate the production of carbon dioxide. The investigation is supported by the textbook "Climate Change," part of "Global System Science," an interdisciplinary course for high school students that emphasizes how scientists from a wide variety of fields work together to understand significant problems of global impact.
This is an activity about utilizing proportional mathematics to determine the height of lunar features. Learners will use the length of shadows to calculate the height of some of the lunar features. This activity is Astronomy Activity 6 in a larger resource entitled Space Update.
This is an activity about the mathematics of oscillation. Using data obtained in the ninth and tenth activities in the Exploring the Earth's Magnetic Field: An IMAGE Satellite Guide to the Magnetosphere educator guide, learners will plot the formula X(t)=X(0)cos(ft) or X(t)=X(0)sin(ft), depending on the data obtained during the oscillation experiments. Then, the mathematical model for oscillation is further refined by including damping. This is the eleventh activity in the guide and requires prior use and construction of a soda bottle magnetometer.
On this worksheet, students are provided hurricane data by decade and are asked to calculate frequencies and averages. The resource is part of the teacher's guide accompanying the video, NASA SCI Files: The Case of the Phenomenal Weather. Lesson objectives supported by the video, additional resources, teaching tips and an answer sheet are included in the teacher's guide.
from PUMAS - Practical Uses of Math and Science -... (View More) a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have
real world applications. (View Less)
This is an activity about satellite size. Learners will calculate the volume of the IMAGE (Imager for Magnetopause-to-Aurora Global Exploration) satellite, the first satellite mission to image the Earth's magnetosphere. They will then determine the effect of doubling and tripling the satellite dimensions on the satellite's mass and cost. This is the first activity in the Solar Storms and You: Exploring Satellite Design educator guide.
This is an activity about interpretation of a data graph. Learners will use mathematics to create a pie chart of percentages and answer accompanying questions. This is the fourth activity in the Solar Storms and You: Exploring Satellite Design educator guide.
Using the simple example of calculating the probability of reaching a traffic light while green, students are shown how to build a mathematical model using a very commonly-taught formula (sum of first n integers) to solve a rather practical problem. This resource is from PUMAS - Practical Uses of Math and Science - a collection of brief examples created by scientists and engineers showing how math and science topics taught in K-12 classes have real world applications.
Choose the appropriate symbol to compare the fractions: 2/9 ___ 8/9
Find the greatest fraction: 5/16, 5/6, 6/15, and 1/6
Identify the smallest mixed number: 8 2/5, 3 2/5, 9 2/5, 7 2/5
Tommy and William went to a coffee shop and ordered two cups of coffee. While Tommy completed drinking 3/16th of his coffee, William finished 5/16th of his coffee. Who drank more?
Pick the appropriate symbol to compare the fractions: 7/3 ____ 7/15
Katie gave 1 2/3 pints of milk to Stephanie and 5/4 pints of milk to Nathan. Who got more milk, Stephanie or Nathan?
Which group orders the fractions 4/9, 8/9, and 7/9 from the greatest to the least?
Compare 3/13 and 4/13 using one of the symbols <, >, or =.
Find the greatest fraction: 1/2, 2/3, 3/2, and 1/3.
Which is the smaller fraction, 1/2 or 4/5?
Ursula brought two identical cakes and gave one cake to Jeff and another to Sunny. Jeff ate 4/7th of the cake and Sunny ate 1/2 of the cake. Who ate a bigger portion?
Which is the smaller improper fraction, 14/3 or 7/2?
Find the greater of the two fractions, 4/13 and 4/16.
Which is smaller of the two fractions, 17/3 or 5 5/6?
Order the fractions 3/17, 5/17, 4/17 and 6/17 in the least to the greatest order.
Wilma gave 3 6/7 pints of milk to Irena and 2 5/8 pints of milk to Nathan. Who got more milk, Irena or Nathan?
Tim and William went to a coffee shop and ordered two cups of coffee. While Tim completed drinking 2/9th of his coffee, William finished 4/9th of his coffee. Who drank more?
Which of the models is the greater fraction?
Order the fractions 35/11, 38/11 and 32/11 from the greatest to the least.
Arrange the fractions 36/13, 26/13 and 15/13 in the decreasing order.
Which of the models represents a smaller fraction?
Order the fractions 2 1/3, 2 4/7 and 2 1/4 from the greatest to the least.
Choose a model that represents an improper fraction less than 16/10.
Choose a model that represents a fraction greater than 3/6.
Choose a model that represents a fraction greater than 5/8.
Choose a model that represents a fraction smaller than 16/8.
Order 2/5, 2 2/8, 0.07, and 70% in descending order.
Pick the appropriate symbol to compare the fractions: 5/2 ____ 3/7
Which group orders the fractions 2/5, 4/5, and 3/5 from the greatest to the least?
Order the fractions 2/13, 4/13, 3/13 and 5/13 in the least to the greatest order.
Find the greatest fraction: 3/14, 3/4, 4/13, and 1/4
Identify the smallest mixed number: 4 3/10, 1 3/10, 5 3/10, 3 3/10
Choose the appropriate symbol to compare the fractions: 3/13 ___ 12/13
Which of the models represents a smaller fraction?
Choose a model that represents a fraction greater than 4/6.
Find the greatest fraction: 2/13, 2/3, 3/12, and 1/3
Identify the smallest mixed number: 8 1/3, 4 1/3, 9 1/3, 7 1/3
George and Ed went to a coffee shop and ordered two cups of coffee. While George completed drinking 3/16th of his coffee, Ed finished 5/16th of his coffee. Who drank more?
Which of the models is the greater fraction?
Choose a model that represents a fraction smaller than 5/8.
Pick the appropriate symbol to compare the fractions: 7/3 ____ 6/13
Quincy gave 3 6/7 pints of milk to Wilma and 2 5/8 pints of milk to Charlie. Who got more milk, Wilma or Charlie?
Choose the models that represent a fraction smaller than 15/8.
Which group orders the fractions 5/11, 10/11, and 9/11 from the greatest to the least?
Choose the models that represent a mixed number greater than 10/4.
Choose the models that represent improper fractions less than 16/10.
Compare 2/9 and 4/9 using one of the symbols <, >, or =.
Find the greatest fraction: 3/10, 4/13, 13/10, and 3/13.
Which is the smaller fraction, 1/2 or 5/6?
Which is the smaller improper fraction, 17/3 or 9/2?
Identify the greatest fraction: 4 2/13, 5 5/14, 3 4/16, 6 3/12
Find the greater of the two fractions, 4/14 and 4/18.
Which is smaller of the two fractions, 9/2 or 4 2/3?
Order the fractions 1/11, 3/11, 2/11 and 4/11 in the least to the greatest order.
Order the fractions 24/17, 25/17 and 23/17 from the greatest to the least.
Arrange the fractions 3 10/11, 2 10/11 and 1 7/11 in the decreasing order.
Which fraction is greater, 3/4 or 2/3?
Nina brought two identical cakes and gave one cake to Matt and another to Tony. Matt ate 4/7th of the cake and Tony ate 1/2 of the cake. Who ate a bigger portion?
Order the fractions 3 1/2, 3 3/5 and 3 1/3 from the greatest to the least.
Diane gave 3 6/7 pints of juice to Rachel and 2 5/8 pints of juice to Brian. Who got more juice, Rachel or Brian?
Tim and Chris went to a coffee shop and ordered two cups of tea. While Tim completed drinking 2/9th of his tea, Chris finished 4/9th of his tea. Who drank more?
Choose a model that represents an improper fraction greater than 16/10.
Which of the models is the smaller fraction?
Which of the models represents a greater fraction?
Choose a model that represents a fraction greater than 16/8.
Choose a model that represents a fraction smaller than 3/6.
Choose a model that represents a fraction smaller than 5/8.
Which of the following sets is arranged from the least to the greatest?
Which of the following sets is arranged from the least to the greatest?
Which of the following sets is arranged from the least to the greatest?
Which of the following sets is arranged from the least to the greatest?
Which of the following sets is arranged from the least to the greatest?
Ignoring Zeros
Date: 09/03/98 at 17:41:22
From: Cole Krivoski
Subject: Math - Decimals
Here's the problem. I just want to know what to do.
You have 0.76 and 0.760 and you have to find out which number is
greater or less, or if they are equal. My teacher says they are equal.
The thing I do not understand is the number 0.760. Is point 760
thousandths right or wrong? So shouldn't 0.760 be greater? Or does the
zero not count?
Date: 09/04/98 at 13:05:14
From: Doctor Peterson
Subject: Re: Math - Decimals
Hi, Cole. This is an important question! Basically the answer is, as
you suggested, that the zero doesn't count; but it's important to
understand why that particular zero can be ignored, but others can't.
There are several ways to explain the meaning of a decimal. One is to
treat it as one big fraction, so that:
   0.76 = 76/100     and     0.760 = 760/1000
Now look closely, and you'll see that you can simplify 760/1000 by
dividing the numerator and denominator by 10 to get 76/100. Does that
look familiar? I've just shown that these numbers are equal.
Another way to look at decimals is by place value, just as you did for
whole numbers. Then we can say that
   0.760 = 7 * 1/10 + 6 * 1/100 + 0 * 1/1000
Now do you see that the zero doesn't add anything to the number?
That's why it can be ignored. It's actually the same reason you can
ignore the zero in 076, which just means no hundreds, just as this
zero means no thousandths.
Now what about zeroes that aren't at the end? Look at the meaning of
   0.706 = 7 * 1/10 + 0 * 1/100 + 6 * 1/1000
That's not the same as 0.76, because now the 0 does something: it
changes the meaning of the 6 from 6 hundredths (in 0.76) to 6
thousandths (in 0.706).
So the rule is: when there is a zero at the left side of a number (to
the left of the decimal point), or at the right side of a number (to
the right of the decimal point), you can ignore it. If a zero is
between two non-zero digits, or between a digit and the decimal point,
you have to pay attention to it.
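This rule is easy to check with Python's exact-arithmetic types (a quick illustration, not part of the original answer):

```python
from decimal import Decimal
from fractions import Fraction

# A trailing zero to the right of the decimal point adds nothing:
assert Decimal("0.76") == Decimal("0.760")
assert Fraction(760, 1000) == Fraction(76, 100)  # the same simplification as above

# A zero between nonzero digits changes the value:
assert Decimal("0.706") != Decimal("0.76")
```

Decimal compares numeric value rather than the printed digits, so the trailing zero is ignored exactly as described.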
I hope that helps.
- Doctor Peterson, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/58948.html","timestamp":"2014-04-16T22:05:03Z","content_type":null,"content_length":"7159","record_id":"<urn:uuid:8bc9876b-247c-4b82-a20a-917a4cdb8f35>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00645-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
The Group of Rotations of the Cube
Consider a particular representation of the symmetric group S4 on R^3 that preserves a cube centered at the origin, with faces orthogonal to the axes. By examining the action of elements of the group on the cube, both
singly and in composition with other elements, you can see that the image of S4 under this representation is isomorphic to the group of rotations of the cube. The brightest cube is fixed and the next two cubes show the actions of a selected element and of its composition with a second element. The
axes of rotation are shown in gray. The thickest axis represents the axis of the composition.
The two sliders can be used to select elements g and h from S4. These elements are shown in cycle form, as is their composition, hg. Below that are the matrices of g, h, and hg. The graphic shows how these
matrices act on a cube centered at the origin, with faces orthogonal to the axes. The brightest cube is fixed, and displayed only as a point of reference. The middle cube (slightly darker) shows the
action of g on the fixed cube, where the gray line shows the axis of rotation. The action of g is first, since the operation throughout is composition. The third and darkest cube shows both the action
of h on the cube already acted on by g and the action of hg on the fixed cube. The thinner gray line shows the axis of rotation of h and the thicker gray line shows the axis of rotation of the
composition hg. By experimenting with different elements and compositions of elements, you can verify that the image of S4 is isomorphic to the group of rotations of the cube. | {"url":"http://demonstrations.wolfram.com/TheGroupOfRotationsOfTheCube/","timestamp":"2014-04-17T18:26:28Z","content_type":null,"content_length":"44535","record_id":"<urn:uuid:d71dfb9c-56d2-4ea1-b3fe-43603096d7ae>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00191-ip-10-147-4-33.ec2.internal.warc.gz"} |
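The order-24 claim behind this isomorphism can be checked by brute force: close two 90-degree face rotations of the cube under matrix multiplication and count the distinct matrices. A small sketch (not part of the demonstration):

```python
# 90-degree rotations about the x- and z-axes, as integer 3x3 matrices.
RX = ((1, 0, 0), (0, 0, -1), (0, 1, 0))
RZ = ((0, -1, 0), (1, 0, 0), (0, 0, 1))
I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))

def matmul(a, b):
    """Multiply two 3x3 matrices given as nested tuples."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3))
                 for i in range(3))

# Close {I, RX, RZ} under multiplication to get the full rotation group of the cube.
group = {I3}
frontier = {I3}
while frontier:
    new = {matmul(m, g) for m in (RX, RZ) for g in frontier} - group
    group |= new
    frontier = new

assert len(group) == 24  # same order as S4, consistent with the isomorphism
```

Two face rotations about perpendicular axes already generate every rotation of the cube, which is why the closure stops at exactly 24 matrices.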
How do I use the formula 6000 = 2884.25(1 + 1.9%/12)^(12t)... how do I put 38 years and 7 months into the equation?
It depends on what t is supposed to be in the formula. If t is in months: put 38*12+7= 463 in. If t is in years: 38+7/12 = 38.58333 in.
Here is the question: Chloe deposited $2,884.25 into a savings account with an interest rate of 1.9% compounded monthly. About how long will it take for the account to be worth $6,000? A) 9
years, 1 month B) 21 years, 8 months C) 38 years, 7 months D) 38 years, 11 months I think the answer is 38 years, 7 months... but I'm not sure and I don't know how to plug in 38 years, 7 months
into the equation to make sure I have the right answer.
The formula to calculate the amount is\[A(t)=2884.25\left(1+\frac{ 0.019 }{ 12 }\right)^{12t}\]Because of the "12t" I think you have to put in years:\[A(38.58333)=(calculator)=5994.36\]It is
easier to replace the "12t"in the formula by the number of months = 463:\[2884.25\left(1+\frac{ 0.019 }{ 12} \right)^{463}=6000.054\approx 6000\]
So that would mean the answer is 38 years, 7 months... correct? (:
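For reference, the equation can be solved for t directly with logarithms rather than by testing each answer choice. A small sketch (variable names are mine, not from the thread):

```python
import math

P, A, r, n = 2884.25, 6000.0, 0.019, 12

# Solve A = P(1 + r/n)^(n*t) for the number of monthly compounding periods n*t:
periods = math.log(A / P) / math.log(1 + r / n)
years, months = divmod(round(periods), 12)
assert (years, months) == (38, 7)  # 38 years, 7 months
```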
| {"url":"http://openstudy.com/updates/50ec8172e4b07cd2b64901bf","timestamp":"2014-04-18T21:29:29Z","content_type":null,"content_length":"37875","record_id":"<urn:uuid:95e0d262-fba1-4231-877d-3dab1e9e0dc8>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00261-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SOLVED] Stuck in Proof of Laws of Syllogism and Absorption in Propositional Logic!
June 2nd 2008, 01:10 AM
[SOLVED] Stuck in Proof of Laws of Syllogism and Absorption in Propositional Logic!
Hello everyone, great that this forum exists, I was just getting so hopeless...
Well, I am badly stuck with the following problems; both are from Rosen section 1.2. I can't continue any further and am still only in section 1.2.
Please help. (Later I'll also post my steps towards the solution; I would like to know what went wrong.)
1) Prove that following statement is a tautology
(A -> B) /\ (B -> C) -> (A -> C)
Prove without using Truth Tables.
2) A \/ (A /\ B) = A
Please help, I am badly stuck, especially since I had given a lot of time to the first one.
June 2nd 2008, 04:07 AM
Where are the references in the proof from?
Thanx Angel White! Thanx a lot! But where are the references from! I mean Table references and page references! Is it from Rosen itself.?
June 2nd 2008, 05:27 AM
they are from Rosen's Discrete Mathematics and its Applications, 6th edition. I assumed we were using the same book. I can scan the pages for you if we are not; I expect you should have something
similar in your book. Also, if you have a rule called the "chain rule" in yours, you can skip the majority of the steps I did and do
(A -> B) /\ (B -> C) -> (A -> C)
= (A->C)->(A->C)
= ~(A -> C) V (A -> C)
= T
Edit: I have done something wrong in my proof, I am looking it over now.
June 2nd 2008, 06:07 AM
Some things wrong(As you said)
angel.white Thanx again for the clarification:
Unfortunately I am using Rosen 4th Edition,
anyways that version difference does not much matters if I understand it :
But I am having some doubts:
My derivation would go like this
(A -> B) /\ (B -> C) -> (A -> C)
~[(~A V B) /\ (~B V C)] V (A -> C)
[~(~A V B) V ~(~B V C)] V (A -> C)
[(A /\ ~B) V (B /\ ~C)] V (A -> C)
Here on I get AND in the statement and thus on distributing it further it gets very complicated leading me no where.
Where am I wrong here?
Also I don't have that chain rule in any table, could you please mention it here.
About the absorption law: Table 6 in my version of the book mentions something else, and nowhere does it mention any absorption law. In fact, the full question is the following:
a) [p V (p /\ q)] <=> p
b) [p /\ (p V q)] <=> p
Thus I have to prove both of them; I can't use one for the other until I first prove the first one.
Thanx a lot man for the help....
June 2nd 2008, 06:13 AM
Okay, I am not sure how to fix the first proof, but it is wrong, do not use it.
(A->B) ^ (B->C)
is equivalent to (A->C)
unfortunately Rosen doesn't give me much to work with to prove this (since we can't use a truth table, and I'm assuming you can't use hypothetical syllogism from 1.6, which is what we called the
"chain rule" in my logic class).
If we were doing logic, we could use an assumption, where you assume A, which gives you B with modus ponens, which gives you C with modus ponens. So if A then C. But I don't know how to prove it
with the tools Rosen has given.
If anyone else can prove (A->B) ^ (B->C) using only the attached tables, I would appreciate the aid.
June 2nd 2008, 06:19 AM
No, my proof was wrong, I have removed it.
Can you tell me what tools you have available to you in 1.1 - 1.2? Without truth tables, hypothetical syllogism, and assumption (I don't know what Rosen calls assumptions, if he uses it at all),
I do not know how to prove this. Perhaps some of those tools are available to me but I do not realize it due to version differences, or perhaps someone more capable than I will be able to prove
it using only the tables I have attached.
June 2nd 2008, 06:25 AM
Thanx for the tables Man... Huh I realize my book is too old, well I will keep on trying with these new tables.... and I also realize I am pretty poor in Logic.... I need a detailed course....
Thanx again for the help Angel White
June 2nd 2008, 06:41 AM
I have attached all the tables found in section 1.2.
In fact there are none in section 1.1; mainly it deals with logical puzzles and inconsistency. I enjoyed solving those problems and was very excited... but....
With your tables I guess we can go a bit further... I'll post them here...
June 2nd 2008, 06:45 AM
I don't see hypothetical syllogism in section 1.1 and 1.2 (In fact I still don't know what it is!)
but it's okay if you can prove it by using your own tools.... I will refer to other resources......
Thanx again...
June 2nd 2008, 06:29 PM
I was just wondering, Angel White! Is the same question removed from the exercises of Rosen section 1.2 in the 6th Edition of the book? In my book, it's there as question no. 8 b), where he asks to
prove this with a truth table, and in question number 9 he asks for each of the propositions of question no. 8 to be proved a tautology without using truth tables.
June 3rd 2008, 12:33 AM
Okay, I can prove it in my own words
First, let us examine the antecedent as its own proposition:
1. (A -> B) ^ (B -> C)
This is saying that these two are true, which means that each must be true individually, using simplification we break the and statement into it's component parts.
2. A->B ...by simplification
3. B->C ...by simplification
Now let us assume that A is true and see what this would mean. We place an "a" in front of the step number to show that this is an assumption, it is true only if the assumption is true where
steps without the "a" are true always. We cannot mix these steps after we finish assuming A is true, that is why we mark them.
a4. A ...assumption
Because A is true, and we know that if A is true, then B is true, we know B must be true. This is called modus ponens.
a5. B ...modus ponens of 2 and 4
a6. C ...modus ponens of 3 and 5
So now we know that when we assume A is true, C must be true as well. But we don't actually know that A is true, so we must place it into a conditional statement
7. A->C ...conditional proof steps 4-6
Okay, now that we know when (A -> B) ^ (B -> C) is true, A->C is also true. Let us name this "hypothetical syllogism" (we called it "chain rule" in my logic class), meaning that if the consequent
(the "then" part) of one if-then statement is the same as the antecedent (the "if" part) of another if-then statement, then we can consolidate the statements as we have.
Now let us examine Rosen's tautology.
1. (A -> B) ^ (B -> C) -> (A->C)
we know that if the antecedent is true, then A->C must be true, that is what we just proved up above. So we can replace them.
2. (A->C) -> (A->C) ...hypothetical syllogism
3. ~(A->C) V (A->C) ...logical equivalence (this is the first step on table 7 that I attached, I can go into more detail about it if you like. If you have a hard time seeing it, replace A->C with
a single character, and examine the step again, it should be more obvious)
And this statement is saying that A->C is either not true, or it is true. This is a tautology, because it can only be true or false, and both of these options will make the (or) statement true.
4. T ...Negation laws (Table 6 that I attached in the last post)
Okay, so that is how I would explain it in my own words, I don't know what Rosen wanted you to do, it may be possible to prove it using the tools that he provided, but I seemed to keep getting
stuck when I tried. If this is for your own edification, and you understand it, then thats probably good enough (up to you, I suppose, you'll get at least two more opportunities to revisit logic
in this book, because set theory and boolean algebra are basically just alternative notations with simplified operators), but if it is for a course in school, your instructor may not accept it as
it uses tools that Rosen hasn't taught yet.
June 3rd 2008, 03:09 AM
Thanx Angel White! Thanx a lot! At least I have a proof. It's not for my homework, but for my own understanding; I am actually doing a self-study! Great! Thanx a lot.
But I have another additional question related to this?
The rules you used, that is, assumption and modus ponens etc., are not there in Rosen; why are they not there? I mean, is it not the most basic of Logic, symbolic logic? Basically I never studied
Logic in High School, so this is my first introduction, but I am surprised by the various ways the material is presented in various books and sites.
For ex.
a) Rosen doesn't have all these rules you used to do the derivation.
b) Schaum's Outline in Logic also doesn't have these rules, but they do have other things, about building the tree and branches to solve the logic problem.
c) While in another book I found they mentioned 12 rules similar to the ones you used.
So where do I stand? Do I need to start somewhere else?
I am confused by so many terms, not knowing their difference, like "Symbolic Logic", "Propositional Logic", "Predicate Logic" etc.
What are the differences and which is a good place to start considering no preveious background in Logic.
Thanx a lot
June 3rd 2008, 04:46 AM
I do not know the differences between those names, but my experience is that all logic courses are basically similar. My philosophy logic course had tools that the discrete mathematics course did
not (we also focused more heavily on logical proofs in that class), but then varied towards how to use the logic in sentences and arguing. In Discrete math and other CS courses, it has been used
to enhance understanding, i.e., sets make much more sense when you understand logic, creating circuits is essentially an exercise in basic logic (though they introduce some new operators, but these
operators can be expressed through the operators you already know).
Some of what I talked about will be in Rosen's book in later sections, modus ponens will surely be in there, though I don't think the conditional proof (the assumption) will be, I don't remember
it, at least.
Also, some courses differ in how they introduce the domain of the variable. In Rosen's course, you use quantifiers, in my logic class, we introduced a new variable.
For example, Rosen might say "for every x, if x is y then x is z" where x is defined as "all humans"
But in my philosophy course, we would say "if x is (a human) and x is y, then x is z"
I personally prefer Rosen's method, here, not only is it useful later on with what you will learn, but it seems to result in simpler notation (once you understand how it works, at least).
I think your logic will be fine with Rosen's book, he does a pretty good job, once you understand it, you will not need to reference the tables, if you practice it a little you will be able to
see it (note that seeing it and proving it don't always go hand in hand). If your goal is to learn logic, it might be wiser to find a book which teaches logic specifically for the purpose of
learning logic, whereas Rosen's book teaches it for the specific purpose of it's use in Mathematics. This different focus doesn't change the logic, but it does change how the material is
presented, and what is emphasized. The nice thing about Rosen's book is that it does a good job of hitting logic in one chapter, I took logic as a philosophy course at the same time I took
Discrete Mathematics 1, we hit that chapter and moved on in discrete math, while the philosophy course took the entire first half of the semester to cover it. In some ways it was nice, because
the philosophy course was mostly review after that, but in other ways it was frustrating, because they used different notation and did some things a little differently, which kept messing me up.
If you're learning for your own benefit, I'll also suggest using a mix of notation, there are a number of ways to represent each operator, and I think some are easier to see than others. For
example I always get confused when using \/ for or and /\ for and, that was part of my problem earlier, so when I do my own logic now (it is actually quite useful for daily life) I use a
multiplication sign * to represent it, this is consistent with Boolean operators. I also prefer a tilde ~ for not, it is easier to get the proportions correct and takes less time to write.
Sometimes I use a bar for not, also, which is also consistent with boolean operators, the nice thing about a bar is that it makes it very clear what is being negated, which is quite useful when
you have nested negations.
Don't get too bogged down by Rosen's book, some chapters are a bit heavy, I found it helpful to keep it light, don't focus on learning every single thing absolutely thoroughly, but instead on
getting a breadth of knowledge, being able to do most of the work, having general understanding. It is not always easy to tell what is important, but this approach seems to be more effective (for
me at least, because otherwise my obsessiveness bogs me down and I never move on). The more you do in the book, and the more you use some of these things, the knowledge catches up nicely. Also,
there is very little flow between chapters, meaning 1 and 2 aren't necessarily related, I think he has a hierarchy of what chapters are required for what other chapters, but this allows you to
skip around some if you are more interested in one over another. The other nice thing about chapters which are not directly related is that if you are frustrated with the chapter you are on, you
know you only have to finish it before you will move on to other material. Also, the later chapters are much easier than the earlier chapters, so you have that to look forward to as well :) But
then again, the later chapters are much more computer science oriented, while the earlier chapters are much more mathematically oriented, so depending on your goals, you may choose not to focus
on the later chapters. At my school, Discrete Math 1 is both a Math and a CS course, but Discrete Math 2 is only a CS course.
Anyway, good luck!
June 3rd 2008, 07:50 AM
Thanx for such a detailed explanation, Angel White! It was wonderful, thanx a lot again... as you say, I'll continue with Rosen and won't get bogged down... you are right there....
Thanx a lot.
June 3rd 2008, 08:38 AM
Ha I proved it!
I think I was going in the right direction but giving up too soon...
I should have continued expansion.....
T stands for true/ tautology
(A -> B) /\ (B -> C) -> (A -> C)
<=> ~[(~A V B) /\ (~B V C)] V (A -> C)
<=> [~(~A V B) V ~(~B V C)] V (A -> C)
<=> [(A /\ ~B) V (B /\ ~C)] V (A -> C)
<=> [(A /\ ~B) V (B /\ ~C)] V (~A V C)
<=> [{B V (A /\ ~B)} /\ {~C V (A /\ ~B)}] V (~A V C)
<=> [(B V A) /\ (B V ~B) /\ (~C V A) /\ (~C V ~B)] V (~A V C)
<=> [(B V A) /\ T /\ (~C V A) /\ (~C V ~B)] V (~A V C)
<=> [(B V A) /\ (~C V A) /\ (~C V ~B)] V (~A V C)
<=> [{~A V (B V A)} /\ {~A V (~C V A)} /\ {~A V (~C V ~B)}] V [{C V (B V A)} /\ {C V (~C V A)} /\ {C V (~C V ~B)}]
<=> [T /\ T /\ {~A V (~C V ~B)}] V [{C V (B V A)} /\ T /\ T]
<=> [~A V (~C V ~B)] V [C V (B V A)]
<=> ~A V (~C V ~B) V C V (B V A)
<=> ~A V ~C V ~B V C V B V A
<=> (~A V A) V (~B V B) V (~ C V C)
<=> T V T V T
<=> T | {"url":"http://mathhelpforum.com/discrete-math/40355-solved-stuck-proof-laws-syllogism-absorption-propositional-logic-print.html","timestamp":"2014-04-21T00:06:19Z","content_type":null,"content_length":"32608","record_id":"<urn:uuid:a7870a87-4e18-4e73-a3ea-68f602499cdb>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
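Both of the original questions can also be confirmed by brute force over every truth assignment, which is effectively an automated truth table. This check is mine, not from Rosen:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is ~p V q."""
    return (not p) or q

# 1) (A -> B) /\ (B -> C) -> (A -> C) holds for all 8 assignments:
assert all(
    implies(implies(A, B) and implies(B, C), implies(A, C))
    for A, B, C in product([False, True], repeat=3)
)

# 2) Absorption: A \/ (A /\ B) has the same value as A for all 4 assignments:
assert all((A or (A and B)) == A for A, B in product([False, True], repeat=2))
```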
Using Quotient Rule to find y'
November 17th 2010, 07:54 AM #1
Sep 2009
Using Quotient Rule to find y'
Hey guys!
The following question asks me to find y' via the quotient rule.
I know the quotient rule is:
$[f'(x)*g(x) - f(x)*g'(x)]/g(x)^2$
so I got:
$[3x^2+2]\cdot[4x^2-\ln(x)]^2 - [x^3-2x]\cdot 2[4x^2-\ln(x)]$ divided by $[4x^2-\ln(x)]^4$
What I don't understand is, in the solution key, they throw $(8x-1/x)$ in there.... where do they get that from?
November 17th 2010, 08:04 AM #2
$\displaystyle \frac{d}{dx} (4\ x^{2} - \ln^{2} x)^{2} = 2\ (4\ x^{2} - \ln^{2} x)\ (8\ x -\frac{2}{x}\ \ln x)$
Kind regards
November 17th 2010, 08:20 PM #3
Sep 2009 | {"url":"http://mathhelpforum.com/calculus/163555-using-quotient-rule-find-y.html","timestamp":"2014-04-19T17:23:18Z","content_type":null,"content_length":"35350","record_id":"<urn:uuid:2debed28-ec2e-4070-b921-78a540a74109>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00030-ip-10-147-4-33.ec2.internal.warc.gz"} |
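The thread never shows the original function in full. Assuming the denominator the asker worked with was [4x^2 - ln(x)]^2, the solution key's (8x - 1/x) is just the chain-rule derivative of the inner expression u(x) = 4x^2 - ln(x). A numeric spot check of this reading (my sketch, with an assumed inner function):

```python
import math

# Chain rule: d/dx [u(x)]^2 = 2 u(x) u'(x), with u(x) = 4x^2 - ln(x)
# and u'(x) = 8x - 1/x, the factor the solution key introduces.
def g(x):
    return (4 * x**2 - math.log(x)) ** 2

def g_prime(x):
    u = 4 * x**2 - math.log(x)
    return 2 * u * (8 * x - 1 / x)

x, h = 2.0, 1e-6
numeric = (g(x + h) - g(x - h)) / (2 * h)  # central finite difference
assert abs(numeric - g_prime(x)) < 1e-4
```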
Click on a 'View Solution' below for other questions:
Choose the appropriate symbol to compare the fractions. View Solution
2/9 ___ 8/9
Find the greatest fraction.
5/16, 5/6, 6/15, and 1/6 View Solution
Identify the smallest mixed number.
8 2/5, 3 2/5, 9 2/5, 7 2/5 View Solution
Tommy and William went to a coffee shop and ordered two cups of coffee. While Tommy completed drinking 3/16th of his coffee, William finished 5/16th of his coffee. Who drank more? View Solution
Pick the appropriate symbol to compare the fractions. View Solution
7/3 ____ 7/15
Katie gave 1 2/3 pints of milk to Stephanie and 5/4 pints of milk to Nathan. Who got more milk, Stephanie or Nathan? View Solution
Which group orders the fractions 4/9, 8/9, and 7/9 from the greatest to the least? View Solution
Compare 3/13 and 4/13 using one of the symbols <, >, or =. View Solution
Find the greatest fraction:
1/2, 2/3, 3/2, and 1/3. View Solution
Which is the smaller fraction, 1/2 or 4/5? View Solution
Ursula brought two identical cakes and gave one cake to Jeff and another to Sunny. Jeff ate 4/7th of the cake and Sunny ate 1/2 of the cake. Who ate a bigger portion? View Solution
Which is the smaller improper fraction, 14/3 or 7/2? View Solution
Find the greater of the two fractions, 4/13 and 4/16. View Solution
Which is smaller of the two fractions, 17/3 or 5 5/6? View Solution
Order the fractions 3/17, 5/17, 4/17 and 6/17 in the least to the greatest order. View Solution
Wilma gave 3 6/7 pints of milk to Irena and 2 5/8 pints of milk to Nathan. Who got more milk, Irena or Nathan? View Solution
Tim and William went to a coffee shop and ordered two cups of coffee. While Tim completed drinking 2/9th of his coffee, William finished 4/9th of his coffee. Who drank more? View Solution
Which of the models is the greater fraction? View Solution
Order the fractions 35/11, 38/11 and 32/11 from the greatest to the least. View Solution
Arrange the fractions 36/13, 26/13 and 15/13 in the decreasing order. View Solution
Which of the models represents a smaller fraction? View Solution
Order the fractions 2 1/3, 2 4/7 and 2 1/4 from the greatest to the least. View Solution
Choose a model that represents an improper fraction less than 16/10. View Solution
Choose a model that represents a fraction greater than 3/6. View Solution
Choose a model that represents a fraction greater than 5/8. View Solution
Choose a model that represents a fraction smaller than 16/8. View Solution
Order 2/5, 2 2/8, 0.07, and 70% in descending order. View Solution
Pick the appropriate symbol to compare the fractions. View Solution
5/2 ____ 3/7
Which group orders the fractions 2/5, 4/5, and 3/5 from the greatest to the least? View Solution
Order the fractions 2/13, 4/13, 3/13 and 5/13 in the least to the greatest order. View Solution
Find the greatest fraction.
3/14, 3/4, 4/13, and 1/4 View Solution
Identify the smallest mixed number.
4 3/10, 1 3/10, 5 3/10, 3 3/10 View Solution
Choose the appropriate symbol to compare the fractions. View Solution
3/13 ___ 12/13
Which of the models represents a smaller fraction? View Solution
Choose a model that represents a fraction greater than 4/6. View Solution
Find the greatest fraction.
2/13, 2/3, 3/12, and 1/3 View Solution
Identify the smallest mixed number.
8 1/3, 4 1/3, 9 1/3, 7 1/3 View Solution
George and Ed went to a coffee shop and ordered two cups of coffee. While George completed drinking 3/16th of his coffee, Ed finished 5/16th of his coffee. Who drank more? View Solution
Which of the models is the greater fraction? View Solution
Choose a model that represents a fraction smaller than 5/8. View Solution
Pick the appropriate symbol to compare the fractions. View Solution
7/3 ____ 6/13
Quincy gave 3 6/7 pints of milk to Wilma and 2 5/8 pints of milk to Charlie. Who got more milk, Wilma or Charlie? View Solution
Choose the models that represent a fraction smaller than 15/8. View Solution
Which group orders the fractions 5/11, 10/11, and 9/11 from the greatest to the least? View Solution
Choose the models that represent a mixed number greater than 10/4. View Solution
Choose the models that represent improper fractions less than 16/10. View Solution
Compare 2/9 and 4/9 using one of the symbols <, >, or =. View Solution
Find the greatest fraction:
3/10, 4/13, 13/10, and 3/13. View Solution
Which is the smaller fraction, 1/2 or 5/6? View Solution
Which is the smaller improper fraction, 17/3 or 9/2? View Solution
Identify the greatest fraction.
4 2/13, 5 5/14, 3 4/16, 6 3/12 View Solution
Find the greater of the two fractions, 4/14 and 4/18. View Solution
Which is smaller of the two fractions, 9/2 or 4 2/3? View Solution
Order the fractions 1/11, 3/11, 2/11 and 4/11 in the least to the greatest order. View Solution
Order the fractions 24/17, 25/17 and 23/17 from the greatest to the least. View Solution
Arrange the fractions 3 10/11, 2 10/11 and 1 7/11 in the decreasing order. View Solution
Which fraction is greater, 3/4 or 2/3? View Solution
Nina brought two identical cakes and gave one cake to Matt and another to Tony. Matt ate 4/7th of the cake and Tony ate 1/2 of the cake. Who ate a bigger portion? View Solution
Order the fractions 3 1/2, 3 3/5 and 3 1/3 from the greatest to the least. View Solution
Diane gave 3 6/7 pints of juice to Rachel and 2 5/8 pints of juice to Brian. Who got more juice, Rachel or Brian? View Solution
Tim and Chris went to a coffee shop and ordered two cups of tea. While Tim completed drinking 2/9th of his tea, Chris finished 4/9th of his tea. Who drank more? View Solution
Choose a model that represents an improper fraction greater than 16/10. View Solution
Which of the models is the smaller fraction? View Solution
Which of the models represents a greater fraction? View Solution
Choose a model that represents a fraction greater than 16/8. View Solution
Choose a model that represents a fraction smaller than 3/6. View Solution
Choose a model that represents a fraction smaller than 5/8. View Solution
Which of the following sets is arranged from the least to the greatest? View Solution
Which of the following sets is arranged from the least to the greatest? View Solution
Which of the following sets is arranged from the least to the greatest? View Solution
Which of the following sets is arranged from the least to the greatest? View Solution
Which of the following sets is arranged from the least to the greatest? View Solution | {"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxdxbgdfmxkhheb&.html","timestamp":"2014-04-16T10:10:17Z","content_type":null,"content_length":"129111","record_id":"<urn:uuid:48a1b1e1-fe71-4aed-8f0e-b1fa04a736c6>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
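Comparisons like the ones in this worksheet can be done exactly with Python's fractions module; the specific values below are illustrative:

```python
from fractions import Fraction

# Same-denominator comparison: 2/9 < 8/9
assert Fraction(2, 9) < Fraction(8, 9)

# The greatest of 5/16, 5/6, 6/15, and 1/6 is 5/6:
assert max(Fraction(5, 16), Fraction(5, 6), Fraction(6, 15), Fraction(1, 6)) == Fraction(5, 6)

# Descending order of 35/11, 38/11, 32/11:
assert sorted([Fraction(35, 11), Fraction(38, 11), Fraction(32, 11)], reverse=True) == \
    [Fraction(38, 11), Fraction(35, 11), Fraction(32, 11)]
```

Fraction reduces to lowest terms and compares via cross-multiplication, so no floating-point rounding is involved.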
Note on Relation between Double Laplace Transform and Double Differential Transform
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 535020, 7 pages
Research Article
Mathematics Department, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
Received 12 May 2013; Accepted 8 June 2013
Academic Editor: Bessem Samet
Copyright © 2013 Hassan Eltayeb. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Double differential transform method has been employed to compute double Laplace transform. To illustrate the method, four examples of different forms have been prepared.
1. Introduction
The concept of the DTM was first proposed by Zhou [1], who solved linear and nonlinear problems in electric circuit analysis. Chen and Ho [2] developed this method for partial differential
equations and applied it to systems of differential equations. In recent years, many authors have used this method for solving various types of equations. For example, this method has been
used for differential algebraic equations [3], partial differential equations [4, 5], fractional differential equations [6], Volterra integral equations [7], and difference equations [8]. The main
goal of this paper is to extend the study of single Laplace transform by using differential transform (see [9]) and to compute double Laplace transform by means of double differential transform
method. As we know, the standard derivation of Laplace transforms involves an improper integration which may, in certain cases, not be analytically tractable. In contrast, the proposed straightforward approach requires only easy differentiations and algebraic operations. Four examples are proposed.
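The one-variable idea referenced above, computing a Laplace transform from differential-transform (Taylor) coefficients, can be sketched in a few lines. This is my own illustration, not from the paper; the test function f(t) = exp(a*t) and the particular values of a and s are assumptions:

```python
import math

# Differential transform of f(t) = exp(a*t): F(k) = a**k / k!
# Termwise Laplace transform: L{t^k} = k! / s**(k+1), so
# L{f}(s) = sum_k k! * F(k) / s**(k+1), which should equal 1/(s - a).
a, s = -1.0, 2.0
F = [a**k / math.factorial(k) for k in range(60)]
L = sum(math.factorial(k) * F[k] / s**(k + 1) for k in range(60))
print(L)   # ~ 1/(s - a) = 1/3
```

Only easy differentiations (here, known Taylor coefficients) and algebraic summation are needed, which mirrors the approach described above.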
The one-dimensional differential transform of the function is defined by the following formula: where and are the original and transform functions, respectively. The inverse differential transform of
is specified as follows: Consider an analytical function of two variables; then this function can be represented as a series in using differential transform and inverse double differential transform
From the definition of double Laplace transform, we can write where , and , is a complex value.
On using double direct and inverse differential transform with respect to and for both sides of the previous,
Lemma 1. Let and be finite positive integers such that , and , , , ; then
Proof. By using mathematical induction, letting , , we have Then ; since we conclude that for , , Consequently Similarly Assume that, for , to , , it holds that and also Now, we are going to prove
that By using the definition of polynomial, we have Thus
2. Relation between Double Laplace and Differential Transforms
In this section, we compute the double Laplace transform by means of double differential transform by proposing some examples as follows.
Example 2. Double Laplace transform of function ; consider In the next example, we apply double differential transform to compute double Laplace transform as follows.
Example 3. If we consider the function , then the double Laplace transform is given by From the properties of double differential transform, we have On using the definition of the Kronecker delta function, we have From the previous equation, we have where and are constant coefficients of the polynomials generated by and , respectively. According to the lemma, the last summations of (23), , and
are zeros, such that In the next example, we apply double integral transform as follows.
Example 4. If we consider the function , then double Laplace transform is given by By calculating the summation inside the bracket we have From the previous lemma, we know that, for , ,, and , we
have the following form: From the definition of infinite geometric series and the summation of the previous terms, we have
In the next example, we apply double differential transform to find double Laplace transform of the function as follows.
Example 5. The double Laplace transform of the function , as follows: By using series, we have By applying double differential transform, we have On using double inverse differential transform, we
have According to the previous lemma, we have By using the definition of we have So that
Also, we can use the same idea to compute double Laplace transform for convolution function, single or double.
The author gratefully acknowledges that this project was supported by King Saud University, Deanship of Scientific Research, College of Science Research Center. | {"url":"http://www.hindawi.com/journals/aaa/2013/535020/","timestamp":"2014-04-20T19:32:13Z","content_type":null,"content_length":"667868","record_id":"<urn:uuid:7e451fd1-5d5b-4281-84d7-1936a39e1d58>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
Word Problem Database
Addition and Subtraction Word Problems - 3 Digit Numbers
1. Mr. Larson paid $479 for a television set.
He had $126 left.
How much money did Mr. Larson have at first?
2. The difference between two numbers is 68.
The smaller number is 153.
What is the bigger number?
3. The sum of two numbers is 275.
One of the numbers is 149.
What is the other number?
4. There are 524 books in the children's section of the library.
146 were checked out over the weekend.
How many books were left?
5. The library has 215 books about sports.
There are also 157 books about science.
How many more books are there about sports? | {"url":"http://www.mathplayground.com/wpdatabase/Addition_Subtraction_3Digit_2.htm","timestamp":"2014-04-16T04:49:08Z","content_type":null,"content_length":"50185","record_id":"<urn:uuid:57d84042-9f46-4fb4-8373-28fef0d14a8b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00540-ip-10-147-4-33.ec2.internal.warc.gz"} |
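For reference, the answers to the five word problems above can be checked with a short script (my own check; the worksheet itself gives no answers here):

```python
# Answers to the five word problems above, computed directly.
answers = {
    1: 479 + 126,   # money Mr. Larson had at first
    2: 153 + 68,    # bigger number = smaller number + difference
    3: 275 - 149,   # the other addend
    4: 524 - 146,   # books left after the weekend checkouts
    5: 215 - 157,   # how many more sports books than science books
}
print(answers)   # {1: 605, 2: 221, 3: 126, 4: 378, 5: 58}
```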
Velocity Reviews - sum returns numpy.float64 when applied to a sequence of numpy.uint64
suzaku 12-15-2012 09:40 AM
sum returns numpy.float64 when applied to a sequence of numpy.uint64
I came across this question on StackOverflow today:
I'm not familiar with `numpy` but I'm curious about this, so I started doing some experiments.
This is what I have discovered so far:
1. when a `generator` is passed to `numpy.sum`, it falls back to using Python's built-in `sum`.
2. if elements of the sequence passed to `sum` is of type `numpy.uint64`, the result would be a number of type `numpy.float64`;
3. when I tried it with `numpy.int64`, the result is as expected: `numpy.int64`.
I guess the reason may be that we don't really have a `64 bits unsigned integer` in Python, so the numbers get converted to something different. And if so, I have no idea why it chose `float64` as the result type.
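A short experiment makes the promotion rule visible (my own sketch, not from the thread). Built-in `sum` starts from the Python int `0`, and no 64-bit integer dtype can hold both the `uint64` range and negative values, so NumPy promotes the mix to `float64`. (Note: NumPy 2.0's NEP 50 changed how Python int scalars promote, so the dtype-to-dtype rule below is the stable way to see it.)

```python
import numpy as np

# uint64 mixed with a signed 64-bit integer: there is no common 64-bit
# integer type, so NumPy falls back to float64.
print(np.promote_types(np.uint64, np.int64))   # float64
print(type(np.uint64(1) + np.int64(1)))        # <class 'numpy.float64'>

# With int64 throughout there is no such conflict:
print(np.promote_types(np.int64, np.int64))    # int64
```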
| {"url":"http://www.velocityreviews.com/forums/printthread.php?t=955512","timestamp":"2014-04-17T13:28:25Z","content_type":null,"content_length":"4501","record_id":"<urn:uuid:98d638a8-4723-4fd2-b887-11022f3b8445>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
Phone Book Trick
Certainly, you can use the memory techniques from here to memorize the phone book, but why? How many times can you really show that?
This trick will imply that you’ve memorized the phone book.
Imagine this: You hand out 9 cards or slips of paper, each with a different digit from 1 to 9, and ask your spectator to mix them up. He is then to hand any 3 to one person, any 3 of the remaining 6 to another person, and keep the last 3. With the help of the spectators, you generate a random equation, and ask the spectators to total it up. Once you're given the total, you instantly recall a
number in the local phone book ending with those digits!
First, the mathematical part:
If you’re using a deck of cards, just get out the Ace through 9 of any suit. Otherwise, use slips of paper with 1 through 9 written on them.
They are mixed up by the spectators, and split among 3 people as described above.
You ask the spectator farthest to your left to choose any of their three digits, and call it out. You write it down as the hundreds digit of a number. You ask the person in the middle for any one of
their numbers, and you write that down as the tens digit. Finally, you ask the rightmost spectator for any one of their numbers, and write that down as the ones digit of the first number.
As the numbers are given to you, you take them back, so they can’t call the same number twice.
The above process is repeated twice more, to generate two more 3-digit numbers. These three 3-digit numbers are then added up.
Let’s say person A wound up with cards 3, 4 and 6, spectator B wound up with cards 1, 7 and 8, and spectator C wound up with 2, 5 and 9. Whichever order the digits are called out in, whether the last number formed is 675 or 389 or some other arrangement, the three numbers they create total 1,476.
It seems like this process could generate an impossibly large amount of numbers. Actually, with the numbers 1 through 9 used to create three 3-digit numbers like this, you can only arrive at 199
different totals.
The only possible totals you can generate are the multiples of 9, ranging from 774 to 2,556 (every multiple of 9 in between is possible).
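The count is easy to verify by brute force. This enumeration (my own check, not part of the original article) walks every way the nine cards can be dealt into the hundreds, tens, and ones columns:

```python
from itertools import combinations

digits = set(range(1, 10))
totals = set()
for a in combinations(sorted(digits), 3):        # person A's cards: hundreds column
    rest = digits - set(a)
    for b in combinations(sorted(rest), 3):      # person B's cards: tens column
        c = rest - set(b)                        # person C's cards: ones column
        totals.add(100 * sum(a) + 10 * sum(b) + sum(c))

print(len(totals), min(totals), max(totals))
```

Since (2,556 - 774) / 9 + 1 = 199, there are 199 multiples of 9 in this range.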
If you’re comfortable linking and memorizing numbers, you need to create a list of phone numbers in the local phone book that end in 0774, 0783, 0792, and so on, up to 2556.
Using a reverse phone lookup utility on the internet, combined with the zip code and prefixes for the area, you can actually generate a list of suitable numbers and their associated names with
minimal hassle. Don’t forget to make sure that each name is actually printed in the current edition of the phone book you’ll be using!
Once you have the list, you need to make the links from the numbers to the names. 199 links can be a challenge, so don't try this if you're just starting out in memory.
Obviously, this feat works better for big shows in larger metropolitan areas for which you have time to prepare with the local phone book.
The funny thing is that, while you’re actually doing an impressively large memory feat, you get credit for doing a memory feat on a far larger scale!
No, you won’t use this feat all the time. However, used at the right time and right place, you’ll leave a lasting impression!
| {"url":"http://memorymentor.com/blog/mental-tips-tricks/phone-book-trick/","timestamp":"2014-04-18T16:38:13Z","content_type":null,"content_length":"32844","record_id":"<urn:uuid:e3fb3ecd-a7aa-46c9-84f5-817fb4b25a5b>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
April 28th 2009, 11:15 AM
So this is something that came from a problem we did in probability. I hope it's a correct result, since I modified it a bit.
So I know the probabilitistic way to prove it. If you can find, then do it (but you'd have to know where you're going).
Otherwise, I don't know if there is a calculus approach lol, hence this thread.
Find $\int_{\mathbb{R}} \frac{1}{(1+u^2)^k} ~du$
for any positive integer k.
Answer should be :
April 28th 2009, 01:07 PM
One method would be to adapt the reduction formula in this thread (modifying it so as to work for the interval $(-\infty,\infty)$ instead of [0,1]).
April 28th 2009, 02:10 PM
another way: letting $t=\sin^2 \theta$ in the beta function formula we get $B(x,y)=2 \int_0^{\frac{\pi}{2}} (\sin \theta)^{2x-1} (\cos \theta)^{2y-1} \ d \theta, \ \ x > 0, \ y> 0.$
now suppose $\frac{1}{2} < k \in \mathbb{R}$ and let $u=\tan \theta.$ then: $I_k=\int_{\mathbb{R}} \frac{du}{(1 + u^2)^k}=2 \int_0^{\frac{\pi}{2}} \cos^{2k-2} \theta \ d \theta=B \left(\frac{1}{2}, k -\frac{1}{2} \right).$ therefore:
$I_k=\frac{\Gamma (\frac{1}{2}) \Gamma (k - \frac{1}{2})}{\Gamma(k)}= \frac {\sqrt{\pi}\Gamma (k - \frac{1}{2})}{\Gamma(k)}.$
April 29th 2009, 09:52 AM
So this is something that came from a problem we did in probability. I hope it's a correct result, since I modified it a bit.
So I know the probabilitistic way to prove it. If you can find, then do it (but you'd have to know where you're going).
Otherwise, I don't know if there is a calculus approach lol, hence this thread.
Find $\int_{\mathbb{R}} \frac{1}{(1+u^2)^k} ~du$
for any positive integer k.
Answer should be :
Here is another way (I'm in complex this term so here is goes)
Consider the closed arc in the complex plane. Let R be real and R > 1
$\gamma_1=-R+2R(t), 0 \le t \le 1$ and $\gamma_2=Re^{it}, 0 \le t \le \pi$
This forms a simple closed curve in the complex plane.
By the cauchy integral formula
$\oint \frac{1}{(z^2+1)^k}dz=\oint \frac{\frac{1}{(z+i)^k}}{(z-i)^k}dz=\frac{2\pi i}{(k-1)!}\frac{d^{k-1}}{dz^{k-1}}\left( \frac{1}{(z+i)^k}\right)_{z=i}=\frac{\pi}{2^{2k-2}}\left( \frac{(2k-2)!}{((k-1)!)^2}\right)$
Now if we consider
$\int_{\gamma_1} \frac{1}{(z^2+1)^k}dz +\int_{\gamma_2} \frac{1}{(z^2+1)^k}dz$
$\int_{-R}^{R} \frac{1}{(x^2+1)^k}dx +\int_{\gamma_2} \frac{1}{(z^2+1)^k}dz$
Note that on the circular arc $\left| \frac{1}{(z^2+1)^k}\right| \le \frac{1}{(R^2-1)^k}$
Using the ML estimate
$\left| \int_{\gamma_2} \frac{1}{(z^2+1)^k}dz\right| \le \frac{\pi R}{(R^2-1)^k}$
Now if we take the limit as $R \to \infty$
We end up with
$\int_{-\infty}^{\infty}\frac{1}{(1+x^2)^k}\,dx=\frac{\pi}{2^{2k-2}}\left( \frac{(2k-2)!}{((k-1)!)^2}\right)$ | {"url":"http://mathhelpforum.com/math-challenge-problems/86259-integral-print.html","timestamp":"2014-04-19T14:39:25Z","content_type":null,"content_length":"13425","record_id":"<urn:uuid:e6fa6a74-1039-4b18-ba67-069a1ab662a3>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00271-ip-10-147-4-33.ec2.internal.warc.gz"}
On the (Im)possibility of Obfuscating Programs
Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai, Salil Vadhan and Ke Yang
Informally, an obfuscator O is an (efficient, probabilistic) "compiler" that takes as input a program (or circuit) P and produces a new program O(P) that has the same functionality as P yet is
"unintelligible" in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic
encryption to complexity-theoretic analogues of Rice's theorem. Most of these applications are based on an interpretation of the "unintelligibility" condition in obfuscation as meaning that O(P) is a
"virtual black box," in the sense that anything one can efficiently compute given O(P), one could also efficiently compute given oracle access to P.
In this work, we initiate a theoretical investigation of obfuscation. Our main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. We prove this by
constructing a family of programs F that are unobfuscatable in the sense that (a) given any efficient program P' that computes the same function as a program P\in F, the “source code” P can be
efficiently reconstructed, yet (b) given oracle access to a (randomly selected) program P\in F, no efficient algorithm can reconstruct P (or even distinguish a certain bit in the code from random)
except with negligible probability.
We extend our impossibility result in a number of ways, including even obfuscators that (a) are not necessarily computable in polynomial time, (b) only approximately preserve the functionality, and
(c) only need to work for very restricted models of computation (TC^0). We also rule out several potential applications of obfuscators, by constructing “unobfuscatable” signature schemes, encryption
schemes, and pseudorandom function families. | {"url":"http://people.seas.harvard.edu/~salil/research/obfuscate-abs.html","timestamp":"2014-04-20T10:50:47Z","content_type":null,"content_length":"9463","record_id":"<urn:uuid:f5feedab-b184-431b-bec6-2057b54c5996>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
I think that I am overthinking it
July 12th 2008, 01:51 PM #1
Jul 2008
I think that I am overthinking it
it is straight out of my book:
Find the volume of the solid where the base is the given region and which has the property that each cross section perpendicular to the x-axis is a semicircle.
The region bounded by the parabola y = x^2 and the line 2x + y - 3 = 0
The answer is (64/15)pi but i don't know how to get it.
What are you getting and how are you getting it?
Domain is [-3,1]
Diameter is $((3-2x)-x^{2})$
Radius is $\frac{((3-2x)-x^{2})}{2}$
Area of Entire Circle is $\pi*\left(\frac{((3-2x)-x^{2})}{2}\right)^{2}$
Area of Semi-Circle is $\frac{1}{2}*\pi*\left(\frac{((3-2x)-x^{2})}{2}\right)^{2}$
That is how I have been working it. I have been getting (544/15)pi, I think.
it is straight out of my book:
Find the volume of the solid where the base is the given region and which has the property that each cross section perpendicular to the x-axis is a semicircle.
The region bounded by the parabola y = x^2 and the line 2x + y - 3 = 0
The answer is (64/15)pi but i don't know how to get it.
Did you draw the region? We need this to help us to determine the value for our radius [so we can find its volume]
The cyan colored part is the region we're dealing with.
We see that it has a width of $3-2x-x^2$.
Thus the radius of each circle would be $\frac{3-2x-x^2}{2}$
As a result, each semi circle will have an area of $\frac{\pi}{2}\cdot\bigg[\frac{3-2x-x^2}{2}\bigg]^2$
We also need to know the points of intersection:
$3-2x=x^2\implies x^2+2x-3=0\implies (x+3)(x-1)=0\implies x=-3 \ or \ x=1$
Thus, the volume of our solid would be $\frac{\pi}{8}\int_{-3}^1\bigg(3-2x-x^2\bigg)^2\,dx$
Can you take it from here? You will get $\frac{64\pi}{15}$ as your answer.
Hope that this makes sense!
Thank you. It makes perfect sense.
Please observe, Jazz, that you showed no work during this conversation. Don't do that. SHOW YOUR WORK! Trust me on this.
I understand everything except I keep getting the wrong answer. I keep getting $\pi*2011/240$. Could you walk me through your integration steps?
I understand everything except I keep getting the wrong answer.
You've got to see how silly that sounds. There is something you don't understand. Too bad no one knows what it is since you are showing no work.
Did you set up the integral?
Did you get the same integrand?
Did you pull out the constants or leave them inside the integral?
Did you square the trinomial or try to think of some other way to find the antiderivative?
Did you divide the integration into pieces or try to do it all in one piece?
SHOW YOUR WORK.
I expanded it out and got:
$\frac{\pi}{8}\int_{-3}^1\bigg(9 - 12x-2x^2 + 6x^3 + x^4\bigg)\,dx$
Then I integrated each part:
$9x-6x^2-\frac{2x^3}{3} + \frac{3x^4}{2} + \frac{x^5}{5}$
That gives me $\frac{-88}{15}$
Last edited by jazz836062; July 12th 2008 at 06:02 PM.
It's very close to what I have...I get $\int_{-3}^1 \bigg(3-2x-x^2\bigg)^2\,dx= \left.\left[9x-6x^2-\frac{2}{3}x^3+{\color{red}x^4}+\frac{1}{5}x^5\right]\right|_{-3}^1$
Can you try to figure out where you went wrong? Showing your steps may be helpful.
w00t!!! my 3
It should be ${\color{red}4}x^3$...
I am new to this kind of coding so I am very slow at posting.
Correct. Now integrate to get $9x-6x^2-\frac{2}{3}x^3+x^4+\frac{1}{5}x^5$. Now apply the FTC to evaluate the integral.
You should get $\frac{512}{15}$. Then multiply by $\frac{\pi}{8}$ to get the answer you're looking for.
Hope this helps.
Last edited by Chris L T521; July 12th 2008 at 06:14 PM. Reason: typo
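The definite integral can also be checked exactly with rational arithmetic (my own verification, outside the thread), using the correct expansion of the integrand:

```python
from fractions import Fraction as F

# (3 - 2x - x^2)^2 expanded: 9 - 12x - 2x^2 + 4x^3 + x^4
coeffs = [9, -12, -2, 4, 1]                       # c0 + c1*x + ... + c4*x^4

def antiderivative(x):
    # termwise power rule: integral of c*x^i is c/(i+1) * x^(i+1)
    return sum(F(c, i + 1) * x ** (i + 1) for i, c in enumerate(coeffs))

val = antiderivative(1) - antiderivative(-3)
print(val)                                        # 512/15
```

Multiplying by pi/8 then gives 64*pi/15, matching the book's answer.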
I am delighted at this excellent example of the value of showing one's work.
When you get to trigonometric substitutions, this might be a nice challenge problem to attempt WITHOUT multiplying out the trinomial. It's not pretty, but might be an interesting exploration.
| {"url":"http://mathhelpforum.com/calculus/43542-i-think-i-am-overthinking.html","timestamp":"2014-04-19T12:11:50Z","content_type":null,"content_length":"74785","record_id":"<urn:uuid:b877f4da-e741-45a0-8df6-177d91dfdb0a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
Lake Elsinore Math Tutors
...I have used AutoCad since 2006. From 2D to 3D CAD, I can engage students in the different forms of using and approaching the program efficiently. REVIT has increasingly become the design
program of choice moving toward the future.
12 Subjects: including algebra 2, reading, prealgebra, grammar
...I have experience in IT consulting at NASDAQ 100 companies including Intel, Microsoft, DIRECTV, and Adobe. The knowledge is the easy part, the transferring knowledge to the student is the
challenge. A teacher must understand how much the student has learned and what they need next.
33 Subjects: including prealgebra, geometry, chemistry, biology
...I'm nearly done with my undergraduate degree in Chemistry, but I've been tutoring students and colleagues since high school. My favorite part of tutoring, by far, is the reward of seeing
someone succeed in an area they were struggling with. When I walk in and a student shows me they got an A for the first time on a Chemistry test, it confirms my work and renews my love of the
9 Subjects: including calculus, precalculus, algebra 1, algebra 2
...We also focused on how students learn physics concepts and tailored our teaching plans to that knowledge. This experience significantly helped me grow as a teacher and improved my ability to
relate to students from different backgrounds. I tutored high school students when I was going to college myself.
21 Subjects: including algebra 2, statistics, differential equations, Turkish
...I know how to take a student and have them be fluent in their Mathematical skills and pass the test. I have an Electrical Engineering degree (BSEE) from University of California Irvine. I have
the textbooks from college and can review before tutoring.
28 Subjects: including differential equations, ACT Math, probability, SAT math | {"url":"http://www.algebrahelp.com/Lake_Elsinore_math_tutors.jsp","timestamp":"2014-04-18T14:12:42Z","content_type":null,"content_length":"24954","record_id":"<urn:uuid:ffe09b65-c2b9-42ef-bceb-5b8d79fbc2c8>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00492-ip-10-147-4-33.ec2.internal.warc.gz"} |
18.06 Spring 2006 : Problem Set #2
due 4PM Wednesday 2/22
(Asterisk means the solution is in the back of the book)
Section 2.6: 6*, 13, 16, 28
Section 2.7: 4*, 7, 11, 13, 17, 19
Section 3.1: 10, 19, 23*, 25*, 27
MATLAB PROBLEM: I hope this will be fun. Grab the surfer.m and pagerank.m files from http://www.mathworks.com/moler/ncmfilelist.html .
I was just playing with surfer.m. It's not as robust as I would have liked. When I run it on the 18.06 web page, it hangs on the java applets, and on other web pages it does not distinguish real
links from little gif's for bullets. If anyone writes, borrows, or steals a better surfer.m, please send to me (edelman@Mit.edu) and I will post. Note MATLAB has a jdk inside so it can be written in
matlab or java.
This one ran more cleanly: [u,g]=surfer('http://web.mit.edu/newsoffice',50)
It's fun to watch. A dot in (i,j) means that link j points to link i. The internet is a huge sparse matrix. g(i,j) is 1 or 0, there is a link or there is not. u(i) contains the name of the link.
Here is a poor man's pagerank. What is it doing?:
[a,b]=sort(full(sum(g,2)),'descend'); u(b)
In pagerank there is a line
x = (I - p*G*D)\e;
which is the solution of an nxn system with a sparse matrix.
See if you can find another link that gives a nice matrix: sparse but still lots of links, not just junky links or ones that hang. Compare this with pagerank's answer which uses fancy linear algebra
algorithms. Is pagerank in your opinion significantly better than the poor man's algorithm? Note there is no right answer here. Main point is to have fun and see how useful linear algebra can be.
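That backslash line solves a sparse linear system for the ranking vector. Here is a tiny pure-Python sketch of the same damped random-surfer idea (my own illustration: the 3-page graph is invented, and it uses power iteration rather than pagerank.m's direct solve of (I - p*G*D)x = e):

```python
# g[i][j] == 1 means page j links to page i, matching surfer.m's convention.
g = [[0, 1, 1],
     [1, 0, 0],
     [1, 1, 0]]
n, p = len(g), 0.85

out = [sum(g[i][j] for i in range(n)) for j in range(n)]   # out-degree of page j
x = [1.0 / n] * n
for _ in range(100):
    # with probability p follow a random outgoing link, else jump anywhere
    x = [(1 - p) / n
         + p * sum(g[i][j] * x[j] / out[j] for j in range(n) if out[j])
         for i in range(n)]
print(x)   # page 0, with the most inbound links, ranks highest
```

Comparing this ranking against the "poor man's" sort by raw link counts is exactly the experiment the problem asks for.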
Copyright © 2003 Massachusetts Institute of Technology | {"url":"http://web.mit.edu/18.06/www/Spring06/ps2.html","timestamp":"2014-04-20T13:40:53Z","content_type":null,"content_length":"2494","record_id":"<urn:uuid:9572b09b-4d9d-42a5-9603-620c120c2e5c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00530-ip-10-147-4-33.ec2.internal.warc.gz"} |
Teterboro Algebra 1 Tutor
...I have taught algebra 1 for three years and geometry for five years - two of the main subjects on the ACT test. I have also tutored algebra 2, pre-calculus, and SAT prep since the spring. I
have just started tutoring in ACT math prep as well so have some experience there and finally have good test taking and problem solving skills that I share with my students.
8 Subjects: including algebra 1, chemistry, geometry, algebra 2
I have an easy-going style which seems to work well with students who are anxious about science and math. I have been a chemistry teacher for 17 years at very well-regarded Ridgewood High School.
I have tutored all levels of chemistry and physics, including AP level.
7 Subjects: including algebra 1, chemistry, physics, algebra 2
...Since my other major was mathematics, I also feel very comfortable with the principles of logic. While majoring in math I covered probability in many of the courses I took, but it was most
prominent in discrete mathematics. We calculated specific probabilities, created situations that would have certain probabilities, and did many formal proofs that used probability.
22 Subjects: including algebra 1, calculus, trigonometry, algebra 2
I was a teaching assistant and a tutor for math while I was enrolled at my college university. I will work with the students to make sure they achieve all of the results that they are looking for.
I am also flexible with my price and am willing to work with all household income levels.
5 Subjects: including algebra 1, Microsoft Excel, general computer, Macintosh
...One on one attention at times works best and allows me as the tutor to focus on the weakness. I specialize in helping the younger generation prepare for their state exams and advancement to the
next grade. My rates are reasonable and I enjoy working with my students.
12 Subjects: including algebra 1, reading, writing, literature | {"url":"http://www.purplemath.com/teterboro_nj_algebra_1_tutors.php","timestamp":"2014-04-18T09:05:31Z","content_type":null,"content_length":"24157","record_id":"<urn:uuid:24a22c45-85b8-40a4-b745-f1428e0f2c98>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
"Proper distance" in GR
You are welcome yuiop, I definitely can use some practice in checking mistakes in formulas as the formulas I write myself are often full of mistakes.
I attempted to generalize the formulas to obtain the proper velocity of a free falling observer wrt a local stationary observer, in the attempt to generalize the, what Pervect calls, 'the
passionflower distance'. Then perhaps we can do the same thing but then for the 'Fermi distance'.
Apparently the velocity function can be parametrized, if I am not mistaken such thing was developed by Hartle (or perhaps he copied it before, I don't know that).
Basically the proper velocity wrt a local stationary observer of a radially free falling observer can be described as:
v=\sqrt{\frac{A^2-g_{tt}}{A^2}}
g_{tt}=1-\frac{2m}{r}
In the case the free fall is from infinity A simply becomes 1.
In the case the free fall has an initial velocity then A is calculated as:
A=\frac{1}{\sqrt{1-v^2}}
Now, if we assume the formula below as given by yuiop is both correct and not an approximation:
\frac{dr}{d\tau} = \sqrt{\frac{2m}{r} - \frac{2m}{R}}
then there is a catch.
For A smaller than 1 the free fall starts from a given height. I was trying to obtain the correct formula in order to plug in the right A in such a case, and here is where I got some surprises (I assume for the greater minds on the forum this is yet another demonstration of my lack of understanding). I did not succeed in expressing this in terms of r only. Initially I reasoned that if in some way I could 'subtract' the escape velocity at the given R and convert that into A, I would get the expected results. But that did not work; it turns out that A is no longer constant during the free fall from a given height. We could express it in the following way to get the desired results, but it is ugly:
A = -\frac{\sqrt{(-rR+2mR-2rm)\,R\,(-r+2m)}}{-rR+2mR-2rm}
This 'monstrosity' is not even real valued!
Now the interesting question is why, assuming again that the formula based on this is exact, does the parameter depend on r? Perhaps I made a mistake, or perhaps the formula to obtain the proper velocity from a given R is only an approximation?
For instance it is rather tempting to calculate A for a drop at h independently of r by using:
A=\sqrt{1-\frac{2m}{h}}
The result is close to yuiop's formula but not identical. The 'shape' of the slabs looks rather similar.
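One consistency check that can be run on the formulas in the thread (my own sketch, not from the posts): for a drop from rest at R, take the constant A = sqrt(1 - 2m/R). Combining it with v = sqrt((A^2 - g_tt)/A^2) gives A*v = sqrt(A^2 - g_tt) = sqrt(2m/r - 2m/R), i.e. yuiop's dr/dtau, exactly.

```python
import math

m = 1.0                     # geometric units
R = 10.0                    # free fall starts from rest at r = R
A = math.sqrt(1 - 2 * m / R)

for r in (9.0, 6.0, 4.0, 3.0):
    g_tt = 1 - 2 * m / r
    v = math.sqrt((A ** 2 - g_tt) / A ** 2)        # local velocity wrt static observer
    dr_dtau = math.sqrt(2 * m / r - 2 * m / R)     # yuiop's formula
    assert abs(A * v - dr_dtau) < 1e-12
print("A stays constant and A*v matches dr/dtau at every r")
```

Under this reading A stays constant during the drop. Solving v = sqrt(2m/r - 2m/R) for A instead (i.e. equating the local velocity itself with dr/dtau) gives A^2 = (1 - 2m/r)/(1 - 2m/r + 2m/R), which appears to be exactly the r-dependent expression above.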
Topological modular forms literature list
Links to other people's reference lists
Audio files and slides from Jack Morava's birthday conference
The Newton Institute proceedings
This page compiles a list of suggested reading for the upcoming Talbot workshop on the construction of tmf. Many of the documents here are preliminary versions of papers, which I have posted with the
kind permission of the authors. For subjects where I did not know any printed references, there are scanned notes from seminars and workshops, whose contents should be viewed with even more caution.
Suggested reading
On the Classical Constructions of Elliptic Cohomology
Segal: Bourbaki and the long version of the Bourbaki article
Jens Franke: On the construction of elliptic cohomology, Math. Nachr. 158, 1992, pp.43-65.
P.S. Landweber, D.C. Ravenel, R.E. Stong, Periodic cohomology theories defined by elliptic curves, Contemporary Mathematics 181, 1995, pp.317-337.
P.S. Landweber (Editor), Elliptic Curves and Modular Forms in Algebraic Topology: Proceedings, Princeton 1986, LNM 1326, Springer-Verlag, Berlin, 1988.
Matthew Greenberg's Master's thesis: Constructing elliptic cohomology
Survey talks and articles
Jacob Lurie: A Survey of Elliptic Cohomology
Mike Hopkins (ICM 2002): Algebraic Topology and Modular Forms
Haynes Miller (slides from a talk in Barcelona, 2002) Elliptic Cohomology: A Perspective and Some Prospects
Mike Hopkins (Notes from a talk at Santa Barbara): Algebraic Topology and Differential Forms
General papers and the construction of topological modular forms
Stefan Schwede's notes from the Muenster Conference: Mike Hopkins I, Mike Hopkins II, Matthew Ando, Charles Rezk, (some Charles Rezk and) Haynes Miller, Haynes Miller II, Paul Goerss I, Paul Goerss
Lars Hesselholt's notes from Mike Hopkins' 2004 class on tmf: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7, Part 8, Part 9, Part 10, Part 11, Part 12, Part 13, Part 14, Part 15.
John Rognes: Topologiske Modulaere Former
Hopkins, Miller: Elliptic curves and stable homotopy theory
Hopkins, Miller: On the Hopkins Miller theorem
Mike Hopkins (lecture notes): Course Notes for Elliptic Cohomology
Paul Goerss, Charles Rezk: Bonn lectures
Charles Rezk: Supplementary Notes for Math 152
M. Hopkins and M. Mahowald: From Elliptic Curves to Homotopy Theory, preprint, MIT and Northwestern University, June 1998
The book by Thomas: Elliptic cohomology
E.S. Devinatz and M.H. Hopkins Homotopy fixed point spectra for closed subgroups of the Morava Stabilizer groups
Charles Rezk Notes on the Hopkins-Miller Theorem
Mark Mahowald and Charles Rezk: topological modular forms at level 3.
Stacks and their role in homotopy theory
Paul Goerss: (Pre-)sheaves of ring spectra over the moduli stack of formal group laws
Mike Hopkins: Complex Oriented Cohomology Theories and the Language of Stacks
Bertrand Toen, Gabriele Vezzosi: "Brave New" algebraic geometry and global derived moduli spaces of ring spectra
Niko Naumann: Comodule categories and the geometry of the stack of formal groups
Niko Naumann: Quasi-isogenies and Morava stabilizer groups
Haynes Miller: Sheaves and the exact functor theorem
A seminar talk by Jacob Lurie on the role of stacks in homotopy theory and the Landweber exact functor theorem: Part 1 Part 2
Goerss-Hopkins obstruction theory
Paul Goerss and Mike Hopkins: Moduli spaces of commutative ring spectra
Paul Goerss and Mike Hopkins: Moduli Problems for structured ring spectra
K(1)-local topological modular forms
Gerd Laures: K(1)-local Topological Modular Forms
Mike Hopkins: K(1)-local E_oo Ring Spectra
K(2)-local topological modular forms
Some of this can be found in Hovey Strickland: Morava K-theories and localizations. This paper also is the only one with a phantom discussion which is strong enough to deal with the construction of
the Morava E-theories.
Another reference that speaks about the K(2)-local picture are the Bonn notes by Goerss-Rezk (above) and the string orientation paper by Ando-Hopkins-Rezk below.
Here are a few handwritten notes from a seminar talk on the topic:
Talk on the construction of the Hopkins-Miller spectra and an overview over what happened so far in the seminar and where we are going.
Talk on the construction of tmf, the K(2)-localization of tmf ...
... notes from a conversation with Mike ... and ... a second conversation with Mike ... (both on K(2)-local topological modular forms)
... and a third one ... on K(2) local topological modular forms and the construction of tmf.
Notes from conversations with Johan ... and ... Haynes.
Tilman Bauer: Computation of the homotopy of the spectrum tmf
Tilman Bauer: Elliptic cohomology and projective spaces - a computation -
Ando, Hopkins, Rezk: orientations
Ando, Hopkins, Rezk: MString --> tmf
Ando, Hopkins and Strickland: Elliptic Spectra, the Witten Genus and the Theorem of the Cube
Equivariant Elliptic Cohomology
Ginzburg, Kapranov, Vasserot: Elliptic Algebras and Equivariant Elliptic Cohomology
Ioanid Rosu: Equivariant elliptic cohomology and rigidity
David Gepner's thesis: Homotopy topoi and equivariant elliptic cohomology
Greenlees Hopkins Rosu: Rational circle equivariant elliptic cohomology
Jorge Devoto: Equivariant elliptic cohomology and finite groups
Matthew Ando: Equivariant Elliptic Cohomology and Rigidity
Ando, Basterra: The Witten Genus and Equivariant Elliptic Cohomology
Ando: The sigma orientation for analytic circle-equivariant elliptic cohomology
Power Operations and Hecke Operators
Charles Rezk (MIT notes): Notes on power operations
Matthew Ando: Power Operations in Elliptic Cohomology and Representations of Loop Groups
Ando, Hopkins, Strickland: The Sigma Orientation is an H_oo-map
The K(2)-local sphere
Mark Behrens: A modular description of the K(2)-local sphere at the prime 3
Goerss, Henn, Mahowald, Rezk: A resolution of the K(2)-local sphere at the prime 3
Geometric Approaches
Teichner, Stolz: What is an elliptic object?
Po Hu, Igor Kriz: Conformal Field Theory and Elliptic Cohomology
Baas, Dundas, Rognes: Two-Vector bundles and forms of elliptic cohomology
On Elliptic genera | {"url":"http://www.ms.unimelb.edu.au/~nganter/talbot/index.html","timestamp":"2014-04-20T20:54:31Z","content_type":null,"content_length":"12568","record_id":"<urn:uuid:100cddc6-f189-4a83-bf21-b5d036a45308>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00527-ip-10-147-4-33.ec2.internal.warc.gz"} |
Component failure analysis
The job of a technician frequently entails "troubleshooting" (locating and correcting a problem) in malfunctioning circuits. Good troubleshooting is a demanding and rewarding effort, requiring a
thorough understanding of the basic concepts, the ability to formulate hypotheses (proposed explanations of an effect), the ability to judge the value of different hypotheses based on their
probability (how likely one particular cause may be over another), and a sense of creativity in applying a solution to rectify the problem. While it is possible to distill these skills into a
scientific methodology, most practiced troubleshooters would agree that troubleshooting involves a touch of art, and that it can take years of experience to fully develop this art.
An essential skill to have is a ready and intuitive understanding of how component faults affect circuits in different configurations. We will explore some of the effects of component faults in both
series and parallel circuits here, then to a greater degree at the end of the "Series-Parallel Combination Circuits" chapter.
Let's start with a simple series circuit:
With all components in this circuit functioning at their proper values, we can mathematically determine all currents and voltage drops:
Now let us suppose that R[2] fails shorted. Shorted means that the resistor now acts like a straight piece of wire, with little or no resistance. The circuit will behave as though a "jumper" wire
were connected across R[2] (in case you were wondering, "jumper wire" is a common term for a temporary wire connection in a circuit). What causes the shorted condition of R[2] does not matter to us in this example; we only care about its effect upon the circuit:
With R[2] shorted, either by a jumper wire or by an internal resistor failure, the total circuit resistance will decrease. Since the voltage output by the battery is a constant (at least in our ideal
simulation here), a decrease in total circuit resistance means that total circuit current must increase:
As the circuit current increases from 20 milliamps to 60 milliamps, the voltage drops across R[1] and R[3] (which haven't changed resistances) increase as well, so that the two resistors are dropping
the whole 9 volts. R[2], being bypassed by the very low resistance of the jumper wire, is effectively eliminated from the circuit, the resistance from one lead to the other having been reduced to
zero. Thus, the voltage drop across R[2], even with the increased total current, is zero volts.
On the other hand, if R[2] were to fail "open" -- resistance increasing to nearly infinite levels -- it would also create wide-reaching effects in the rest of the circuit:
With R[2] at infinite resistance and total resistance being the sum of all individual resistances in a series circuit, the total current decreases to zero. With zero circuit current, there is no
electron flow to produce voltage drops across R[1] or R[3]. R[2], on the other hand, will manifest the full supply voltage across its terminals.
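The before/after figures quoted above (20 mA healthy, 60 mA with R[2] shorted, zero current with R[2] open) can be reproduced with a few lines of code. This is an illustrative sketch only: the resistor values R1 = 100 Ω, R2 = 300 Ω, R3 = 50 Ω are assumed, chosen because they are consistent with the 9-volt battery and the currents stated in the text, not taken from a schematic.

```python
# Before/after fault analysis of a series circuit: E = 9 V from the
# text; resistor values below are assumed, consistent with the stated
# 20 mA (healthy) and 60 mA (R2 shorted) totals.
E = 9.0
R1, R2, R3 = 100.0, 300.0, 50.0

def series_analysis(resistances, E):
    """Total current and per-resistor voltage drops for a series string.

    A shorted resistor is modelled as 0 ohms, an open one as infinity.
    (Assumes at most one open resistor, which then drops the full E.)
    """
    R_total = sum(resistances)
    I = 0.0 if R_total == float("inf") else E / R_total   # Ohm's law
    return I, [E if R == float("inf") else I * R for R in resistances]

for label, fault_R2 in [("healthy", R2), ("R2 shorted", 0.0),
                        ("R2 open", float("inf"))]:
    I, drops = series_analysis([R1, fault_R2, R3], E)
    print(f"{label}: I = {I*1000:.0f} mA, "
          f"drops = {[round(v, 2) for v in drops]} V")
```

The three printed cases track the text exactly: total current rises from 20 mA to 60 mA when R2 shorts (its own drop falling to zero), and collapses to zero when R2 opens (R2 then showing the full 9 volts).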
We can apply the same before/after analysis technique to parallel circuits as well. First, we determine how a "healthy" parallel circuit should behave.
Supposing that R[2] opens in this parallel circuit, here's what the effects will be:
Notice that in this parallel circuit, an open branch only affects the current through that branch and the circuit's total current. Total voltage -- being shared equally across all components in a
parallel circuit -- will be the same for all resistors. Because the voltage source's tendency is to hold voltage constant, its voltage will not change, and being in parallel with all the
resistors, it will hold all the resistors' voltages the same as they were before: 9 volts. Being that voltage is the only common parameter in a parallel circuit, and the other resistors haven't
changed resistance value, their respective branch currents remain unchanged.
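The same point can be sketched in code. Only the 9 V source is from the text; the three equal 450 Ω branches are assumed example values: opening one branch changes that branch's current and the total, and nothing else.

```python
# Branch currents in an ideal parallel circuit. E = 9 V matches the
# text; the 450-ohm branch values are assumed for illustration.
E = 9.0

def parallel_currents(branches, E):
    """Per-branch currents for an ideal voltage source.

    An open branch is modelled as R = infinity (zero current); every
    other branch still sees the full source voltage.
    """
    currents = {name: 0.0 if R == float("inf") else E / R
                for name, R in branches.items()}
    currents["total"] = sum(currents.values())
    return currents

healthy = parallel_currents({"R1": 450.0, "R2": 450.0, "R3": 450.0}, E)
faulted = parallel_currents({"R1": 450.0, "R2": float("inf"), "R3": 450.0}, E)
print(healthy)  # each branch 0.02 A, total 0.06 A
print(faulted)  # R1 and R3 unchanged at 0.02 A, R2 zero, total 0.04 A
```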
This is what happens in a household lamp circuit: all lamps get their operating voltage from power wiring arranged in a parallel fashion. Turning one lamp on and off (one branch in that parallel
circuit closing and opening) doesn't affect the operation of other lamps in the room, only the current in that one lamp (branch circuit) and the total current powering all the lamps in the room:
In an ideal case (with perfect voltage sources and zero-resistance connecting wire), shorted resistors in a simple parallel circuit will also have no effect on what's happening in other branches of
the circuit. In real life, the effect is not quite the same, and we'll see why in the following example:
A shorted resistor (resistance of 0 Ω) would theoretically draw infinite current from any finite source of voltage (I=E/0). In this case, the zero resistance of R[2] decreases the circuit total
resistance to zero Ω as well, increasing total current to a value of infinity. As long as the voltage source holds steady at 9 volts, however, the other branch currents (I[R1] and I[R3]) will remain unchanged.
The critical assumption in this "perfect" scheme, however, is that the voltage supply will hold steady at its rated voltage while supplying an infinite amount of current to a short-circuit load. This
is simply not realistic. Even if the short has a small amount of resistance (as opposed to absolutely zero resistance), no real voltage source could arbitrarily supply a huge overload current and
maintain steady voltage at the same time. This is primarily due to the internal resistance intrinsic to all electrical power sources, stemming from the inescapable physical properties of the
materials they're constructed of:
These internal resistances, small as they may be, turn our simple parallel circuit into a series-parallel combination circuit. Usually, the internal resistances of voltage sources are low enough that
they can be safely ignored, but when high currents resulting from shorted components are encountered, their effects become very noticeable. In this case, a shorted R[2] would result in almost all the
voltage being dropped across the internal resistance of the battery, with almost no voltage left over for resistors R[1], R[2], and R[3]:
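The collapse described above can be sketched numerically. The 9 V source is from the text; the internal resistance of 0.2 Ω and the 450 Ω branch values are assumed purely for illustration.

```python
# Terminal voltage of a 9 V source with internal resistance R_int when
# one of three parallel branches shorts. R_int = 0.2 ohm and the
# 450-ohm branch values are assumed example values.

def terminal_voltage(R2, E=9.0, R_int=0.2, R1=450.0, R3=450.0):
    """Voltage actually delivered to the parallel resistors.

    The three branches in parallel form R_p; R_int and R_p then act as
    a voltage divider across the ideal source E.
    """
    R_p = 1.0 / (1.0 / R1 + 1.0 / R2 + 1.0 / R3)
    return E * R_p / (R_int + R_p)

print(terminal_voltage(450.0))   # healthy: ~8.99 V, R_int negligible
print(terminal_voltage(0.001))   # R2 nearly shorted: ~0.045 V
```

With the branch nearly shorted, almost the entire 9 volts appears across R_int, matching the qualitative description in the text.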
Suffice it to say, an intentional direct short-circuit across the terminals of any voltage source is a bad idea. Even if the resulting high current (heat, flashes, sparks) causes no harm to people
nearby, the voltage source will likely sustain damage, unless it has been specifically designed to handle short-circuits, which most voltage sources are not.
Eventually in this book I will lead you through the analysis of circuits without the use of any numbers, that is, analyzing the effects of component failure in a circuit without knowing exactly how
many volts the battery produces, how many ohms of resistance is in each resistor, etc. This section serves as an introductory step to that kind of analysis.
Whereas the normal application of Ohm's Law and the rules of series and parallel circuits is performed with numerical quantities ("quantitative"), this new kind of analysis without precise numerical
figures is something I like to call qualitative analysis. In other words, we will be analyzing the qualities of the effects in a circuit rather than the precise quantities. The result, for you, will
be a much deeper intuitive understanding of electric circuit operation.
• REVIEW:
• To determine what would happen in a circuit if a component fails, re-draw that circuit with the equivalent resistance of the failed component in place and re-calculate all values.
• The ability to intuitively determine what will happen to a circuit with any given component fault is a crucial skill for any electronics troubleshooter to develop. The best way to learn is to
experiment with circuit calculations and real-life circuits, paying close attention to what changes with a fault, what remains the same, and why!
• A shorted component is one whose resistance has dramatically decreased.
• An open component is one whose resistance has dramatically increased. For the record, resistors tend to fail open more often than fail shorted, and they almost never fail unless physically or
electrically overstressed (physically abused or overheated).
{"url":"http://www.allaboutcircuits.com/vol_1/chpt_5/7.html","timestamp":"2014-04-17T04:20:36Z","content_type":null,"content_length":"21163","record_id":"<urn:uuid:6e96cd82-0e37-4e98-a7fb-e623bff4d4f5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
P.J. Besl, N.D. McKay, "A Method for Registration of 3-D Shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, February, 1992.
BibTeX:
@article{ 10.1109/34.121791,
author = {P.J. Besl and N.D. McKay},
title = {A Method for Registration of 3-D Shapes},
journal ={IEEE Transactions on Pattern Analysis and Machine Intelligence},
volume = {14},
number = {2},
issn = {0162-8828},
year = {1992},
pages = {239-256},
doi = {http://doi.ieeecomputersociety.org/10.1109/34.121791},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
}
RefWorks/ProCite/RefMan/EndNote:
TY - JOUR
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
TI - A Method for Registration of 3-D Shapes
IS - 2
SN - 0162-8828
SP - 239
EP - 256
A1 - P.J. Besl,
A1 - N.D. McKay,
PY - 1992
KW - 3D shape registration; pattern recognition; point set registration; iterative closest point; geometric entity; mean-square distance metric; convergence; geometric model; computational
geometry; convergence of numerical methods; iterative methods; optimisation; pattern recognition; picture processing
VL - 14
JA - IEEE Transactions on Pattern Analysis and Machine Intelligence
ER -
The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method
handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point.
The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given
an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all
six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to
shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces.
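The abstract's core loop (find closest points, solve for the rigid motion, apply it, iterate) can be sketched compactly. This is a hedged illustration, not the authors' code: it uses brute-force nearest neighbours in place of the paper's accelerated closest-point procedures, and the SVD least-squares solution of Arun et al. [1] in place of the unit-quaternion method (Horn [31]) the paper itself employs.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, t minimizing ||(P @ R.T + t) - Q||, via the SVD
    method of Arun et al. [1] (the paper uses unit quaternions instead)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(P, Q, max_iter=50, tol=1e-10):
    """Iteratively align point set P to the model point set Q.

    Returns the aligned copy of P and the final mean-square distance.
    """
    P = P.copy()
    prev_mse = np.inf
    for _ in range(max_iter):
        # Brute-force closest points; a k-d tree would be used in practice.
        d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
        nearest = d2.argmin(axis=1)
        mse = d2[np.arange(len(P)), nearest].mean()
        R, t = best_rigid_transform(P, Q[nearest])
        P = P @ R.T + t
        if prev_mse - mse < tol:         # monotone decrease has levelled off
            break
        prev_mse = mse
    return P, mse
```

As the abstract notes, this converges only to the nearest local minimum, so in practice it is restarted from a set of initial rotations and translations; for realistic point counts the brute-force search would be replaced by an accelerated structure such as `scipy.spatial.cKDTree`.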
[1] K. S. Arun, T. S. Huang, and S. D. Blostein, "Least-squares fitting of two 3-D point sets,"IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, no. 5, pp. 698-700, 1987.
[2] R. Bajcsy and F. Solina, "Three-dimensional object representation revisited,"Proc. 1st Int. Conf. Comput. Vision(London), June 8-11, 1989, pp. 231-240.
[3] P. J. Besl, "Geometric modeling and computer vision,"Proc. IEEE, vol. 76, no. 8, pp. 936-958, Aug. 1988.
[4] P. J. Besl, "Active optical range imaging sensors," in Advances in Machine Vision (J. Sanz, Ed.). New York: Springer-Verlag, 1989; see also Machine Vision and Applications, vol. 1, pp. 127-152,
[5] P. J. Besl, "The free-form surface matching problem," Machine Vision for Three-Dimensional Scenes (H. Freeman, Ed.). New York: Academic, 1990.
[6] P. J. Besl and R. C. Jain, "Three-dimensional object recognition,"ACM Comput. Surveys, vol. 17, no. 1, pp. 75-145, Mar. 1985.
[7] J. Blumenthal, "Polygonizing implicit surfaces," Xerox Parc Tech. Rep. EDL-88-4, 1988.
[8] B. Bhanu and C.-C. Ho, "CAD-based 3D object representation for robot vision,"Computer, vol. 20, pp. 19-35, Aug. 1987.
[9] W. Boehm, G. Farin, and J. Kahmann, "A survey of curve and surface methods in CAGD,"Comput. Aided Geometric Des., vol. 1, no. 1, pp. 1-60, July 1984.
[10] R. C. Bolles and P. Horaud, "3DPO: A three dimensional part orientation svstem,"Int. J. Robotics Res., vol. 5, no. 3, Fall 1986, pp. 3-26.
[11] P. Brou, "Using the Gaussian image to find the orientation of an object,"Int. J. Robotics Res., vol. 3, no. 4, pp. 89-125, 1983.
[12] J. Callahan and R. Weiss, "A model for describing surface shape," in Proc. Conf. Comput. Vision Patt. Recogn. (San Francisco, CA), June 1985, pp. 240-247.
[13] C. H. Chen and A. C. Kak, "3DPOLY: A robot vision system for recognizing 3-D objects in low-order polynomial time," Tech. Rep. 88-48, Elect. Eng. Dept., Purdue Univ., West Lafayette, IN, 1988.
[14] R.T. Chin and C. R. Dyer, "Model-based recognition in robot vision,"ACM Comput. Surveys, vol. 18, no. 1, pp. 67-108, Mar. 1986.
[15] C. DeBoor, A Practical Guide to Splines. New York: Springer-Verlag, 1978.
[16] C. DeBoor and K. Hollig, "B-splines without divided differences," Geometric Modeling: Algorithms and New Trends (G. Farin, Ed.), SIAM, pp. 21-28, 1987.
[17] G. Farin, Curves and Surfaces in Computer Aided Geometric Design: A Practical Guide. New York: Academic, 1990.
[18] O. D. Faugeras and M. Hebert, "The representation, recognition, and locating of 3-D objects,"Int. J. Robotics Res., vol. 5, no. 3, Fall 1986, pp. 27-52.
[19] T.-J. Fan, "Describing and recognizing 3-D objects using surface properties," Ph.D dissertation, University of Southern California, Tech. Rep. IRIS-237, Aug. 1988.
[20] P. J. Flynn and A. K. Jain, "CAD-based computer vision: From CAD models to relational graphs,"IEEE Trans. Patt. Anal. Machine Intell., vol. 13, no. 2, pp. 114-132, 1991; see also Ph.D. Thesis,
Comput. Sci. Dept., Michigan State Univ., E. Lansing, MI.
[21] E. G. Gilbert and C. P. Foo, "Computing the distance between smooth objects in 3D space," RSD-TR-13-88, Univ. of Michigan, Ann Arbor, 1988.
[22] E. G. Gilbert, D. W. Johnson, S. S. Keerthi, "A fast procedure for computing the distance between complex objects in 3D space,"IEEE J. Robotics Automat., vol. 4, pp. 193-203, 1988.
[23] G. H. Golub and C. F. Van Loan, Matrix Computations. Baltimore, MD: Johns Hopkins Univ. Press, 1983.
[24] W. E. L. Grimson, "The combinatorics of local constraints in model-based recognition and localization from sparse data,"J. ACM, vol. 33, no. 4, pp. 658-686, 1986.
[25] W. E. L. Grimson and T. Lozano-Pérez, "Model-based recognition and localization from sparse range or tactile data,"Int. J. Robotics Res., vol. 3, no. 3, pp. 3-35, Fall 1984.
[26] K. T. Gunnarsson and F. B. Prinz, "CAD model-based localization of parts in manufacturing,"IEEE Comput., vol. 20, no. 8, pp. 66-74, Aug. 1987.
[27] E. Hall, J. Tio, C. McPherson, and F. Sadjadi, "Measuring curved surfaces for robot vision," Comput., vol. 15, no. 12, pp. 42-54, Dec. 1982.
[28] R. M. Haralick et al., "Pose estimation from corresponding point data," in Machine Vision for Inspection and Measurement (H. Freeman, Ed.). New York: Academic, 1989.
[29] H. Hilton, Mathematical Crystallography and the Theory of Groups of Movements. Oxford: Clarendon, 1963; London: Dover, 1963.
[30] B. K. P. Horn, "Extended Gaussian images,"Proc. IEEE, vol. 72, no. 12, pp. 1656-1678, Dec. 1984.
[31] B. K. P. Horn, "Closed-form solution of absolute orientation using unit quaternions," J. Opt. Soc. Amer. A, vol. 4, no. 4, pp. 629-642, Apr. 1987.
[32] B. K. P. Horn, "Relative orientation," A.I. Memo 994, AI Lab, Mass. Inst. Technol., Cambridge, Sept. 1987.
[33] B. K. P. Horn and J. G. Harris, "Rigid body motion from range image sequences,"Comput. Vision Graphics Image Processing, 1989.
[34] K. Ikeuchi, "Generating an interpretation tree from a CAD model for 3-D object recognition in bin-picking tasks,"Int. J. Comput. Vision, vol. 1, no. 2, pp. 145-165, 1987.
[35] A. K. Jain and R. Hoffman, "Evidence-based recognition of 3-D objects,"IEEE Trans. Patt. Anal. Machine Intell, vol. 10, no. 6, pp. 793-802, 1988.
[36] B. Kamgar-Parsi, J. L. Jones, and A. Rosenfeld, "Registration of multiple overlapping range images: Scenes without distinctive features,"Proc. IEEE Comput. Vision Patt. Recogn. Conf.(San Diego,
CA), June 1989.
[37] Y. Lamdan and H. J. Wolfson, "Geometric hashing: A general and efficient model-based recognition scheme," in Proc. 2nd Int. Conf. Computer Vision, 1988.
[38] S. Z. Li, "Inexact matching of 3D surfaces," VSSP-TR-3-90, Univ. of Surrey, England, 1990.
[39] P. Liang, "Measurement, orientation determination, and recognition of surface shapes in range images," Cent. Robotics Syst., Univ. of California, Santa Barbara, 1987.
[40] D. Luenberger, Linear and Nonlinear Programming. Reading, MA: Addison-Wesley, 1984.
[41] A. P. Morgan, Solving Polynomial Systems Using Continuation for Engineering and Scientific Problems. Englewood Cliffs, NJ: Prentice Hall, 1987.
[42] M. E. Mortenson, Geometric Modeling. New York: Wiley, 1985.
[43] J. Mundy et al., "The PACE system," in Proc. CVPR '88 Workshop; see also DARPA IUW.
[44] D. W. Murray, "Model-based recognition using 3D shape alone,"Computer Vision, Graphics and Image Process., vol. 40, pp. 250-266, 1987.
[45] W. Press, B. Flannery, S. Teukolsky, and W. Vetterling, Numerical Recipes in C: The Art of Scientific Computing. Cambridge, UK: Cambridge University Press, 1988.
[46] J.H. Rieger, "On the classification of views of piecewise smooth objects,"Image Vision Comput., vol. 5, no. 2, pp. 91-97, May 1987.
[47] Proc. IEEE Robust Methods Workshop. Univ. of Washington, Seattle.
[48] P. Sander, "On reliably inferring differential structure from 3D images," Ph.D. dissertation, Dept. of Elect. Eng., McGill Univ., Montreal, Canada, 1988.
[49] P. H. Schonemann, "A generalized solution to the orthogonal procrustes problem,"Psychometrika, vol. 31, no. 1, 1966.
[50] J. T. Schwartz and M. Sharir, "Identification of objects in two and three dimensions by matching noisy characteristic curves,"Int. J. Robotics Res., vol. 6, no. 2, pp. 29-44, 1987.
[51] T. W. Sederberg, "Piecewise algebraic surface patches,"Comput. Aided Geometric Des., vol. 2, no. 1, pp. 53-59, 1985.
[52] B. M. Smith, "IGES: A key to CAD/CAM systems integration,"IEEE Comput. Graphics Applications, vol. 3, no. 8, pp. 78-83, 1983.
[53] G. Stockman, "Object recognition and localization via pose clustering,"Comp. Vision Graphics Image Processing, vol. 40, pp. 361-387, 1987.
[54] R. Szeliski, "Estimating motion from sparse range data without correspondence,"2nd Int. Conf. Comput. Vision(Tarpon Springs, FL), Dec. 5-8, 1988, pp. 207-216.
[55] G. Taubin, "Algebraic nonplanar curve and surface estimation in 3- space with applications to position estimation," Tech. Rep. LEMS-43, Div. Eng., Brown Univ., Providence, RI, 1988.
[56] G. Taubin, "About shape descriptors and shape matching," Tech. Rep. LEMS-57, Div. Eng., Brown Univ., Providence, RI, 1989.
[57] D. Terzopoulos, J. Platt, A. Barr, and K. Fleischer, "Elastically deformable models."Comput. Graphics, vol. 21, no. 4, pp. 205-214, July 1987.
[58] B.C. Vemuri, A. Mitiche, and J.K. Aggarwal, "Curvature-based representation of objects from range data,"Image and Vision Comput., vol. 4, no. 2, pp. 107-114, May 1986.
[59] B. C. Vemuri and J. K. Aggarwal, "Representation and recognition of objects from dense range maps,"IEEE Trans. Circuits Syst., vol. CAS-34, no. 11, pp. 1351-1363, Nov. 1987.
[60] A. K. C. Wong, S. W. Lu, and M. Rioux, "Recognition and shape synthesis of 3-D objects based on attributed hypergraphs,"IEEE Trans. Patt. Anal. Machine Intell., vol. 11, no. 3, pp. 279-290, Mar.
Index Terms:
3D shape registration; pattern recognition; point set registration; iterative closest point; geometric entity; mean-square distance metric; convergence; geometric model; computational geometry;
convergence of numerical methods; iterative methods; optimisation; pattern recognition; picture processing
P.J. Besl, N.D. McKay, "A Method for Registration of 3-D Shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, Feb. 1992, doi:10.1109/34.121791
{"url":"http://www.computer.org/csdl/trans/tp/1992/02/i0239-abs.html","timestamp":"2014-04-16T22:48:56Z","content_type":null,"content_length":"63825","record_id":"<urn:uuid:6878dfc0-f727-4423-a5b3-7dabb6d4253e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Reactance, denoted X, is a form of opposition that electronic components exhibit to the passage of alternating current (AC) because of capacitance or inductance. In some respects,
reactance is like an AC counterpart of DC (direct current) resistance. But the two phenomena are different in important ways, and they can vary independently of each other. Resistance and reactance
combine to form impedance, which is defined in terms of two-dimensional quantities known as complex numbers.
When alternating current passes through a component that contains reactance, energy is alternately stored in, and released from, a magnetic field or an electric field. In the case of a magnetic
field, the reactance is inductive. In the case of an electric field, the reactance is capacitive. Inductive reactance is assigned positive imaginary number values. Capacitive reactance is assigned
negative imaginary-number values.
As the inductance of a component increases, its inductive reactance becomes larger in imaginary terms, assuming the frequency is held constant. As the frequency increases for a given value of
inductance, the inductive reactance increases in imaginary terms. If L is the inductance in henries (H) and f is the frequency in hertz (Hz), then the inductive reactance +jX[L], in imaginary-number
ohms, is given by:
+jX[L] = +j(6.2832fL)
where 6.2832 is approximately equal to 2 times pi, a constant representing the number of radians in a full AC cycle, and j represents the unit imaginary number (the positive square root of -1). The
formula also holds for inductance in microhenries (μH) and frequency in megahertz (MHz).
As a real-world example of inductive reactance, consider a coil with an inductance of 10.000 μH at a frequency of 2.0000 MHz. Using the above formula, +jX[L] is found to be +j125.66 ohms. If the
frequency is doubled to 4.000 MHz, then +jX[L] is doubled, to +j251.33 ohms. If the frequency is halved to 1.000 MHz, then +jX[L ]is cut in half, to +j62.832 ohms.
As the capacitance of a component increases, its capacitive reactance becomes smaller negatively (closer to zero) in imaginary terms, assuming the frequency is held constant. As the frequency
increases for a given value of capacitance, the capacitive reactance becomes smaller negatively (closer to zero) in imaginary terms. If C is the capacitance in farads (F) and f is the frequency in
Hz, then the capacitive reactance -jX[C], in imaginary-number ohms, is given by:
-jX[C] = -j (6.2832fC)^-1
This formula also holds for capacitance in microfarads (μF) and frequency in megahertz (MHz).
As a real-world example of capacitive reactance, consider a capacitor with a value of 0.0010000 μF at a frequency of 2.0000 MHz. Using the above formula, -jX[C] is found to be -j79.577 ohms. If the
frequency is doubled to 4.0000 MHz, then -jX[C] is cut in half, to -j39.789 ohms. If the frequency is cut in half to 1.0000 MHz, then -jX[C] is doubled, to -j159.15 ohms.
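The two worked examples above can be checked with a short script. The function names are ours, but the formulas are exactly those given in the article.

```python
import math

def inductive_reactance(f_hz, L_henries):
    """X_L = 2*pi*f*L, in ohms (a positive imaginary quantity, +jX_L)."""
    return 2.0 * math.pi * f_hz * L_henries

def capacitive_reactance(f_hz, C_farads):
    """X_C = 1/(2*pi*f*C), in ohms (a negative imaginary quantity, -jX_C)."""
    return 1.0 / (2.0 * math.pi * f_hz * C_farads)

print(inductive_reactance(2.0e6, 10.0e-6))   # ~125.66, the +j125.66 case
print(capacitive_reactance(2.0e6, 1.0e-9))   # ~79.577, the -j79.577 case
```

Doubling or halving `f_hz` reproduces the scaling behaviour described in the text: X_L scales with frequency, X_C inversely with it.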
This was last updated in September 2005
{"url":"http://whatis.techtarget.com/definition/reactance","timestamp":"2014-04-17T04:06:38Z","content_type":null,"content_length":"63263","record_id":"<urn:uuid:e53dff24-084b-4b49-8494-11a449bd5f2a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
bousso research
Research Interests
My interests are in theoretical cosmology and quantum gravity.
The central principles of quantum mechanics and of general relativity (our classical theory of gravity) come into sharp conflict at the horizon of a black hole. Quantum mechanics rests on the
principle of unitarity: information cannot be fundamentally lost. General relativity is based on the equivalence principle: empty space should look the same everywhere and to everyone.
Black holes evaporate, converting their energy into a cloud of radiation. Quantum mechanics requires this cloud to contain information: in principle, we could measure the cloud and use a powerful
computer to reconstruct what formed the black hole. Meanwhile, in general relativity, the horizon of a black hole is empty space, so it should look the same as empty space everywhere else.
It turns out that these two demands are mutually exclusive. Either information is lost, or the horizon of a black hole is a special place called a "firewall". Hawking went with the equivalence
principle and famously predicted that information is lost. But since then, evidence has accumulated that unitarity must be upheld.
For some time, it was believed that an idea called "black hole complementarity" would resolve the conflict. Bob, who waits for the black hole to evaporate, recovers information; whereas Alice, who
jumps into the black hole, notices nothing special at the horizon. There is no real contradiction, since Alice cannot get out of the black hole and argue with Bob. They just have two very different
descriptions of the black hole.
Very recently, however, we have come to realize that complementarity falls short: it cannot fully resolve the conflict between unitarity and the equivalence principle. This is a real crisis, and it
presents an opportunity for dramatic progress: some deeply held belief about quantum mechanics or gravity will have to be abandoned. Much of my research focusses on understanding the firewall paradox
and its consequences.
I am also interested in extracting predictions from the landscape of string theory. String theory predicts an enormous universe containing diverse regions, each larger than the observed universe.
This is currently the only known explanation of the small value of the observed cosmological constant (or "dark energy"), and of the fact that it is comparable to the energy density of matter. The
landscape can also explain other coincidences, such as the fact that dark and ordinary matter have comparable abundances.
A central challenge in cosmology is the "measure problem": the need to regulate infinities that come from indefinite exponential expansion, known as eternal inflation. This problem arises
independently of the landscape: because the cosmological constant is positive, our own observable universe will expand in this way in the future. I have proposed a measure that has some theoretical
motivation and phenomenological success, but fundamentally the problem remains poorly understood. This will remain an important area of study.
Selected Publications
R. Bousso, "Firewalls From Double Purity" (2013).
R. Bousso, R. Harnik, G. Kribs and G. Perez, "Predicting the Cosmological Constant from the Causal Entropic Principle". e-Print: hep-th/0702115.
R. Bousso, "Holographic Probabilities in Eternal Inflation", Phys. Rev. Lett. 97, 191302 (2006). e-Print: hep-th/0605263.
R. Bousso, "The Holographic Principle", Rev. Mod. Phys. 74:825-874 (2002). e-Print: hep-th/0203101.
R. Bousso and J. Polchinski, "Quantization Of Four Form Fluxes And Dynamical Neutralization Of The Cosmological Constant", JHEP 0006:006 (2000). e-Print: hep-th/0004134.
R. Bousso, "A Covariant Entropy Conjecture", JHEP 9907:004 (1999). e-Print: hep-th/9905177.
[Numpy-discussion] List with numpy semantics
josef.pktd@gmai... josef.pktd@gmai...
Tue Nov 2 21:31:54 CDT 2010
On Tue, Nov 2, 2010 at 10:21 PM, <josef.pktd@gmail.com> wrote:
> On Tue, Nov 2, 2010 at 10:02 PM, Nikolaus Rath <Nikolaus@rath.org> wrote:
>> Gerrit Holl <gerrit.holl@gmail.com> writes:
>>> On 31 October 2010 17:10, Nikolaus Rath <Nikolaus@rath.org> wrote:
>>>> Hello,
>>>> I have a couple of numpy arrays which belong together. Unfortunately
>>>> they have different dimensions, so I can't bundle them into a higher
>>>> dimensional array.
>>>> My solution was to put them into a Python list instead. But
>>>> unfortunately this makes it impossible to use any ufuncs.
>>>> Has someone else encountered a similar problem and found a nice
>>>> solution? Something like a numpy list maybe?
>>> You could try a record array with a clever dtype, maybe?
>> It seems that this requires more cleverness than I have... Could you
>> give me an example? How do I replace l in the following code with a
>> record array?
>> l = list()
>> l.append(np.arange(3))
>> l.append(np.arange(42))
>> l.append(np.arange(9))
>> for i in range(len(l)):
>> l[i] += 32
> Depending on how you want to use it, it might be more convenient to
> use masked arrays or fill with nan (like pandas and larry) to get a
> rectangular array.

It might be more convenient for some things, but if
the sizes differ a lot then it might not be more efficient.
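A minimal sketch of the nan-fill idea, using the array sizes from the example above (the variable names are illustrative, not from the thread):

```python
import numpy as np

# Pad the ragged arrays with NaN so they fit one rectangular array,
# then use NaN-aware reductions that ignore the padding.
arrays = [np.arange(3.0), np.arange(42.0), np.arange(9.0)]

width = max(len(a) for a in arrays)
rect = np.full((len(arrays), width), np.nan)
for i, a in enumerate(arrays):
    rect[i, :len(a)] = a

rect += 32  # one ufunc call now updates every element at once

row_means = np.nanmean(rect, axis=1)  # padding NaNs are ignored
```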
Another option I sometimes use (e.g. for unbalanced panel data) is to
just stack them on top of each other into one long 1d array, and keep
track of which is which, e.g. by keeping the (start, end) indices or using
an indicator array. For example, with an integer label array np.bincount
is very fast to work with.
This is mainly an advantage if there are many short arrays and many
operations have to be applied to all of them.
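A minimal sketch of this stack-and-label layout, again with the sizes from the example above (names are illustrative):

```python
import numpy as np

# Stack the ragged arrays into one long 1-d array and keep an integer
# label per element recording which group it belongs to.
arrays = [np.arange(3), np.arange(42), np.arange(9)]

stacked = np.concatenate(arrays)
lengths = [len(a) for a in arrays]
labels = np.repeat(np.arange(len(arrays)), lengths)

stacked += 32  # ufuncs apply to all groups at once

# Fast per-group reductions via the integer labels:
group_sums = np.bincount(labels, weights=stacked)
group_sizes = np.bincount(labels)
group_means = group_sums / group_sizes
```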
> Josef
>> Thanks,
>> -Nikolaus
>> --
>> »Time flies like an arrow, fruit flies like a Banana.«
>> PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion@scipy.org
>> http://mail.scipy.org/mailman/listinfo/numpy-discussion
More information about the NumPy-Discussion mailing list
Area Under a Curve
September 2nd 2009, 04:03 AM
Area Under a Curve
Also, if someone can help me with this question that would be great.
Find the equation of the tangent to the parabola y=2x^2 at (1,2). Calculate its point of intersection with the x-axis and the volume of the solid formed when the area between the parabola, the
tangent line and the x-axis is revolved about the x-axis.
Especially the bit about forming a solid of revolution.
The tangent has equation $y = 4x - 2$.
The pt of intersection is (1/2, 0).
But the volume formed I found to be $\tfrac{2}{15}\pi$, which is different to the answer. Much help would be appreciated.
September 2nd 2009, 04:22 AM
I believe 2pi/15 is correct
Draw a diagram and you'll see you need 2 integrals
Between x = 0 and 1/2 you have disks:
$V_1 = \pi\int_0^{1/2} 4x^4\,dx = \frac{\pi}{40}$
Between x = 1/2 and 1 you have washers:
$V_2 = \pi\int_{1/2}^{1}\left(4x^4 - (4x-2)^2\right)dx = \frac{13\pi}{120}$
Adding the two you get $\frac{16\pi}{120} = \frac{2\pi}{15}$
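For completeness, the two integrals evaluate as stated:

```latex
\pi\int_{0}^{1/2} 4x^4\,dx
  = \pi\Bigl[\tfrac{4}{5}x^5\Bigr]_{0}^{1/2}
  = \frac{\pi}{40},
\qquad
\pi\int_{1/2}^{1}\bigl(4x^4-(4x-2)^2\bigr)\,dx
  = \pi\Bigl[\tfrac{4}{5}x^5-\tfrac{(4x-2)^3}{12}\Bigr]_{1/2}^{1}
  = \pi\Bigl(\tfrac{31}{40}-\tfrac{2}{3}\Bigr)
  = \frac{13\pi}{120},
```

and the sum is $\frac{\pi}{40}+\frac{13\pi}{120}=\frac{16\pi}{120}=\frac{2\pi}{15}$, matching the poster's value.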
September 2nd 2009, 04:35 AM
See attachment for diagram and set up
September 2nd 2009, 11:11 PM
Intersections -- Poetry with Mathematics
In this Robert Frost couplet, “The Secret Sits,” the poet may not have intended to speak of mathematics but his lines sing true for mathematical discovery:

We dance round in a ring and suppose,
But the Secret sits in the middle and knows.

(from A Witness Tree)
Counter-intuitive notions are among my favorite parts of mathematics and, in considerations of infinity, these are numerous. Recalling Zeno's paradox, we capture the infinite finitely in this sum:
1/2 + 1/4 + 1/2^3 + . . . + 1/2^n + . . . = 1
Nonsense verse has a prominent place in the poetry that mathematicians enjoy. Perhaps this is so because mathematical discovery itself has a playful aspect--playing, as it were, with non-sense in an
effort to tease the sense out of it. Lewis Carroll, author of both mathematics and literature, often has his characters offer speeches that are a clever mix of sense and nonsense. For example, we
have these two stanzas from "Fit the Fifth" of The Hunting of the Snark, the words of the Butcher, explaining to the Beaver why 2 + 1 = 3.
More familiar than the name Benoit Mandelbrot are images, like the one to the left, of the fractal that bears his name. Born in Poland (1924) and educated in France, Mandelbrot moved to the US in
1958 to join the research staff at IBM. A fractal is a rough or fragmented geometric shape that can be split into parts, each of which is a reduced-size copy of the whole, a property called self-similarity.
Mathematics is a visual language. As with poetry, placement on the page is a key ingredient of meaning. Here is one of my favorite visual poems, The Transcendence of Euler's Formula, by Neil
Hennessey, a Canadian poet and computer scientist. For additional math-poetry from Neil, follow the link.
Margaret Cavendish (1623-73) was a writer who published under her own name at a time when most women published anonymously. Her writing addressed a number of topics, including gender equity and
scientific method.
Piet Hein (Denmark, 1905-1996) was many-faceted--by times a philosopher, mathematician, designer, scientist, inventor of games and poet. He also created a new poetic form that he called 'grook'
("gruk" in Danish). Hein wrote over 10,000 grooks, most in Danish or English, published in more than 60 books. Some say that the name is short for 'GRin & sUK' ("laugh & sigh", in Danish). Here are
samples, with links to more:
In my own library this next poem is found (untitled) in Collected Sonnets by Edna St Vincent Millay (1892-1950), but it also is found online at various sites. The first line of the sonnet, which
announces Euclid as its subject, is well-known to most mathematicians; enjoy here all fourteen lines.
Seatac, WA Prealgebra Tutor
Find a Seatac, WA Prealgebra Tutor
...I earned a Bachelor of Science in Computer Science and in Computer Engineering at UW in Tacoma. My primary programming language is currently Java. Regardless of the subject, I would say I am
effective at recognizing patterns.
16 Subjects: including prealgebra, chemistry, French, calculus
...After leaving the University of Washington with my Bachelors of Science, I decided to go back to take the MCAT test and prepare for entrance into medical school. I took the test twice and
enrolled in the Kaplan prep class for the MCAT as well. I have been using MS Outlook since 1991.
46 Subjects: including prealgebra, reading, English, chemistry
...By assisting students in getting caught up with difficult subjects, and building their self-confidence, they become able to work on new concepts on their own. If a subject presents a new challenge, it
can be handled with an occasional tutoring lesson. I look forward to the point where students are able to teach me new things.
45 Subjects: including prealgebra, chemistry, physics, calculus
...I said 'take the first number. Draw that many circles. Take the second number - put that many dots in each circle.'
17 Subjects: including prealgebra, calculus, statistics, geometry
Background: I recently graduated from the University of Washington with a B.S. degree in chemistry. Throughout my college career I had a special focus in mathematics. Outside of school, current
events, video games, and the financial markets catch most of my attention--with the exception of my 5-month-old dog, Misha.
17 Subjects: including prealgebra, chemistry, physics, calculus
Related Seatac, WA Tutors
Seatac, WA Accounting Tutors
Seatac, WA ACT Tutors
Seatac, WA Algebra Tutors
Seatac, WA Algebra 2 Tutors
Seatac, WA Calculus Tutors
Seatac, WA Geometry Tutors
Seatac, WA Math Tutors
Seatac, WA Prealgebra Tutors
Seatac, WA Precalculus Tutors
Seatac, WA SAT Tutors
Seatac, WA SAT Math Tutors
Seatac, WA Science Tutors
Seatac, WA Statistics Tutors
Seatac, WA Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Auburn, WA prealgebra Tutors
Bellevue, WA prealgebra Tutors
Burien, WA prealgebra Tutors
Des Moines, WA prealgebra Tutors
Federal Way prealgebra Tutors
Issaquah prealgebra Tutors
Kent, WA prealgebra Tutors
Kirkland, WA prealgebra Tutors
Newcastle, WA prealgebra Tutors
Normandy Park, WA prealgebra Tutors
Redmond, WA prealgebra Tutors
Renton prealgebra Tutors
Seattle prealgebra Tutors
Tacoma prealgebra Tutors
Tukwila, WA prealgebra Tutors
Islam Unraveled
One of the Great Miracles [74:35]
PROOF FOR THE WORLD - QURAN, THE UNALTERED, COMPLETE WORD OF GOD
Mathematics is the language in which God wrote the universe.
Galileo (1564-1642 AD)
For the first time in history, we have a built-in proof that the Quran is the unaltered, original and complete word of God: a proof that is verifiable by anyone. So powerful is the proof that in a
few generations it will become obvious that any religion or group of people which advocates faith as a prerequisite or basis for belief will immediately be exposed as a false religion. Since
we now have proof, blind faith is no longer valid.
"You shall not accept any information, unless you verify it for yourself. I have given you the hearing, the eyesight, and the brain, and you are responsible for using them." (17:36)
The mathematical structure of the Quran was discovered by Dr Rashad Khalifa, an Egyptian-born American biochemist, in the 1970's. Dr Khalifa started translating the Quran into English with the
determination to find an explanation for the mysterious initials prefixing 29 Suras. He initiated an extensive research on these initials (for example: the arabic letter "Qaaf" in Suras 42 and 50),
after placing the Quranic text of the initialed Suras into a computer. His objective was to find a mathematical pattern which would explain the significance of the initials, although he had no idea
where and what to look for. After several years of research,
Dr Khalifa published his first results in a book entitled MIRACLE OF THE QURAN, Significance of the Mysterious Alphabets (Islamic Productions), in 1973. It was in 1974 that Dr Khalifa discovered that
there was a common denominator in the initials and throughout the Quran - the number 19. Subsequently Dr Khalifa published, THE COMPUTER SPEAKS: GOD'S MESSAGE TO THE WORLD (Renaissance Productions,
1981), QURAN: Visual Presentation of the Miracle
(Islamic Productions, 1982); and the translation of the Quran in English (Islamic Productions, 1989). All these publications are good tools to verify the mathematical structure. (The books can be
ordered from ICS, PO Box 43476, Tucson, Az 85733-3476).
1. The first verse (1:1), "Basmalah" consists of 19 arabic letters.
2. Each of the four arabic words of "Basmalah" is repeated in
the Quran in multiples of 19 in numbered verses.
The first word..."Ism" (Name).....occurs...19 times
The second word.."Allah" (God)....occurs...2698 times (19x142).
The third word..."Al-Rahman" (Most Gracious)...57 times (19x3)
The fourth word.."Raheem" (Most Merciful)..114 times (19x6)
The above can be verified by the following:
A. In the Concordance of the Quran by Abdul Baqy, on page 362
the word ISM is listed with 19 occurrences. The peculiar spelling of the word ISM as BISM is repeated in the Quran three times, in verses 1:1, 27:30 & 11:41. (1+1+27+30+11+41 +
number of occurrences 3 = 114, or 19x6)
B. The count of the word ALLAH can best be verified by Dr. Rashad Khalifa's translation of the Quran, which carries the cumulative total occurrences of ALLAH on each page. Abdul Baqy gets the same
count when the numbered verse 1:1 is included in his count.
C. On page 307 of Abdul Baqy's concordance we find AL-RAHMAN to occur 57 times in total.
D. The word AL RAHEEM is listed on page 307 as occurring 95 times, while RAHEEM is listed on page 309 as occurring 20 times. The total occurrence is 114 (95 - 1 + 20 = 114). AL RAHEEM in verse 9:128
is not counted. (Note: verses 9:128 & 129 were falsely injected into the Quran after the death of the prophet. The subject will be discussed in another topic.)
3. The Quran consists of 114 suras, which is ...19 x 6
4. The total number of verses in the Quran is 6346, or ..19x334.
6346 is the total of 6234 numbered verses and 112 un-numbered verses (Basmalah)
Also 6 + 3 + 4 + 6 = 19
5. From the missing Basmalah in sura 9 to the extra Basmalah in sura 27, there are precisely 19 suras.
6. The first revelation (96:1-5) consists of 19 words and 76 letters (19 x 4)
7. First sura revealed (sura 96) consists of 19 verses and 304 arabic letters (19 x 16).
8. The Quran mentions 30 different numbers (e.g. 300 & 9 in
verse 18:25). The sum of the 30 numbers is 162,146, or 19 x 8534.
9. The sum of all verse numbers where God "Allah" is mentioned is 118,123 or 19 x 6,217.
There are 29 suras in the Quran with prefixed initials. All the initials are linked to the common denominator - 19.
"Q" (Qaaf) is initialed in suras 42 and 50. In both the suras,"Q" is repeated 57 times or 19 x 3.
"Nun" (Noon) is initialed in sura 68 and the name of the letter is spelled out as - "noon wow noon" - in the original text. The total count of "Nun" is 133 or 19 x 7.
"S" (Saad) is initialed in suras, 7, 19, 38, and the total occurrence in the three suras is 152 or 19 x 8.
"Y.S" (Ya Seen). These two letters are prefixed in Sura 36 and the total occurrence for both of them is 285 or 19 x 15.
"H.M" (Ha Mim). These letters prefix suras 40 through 46 and their total occurrence in the seven "H.M" initialed suras is 2147 or 19 x 113.
"`A.S.Q" ('Ayn Seen Qaf). These initials constitute Verse 2 of sura 42 and are repeated in the sura 209 or 19 x 11 times.
"A.L.M" (Alef Laam Mim). These most frequently used letters in the Arabic language are prefixed in six suras - 2, 3, 29, 30, 31 and 32 - and the total occurrence of the three letters in each of the six
suras is 9899 (19x521), 5662 (19x298), 1672 (19x88), 1254 (19x66),
817 (19x43) and 570 (19x30) respectively.
All other Quranic initials, without exception, show similar patterns of being multiples of 19.
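The stated multiples of 19 are at least mechanically checkable. A short Python sketch, using only the totals and multipliers quoted above (the counts themselves are taken from the text, not independently recounted):

```python
# Each entry pairs a total quoted in the text with its claimed multiplier of 19.
claimed_multiples = {
    2698: 142,     # occurrences of "Allah"
    57: 3,         # "Al-Rahman"; also "Q" in suras 42 and 50
    114: 6,        # "Raheem"; also the number of suras
    6346: 334,     # total number of verses
    133: 7,        # "Nun" in sura 68
    152: 8,        # "S" in suras 7, 19 and 38
    285: 15,       # "Y" and "S" in sura 36
    2147: 113,     # "H.M" in suras 40-46
    209: 11,       # "`A.S.Q" in sura 42
    162146: 8534,  # sum of the 30 numbers mentioned in the Quran
    118123: 6217,  # sum of verse numbers where "Allah" occurs
}

for total, multiplier in claimed_multiples.items():
    assert total == 19 * multiplier, (total, multiplier)

# The digit-sum remark on the verse total also checks out:
assert sum(int(digit) for digit in "6346") == 19
```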
When the Quran was revealed, 14 centuries ago, the numerals known today did not exist. The letters of the Arabic, Hebrew, Aramaic and Greek alphabets were used as numerals, with a value assigned to each letter.
For example, "alef" had a value of 1, "wow" had a value of 6, etc.
The total sum of the 19 letters of "Basmalah" is 786, a number known to muslim masses all over the world by God's will. It is beyond the scope of this article to give the mathematical patterns in the
Quran, taking into consideration the Gematrical Value of Arabic letters. Only one example will be given. The total sum of the 14 Arabic letters which participate in the formation of Quranic initials
in 29 suras is 693; 693 + 29 = 722, or 19 x 19 x 2.
The use of a computer becomes mandatory in certain complex aspects of the mathematical miracle of the Quran. For example, the sum of the number of verses in each of the 114 suras plus the sum of
every single verse number in all the suras is equal to 339,644, or 19 x 17,876.
If we take the same numbers used in getting the total of 339,644 and put them all, side by side, from the first sura to the last sura, we obtain a 12,692-digit number. The number 12,692 is divisible
by 19 (19 x 668). But more importantly, the entire 12,692 digit
number is also a multiple of 19.
7 1 2 3 4 5 6 7, 286 1 2 3 4 ... 285 286, 200 1 2 3 ..., ..., 5 1 2 3 4 5, 6 1 2 3 4 5 6
(that is, for each sura from the first to the last: its verse count followed by each of its verse numbers)
OVER IT IS NINETEEN (74:30)
We now know the meaning of verse 30 of sura 74. God has chosen the number nineteen as his signature on his creation - the Glorious Quran. Anyone who cares to study and verify the mathematical
structure of the Quran will know with certainty that such a book can never be authored by anyone, other than by God. When this mathematical structure is taken together with the literary excellence of
the Quran, one can appreciate God's assertion in the
following verses in the same sura 74, that this is one of the God's great miracles.
"Absolutely, (I swear) by the moon.
"And the night as it passes.
"And the morning as it shines
"THIS IS ONE OF THE GREAT MIRACLES." (74:32-35)
Verse 74:31 gives five reasons for the miracle of the Quran with number 19 as the common denominator.
1. To disturb the disbelievers.
2. To convince the Christians and the Jews (that this is a divine scripture).
3. To strengthen the faith of the faithful.
4. To remove all traces of doubt from the hearts of Christians,
Jews, as well as the believers; and
5. To expose those who harbor doubt in their hearts, and the
disbelievers; who will say, "What did God mean by this
allegory?" (or "So What?").
With mankind having a verifiable proof, that the Quran, the Final Testament, is the unaltered word of God, we have entered a new era in religion. Verse 17:36 quoted above mandates, that we use our
hearing , eyesight and brain to verify all information, including the miracle of the Quran by ourselves.
This new era also mandates that we seek proof for any religious law or practice dictated upon us. What makes the religion of Submission (Islam) so easy is the proof we now have that the Quran is
the unaltered and complete word of God. As submitters (muslims) to
God, we have to accept God's assertion in the Quran that the Quran is complete, fully detailed and the only source of religious law.
Exponential mapping
From Encyclopedia of Mathematics
A mapping of the tangent space $T_pM$ of a manifold $M$ into $M$, defined by a connection given on $M$.

1) Let $M$ be a manifold with an affine connection, let $p \in M$ and let $X \in T_pM$. If $\gamma_X$ denotes the geodesic through $p$ with initial tangent vector $X$, then the exponential mapping is defined by $\exp_p(X) = \gamma_X(1)$; it is defined for all $X$ in some neighbourhood of the origin of $T_pM$.

2) Let $G$ be a Lie group with Lie algebra $\mathfrak{g}$. For $X \in \mathfrak{g}$ let $\varphi_X$ be the one-parameter subgroup of $G$ generated by $X$; then the exponential mapping is defined by $\exp(X) = \varphi_X(1)$.

The concept of an exponential mapping of a Lie group is a special case of 1): relative to the canonical (bi-invariant) connection on $G$, the geodesics through the identity are exactly the one-parameter subgroups, so the two definitions agree.
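For example, for a matrix Lie group the exponential mapping of case 2) is the ordinary matrix exponential:

```latex
\exp(X) \;=\; \sum_{k=0}^{\infty} \frac{X^k}{k!}
\;=\; I + X + \frac{X^2}{2!} + \cdots ,
\qquad X \in \mathfrak{g} \subset \mathfrak{gl}(n,\mathbb{R}).
```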
[1] S. Helgason, "Differential geometry, Lie groups, and symmetric spaces" , Acad. Press (1978)
How to Cite This Entry:
Exponential mapping. A.S. Fedenko (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Exponential_mapping&oldid=12230
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
Trig Integral Identity
June 7th 2011, 12:05 PM #1
May 2008
Trig Integral Identity
Hi I'm doing a dynamics problem, To get it into the form wanted I've reduced the problem to showing that
$<br /> \int_{0}^{\pi/2} cos^2 x sin^8 x dx = \frac{1}{9} \int_{0}^{\pi/2} cos^{10} x dx<br />$
Apologies, the above latex code renders fine on my machine but it looks like the forum doesn't support it
It's meant to say
\int_{0}^{\pi/2} \cos^2 x \sin^8 x dx = \frac{1}{9} \int_{0}^{\pi/2} \cos^{10} x dx
Is there any obvious quick way of doing this, or a hint would be nice. I've thought of integrating it by parts but I can't quite see a nice way of splitting it.
Last edited by mr fantastic; June 7th 2011 at 04:08 PM. Reason: Restored deleted question.
June 7th 2011, 12:15 PM #2
Use parts: $u=\cos(x)~\&~dv=\cos(x)\sin^8(x)$
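Carrying the integration-by-parts hint through (a sketch):

```latex
\text{With } u=\cos x,\; dv=\cos x\,\sin^8 x\,dx
\;\Rightarrow\; du=-\sin x\,dx,\; v=\tfrac{1}{9}\sin^9 x,
\qquad
\int_{0}^{\pi/2}\cos^2 x\,\sin^8 x\,dx
= \Bigl[\tfrac{1}{9}\cos x\,\sin^9 x\Bigr]_{0}^{\pi/2}
+ \tfrac{1}{9}\int_{0}^{\pi/2}\sin^{10} x\,dx
= \tfrac{1}{9}\int_{0}^{\pi/2}\sin^{10} x\,dx ,
```

and the substitution $x \mapsto \tfrac{\pi}{2}-x$ gives $\int_{0}^{\pi/2}\sin^{10} x\,dx=\int_{0}^{\pi/2}\cos^{10} x\,dx$, which is the identity asked for.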
[FOM] Hilbert and conservativeness
Robert Black Mongre at gmx.de
Sat Sep 3 07:32:07 EDT 2005
Panu Raatikainen wrote:
>My hypothesis is the following: I think that Hilbert simply assumed
>that finitistic mathematics is deductively complete with respect to the
>real sentences (i.e., is "real-complete").... Bernays, in any case,
>explicitly assumed
>real-completeness: "In the case of a finitistic proposition however, the
>determination of its irrefutability is equivalent to determination of its
>truth" (Bernays 1930, 259, my italics). One may presume that this also
>reliably reflects Hilbert's view.
No doubt Hilbert (and Bernays) believed that finitary reasoning was
complete for real sentences. And this would follow anyway from other
things they believed, namely that PA was complete, that a finitary
proof of the consistency of PA was possible, and that such a proof
would show that PA was conservative over finitary reasoning for real sentences.
However, to have assumed this in argument would have been a serious
mistake (and not one I think we should attribute to them), since the
Enemy was Brouwer, and Brouwer would (rightly, as it turns out) not
have conceded it.
The quote from Bernays just doesn't entail real completeness. From
the immediate context it's not even clear that he's talking about
*general* statements at all rather than just calculations with
particular numbers. But assume (I think probably correctly) that he
is talking about general statements. The sentence before tells us
that once we have recognized the consistency of an ideal system of
postulates 'it immediately follows that a theorem deduced from them
can never contradict an intuitively recognizable fact [anschaulich
erkennbare Tatsache]'. Note that the word 'Tatsache' would be more
naturally used for a *particular* fact than a general one. This
looks to me *exactly* like the argument of Hilbert's Hamburg lecture:
if AxFx is a theorem of a consistent system extending finitary
reasoning then there can't be an n such that not-Fn is an anschaulich
erkennbare Tatsache, so for every n Fn is an anschaulich erkennbare
Tatsache, so AxFx is true. From the finitary standpoint AxFx is
incapable of negation, so the only sense in which it could be
refutable is for there to be an n such that not-Fn calculates out as
true, and for it to be irrefutable just is for it to be the case that
for every n, Fn calculates out as true. Nothing about real
completeness here.
Note also that intuitionistically (in his 1930 paper Bernays, as he
later noted, didn't distinguish properly between 'finitary' and
'intuitionistic') if F is decidable, then from not-not-AxFx we can
conclude AxFx, i.e. stability but not decidability holds for pi_1 sentences.
More information about the FOM mailing list
Help.. Prove that the limit as x approaches 0 of x^4·cos(2/x) is 0.
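One standard route (a sketch, not an answer from the original thread) is the squeeze theorem:

```latex
\text{For } x \neq 0,\quad |\cos(2/x)| \le 1
\;\Longrightarrow\;
-x^4 \le x^4\cos(2/x) \le x^4 .
\qquad
\text{Since } \lim_{x\to 0}(-x^4) = \lim_{x\to 0} x^4 = 0,
\text{ the squeeze theorem gives } \lim_{x\to 0} x^4\cos(2/x) = 0 .
```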
The question is:
Write a program that reads an unspecified number of integers, determines how many positive and
negative values have been read, and computes the total and average of the input values(not
counting zeros). Your program ends with the input 0. Display the average as a double. Here is a
sample run:
Enter an integer, the input ends if it is 0: 1 2 -1 3 0
The number of positives is 3.
The number of negatives is 1.
The total is 5.
The average is 1.25.
This is the program which I wrote:
#include <iostream>
using namespace std;

int main() {
    int x;
    int numofpos = 0;
    int numofnegs = 0;
    int sum = 0;

    cout << "Enter an integer, the input ends if it is 0: ";
    cin >> x;                  // read the first value
    while (x != 0) {
        if (x > 0) {
            numofpos++;        // count positives
        }
        if (x < 0) {
            numofnegs++;       // count negatives
        }
        sum += x;              // running total (zeros never reach here)
        cin >> x;              // read the next value
    }

    cout << "The number of positives is " << numofpos << "." << endl;
    cout << "The number of negatives is " << numofnegs << "." << endl;
    cout << "The total is " << sum << "." << endl;
    if (numofpos + numofnegs > 0) {
        // cast to double so the average is not truncated by integer division
        cout << "The average is "
             << static_cast<double>(sum) / (numofpos + numofnegs) << "." << endl;
    }
    return 0;
}
When I run the program, I am asked to only input the numbers, but I do not get the output which I asked for. Please keep it basic!
Symbol rate
In digital communications, symbol rate (also known as baud or modulation rate) is the number of symbol changes (waveform changes or signalling events) made to the transmission medium per second using
a digitally modulated signal or a line code. The symbol rate is measured in baud (Bd) or symbols/second. In the case of a line code, the symbol rate is the pulse rate in pulses/second. Each symbol
can represent or convey one or several bits of data. The symbol rate is related to, but should not be confused with, the gross bitrate expressed in bit/second.
A symbol can be described as either a pulse (in digital baseband transmission) or a "tone" (in passband transmission using modems) representing an integer number of bits. A theoretical definition of
a symbol is a waveform, a state or a significant condition of the communication channel that persists for a fixed period of time. A sending device places symbols on the channel at a fixed and known
symbol rate, and the receiving device has the job of detecting the sequence of symbols in order to reconstruct the transmitted data. There may be a direct correspondence between a symbol and a small
unit of data (for example, each symbol may encode one or several binary digits or 'bits') or the data may be represented by the transitions between symbols or even by a sequence of many symbols.
The symbol duration time, also known as unit interval, can be directly measured as the time between transitions by looking into an eye diagram of an oscilloscope. The symbol duration time T[s] can be
calculated as:
$T_s = {1 \over f_s}$
where f[s] is the symbol rate.
A simple example: A baud rate of 1 kBd = 1,000 Bd is synonymous to a symbol rate of 1,000 symbols per second. In case of a modem, this corresponds to 1,000 tones per second, and in case of a line
code, this corresponds to 1,000 pulses per second. The symbol duration time is 1/1,000 second = 1 millisecond.
Relationship to gross bitrate[edit]
The term baud rate has sometimes incorrectly been used to mean bit rate, since these rates are the same in old modems as well as in the simplest digital communication links using only one bit per symbol,
such that binary "0" is represented by one symbol, and binary "1" by another symbol. In more advanced modems and data transmission techniques, a symbol may have more than two states, so it may
represent more than one binary digit (a binary digit always represents one of exactly two states). For this reason, the baud rate value will often be lower than the gross bit rate.
Example of use and misuse of "baud rate": It is correct to write "the baud rate of my COM port is 9,600" if we mean that the bit rate is 9,600 bit/s, since there is one bit per symbol in this case.
It is not correct to write "the baud rate of Ethernet is 100 Mbaud" or "the baud rate of my modem is 56,000" if we mean bit rate. See below for more details on these techniques.
The difference between baud (or signalling rate) and the data rate (or bit rate) is like a man using a single semaphore flag who can move his arm to a new position once each second, so his signalling
rate (baud) is one symbol per second. The flag can be held in one of eight distinct positions: Straight up, 45° left, 90° left, 135° left, straight down (which is the rest state, where he is sending
no signal), 135° right, 90° right, and 45° right. Each signal (symbol) carries three bits of information. It takes three binary digits to encode eight states. The data rate is three bits per second.
In the Navy, more than one flag pattern and arm can be used at once, so the combinations of these produce many symbols, each conveying several bits, a higher data rate.
If N bits are conveyed per symbol, and the gross bit rate is R, inclusive of channel coding overhead, the symbol rate can be calculated as:
$f_s = {R \over N}$
In that case M = 2^N different symbols are used. In a modem, these may be sinewave tones with unique combinations of amplitude, phase and/or frequency. For example, in a 64QAM modem, M = 64. In a
line code, these may be M different voltage levels.
By taking information per pulse N in bit/pulse to be the base-2-logarithm of the number of distinct messages M that could be sent, Hartley^[1] constructed a measure of the gross bitrate R as:
$R = f_s \log_2(M)$
where $f_s$ is the baud rate in symbols/second or pulses/second. (See Hartley's law).
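To make the relationship concrete, here is a minimal Python sketch of Hartley's formula (the function name is mine; the 1200 Bd and M = 4 figures match the V.22bis example further down):

```python
import math

def gross_bit_rate(baud_rate, m_symbols):
    """Hartley's measure: R = f_s * log2(M), in bit/s."""
    return baud_rate * math.log2(m_symbols)

# 1200 symbols/s with M = 4 distinct symbols -> 2400 bit/s
print(gross_bit_rate(1200, 4))  # 2400.0
```

Inverting it recovers the symbol rate formula above, $f_s = R/N$ with $N = \log_2(M)$.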
Modems for passband transmission
Modulation is used in passband filtered channels such as telephone lines, radio channels and other frequency division multiplex (FDM) channels.
In a digital modulation method provided by a modem, each symbol is typically a sine wave tone with a certain frequency, amplitude and phase. The symbol rate, also called the baud rate, is the number of transmitted tones per second.
One symbol can carry one or several bits of information. In voiceband modems for the telephone network, it is common for one symbol to carry up to 7 bits.
Conveying more than one bit per symbol or bit per pulse has advantages. It reduces the time required to send a given quantity of data over a limited bandwidth. A high spectral efficiency in (bit/s)/
Hz can be achieved; i.e., a high bit rate in bit/s although the bandwidth in hertz may be low.
The maximum baud rate for a passband for common modulation methods such as QAM, PSK and OFDM is approximately equal to the passband bandwidth.
Voiceband modem examples:
• A V.22bis modem transmits 2400 bit/s using 1200 Bd (1200 symbol/s), where each quadrature amplitude modulation symbol carries two bits of information. The modem can generate M=2^2=4 different
symbols. It requires a bandwidth of 1200 Hz (equal to the baud rate). The carrier frequency (the central frequency of the generated spectrum) is 1800 Hz, meaning that the lower cutoff frequency
is 1,800 − 1,200/2 = 1,200 Hz, and the upper cutoff frequency is 1,800 + 1,200/2 = 2,400 Hz.
• A V.34 modem may transmit symbols at a baud rate of 3,420 Bd, and each symbol can carry up to ten bits, resulting in a gross bit rate of 3420 × 10 = 34,200 bit/s. However, the modem is said to
operate at a net bit rate of 33,800 bit/s, excluding physical layer overhead.
Line codes for baseband transmission
In case of a baseband channel such as a telegraph line, a serial cable or a Local Area Network twisted pair cable, data is transferred using line codes; i.e., pulses rather than sinewave tones. In
this case the baud rate is synonymous with the pulse rate in pulses/second.
The maximum baud rate or pulse rate for a base band channel is called the Nyquist rate, and is double the bandwidth (double the cut-off frequency).
The simplest digital communication links (such as individual wires on a motherboard or the RS-232 serial port/COM port) typically have a symbol rate equal to the gross bit rate.
Common communication links such as 10 Mbit/s Ethernet (10Base-T), USB, and FireWire typically have a symbol rate slightly lower than the data bit rate, due to the overhead of extra non-data symbols
used for self-synchronizing code and error detection.
J. M. Emile Baudot (1845–1903) worked out a five-level code (five bits per character) for telegraphs which was standardized internationally and is commonly called Baudot code.
More than two voltage levels are used in advanced techniques such as FDDI and 100/1,000 Mbit/s Ethernet LANs, and others, to achieve high data rates.
1,000 Mbit/s Ethernet LAN cables use four wire pairs in full duplex (250 Mbit/s per pair in both directions simultaneously), and many bits per symbol to encode their data payloads.
Digital television and OFDM example
In digital television transmission the symbol rate calculation is:
symbol rate in symbols per second = (Data rate in bits per second × 204) / (188 × bits per symbol)
The 204 is the number of bytes in a packet including the 16 trailing Reed-Solomon error checking and correction bytes. The 188 is the number of data bytes (187 bytes) plus the leading packet sync
byte (0x47).
The effective number of bits per symbol is log2(modulation order) × (forward error correction rate). So, for example, in 64-QAM modulation 64 = 2^6, so the raw bits per symbol is 6. The forward error correction (FEC) rate is
usually expressed as a fraction; i.e., 1/2, 3/4, etc. In the case of 3/4 FEC, for every 3 bits of data, you are sending out 4 bits, one of which is for error correction.
given bit rate = 18096263
Modulation type = 64-QAM
FEC = 3/4
$\text{symbol rate} = \cfrac{18096263}{6\cdot\frac{3}{4}} \cdot \cfrac{204}{188} = \cfrac{18096263}{6} \cdot \cfrac{4}{3} \cdot \cfrac{204}{188} \approx 4363638$
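The arithmetic above can be cross-checked with a short script (values from the example; the helper name is mine):

```python
def dvb_symbol_rate(bit_rate, bits_per_symbol, fec):
    """Symbol rate = bit rate / (bits per symbol * FEC) * 204/188,
    per the DVB formula above (204/188 re-adds the Reed-Solomon overhead)."""
    return bit_rate / (bits_per_symbol * fec) * 204 / 188

print(round(dvb_symbol_rate(18096263, 6, 3/4)))  # 4363638
```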
In digital terrestrial television (DVB-T, DVB-H and similar techniques) OFDM modulation is used; i.e., multi-carrier modulation. The above symbol rate should then be divided by the number of OFDM
sub-carriers in view to achieve the OFDM symbol rate. See the OFDM system comparison table for further numerical details.
Relationship to chip rate
Some communication links (such as GPS transmissions, CDMA cell phones, and other spread spectrum links) have a symbol rate much higher than the data rate (they transmit many symbols, called chips, per
data bit). Representing one bit by a chip sequence of many symbols overcomes co-channel interference from other transmitters sharing the same frequency channel, including radio jamming, and is common
in military radio and cell phones. Despite the fact that using more bandwidth to carry the same bit rate gives low channel spectral efficiency in (bit/s)/Hz, it allows many simultaneous users, which
results in high system spectral efficiency in (bit/s)/Hz per unit of area.
In these systems, the symbol rate of the physically transmitted high-frequency signal rate is called chip rate, which also is the pulse rate of the equivalent base band signal. However, in spread
spectrum systems, the term symbol may also be used at a higher layer and refer to one information bit, or a block of information bits that are modulated using for example conventional QAM modulation,
before the CDMA spreading code is applied. Using the latter definition, the symbol rate is equal to or lower than the bit rate.
Relationship to bit error rate
The disadvantage of conveying many bits per symbol is that the receiver has to distinguish many signal levels or symbols from each other, which may be difficult and cause bit errors in case of a poor
phone line that suffers from low signal-to-noise ratio. In that case, a modem or network adapter may automatically choose a slower and more robust modulation scheme or line code, using fewer bits per
symbol, in view to reduce the bit error rate.
An optimal symbol set design takes into account channel bandwidth, desired information rate, noise characteristics of the channel and the receiver, and receiver and decoder complexity.
Many data transmission systems operate by the modulation of a carrier signal. For example, in frequency-shift keying (FSK), the frequency of a tone is varied among a small, fixed set of possible
values. In a synchronous data transmission system, the tone can only be changed from one frequency to another at regular and well-defined intervals. The presence of one particular frequency during
one of these intervals constitutes a symbol. (The concept of symbols does not apply to asynchronous data transmission systems.) In a modulated system, the term modulation rate may be used
synonymously with symbol rate.
Binary modulation
If the carrier signal has only two states, then only one bit of data (i.e., a 0 or 1) can be transmitted in each symbol. The bit rate is in this case equal to the symbol rate. For example, a binary
FSK system would allow the carrier to have one of two frequencies, one representing a 0 and the other a 1. A more practical scheme is differential binary phase-shift keying, in which the carrier
remains at the same frequency, but can be in one of two phases. During each symbol, the phase either remains the same, encoding a 0, or jumps by 180°, encoding a 1. Again, only one bit of data (i.e.,
a 0 or 1) is transmitted by each symbol. This is an example of data being encoded in the transitions between symbols (the change in phase), rather than the symbols themselves (the actual phase). (The
reason for this in phase-shift keying is that it is impractical to know the reference phase of the transmitter.)
N-ary modulation, N greater than 2
By increasing the number of states that the carrier signal can take, the number of bits encoded in each symbol can be greater than one. The bit rate can then be greater than the symbol rate. For
example, a differential phase-shift keying system might allow four possible jumps in phase between symbols. Then two bits could be encoded at each symbol interval, achieving a data rate of double the
symbol rate. In a more complex scheme such as 16-QAM, four bits of data are transmitted in each symbol, resulting in a bit rate of four times the symbol rate.
Data rate versus error rate
Modulating a carrier increases the frequency range, or bandwidth, it occupies. Transmission channels are generally limited in the bandwidth they can carry. The bandwidth depends on the symbol
(modulation) rate (not directly on the bit rate). As the bit rate is the product of the symbol rate and the number of bits encoded in each symbol, it is clearly advantageous to increase the latter if
the former is fixed. However, for each additional bit encoded in a symbol, the constellation of symbols (the number of states of the carrier) doubles in size. This makes the states less distinct from
one another which in turn makes it more difficult for the receiver to detect the symbol correctly in the presence of disturbances on the channel.
The history of modems is the attempt at increasing the bit rate over a fixed bandwidth (and therefore a fixed maximum symbol rate), leading to increasing bits per symbol. For example, the V.29
specifies 4 bits per symbol, at a symbol rate of 2,400 baud, giving an effective bit rate of 9,600 bits per second.
The history of spread spectrum goes in the opposite direction, leading to fewer and fewer data bits per symbol in order to spread the bandwidth. In the case of GPS, we have a data rate of 50 bit/s
and a symbol rate of 1.023 Mchips/s. If each chip is considered a symbol, each symbol contains far less than one bit (50 bit/s / 1,023 ksymbols/s ≈ 0.00005 bits/symbol).
The complete collection of M possible symbols over a particular channel is called an M-ary modulation scheme. Most modulation schemes transmit some integer number of bits per symbol b, requiring the
complete collection to contain M = 2^b different symbols. Most popular modulation schemes can be described by showing each point on a constellation diagram, although a few modulation schemes (such as
MFSK, DTMF, pulse-position modulation, spread spectrum modulation) require a different description.
Significant condition
In telecommunication, concerning the modulation of a carrier, a significant condition is one of the signal's parameters chosen to represent information.^[2]
A significant condition could be an electrical current (voltage, or power level), an optical power level, a phase value, or a particular frequency or wavelength. The duration of a significant
condition is the time interval between successive significant instants.^[2] A change from one significant condition to another is called a signal transition. Information can be transmitted either
during the given time interval, or encoded as the presence or absence of a change in the received signal.^[3]
Significant conditions are recognized by an appropriate device called a receiver, demodulator, or decoder. The decoder translates the actual signal received into its intended logical value such as a
binary digit (0 or 1), an alphabetic character, a mark, or a space. Each significant instant is determined when the appropriate device assumes a condition or state usable for performing a specific
function, such as recording, processing, or gating.^[2]
| {"url":"http://blekko.com/wiki/Symbol_rate?source=672620ff","timestamp":"2014-04-18T01:12:18Z","content_type":null,"content_length":"38332","record_id":"<urn:uuid:e88c6edd-8069-44c0-988e-51032eeccf7a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
Reactance, denoted X, is a form of opposition that electronic components exhibit to the passage of alternating current (AC) because of capacitance or inductance. In some respects,
reactance is like an AC counterpart of DC (direct current) resistance. But the two phenomena are different in important ways, and they can vary independently of each other. Resistance and reactance
combine to form impedance, which is defined in terms of two-dimensional quantities known as complex numbers.
When alternating current passes through a component that contains reactance, energy is alternately stored in, and released from, a magnetic field or an electric field. In the case of a magnetic
field, the reactance is inductive. In the case of an electric field, the reactance is capacitive. Inductive reactance is assigned positive imaginary-number values. Capacitive reactance is assigned
negative imaginary-number values.
As the inductance of a component increases, its inductive reactance becomes larger in imaginary terms, assuming the frequency is held constant. As the frequency increases for a given value of
inductance, the inductive reactance increases in imaginary terms. If L is the inductance in henries (H) and f is the frequency in hertz (Hz), then the inductive reactance +jX[L], in imaginary-number
ohms, is given by:
+jX[L] = +j(6.2832fL)
where 6.2832 is approximately equal to 2 times pi, a constant representing the number of radians in a full AC cycle, and j represents the unit imaginary number (the positive square root of -1). The
formula also holds for inductance in microhenries (µH) and frequency in megahertz (MHz).
As a real-world example of inductive reactance, consider a coil with an inductance of 10.000 µH at a frequency of 2.0000 MHz. Using the above formula, +jX[L] is found to be +j125.66 ohms. If the
frequency is doubled to 4.000 MHz, then +jX[L] is doubled, to +j251.33 ohms. If the frequency is halved to 1.000 MHz, then +jX[L] is cut in half, to +j62.832 ohms.
As the capacitance of a component increases, its capacitive reactance becomes smaller negatively (closer to zero) in imaginary terms, assuming the frequency is held constant. As the frequency
increases for a given value of capacitance, the capacitive reactance becomes smaller negatively (closer to zero) in imaginary terms. If C is the capacitance in farads (F) and f is the frequency in
Hz, then the capacitive reactance -jX[C], in imaginary-number ohms, is given by:
-jX[C] = -j (6.2832fC)^-1
This formula also holds for capacitance in microfarads (µF) and frequency in megahertz (MHz).
As a real-world example of capacitive reactance, consider a capacitor with a value of 0.0010000 µF at a frequency of 2.0000 MHz. Using the above formula, -jX[C] is found to be -j79.577 ohms. If the
frequency is doubled to 4.0000 MHz, then -jX[C] is cut in half, to -j39.789 ohms. If the frequency is cut in half to 1.0000 MHz, then -jX[C] is doubled, to -j159.15 ohms.
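The two worked examples can be verified numerically; a small sketch (function names are mine, and 2πf is written out instead of the rounded 6.2832):

```python
import math

def inductive_reactance(f_hz, inductance_h):
    """X_L = 2*pi*f*L, in ohms (the +j factor is left implicit)."""
    return 2 * math.pi * f_hz * inductance_h

def capacitive_reactance(f_hz, capacitance_f):
    """X_C = 1/(2*pi*f*C), in ohms (the -j factor is left implicit)."""
    return 1 / (2 * math.pi * f_hz * capacitance_f)

# 10.000 uH at 2.0000 MHz, and 0.0010000 uF at 2.0000 MHz
print(round(inductive_reactance(2.0e6, 10.0e-6), 2))   # 125.66
print(round(capacitive_reactance(2.0e6, 1.0e-9), 3))   # 79.577
```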
This was last updated in September 2005
| {"url":"http://whatis.techtarget.com/definition/reactance","timestamp":"2014-04-17T04:06:38Z","content_type":null,"content_length":"63263","record_id":"<urn:uuid:e53dff24-084b-4b49-8494-11a449bd5f2a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter 18: The Chi-Square Probability Distribution
Using Excel's CHIINV Function

You don't have a chi-square distribution table handy? No need to panic. We can generate critical chi-square scores using Excel's CHIINV function, which has the following characteristics: CHIINV(probability, deg-freedom), where probability = the level of significance and deg-freedom = the number of degrees of freedom. For instance, Figure 18.2 shows the CHIINV function being used to determine the critical chi-square score for α = 0.10 and d.f. = 4 from our previous example.

Figure 18.2: Excel's CHIINV function.

Cell A1 contains the Excel formula =CHIINV(0.10, 4), with the result being 7.779. This probability is underlined in the previous table.

Characteristics of a Chi-Square Distribution

We can see from Figure 18.2 that the chi-square distribution is not symmetrical but rather has a positive skew. The shape of the distribution will change with the number of degrees of freedom, as shown in Figure 18.3. As the number of degrees of freedom increases, the shape of the chi-square distribution becomes more symmetrical. | {"url":"http://my.safaribooksonline.com/book/statistics/9781592576340/chapter-18-the-chi-square-probability-distribution/characteristics_of_a_chisquare","timestamp":"2014-04-18T10:44:57Z","content_type":null,"content_length":"59871","record_id":"<urn:uuid:7e831c7c-d195-4de9-8254-21f782999cc8>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
Unbiased Estimators
February 7th 2009, 02:29 PM
Unbiased Estimators
Let Y_1, Y_2,...,Y_n be a random sample of size n from the pdf f_Y(y;theta)=(1/theta)*e^(-y/theta), y>0:
a) Let theta hat=n*Y_min. Is theta hat unbiased for theta?
For this question, I'm not sure how to generate the pdf for Y_min. Once I get that, I assume I multiply by n, multiply by the other pdf, and solve the integral.
b) Is theta hat=(1/n)summation(from i=1 to n)Y_i unbiased for theta?
February 7th 2009, 07:45 PM
mr fantastic
(a) If the random variable Y has pdf f(y) then the pdf of $Y_{(1)}= \text{min} \{ Y_1, \, Y_2, \, .... \, Y_n\}$ is found as follows:
The cdf of $Y_{(1)}$ is $G(y) = \Pr(Y_{(1)} \leq y) = 1 - \Pr(Y_{(1)} > y)$.
Since $Y_{(1)}$ is the minimum of $Y_1, \, Y_2, \, .... \, Y_n$ it follows that the event $Y_{(1)} > y$ occurs if and only if the events $Y_i > y$ occur for $i = 1, 2, \, .... \, n$.
Since the $Y_i$ are independent and $\Pr(Y_i > y) = 1 - F(y)$ it follows that
$G(y) = \Pr(Y_{(1)} \leq y) = 1 - \Pr(Y_{(1)} > y) = 1 - \Pr(Y_1 > y, \, Y_2 > y, \, .... \, Y_n > y)$
$= 1 - \Pr(Y_1 > y) \cdot \Pr(Y_2 > y) \cdot \, .... \, \cdot \Pr(Y_n > y) = 1 - [1 - F(y)]^n$.
The pdf of $Y_{(1)}$ is given by $g(y) = \frac{dG}{dy}$: $g(y) = n [1 - F(y)]^{n-1} f(y)$.
Now you have to calculate $E(n Y_{(1)})$ and see if it's equal to $\theta$.
(b) Calculate the expected value of the estimator and see whether or not you get $\theta$. | {"url":"http://mathhelpforum.com/advanced-statistics/72356-unbiased-estimators-print.html","timestamp":"2014-04-20T09:06:37Z","content_type":null,"content_length":"9122","record_id":"<urn:uuid:6deca574-174d-468c-9c8f-2b0f72e519bd>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
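A quick Monte Carlo check is a useful complement to the algebra in the thread above (a sketch; θ = 2, the sample size n = 5, and the trial count are arbitrary choices of mine):

```python
import random

random.seed(42)
theta, n, trials = 2.0, 5, 20000

sum_min_est, sum_mean_est = 0.0, 0.0
for _ in range(trials):
    # random.expovariate takes the rate 1/theta, giving mean theta
    sample = [random.expovariate(1 / theta) for _ in range(n)]
    sum_min_est += n * min(sample)     # estimator (a): n * Y_min
    sum_mean_est += sum(sample) / n    # estimator (b): the sample mean

# Both averages land near theta = 2, consistent with unbiasedness
print(sum_min_est / trials, sum_mean_est / trials)
```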
Illustration 8.1
Illustration 8.1: Force and Impulse
So what do we mean by a force? Newton considered a net force as something that caused a time rate of change of momentum, Δp/Δt or dp/dt. However, C.D. Broad (Scientific Thought, 1923) wrote, "It
seems clear to me that no one ever does mean or ever has meant by 'force', rate of change of momentum." So if Newton's statement seems odd it is because you are used to a special—and famous—case of
Newton's general statement of the second law, that of Σ F[net] = ma. Restart.
Consider the force applied by the hand over a small Δt (this happens automatically at t = 1 s). Notice the change in momentum (position is given in meters and time is given in seconds). The arrow
represents the change in momentum. Initially the mass of the cart is 1 kg. Change the mass to 2 kg. Does the change in momentum differ? No! But what does change is the final velocity; it is half of
the velocity when the mass was 1 kg. The same force results in the same change in momentum in the same time interval.
Another way to represent this is in terms of the integral (the area) under a force vs. time graph. Check the box to see this graph. This area is called the impulse, which is a fancy name for Δp. What
can you say about the impulse received by the cart, independent of its mass? Check the second box to find out. Again, it should be, and is, the same.
Consider the animation with the force applied by the hand over a large Δt (this happens automatically at t = 1 s). The difference between the animations is that in large Δt the force acts for a
longer time and therefore the force causes a larger change in momentum. Again, the arrow represents the change in momentum, which is larger than the small Δt case.
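The bookkeeping in the illustration reduces to J = FΔt = Δp: the same force over the same interval gives the same impulse, while the final velocity scales as 1/m. A tiny numeric sketch (the force and interval values are made up for illustration):

```python
# A constant force applied for a short time delivers the same impulse
# (change in momentum) to any cart; only the final velocity differs.
F, dt = 2.0, 0.5                 # newtons, seconds (illustrative values)
impulse = F * dt                 # J = F*dt = delta-p, in kg*m/s

# Carts start at rest, so delta-p = m * v_final
v_final = {m: impulse / m for m in (1.0, 2.0)}
print(impulse, v_final)          # same impulse; the 2 kg cart ends at half the speed
```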
| {"url":"http://www.compadre.org/Physlets/mechanics/illustration8_1.cfm","timestamp":"2014-04-20T18:35:53Z","content_type":null,"content_length":"25527","record_id":"<urn:uuid:cc7211e7-ac97-4bcb-8c3f-c740d57100c8>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
If the output is to be set at 6.0 V, then across the other section (A) the voltage drop needs to be 3.0 V. At 0730, the resistance of the thermistor = 1.5 kΩ. The current flowing through section (A): I = 3 V / (1.5 kΩ + 1.5 kΩ) = 0.001 A. Similarly, the same current flows through section (B): 6.0 V = (Rv + 1.5 kΩ) × (0.001 A), so Rv = 4.5 kΩ.
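The arithmetic in the answer can be checked in a few lines (a sketch; it assumes, as the answer does, that section A is the thermistor in series with a fixed 1.5 kΩ resistor):

```python
# 3.0 V across section A (thermistor 1.5 kohm + fixed 1.5 kohm resistor)
I = 3.0 / (1500.0 + 1500.0)     # current through the divider, amperes
Rv = 6.0 / I - 1500.0           # solve 6.0 V = (Rv + 1.5 kohm) * I
print(I, Rv)                    # about 0.001 A and 4500 ohms
```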
| {"url":"http://openstudy.com/updates/50f773e9e4b027eb5d99bc70","timestamp":"2014-04-20T23:41:35Z","content_type":null,"content_length":"30381","record_id":"<urn:uuid:4beaa73e-f214-4dd9-8829-77ae6c3edc85>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Queueing Systems: Computer applications
Leonard Kleinrock
Queueing Systems Volume 1: Theory Leonard Kleinrock This book presents and develops methods from queueing theory in sufficient depth so that students and professionals may apply these methods to many
modern engineering problems, as well as conduct creative research in the field. It provides a long-needed alternative both to highly mathematical texts and to those which are simplistic or limited in
approach. Written in mathematical language, it avoids the "theorem-proof" technique: instead, it guides the reader through a step-by-step, intuitively motivated yet precise development leading to a
natural discovery of results. Queueing Systems, Volume I covers material ranging from a refresher on transform and probability theory through the treatment of advanced queueing systems. It is divided
into four sections: 1) preliminaries; 2) elementary queueing theory; 3) intermediate queueing theory; and 4) advanced material. Important features of Queueing Systems, Volume 1: Theory include-
* techniques of duality, collective marks
* queueing networks
* complete appendix on z-transforms and Laplace transforms
* an entire appendix on probability theory, providing the notation and main results needed throughout the text
* definition and use of a new and convenient graphical notation for describing the arrival and departure of customers to a queueing system
* a Venn diagram classification of many common stochastic processes
1975 (0 471-49110-1) 417 pp. Fundamentals of Queueing Theory Second Edition Donald Gross and Carl M. Harris This graduated, meticulous look at queueing fundamentals developed from the authors'
lecture notes presents all aspects of the methodology-including Simple Markovian birth-death queueing models; advanced Markovian models; networks, series, and cyclic queues; models with general
arrival or service patterns; bounds, approximations, and numerical techniques; and simulation-in a style suitable to courses of study of widely varying depth and duration. This Second Edition
features new expansions and abridgements which enhance pedagogical use: new material on numerical solution techniques for both steady-state and transient solutions; changes in simulation language and
new results in statistical analysis; and more. Complete with a solutions manual, here is a comprehensive, rigorous introduction to the basics of the discipline. 1985 (0 471-89067-7) 640 pp.
Review: Queueing Systems, Volume 1: Theory
User Review - Dan - Goodreads
I can't say I read the whole thing. The parts that I did read had great mathematical beauty. A good writer, and a deep mathematician. Read full review
Review: Queueing Systems, Volume 1: Theory
User Review - Bob - Goodreads
Used as a textbook by Prof. J. Laurie Snell, Mathematics Department, Dartmouth College for an elective topics course in Operations Research, Fall 1979. Read full review
A Queueing Theory Primer
The Queue G/M/m
Bounds, Inequalities and Approximations
13 other sections not shown
| {"url":"http://books.google.com/books?id=nsgQAQAAMAAJ&q=define&dq=related:ISBN0471491101&source=gbs_word_cloud_r&cad=6","timestamp":"2014-04-21T05:27:22Z","content_type":null,"content_length":"119951","record_id":"<urn:uuid:898539ee-47dd-4c2a-99cd-2c7b42930060>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
subtract -2x^2 - 5x + 1 from 4x^2 + 5x - 1
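For reference, the subtraction works coefficient by coefficient: (4x^2 + 5x - 1) - (-2x^2 - 5x + 1) = 6x^2 + 10x - 2. A minimal sketch (the list encoding [x^2, x, constant] is just a convenient representation):

```python
a = [4, 5, -1]     # 4x^2 + 5x - 1
b = [-2, -5, 1]    # -2x^2 - 5x + 1
diff = [ai - bi for ai, bi in zip(a, b)]   # a minus b, term by term
print(diff)  # [6, 10, -2]  i.e. 6x^2 + 10x - 2
```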
| {"url":"http://openstudy.com/updates/506362b4e4b0da5168bdc55c","timestamp":"2014-04-21T10:15:01Z","content_type":null,"content_length":"34806","record_id":"<urn:uuid:feeeebe8-6f71-42f2-952a-a32bb38ff988>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
regression to the mean
regression to the mean (plural regressions to the mean)
1. (statistics) The phenomenon by which extreme examples from any set of data are likely to be followed by examples which are less extreme; a tendency towards the average of any sample. For example,
the offspring of two very tall individuals tend to be tall, but closer to the average (mean) than either of their parents.
• linear regression
• regression line
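The phenomenon in the definition is easy to see in a simulation: measure a hidden "talent" twice with independent noise, select the extreme first measurements, and the second measurements cluster closer to the mean (a sketch; the 1.5 cutoff and sample size are arbitrary choices of mine):

```python
import random

random.seed(0)
pairs = []
for _ in range(20000):
    talent = random.gauss(0, 1)
    obs1 = talent + random.gauss(0, 1)   # first noisy measurement
    obs2 = talent + random.gauss(0, 1)   # second, with independent noise
    pairs.append((obs1, obs2))

extreme = [(o1, o2) for o1, o2 in pairs if o1 > 1.5]   # extreme first scores
m1 = sum(o1 for o1, _ in extreme) / len(extreme)
m2 = sum(o2 for _, o2 in extreme) / len(extreme)
print(m1, m2)   # m2 sits well below m1, yet still above the population mean 0
```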
Last modified on 18 June 2013, at 22:35 | {"url":"http://en.m.wiktionary.org/wiki/regression_to_the_mean","timestamp":"2014-04-20T13:36:52Z","content_type":null,"content_length":"14912","record_id":"<urn:uuid:0e46ae38-0922-4793-9ae2-5589d9884a2f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00039-ip-10-147-4-33.ec2.internal.warc.gz"} |
Industrial Location
August 30, 2006
Welcome to Industrial Location. This is how it all started.
August 30, 2006
Podz, from the «amazing WordPress.com support», made it all possible.
It’s nice to work with people like Podz.
WordPress.com, give a raise to Podz!
2006-09-01 5:45 pm
August 30, 2006
Francis et al. (1992) view the facility location problem as a “layout problem in the large.” The problems of configuring a single facility and of configuring a system of facilities have much in
common, but they arise in a variety of contexts and they also differ significantly in their characteristics. Because of the multiplicity of facility layout and location problems that have been
addressed in the research literature, Francis et al. (1992) found it is useful to define a number of categories that can be used to classify facility location problems. In classifying facility
location problems, six major elements are considered:
1. new facility characteristics,
2. existing facility locations,
3. new and existing facility interactions,
4. solution space characteristics,
5. distance measure, and
6. objective.
These elements are depicted in Francis et al. (1992), Figures 1.5-1.10, pp. 21-23.
Francis, Richard L., McGinnis, Leon F., Jr. and White, John A. Facilities Layout and Location, 2nd. ed., Prentice Hall, Englewood Cliffs, NJ, 1992.
2006-10-05 10:41 pm
August 29, 2006
Definition of planar location problem, Francis et al. (1992, p. 238)
Assumptions of planar location models, Francis et al. (1992, pp. 238-239, 339)
Application of planar location models, Francis et al. (1992, pp. 238, 240)
Francis, Richard L., McGinnis, Leon F., Jr. and White, John A. Facilities Layout and Location, 2nd. ed., Prentice Hall, Englewood Cliffs, NJ, 1992.
2007-01-29 11:45 pm
August 29, 2006
Definition of Euclidean distance, Francis and White (1974, p. 169)
Examples where Euclidean distance is appropriate, Francis and White (1974, pp. 169, 187)
Properties of Euclidean distance, Francis et al. (1992, p. 189)
Francis, Richard L. and White, John A. Facilities Layout and Location, Prentice Hall, Englewood Cliffs, NJ, 1974.
Francis, Richard L., McGinnis, Leon F., Jr. and White, John A. Facilities Layout and Location, 2nd. ed., Prentice Hall, Englewood Cliffs, NJ, 1992.
2007-01-29 10:25 pm
August 29, 2006
Definition of rectilinear distance, Francis and White (1974, p. 169)
Examples where rectilinear distance is appropriate, Francis and White (1974, pp. 169-170) and Francis et al. (1992. pp. 188-189)
Properties of rectilinear distance, Francis et al. (1992, pp. 189-181)
Francis, Richard L. and White, John A. Facilities Layout and Location, Prentice Hall, Englewood Cliffs, NJ, 1974.
Francis, Richard L., McGinnis, Leon F., Jr. and White, John A. Facilities Layout and Location, 2nd. ed., Prentice Hall, Englewood Cliffs, NJ, 1992.
2006-10-06 4:42 pm
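The two distance measures discussed in the posts above differ only in the norm used; a minimal sketch for planar points:

```python
import math

def euclidean(p, q):
    """Straight-line distance between planar points p and q."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def rectilinear(p, q):
    """Rectilinear (Manhattan) distance: travel along the axes only."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

print(euclidean((0, 0), (3, 4)), rectilinear((0, 0), (3, 4)))  # 5.0 7
```

Rectilinear distance always meets or exceeds Euclidean distance, which is one reason the choice of measure matters in facility location models.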
| {"url":"http://indulocation.wordpress.com/","timestamp":"2014-04-20T04:13:03Z","content_type":null,"content_length":"51540","record_id":"<urn:uuid:dfee938c-1384-4f07-beae-6e68ab683cd0>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about scoring on Code and Football
October 3, 2012
So how does defense affect an EP curve?
Posted by foodnearsnellville under Defense, Football, Modeling, Statistics | Tags: Brian Burke, expected points, Michael Vick, Neil Payne, scoring |
Ed Bouchette has a good article, with Steelers defenders talking about Michael Vick. Neil Payne has two interesting pieces (here and here) on how winning early games is correlated with the final
record for the season.
Brian Burke has made an interesting attempt to break down EP (expected points) data to the level of individual teams. I’ve contributed to the discussion there. There is a lot to the notion that the slope of the EP curve reflects the ease with which a team can score: the shallower the slope, the easier it is for a team to score.
Note that the defensive contribution to an EP curve will depend on how expected points are actually scored. In a Keith Goldner type Markov chain model (a “raw” EP model), a defense cannot affect its
own EP curve. It can only affect an opponent’s curve. In a Romer/Burke type EP formulation, the defensive effect on a team’s EP curve and the opponent’s EP curve is complex. Scoring by the defense
has an “equal and opposite” effect on team and opponent EP, the slope being affected by frequency of the scoring as a function of yard line. Various kinds of stops could also affect the slope as
well. Since scoring opportunities increase for an offense the closer to the goal line the offense gets, an equal stop probability per yard line would end up yielding nonequal scoring chances, and
thus slope changes.
This is something I’ve wanted to test ever since I got my hands on play-by-play data, and to be entirely honest, doing this test is the major reason I acquired play-by-play data in the first place.
Linearized scoring models are at the heart of the stats revolution sparked by the book, The Hidden Game of Football, as their scoring model was a linearized model.
The simplicity of the model they presented, and the ability to derive it from pure reason (as opposed to hard core number crunching), make me want to name it in some way that denotes that fact: perhaps the Standard model, the Common model, or the Logical model. Yes, scoring the 0 yard line as -2 points and the 100 as 6, with everything in between as a linearly proportional relationship between those two, has to be regarded as a starting point for all sane expected points analysis. Further, because it can be derived logically, it can be used at levels of play that don’t have 1 million fans analyzing everything: high school play, or even JV football.
From the scoring models people have come up with, we get a series of formulas that are called adjusted yards per attempt formulas. They have various specific forms, but most operate on an assumption that yards can be converted to a potential to score. Gaining yards, and plenty of them, increases scoring potential, and as Brian Burke has pointed out, AYA style stats are directly correlated with winning.
With play-by-play data, converted to expected points models, some questions can now be asked:
1. Over what ranges are expected points curves linear?
2. What assumptions are required to yield linearized curves?
3. Are they linear over the whole range of data, or over just portions of the data?
4. Under what circumstances does the linear assumption break down?
We’ll reintroduce data we described briefly before, but this time we’ll fit the data to curves.
One simple question that can change the shape of an expected points curve is this:
How do you score a play using play-by-play data?
I’m not attempting, at this point, to come up with “one true answer” to this question, I’ll just note that the different answers to this question yield different shaped curves.
If the scoring of a play is associated only with the drive on which the play was made, then you yield curves like the purple one above. That would mean punting has no negative consequences for the scoring of a play. Curves like this I’ve been calling “raw” formulas, “raw” models. Examples of these kinds of models are Keith Goldner’s Markov chain model and Bill Connelly’s equivalent points model.
If a punt can yield negative consequences for the scoring of a play, then you get into a class of models I call “response” models, because the whole of the curve of a response model can be thought of as
response = raw(yards) – fraction*raw(100 – yards)
The fraction would be a sum of things like fractional odds of punting, fractional odds of a turnover, fractional odds of a loss on 4th down, etc. And of course in a real model, the single fractional term above is a sum of terms, some of which might not be related to 100 – yards, because that’s not where the ball would end up. A punt fraction term, for example, would be more like fraction(punt)*raw(60 – yards).
Raw models tend to be quadratic in character. I say this because Keith Goldner fitted first and 10 data to a quadratic here. Bill Connelly’s data appear quadratic to the eye. And the raw data set
above fits mostly nicely to a quadratic throughout most of the range.
And I say mostly because the data above appear sharper than quadratic close to the goal line, as if there is “more than quadratic” curvature less than 10 yards to go. And at the risk of fitting to
randomness, I think another justifiable question to look at is how scoring changes the closer to the goal line a team gets.
That sharp upward kink plays into how the shape of response models behaves. We’ll refactor the equation above to get at, qualitatively, what I’m talking about. We’re going to add a constant factor to the last term in the response equation, because people will calculate the response differently:
response = raw(yards) – fraction*constant*raw(100 – yards)
Now, in this form, we can talk about the shape of curves as a function of the magnitude of “constant”. As constant grows larger, the more the back end of the curve takes on the character of the last
10 yards. A small constant and you yield a less than quadratic and more than linear curve. A mid sized constant yields a linearized curve. A potent response function yields curves more like those of
David Romer or Brian Burke, with more than linear components within 10 yards on both ends of the field. Understand, this is a qualitative description. I have no clues as to the specifics of how they
actually did their calculations.
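That qualitative claim can be sketched numerically. The quadratic raw model and the fraction/constant values below are hypothetical stand-ins, not fits to real data; the point is only that once the total back-end weight fraction*constant reaches 1, the quadratic terms of raw(yards) and raw(100 – yards) cancel exactly, and the response curve comes out linear:

```python
def raw(yards):
    # Hypothetical quadratic "raw" EP model: raw(0) = -2, raw(100) = 6,
    # with some extra curvature toward the goal line. Not a real fit.
    return -2.0 + 0.05 * yards + 0.0003 * yards ** 2

def response(yards, fraction, constant):
    # response = raw(yards) - fraction*constant*raw(100 - yards)
    return raw(yards) - fraction * constant * raw(100.0 - yards)

def second_diff(f, y=50.0, h=10.0):
    # Crude curvature probe: zero for a straight line.
    return f(y + h) - 2.0 * f(y) + f(y - h)

print(second_diff(raw))                              # raw model: curved
print(second_diff(lambda y: response(y, 0.5, 1.0)))  # weak back end: still curved
print(second_diff(lambda y: response(y, 0.5, 2.0)))  # fraction*constant = 1: linear
```

A small constant leaves residual curvature; a large enough one flattens the curve, which is consistent with the conclusion below that linearized models come out of response-style formulations.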
I conclude though, that linearized models are specific to response function depictions of equivalent point curves, because you can’t get a linearized model any other way.
So what is our best guess at the “most accurate” adjusted yards per attempt formula?
In my data above, fitting a response model to a line yields an equation. Turning the values of that fit into an equation of the form:
AYA = (yards + α*TDs – β*Ints)/Attempts
Takes a little algebra. To begin, you have to make a decision on how valuable your touchdown is going to be. Some people use 7.0 points, others use 6.4 or 6.3 points. If TD = 6.4 points, then
delta points = 6.4 + 1.79 – 6.53 = 1.79 + 0.07 = 1.86 points
α = 1.86 points/ 0.0653 = 28.5 yards
turnover value = (6.53 – 1.79) + (-1.79) = 6.53 – 2*1.79 = 2.95 points
β = 2.95 / 0.0653 = 45.2 yards
If TDs = 7.0 points, you end up with α = 37.7 yards instead.
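The unit conversion can be replayed mechanically. The slope (0.0653 points/yard) and the point deltas below are the post's fitted figures; only the yards conversion is computed here:

```python
slope = 0.0653                      # points per yard, from the fit above

delta_points = 1.86                 # net extra TD value at TD = 6.4 points
turnover_points = 6.53 - 2 * 1.79   # 2.95 points

alpha = delta_points / slope            # TD coefficient, in yards
beta = turnover_points / slope          # interception coefficient, in yards
alpha_7 = (delta_points + 0.6) / slope  # with TD = 7.0 instead of 6.4

print(round(alpha, 1), round(beta, 1), round(alpha_7, 1))  # 28.5 45.2 37.7
```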
It’s interesting that this fit yields a value of an interception (in yards) almost identical to the original THGF formula. Touchdowns are closer in value to the NFL passer rating than to THGF’s new passer rating. And although I’m critical of Chase Stuart’s derivation of the value of 20 for PFR’s AYA formula, the adjustment they made does seem to be in the right direction.
So where does the model break down?
Inside the 10 yard line. It doesn’t accurately depict the game as it gets close to the goal line. It’s also not down and distance specific in the way a more sophisticated equivalent points model can be. A stat like expected points added gets much closer to the value of an individual play than does an AYA style stat. In terms of a play’s effect on winning, you need win stats, such as Brian’s WPA or ESPN’s QBR, to break things down (though I haven’t seen ESPN give us the QBR of a play just yet, which WPA can do).
Update: corrected turnover value.
Update 9/24/11: In the comments to this link, Brian Burke describes how he and David Romer score plays (states).
Summary: The NFL passer rating can be considered to be the sum of two adjusted yards per attempt formulas, one cast in units of yards and the other using catches as a measure of yards. We show, in
this article, how to build such a model by construction.
My previous article has led to some very nice emails back and forth with the Pro Football Focus folks. In thinking about ways to explain the complexities of the original NFL formula, it occurred to
me that there are two yardage terms because the NFL passer rating can be regarded as the sum of two adjusted yards per attempt formulas. Once you begin thinking in those terms, it’s not all that hard
to derive an NFL style formula.
Our basic formula will be
<1> AYA = (yards + α*TDs – β*Ints)/Attempts
The Hidden Game of Football’s new passer rating is a formula of this kind, with α = 10 and β = 45. Pro Football Reference’s AY/A has an α value of 20 and a β value of 45. On this blog, we’ve shown
that these formulas are tightly associated with scoring models.
Using the relationship Yards = YPC*Catches, we then get
<2> AYA = (YPC*Catches + α*TDs – β*Ints)/Attempts
Since the point of the exercise is to end up with an NFL-esque formula, we’ll multiply both sides of equation <2> by 20/YPC.
<3> 20*AYA/YPC = (20*Catches + 20*α*TDs/YPC – 20*β*Ints/YPC)/Attempts
Adding equations <1> and <3>, we now have
<4> (20/YPC + 1)*AYA = (20*Catches + Yards + [20/YPC + 1]*α*TDs – [20/YPC + 1]*β*Ints)/Attempts
and if we now define RANKING as the left hand side of equation <4>, A as [20/YPC + 1]*α and B as [20/YPC + 1]*β, formula <4> becomes
RANKING = (20*Catches + Yards + A*TDs – B*Ints)/Attempts
Look familiar? This is the same form as the NFL passer rating, when stripped of its multiplier and the additive coefficient. To complete the derivation, multiply both sides of the equation by 100/24
and then add 50/24 to both sides. You end up with
RANKING = 100/24*[(20*Catches + Yards + A*TDs - B*Ints)/Attempts] + 50/24
which is the THGF form of the NFL passer rating, when A = 80 and B = 100.
If YPC equals 11.4, then the conversion coefficient (20/YPC + 1) becomes 2.75. The relationship between the scoring model coefficients α and β and the NFL style passer model coefficients A and B is:
A = 2.75*α
B = 2.75*β
Just for the sake of argument, we’re going to set alpha to 25, pretty close to the 23.3 that we get from a linearized Brian Burke model, and beta we’ll set to 60, 6.7 yards less than the 66.7 yards we calculated from the linearized Brian Burke scoring model. Using those values, we get 68.75 for A and 165 for B. Rounding the first value to the nearest 10 and rounding B down a little, our putative NFL style model becomes:
RANKING = (20*Catches + Yards + 70*TDs – 160*Ints)/Attempts
Note that formulas <1> and <2> do not contribute equally to the final sum. Equation <2> is weighted by the factor (20/YPC)/(20/YPC + 1) and equation <1> is weighted by the factor 1/(20/YPC + 1). When YPC is about 11.4 yards, the contribution of equation <2> to the total is about 63.6% and equation <1> adds about 36.4% to the total. Complaints that the NFL formula is heavily driven by completion percentage are correct.
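The whole construction can be checked in a few lines, using the rounded conversion coefficient of 2.75 (20/11.4 + 1 ≈ 2.754) as the post does:

```python
ypc = 11.4
k = round(20.0 / ypc + 1.0, 2)   # conversion coefficient; rounds to 2.75

alpha, beta = 25.0, 60.0         # scoring-model coefficients chosen above
A, B = k * alpha, k * beta
print(A, B)                      # 68.75 165.0, rounded to 70 and 160 in the text

# Relative weights of the catches-based and yards-based halves:
w_catches = (k - 1.0) / k        # equation <2>'s share of the sum
w_yards = 1.0 / k                # equation <1>'s share
print(round(100 * w_catches, 1), round(100 * w_yards, 1))  # 63.6 36.4
```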
Using the values α = 20 and β = 45, which are values found in Pro Football Reference’s version of adjusted yards per attempt, we then get values of A and B that are 55 and 123.75 respectively.
Rounding down to the nearest 10, and plugging these values into the NFL style formula yields
RANKING = (20*Catches + Yards + 50*TDs – 120*Ints)/Attempts
Note that the two models in question have smaller A values than the core of the traditional NFL model (80) and larger B values than the traditional NFL model (100). This probably reflects the times.
The 1970s were a defensive era. It was harder to score then. As it becomes harder to score, the magnitude of the TD term should increase. TD/Interception ratios were smaller in the 1950s, 1960s, and
1970s. As interceptions were more a part of the job, perhaps their effect wasn’t as valued when the original NFL formula was constructed.
Afterward: in many respects, this article is just the reverse of the arguments here. However, the proof by construction yields some useful formulas, and in my opinion, is easier to explain.
Update: more exhaustive derivation of the NFL passer rating.
The value of a touchdown is a phrase used in formulas like this one
PASSER RANKING = (yards + 10*TDs – 45*Ints)/attempts
where the first thing that comes to mind is that the TD is worth 10 yards and the interception is worth 45 yards. But is it? A TD after all, is worth about 7 points, and in The Hidden Game of
Football formulation, a turnover is worth 4 points. Therefore, a TD is worth considerably more than a turnover, but the formula values the TD less. How is that?
Well, let me reassure you that in the new passer rating of The Hidden Game of Football, the value of a touchdown is a constant, equal to 6.8 points or 85 yards. The 4 point interception is usually valued at 45 yards instead of 50, because most interceptions don’t make it back to the line of scrimmage.
The field itself is zero valued at the 25 yard line. That means once you get to the one yard line, you have one yard to go of field and the TD is worth an additional 10 yards of value. That’s where
the 10 comes from. It’s not the value of the touchdown, but the additional value of the touchdown not measured on the field itself.
But what does this additional term actually mean?
If you check out the figures above, Figure 1 is introduced in The Hidden Game of Football on page 102, and features in just about all the descriptions of worth up until page 186, where we run into
this text. The authors appear to be carving out a new formula from the refactored NFL formula they introduce in their book.
Awarding a 80 yard bonus for a touchdown pass makes no sense either. It’s like treating every TD pass as though it were a 80-yard bomb. Yet, the majority of touchdown passes are from inside the
25 yard line.
It’s not the bonus we’re objecting to-after all, the whole point of throwing a pass is to get the ball into the end zone-but the size of the bonus is way out of kilter. We advocate a 10 yard
bonus for each touchdown pass. It’s still higher than the yardage on a lot of TD passes, but it allows for the fact that yardage is a lot harder to get once a team gets inside the opponent’s 25.
and without quite saying so, the authors introduce the model in Figure 2. To note, the value of the touchdown and the yardage value merge in Figure 1, but remain apart in Figure 2. This value, which
I’ve called a barrier potential previously, is the product of a chance to score that’s less than a 1.0 probability as you reach the goal line. If your chances maximize at merely 80%, you’ll end up
with a model with a barrier potential.
If I have an objection to the quoted argument, it’s that it encourages the whole notion of double counting the touchdown “yardage”. The appropriate way to figure out the slope of any linear scoring
model is by counting all scoring at a particular yard line, or within a particular part of the field (red zone scoring, for example, which could be normalized to the 10 yard line). These are scoring
models, after all, not touchdown models.
Where did 6.8 come from, instead of 7?
Whereas before I was thinking it was 6 points for the TD and 0.8 points for the extra point, I’m now thinking it came from the same notions that drove the score value of 6.4 for Romer and 6.3 for Burke. It’s 7 points less the value of the runback. I’ve used 6.4 points to derive scoring models for PFR’s AYA and the NFL passer rating, but in retrospect, those aren’t appropriate uses. These models tend to zero in value around 25 yards, whereas the Romer model has much higher initial slopes and reaches positive values faster than these linear models.
This value can be calculated, but the resulting formula can’t be solved directly. It can be solved iteratively, though, with a pretty short piece of code.
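The original code isn't reproduced here, but one plausible reconstruction of the iteration (my reading, not necessarily the author's exact calculation) goes like this: the touchdown value x sets the linear model's slope (x + 2)/100, the slope sets the value of the opponent's assumed post-kickoff field position (its own 25 below), and that value feeds back into x = 7 − EP(25):

```python
def td_value(runback_to=25.0, iterations=50):
    # Fixed-point iteration: start at the full 7 points and let the
    # runback correction settle. runback_to=25 is an assumption.
    x = 7.0
    for _ in range(iterations):
        slope = (x + 2.0) / 100.0
        opponent_ep = slope * runback_to - 2.0  # value of the opponent's ball
        x = 7.0 - opponent_ep
    return x

print(round(td_value(), 3))  # 6.8
```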
And the solution is close enough to 6.8 that it’s easy enough to ignore the difference. Plugging in 7 points for the touchdown, and 20 and 29.1 yards respectively for the barrier potentials, yields almost no change in the touchdown value for the PFR AYA model and the NFL passer rating formula, and we end up with these scoring model plots.
After the previous post in this series, I realized there is a scoring model buried within the NFL passer rating formula. Pretty much any equation of the form
RATE = (yards + a*TDs – b*(INTS + FUMBLES) – sacks)/plays
implies the existence of one of these models. Note that this form suggests a single barrier potential for touchdowns, while there could equally well be one for the zero-yardage side (“the sack side”) of the equation. To plot the one suggested by Pro Football Reference’s adjusted yards per attempt formula,
RATE = (yards + 20*TDs – 45*Ints)/attempts
we see this
The refactored NFL passer rating has the form
RATE = 100/24*2.75*[(yards + 29.1*TDs – 36.4*Ints)/attempts] + 50/24
when the completion and yards terms are combined using yards per completion as a constant. The term in brackets is a scoring model. To figure out the model, some algebra is needed to determine the
value of the line at 100 yards.
0.291(x + 2) + (x + 2) = 6.4 + 2 = 8.4
1.291x + 2.582 = 8.4
1.291x = 5.818
x ≈ 4.5
This yields a slope of 0.065, a barrier potential of 1.9 points or so, and a value for a turnover of 2.5 points. Plotted, it looks like this
and is not all that much different from the implied model in the PFR aya formula.
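Replaying that algebra numerically, with the barrier expressed through the fitted slope:

```python
barrier_yards = 29.1                           # TD coefficient from the refactored form
x = 8.4 / (1.0 + barrier_yards / 100.0) - 2.0  # line's value at 100 yards
slope = (x + 2.0) / 100.0                      # points per yard
barrier_points = barrier_yards * slope
turnover_points = x - 2.0

print(round(x, 1), round(slope, 3), round(barrier_points, 1), round(turnover_points, 1))
# 4.5 0.065 1.9 2.5
```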
To get to the idea that the barrier potential represents the difference between a model that allows a 100% chance to score and a model that has an imperfect chance of scoring, we’re going to build a scoring potential model from just a single data point. Understand, since a line is determined by two points, and -2 at 0 yards is generally assumed, the slope of the line can be found by solving for the expected points at a single yard line.
If on first down at the 1 yard line, you have an 80% chance of scoring a touchdown, a 15% chance of scoring a field goal, and a 5% chance of just losing possession, then solving for the expected points on first and one, you get
expected points = 0.8*6.4 + 0.15*3 = 5.57 points
value of yards at 100 = 5.57*100/99 ≈ 5.63 points
barrier potential = 6.4 – 5.63 = 0.77 points = 10.1 yards
turnover value = 5.63 – 2 = 3.63 points ≈ 47.6 yards
and expressed as a passer ranking formula, you might get something like
RATE = (yards + 10.1*TDs – 48*Int)/attempts
and plotted, look something like this:
The synthetic first and one data above differ little from the real first and one data given here, but PFR’s adjusted yards per attempt is a formula that averages data over all downs, as opposed to
being the data for a single down.
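The single-point construction above, as code. The 80/15/5 split is the synthetic assumption; rounding the extrapolated line value to 5.63 before converting, as the text does, reproduces its figures exactly:

```python
p_td, p_fg = 0.80, 0.15              # assumed first-and-one outcomes; 5% lost possession
ep_at_99 = p_td * 6.4 + p_fg * 3.0   # expected points at the 99 yard line: 5.57
value_at_100 = round(ep_at_99 * 100.0 / 99.0, 2)  # extrapolated line value: 5.63

slope = (value_at_100 + 2.0) / 100.0 # points per yard
barrier = 6.4 - value_at_100         # ~0.77 points
alpha = barrier / slope              # TD coefficient, in yards
beta = (value_at_100 - 2.0) / slope  # turnover value, in yards

print(round(alpha, 1), round(beta, 1))  # 10.1 47.6
```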
The size of the barrier potential is a measure of how hard it is to score. The smaller the barrier potential, the easier it is to score. When the barrier potential is zero, scoring approaches 100% as the team approaches the goal line. Since real teams don’t score that reliably, in more realistic scoring models barrier potentials tend to appear.
It is entirely possible that the larger barrier potentials of the NFL passer formula merely reflect the times in which the model was created. The 1970s was an era dominated by defense and a running
game. It was harder to score then. It would be interesting to calculate scoring rates for first and one situations from, say, 1965 to 1971, when the NFL passer formula was created, and see if the
implied formula actually matches the data of the times.
Other issues these models suggest: since they are easy to construct with very modest data sets, they can be individualized for college and high school conferences, leagues, and even teams. They suggest trends that can be useful for analyzing particular times and ages. Note that as scoring gets harder and barrier potentials grow larger, the value of the turnover grows smaller. It’s also not that hard to set up equations representing a high scoring team playing one that doesn’t score much at all. Since the slope of the line of the low scoring team is less than that of the high scoring team, turnover value becomes dependent on field position, as the slopes don’t cancel. The turnover becomes more valuable toward the goal line of the low scoring team.
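A toy illustration of that last claim, with made-up slopes for the two teams (the 0.06 and 0.10 points per yard below are hypothetical):

```python
def turnover_value(y, slope_low=0.06, slope_high=0.10):
    # y = yards from the low-scoring team's own goal line.
    ep_low = slope_low * y - 2.0               # what the low scorer gives up
    ep_high = slope_high * (100.0 - y) - 2.0   # what the high scorer gains
    return ep_low + ep_high

# Equal slopes (the classic linear model): a constant 4 points everywhere.
print(turnover_value(10.0, 0.08, 0.08), turnover_value(90.0, 0.08, 0.08))
# Unequal slopes: the turnover is worth more near the low scorer's goal line.
print(turnover_value(10.0), turnover_value(90.0))
```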
In chemistry, people will speak of the chemical potential of a reaction. That a mix of chemicals has a potential doesn’t mean the reaction will happen. There is an activation energy that prevents it.
To note, the reaction energy can’t exceed the chemical potential of a reaction. Energy is conserved, and can neither be created nor destroyed.
Likewise, common models of the value of yardage assign a scoring potential to yards. I know of five models offhand, of which the simplest is the linear model (the one discussed in The Hidden Game of Football). We’re going to derive this model by argument from first principles. There is also Keith Goldner’s Markov chain model (see here and here), David Romer’s quadratic spline model (see here or just search for “David Romer football” via a good Internet search engine), the linear model of Football Outsiders in 2003, and Brian Burke’s expected points analysis (see here, here, here, and here).
And just as in thermodynamics, where energy is conserved, this scoring potential has to be a conserved quantity, else the logic of the model falls apart.
One of the points of talking about the linear model is that it applies to all levels of football, not just the pros. Second, since it doesn’t require people to break down years worth of play-by-play data to understand it, the logic is useful as a first approximation. Third, I suspect some clever math geek could derive all the other models as Taylor series expansions where the first term in the Taylor series is the linear model itself. At one level, it has to be regarded as the foundation of all the scoring potential models.
Deriving the linear model.
If I start at the one yard line and then proceed back into my own end zone and get tackled, I’ve just lost 2 points. This is true regardless of the level of football being played. If instead I run 99 yards to my opponent’s end zone, I score 6 points instead. That means the scale of value in the common linear model is 8 points, and if we count each yard as equal in scoring potential, we start at -2 points in my end zone and 6 in my opponent’s, gaining 1 point of value every 12.5 yards. I do not have to crunch any numbers to assume this model as a first approximation.
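That derivation, written down as a function: -2 at my own goal line, 6 at the opponent's, 0.08 points per yard:

```python
def expected_points(yards_from_own_goal):
    # Common linear model: 8 points of scale spread evenly over 100 yards.
    return 8.0 * yards_from_own_goal / 100.0 - 2.0

print(expected_points(0))     # -2.0, tackled in my own end zone
print(expected_points(100))   # 6.0, the opponent's end zone
print(expected_points(25))    # 0.0, THGF's zero-value point
print(expected_points(12.5) - expected_points(0))  # 1.0, one point per 12.5 yards
```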
Other models derive from analyzing a large data set of games for down, distance to go, and time situations. They can follow all the consequences of being in those down/distance combinations and then derive real probabilities of scoring. We’re going to call those models EP, EPA, or NEP models. The value in these models is that, rather than assuming some probability of scoring, average scoring probabilities are built into the model itself.
What’s the value of a turnover?
In the classic linear model, as explained by The Hidden Game of Football, the cost of a turnover is 4 points, because the difference in value between the two teams is 4 points everywhere on the field. The moment the model becomes nonlinear, that no longer applies. Both Keith Goldner’s model and the FO model predict that the value of a turnover at the line of scrimmage is minimized in the middle of the field and maximized at the ends.
4 points is worth 50 yards. We’ll come back to that in a bit.
What’s the value of a possession?
It’s the value of not turning the ball over, and since we know the value of a turnover, in the linear model, possession is worth 4 points. In other models, this may change.
The value of the possession in the linear model is always 4 points, even at the end of the game. To explain, there are two kinds of models that predict two kinds of things.
scoring potential models predict scoring
win probability models predict winning
The scoring potential of the possession does not change as the game is ending. The winning potential does change and should change markedly as the game begins to end.
How much is a down worth?
This is an important issue and not readily studied without a data heavy model. I’d suggest following a couple of the Brian Burke links above, they shed a terrific amount of light on the topic.
Essentially, the value of a down at a particular time and distance is the difference in expected points at that time and distance between those downs.
How much is a touchdown worth?
We’ll start with the expected points models, because it becomes easy to see how they work. EPA or NEP style models have a total assigned value for the score (6.4 pts Romer, 6.3 Burke), so the value
of scoring a touchdown is the value of the score minus the value of the position on the field. It has to be that way because the remaining value is a function of field position et al. If this isn’t
true, you violate conservation of a scoring potential.
Likewise, in the linear model, the value of the touchdown is equivalent, due to linearity and scoring potential conservation, to the yards required to score the touchdown. This means if the defense
recovers the ball on the opponent’s 5 (i.e. the defense has just handed you 95 yards of value), and your team runs for 3 yards, and then passes 2 yards for the score, that the value of the
touchdown is 2 yards, or 0.16 points, and the value of the entire drive is 5 yards.
In this context, the classic interpretation of what THGF calls the new rating system doesn’t make a lot of sense.
RANKING = ( yards + 10*TDs – 45*Ints)/attempts
I say so because the yards already encompass the value of the touchdown(s). In this context, the second term could be regarded as an approximation of the value of the extra point (0.8 points of value
in this case). And 45 instead of 50 is an estimation that the average INT changes field position by about 5 yards.
Finally, this analysis raises the question of what model Pro Football Reference’s adjusted yards per attempt actually describes. I’ll try to answer it, however. If you adjust the value of yards to create a “barrier potential” term to describe the touchdown, you get the following bit of algebra
0.2(x + 2) + (x + 2) = value of true scoring difference = 6.4 + 2 = 8.4
1.2x + 2.4 = 8.4
1.2x = 6.0
x = 5
So, if you adjust the slope so the value of the line at 100 equals 5 instead of 6, then the average value of a yard becomes 0.07 points, and the cost of a turnover then becomes 3 points, or about
43 yards.
How much is a field goal worth?
The same logic that applies for a touchdown also applies for a field goal. It’s the value of the score minus the value of the particular field position, down, etc from which the goal is scored. Note
that in a linear model, the value is actually negative for a field goal scored from the 37.5 yard line in. And this actually makes sense, because the sum of the score values, as the number of scores grows large, should approach zero in a well balanced EPA/NEP model. In the linear model, I suspect it will approach some nonzero number, which would be an approximation of the average deviation from the best fit EPA/NEP function itself.
Okay, so what if high scoring teams have this zero scoring value? What’s going on?
This is the numerator of a rate term, akin to a shooting percentage in the NBA. But since EP models are already averaged, the proper analogy is to the shooting percentage minus the league average shooting percentage. And to continue the analogy a bit further, to score in the NBA you not only need to shoot (not necessarily at a good percentage), you also need to make your own shot. Teams that put themselves into position to score are the equivalent; they make their own shot. I’ll also note this +/- value is probably also a representation of the TD to FG ratio.
Scoring potential models are part of the new wave of football analysis and the granddaddy of all scoring potential models is the linear model discussed extensively in The Hidden Game of Football.
In these models, scoring potential is a conserved quantity and can neither be created nor destroyed. Some of the consequences of this conservation are discussed above.
October 21, 2011
The valid range of a linearized scoring model
Posted by foodnearsnellville under Uncategorized | Tags: adjusted yards per attempt, Bill Connelly, Brian Burke, David Romer, Keith Goldner, NFL, NFL passer rating, scoring, scoring model, The Hidden
Game of Football |
September 28, 2011
From adjusted yards per attempt to a NFL style passer rating
Posted by foodnearsnellville under Data, Football, Modeling, Statistics | Tags: adjusted yards per attempt, AY/A, Brian Burke, NFL, NFL passer rating, Pro Football Focus, Pro Football Reference,
scoring, scoring model |
September 7, 2011
The value of a touchdown
Posted by foodnearsnellville under Books and Articles, Code, Football, Modeling, Statistics | Tags: adjusted yards per attempt, Bob Carroll, Brian Burke, David Romer, John Thorn, new passer rating,
NFL, NFL passer rating formula, Pete Palmer, Pro Football Reference, scoring, scoring model, scoring potential, The Hidden Game of Football |
September 3, 2011
The NFL passer rating as a scoring model
Posted by foodnearsnellville under Football, Modeling, Statistics | Tags: adjusted yards per attempt, NFL, NFL passer rating, Pro Football Reference, scoring, scoring model, scoring potential |
September 1, 2011
The NFL: on scoring and scoring potential
Posted by foodnearsnellville under Blogging, Football, Statistics | Tags: Aaron Schatz, Brian Burke, David Romer, expected points, expected points added, Football Outsiders, Keith Goldner, net
expected points, new rating system, possession, score, scoring, scoring models, scoring potential, scoring potential models, The Hidden Game of Football, turnover, win probability, win probability
added, yardage |
Evidence of methodological bias in hospital standardised mortality
ratios: retrospective database study of English hospitals
Objective To assess the validity of case mix adjustment methods used to derive standardised mortality ratios for hospitals, by examining the consistency of relations between risk factors and
mortality across hospitals.
Design Retrospective analysis of routinely collected hospital data comparing observed deaths with deaths predicted by the Dr Foster Unit case mix method.
Setting Four acute National Health Service hospitals in the West Midlands (England) with case mix adjusted standardised mortality ratios ranging from 88 to 140.
Participants 96
Main outcome measures Presence of large interaction effects between case mix variable and hospital in a logistic regression model indicating non-constant risk relations, and plausible mechanisms that
could give rise to these effects.
Results Large significant (P≤0.0001) interaction effects were seen with several case mix adjustment variables. For two of these variables—the Charlson (comorbidity) index and emergency
admission—interaction effects could be explained credibly by differences in clinical coding and admission practices across hospitals.
Conclusions The Dr Foster Unit hospital standardised mortality ratio is derived from an internationally adopted/adapted method, which uses at least two variables (the Charlson comorbidity index and
emergency admission) that are unsafe for case mix adjustment because their inclusion may actually increase the very bias that case mix adjustment is intended to reduce. Claims that variations in
hospital standardised mortality ratios from Dr Foster Unit reflect differences in quality of care are less than credible.
The longstanding need to measure quality of care in hospitals has led to publication of league tables of standardised mortality ratios for hospitals in several countries, including England, the
United States, Canada, the Netherlands, and Sweden.^1 ^2 ^3 ^4 ^5 ^6 Standardised mortality ratios for hospitals in these countries have been derived with methods heavily influenced by the seminal
work of Jarman et al,^1 who first developed standardised mortality ratios for National Health Service (NHS) hospitals in England in 1999, and by the subsequent methodological developments by the Dr
Foster Unit.^7 ^8 The Dr Foster Unit methodology is used by Dr Foster Intelligence, a former commercial company that is now a public-private partnership, to annually publish standardised mortality
ratios for English hospitals in the national press.
A consistent, albeit controversial,^9 ^10 ^11 inference drawn from the wide variation in published standardised mortality ratios for hospitals is that this reflects differences in quality of care. In
the 2007 hospital guide for England,^12 Dr Foster Intelligence portrayed standardised mortality ratios for hospitals as “an effective way to measure and compare clinical performance, safety and
quality.” Although an increasing international trend exists for standardised mortality ratios for hospitals to be developed and published,^13 ^14 we must be sure that the underlying case mix
adjustment method is fit for purpose before inferences about quality of care are drawn.
Case mix adjustment is widely used to overcome imbalances in patients’ risk factors so that fairer comparisons between hospitals can be made. Methods for case mix adjustment are often criticised because they can fail to include all the important case mix variables and may not adequately adjust for a variable because of measurement error.^10 ^11 Despite these criticisms, case mix adjustment is widely done because the adjusted comparisons, although imperfect, are generally considered to be less biased than unadjusted comparisons.
However, a third, more serious problem exists that can affect the validity of case mix adjustment. In a study that compared unadjusted and case mix adjusted treatment effects from non-randomised
studies against treatment effects from randomised trials, Deeks et al observed that on average the unadjusted and not the adjusted non-randomised results agreed best with the randomised comparisons.^
15 In this instance, case mix adjustment had increased bias in the comparisons. Nicholl pointed out that case mix adjustment can create biased comparisons when underlying relations between case mix
variables and outcome are not the same in all the comparison groups.^16 This phenomenon has been termed “the constant risk fallacy,” because if the risk relations are assumed to be constant, but in
fact are not, then case mix adjustment may be more misleading than crude comparisons.^16 Two key mechanisms can give rise to non-constant risk relations. The first mechanism involves differential
measurement error, and the second one involves inconsistent proxy measures of risk. Each is illustrated below.
Consider two hospitals that are identical in all respects (case mix, mortality, quality of care) except that one hospital (B) systematically under-records comorbidities (measurement error) in its
patients. If mortality is case mix adjusted for comorbidity then the expected (but not the observed) number of deaths in hospital B will be artificially depleted, because its patients seem to be at
lower risk than they really are. The effect of case mix adjustment is to erroneously inflate the standardised mortality ratio (observed number of deaths/expected number of deaths × 100) for that
hospital. The box presents a numerical example of this scenario.
Example of differential measurement error
To illustrate the constant risk fallacy we construct hypothetical hospital mortality data with a single case mix variable—a comorbidity index (CMI) that takes values 0 to 6. The relation between
in-hospital mortality and CMI value has been modelled for the population, estimating risks of in-hospital death of 0.02, 0.04, 0.08, 0.14, 0.25, 0.40, and 0.57 in the seven CMI categories (equivalent
to an odds ratio of two for each unit increase in the index).
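These seven risks follow directly from a logistic model anchored at a risk of 0.02 for CMI=0 with an odds ratio of 2 per unit increase. A short check (the variable names are ours, not from the paper):

```python
import math

# Logistic model implied by the box: risk 0.02 at CMI = 0,
# odds ratio of 2 for each unit increase in the comorbidity index
b0 = math.log(0.02 / 0.98)   # intercept (log odds at CMI = 0)
b1 = math.log(2.0)           # log odds ratio per CMI unit

def risk(cmi):
    """In-hospital death risk for a given comorbidity index value."""
    return 1 / (1 + math.exp(-(b0 + b1 * cmi)))

risks = [round(risk(c), 2) for c in range(7)]
# reproduces the quoted risks 0.02, 0.04, 0.08, 0.14, 0.25, 0.40, 0.57
```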
Consider two hospitals, A and B, both of which admit 1000 patients a year in each of the seven CMI categories. Assume that the case mix of the groups of patients and the quality of care in the two
hospitals are identical and that 1500 deaths are observed in both hospitals. Consider that hospital A correctly codes the comorbidity index, whereas hospital B tends to under-code, such that in
hospital B for each true CMI the following are recorded:
• CMI=0: all are coded as 0
• CMI=1: 50% coded 0, 50% coded 1
• CMI=2: 33% coded 0, 33% coded 1, 33% coded 2
• CMI=3: 25% coded 0, 25% coded 1, 25% coded 2, 25% coded 3
• CMI=4: 20% coded 0, 20% coded 1, 20% coded 2, 20% coded 3, 20% coded 4
• CMI=5: 20% coded 1, 20% coded 2, 20% coded 3, 20% coded 4, 20% coded 5
• CMI=6: 20% coded 2, 20% coded 3, 20% coded 4, 20% coded 5, 20% coded 6.
The consequence of this is that rather than observing 1000 patients in each of the seven CMI categories, in hospital B the numbers instead are 2283, 1483, 1184, 850, 600, 400, and 200. It thus looks
as if a difference exists in the distribution of the CMI between the two hospitals, with hospital B having on average a lower CMI. Expected numbers of deaths are then computed from the reported (rather than true) CMI, using the modelled risk values, to derive the standardised mortality ratios.
The expected number of deaths in hospital A is (1000×0.02)+(1000×0.04)+(1000×0.08)+(1000×0.14)+(1000×0.25)+(1000×0.40)+(1000×0.57)=1500, yielding a standardised mortality ratio (observed/expected
deaths) of 1500/1500=100.
The expected number of deaths in hospital B is (2283×0.02)+(1483×0.04)+(1184×0.08)+(850×0.14)+(600×0.25)+(400×0.40)+(200×0.57)=743, yielding a standardised mortality ratio of 1500/743=202.
It thus wrongly seems that the mortality in hospital B is twice that in hospital A. Adjustment has changed a fair comparison (1500 v 1500) into a biased comparison. This is an illustration of the
constant risk fallacy. Furthermore, modelling the data by using logistic regression reveals that whereas the relation between CMI and mortality in hospital A is the same as in the population (odds
ratio=2.0 per category increase), the relation in hospital B is weaker (odds ratio=1.6 per category increase in CMI) (as would be expected through misclassification introducing attenuation bias) and
the interaction between hospital B and CMI is clinically and statistically significant (P<0.001). If CMI was measured with equal measurement error in all hospitals the problem would be one of
residual confounding caused by regression dilution or attenuation bias (in which case the standardised mortality ratios would be preferable to crude mortality but will not fully adjust for the risk
factor). Because measurement errors differ among hospitals, the constant risk fallacy (where standardised mortality ratios may be more misleading than the crude mortality comparison) is a real possibility.
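The box’s arithmetic can be reproduced in a few lines (a sketch of the worked example only; the counts and risks are those given above):

```python
# Modelled death risks for CMI categories 0-6, from the box
risk = [0.02, 0.04, 0.08, 0.14, 0.25, 0.40, 0.57]
observed = 1500  # deaths observed in each hospital

# Hospital A codes the CMI correctly: 1000 patients per category
exp_a = sum(1000 * r for r in risk)                      # = 1500
# Hospital B: patient counts by *recorded* CMI after under-coding
coded_b = [2283, 1483, 1184, 850, 600, 400, 200]
exp_b = sum(n * r for n, r in zip(coded_b, risk))        # about 743

smr_a = 100 * observed / exp_a   # = 100
smr_b = 100 * observed / exp_b   # about 202
```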
The second mechanism can occur even in the absence of measurement error. Consider, for example, emergency admissions to hospitals. Patients admitted as emergencies are usually regarded as being
seriously ill, but if an individual hospital often admits the “walking wounded” (who are not seriously ill) as emergencies, then the risk associated with being an emergency admission in that hospital
will be reduced. Variation in this practice across hospitals leads to a non-constant relation between emergency admission and mortality. The standardised mortality ratio for hospitals that admit more
walking wounded will receive an unjustified downward case mix adjustment, because elsewhere emergencies are generally the sickest patients and the case mix adjustment will endeavour to reflect this.
A general feature of these two mechanisms that allows identification of case mix variables prone to the constant risk fallacy is that the value recorded for a given patient would change if he or she
presented at a different hospital. Comorbidity would be under-coded in one hospital compared with another, whereas the patient may be admitted (and thus coded) as an emergency in some hospitals and
elsewhere treated and discharged without being admitted at all. Case mix variables such as age, sex, and deprivation (on the basis of the patients’ home address) are not prone to these two mechanisms
because their values do not change with different hospitals.
A simple way to screen case mix variables for their susceptibility to non-constant risk relations, on a scale sufficient to bias the case mix adjustment method, is to do a statistical test for
interaction effects between hospital and case mix variables in a logistic regression model that predicts death in hospital.^16 If large interaction effects are not found then no apparent evidence of
non-constant risk relations exists and the constant risk fallacy (within the limits of statistical inference) may be discounted (although the other challenges in interpreting standardised mortality
ratios, such as omitted covariates, will still remain^9 ^10). However, if a large interaction effect is found, then this indicates a non-constant risk relation. If this is due to inconsistent
measurement practices across hospitals (as in the comorbidity index example in the box), it will result in a misleading adjustment to standardised mortality ratios. If the interaction occurs because
the covariate genuinely has different relations with death across hospitals (as in the emergency admission example above), this too will result in a misleading adjustment to standardised mortality
ratios. Alternatively, the interaction could occur if different levels of the covariate were associated with different standards of care across hospitals, in which case the standardised mortality
ratio will appropriately reflect the average of the associated increases in mortality. Unfortunately, no statistical method exists for teasing apart these non-exclusive explanations, but they can be
explored and resolved, to some extent, by doing “detective work” seeking a likely cause for the observed interaction effect.
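In the simplest binary case, the interaction effect is just the ratio of the within-hospital odds ratios. A toy calculation with invented counts (none of these numbers come from the study) makes the idea concrete:

```python
def odds_ratio(deaths_exp, survivors_exp, deaths_unexp, survivors_unexp):
    """Odds ratio for death associated with a binary case mix variable."""
    return (deaths_exp / survivors_exp) / (deaths_unexp / survivors_unexp)

# Hypothetical counts: deaths and survivors among emergency and
# non-emergency admissions in two hospitals
or_a = odds_ratio(200, 1000, 50, 1000)   # hospital A: OR = 4.0
or_b = odds_ratio(100, 1000, 50, 1000)   # hospital B: OR = 2.0

# Ratio of within-hospital odds ratios = the hospital-by-variable
# interaction; a value near 1 would indicate a constant risk relation
interaction = or_b / or_a   # 0.5: the risk relation is not constant
```

Here emergency admission carries only half the odds multiplier in hospital B, so a single national coefficient cannot fit both hospitals.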
In this paper we screened the Dr Foster Unit method,^1 which is used to derive standardised mortality ratios for English hospitals and which has been adopted/adapted internationally,^1 ^2 ^3 ^4 ^5 ^6
^12 for its susceptibility to the constant risk fallacy. We first tested for the presence of large interaction effects and then, in respect of two key case mix variables (comorbidity index and
emergency admission), we did detective work to seek likely explanations.
Dr Foster Unit case mix adjustment method
The Dr Foster Unit case mix adjustment method uses data derived from routinely collected hospital episode statistics.^12 These data include admission date, discharge date, in-hospital mortality, and
primary and secondary diagnoses according to ICD-10 (international classification of disease, 10th revision) codes on every inpatient admission (or spell) in NHS hospitals in England. The Dr Foster
Unit standardised mortality ratio is derived from logistic regression models, which are based on 56 primary diagnosis groups derived from hospital episode statistics data accounting for 80% of
hospital mortality. Covariates for case mix adjustment in the model are sex, age group, method of admission (emergency or elective), socioeconomic deprivation, primary diagnosis, the number of
emergency admissions in the previous year, whether the patient was admitted to a palliative care specialty, and the Charlson (comorbidity) index (range 0-6), which is derived from secondary ICD-10
diagnoses codes.^17
Study hospitals and data sources
This study involves four hospitals, representing a wide range of the published case mix adjusted Dr Foster Unit standardised mortality ratios (88-143, for the period April 2005-March 2006), which had
purchased the Dr Foster Intelligence Real Time Monitoring computer system and so were able to provide anonymised output data (including case mix variables, the Dr Foster Unit predicted risk of death,
and whether a death occurred) for this study. The hospital with the lowest standardised mortality ratio (88) is a large teaching hospital (University Hospital North Staffordshire, 1034 beds); those
with higher ratios were one large teaching hospital (123: University Hospitals Coventry and Warwickshire, 1139 beds) and two medium sized acute hospitals (127: Mid Staffordshire Hospitals, 474 beds;
143: George Eliot Hospital, 330 beds).
Our analyses are based on data and predictions from the Real Time Monitoring system, which were available for the following time periods: April 2005 to March 2006 (year 1), April 2006 to March 2007
(year 2), and April to October 2007 (part of year 3—the most recent data available at the time of the study).
Statistical analyses
We constructed logistic regression models to test for interactions to assess whether the case mix adjustment variables used in the Dr Foster Unit method were prone to the constant risk fallacy. The
Dr Foster Unit dataset includes the predicted risk of death for each patient, generated from the Dr Foster Unit case mix adjustment model, which we included (after logit transformation) as an offset
term in a logistic regression model of in-hospital deaths. To this model we added terms for each hospital (thus allowing for the differences between hospitals in adjusted mortality) and then
interaction terms for each hospital and case mix variable in turn (which estimate the degree to which the relation between the case mix variable and mortality in each hospital differed from that
implemented in the Dr Foster Unit case mix adjustment model).
Interaction terms that produced odds ratios close to one indicated that the relation between the case mix variable and mortality was constant and so not prone to the constant risk fallacy. The
presence of large significant interactions suggested that the case mix variable was potentially prone to the constant risk fallacy, because its relation to mortality differed from the Dr Foster Unit
national estimate. We tested the significance of interactions by using likelihood ratio tests; we deemed P values ≤0.01 to be statistically significant. We report the odds ratios, including 95%
confidence intervals and P values, for each hospital-variable interaction over the three years.
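A minimal simulation sketch of this screening procedure is given below. It is not the authors’ actual model: the data are simulated, the logistic fit is a hand-rolled Newton-Raphson with the national prediction as an offset, and 6.63 is the chi-squared critical value (1 df, P=0.01):

```python
import numpy as np

rng = np.random.default_rng(0)

# "National" case mix model (assumed): logit(p) = b0 + log(2) * cmi
b0, b1_nat = np.log(0.02 / 0.98), np.log(2.0)

# Simulate two hospitals; in hospital B the true odds ratio per CMI unit
# is 1.6 rather than the national 2.0 (a non-constant risk relation)
n = 20000
cmi = rng.integers(0, 7, size=2 * n).astype(float)
hosp_b = np.repeat([0.0, 1.0], n)
slope = np.where(hosp_b == 1, np.log(1.6), b1_nat)
death = (rng.random(2 * n) < 1 / (1 + np.exp(-(b0 + slope * cmi)))).astype(float)

# The nationally predicted log odds enter the model as a fixed offset
offset = b0 + b1_nat * cmi

def fit(X, y, offset):
    """Newton-Raphson logistic regression with an offset; returns
    (coefficients, deviance)."""
    beta = np.zeros(X.shape[1])
    for _ in range(30):
        p = 1 / (1 + np.exp(-(offset + X @ beta)))
        step = np.linalg.solve((X * (p * (1 - p))[:, None]).T @ X, X.T @ (y - p))
        beta += step
        if np.abs(step).max() < 1e-10:
            break
    p = 1 / (1 + np.exp(-(offset + X @ beta)))
    return beta, -2 * np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

ones = np.ones(2 * n)
_, dev0 = fit(np.column_stack([ones, hosp_b]), death, offset)
beta1, dev1 = fit(np.column_stack([ones, hosp_b, hosp_b * cmi]), death, offset)

lrt = dev0 - dev1              # likelihood ratio statistic, 1 df
or_int = np.exp(beta1[2])      # interaction odds ratio, roughly 1.6/2.0
flagged = lrt > 6.63           # variable screened as prone to the fallacy
```

The fitted interaction odds ratio recovers the simulated divergence in risk relations, and the likelihood ratio test flags the variable for further detective work.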
Selected variables
The following patient level variables included in the Dr Foster Unit adjustment were available and tested: Charlson index (0-6, a measure of comorbidity), age (10 year age bands), sex (male/female),
deprivation (fifths), primary diagnosis (1 of 56), emergency admission (no/yes), and the number of emergency admissions in the previous year (0, 1, 2, 3, or more). We excluded the palliative care
variable from our analyses because no admissions to this specialty occurred in the hospitals. We excluded less than 1.5% of all the data from the Real Time Monitoring system because of missing data
(for example, age not known, deprivation not known). The total numbers of admissions for each year were 96
For two prominent case mix variables—the Charlson index of comorbidity and emergency admission—we did detective work to seek explanations for the presence of large interaction effects, as described below.
Investigation of interaction effects seen with Charlson index
Patients with a lower Charlson index (less comorbidity) have lower expected mortality in the Dr Foster Unit model. Therefore, if the Charlson index was systematically under-coded in some hospitals
they would be assigned artificially inflated standardised mortality ratios. We investigated the possibility of such misclassification in the Charlson index in two ways.
Firstly, we investigated changes in the depth of clinical coding (number of ICD-10 codes for secondary diagnoses identified per admission) over time within the hospitals and examined the hypothesis
that the increase would be most rapid in those starting with the lowest Charlson indices (as they have the greatest headroom to improve through better coding). We formed the contingent hypothesis
that any such change would be accompanied by diminished interactions between Charlson index and mortality across hospitals.
Secondly, we considered that if clinical coding was similarly accurate in all hospitals, then differences in the Charlson index should reflect genuine differences in case mix profiles. We postulated
that hospitals with higher Charlson indices were therefore more likely to admit older patients and to have higher proportions of emergency admissions, longer lengths of stay, and a higher crude
mortality. If this was not the case, then this finding would corroborate a hypothesis that differences in the Charlson indices across hospitals were primarily attributable to systematic differences
in clinical coding practices.
Investigation of interaction effects seen with emergency admission
In the original analyses by Jarman et al, the emergency admission variable was noted to be the best predictor of hospital mortality.^1 We explored this variable in more depth by investigating the
proportion of emergency admissions that were recorded as having zero length of stay (being admitted and discharged on the same day). Although clinically valid reasons may exist to admit patients for
zero stay, and some patients may die on admission, the practice of admitting less seriously ill patients has been recognised as a strategy that is increasingly used in the NHS to comply with accident
and emergency waiting time targets.^18 ^19 This potentially leads to a reduction in the mortality risk associated with emergency admissions in hospitals that more often follow this practice. We
examined the magnitude of differences in the proportion of emergency admissions with zero length of stay both within hospitals over time and between hospitals, as well as the observed risk associated
with zero and non-zero lengths of stay.
We determined the extent to which case mix variables used in the Dr Foster Unit method had a non-constant relation with mortality across hospitals by examining the odds ratios of interaction terms
for hospital and case mix variables derived from a logistic regression model (with death as the outcome). Table 1 reports the odds ratios of tests of interactions for six case mix variables.
Two variables (sex and deprivation) had no significant interaction with hospitals, indicating that these two variables are safe to use for case mix adjustment because they are not prone to the
constant risk fallacy. However, the remaining variables had significant interactions. The number of previous emergency admissions was significant in year 2; the three hospitals with high standardised
mortality ratios had 6% to 10% increases in odds of death with every additional previous emergency admission over and above the allowance made in the Dr Foster Unit model. Age had a significant
interaction in year 2, but the effect was small—a 10 year age change was associated with an additional 1% increase in odds of death across the hospitals. Primary diagnosis also had significant
interactions in all three years (results not shown as 56 categories and four hospitals produce 224 interaction terms).
The Charlson index had significant interaction effects in year 1 and year 2 but not in year 3. A unit change in the Charlson index was associated with a wide range of effect sizes—up to a 7% increase
in odds of death (George Eliot Hospital, year 1) and an 8% reduction in odds of death (University Hospital North Staffordshire, year 2) over and above that accounted for in the Dr Foster Unit model.
Across the full range of the Charlson index, these correspond to increases in odds of death of 50% or decreases of 39%.
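The compounding behind these full-range figures is simple arithmetic: the per-unit effect applied across all six steps of the index:

```python
per_unit_increase = 1.07   # George Eliot Hospital, year 1
per_unit_decrease = 0.92   # University Hospital North Staffordshire, year 2

# Compounded over the full 0-6 range of the Charlson index
rise = (per_unit_increase ** 6 - 1) * 100   # about a 50% increase in odds
fall = (1 - per_unit_decrease ** 6) * 100   # about a 39% decrease in odds
```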
We found significant interactions with being an emergency admission in all years across all hospitals. The effect sizes ranged from 38% (University Hospital North Staffordshire, year 3) to 355% (Mid
Staffordshire Hospitals, year 2) increases in odds of death above those accounted for in the Dr Foster Unit equation.
Investigation of interaction effects seen with Charlson index
The figure shows the mean Charlson index for the four study hospitals. The hospital with a low standardised mortality ratio (University Hospital North Staffordshire) had the highest mean Charlson index
(1.54), whereas the three hospitals with high standardised mortality ratios had mean Charlson index values near or below the median (1).
An indicator of completeness of coding is depth of coding—the number of ICD-10 codes per admission (table 2). University Hospital North Staffordshire had the highest mean coding depth and Charlson index in all years; more importantly, as coding depth increased over the years in all hospitals (table 3), the interaction between the Charlson index and hospitals became smaller and statistically non-significant (table 1). We also explored the extent to which differences in the Charlson index between hospitals reflect genuine differences in case mix profiles (table 2).
Although University Hospital North Staffordshire serves a more deprived population with a higher proportion of male patients than the other hospitals, the percentage of emergency admissions,
readmissions, length of stay, and crude mortality are at variance with the view that this hospital treats a systematically “sicker” population of patients. The evidence from table 2 is therefore
inconsistent with the explanation that differences in the Charlson index reflect genuine differences in case mix profiles.
Investigation of interaction effects seen with emergency admission
We investigated the emergency admission variable in more depth by considering proportions of emergency/non-emergency admissions with a zero length of stay (days). Combining data across hospitals, the
crude in-hospital mortality for non-emergency admissions was 1/1000 for zero length of stay and 23/1000 for non-zero length stay; the mortality for emergency admissions was 46/1000 for zero length of
stay and 107/1000 for non-zero length of stay. Table 4 shows that the proportion of emergency admissions with zero length of stay varied between 10.4% and 20.4% across hospitals. The hospital with
the lower case mix adjusted standardised mortality ratio (University Hospital North Staffordshire) had the highest proportion of zero stay emergency patients in years 2 and 3 (20.4% and 17.7%),
whereas the hospital with the highest standardised mortality ratio (George Eliot Hospital) had the lowest proportion of zero stay emergency patients in all three years (10.4%, 11.0%, and 12.9%). The
large variations in proportions of emergency/non-emergency patients with zero length of stay indicate that systematically different admission policies were being adopted across hospitals. The net
effect of this is that the relation between an emergency admission and risk of death varies substantially across hospitals (that is, the risk of death is not constant), apparently because of
differences in hospital admission policies.
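The mortality rates above imply the size of the resulting distortion. In the sketch below, the two per-patient death rates (46/1000 for zero length of stay, 107/1000 otherwise) are taken from the combined data; the 15% national mix of zero-stay emergencies is our assumption for illustration:

```python
rate_zero_los = 0.046        # emergency admissions, zero length of stay
rate_longer = 0.107          # emergency admissions, non-zero length of stay
national_zero_share = 0.15   # assumed national mix of zero-stay emergencies

# A case mix model with a single "emergency" coefficient effectively
# assigns every emergency admission the national blended risk
expected_rate = (national_zero_share * rate_zero_los
                 + (1 - national_zero_share) * rate_longer)

def smr(zero_share, n=10_000):
    """SMR for a hospital whose only difference is its zero-stay share."""
    observed = n * (zero_share * rate_zero_los
                    + (1 - zero_share) * rate_longer)
    return 100 * observed / (n * expected_rate)

smr_high_share = smr(0.204)   # most zero-stay emergencies: SMR near 97
smr_low_share = smr(0.104)    # fewest zero-stay emergencies: SMR near 103
```

Per-patient risks are identical in both hypothetical hospitals; the gap of about six SMR points arises purely from admission policy.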
The league tables of mortality for NHS hospitals in England from Dr Foster Intelligence,^12 compiled by using case mix adjustment methods that have been internationally adopted or adapted,^2 ^3 ^4 ^5
^6 have been published annually since 2001 and continue to raise concerns about the wide variations in standardised mortality ratios for hospitals and quality of care.^20 Unsurprisingly perhaps,
similar concerns have been raised in other countries that have developed their own standardised mortality ratios.^5 ^21 Before such concerns can be legitimately aired, we must ensure that methods
used by Dr Foster Unit are fit for purpose and not potentially misleading.^8 ^9
Our results show that a critical, hitherto often overlooked, methodological concern is that the relation between risk factors used in case mix adjustment and mortality differs across the hospitals,
leading to the constant risk fallacy. This phenomenon can increase the very bias that case mix adjustment is intended to reduce.^16 The routine use of locally collected administrative data for case
mix variables makes this a real concern.^16 A serious problem is that no statistical fix exists for overcoming the challenges of variables susceptible to this constant risk fallacy.^16 It has to be
investigated by a more painstaking inquiry.
As the Dr Foster Unit method, like other case mix adjustment methods, does not report screening variables for non-constant risk,^1 ^12 we investigated seven variables and found that three of
them—age, sex, and deprivation—were safe in this respect. However, we found that emergency admission, the Charlson (comorbidity) index, primary diagnosis, and the number of emergency admissions in
the previous year had clinically and statistically significant interaction effects. For two variables, the Charlson index and emergency admission, we found credible evidence to suggest that they are
prone to the constant risk fallacy caused by systematic differences in clinical coding and emergency admission practices across hospitals.
For the Charlson index variable, we showed how the interaction effects seemed to relate to the number of ICD-10 codes (for secondary diagnoses) per admission—that is, depth of clinical coding.^22
Overall, we reasoned that as the increased depth of coding (over time) was accompanied by a decrease in the interaction effect and as differences in the Charlson index did not reflect genuine
differences in case mix profiles, we could reasonably conclude that the Charlson index is prone to the constant risk fallacy largely as a result of differential measurement error from clinical coding
practices. Drawbacks in determining the Charlson index by using administrative datasets have been reported previously.^23 Hospitals with a lower depth of coding were disadvantaged because this was
associated with a lower Charlson index, which in turn underestimated the expected mortality and so inflated the standardised mortality ratio. For the emergency admission variable, we found strong
evidence of systematic differences across hospitals in numbers of patients admitted as emergencies who were admitted and discharged on the same day. The higher risk usually associated with
emergencies would be diluted by the inclusion of zero length of stay admissions in some hospitals. Thus, we judge these two variables—the Charlson index and emergency admission—to be unsafe to use in
case mix adjustment methods because, ironically, their inclusion may have increased the bias that case mix adjustment aims to reduce. Further research to understand the mechanisms behind the other
variables with large interactions is clearly warranted.
Given that our analyses are based on a subset of hospitals in the West Midlands, our study urgently needs to be replicated with more hospitals (for example, at the national level) to examine the
extent to which our findings are generalisable. Furthermore, given the widespread use of standardised mortality ratios for hospitals in other countries (such as the United States,^2 ^3 Canada,^4 the
Netherlands,^5 and Sweden^6), with similar methods to those of the Dr Foster Unit, we are concerned that these comparisons may also be compromised by the possibility of the constant risk fallacy. In
addition, given the widespread use of case mix adjusted outcome comparisons in health care (for example, for producing standardised mortality ratios to compare intensive care units^8), we urge that
all case mix adjustment methods should screen (and report) variables for their susceptibility to the constant risk fallacy. A similar analysis could also be done within a single hospital, such that a
logistic regression model with an offset term could be used to discover which set of the case mix variables has any systematic relation with mortality over and above the original adjustments. This
may be an effective way for a hospital to identify variables that are susceptible to the constant risk fallacy and may give hospitals, especially those with a high standardised mortality ratio, a
focal point for their subsequent investigations. Hospitals with low standardised mortality ratios may also find this analysis useful in increasing their understanding of their standardised mortality ratios.
Our findings suggest that the current Dr Foster Unit method is prone to bias and that any claims that variations in standardised mortality ratios for hospitals reflect differences in quality of care
are less than credible.^8 ^12 Indeed, our study may provide a partial explanation for understanding why the relation between case mix adjusted outcomes and quality of care has been questioned.^24
Nevertheless, despite such evidence, assertions that variations in standardised mortality ratios reflect quality of care are widespread,^25 resulting, unsurprisingly, in institutional stigma by
creating enormous pressure on hospitals with high standardised mortality ratios and provoking regulators such as the Healthcare Commission to react.^20
We urge that screening case mix variables for non-constant risk relations needs to become an integral part of validating case mix adjustment methods. However, even with apparently safe case mix
adjustment methods, we caution that we cannot reliably conclude that the differences in adjusted mortality reflect quality of care without being susceptible to the case mix adjustment fallacy,^10
because case mix adjustment by itself is devoid of any direct measurement of quality of care.^26
What is already known on this topic
• Case mix adjusted hospital standardised mortality ratios are used around the world in an effort to measure quality of care
• However, valid case mix adjustment requires that the relation between each case mix variable and mortality is constant across all hospitals (a constant risk relation)
• Where this requirement is not met, case mix adjustment may be misleading, sometimes to the degree that it will actually increase the very bias it is intended to reduce
What this study adds
• Non-constant risk relations exist for several case mix variables used by the Dr Foster Unit to derive standardised mortality ratios for English hospitals, raising concern about the validity of
the ratios
• The cause of the non-constant risk relation for two case mix variables—a comorbidity index and emergency admission—is credibly explained by differences in clinical coding and hospitals’ admission practices
• Case mix adjustment methods should screen case mix variables for non-constant risk relations
Editor's note: The embargoed copy of this article, sent to the media, wrongly attributed to Dr Foster Intelligence the authorship of the standardised mortality ratio method that is considered here.
The article, as published here, now attributes this standardised mortality ratio method to the Dr Foster Unit at Imperial College.
This independent study was commissioned by the NHS West Midlands Strategic Health Authority. We are grateful for the support of all the members of the steering group, chaired by R Shukla. We
especially thank the staff of participating hospitals, in particular P Handslip. Special thanks go to Steve Wyatt for his continued assistance with the project. We also thank our reviewers for their
helpful suggestions.
Contributors: MAM drafted the manuscript. MAM and GR did the preliminary analyses. JJD designed and did the statistical modelling to test for interactions, with support from AG. RJL and AJS provided
guidance and support. MC provided medical advice and did preliminary investigations into the Charlson index. All authors contributed to the final manuscript. MAM is the guarantor.
Funding: The study was part of a study commissioned by the NHS West Midlands Strategic Health Authority. AG is supported by the EPSRC MATCH consortium.
Competing interests: None declared.
Ethical approval: Not needed.
Cite this as: BMJ 2009;338:b780
Jarman B, Gault S, Alves B, Hider A, Dolan S, Cook A, et al. Explaining differences in English hospital death rates using routinely collected data. BMJ 1999;318:1515-20.
4. Canadian Institute for Health Information. HSMR: a new approach for measuring hospital mortality trends in Canada. Ottawa: CIHI, 2007.
Heijink R, Koolman X, Pieter D, van der Veen A, Jarman B, Westert G. Measuring and explaining mortality in Dutch hospitals; the hospital standardized mortality rate between 2003 and 2005. BMC Health Serv Res 2008;8:73.
6. Koster M, Jurgensen U, Spetz C, Rutberg H. [Standardized hospital mortality as quality measurement in healthcare centres and hospitals.] Lakartidningen 2008;19(8):1391-6. (In Swedish.)
Moran JL, Solomon PJ. Mortality and other event rates: what do they tell us about performance? Crit Care Resusc 2003;5:292-304.
Lilford R, Mohammed MA, Spiegelhalter D, Thomson R. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet 2004;363:1147-54.
Iezzoni LI. The risks of risk adjustment. JAMA 1997;278:1600-7.
Jarman B. Medical Meccas: which hospital is best? Newsweek International 2006 Oct 30 (available at www.newsweek.com/id/45117).
Deeks JJ, Dinnes J, D’Amico R, Sowden AJ, Sakarovitch C, Song F, et al. Evaluating non-randomised intervention studies. Health Technol Assess 2003;7(27).
Nicholl J. Case-mix adjustment in non-randomised observational evaluations: the constant risk fallacy. J Epidemiol Community Health 2007;61:1010-3.
Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chron Dis 1987;40:373-83.
Locker TE, Mason SM. Are we hitting the target but missing the point? Analysis of the distribution of time patients spend in the emergency department. BMJ 2005;330:1188-9.
Dixon J, Sanderson C, Elliott P, Walls P, Jones J, Petticrew M. Assessment of the reproducibility of clinical coding in routinely collected hospital activity data: a study in two hospitals. J Public Health Med 1998;20:63-9.
Van Doorn C, Bogardus ST, Williams SC, Concato J, Towle VR, Inouye SK. Risk adjustment for older hospitalized persons: a comparison of two methods of data collection for the Charlson index. J Clin Epidemiol 2001;54:694-701.
Pitches D, Mohammed MA, Lilford R. What is the empirical evidence that hospitals with higher-risk adjusted mortality rates provide poorer quality of care? A systematic review of the literature. BMC Health Serv Res 2007;7:91.
Marshall M, Shekelle P, Leatherman S, Brook R. Public disclosure of performance data: learning from the US experience. Qual Health Care 2000;9:53-7.
Lilford R, Brown C, Nicholl J. Use of process measures to monitor quality of care. BMJ 2007;335:648-50.
Articles from BMJ Open Access are provided here courtesy of BMJ Group
The Sabermetric Society at Virginia Tech
Members Only Page
January 27, 2011 – 5:30 pm
I’ve noticed a lot of people are viewing the members only page. If you would like to actually see what’s on there, send me (bstucki1@vt.edu) an e-mail. If you have a vt.edu address, I will send you
the password.
Collegiate Pythagorean Expectation
January 27, 2011 – 2:15 am
Bill James invented the Pythagorean expectation formula as a means to estimate the number of games a baseball team “should” have won in a given season or stretch of games as a function of their runs
scored and runs allowed in that span. While the formula only offers a small degree of predictive value, it can be used to calculate by how much a team is “over or under producing” in wins given its runs scored and runs allowed.
The original formula produced by Bill James was derived empirically and based on Major League Baseball data. The formula James produced was as follows:
Win ratio=(Runs Scored)^2/((Runs Scored)^2+(Runs Allowed)^2)
The calculation for expected wins is then simply:
Expected Wins=Win Ratio x Games Played
Following James’s initial presentation of his formula to the sabermetrics community, fellow sabermetricians questioned the use of 2 as the exponent. Two formulas produced to determine
the ‘ideal’ exponent were the Pythagenport formula and Pythagenpat formula. The latter is both simpler and believed to be more accurate. The Pythagenpat formula, originally created by David Smyth,
is as follows:
Exponent=(Runs per Game)^.287
Using this formula and the 2010 runs per game data for each of the 292 Division 1 Men’s NCAA Baseball teams, the ideal exponent for college baseball was found. This exponent was determined to be
approximately 2.131. This produced the expectancy formula:
Win ratio=(Runs Scored)^2.131/((Runs Scored)^2.131+(Runs Allowed)^2.131)
Using this formula it was calculated that over the past five seasons, the Virginia Tech Baseball team underperformed by approximately 1.60 wins in the 138 games played in those five seasons. This
formula can also be used in conjunction with a run expectancy table and play-by-play data in order to hypothesize regarding how many wins a team “should” have gotten had it employed a different
strategy, i.e., made different bunt decisions, steal decisions, etc.
It is also worth noting that the 2.131 exponent is higher than more commonly used exponents. A widely accepted value for the MLB is 1.82. The higher exponent reflects the fact that many more runs
are scored in the college game.
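The calculations above fit in a few lines of Python. The function names here are my own, and I follow the common convention that "runs per game" in the Pythagenpat formula counts both teams' runs; the wording above leaves this ambiguous, so treat this as an illustrative sketch rather than a reproduction of the exact study method:

```python
def pythagorean_win_ratio(runs_scored, runs_allowed, exponent=2.0):
    """Bill James's Pythagorean expectation, with a configurable exponent."""
    rs, ra = runs_scored ** exponent, runs_allowed ** exponent
    return rs / (rs + ra)

def pythagenpat_exponent(runs_scored, runs_allowed, games):
    """Pythagenpat: exponent = (runs per game) ** 0.287.
    Assumed here: 'runs per game' counts both teams' runs combined."""
    return ((runs_scored + runs_allowed) / games) ** 0.287

# A hypothetical team that scores 800 and allows 700 runs over 162 games:
exponent = pythagenpat_exponent(800, 700, 162)
ratio = pythagorean_win_ratio(800, 700, exponent)
print(round(exponent, 3), round(ratio, 3), round(ratio * 162, 1))
```

Multiplying the win ratio by games played gives expected wins, and the gap between expected and actual wins is the "over or under producing" figure discussed above.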
Beyond Batting Average — Lee Panas [Review]
December 27, 2010 – 4:00 am
Lee Panas generously donated a copy of his book Beyond Batting Average to the Sabermetric Society at Virginia Tech. What follows is a review by me, Bryce.
Brandeis University researcher Lee Panas has written the primer on advanced baseball statistics that I so badly needed when I began studying the sport seriously this summer. As many readers will
know, websites such as FanGraphs and Baseball Prospectus do have glossaries but they often lack adequate explanations or are incomplete. There are extraordinarily useful websites such as Sabermetrics
Library but these are difficult for the Sabermetrics Newbie to find. Even popular books such as Baseball Between the Numbers and The New Historical Baseball Abstract either lack lucid explanations of
sabermetric ratings or do not treat them deeply enough or are not organized in a useful manner. Others, such as The Book, are too specialized for the casual fan.
Panas’s book, on the other hand, does an excellent job treading the territory between superficial and deep treatment of the major sabermetric topics of the day, and it arranges those treatments in
quite a clever way. Beginning with a brief history of the development of sabermetrics, starting with Alexander Cartwright and Henry Chadwick and ending with Bill James and Moneyball, Panas then
reminds us that the goal of baseball is to score more runs than the other team, something that is easily forgotten by many amateur analysts. After giving us the Big Picture, Panas slowly moves from
basic hitting to advanced hitting statistics, repeating the process for pitching, then for defense, wrapping things up with contextual considerations and total player contribution metrics (so you can
finally understand what that WAR row on ESPN means). In clearly expressed language, Panas shows us the many ways in which baseball players contribute to helping their teams win. The result is
an accessible and insightful read.
There are, however, some shortcomings. I would have preferred a “mathematical” appendix including explicit formulas for the metrics mentioned and some relevant explanation. I would have also liked a
comprehensive glossary with definitions for easy reference. The book also has the appearance of a children’s coloring book and the title sounds a bit too much like Baseball Between the Numbers for my
taste. A small paperback with slick cover design and a title like Sabermetrics for the Practical Man would make this book instantly more attractive, as would removing Curtis Granderson from the cover
and replacing him with someone a bit more relevant—Kevin Youkilis, perhaps?
That said, what is in the book deserves to be there. What Panas does best is explaining the meaning of the various statistics, showing how they relate to one another, and defending the older, more
mainstream statistics such as ERA and batting average on the grounds that they do tell us something. Over and over again he emphasizes that the reader understand what a statistic means and that,
nearly always, there is not one single stat that tells the whole story, not even WAR; evaluating players is a complicated and evolving task. The completeness of the book, however, is its primary
strength: if you patiently read this Beyond Batting Average—which, at 142 pages filled with graphs and tables, should not take long—you will be up to speed on the modern analysis of the game. As
Panas writes in the introduction, “My goal is to explain the new world of baseball statistics in a way that any knowledgeable and curious baseball fan will comprehend.” Beyond Batting Average
accomplishes that goal.
Final Meeting Tonight!
December 2, 2010 – 5:00 pm
In Squires 217 at 6 p.m. We will be starting some serious work for the Tech baseball team. Please stop by if you want to help.
Article on Adam Dunn Featured on Baseball Think Factory
November 22, 2010 – 11:50 pm
I wrote an article on Adam Dunn for The Nats Blog today that was featured on Baseball Think Factory, one of the best sabermetrics sites on the web. (Someone even called me a “jackass” in the comments.)
The article discusses Adam Dunn’s new swing-happy approach to the plate in ’10 as well as lots of other boring things like batting average, walk rates, strikeout rates, regression, and sample sizes.
Anyway, the point of this post is to encourage you to get writing. If it’s quality stuff, it will get noticed.
(Also, my post was listed just above an article by Dave Cameron on the Baseball Think Factory list–Dave Cameron being the guy from FanGraphs who came to speak two weeks ago. Take that, Dave Cameron.)
FanGraphs Talk: Write-Up and Video
November 14, 2010 – 10:37 pm
This week, David Appelman, founder of FanGraphs, and Dave Cameron, full-time writer for FanGraphs, visited Virginia Tech for a panel discussion on careers in baseball. The guys themselves were
awesome—very friendly and laid back. They also were quite encouraging as far as the various projects the Sabermetrics Society at Virginia Tech was working on and they did a great job fielding the
questions I offered in my impromptu role as panel facilitator for our careers in baseball event.
Despite my glowing introduction, however, the event took on somewhat of a pessimistic tone. Again and again it was emphasized that a job in an MLB front office is not exactly ideal. It requires a
huge amount of work and commands a low level of pay. More attractive are jobs writing about baseball, but here the guys were also pessimistic. Essentially, they argued, there are not that many jobs
available for writing about the game, so getting paid to do so is difficult. Even on the issue of press passes, which teams are making more available to bloggers than ever before, the guys were
downers, stating that sitting in the press box isn’t as fun as you might think.
But, even with all the negativity, the talk was ultimately very positive. It seems that if you possess skills such as computer programming, good statistical chops, a penchant for hard work, strong
writing abilities, and, perhaps most important, creativity in terms of getting a team interested, landing an MLB job is very possible, even in the face of tough competition. And there are enough
success stories, of guys landing jobs or getting rapidly promoted, to balance out the bad ones. There are also many jobs in baseball outside of the MLB front office: in marketing—ticket sales seem
particularly important these days— as a lawyer, or at a variety of positions throughout the minor leagues. And once you are within the MLB system, getting a top job becomes a lot easier.
As for writing about baseball, there are actually a surprising number of jobs for baseball writers out there; FanGraphs employs many part-time writers, SB Nation is a great starting place for
potential writers, ESPN and others are constantly looking for help, and there are certainly a number of traditional writing positions out there at newspapers and other media outlets. Perhaps more
relevant, however, is the changing face of baseball writing. It is no secret that newspapers are rapidly losing their importance in the world of media. It would not be surprising to see bloggers take
a bigger and more legitimate role in covering baseball in the very near future.
Indeed, the dynamism of the baseball writing industry seems to be part of a larger trend in the baseball world. Though at the moment working for an MLB team may not seem particularly attractive, it
is likely with the increasing importance of statistics and the proliferation of computer skills amongst young people, that teams may not be able to afford paying their employees so little for much
longer; the most talented workers can simply begin contracting their skills out like people such as Tom Tango already do. It will be interesting to see how dramatic the changes in the near future
will be—if they’re anything like the extreme changes in the seven years since Moneyball came out, then things should be looking up.
Below is a link to a video of the talk:
David Appelman and Dave Cameron at Virginia Tech from Bryce Wilson Stucki on Vimeo.
Interested in Writing About the Nationals?
November 13, 2010 – 9:25 pm
Are you interested in writing about the Nationals for a well-known Nationals blog? Can you produce regular content with a statistical bent? Are you intrigued by the idea of getting press passes for
Nationals games next year?
If you are interested in writing for The Nats Blog, send Will an e-mail at will@thenatsblog.com.
Mathematica Sabermetrics Notebook
November 12, 2010 – 6:19 am
I’ve added a link to a Mathematica notebook under the “Member Work” and “Educational” sections. The notebook contains various sabermetric formulas, a dynamic interface for computing the Bill James
Pythagorean Theorem, and short explanations.
I want the notebook to contain as much sabermetrics knowledge as possible, so I encourage you to download it, add to it, and make it available for free to all who are interested in it.
Mathematica is great software, is fairly easy to learn, and is available for free to all Virginia Tech students (which saves you several hundred dollars). You can download Mathematica and activate it
by visiting this page and clicking on the “Network Software” link under the “Ordering Information” header.
FanGraphs is Hiring
November 11, 2010 – 6:29 pm
FanGraphs is looking for part-time, paid writers.
David Appelman and Dave Cameron (two full-time employees of FanGraphs) were here on Tuesday and gave a great talk about careers in baseball. It seems like working for FanGraphs would be a great gig.
David Appelman and Dave Cameron to Speak at Virginia Tech
October 27, 2010 – 1:50 am
On November 9th, David Appelman and Dave Cameron–both of FanGraphs.com–will be coming to Virginia Tech. The event will be held in Squires 345 from 7:00-8:30 p.m. The event will feature brief talks on
careers in/making money from baseball. After the talks there will be a panel discussion of baseball-related questions. The tone will be informal, so please come relaxed and ready to enjoy yourself!
There will also be a trip downtown after the event.
See you then! (For questions contact Bryce at bstucki1@vt.edu.)
the additive inverse and identity element
February 10th 2009, 05:31 PM #1
the additive inverse and identity element
just want to make sure that my thinking is on the right track here. we are working with abstract algebra, more specifically we are working with groups. so the + symbol should have a circle around it.
Given (a,b) ∈ Z x Z* [a,b are elements of the integers cross the integers with no zeros]
show that [(-a,b)] is an additive inverse of [(a,b)], that is [(a,b)] + [(-a,b)] = [(0,1)]
What I think I have to do:
show that [(a,b)] + [(-a,b)] = [(-a,b)] + [(a,b)] (or something equivalent)
which will give the identity element [(0,1)]
so using the commutative law:
[(ab + b(-a), bb)] = [(0,1)]
again note that the + symbol should have a circle around it. Any advice would be good. If you need clarification on anything, let me know. Thank you.
You will need to learn to express yourself more clearly.
What are distinct elements of the set, and how has the group operation been defined?
In all probability you are not working with $\mathbb{Z}\times \mathbb{Z}^*$ but with equivalence classes of this set defined by the equivalence relation:
$(a,b) \equiv (c,d)$
if and only if $(a,b) \in \mathbb{Z}\times \mathbb{Z}^* , (c,d) \in \mathbb{Z}\times \mathbb{Z}^*$ and $ad=cb$.
(That is, you are working with a model of the rational numbers and addition on the rationals)
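The verification the original poster is after can also be checked mechanically. The sketch below (function names are mine) encodes the addition [(a,b)] + [(c,d)] = [(ad + cb, bd)] and the equivalence relation from the reply, and confirms that [(a,b)] + [(-a,b)] lands in the class of [(0,1)]:

```python
# Pairs (a, b) with b != 0 stand for equivalence classes [(a, b)],
# where (a, b) ~ (c, d) iff a*d == c*b (the relation defined above).

def add(p, q):
    """[(a,b)] + [(c,d)] = [(a*d + c*b, b*d)], stated on representatives."""
    (a, b), (c, d) = p, q
    return (a * d + c * b, b * d)

def equivalent(p, q):
    (a, b), (c, d) = p, q
    return a * d == c * b

# [(a,b)] + [(-a,b)] = [(ab + b(-a), bb)] = [(0, b^2)] ~ [(0, 1)]
for a in range(-5, 6):
    for b in [n for n in range(-5, 6) if n != 0]:
        assert equivalent(add((a, b), (-a, b)), (0, 1))
print("[(a,b)] + [(-a,b)] ~ [(0,1)] for all tested representatives")
```

Note that the sum is (0, b^2), not literally (0, 1); it is the equivalence relation that identifies the two, which is exactly why the class notation matters.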
February 10th 2009, 11:13 PM #2
Little Neck Trigonometry Tutor
Find a Little Neck Trigonometry Tutor
...As an experienced teacher of high school and college level physics courses, I know what your teachers are looking for and I bring all the tools you'll need to succeed! Of course, a big part of
physics is math, and I am experienced and well qualified to tutor math from elementary school up through...
18 Subjects: including trigonometry, reading, physics, calculus
...Chemistry is built upon overlapping concepts starting with the structure of the atom, to the Periodic Table, to types of bonds, Nuclear and Organic Chemistry. I take the basic concepts, make
sure that the student understands them and then I build upon these concepts. I use diagrams, charts,and ...
45 Subjects: including trigonometry, English, chemistry, GED
...I have post-graduate training in Wilson Reading Program, Wilson Foundations, Animated Literacy and Sounds in Motion. Using a combination of these programs plus strategies gleaned from my long
professional career, I work with students who struggle with decoding, phonemic awareness, reading compre...
39 Subjects: including trigonometry, English, reading, ESL/ESOL
I am a Stanford University graduate with a BS in Physics, a BA in Philosophy and 5 years cumulative tutoring experience. In college, I worked with high school students and college freshmen, in
Physics, Inorganic Chemistry, Calculus, Algebra I and II and Geometry. In the years since graduating from ...
23 Subjects: including trigonometry, chemistry, reading, physics
...My tutoring philosophy is based on the idea above: my job as a tutor is to help you understand how math works, making you able to do any problem yourself! (Of course this all applies to physics
as well as to math!) I believe that you can always do better in math or physics, no matter what level...
12 Subjects: including trigonometry, physics, MCAT, calculus
Graduate Course in Mathematics and Computer Science
Day and time: Wednesdays, 4:15-6:15 p.m.
Place: CUNY Graduate Center, 365 Fifth Avenue
Offered by János Pach
Spring Term, 2001
The exciting fact that randomness (i.e., coin flipping) can be used profitably to construct various mathematical structures with unexpected and often paradoxical properties, and to efficiently solve
many (otherwise hopelessly difficult) computational tasks, attracted a lot of interest during the past 25 years. In this course, we give a systematic introduction to the most important applications
of this idea. No special knowledge of combinatorics is required. However, we assume some familiarity with the basic notions of probability theory (expectation and variance of random variables,
binomial distribution).
• Linearity of expectation
• Applications in combinatorics and number theory
• Randomized algorithms (sorting, convex hull, linear programming)
• The second moment method
• Random graphs
• Circuit complexity
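As a taste of the first topic (linearity of expectation): colour each edge of the complete graph K_n red or blue with a fair coin. Each of the C(n,3) triangles is monochromatic with probability 2(1/2)^3 = 1/4, so the expected number of monochromatic triangles is C(n,3)/4. The Monte Carlo sketch below is our own illustration (not course material) and checks this numerically:

```python
import random
from itertools import combinations
from math import comb

def monochromatic_triangles(n, coloring):
    """Count triangles of K_n whose three edges all received the same colour."""
    count = 0
    for triple in combinations(range(n), 3):
        edge_colors = {coloring[e] for e in combinations(triple, 2)}
        if len(edge_colors) == 1:
            count += 1
    return count

def average_over_random_colorings(n, trials=200, seed=0):
    """Average the count over independent fair 2-colorings of the edges."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        coloring = {e: rng.randint(0, 1) for e in combinations(range(n), 2)}
        total += monochromatic_triangles(n, coloring)
    return total / trials

n = 10
print(average_over_random_colorings(n), comb(n, 3) / 4)  # the two numbers should be close
```

The same expectation computation, read in reverse, is the probabilistic method: since the average is C(n,3)/4, some colouring achieves at most that many monochromatic triangles.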
Textbook: N. Alon and J. Spencer: The Probabilistic Method (2nd edition), J. Wiley and Sons, New York, 2000.
Bonus lecture: The course will include a guest appearance by at least one of the authors!
Law of Sines
The Law of Sines is the relationship between the sides and angles of non-right (oblique) triangles. Simply, it states that the ratio of the length of a side of a triangle to the sine of the angle
opposite that side is the same for all sides and angles in a given triangle.
If ∆ABC is an oblique triangle with sides a, b and c, then
a / sin A = b / sin B = c / sin C
To use the Law of Sines you need to know either two angles and one side of the triangle (AAS or ASA) or two sides and an angle opposite one of them (SSA). Notice that for the first two cases we use
the same parts that we used to prove congruence of triangles in geometry but in the last case we could not prove congruent triangles given these parts. This is because the remaining pieces could have
been different sizes. This is called the ambiguous case and we will discuss it a little later.
Example 1: Given two angles and a non-included side (AAS).
Given ∆ABC with A = 30°, B = 20° and a = 45 m. Find the remaining angle and sides.
The third angle of the triangle is
C = 180° – A – B = 180° – 30° – 20 ° = 130°
By the Law of Sines,
a / sin A = b / sin B = c / sin C
By the Properties of Proportions,
b = a sin B / sin A = 45 sin 20° / sin 30° ≈ 30.8 m
c = a sin C / sin A = 45 sin 130° / sin 30° ≈ 68.9 m
Example 2: Given two angles and an included side (ASA).
Given A = 42°, B = 75° and c = 22 cm. Find the remaining angle and sides.
The third angle of the triangle is:
C = 180° – A – B = 180° – 42° – 75° = 63°
By the Law of Sines,
a / sin A = c / sin C
By the Properties of Proportions,
a = c sin A / sin C = 22 sin 42° / sin 63° ≈ 16.5 cm
b = c sin B / sin C = 22 sin 75° / sin 63° ≈ 23.8 cm
The Ambiguous Case
If two sides and an angle opposite one of them is given, three possibilities can occur.
(1) No such triangle exists.
(2) Two different triangles exist.
(3) Exactly one triangle exists.
Consider a triangle in which you are given a, b and A. (The altitude from vertex C to side c has length h = b sin A.)
(1) No such triangle exists if A is acute and a < h or A is obtuse and a ≤ b.
(2) Two different triangles exist if A is acute and h < a < b.
(3) In every other case, exactly one triangle exists.
Example 1: No Solution Exists
Given a = 15, b = 25 and A = 80°. Find the other angles and side.
h = b sin A = 25 sin 80° ≈ 24.6
Notice that a < h. So it appears that there is no solution. Verify this using the Law of Sines:
sin B = b sin A / a = 25 sin 80° / 15 ≈ 1.64
This contradicts the fact that –1 ≤ sin B ≤ 1. Therefore, no triangle exists.
Example 2: Two Solutions Exist
Given a = 6. b = 7 and A = 30°. Find the other angles and side.
h = b sin A = 7 sin 30° = 3.5
h < a < b therefore, there are two triangles possible.
By the Law of Sines,
sin B = b sin A / a = 7 sin 30° / 6 ≈ 0.5833
There are two angles between 0° and 180° whose sine is approximately 0.5833, 35.69° and 144.31°.
If B ≈ 35.69°, then C ≈ 180° – 30° – 35.69° ≈ 114.31° and c = a sin C / sin A = 6 sin 114.31° / sin 30° ≈ 10.9.
If B ≈ 144.31°, then C ≈ 180° – 30° – 144.31° ≈ 5.69° and c = 6 sin 5.69° / sin 30° ≈ 1.2.
Example 3: One Solution Exists
Given a = 22, b =12 and A = 40°. Find the other angles and side.
a > b
By the Law of Sines,
sin B = b sin A / a = 12 sin 40° / 22 ≈ 0.3506, so B ≈ 20.52°
B is acute. (The other candidate, B ≈ 159.48°, would make A + B exceed 180°.)
C ≈ 180° – 40° – 20.52° ≈ 119.48°
By the Law of Sines,
c = a sin C / sin A = 22 sin 119.48° / sin 40° ≈ 29.8
If we are given two sides and an included angle of a triangle or if we are given 3 sides of a triangle, we cannot use the Law of Sines because we cannot set up any proportions where enough
information is known. In these two cases we must use the Law of Cosines.
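The case analysis above translates directly into a small solver for the SSA case. This is my own sketch, not part of the original lesson; it returns zero, one, or two solutions, matching the three possibilities:

```python
from math import sin, asin, radians, degrees

def solve_ssa(a, b, A_deg):
    """Solve triangle ABC given sides a, b and non-included angle A (degrees).
    Returns a list of (B, C, c) triples: empty (no triangle), one entry,
    or two entries (the ambiguous case)."""
    sin_B = b * sin(radians(A_deg)) / a      # Law of Sines
    if sin_B > 1:
        return []                            # no such triangle exists
    solutions = []
    # Two candidate angles share the same sine: asin and its supplement.
    for B in {degrees(asin(sin_B)), 180 - degrees(asin(sin_B))}:
        C = 180 - A_deg - B
        if C > 0:                            # candidate survives only if C is positive
            c = a * sin(radians(C)) / sin(radians(A_deg))
            solutions.append((round(B, 2), round(C, 2), round(c, 2)))
    return solutions

print(solve_ssa(15, 25, 80))   # Example 1: no solution
print(solve_ssa(6, 7, 30))     # Example 2: two solutions
print(solve_ssa(22, 12, 40))   # Example 3: one solution
```

When sin B is exactly 1 the two candidates coincide at 90°, and the set collapses them to a single right-triangle solution, which is the desired behaviour.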
[FOM] 443: Kernels and Large Cardinals 1
Harvey Friedman friedman at math.ohio-state.edu
Wed Oct 20 22:29:16 EDT 2010
Assuming this work remains stable, I am expecting to put up sketches
early in 2011. I have other urgent matters presently.
Harvey M. Friedman
October 20, 2010
0. Preface.
1. Digraph, kernel, downward.
2. A-set, A-digraph, diagonal image, upper shift.
3. The infinite upper shift kernel theorem.
4. (A,R)-set, (A,R)-set*, (A,R)-digraph, local (A,R)-function.
5. The infinite system kernel theorem.
6. Norm, n-kernel, n-diagonal image.
7. The finite upper shift kernel theorem.
8. n-local (A,R)-function.
9. The finite system kernel theorem.
10. The future.
0. PREFACE.
In http://www.cs.nyu.edu/pipermail/fom/2010-October/015089.html, we
presented the Infinite Upper Shift Kernel Theorem, and pointed to a
sketch of its proof from large cardinals (the SRP hierarchy) in http://www.math.ohio-state.edu/%7Efriedman/manuscripts.html
, section 1 #70.
The Infinite Upper Shift Kernel Theorem states the existence of a
subset of Q such that certain naturally associated digraphs have a
kernel containing its image under a specific function (the upper shift).
We now present the Infinite System Kernel Theorem. Here we state the
existence of a system (A,R), A contained in Q, R contained in A^2,
such that certain naturally associated digraphs have a kernel
containing its image under a nontrivial function whose truncations are
nice relative to (A,R).
The resulting sentence is equivalent to Con(HUGE).
In http://www.cs.nyu.edu/pipermail/fom/2010-October/015089.html, we
also presented the Finite Upper Shift Kernel Theorem, which can be
viewed as a straightforward version of the Infinite Upper Shift Kernel
Theorem, putting in upper bounds on the norms of the rationals
involved. It is explicitly Pi01, and also equivalent to Con(SRP).
We present a somewhat simplified version of the Finite Upper Shift
Kernel Theorem. We also present the Finite System Kernel Theorem, as a
straightforward version of the Infinite System Kernel Theorem. It is
explicitly Pi01, and also equivalent to Con(HUGE).
We also sharpen the System Kernel Theorems to reach past I3 = rank
into itself.
Note that all finite forms are explicitly Pi01. The estimates used
appear safe, and will probably be sharpened when the results fully stabilize.
We can also use the lexicographic ordering for downward. This will
push the strength up a little bit, and also seems to somewhat simplify
the reversals.
1. DIGRAPH, KERNEL, DOWNWARD.
A directed graph, or digraph, is a pair (V,E), where V is a nonempty
set of vertices, and E contained in V^2 is a set of edges. We say that
x connects to y if and only if (x,y) in E.
A kernel in (V,E) is a set S contained in V such that
i. No element of S connects to any element of S.
ii. Every element of V\S connects to some element of S.
This is the standard definition of kernel in digraphs, and the study of
kernels in digraphs is a very active subject in pure/applied
combinatorics. There is the dual notion of dominator in digraphs, and
some of the work falls under that name.
The study of kernels in digraphs was inspired by the following theorem
of John von Neumann.
THEOREM 1.1. Von Neumann 1944 (book with Morgenstern). Every finite
dag has a unique kernel.
Here a dag is a directed acyclic graph; i.e., a digraph with no cycles.
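Theorem 1.1 has a short constructive proof that doubles as an algorithm: process the vertices of a finite dag in reverse topological order, putting a vertex into the kernel exactly when none of its successors is already there. Each step is forced, which is what gives uniqueness. A sketch (mine, using Python's standard graphlib):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def dag_kernel(vertices, edges):
    """Unique kernel of a finite dag, built sinks-first."""
    succ = {v: set() for v in vertices}
    for x, y in edges:
        succ[x].add(y)
    # TopologicalSorter expects predecessor sets; feeding it successor
    # sets makes static_order() list sinks first, the order we need.
    kernel = set()
    for v in TopologicalSorter(succ).static_order():
        if not (succ[v] & kernel):   # no successor already in the kernel
            kernel.add(v)
    return kernel

V = {1, 2, 3, 4}
E = {(1, 2), (2, 3), (1, 4), (4, 3)}
print(sorted(dag_kernel(V, E)))  # -> [1, 3]
```

Here {1, 3} is independent (no edge joins 1 and 3) and absorbing (2 and 4 both connect to 3), and the forced construction shows no other kernel exists.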
There is a well known generalization.
THEOREM 1.2. Every digraph with no infinite walk has a unique kernel.
Theorem 1.2 is subject to Reverse Mathematics. For countable digraphs
the existence of a kernel and the existence of a unique kernel are
both equivalent to ATR_0 over RCA_0.
It appears that Digraph Kernels are providing a crucial link between
strong uses of infinity and an active research area of mathematics
relating to combinatorics, computer science, and optimization.
However, I do not want to overstate the case here. The bulk of
interest is in kernels in finite digraphs, and the Infinite Kernel
Theorems presented here involve infinite digraphs. The Finite Kernel
Theorems, however, involve only finite digraphs. We are hopeful that
through the Finite Kernel Theorems, we shall be able to strengthen the
connection with ongoing research in Digraph Kernels.
The great power of Digraph Kernels is that they allow us to
appropriately encapsulate transfinite recursion, transfinite induction,
and bounded separation, with one simple concept. Large cardinals are
"generated" by imposing richness properties on the kernels.
We let Q be the set of all rational numbers with the usual ordering.
We will focus on digraphs (A^k,E) where A is contained in Q. We say
that (A^k,E) is downward if and only if x E y implies max(x) > max(y).
Note that if A is well ordered, then (V,E) has no infinite walk, and
hence has a unique kernel.
2. A-SET, A-DIGRAPH, DIAGONAL IMAGE, UPPER SHIFT.
We fix A contained in Q. The A-sets in A^k form a Boolean algebra of
subsets of A^k defined using variables x_1,...,x_k ranging over A. It
is the Boolean algebra generated by the conditions x_i < x_j, 1 <= i,j
<= k.
The A-digraphs on A^k are the digraphs (A^k,E), where E is an A-set in A^2k.
The A-sets and the A-digraphs are the A-sets in A^k and the A-digraphs
on A^k, for k >= 1.
Let S contained in Q^k and f:Q into Q be partial. The diagonal image
of S by f is the set
{(f(x_1),...,f(x_k)): (x_1,...,x_k) in S}.
The upper shift is the function ush:Q into Q given by
ush(q) = q+1 if q >= 0; q otherwise.
3. THE INFINITE UPPER SHIFT KERNEL THEOREM.
PROPOSITION 3.1. Infinite Upper Shift Kernel Theorem. There exists 0
in A contained in Q such that every downward A-digraph has a kernel
containing its diagonal image by the upper shift.
THEOREM 3.2. Proposition 3.1 is provably equivalent to Con(SRP) over
ACA_0. ACA_0 + Con(SRP) proves that A and the kernels can be taken to
be recursive in the Turing jump of 0.
Here SRP = ZFC + {there exists lambda with the k-SRP}_k. Lambda has
the k-SRP if and only if lambda is a cardinal such that every f:
[lambda]^k into 2 is constant on some [E]^k, E a stationary subset of
lambda. Here [E]^k is the set of all unordered k-tuples from E.
4. (A,R)-SET, (A,R)*-SET, (A,R)-DIGRAPH, LOCAL (A,R)-FUNCTION.
A Q-system is a pair (A,R), where A is a subset of Q and R is a subset
of A^2.
The (A,R)-sets in A^k form a Boolean algebra of subsets of A^k defined
using variables x_1,...,x_k ranging over A. It is the Boolean algebra
generated by the conditions z < w, R(z,w), where z,w are among the
variables x_1,...,x_k.
If we also allow z,w to be specific elements of A, then we obtain the
(in general) larger Boolean algebra of (A,R)*-sets. We can think of
(A,R)* as the structure (A,R) expanded with constants for each element
of A.
The (A,R)-digraphs on A^k are the digraphs (A^k,E), where E is an
(A,R)-set in A^2k.
A downward (A,R)-digraph is an (A,R)-digraph where x E y implies
max(x) > max(y).
The (A,R)-sets, (A,R)*-sets, (A,R)-digraphs are the (A,R)-sets in A^k,
(A,R)*-sets in A^k, (A,R)-digraphs on A^k, respectively, for some k >= 1.
The local (A,R)-functions are the functions f:A into A such that the
graph of each truncation f|<x, x in A, is an (A,R)*-set.
A function is said to be nontrivial if and only if it is not an
identity function.
5. THE INFINITE SYSTEM KERNEL THEOREM.
PROPOSITION 5.1. Infinite System Kernel Theorem. There exists a Q-
system (A,R) such that every downward (A,R)-digraph has a kernel
containing its diagonal image by a nontrivial local (A,R)-function.
THEOREM 5.2. Proposition 5.1 is provably equivalent to Con(HUGE) over
ACA_0. ACA_0 + Con(HUGE) proves that A,R, the kernels, and the local
functions can be taken to be recursive in the Turing jump of 0.
Here HUGE = ZFC + {there exists a k-huge cardinal}_k.
We can go further.
PROPOSITION 5.3. Extended Infinite System Kernel Theorem. There exists
a Q-system (A,R) such that every downward (A,R)-digraph has a kernel
containing its diagonal image under a local (A,R)-function that is
nontrivial below some fixed point.
THEOREM 5.4. Proposition 5.3 implies Con(I3) and is implied by
Con(I2), over ACA_0. ACA_0 + Con(I2) proves that A,R, the kernels, and
the local functions can be taken to be recursive in the Turing jump of 0.
6. NORM, n-KERNEL, n-DIAGONAL IMAGE.
The norm of x in Q^k is the sum of the magnitudes of all of the
numerators and denominators of the reduced forms of its coordinates.
For example, the norm of (-2,2/6,0) is 3 + 4 + 1 = 8.
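The norm computation is easy to mechanize; a minimal Python sketch (assuming exact rational inputs):

```python
from fractions import Fraction

# Norm of a tuple from Q^k: sum, over the coordinates in reduced form,
# of |numerator| + |denominator|. Fraction() reduces automatically.
def norm(xs):
    return sum(abs(f.numerator) + abs(f.denominator)
               for f in (Fraction(x) for x in xs))

# The example from the text: norm of (-2, 2/6, 0) is 3 + 4 + 1 = 8
print(norm([Fraction(-2), Fraction(2, 6), Fraction(0)]))  # -> 8
```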
The norm of a subset of Q^k is the maximum norm of its elements. The
empty set and infinite subsets of Q^k do not have norms.
Let (A^k,E) be a digraph, where A is contained in Q. We say that S is
an n-kernel if and only if
i. Every element of S has norm at most 8n^2.
ii. No element of S is connected to any element of S.
iii. Every x in A^k\S of norm p <= n is connected to an element of S of
norm at most 8p^2.
Let S contained in Q^k and f:Q into Q be partial. The n-diagonal image
of S by f is the set of all elements of the diagonal image of S by f
of norm at most n.
7. THE FINITE UPPER SHIFT KERNEL THEOREM.
PROPOSITION 7.1. Finite Upper Shift Kernel Theorem. There exists 0 in
A contained in Q of norm at most 8n^2 such that every downward A-
digraph has an n-kernel containing its n-diagonal image under the
upper shift.
THEOREM 7.2. Proposition 7.1 is provably equivalent to Con(SRP) over EFA.
Note that the Finite Upper Shift Kernel Theorem is explicitly Pi01.
8. n-LOCAL (A,R) FUNCTION.
Let (A,R) be a Q-system. The norm of an (A,R)*-set is the least n such
that the set can be expressed using parameters of norm at most n.
An n-local function is a function f from elements of A of norm at most
n into elements of A of norm at most 8n^2, where for all x in A of
norm p <= n, f|<x is an (A,R)*-set of norm at most 8p^2.
9. THE FINITE SYSTEM KERNEL THEOREM.
PROPOSITION 9.1. Finite System Kernel Theorem. There exists a Q-system
(A,R), A of norm at most 8n^2, such that every downward (A,R)-digraph
has a kernel containing its n-diagonal image under an n-local (A,R)-
function that maps 0 to 1.
PROPOSITION 9.2. Extended Finite System Kernel Theorem. There exists a
Q-system (A,R), A of norm at most 8n^2, such that every downward (A,R)-
digraph has a kernel containing its n-diagonal image under an n-local
(A,R)-function that maps 0 to 1 and 2 to 2.
THEOREM 9.3. Proposition 9.1 is provably equivalent to Con(HUGE) over EFA.
THEOREM 9.4. Proposition 9.2 implies Con(I3) and is implied by
Con(I2), over EFA.
Note that the Finite System Kernel Theorem and the Extended Finite
System Kernel Theorem are explicitly Pi01.
10. THE FUTURE.
I am expecting an interesting general theory of NORMIFICATION, whereby
certain infinite statements are uniformly normed, resulting in Pi01
statements. The idea is to justify that the Pi01 statements are just
as mathematical as the original infinite statements from which they
came. The present normifications - perhaps modified - should be
special cases of general normification.
I should be able to go further into the large cardinal axioms that
violate the axiom of choice. However, a new idea is needed since the
obvious way to do this will get the axiom of choice, and hence an
inconsistency.
I am expecting that the above approach based on kernels of digraphs
should be crafted into a tool that allows for a uniform correspondence
with the entire large cardinal hierarchy, from PA through j:V into V.
As a first step towards this, we already know how to make the SRP
statement relate to the HUGE statement:
There exists a Q-system (A,R) such that every downward (A,R)-digraph
has a kernel containing its diagonal image under a function whose
fixed points have a strict sup in A.
If we are just concerned with SRP, then we prefer the Infinite Upper
Shift Kernel Theorem.
We expect to be able to Template the Infinite Upper Shift Kernel
Theorem using partial rational piecewise linear functions from Q into
Q, as discussed in http://www.cs.nyu.edu/pipermail/fom/2010-October/015089.html
. We expect to be able to template the Infinite System Kernel Theorem
and the Extended Infinite System Kernel Theorem also.
I use http://www.math.ohio-state.edu/~friedman/ for downloadable
manuscripts. This is the 443rd in a series of self-contained numbered
postings to FOM covering a wide range of topics in f.o.m. The list of
previous numbered postings #1-349 can be found at http://www.cs.nyu.edu/pipermail/fom/2009-August/014004.html
in the FOM archives.
350: one dimensional set series 7/23/09 12:11AM
351: Mapping Theorems/Mahlo/Subtle 8/6/09 10:59PM
352: Mapping Theorems/simpler 8/7/09 10:06PM
353: Function Generation 1 8/9/09 12:09PM
354: Mahlo Cardinals in HIGH SCHOOL 1 8/9/09 6:37PM
355: Mahlo Cardinals in HIGH SCHOOL 2 8/10/09 6:18PM
356: Simplified HIGH SCHOOL and Mapping Theorem 8/14/09 9:31AM
357: HIGH SCHOOL Games/Update 8/20/09 10:42AM
358: clearer statements of HIGH SCHOOL Games 8/23/09 2:42AM
359: finite two person HIGH SCHOOL games 8/24/09 1:28PM
360: Finite Linear/Limited Memory Games 8/31/09 5:43PM
361: Finite Promise Games 9/2/09 7:04AM
362: Simplest Order Invariant Game 9/7/09 11:08AM
363: Greedy Function Games/Largest Cardinals 1
364: Anticipation Function Games/Largest Cardinals/Simplified 9/7/09
365: Free Reductions and Large Cardinals 1 9/24/09 1:06PM
366: Free Reductions and Large Cardinals/polished 9/28/09 2:19PM
367: Upper Shift Fixed Points and Large Cardinals 10/4/09 2:44PM
368: Upper Shift Fixed Point and Large Cardinals/correction 10/6/09
369: Fixed Points and Large Cardinals/restatement 10/29/09 2:23PM
370: Upper Shift Fixed Points, Sequences, Games, and Large Cardinals
11/19/09 12:14PM
371: Vector Reduction and Large Cardinals 11/21/09 1:34AM
372: Maximal Lower Chains, Vector Reduction, and Large Cardinals
11/26/09 5:05AM
373: Upper Shifts, Greedy Chains, Vector Reduction, and Large
Cardinals 12/7/09 9:17AM
374: Upper Shift Greedy Chain Games 12/12/09 5:56AM
375: Upper Shift Clique Games and Large Cardinals 1
376: The Upper Shift Greedy Clique Theorem, and Large Cardinals
12/24/09 2:23PM
377: The Polynomial Shift Theorem 12/25/09 2:39PM
378: Upper Shift Clique Sequences and Large Cardinals 12/25/09 2:41PM
379: Greedy Sets and Huge Cardinals 1
380: More Polynomial Shift Theorems 12/28/09 7:06AM
381: Trigonometric Shift Theorem 12/29/09 11:25AM
382: Upper Shift Greedy Cliques and Large Cardinals 12/30/09 2:51AM
383: Upper Shift Greedy Clique Sequences and Large Cardinals 1
12/30/09 3:25PM
384: The Polynomial Shift Translation Theorem/CORRECTION 12/31/09
385: Shifts and Extreme Greedy Clique Sequences 1/1/10 7:35PM
386: Terrifically and Extremely Long Finite Sequences 1/1/10 7:35PM
387: Better Polynomial Shift Translation/typos 1/6/10 10:41PM
388: Goedel's Second Again/definitive? 1/7/10 11:06AM
389: Finite Games, Vector Reduction, and Large Cardinals 1 2/9/10
390: Finite Games, Vector Reduction, and Large Cardinals 2 2/14/10
391: Finite Games, Vector Reduction, and Large Cardinals 3 2/21/10
392: Finite Games, Vector Reduction, and Large Cardinals 4 2/22/10
393: Finite Games, Vector Reduction, and Large Cardinals 5 2/22/10
394: Free Reduction Theory 1 3/2/10 7:30PM
395: Free Reduction Theory 2 3/7/10 5:41PM
396: Free Reduction Theory 3 3/7/10 11:30PM
397: Free Reduction Theory 4 3/8/10 9:05AM
398: New Free Reduction Theory 1 3/10/10 5:26AM
399: New Free Reduction Theory 2 3/12/10 9:36AM
400: New Free Reduction Theory 3 3/14/10 11:55AM
401: New Free Reduction Theory 4 3/15/10 4:12PM
402: New Free Reduction Theory 5 3/19/10 12:59PM
403: Set Equation Tower Theory 1 3/22/10 2:45PM
404: Set Equation Tower Theory 2 3/24/10 11:18PM
405: Some Countable Model Theory 1 3/24/10 11:20PM
406: Set Equation Tower Theory 3 3/25/10 6:24PM
407: Kernel Tower Theory 1 3/31/10 12:02PM
408: Kernel Tower Theory 2 4/1/10 6:46PM
409: Kernel Tower Theory 3 4/5/10 4:04PM
410: Kernel Function Theory 1 4/8/10 7:39PM
411: Free Generation Theory 1 4/13/10 2:55PM
412: Local Basis Construction Theory 1 4/17/10 11:23PM
413: Local Basis Construction Theory 2 4/20/10 1:51PM
414: Integer Decomposition Theory 4/23/10 12:45PM
415: Integer Decomposition Theory 2 4/24/10 3:49PM
416: Integer Decomposition Theory 3 4/26/10 7:04PM
417: Integer Decomposition Theory 4 4/28/10 6:25PM
418: Integer Decomposition Theory 5 4/29/10 4:08PM
419: Integer Decomposition Theory 6 5/4/10 10:39PM
420: Reduction Function Theory 1 5/17/10 2:53AM
421: Reduction Function Theory 2 5/19/10 12:00PM
422: Well Behaved Reduction Functions 1 5/23/10 4:12PM
423: Well Behaved Reduction Functions 2 5/27/10 3:01PM
424: Well Behaved Reduction Functions 3 5/29/10 8:06PM
425: Well Behaved Reduction Functions 4 5/31/10 5:05PM
426: Well Behaved Reduction Functions 5 6/2/10 12:43PM
427: Finite Games and Incompleteness 1 6/10/10 4:08PM
428: Typo Correction in #427 6/11/10 12:11AM
429: Finite Games and Incompleteness 2 6/16/10 7:26PM
430: Finite Games and Incompleteness 3 6/18/10 6:14PM
431: Finite Incompleteness/Combinatorially Simplest 6/20/10 11:22PM
432: Finite Games and Incompleteness 4 6/26/10 8:39PM
433: Finite Games and Incompleteness 5 6/27/10 3:33PM
434: Digraph Kernel Structure Theory 1 7/4/10 3:17PM
435: Kernel Structure Theory 1 7/5/10 5:55PM
436: Kernel Structure Theory 2 7/9/10 5:21PM
437: Twin Prime Polynomial 7/15/10 2:01PM
438: Twin Prime Polynomial/error 9/17/10 1:22PM
439: Twin Prime Polynomial/corrected 9/19/10 2:16PM
440: Finite Phase Transitions 9/26/10 1:28PM
441: Equational Representations 9/27/10 4:59PM
442: Kernel Structure Theory Restated 10/11/10 9:01PM
Harvey Friedman
More information about the FOM mailing list
Pitching, Defense Just Slightly More Important to Team Wins Than Offense
BASEBALL FOLKLORE abounds with pronouncements as to what areas of the game are most important to winning. These are put forth by venerable veterans and the greenest of rookies, but more commonly
these "pearls" of diamond wisdom emanate from those sagacious ex-big leaguers, stars and scrubs alike. The pronouncements include: "Pitching is 90% of the game" (or some variant thereof) ... "You
have to be solid up the middle." ... "Good pitching always stops good hitting." ... "All you need is a strong bullpen."
Earl Weaver used to emphasize the home run (three-run variety) as his most important ingredient with the Baltimore Orioles. Sorry, Earl, it was more likely that your pitching and/or solid defense did
the trick, regardless of the home runs. How so? Is there any way to empirically assess these tried and true maxims?
Well, yes, friends, there does exist a statistical technique (and you thought we had exhausted them all) which begins to give us an answer to the question: offense or defense? In the lexicon of the
social scientist it is known as "multiple regression analysis." Let me try to explain, as simply as possible.
If you were asked to predict a given set of teams' win totals for a season and you wanted to minimize your error, you probably would opt for the mean value (usually around 81 games) for each team,
which would be your best statistical bet. However, multiple regression purports to yield better prediction with even smaller error variance. It states that if you know a set of variables - termed
"independent variables" - beforehand, you can come up with better predictive quality in the variable in question, termed the "dependent variable." This implies and assumes both a theoretical and
statistical (linear, additive) causation pattern, from a set of independent variables to a dependent variable, in that order.
Given that there is a total amount of variance (or variation in the actual values) in the dependent variable (100%), multiple regression can tell us how much of that is explained by the set of
independent variables utilized, in toto, as well as for each variable's singular contribution. Also, a prediction equation for the dependent variable can be calculated.
The application to baseball and its statistics thus becomes extremely alluring (at least for those of a statistical bent). The dependent variable in question is Team Wins. The set of independent
variables would encompass offensive and defensive performance statistics. Given 100% variance or variation in Team Win totals (across one or both leagues), how much can be explained by hitting,
pitching or fielding?
How much can't be? This is the logic and approach taken in my statistical assessment. (For a more thorough exposition of multiple regression, see: Applied Regression: An Introduction by Michael S.
Lewis-Beck © 1980, Sage Publications, Inc.; Social Statistics by Hubert M. Blalock, Jr. © 1979, McGraw-Hill).
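For readers who want to try the technique themselves, here is a minimal ordinary-least-squares sketch in Python. The data are synthetic, generated from invented coefficients that loosely echo the article's, since the original team-season table is not reproduced here; every number below is an illustrative assumption, not the article's data.

```python
import numpy as np

# Minimal OLS sketch of the article's setup: regress Team Wins on
# Fielding Average, Runs Scored, and ERA. Synthetic data drawn from
# invented "true" coefficients so the fit can be checked.
rng = np.random.default_rng(0)
n = 252  # team-seasons, matching the article's sample size

fielding = rng.uniform(0.970, 0.985, n)
runs = rng.uniform(550, 850, n)
era = rng.uniform(3.0, 4.5, n)
wins = -400 + 480 * fielding + 0.10 * runs - 16 * era + rng.normal(0, 2, n)

# Design matrix with an intercept column; ordinary least squares fit
X = np.column_stack([np.ones(n), fielding, runs, era])
beta, *_ = np.linalg.lstsq(X, wins, rcond=None)

# Explained variance (R^2): share of the variance in wins accounted for
r2 = 1 - np.var(wins - X @ beta) / np.var(wins)
print(np.round(beta, 3), round(r2, 3))
```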
The title of my analysis is "Baseball Regression 1973-1983: Omitting 1981." Why 1973-1983 and omitting 1981? Primarily because 1973 heralded the first year of the designated hitter in the American
League, and with the omission of strike-shortened 1981 due to its being an aberration, what's left is a nice ten-year period with which an analysis can be run and an evaluation made.
For this initial analysis, Team Earned-Run Average was chosen to capture the pitching factor (also defense), Team Fielding Average to proxy for defense, and four offensive variables which were
readily available: Team Batting Average, Home Runs, Slugging Percentage and Runs Scored. These are my first choices and the potential for revision and greater explication lies in the minds of those
who wish to further theoretically and statistically conceptualize.
Using the 1982 edition of The Baseball Encyclopedia and data supplied by the league and commissioner's offices, and with the computer aid of SPSS (Statistical Package For the Social Sciences - Nie,
Hull, Jenkins, Steinbrenner, Bent© 1975, McGraw-Hill), the results obtained were as follows:
1. Sample size (omitting 1981)
1973-1976: A.L. (12 teams)= 48 cases
1977-1983: A.L. (14 teams)= 84 cases
132 cases
1973-1983: N.L. (12 teams)= 120 cases
total cases = 252
2. Total explained variance (symbolized in statistics as R^2) for all cases (252) was roughly 87%. That is, 87% of the variance in Team Wins could be accounted for by Team Batting Average, Home Runs,
Slugging Percentage, Runs Scored, Earned-Run Average and Fielding
Average. However, Team Batting Average, Slugging Percentage and Home Runs were found to be not significant, statistically (t test), meaning that their "impact" was not statistically reliable (could
have as easily happened by chance) and their R^2 contributions were minuscule, at best. The most useful picture from the output comes from breaking the analysis out by league as follows:
Once again, Team Batting, Home Runs, and Slugging Percentage were not significant (for either league), and the three variables listed are the only ones statistically salient within this analysis. For
both leagues, defense, as represented by pitching (earned-run average) and fielding, is more important to Team Wins than the offensive statistic, runs scored, as follows: A.L.: 58%-42%; N.L.:
54%-46%. In the N.L., pitching turns out to be of highest explanatory value, solely, while in the A.L. it is runs scored, which intuitively makes sense, backing up the assertion that pitching is
better in the N.L. while offense is the name of the game in the junior circuit. Runs scored, you might say, has to be highly related to Team Wins, given that you have to score more runs than the
opponent to win. This is true although there are many teams that score heavily but still fail to win consistently. Is this a tautological cycle or not? Or does it show that it doesn't matter how you
score, just so you get those runs across the plate?
More ruminating needs to be done on other possible offensive statistics. Is the fielding average contribution greater in the A.L. because there is more hitting? Only speculation. The major caveat
remains that there still is 11% left unexplained in Team Wins in the A.L. while the figure is 15% in the N.L. Perhaps if other offensive statistics were used, the balance would swing in the other
direction. For the time being, though, the proof is in the numbers as they stand and the burden on the skeptic is to disprove.
One by-product of this regression analysis is the calculation of a prediction equation mathematically relating the independent variables to the dependent variable, Team Wins. While based on a
specific ten years' worth of history, it still can allow the fan to predict what his/her favorite team's win total should be given the team's current statistics as well as projecting what it needs to
do to improve its current standings. The equation:
A.L.: Team Wins = -417.861 + 497.80 x Fielding Avg. + .104 x Runs Scored - 15.721 x ERA
N.L.: Team Wins = -142.815 + 223.976 x Fielding Avg. + .103 x Runs Scored - 17.519 x ERA
*It should be noted that the numbers multiplied by the performance variables are known as "slope coefficients" and these, as well as the explained variance figures, are generated by a method known as
"OLS", ordinary least squares. The SPSS computer package utilizes OLS principles in conjunction with matrix algebra to produce these results. The more detailed statistical analysis can be had upon
request from the author.
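To illustrate the intended use, here is the A.L. equation evaluated at an invented stat line (the input numbers are hypothetical; the coefficients are the article's):

```python
# A.L. prediction equation from the article, applied to an invented
# (hypothetical) team stat line.
fielding_avg = 0.980   # team fielding average (hypothetical)
runs_scored = 750      # team runs scored (hypothetical)
era = 3.80             # team earned-run average (hypothetical)

wins = -417.861 + 497.80 * fielding_avg + 0.104 * runs_scored - 15.721 * era
print(round(wins, 1))  # about 88 predicted wins
```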
Well, there you have it. An inveterate fan and social science student's contribution to the mainstream of baseball arcana, grist for those upper-deck games between spectators known as "trivial
As stated before, much still can be done to close the gap between what I have explained (89%-A.L.; 85%-N.L.) and the perfect world of 100% explained variation. While seasoned watchers might allocate
that 11 and 15 percent, respectively, to managerial acumen or team spirit or ballpark design, I would prefer to think that there are other variables with which to creep closer (Total DPs? Total
Bases? Proportion of a team's hitters above .300? A Bill James' creation? Someone else's?). I encourage any and all to participate with further suggestions. The only requisites are a fanatical love
for baseball and a knowledge of its "numbers" as well as a compulsion to care about such things!
Math Forum Discussions
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: A good math blog?
Replies: 4 Last Post: Feb 15, 2014 7:12 AM
A good math blog?
Posted: Feb 10, 2014 9:12 PM
A co-worker and I were talking today. She's interested in math, but
feels she did poorly in it in school. From what she described, it
was poor teaching at the university level -- she had a run of bad
luck with her instructors.
She doesn't want to take a math course, but she would like to (my
words) be exposed to mathematical thinking. I've recommended John
Allen Paulos and Gerd Gigerenzer to her, but it strikes me that a
blog she could follow would be just perfect: small columns that used
something real-world to illustrate a mathematical concept.
Anyone have a good candidate?
Stan Brown, Oak Road Systems, Tompkins County, New York, USA
Shikata ga nai...
Student Support Forum: 'Approximation for nonlinear equations' topic
Hey guys!
I am trying to solve a system of three non-linear equations using Mathematica. However, the solver is unable to work it out analytically and solving it numerically doesn't work either (or at
least not within a reasonable amount of time).
The function f[a,b,c,d,e]=g with (a_i,b_i,g_i) i=1,2,3 is given. I want to determine c, d, e.
A good approximation would be fine as well (and I am also not entirely sure if there is an exact solution).
FindRoot should theoretically do the job but apparently there is no way to exclude negative numbers as results (which I keep getting). Mathematica also wouldn't allow me to add an additional
equation or inequation to make sure the results are positive numbers (saying that these are too many equations for the number of variables to solve for)
I guess somehow minimizing the f-g's simultaneously could also work, but I am unfortunately not that skilled with the program and the internet didn't help me so far.
Any ideas on what I could do here?
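For what it's worth, the "minimize the f - g residuals with positivity constraints" idea mentioned above can be sketched outside Mathematica as well (Python/SciPy; the model form below is a made-up linear-in-parameters stand-in, since the actual f is not given in the post):

```python
import numpy as np
from scipy.optimize import least_squares

# Known data (a_i, b_i, g_i), i = 1, 2, 3 -- invented for the demo.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, 1.5, 2.5])
g = np.array([2.0, 5.0, 10.0])

def residuals(x):
    c, d, e = x
    # Hypothetical model form; substitute the real f here.
    return c * a + d * b**2 + e - g

# bounds=(0, inf) keeps c, d, e nonnegative, which plain root finding
# does not allow.
sol = least_squares(residuals, x0=[1.0, 1.0, 1.0], bounds=(0.0, np.inf))
print(sol.x)  # for this stand-in system the exact solution is (1, 1, 0.75)
```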
Matrix products under which the determinant behaves multiplicatively
The determinant behaves multiplicatively with respect to the usual matrix product $$ \det(AB) = \det(A)\det(B), $$ and also with respect to the Kronecker (or tensor) product of square matrices $$ \
det(A\otimes B) = \det(A)^q \det(B)^p, $$ when $A$ and $B$ are $p\times p$ and $q \times q$ matrices, respectively.
Are there other natural types of matrix products under which the determinant behaves multiplicatively? To be completely precise, the property I need is that the determinant of the product is $0$ if
and only if the determinant of at least one of its factors is $0$.
linear-algebra determinants
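Both identities, and the zero-determinant property asked for, are easy to sanity-check numerically; a quick sketch (sizes p = 2, q = 3 chosen arbitrarily):

```python
import numpy as np

# Numeric sanity check of the two identities in the question.
rng = np.random.default_rng(1)
p, q = 2, 3                        # arbitrary sizes
A = rng.standard_normal((p, p))
B = rng.standard_normal((q, q))
C = rng.standard_normal((q, q))

# det(BC) = det(B) det(C)
assert np.isclose(np.linalg.det(B @ C),
                  np.linalg.det(B) * np.linalg.det(C))

# det(A (x) B) = det(A)^q det(B)^p for the Kronecker product
lhs = np.linalg.det(np.kron(A, B))
rhs = np.linalg.det(A) ** q * np.linalg.det(B) ** p
assert np.isclose(lhs, rhs)
print("both identities check out")
```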
First of all, what do you mean by a "product"? Do you mean an algebra structure on $\oplus_{n=0}^{\infty}Mat(n,\Bbbk)$? – Qfwfq Sep 19 '10 at 15:20
Thinking about the 'natural' part of the question, would this be a good formulation? Fix a group $G$ and $G$-modules $U$ and $V$ (over a ground field $K$, say). Classify the $G$-modules $W$ and
bilinear $G$-module morphisms $\mu: End(U)\times End(V) \to End(W)$ with the property that $\det_W(\mu(A,B)) = c\ \det_U(A)^q\det_V(B)^p$ for nonzero $c$ and $p,q>0$? The special case of $G=GL(n,\
mathbb{R})$ and $U=V=\mathbb{R}^n$, would encompass regular matrix multiplication, reversed matrix multiplication, and the tensor product, though there are other examples. – Robert Bryant Jul 21
'11 at 19:44
2 Answers
Direct summation (taking a $p \times p$ matrix $A$ and a $q \times q$ matrix $B$ and returning a block-diagonal $(p+q) \times (p+q)$ matrix $A \oplus B := \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}$) also works:
$$\det(A \oplus B) = \det(A) \det(B).$$
One can debate whether this operation deserves to be called a "matrix product", though (for instance, it is not distributive over addition).
EDIT: Another (somewhat trivial) example is the reversed multiplication operation $(A, B) \mapsto BA$. More generally, if there was a linear automorphism $T$ on $Mat_n$ that preserved the
singular variety $\{ A \in Mat_n: \det A = 0 \}$, one could conjugate the usual matrix multiplication operation by $T$. In the above example, $T$ is the transpose operation $T: A \mapsto A^
t$. As another example, one could let $T$ be a left multiplication operator $A \mapsto SA$ for some invertible $S$, in which case the matrix multiplication operation becomes $(A, B) \mapsto
ASB$, which also seems to work. One can combine the two and obtain another operation $(A, B) \mapsto BSA$. I'm not sure if these are the only examples that can be constructed by this method.
The determinant of the product of two non square matrices is nicely expressed by the Binet-Cauchy formula: $$ \det(AB) = \sum_I \det A_I \det B_I $$ Here $A$ is $n \times m$ and $B$ is $m \
times n$ and the sum ranges over $n$-subsets $I$ of the numbers $\{1,2,...,m\}$. $A_I$ means "select columns of $A$ indexed by $I$" and $B_I$ means "select the rows of $B$ indexed by $I$".
If either $A$ or $B$ has rank less than $n$, then the determinant of $AB$ is, thus, zero.
I do not know for certain, but this looks like it has to do with some kind of coproduct?
That definitely has nothing to do with a coproduct. That's just what you get if you compute the determinant of the square matrix resulting from multiplying matrices in the way noted
above. If I had to guess how to begin proving it, it would be something like: start from Cramer's rule and look at the minors. The reason that the formula is so convoluted is that both
matrix multiplication and the determinant have nasty explicit descriptions for matrices (although their formal properties arise from canonical constructions). – Harry Gindi Sep 19 '10 at
The proof is via the functoriality of the exterior algebra, along with the fact that the adjoint of the map induced on exterior algebras is the map on exterior algebras induced by the
adjoint. It might have to do with a coproduct, as it resembles the coproduct on the ring of matrix coefficients. However, a proof along those lines would be circuitous. – Charlie Frohman
Sep 20 '10 at 11:14
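The Binet-Cauchy formula in the answer above can also be checked numerically for small sizes; a sketch with n = 2, m = 4 (arbitrary choices):

```python
import itertools
import numpy as np

# Cauchy-Binet: det(AB) = sum over n-subsets I of det(A_I) det(B_I),
# where A is n x m, B is m x n, A_I selects columns, B_I selects rows.
rng = np.random.default_rng(2)
n, m = 2, 4
A = rng.standard_normal((n, m))
B = rng.standard_normal((m, n))

lhs = np.linalg.det(A @ B)
rhs = sum(np.linalg.det(A[:, list(I)]) * np.linalg.det(B[list(I), :])
          for I in itertools.combinations(range(m), n))
assert np.isclose(lhs, rhs)
print("Cauchy-Binet verified for this example")
```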
Not the answer you're looking for? Browse other questions tagged linear-algebra determinants or ask your own question.
Miami Gardens, FL Algebra 2 Tutor
Find a Miami Gardens, FL Algebra 2 Tutor
...In high school I graduated top 10% of my class with honors and AP credit in the math and sciences. Unlike most other college students, I have taken all of the difficult science and math
courses but also education courses on how to portray this material to students. Because these subjects are di...
22 Subjects: including algebra 2, English, Spanish, reading
...Personally, I believe that such experience is very much in phase with my personality and has given me a very particular perspective about general psychological impediments to enjoying and
learning Physics and Math concepts. I also strongly believe that learning Math and Sciences is of the upmost...
11 Subjects: including algebra 2, Spanish, physics, calculus
...As a high-school student, I qualified and participated to the National Math Olympics in Romania and other Mathematics contests as well. I have a deep knowledge of MS Office as well as of other
office productivity software suites, based on my background and on my engineering experience. I earned a Bachelor's Degree in Mechanical Engineering and a Master's Degree in Aircraft Engineering.
22 Subjects: including algebra 2, calculus, physics, geometry
...I learned how to teach in the USAF where I taught fighter pilots how to fly supersonic jets. Later in civilian life as an FAA qualified flight instructor, I taught students with physical
handicaps to fly small aircraft. I also teach Isshinryu Karate and Self Defense courses.
9 Subjects: including algebra 2, English, reading, algebra 1
...How about writing? Indeed, my reach goes all the way from mastery of the English language, across the ocean of political philosophy, and to the land of physical sciences (chemistry, biology,
physics), also where the dreaded algebra monster lies in waiting to devour the souls of the innocent. But need not fear, I have conquered the beast, and I can show you how.
36 Subjects: including algebra 2, chemistry, reading, English
Related Miami Gardens, FL Tutors
Miami Gardens, FL Accounting Tutors
Miami Gardens, FL ACT Tutors
Miami Gardens, FL Algebra Tutors
Miami Gardens, FL Algebra 2 Tutors
Miami Gardens, FL Calculus Tutors
Miami Gardens, FL Geometry Tutors
Miami Gardens, FL Math Tutors
Miami Gardens, FL Prealgebra Tutors
Miami Gardens, FL Precalculus Tutors
Miami Gardens, FL SAT Tutors
Miami Gardens, FL SAT Math Tutors
Miami Gardens, FL Science Tutors
Miami Gardens, FL Statistics Tutors
Miami Gardens, FL Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Aventura, FL algebra 2 Tutors
Doral, FL algebra 2 Tutors
Hallandale algebra 2 Tutors
Hialeah algebra 2 Tutors
Hialeah Gardens, FL algebra 2 Tutors
Hollywood, FL algebra 2 Tutors
Miami Lakes, FL algebra 2 Tutors
Miami Shores, FL algebra 2 Tutors
Miramar, FL algebra 2 Tutors
N Miami Beach, FL algebra 2 Tutors
North Miami Beach algebra 2 Tutors
North Miami, FL algebra 2 Tutors
Opa Locka algebra 2 Tutors
Pembroke Park, FL algebra 2 Tutors
Pembroke Pines algebra 2 Tutors
The Math Forum ESCOT POW: Teacher Support
A Math Forum Project
ESCOT Problems of the Week:
NCTM Standards-Aligned Index
ESCOT: www.escot.org || Teacher Support || Student Versions || Applets Index
The technical requirements for ESCOT are listed here; however, every ESCOT PoW runs some self-diagnostics to determine whether any software installation is necessary.
Each problem is described below including:
• short description of the problem
• Where's the Math
• links to:
□ teacher support page
□ archived problem with highlighted solutions
□ student version
□ applet (in some cases more than one)
□ (in some cases additional versions of the problem)
More information is available on the ESCOT site's Math Standards page.
If you have something to share with us as you use any of the links or suggestions on this page (something you tried and changed or a new idea) we would love to hear from you. Please email us.
Number & Operations
• Fish Farm I
• Fish Farm II
• Fractris
• Galactic Exchange
• Graph Zooming
• The Hispaniola Water Shortage
• Marabyn
• Mosaic
• Pythagoras' Mystery Tablet
• Scale 'n Pop
• Search and Rescue, Part II
Geometry
• Graph Zooming
• In the Dark with an Elephant
• Pythagoras' Mystery Tablet
• Scale 'n Pop
• Marathon Graphing
• Rumors
• Search and Rescue, Part I
• Search and Rescue, Part II
Algebra
• Fish Farm I
• Galactic Exchange
• The Hispaniola Water Shortage
• In the Dark with an Elephant
• Marathon Graphing
• Polyrhythm
• Rumors
Measurement
• Marabyn
• Pythagoras' Mystery Tablet
• Scale 'n Pop
• Search and Rescue, Part I
• Search and Rescue, Part II
Data Analysis & Probability
• Fish Farm II
• Galactic Exchange
• Marathon Graphing
Fish Farm, Part I: Students try to place male and female fish in different ponds so as to satisfy certain restrictions on the ratios of male to female fish.
Where's the Math: Depending on the student's approach, many types of math can be used to solve this problem. The concept of ratio is used throughout, and students can
solve the problem by experimenting with the number of fish so that the ratios fit the constraints. Algebra can also be used, by coming up with simultaneous equations to represent the conditions
given. These can be simplified to one equation in multiple unknowns which requires positive integral solutions, otherwise known as a Diophantine equation.
[teacher support] [archive] [student version] [applet]
Fish Farm, Part II: Students collect data on randomly chosen fish from a pond. Using this data, they try to determine the ratio of male fish to female fish in the pond.
Where's the Math: This problem allows students to investigate different data collection methods. Students realize that they get different data depending on whether they have a small sample size,
but a large number of trials; a large sample size and a small number of trials, etc. With further experimenting, students can discover which method of collecting data gives the most accurate
results. In addition, the bonus question deals with the concept of expected value. Given the number of fish in the pond and the size of the sample, students can realize intuitively that the ratio
of males to females in their sample should be close to the actual ratio. If they investigate the probability of getting a male or female fish, they can discover the formula for expected value,
and why it works.
[teacher support] [archive] [student version] [applet]
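As a rough illustration of the sampling and expected-value ideas above, a short simulation shows the sample proportion settling near the true ratio. The pond composition, sample size, and trial count below are made up for this sketch and are not part of the original activity.

```python
import random

random.seed(0)
pond = ["M"] * 30 + ["F"] * 20   # an assumed pond: true male-to-female ratio 3:2

def estimate(sample_size, trials):
    """Average proportion of males seen across repeated random samples."""
    males = sum(random.sample(pond, sample_size).count("M") for _ in range(trials))
    return males / (sample_size * trials)

# Each draw is male with probability 30/50 = 0.6 (the expected-value idea),
# so with enough trials the estimate should hover near 0.6.
print(round(estimate(10, 1000), 2))
```

Varying `sample_size` and `trials` reproduces the classroom observation that many small samples and a few large samples converge in different ways.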
Fractris: Students fill up a row in a Tetris-like game by making combinations of certain fractions that add up to 1.
Where's the Math: This problem deals with basic addition of fractions. However, some of the questions also encourage students to investigate different ways to get the fractions to add up to 1. By
multiplying all the fractions by 12, all the fractions are converted to integers, and the problem becomes finding all the ways to get the numbers 1-6 to add up to 12. This involves combinatorics
and number theory.
[teacher support] [archive] [student version] [applet]
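The counting question mentioned above (all the unordered ways to write 12 as a sum of the numbers 1 through 6) can be checked with a short dynamic program. This is a sketch for the curious reader, not part of the original activity.

```python
def partitions_with_max_part(total, max_part):
    """Count unordered ways to write `total` as a sum of parts 1..max_part."""
    ways = [1] + [0] * total
    for part in range(1, max_part + 1):
        # Processing one part size at a time keeps the count order-independent.
        for n in range(part, total + 1):
            ways[n] += ways[n - part]
    return ways[total]

print(partitions_with_max_part(12, 6))  # 58 unordered ways to complete a row
```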
Galactic Exchange: Students are asked to discover the exchange rates among different types of alien currency, and use this information to find out the amount of money needed to "buy" certain products.
Where's the Math: This problem allows students to investigate ratio and proportion, by discovering the exchange rate between different alien currencies (i.e. 7 circles have the same value as 3
triangles, etc.) By manipulating these exchange rates algebraically, students can come up with equations which represent the money needed to buy certain products. The use of symbols for coin
types encourages a symbolic or alphabetic representation of each type of currency.
[teacher support] [archive] [student version] [applet] [extra practice]
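The exchange-rate arithmetic can be modeled with exact fractions. The rate "7 circles have the same value as 3 triangles" comes from the description above; the second rate and the purchase amount are made up for illustration.

```python
from fractions import Fraction

CIRCLES_PER_TRIANGLE = Fraction(7, 3)   # 7 circles = 3 triangles (from the text)
TRIANGLES_PER_SQUARE = Fraction(4, 6)   # assumed: 4 triangles = 6 squares

def circles_needed(squares):
    """Chain the exchange rates: squares -> triangles -> circles."""
    return squares * TRIANGLES_PER_SQUARE * CIRCLES_PER_TRIANGLE

print(circles_needed(9))  # 9 squares -> 6 triangles -> 14 circles
```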
Graph Zooming: Students use different buttons to zoom a graph. They then investigate what is visible and what isn't, depending on which scale the zoom displays.
Where's the Math: By asking students about the intersection between two lines, this problem involves solving two linear equations graphically. However, the main purpose of the problem is to
investigate what happens when a graph is zoomed with different scales. This allows students to become more familiar with Cartesian coordinates, and the representations of certain lines when
graphed in such a coordinate system.
[teacher support] [archive] [student version] [applet]
The Hispaniola Water Shortage: Students are asked to use virtual containers of water, of definite sizes, and combine them in various ways to come up with as many different resulting volumes as possible.
Where's the Math: Gives students an introduction to number theory by emphasizing the significance of starting with containers of different parity, as compared to containers of the same parity.
Some implied work with modular arithmetic, i.e. using the 3 oz. and 8 oz. containers, whenever one fills up the 8 oz. with the 3 oz., 1 oz. will be left in the 3 oz. container because 8 is
congruent to 2 (mod 3). Questions encourage students to find a pattern involving parity.
[teacher support] [archive] [student version] [applet 1] [applet 2]
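The container experiment can be simulated with a breadth-first search over jug states. The 3 oz. and 8 oz. capacities come from the discussion above; the fill, empty, and pour moves are the standard ones. This is a sketch, not the applet's actual code.

```python
from collections import deque

def reachable_amounts(a, b):
    """BFS over jug states (x, y) for capacities a and b; returns all
    amounts that can appear in either jug."""
    seen = {(0, 0)}
    queue = deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        moves = [(a, y), (x, b),    # fill either jug
                 (0, y), (x, 0)]    # empty either jug
        t = min(x, b - y)
        moves.append((x - t, y + t))  # pour small into big
        t = min(y, a - x)
        moves.append((x + t, y - t))  # pour big into small
        for state in moves:
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return {x for x, _ in seen} | {y for _, y in seen}

print(sorted(reachable_amounts(3, 8)))  # every amount 0..8, since gcd(3, 8) = 1
```

Running it with two even capacities (same parity, gcd greater than 1) shows the number-theoretic point: only multiples of the gcd appear.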
In the Dark with an Elephant: Students are asked to investigate how the appearance of a graph changes, based on the scale of the graph and the region being viewed.
Where's the Math: Gives students experience in manipulating graphs by changing domain and range values for the viewing window, which can easily be carried over to more powerful tools such as
graphing calculators. Allows students to become familiar with the Cartesian coordinate system. Questions encourage thought about how the shape of different areas of a graph are not necessarily
representative of the shape of the entire graph.
[teacher support] [archive] [student version] [applet]
Marabyn: Students change the distance Marabyn rides the bus and the distance she walks to fit constraints about the distance and time that she walks.
Where's the Math: This problem deals primarily with the distance formula, d = rt. For one question, students also formulate their own equation to represent additional constraints. Using the slope
of lines on the graph, students can also calculate Marabyn's walking and riding speeds. With this data, an inequality can be formulated which represents the minimum and maximum walking distances.
[teacher support] [archive] [student version] [applet]
Marathon Graphing: Students are asked to use data and best-fit lines to predict the winning women's marathon times from various years.
Where's the Math: This problem was designed to get students to investigate how a best-fit line can be used to approximate actual data. Also, it was designed to show the limitations of such a
best-fit line. To create a line to fit the data, students could move the line using the applet to get a visual representation of how the line fit the data, and then create a linear equation which
represents the line. Also, students solve linear equations graphically, by observing where two lines representing two different sets of data intersect.
[teacher support] [archive] [student version] [applet 1] [applet 2]
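The best-fit idea can be sketched with an ordinary least-squares line. The data points below are invented placeholders, not actual marathon times, and the extrapolation step is included precisely to show the limitation the problem is after.

```python
def best_fit(points):
    """Ordinary least-squares line through (x, y) points: (slope, intercept)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

# Made-up (year, minutes) winning times, just to show the idea:
times = [(1970, 180), (1980, 165), (1990, 152), (2000, 143)]
m, b = best_fit(times)
print(m * 2010 + b)  # extrapolated prediction: note how far trends can mislead
```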
Mosaic: Students use blocks of variable length (fractional, not decimal) to fill up a map of the USA like a mosaic. They then use the size of the blocks to estimate the area of the map.
Where's the Math: Like Fractris, this problem deals with basic addition of fractions. In addition, question #2 brings up basic ideas of calculus, i.e. using smaller and smaller rectangles to
approximate an irregular shape. This question challenges students to come up with that idea independently.
[teacher support] [archive] [student version] [applet]
Polyrhythm: Students investigate different rhythms, both aurally and visually, using the applet. They then quantify these rhythms based on the accented beats.
Where's the Math: The first few questions and the investigation deal mainly with ratios and their properties. To find the phrase length of a complicated rhythm, however, the concept of the LCM is
needed, since the total phrase length must be evenly divisible by each of its constituent ratios.
[teacher support] [archive] [student version] [applet]
Pythagoras' Mystery Tablet: Students use an applet to compute the area of a square based on its side length. They then use this information to determine whether it is possible to get exact side
lengths for certain areas.
Where's the Math: The goal of this problem was to get students to investigate the concept of irrational numbers through a familiar concept like area. Students realized that certain areas, like 4,
had an exact side length, while others, like 2, did not. This led to questions about the categorization of such numbers, based on how "easy" it was to get an exact area. Without actually
introducing the mathematical nomenclature, students discovered the existence of certain irrational numbers.
[teacher support] [archive] [student version] [applet]
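The observation that some areas give exact sides and others do not amounts to testing for perfect squares; a minimal sketch (not part of the original applet):

```python
import math

def has_exact_side(area):
    """An integer area has an exact (integer) side only if it is a perfect square."""
    root = math.isqrt(area)
    return root * root == area

print([a for a in range(1, 17) if has_exact_side(a)])  # [1, 4, 9, 16]
```

For any other integer area the side length is irrational, which is the discovery the problem is steering students toward.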
Rumors: Students use a simulation to discover how the number of students who are told about a rumor varies with time.
Where's the Math: In conjunction with the simulation, a graph is generated by the applet that illustrates the relationship between number of people and time. Students are asked to investigate how
the shape of the graph changes when the parameters of the rumor-spreading are changed, i.e. how many people spread the rumor, how the rumor is spread, etc. The bonus question introduces the
concept of exponential functions to students, as compared with linear functions for the introduction.
[teacher support] [archive] [student version] [applet]
Scale 'n Pop: Students resize a balloon by using fractional scaling factors, to try to get the balloon to fit between two walls, yet still be big enough to be popped by a pair of nails.
Where's the Math: This problem allows students to investigate the properties of fractions. The applet allows a visual representations of basic properties of fractions, and makes it easier to see
things like the fact that a smaller numerator makes the fraction smaller, but a smaller denominator makes the fraction bigger. The problem also illustrates how fractions can be used to scale a
set amount, and the data table displays that scaling. This also illustrates conversion between fractions and decimals.
[teacher support] [archive] [student version] [applet]
Search and Rescue, Part I: Students fly a helicopter to different locations by specifying headings and distances.
Where's the Math: The concept of vectors is introduced in an interactive way. By specifying a distance and a direction (an angle measured relative to true north, rather than from the origin), a
vector corresponding to the helicopter's movement is specified. By becoming familiar with the behavior of vectors in the simulation, students can investigate adding vectors by realizing that, in
one movement, they can end up in the same spot as with two other, different movements. The relationship between the distances and angles of added vectors, as well as other vector properties, can
also be experimented with in this simulation.
[teacher support] [archive] [student version] [applet 1] [applet 2]
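Vector addition of (heading, distance) legs can be sketched numerically. Headings here are assumed to be measured clockwise from true north, as described above; this is an illustration, not the applet's code.

```python
import math

def leg(heading_deg, distance):
    """Convert (heading, distance) to an (east, north) displacement.
    Headings are measured clockwise from true north."""
    rad = math.radians(heading_deg)
    return distance * math.sin(rad), distance * math.cos(rad)

def fly(*legs):
    """Add the displacement vectors of several legs; return (heading, distance)."""
    east = sum(leg(h, d)[0] for h, d in legs)
    north = sum(leg(h, d)[1] for h, d in legs)
    distance = math.hypot(east, north)
    heading = math.degrees(math.atan2(east, north)) % 360
    return heading, distance

# Two legs (due east 3 km, then due north 4 km) equal one 5 km leg.
print(fly((90, 3), (0, 4)))
```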
Search and Rescue, Part II: Students are asked to place a helicopter rescue base between two campgrounds. The location of the base should correspond to certain constraints, such as the population of
each campground, and the fact that a minimal distance is favorable. The location is determined by dragging a dot across the map, and the heading and distance information is automatically computed and displayed.
Where's the Math: Students investigate the concept of locus. By dragging the dot across the map, students can discover that, even if the distance between the base and each of the two campgrounds
is required to be the same, there are an infinite number of locations for the base. However, all these locations lie on a certain line: the perpendicular bisector of the line connecting the two
campgrounds. This result can be obtained directly from certain theorems in geometry dealing with equidistance and perpendicular bisectors; conversely, students can discover these theorems on
their own by experimenting with the applet. The problem also deals with the concepts of minimum distance and proportion, which result from students trying to make the base twice as close to one
campground, while still making the resulting distances as small as possible.
[teacher support] [archive] [student version] [applet]
Little Neck Trigonometry Tutor
Find a Little Neck Trigonometry Tutor
...As an experienced teacher of high school and college level physics courses, I know what your teachers are looking for and I bring all the tools you'll need to succeed! Of course, a big part of
physics is math, and I am experienced and well qualified to tutor math from elementary school up through...
18 Subjects: including trigonometry, reading, physics, calculus
...Chemistry is built upon overlapping concepts starting with the structure of the atom, to the Periodic Table, to types of bonds, Nuclear and Organic Chemistry. I take the basic concepts, make
sure that the student understands them and then I build upon these concepts. I use diagrams, charts, and ...
45 Subjects: including trigonometry, English, chemistry, GED
...I have post-graduate training in Wilson Reading Program, Wilson Foundations, Animated Literacy and Sounds in Motion. Using a combination of these programs plus strategies gleaned from my long
professional career, I work with students who struggle with decoding, phonemic awareness, reading compre...
39 Subjects: including trigonometry, English, reading, ESL/ESOL
I am a Stanford University graduate with a BS in Physics, a BA in Philosophy and 5 years cumulative tutoring experience. In college, I worked with high school students and college freshmen, in
Physics, Inorganic Chemistry, Calculus, Algebra I and II and Geometry. In the years since graduating from ...
23 Subjects: including trigonometry, chemistry, reading, physics
...My tutoring philosophy is based on the idea above: my job as a tutor is to help you understand how math works, making you able to do any problem yourself! (Of course this all applies to physics
as well as to math!) I believe that you can always do better in math or physics, no matter what level...
12 Subjects: including trigonometry, physics, MCAT, calculus
4.7: Use Scale Factor when Problem Solving
Created by: CK-12
Practice Scale Factor to Find Actual Dimensions
Have you ever applied scale factor to a real-world dilemma? Take a look at this one.
A driveway has a length of 24 feet. If the scale is 2 inches : 4 feet, what is the scale factor? In a diagram, how many inches would be drawn to represent the driveway?
Pay attention and you will know how to figure this out by the end of the Concept.
A ratio is a comparison between two quantities. We can write a ratio in fraction form, by using a colon or by using the word “to”.
Sometimes in life, we have a real-life object that we want to represent in a smaller form. Think about buildings. We can’t build an actual building to show the dimensions in a smaller way, so we
build a model of the building. When we do this, we take the actual dimensions and shrink them down to build a model.
The scale that we use can help us with scale dimensions or actual dimensions. This scale is key in problem solving.
Let’s say that the scale is 1 : 2.
We can use this information to determine the scale factor. The scale factor is the ratio between the scale measurement of the model and the actual measurement, written in simplest form.

In this case, it is $\frac{1}{2}$.
Take a look at this situation where we can use scale factor.
What is the scale factor if 3 inches is equal to 12 feet?
We can write a ratio to show the scale factor.
$\frac{3}{12} = \frac{1}{4}$
The scale factor is 1 : 4. It is expressed in simplest form.
Now let’s look at applying this information further.
If the scale dimension is 4, then we can figure out the actual dimension. Here is a proportion to show these two ratios.
$1 : 2 = 4 : x$
Let’s use fraction form of the ratios to make this clearer.
$\frac{1}{2} = \frac{4}{x}$
See the units aren’t necessary for figuring out the missing part of the proportion. We can simply use what we have learned to find the actual dimension.
1 times 4 = 4
2 times 4 = 8
$\frac{1}{2} = \frac{4}{8}$
The actual dimension is 8.
Now we can look at applying scale factor to our work when we do know the units. To use scale factor to find actual dimensions or scale dimensions, we will need to know a few things.
Necessary Information:
1. Scale Factor
2. One other dimension (either the actual or the scale dimension) must be given
So, if we have three parts of the proportion, we can solve for the last missing part.
Take a look at this one.
The plans for a flower garden show that it is 6 inches wide on the plan. If the scale for the flower garden is 1 : 12, what is the actual width of the flower garden?
To work on this problem, we first need to write two ratios that form a proportion. We have the scale factor and we have the scale measurement. We are missing the actual measurement. Let’s figure out
the actual measurement of the garden.
$1 : 12 = 6 : x$
Now we have two ratios that form a proportion. Let's write them both in fraction form so that we can work easily in solving for the missing measurement.

$\frac{1}{12} = \frac{6}{x}$

Now we can cross multiply or solve it by using equal ratios.

$1 \times 6 = 6$

$12 \times 6 = 72$

The actual width of the garden is 72 inches, which is the same as six feet.
Use the scale factor of $\frac{1}{4}$ to find the scale dimension for each actual dimension below.

Example A

An actual dimension of $2$

Solution: $\frac{1}{2}$

Example B

An actual dimension of $3$

Solution: $\frac{3}{4}$

Example C

An actual dimension of $4$

Solution: $1$
Now let's go back to the dilemma from the beginning of the Concept.
Notice that there are two parts to this problem. First, we have to identify the scale factor.
$\frac{2}{4} = \frac{1}{2}$
The scale factor is 1 : 2.
Next, we need to figure out how many inches will be drawn to represent the driveway. To do this, we write a proportion.
$\frac{2}{4} = \frac{x}{24}$
We can cross multiply and divide or use equal ratios to solve this. Let’s use equal ratios. We work with the denominators.
$4 \times 6 = 24$
$2 \times 6 = 12$
The driveway will be represented by 12 inches or 1 foot.
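The proportion-solving step above can be automated with exact fractions. This small helper is not part of the lesson; it simply reproduces the driveway example (a scale of 2 inches : 4 feet applied to a 24-foot length).

```python
from fractions import Fraction

def scale_length(scale_num, scale_den, actual):
    """Given a scale of scale_num : scale_den, return the drawing length
    for an actual length by solving scale_num/scale_den = x/actual."""
    return Fraction(scale_num, scale_den) * actual

# Driveway example: 2 inches : 4 feet, actual length 24 feet.
print(scale_length(2, 4, 24))  # 12 (inches)
```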
Scale Dimension
the measurement used to represent actual dimensions in a drawing or on a map.
Actual Dimension
the real-life dimension of the object or building.
Scale Factor
the ratio of scale to actual dimension written in simplest form.
Guided Practice
Here is one for you to try on your own.
Find the missing actual dimension if the scale factor is 2" : 3' and the scale measurement is 6".
First, we can set up a proportion.
$2 : 3 = 6 : x$
Now we can use fraction form to make it easier to solve this proportion.
$\frac{2}{3} = \frac{6}{x}$
$2 \times 3 = 6$
$3 \times 3 = 9$
$\frac{2}{3} = \frac{6}{9}$
The actual dimension is 9 feet.
Directions: Figure out each scale factor.
1. $\frac{2 \ inches}{8 \ feet}$
2. $\frac{3 \ inches}{12 \ feet}$
3. $\frac{6 \ inches}{24 \ feet}$
4. $\frac{11 \ inches}{33 \ feet}$
5. $\frac{16 \ inches}{32 \ feet}$
6. $\frac{18 \ inches}{36 \ feet}$
7. $\frac{6 \ inches}{48 \ feet}$
8. $\frac{6 \ inches}{12 \ feet}$
Directions: Solve each problem.
9. A rectangle has a width of 2 inches. A similar rectangle has a width of 9 inches. What scale factor could be used to convert the larger rectangle to the smaller rectangle?
10. A drawing of a man is 4 inches high. The actual man is 64 inches tall. What is the scale factor for the drawing?
11. A map has a scale of 1 inch = 4 feet. What is the scale factor of the map?
12. A drawing of a box has dimensions that are 2 inches, 3 inches, and 5 inches. The dimensions of the actual box will be $3\frac{1}{4}$ times the dimensions of the drawing. What are the dimensions of the actual box?
13. A room has a length of 10 feet. Hadley is drawing a scale drawing of the room, using the scale factor $\frac{1}{50}$. How long will the room be in her drawing?
14. The distance from Anna’s room to the kitchen is 15 meters. Anna is making a diagram of her house using the scale factor of $\frac{1}{150}$. How far apart will the two rooms be in her diagram?
15. On a map of Cameron’s town, his house is 9 inches from his school. If the scale of the map is $\frac{1}{400}$, what is the distance in feet from Cameron’s house to his school?
Randall Stace Romero Aguilar
on 23 Aug 2013
I think the problem is not well formulated: in a weighted average, your weights should add up to one.
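One common fix for the issue raised in the comment is to normalize the weights inside the function, so the caller's weights need not sum to one; a minimal sketch:

```python
def weighted_average(values, weights):
    """Weighted mean; the weights are normalized, so any positive scaling works."""
    total = sum(weights)
    if total == 0:
        raise ValueError("weights must not sum to zero")
    return sum(v * w for v, w in zip(values, weights)) / total

print(weighted_average([80, 90], [1, 3]))  # 87.5
```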
'Simpsons' Analysts Show How Math Figures into Episodes
Professors use popular cartoon to demonstrate that subject doesn't have to be intimidating
By Matthew Chin
© Los Angeles Times, Inland Valley Edition, March 19, 2002.
CLAREMONT -- Mathematics doesn't exactly have a great reputation for being a source of brilliant humor. When was the last time you heard a math joke? And, more importantly, did you laugh? That's why
it may come as a surprise that the writers of "The Simpsons," regarded by some critics as the smartest, most successful cartoon on television today, regularly turn to math to plumb its potential for
Take, for example, one 1998 episode in which Homer contemplates buying a 5-pound lobster at $8 per pound. "How many pounds in a gallon?" he wonders.
The show's preoccupation with innumeracy -- the mathematical equivalent of illiteracy -- and with math in general has not gone unnoticed in academic circles. Two math professors, Andrew Nestler of
Santa Monica College and Sarah J. Greenwald of Appalachian State University in North Carolina, have compiled more than 100 math references, from the simple to the cutting edge, in the show's
13-season run.
The pair recently gave a lecture at Harvey Mudd College to an audience of more than 100 students that sampled mathematical morsels from the show. "It's a show with a cult following, especially at
Harvey Mudd," said Harvey Mudd physics sophomore Sean Skelley, explaining the lecture's popularity. "There are a lot of mature references and subtle jokes that may fly past children, but adults can
get the subtle meanings."
Harvey Mudd regularly has math lectures that may appeal to a broader nonacademic audience, math department chairman Michael Moody said.
"The Simpsons" uses fractions, statistics, geometry and the metric system to show that many of the characters have an apparent lack of math knowledge.
For example, the show's resident school bully, Nelson Muntz, once declared "That's like asking the square root of a million. No one will ever know." Actually, there are two numbers that when squared
equal 1 million: 1,000 and - 1,000. Nelson's quip got a big guffaw from the science and math students at Harvey Mudd.
Nestler and Greenwald, who are huge fans of the show, began the idea of using "The Simpsons" in lectures a few years ago while they were graduate students at the University of Pennsylvania.
They wanted a way to get students who are uncomfortable or even afraid of math to see that the subject shouldn't be too intimidating and can even be fun, Greenwald said.
They have given their lecture four times, most recently on Saturday at CalTech for the spring meeting of the Southern California Mathematical Assn. of America.
At Harvey Mudd, Nestler said it was like preaching to the converted, as a solid background in math is required to gain admission to the science and engineering school.
In Greenwald's favorite "Simpsons" math moment, Homer and Marge Simpson are considering sending their daughter Lisa to a school for the gifted. As the camera pans, two young girls playing the game of
patty-cake recite the following playground chant: "Cross my heart and hope to die / Here's the digits that make pi / 3.1415926535897932384..." and the camera pans away.
The joke, of course, is that the digits that make pi -- a circle's circumference divided by its diameter -- continue infinitely. The writers are clearly aware that pi is what's called an irrational
number -- one that cannot be expressed in terms of the quotient of two integers in lowest terms. And to "get it," the viewers have to understand that it means you can never say what pi is exactly, in
the same way you can say what 5 is.
"They laugh, but then they start to ask questions and engage the mathematics," Greenwald said of the audiences to which she shows the clip. "They end up learning significant mathematics, because
there are deep ideas embedded in these."
Nestler and Greenwald said some of the math references show a surprisingly high level of understanding of complex math topics. In one episode, Homer Simpson, normally a two-dimensional character, is
trapped in a three-dimensional world.
One equation whizzes by the screen: 1782^12 + 1841^12 = 1922^12. One could multiply out the numbers raised to the power of 12 to prove that the statement is false -- a tedious process by any
measure -- but it's not really necessary since the equation would, if true, prove Fermat's Last Theorem false. Fermat's Last Theorem, which for centuries stumped the best mathematical minds but has
now been proved, states that when "n" is greater than 2, there are no non-zero integers that can stand in for X,Y, and Z in the equation X ^ n + Y ^ n = Z ^ n that will make the equation true. Since
12 is greater than 2, the statement that whizzed by the 3-D Homer is obviously false.
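Exact integer arithmetic makes the prank easy to expose without the tedious multiplication: a parity check alone settles it, since an even number raised to the 12th power plus an odd one cannot equal an even one. A quick sketch (not from the article):

```python
lhs = 1782**12 + 1841**12
rhs = 1922**12
print(lhs == rhs)        # False: Fermat survives
print(lhs % 2, rhs % 2)  # 1 0, i.e. the left side is odd, the right side even
print(len(str(lhs)), len(str(rhs)))  # both 40-digit numbers, which is the joke:
                                     # they agree in their leading digits
```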
Such an arcane math reference would be lost on most, but a few of the more well-read Mudders caught the inside joke.
For a more complete guide to math references in "The Simpsons," check out the Web site: http://homepage.smc.edu/nestler_andrew/SimpsonsMath.htm.
Work Done on the Upper Bound of the Twin Primes
It can be shown using the Selberg sieve method that the number of twin primes less than $N$ is at most $$\frac{CN}{\ln^2(N)}$$ for some constant $C$. Does anyone know whether any work has been done on finding an explicit upper bound for the constant $C$?
Essentially the same question was asked here: mathoverflow.net/questions/34719/… – Mark Lewko Mar 15 '11 at 20:30
add comment
marked as duplicate by Gerry Myerson, Andres Caicedo, Ramiro de la Vega, Daniel Moskovich, Andrey Rekalo Jul 9 '13 at 5:33
1 Answer
It is conjectured that the number of twin primes less than $N$ is $(\mathfrak{S}+o(1))N/(\log N)^2$, where $$\mathfrak{S}=2\prod_{p>2}(1-(p-1)^{-2})$$ is the so-called twin-prime constant.
Using the large sieve it is easy to show that the number of twin primes less than $N$ is at most $(8\mathfrak{S}+o(1))N/(\log N)^2$. According to page 76 of Tenenbaum's Introduction to
analytic and probabilistic number theory, the best result in this direction is by Wu (1990) which replaces 8 by 3.418.
EDIT: According to MathSciNet, Wu (2004) improved 3.418 to 3.3996.
EDIT: The constant 8 also follows from the Selberg sieve, see page 65 in Greaves' Sieves in number theory.
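For readers who want to see the asymptotic in action, a short sieve counts twin-prime pairs below $N$ and compares the count with the conjectured main term $\mathfrak{S}N/(\log N)^2$. This sketch is not part of the answer; the truncated constant below is the usual numerical value of $\mathfrak{S}$.

```python
import math

def twin_pairs_below(n):
    """Count pairs (p, p + 2) with both entries prime and below n."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(n - 1) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n, i)))
    return sum(1 for p in range(2, n - 2) if sieve[p] and sieve[p + 2])

TWIN_PRIME_CONSTANT = 1.3203236  # the product over odd primes, truncated
n = 10**6
print(twin_pairs_below(n), TWIN_PRIME_CONSTANT * n / math.log(n) ** 2)
```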
And of course the result (for some $C$) is due to V. Brun. – Denis Chaperon de Lauzières Mar 15 '11 at 15:47
I think the Brun sieve misses a power of $\log\log N$. – GH from MO Mar 15 '11 at 15:54
Only the first version of Brun's sieve has this $\log \log N$; he improved it later to get the right order of magnitude. – Denis Chaperon de Lauzières Mar 15 '11 at 17:35
Great, so Brun was the first to get the right order of magnitude. Do you know a reference? – GH from MO Mar 15 '11 at 19:13
gallica.bnf.fr/ark:/12148/bpt6k3121p/… – Denis Chaperon de Lauzières Mar 15 '11 at 20:26
Natural transformations as categorical homotopies
Every text book I've ever read about Category Theory gives the definition of natural transformation as a collection of morphisms which make the well known diagrams commute. There is another possible
definition of natural transformation, which appears to be a categorification of homotopy:
given two functors $\mathcal F,\mathcal G \colon \mathcal C \to \mathcal D$ a natural transformation is a functor $\varphi \colon \mathcal C \times 2 \to \mathcal D$, where $2$ is the arrow
category $0 \to 1$, such that $\varphi(-,0)=\mathcal F$ and $\varphi(-,1)=\mathcal G$.
My question is:
why doesn't anybody use this definition of natural transformation, which seems more "natural" (at least to me)?
(Edit:) It seems that many people do use this definition of natural transformation. This raises the following question:
Is there any introductory textbook (or lecture) on category theory that introduces natural transformations in this "homotopical" way rather than the classical one?
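To make the $\varphi \colon \mathcal C \times 2 \to \mathcal D$ definition concrete, here is a hedged toy model (the finite category and all names are invented for illustration): take $\mathcal C$ to be the one-arrow category $a \to b$ and model $\mathcal D$ by finite sets and functions. The arrow $(f,u) \colon (a,0)\to(b,1)$ of $\mathcal C \times 2$ factors in two ways, and functoriality of $\varphi$ forces the two images to agree, which is exactly the naturality square.

```python
# Hedged toy model (all names invented): C is the one-arrow category
# a --f--> b, D is modelled by finite sets and functions, and 2 is the
# arrow category 0 --u--> 1.  A functor phi : C x 2 -> D restricts to
# F = phi(-,0) and G = phi(-,1); its value on (id_c, u) is the component
# tau_c, and functoriality on the two factorizations of (f, u) is the
# naturality square G(f) . tau_a == tau_b . F(f).

F = {"a": [0, 1, 2], "b": [0, 1], "f": lambda x: min(x, 1)}  # phi(-,0)
G = {"a": [0, 1], "b": [0, 1], "f": lambda x: x}             # phi(-,1)
tau = {"a": lambda x: min(x, 1), "b": lambda x: x}           # phi(id_-, u)

# (f,u) = (f,id_1) . (id_a,u) = (id_b,u) . (f,id_0) in C x 2, so:
left = [G["f"](tau["a"](x)) for x in F["a"]]   # phi(f,id_1) . phi(id_a,u)
right = [tau["b"](F["f"](x)) for x in F["a"]]  # phi(id_b,u) . phi(f,id_0)
assert left == right  # both equal phi(f,u): the naturality square commutes
```

Either side is the value of $\varphi$ on the single diagonal arrow $(f,u)$; this is why a functor out of $\mathcal C \times 2$ carries exactly the data of a natural transformation.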
(Edit2:) Some days ago I read a post on the nLab about $k$-transfors. In particular I was interested by the discussion in that post, because it seems to show that the homotopical definition of natural transformation should be the right one (or at least a slight modification of it). On the other hand, this definition has always seemed to me the most natural one, because historically category theory developed in the context of algebraic topology, so now I have a new question:
Does anyone know the logical process that led Mac Lane and Eilenberg to give their (classical) definition of natural transformation?
Here I'm interested in the topological/algebraic motivation that moved those great mathematicians to this definition rather than the other one.
2 "anybody" does use this definition (you can find it somewhere on my [pretty much defunct] blog, for example). – Todd Trimble♦ May 9 '11 at 10:38
Wow, this is a great way to think about natural transformations! I wish I'd seen this months ago – David White Jul 1 '11 at 15:02
5 I am happy to give you my 300th vote for this question. It is not the way natural transformations are introduced, but it is actually the way people (should) think about them. IMHO this is the starting observation to make for introducing simplicial categories as a model for $\infty$-categories. – DamienC Sep 8 '11 at 15:14
@DamienC I really appreciate your comment. I don't know very much about model categories so I'm wondering: "would you like to elaborate your comment in an answer in which you discuss more completely this aspect of this definition of natural transformation?" (I think this argument is interesting and deserves to be in an answer) – Giorgio Mossa Sep 8 '11 at 16:08
1 This idea was expressed for natural equivalences in the first, differently titled, 1968 edition of the book now available as "Topology and Groupoids": pages.bangor.ac.uk/~mas010/topgpds.html –
Ronnie Brown Nov 13 '13 at 10:27
8 Answers
The homotopy analogue definition of natural transformations has been known and used regularly since at least the late 1960's, by which time it was understood that the classifying space functor from (small) categories to spaces converts natural transformations to homotopies, because it takes the category $I=2$ to the unit interval and preserves products. Composition of natural transformations $H\colon A\times I\to B$ and $J\colon B\times I\to C$ is just the obvious composite starting with $id\times \Delta\colon A\times I \to A\times I\times I$, just as in topology. (I've been teaching that for at least several decades, and I'm sure I'm not the only one.)
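The composite described here can be sketched numerically; this is a hedged analogy with topological homotopies (all the sample maps are made up), not the categorical statement itself:

```python
# Hedged numerical analogue: homotopies H : X x I -> Y and J : Y x I -> Z
# compose via id x Delta, i.e. K(x, t) = J(H(x, t), t), as in the answer.
H = lambda x, t: x + t           # homotopy from (x -> x) to (x -> x + 1)
J = lambda y, t: y * (1 + t)     # homotopy from (y -> y) to (y -> 2 * y)
K = lambda x, t: J(H(x, t), t)   # the composite homotopy X x I -> Z

assert K(3, 0) == 3   # K(-,0) is the composite of the two starting maps
assert K(3, 1) == 8   # K(-,1) is the composite of the two ending maps
```

The single lambda for `K` is precisely the composite $J \circ (H \times id) \circ (id \times \Delta)$ unwound pointwise.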
Once you learn a subject, you can think about things in whatever way is most pleasing or helpful for solving a problem. Fixing a fact as a definition is pedagogy -- something to help those
learning the subject.
I can't really speak for how others learn, but I'm not sure recognizing natural transformations as being described by functors $\mathcal{C} \times 2 \to \mathcal{D}$ would be very useful
before one starts seriously thinking in terms of the 2-category of categories.
I confess I would almost turn your question on its head -- I far more frequently want to think of a homotopy between functions $f,g:X \to Y$ as being a function from $X$ to paths in $Y$, or sometimes as a function from $[0,1]$ to $Y^X$, and feel the usual definition as a function $X \times [0,1] \to Y$ more as being a much simpler way to state the technical details. I saw the analogy with homotopy early in learning about categories, and I don't think seeing natural transformations defined as functors $\mathcal{C} \times 2 \to \mathcal{D}$ would have helped me make the analogy. (But, for the record, I am very much not an algebraic topologist)
I agree with you when you say that it's more natural to think of a homotopy as a path of functions, and in this sense one could think of a natural transformation as a functor of the kind $2 \to \text{Fun}(\mathcal C,\mathcal D)$; by the way, this requires the category of functors and so natural transformations as well. A similar problem seems to arise in topology, where one must define a topology on the space of functions in order to define homotopies as paths of functions. Why do you think this definition of natural transformation is useful when one starts to think in terms of the 2-category of categories? – Giorgio Mossa May 10 '11 at 20:02
I assert that when one is thinking in terms of the 2-category of categories, definitions are no longer important. (Amongst) what is important is the knowledge of various equivalent ways to capture a notion. I think describing natural transformations as functors $\mathcal{C} \times 2 \to \mathcal{D}$ doesn't become useful until thinking in terms of the 2-category of categories, mainly because the only other way I see to use the description is to unfold it into the ordinary description and work in terms of that. – Hurkyl May 11 '11 at 0:23
This "geometric" definition is well-known to category-theorists. See for example this youtube video by the Catsters, which introduces natural transformations. It should be also well-known
to algebraic topologists working with model categories. But I have to admit that there are few introductions to category theory which emphasize this definition of a natural transformation.
Remark that this fits into a more general framework: For every category $C$, there is an isomorphism $[I,C] \cong Arr(C)$, where $Arr(C)$ is the arrow category of $C$. In particular, $Arr([C,D]) \cong [I,[C,D]] \cong [C \times I,D]$.
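The isomorphism $[C \times I, D] \cong [I, [C,D]]$ is an exponential-law (currying) phenomenon. Its set-level shadow can be sketched with plain Python functions; this is a hedged analogy with invented names, not the categorical proof:

```python
# Set-level shadow of [C x I, D] ~ [C, [I, D]]: currying and uncurrying
# are mutually inverse bijections between the two function spaces.
def curry(phi):
    # turn a two-argument map C x I -> D into a map C -> D^I
    return lambda c: (lambda i: phi(c, i))

def uncurry(psi):
    # the inverse direction D^I-valued map back to a two-argument map
    return lambda c, i: psi(c)(i)

phi = lambda c, i: (c, i)            # a sample "functor" C x I -> D
assert curry(phi)("x")(1) == phi("x", 1)
assert uncurry(curry(phi))("x", 0) == phi("x", 0)
```

In $Cat$ the same bijection, applied with $I$ the arrow category, identifies functors $C \times I \to D$ with objects of $Arr([C,D])$, i.e. with natural transformations.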
On the other hand, the usual definition is easier to work with. For example, how do you define the composition of two natural transformations, say given by $\alpha : C \times 2 \to D$, $\beta : C \times 2 \to D$ with $\alpha(-,1) = \beta(-,0)$? Of course you can just write it down explicitly, but then you end up working with the usual definition. But instead, you could also use that $\alpha,\beta$ correspond to a functor on the amalgam $(C \times 2) \cup_C (C \times 2)$ of the inclusions $(-,1)$ and $(-,0)$, and compose with the natural functor $C \times 2 \to (C \times 2) \cup_C (C \times 2)$ which "leaves out the middle point".
I like this definition of composition of natural transformations, but it seems to use the notion of limit, which needs the definition of natural transformation, am I right? I wrote down a definition of composition for this kind of natural transformation which is more complex than the classical one, but it has the merit of avoiding the need to demonstrate that the composite is a natural transformation, because this fact is implicit in the definition. – Giorgio Mossa May 10 '11 at 20:17
1 I guess the fastest answer is that the composition corresponds to (pulling back along) the canonical map $\vec I \to \vec I \cup_{\text{middle}} \vec I$, and leave the $C$s to the
reader? – Theo Johnson-Freyd Sep 14 '11 at 17:28
Has anyone ever introduced natural transformations in this "homotopical" way rather than the classical one in any reference like a textbook or some lecture notes?
Yes, Quillen introduces it in the paper
Higher algebraic K-theory. I. In: Algebraic K-theory, I: Higher K-theories (Proc. Conf., Battelle Memorial Inst., Seattle, Wash., 1972), pp. 85–147. Lecture Notes in Math., Vol. 341, Springer, Berlin, 1973.
In connection with his "Theorem A" and "Theorem B."
Disclaimer: this is not an answer to the question as I have no explanation for why people don't introduce natural transformations in the way explained in the question, but I am posting this
in order to expand a comment I made. The comment was
this is the starting observation to make for introducing simplicial categories as a model for $\infty$--categories
Moreover, I am a specialist neither in category theory nor in homotopy theory (and a fortiori not in higher categories).
The $2$-category of categories
The starting point is that the category $Cat$ of categories is actually a $2$-category. For any two objects (i.e. categories) $\mathcal C$ and $\mathcal D$ we have that $Hom_{Cat}(\mathcal C,\mathcal D)$ is itself a category.
This is very transparent when using the definition $$ Hom_{Cat}(\mathcal C,\mathcal D):=Hom_{t_{\leq0}(Cat)}(\mathcal C\times\Delta^1,\mathcal D)\,, $$ where $\Delta^1=\Box^1=\mathbb{G}^1$
is the arrow category $0\to 1$ and $t_{\leq0}(Cat)$ is the underlying $1$-category of $Cat$.
Remark: In general one can see a $2$-category $\mathcal C$ as a simplicial category by replacing the $Hom$-categories by their nerves.
In the case of $Cat$, we see that the $Hom$-categories naturally appear as $1$-truncations of simplicial sets (one can replace here "simplicial" by "cubical" or "globular").
The $3$-category of $2$-categories
Let us now go to natural transformations of (strict) $2$-functors between (strict) $2$-categories. Given two such $2$-functors $F,G:\mathcal C\to\mathcal D$ one can see that a natural transformation $F\Rightarrow G$ is the same as a $2$-functor $$ \phi:\mathcal C\times \mathbb{G}^2\to\mathcal D $$ such that $\phi(-,0)=F$ and $\phi(-,1)=G$, where $\mathbb{G}^2$ is the $2$-category with two objects $0$ and $1$ and such that $Hom_{\mathbb{G}^2}(0,1)$ is the arrow category $\mathbb{G}^1=(0\to 1)$.
Therefore the "set" of $2$-functors is naturally a $2$-category.
Remark: as before we can then see any $3$-category as a simplicial/cubical/globular category by replacing the $Hom$-$2$-categories by their (simplicial/cubical/globular) nerves.
In the case of $2-Cat$, we see that the $Hom$-$2$-categories naturally appear as $2$-truncations of globular sets.
Simplices, Cubes, and globes
The globe category $\mathbb{G}$, the cubical category $\Box$ and the simplicial category $\Delta$ are known to be suitable geometric shapes to model higher structures. Simplicial sets are good models for (weak) $\infty$-groupoids. It was proved (by Jardine ... with some improvement by Cisinski if I remember well) that cubical sets also provide a model for (weak) $\infty$-groupoids. I don't know any reference but I guess that the same holds for globular sets (which are quite a bit more used by people working with automata).
The $(n+1)$-category of $n$-categories
Let me consider the category $n-Cat$ of (strict) $n$-categories. A natural transformation between (strict) $n$-functors $F,G:\mathcal C\to\mathcal D$ can be written as an $n$-functor $$ \phi:\mathcal C\times \mathbb{G}^n\to\mathcal D $$ such that $\phi(-,0)=F$ and $\phi(-,1)=G$, where $\mathbb{G}^n$ is the $n$-category with two objects $0$ and $1$ and such that $Hom_{\mathbb{G}^n}(0,1)$ is the $(n-1)$-category $\mathbb{G}^{n-1}$.
Therefore the "set" of $n$-functors is naturally a (strict) $n$-category, and thus $n-Cat$ is a (strict) $(n+1)$-category. It also naturally appears as an $n$-truncation of a globular set.
The advantage of working with simplicial/cubical/globular categories
Working directly with simplicial/cubical/globular categories has the following advantages:
1. it allows one to work directly with higher categories without going through an inductive process.
2. it allows one to deal with weak $(\infty,1)$-categories, as simplicial/cubical/globular sets are models for weak $\infty$-groupoids (here $(\infty,1)$ stands for "$\infty$-categories such that $n$-arrows for $n\geq2$ are weakly invertible").
@DamienC thanks a lot, this stuff is really cool, though there are some things that I don't get straight, for instance: above you say that $\hom_{t \leq 0 (Cat)}(\mathcal C,\mathcal D)$
should be a category but it seems to me it should be a set, at the same time you say that a functor $\mathcal C \times G^n \to \mathcal D$ should be a $n$-functor, but in this case if we
consider $n=2$, then a $2$-functor should be a natural transformation in the sense of the definition in the question, where am I wrong? – Giorgio Mossa Sep 11 '11 at 10:06
@Giorgio Mossa: $Hom_{t_{\leq0}(Cat)}(\mathcal C,\mathcal D)$ is a set but $Hom_{t_{\leq0}(Cat)}(\mathcal C\times(0\to 1),\mathcal D)$ is a category. About your second question you are right, I should have written that an $n$-functor $\mathcal C\times G^n\to\mathcal D$ is a natural transformation between two $n$-functors. I'm going to fix it. – DamienC Sep 11 '11 at 11:30
@DamienC But $Hom_{t_{\leq0}(Cat)}(\mathcal C\times \Delta^1,\mathcal D)$ should be the set of the functors between the categories $\mathcal C \times \Delta^1$ and $\mathcal D$; this should be (in our description) the set of natural transformations, but this description seems to lack the objects of the category, unless you're considering arrow-only categories. – Giorgio Mossa Sep 11 '11 at 11:59
@Giorgio Mossa: to be precise I should have said that $Hom_{t_{\leq0}(Cat)}(\mathcal C,\mathcal D)$ is the set of morphisms of a category with objects being functors $\mathcal C\to\mathcal D$. – DamienC Sep 11 '11 at 19:55
@DamienC I suppose you meant $Hom_{t_{\leq0}(Cat)}(\mathcal C \times \Delta,\mathcal D)$ in your last comment, but I think I get it. Thanks a lot. Just to be completely correct I think that there are some typos in your answer: you say that the natural transformation $\phi$ should be such that $\phi(-,0)=F$ and $\phi(-,0)=G$; you meant $\phi(-,1)=G$, right? – Giorgio Mossa Sep 11 '11 at 21:04
What is "more natural" is strictly determined by the mathematical background one has (or more seriously --- by one's understanding of the world) when one comes to learn a new subject. Thus, a good definition should be more about "simplicity" (with respect to its theory) than about "analogy" to other concepts (in other branches of math). Analogies are then established by
I am not a mathematician, so I have a sweet opportunity to be ignorant of some fundamental branches of math --- for example --- topology. I think of functors $\mathbb{C} \rightarrow \mathbb{D}$ as structures in $\mathbb{D}$ of the shape of $\mathbb{C}$. Then a transformation is something that morphs one structure into another (i.e. it is a collection of morphisms indexed by the shape of a structure), whereas a natural transformation is something that morphs in a coherent way.
I really like the story of "Blind men and an elephant" that I first read in Peter Johnstone's book "Sketches of an Elephant". He compares a topos to the elephant, and we are the blind men. Surely, we are blind men, but I do think that most concepts found in category theory (with perhaps category theory itself) are like elephants.
IMHO you are more "mathematician" than lots of my colleagues. Ineff is a friend of mine and he asked me today that very question; I thought about johnstone's sketches too. :) –
tetrapharmakon May 9 '11 at 22:24
What do you mean with "I am not a mathematician"? – Hans Stricker Jul 30 '13 at 13:38
Following the previous indication of Professor Brown I want to add another possible way to see natural transformations, which is a generalization of the previous definition.
Given categories $\mathcal C$ and $\mathcal D$ and two functors between them $\mathcal F,\mathcal G \colon \mathcal C \to \mathcal D$, a natural transformation $\tau$ can be defined as a functor $\tau \colon \mathcal C \to (\mathcal F \downarrow \mathcal G)$ whose arrow components are the diagonal functions, sending each arrow $f \in \mathcal C(c,c')$, with $c,c' \in \mathcal C$, to $(f,f) \in (\mathcal F \downarrow \mathcal G)(\tau(c),\tau(c'))$.
Edit: I think the definition of natural transformation proposed by Professor Brown can probably be even more natural than the one proposed in the question. I think that some more details are needed.
The key ingredient for that definition is the concept of the arrow category of a given category $\mathbf D$: such a category has the morphisms of $\mathbf D$ as objects and commutative squares as morphisms.
This category comes equipped with two functors $\mathbf {source}, \mathbf{target} \colon \text{Arr}(\mathbf D) \to \mathbf D$ such that for each object (i.e. a morphism of $\mathbf D$) $f \colon d \to d'$ we have $$\mathbf{source}(f)=d$$ $$\mathbf{target}(f)=d'$$ while for each $f \in \mathbf D(x,x')$, $g \in \mathbf D(y,y')$ and morphism $\alpha \in \text{Arr}(\mathbf D)(f,g)$ (i.e. a quadruple $\langle f,g, \alpha_0,\alpha_1\rangle$ where $\alpha_0 \in \mathbf D(x,y)$ and $\alpha_1 \in \mathbf D(x',y')$ are such that $\alpha_1 \circ f = g \circ \alpha_0$) we have $$\mathbf{source}(\alpha)=\alpha_0$$ $$\mathbf{target}(\alpha)=\alpha_1$$ It's easy to prove that these data give two functors (which give $\text{Arr}(\mathbf D)$ the structure of a graph internal to $\mathbf{Cat}$).
Now let's take a look at this new definition of natural transformation:
A natural transformation $\tau$ between two functors $F,G \colon \mathbf C \to \mathbf D$ is a functor $\tau \colon \mathbf C \to \text{Arr}(\mathbf D)$ such that $\mathbf{source} \circ \tau = F$ and $\mathbf{target}\circ \tau = G$.
A functor of this kind associates to every object $c \in \mathbf C$ a morphism $\tau_c \colon F(c) \to G(c)$ in $\mathbf D$, while to every $f \in \mathbf C(c,c')$ it gives the commutative square expressing the equality $$\tau_{c'} \circ F(f)=\tau_{c'} \circ \mathbf {source}(\tau_f)=\mathbf {target}(\tau_f) \circ \tau_c = G(f) \circ \tau_c$$ certifying the naturality (in the ordinary sense) of the $\tau_c$. This definition is reminiscent of the notion of a homotopy between maps $f,g \colon X \to Y$ as a map of the kind $X \to Y^I$ (i.e. a homotopy as a (continuous) family of paths in $Y$).
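To make the $\text{Arr}(\mathbf D)$ picture tangible, here is a hedged Python sketch (the finite category and all names are invented for illustration): objects of the arrow category are modelled as triples (domain, codomain, map), the two projections recover $F$ and $G$ on objects, and the commuting-square condition is the naturality just described.

```python
# Hedged toy model: C is the one-arrow category a --f--> b; D is modelled
# by finite sets and functions.  tau : C -> Arr(D) sends each object to a
# morphism (dom, cod, map) of D, with source.tau = F and target.tau = G.
F = {"a": [0, 1, 2], "b": [0, 1], "f": lambda x: min(x, 1)}
G = {"a": [0, 1], "b": [0, 1], "f": lambda x: x}

tau = {
    "a": (F["a"], G["a"], lambda x: min(x, 1)),  # tau_a : F(a) -> G(a)
    "b": (F["b"], G["b"], lambda x: x),          # tau_b : F(b) -> G(b)
}

source = {c: dom for c, (dom, _cod, _m) in tau.items()}
target = {c: cod for c, (_dom, cod, _m) in tau.items()}
assert source == {c: F[c] for c in ("a", "b")}  # source . tau = F on objects
assert target == {c: G[c] for c in ("a", "b")}  # target . tau = G on objects

# tau(f) must be a commuting square: G(f) . tau_a == tau_b . F(f)
assert all(G["f"](tau["a"][2](x)) == tau["b"][2](F["f"](x)) for x in F["a"])
```

The last assertion is precisely the displayed chain of equalities, checked pointwise on the toy data.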
That's not all: indeed we can iterate the construction of the arrow category, obtaining what I think is called a cubical set $$\mathbf D \leftarrow \text{Arr}(\mathbf D) \leftarrow \text{Arr}^2(\mathbf D)\leftarrow \dots $$ where each arrow should be thought of as the pair of functors $\mathbf{source}_{n+1},\mathbf{target}_{n+1} \colon \text{Arr}^{n+1}(\mathbf D) \to \text{Arr}^n (\mathbf D)$.
In this way we can associate to each category a cubical set. There's also a natural way to associate to every functor a (degree 0) mapping of cubical sets.
If we consider natural transformations as maps from a category to an arrow category, then this correspondence associates to each natural transformation a degree 1 map between such cubical sets (by degree one I mean that the induced map sends every object of $\text{Arr}^n(\mathbf C)$ to an object of $\text{Arr}^{n+1}(\mathbf D)$). I find this construction really beautiful because it shows an analogy between categories-functors-natural transformations and complexes-maps of complexes-chain homotopies.
Charles Ehresmann had a natty way of developing natural transformations. For a category $C$ let $\square C$ be the double category of commuting squares in $C$. Then for a small category $B$ we can form $\text{Cat}(B,\square_1 C)$, the functors from $B$ to the direction 1 part of $\square C$. This gets a category structure from the category structure in direction 2 of $\square C$. So we get a category $\text{CAT}(B,C)$ of functors and natural transformations. This view makes it easier to verify the law
$$\text{Cat}(A \times B,C) \cong \text{Cat}(A, \text{CAT}(B,C)).$$
And this method goes over to topological categories as well:
R. Brown and P. Nickolas, "Exponential laws for topological categories, groupoids and groups and mapping spaces of colimits", Cah. Top. Géom. Diff. 20 (1979) 179–198.
See also Section 6.5 of my book Topology and Groupoids for using the homotopy terminology for natural equivalences, as it was in the first 1968 edition entitled "Elements of Modern
Topology" (McGraw Hill).