Number of 6's between 1 to 18000
Is there a smart way to solve this..?
I can see a brute force type of way.... by looking at the number of 6's from 1-100, then 101 to 200 etc... and special cases for 601-700 etc... but it seems very long, tedious and prone to error?
Thanks for any ideas.
I would suspect you would work in "chunks" of numbers, and see if you can find any patterns which might simplify things at all.
For instance, between 1 and 9 (which is the same number as between 0 and 9), there is one 6. Between 10 and 19, there is another 6, in "16". Between 20 and 29, there is another 6, in "26". And so
forth, except for the sixties, where you get another ten 6's.
Then start working with the hundreds. You know how many 6's there are between 0 and 99. How many then will there be between 100 and 199? How many between 200 and 299? How many extra 6's do you get
between 600 and 699? And so forth.
Then start working with the thousands. You should be able to use patterns, and values from previous "chunks" of numbers, to arrive at a final value.
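For reference, a brute-force count is only a few lines of code; here is a minimal C sketch that simply tallies the digit 6 in every number from 1 to 18000, which you can use to check whatever total the pattern argument gives:

#include <stdio.h>

int main(void)
{
    int count = 0;
    for (int n = 1; n <= 18000; n++)
        for (int m = n; m > 0; m /= 10)   /* look at each decimal digit of n */
            if (m % 10 == 6)
                count++;
    printf("Number of 6's from 1 to 18000: %d\n", count);
    return 0;
}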
Have fun!
Re: Number of 6's between 1 to 18000
That was my guess, but I was hoping for a Motrin-free method.... Patterns make sense and it's a good exercise.
Re: Number of 6's between 1 to 18000
One of my classmates got the same answer as presented by Dr. Andrew.... using the pattern method: how many 6's from 1-100, how many from 1000-1999 etc... and making use of the special case where the
first number is 2.
When I tried Dr. Andrew's method for the text question (how many zeros from 1 to 1,000,000?), I obtained the wrong answer. The text says 488,895.
Here is what I said (probably some of my brain fog here):
i) there are 1M numbers from 1 to 1,000,000 (rows of numbers in this case, if we write one on each line).
ii) For the numbers from 1 to 999,999 there are 6 columns of numerals.
iii) Since each numeral from 0-9 has an equal chance of appearing, then the number of 0's would be: 1/10 x 6,000,000 numerals (1m numbers of 6 numerals each)
iv) This would give us: 600,000 zeros: BUT there are 6 zeros not added in from the last number 1,000,000, AND the leading zeros would not be counted for example: 000,001 and 000,030, etc... so we
would need to subtract leading zeros.
space intended
v) Now to subtract leading zeros we should? have:
- for numbers 0-9 we have 5 leading zeros so (- 5 x 9 ) = -45
- for numbers 10 -99 we have 4 leading zeros so ( -4 x 90) = - 360
- for numbers 100 - 999 we have 3 leading zeros so ( -3 x 900) = - 2700
- for numbers 1000-9999 we have 2 leading zeros so (-2 x 9000) = -18,000
- for numbers 10,000 - 99,999 we have 1 leading zero so (-1 x 90,000) = -90,000
Total zeros to subtract = 111,105, plus we need to add the 6 zeros from 1,000,000.
The total zeros to subtract now becomes 111,111.
Our original total zeros = 600,000 - 111,111, to give us 488,889, so while close to the text answer of 488,895 I am short 6 zeros from somewhere?
Thanks for any help....I feel good that I am close....but not quite the cigar!
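For what it's worth, the same kind of brute-force check settles the zero count; this minimal C sketch tallies the digit 0 in every number from 1 to 1,000,000 and agrees with the text's 488,895:

#include <stdio.h>

int main(void)
{
    long count = 0;
    for (long n = 1; n <= 1000000; n++)
        for (long m = n; m > 0; m /= 10)   /* look at each decimal digit of n */
            if (m % 10 == 0)
                count++;
    printf("Number of 0's from 1 to 1,000,000: %ld\n", count);
    return 0;
}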
|
{"url":"http://www.purplemath.com/learning/viewtopic.php?p=4306","timestamp":"2014-04-18T13:33:42Z","content_type":null,"content_length":"25642","record_id":"<urn:uuid:dc0dcffa-2e30-46ab-92e8-5dd65da0c7f5>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Catholic Encyclopedia (1913)/Philippe de la Hire
Catholic Encyclopedia (1913), Volume 8
Mathematician, astronomer, physicist, naturalist, and painter, b. in Paris, 18 March, 1640; d. in Paris, 21 April, 1718; was, as Fontenelle said, an academy in himself. His father, Laurent de La
Hire (1606-1656), was a distinguished artist. Philippe first studied painting in Rome, where he had gone for his health in 1660, but on his return to Paris, soon devoted himself to the classics and
to science. He showed particular aptitude for mathematics, in which subject he was successively the pupil and associate in original investigation of Desargues. In 1678, he was made a member of the
Academy of Sciences, section of astronomy. Beginning in 1679, in connexion with the construction of a map for the Government, he made extended observations in Brittany, Guienne, Calais, Dunkirk, and
Provence. In 1683, he continued the principal meridian north from Paris, Cassini at the same time continuing it south, and, in 1684, he investigated the flow and fall of the River Eure in connexion
with the water-supply of Versailles. His attainments won for him professorships both at the Collège de France, in 1682, and at the Academy of Architecture. Two of his sons rose to distinction,
Gabriel-Philippe (1677-1719), in mathematics, and Jean-Nicolas (1685-1727) in botany. Industry, unselfishness, and piety were noteworthy traits of his character.
The chief contributions of La Hire were in the department of pure geometry. Although familiar with the analytic method of Descartes, which he followed in treatises published in 1679, his most
important works were developed in the method of the ancients. He continued the work of Desargues and of Pascal and introduced into geometry, chiefly by a new method of generating conics in a plane,
several conceptions related to those of recent times. In his exhaustive work on conics, published in 1685, he not only simplified and improved the demonstrations of many well-known theorems, but he
also established several new ones, particularly some concerning the theory of poles and polars, a subject not fully developed until the nineteenth century. In this work appears for the first time the
term "harmonic". Of the writings of La Hire which were, for the most part, published in the "Mémoirs" of the Academy of Sciences, and which treat of mathematics, astronomy, meteorology, and physics,
the following are the most important: "Nouvelle Méthode en Géométrie pour les sections des superficies coniques et cylindriques" (1673); "Nouveaux Eléments des Sections Coniques: Les Lieux
Géométriques: Les Constructions ou Effections des Equations" (in one vol., Paris, 1679); "Traité de Gnomonique" (1682); "Sectiones conicæ in novem libros distributæ" (Paris, 1685); "Tables du soleil
et de la lune" (1687); "Ecole des arpenteurs" (1689); "Mémoire sur les conchoïdes" (1708); "Traité de mécanique" (Paris, 1729).
CHASLES, Aperçu historique sur l'origine et le développement des Méthodes en Géométrie (3rd ed., Paris, 1889); LEHMANN, De La Hire und seines Sectiones Conicæ, in supplement to Jahresbericht des
königlichen Gymnasium zu Leipzig (Leipzig, 1888, 1890).
Paul H. Linehan.
|
{"url":"http://en.wikisource.org/wiki/Catholic_Encyclopedia_(1913)/Philippe_de_la_Hire","timestamp":"2014-04-20T03:56:43Z","content_type":null,"content_length":"26212","record_id":"<urn:uuid:7799cac8-4b4c-4ed5-9a74-beda8c4adc71>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: [svg2] radialGradient @fr constraints
From: Jasper van de Gronde <th.v.d.gronde@hccnet.nl>
Date: Fri, 31 Aug 2012 09:20:45 +0200
Message-ID: <504065CD.30902@hccnet.nl>
To: www-svg@w3.org
On 30-08-12 12:44, Dr. Olaf Hoffmann wrote:
> Jasper van de Gronde:
>> If you take the absolute value before interpolating, then you get the
>> weird situation that a cone (for example) first gets "thinner" and then
>> thicker again as you change the radius of one end from positive to negative.
> For animation?
I was more thinking about someone tinkering with the source of an SVG or
modifying a radius in a GUI.
>> It also makes perfect sense if you represent a circle by x^2+y^2=r^2, or
>> using a parametric representation.
> Is there really a difference between positive and negative r in a real
> number space?
> In the example you give, the main difference seems to be, that the
> center of the circle is changed, but the radius is always not negative.
I think I may not have been clear enough about what the examples
actually show. Both the center and the radius varied. And you're right
that there isn't (much) of a difference between a positive and a
negative radius, but there is a difference between interpolating from
radius 5 to 5 and interpolating from radius 5 to -5, as demonstrated by
my first and last examples:
In the first case you get a tube (constant radius), in the last one you
get a double cone (the radius is zero at t=0.5).
>> The images were drawn by solving
>> (x-cx(t))^2+(y-cy(t))^2-r(t)^2
> = 0?
Yes, the =0 was missing. (I think Canvas uses the same equation and
mostly the same notation btw.)
> Still you can interpolate with values="5;0;5" for r and
> values="-5;5" for cx to get the same effect.
I think you might be referring to animation here (and I'm not), but in
principle you're right that you can get the same effect by splitting up
the gradient in two. As such I'm not necessarily advocating allowing
negative radii (although it should have negligible impact on
implementation complexity), but I am trying to show why it makes no
sense to take the absolute values of the radii at the endpoints and then
interpolate. Basically negative radii do have an interpretation and it
results in something other than what you get when you first take the
absolute values. So I would suggest either allowing negative radii (and
using their natural interpretation), or not allowing them (and clamping
to zero for example).
> And if it is allowed to set fx,fy outside the circle, fr positive or negative,
> one still needs to define the effect for spreadMethod as already mentioned.
> Because this is already problematic, if the point is on the
> circle, do you have any idea about a meaningful behaviour
> of spreadMethod, if fx,fy is outside and not corrected?
> spreadMethod seems to be only simple, if there is not selfintersection
> of the gradient.
Actually, spreadMethod is pretty straightforward, regardless of where
the fx,fy is in relation to the (other) circle. If you solve a quadratic
equation like I did (and Cairo does, and probably many others), then you
simply don't discard t values outside [0,1] and do whatever you want
with them (repeat, reflect, pad, whatever). If you really draw circles,
then you just draw them for all t that have some effect on the image. Of
course, if the focus is exactly on the circle and you're trying to solve
the quadratic equation, then you get some numerical problems, but that's
regardless of whether or not you use spreadMethod or allow the focus to
lie outside the circle (outside the circle is no problem, only /on/ the
circle is a bit of a problem).
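For concreteness, here is a rough C sketch of that quadratic solve, with the centre and radius interpolated linearly in t; the parameter names, the choice of root, and the repeat/reflect helpers are illustrative assumptions on my part rather than anything prescribed by the spec or by a particular implementation:

#include <math.h>
#include <stdio.h>

/* Focal circle (fx, fy, fr) at t = 0, end circle (cx, cy, r) at t = 1.
 * Returns 1 and stores a parameter value in *t if the point (x, y) lies
 * on some circle of the interpolated family, 0 otherwise. */
static int
gradient_parameter(double fx, double fy, double fr,
                   double cx, double cy, double r,
                   double x, double y, double *t)
{
    double dcx = cx - fx, dcy = cy - fy, dr = r - fr;
    double px = x - fx, py = y - fy;

    /* (px - t*dcx)^2 + (py - t*dcy)^2 - (fr + t*dr)^2 = 0, i.e. A t^2 + B t + C = 0 */
    double A = dcx * dcx + dcy * dcy - dr * dr;
    double B = -2.0 * (px * dcx + py * dcy + fr * dr);
    double C = px * px + py * py - fr * fr;

    if (fabs(A) < 1e-12) {            /* degenerate case: the equation is linear */
        if (fabs(B) < 1e-12)
            return 0;
        *t = -C / B;
        return 1;
    }

    double disc = B * B - 4.0 * A * C;
    if (disc < 0.0)
        return 0;                     /* the point lies on no circle of the family */

    /* Take one root here for simplicity; a real renderer picks the root
     * according to its own rules (e.g. largest t with non-negative radius). */
    *t = (-B + sqrt(disc)) / (2.0 * A);
    return 1;
}

/* spreadMethod just reinterprets t values outside [0,1]: */
static double spread_repeat(double t)  { return t - floor(t); }
static double spread_reflect(double t)
{
    double u = fmod(fabs(t), 2.0);
    return u <= 1.0 ? u : 2.0 - u;
}

int main(void)
{
    double t;
    /* Focal circle (0,0) radius 1, end circle (5,0) radius 3, point (3,0):
     * the solution lands outside [0,1], which is exactly the case the
     * spread helpers are for. */
    if (gradient_parameter(0, 0, 1, 5, 0, 3, 3.0, 0.0, &t))
        printf("t = %g, repeat -> %g, reflect -> %g\n",
               t, spread_repeat(t), spread_reflect(t));
    return 0;
}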
(I'll try posting some more examples over the weekend.)
Received on Friday, 31 August 2012 07:21:16 GMT
|
{"url":"http://lists.w3.org/Archives/Public/www-svg/2012Aug/0154.html","timestamp":"2014-04-20T11:47:57Z","content_type":null,"content_length":"10777","record_id":"<urn:uuid:297c3710-720f-4f38-98f1-3891711e18a7>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
|
User-Defined Operations for Reduce and Scan
MPI_Op_create(MPI_User_function *function, int commute, MPI_Op *op)
MPI_OP_CREATE(FUNCTION, COMMUTE, OP, IERROR)
EXTERNAL FUNCTION
LOGICAL COMMUTE
INTEGER OP, IERROR
MPI_OP_CREATE binds a user-defined global operation to an op handle that can subsequently be used in MPI_REDUCE, MPI_ALLREDUCE, MPI_REDUCE_SCATTER, and MPI_SCAN. The user-defined operation is assumed
to be associative. If commute = true, then the operation should be both commutative and associative. If commute = false, then the order of operations is fixed and is defined to be in ascending,
process rank order, beginning with process zero. The order of evaluation can be changed, taking advantage of the associativity of the operation. If commute = true then the order of evaluation can be
changed, taking advantage of commutativity and associativity.
function is the user-defined function, which must have the following four arguments: invec, inoutvec, len and datatype.
The ANSI-C prototype for the function is the following.
typedef void MPI_User_function( void *invec, void *inoutvec, int *len,
MPI_Datatype *datatype);
The Fortran declaration of the user-defined function appears below.
FUNCTION USER_FUNCTION( INVEC(*), INOUTVEC(*), LEN, TYPE)
<type> INVEC(LEN), INOUTVEC(LEN)
INTEGER LEN, TYPE
The datatype argument is a handle to the data type that was passed into the call to MPI_REDUCE. The user reduce function should be written such that the following holds: Let u[0], ... , u[len-1] be
the len elements in the communication buffer described by the arguments invec, len and datatype when the function is invoked; let v[0], ... , v[len-1] be len elements in the communication buffer
described by the arguments inoutvec, len and datatype when the function is invoked; let w[0], ... , w[len-1] be len elements in the communication buffer described by the arguments inoutvec, len and
datatype when the function returns; then w[i] = u[i] o v[i], for i = 0, ..., len-1, where o is the reduce operation that the function computes.
Informally, we can think of invec and inoutvec as arrays of len elements that function is combining. The result of the reduction over-writes values in inoutvec, hence the name. Each invocation of the
function results in the pointwise evaluation of the reduce operator on len elements: i.e., the function returns in inoutvec[i] the value invec[i] o inoutvec[i], for i = 0, ..., len-1, where o is the
combining operation computed by the function.
General datatypes may be passed to the user function. However, use of datatypes that are not contiguous is likely to lead to inefficiencies.
No MPI communication function may be called inside the user function. MPI_ABORT may be called inside the function in case of an error.
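As a usage illustration (this sketch is mine, not from the book), the following C program registers a commutative user-defined operation that forms the element-wise product of double-precision vectors and uses it in MPI_REDUCE; the built-in MPI_PROD already provides this reduction, so the example only demonstrates the mechanics of MPI_OP_CREATE:

#include <mpi.h>
#include <stdio.h>

/* User-defined function with the required four arguments: it combines
 * invec into inoutvec element by element, for *len entries. */
static void elementwise_product(void *invec, void *inoutvec, int *len,
                                MPI_Datatype *datatype)
{
    double *in = (double *) invec;
    double *inout = (double *) inoutvec;
    (void) datatype;                      /* assumed to be MPI_DOUBLE here */
    for (int i = 0; i < *len; i++)
        inout[i] = in[i] * inout[i];      /* w[i] = u[i] o v[i], with o = multiplication */
}

int main(int argc, char **argv)
{
    int rank;
    MPI_Op prod_op;
    double sendbuf[2], recvbuf[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Op_create(elementwise_product, 1 /* commute = true */, &prod_op);

    sendbuf[0] = rank + 1.0;
    sendbuf[1] = 2.0;
    MPI_Reduce(sendbuf, recvbuf, 2, MPI_DOUBLE, prod_op, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("product reduction: %g %g\n", recvbuf[0], recvbuf[1]);

    MPI_Op_free(&prod_op);
    MPI_Finalize();
    return 0;
}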
Jack Dongarra
Fri Sep 1 06:16:55 EDT 1995
|
{"url":"http://www.netlib.org/utk/papers/mpi-book/node118.html","timestamp":"2014-04-19T14:44:00Z","content_type":null,"content_length":"5984","record_id":"<urn:uuid:be984ebb-84fd-49e2-9918-b779de9ffd96>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Disjunctive syllogism
is a valid, simple argument form:
Either P or Q.
Not P.
Therefore, Q.
Roughly, we are told that it has to be one or the other that is true; then we are told that it is not the one that is true; so we infer that it has to be the other that is true. The reason this is called "disjunctive syllogism" is that, first, it is a syllogism--a three-step argument--and second, it contains a disjunction, which means simply an "or" statement. "Either P or Q" is a disjunction; P and Q are called the statement's disjuncts.
Here is an example:
Either I will choose soup or I will choose salad.
I will not choose soup.
Therefore, I will choose salad.
Here is another example:
Either the Browns win or the Bengals win.
The Browns do not win.
Therefore, the Bengals win.
All Wikipedia text is available under the terms of the GNU Free Documentation License
|
{"url":"http://encyclopedia.kids.net.au/page/di/Disjunctive_syllogism","timestamp":"2014-04-16T16:26:14Z","content_type":null,"content_length":"12577","record_id":"<urn:uuid:d1a5ef73-5696-4685-86e3-dca331912813>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Franconia, VA Math Tutor
Find a Franconia, VA Math Tutor
...I have taken the ASVAB myself, and I achieved a very high score. I have a knowledge base that is sufficient for taking, and doing well in, the ASVAB. As an English instructor, I can certainly
help with the English sections, but I am also educated in science and math and have a great interest in electronics.
22 Subjects: including prealgebra, ASVAB, ESL/ESOL, English
...Yours in math, ~Dexter. Reviewing basic algebraic concepts (e.g. variables, order of operations, and exponents); understanding and graphing functions; solving linear (with one variable 'x') and quadratic equations (including ones with multiple variables and decimals/fractions); and reviewing pre-a...
15 Subjects: including linear algebra, organic chemistry, algebra 1, algebra 2
...I took a general music reading course in college that gave me the material and knowledge to break music down to where I can teach it to others. I also have 7 plus years in piano lessons, 2
years in violin, and 3 plus years in choir and sight reading. I love music and I love teaching it and sharing it with others.
56 Subjects: including prealgebra, piano, English, writing
Do you have an elementary-aged child who needs some extra instruction and encouragement? Helping young students to succeed is my passion - learning should be fun, not frustrating! I have earned a
B.S. in Engineering and an M.S. in Business while spending four semesters as a teaching assistant.
6 Subjects: including algebra 1, grammar, prealgebra, spelling
...My score of 800 on the Writing section of the SAT is evidence of my strong skills as a writer. I have taken numerous courses in Writing (Honors, AP, and college-level). I have also engaged
with writing as Managing Editor of my college newspaper, Communications Department Intern for a leading non...
16 Subjects: including algebra 1, prealgebra, reading, English
Related Franconia, VA Tutors
Franconia, VA Accounting Tutors
Franconia, VA ACT Tutors
Franconia, VA Algebra Tutors
Franconia, VA Algebra 2 Tutors
Franconia, VA Calculus Tutors
Franconia, VA Geometry Tutors
Franconia, VA Math Tutors
Franconia, VA Prealgebra Tutors
Franconia, VA Precalculus Tutors
Franconia, VA SAT Tutors
Franconia, VA SAT Math Tutors
Franconia, VA Science Tutors
Franconia, VA Statistics Tutors
Franconia, VA Trigonometry Tutors
Nearby Cities With Math Tutor
Baileys Crossroads, VA Math Tutors
Cameron Station, VA Math Tutors
Camp Springs, MD Math Tutors
Dale City, VA Math Tutors
Fort Hunt, VA Math Tutors
Jefferson Manor, VA Math Tutors
Kingstowne, VA Math Tutors
Lake Ridge, VA Math Tutors
Lincolnia, VA Math Tutors
North Bethesda, MD Math Tutors
North Springfield, VA Math Tutors
Oak Hill, VA Math Tutors
Saint Charles, MD Math Tutors
Springfield, VA Math Tutors
West Springfield, VA Math Tutors
|
{"url":"http://www.purplemath.com/franconia_va_math_tutors.php","timestamp":"2014-04-18T19:23:26Z","content_type":null,"content_length":"24152","record_id":"<urn:uuid:f3ff3f87-0f7e-435f-a21d-8d49d19871e1>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Woodacre Trigonometry Tutor
...G. has a daughter who is currently in high school. He enjoys music, hiking and geocaching. Dr. Andrew G. has a Ph.D. from Caltech in environmental engineering science with a minor in numerical
13 Subjects: including trigonometry, calculus, physics, geometry
...I am well regarded as an excellent instructor and am able to deal with students with a wide range of abilities in math, finance and economics. I worked a number of years as a data analyst and
computer programmer and am well versed in communicating with people who have a variety of mathematical a...
49 Subjects: including trigonometry, calculus, physics, geometry
...I can teach the specific aspect of Chinese culture and language you want to learn. I took biostatistics in college and through UC Berkeley Extension. I received an A in both classes.
22 Subjects: including trigonometry, calculus, geometry, statistics
...I tutor high school, college, and university students on an almost daily basis in Precalculus. I emphasize the understanding because it will not only help in getting better grades but also
serves structural learning. "Excellent Tutor" - Alexandra K. San Francisco, CA Andreas was more than will...
41 Subjects: including trigonometry, calculus, geometry, statistics
...I taught Math and Physics at the Orinda Academy for 6 years. I also teach Tai Chi at Cal. I had a chess rating of 1600 in the United States Chess Federation.
12 Subjects: including trigonometry, chemistry, physics, calculus
|
{"url":"http://www.purplemath.com/woodacre_ca_trigonometry_tutors.php","timestamp":"2014-04-19T17:36:33Z","content_type":null,"content_length":"23767","record_id":"<urn:uuid:d7217a77-4b29-432e-bcce-fdd11656fcff>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SymMath Application
Orbital Graphing©
Mark David Ellison
Wittenberg University
P. O. Box 720
Springfield, OH 45501
United States
mail to: mellison@wittenberg.edu
The goal of this document is to help students become familiar with preparing and viewing three-dimensional graphs of the angular parts of hydrogen-like atomic orbitals. The students plot the
angular part of an s (l=0, m=0) orbital and a pz (l=1, m=0) orbital. However, the atomic orbitals p1 (l=1, m=1) and p-1 (l=1, m=-1) are graphed but look nothing like the familiar shape of a p
orbital. Linear combinations of these orbitals are graphed, and the combinations result in the familiar px and py orbitals. The process is repeated for some of the d orbitals. In the second
part of the document, the students use linear combinations of s and p orbitals to construct hybrid sp, sp^2, and sp^3 orbitals.
Audiences: Upper-Division Undergraduate
Pedagogies: Computer-Based Learning
Domains: Physical Chemistry
Topics: Atomic Properties / Structure, Computational Chemistry, Mathematics / Symbolic Mathematics, Quantum Chemistry
File Name | Description | Software Type | Software Version
OrbitalGraphing2001i.mcd | Computational Document. Requires Mathcad 2001i or higher | Mathcad | 2001i
OrbitalGraphing2001i.pdf | Non-interactive, derived from Mathcad 2001i file | |
OrbitalGraphingStudentVS2001i.pdf | Non-interactive, derived from Mathcad 2001i file | |
OrbitalGraphingStdnt2001i.mcd | Mathcad Computational Document, Student Version | Mathcad | 2001i
Comments to: Mark Ellison
©Copyright 2004 Journal of Chemical Education
|
{"url":"http://www.chemeddl.org/alfresco/service/org/chemeddl/symmath/app?app_id=48&guest=true","timestamp":"2014-04-16T10:22:09Z","content_type":null,"content_length":"9147","record_id":"<urn:uuid:28fe7899-ecdf-44eb-a765-735e4f21eaf5>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fast Multiplication of Normalized 16 bit Numbers with SSE2
If you are compositing pixels with 16 bits per component, you often need this computation:
uint16_t a, b, r;
r = (a * b + 0x7fff) / 65535;
There is a well-known way to do this quickly without a division:
uint32_t t;
t = a * b + 0x8000;
r = (t + (t >> 16)) >> 16;
Since we are compositing pixels we want to do this with SSE2 instructions, but because the code above uses 32 bit arithmetic, we can only do four operations at a time, even though SSE registers have
room for eight 16 bit values. Here is a direct translation into SSE2:
a = punpcklwd (a, 0);
b = punpcklwd (b, 0);
a = pmulld (a, b);
a = paddd (a, 0x8000);
b = psrld (a, 16);
a = paddd (a, b);
a = psrld (a, 16);
a = packusdw (a, 0);
But there is another way that better matches SSE2:
uint16_t lo, hi, t, r;
hi = (a * b) >> 16;
lo = (a * b) & 0xffff;
t = lo >> 15;
hi += t;
t = hi ^ 0x7fff;
if ((int16_t)lo > (int16_t)t)
    lo = 0xffff;
else
    lo = 0x0000;
r = hi - lo;
This version is better because it avoids the unpacking to 32 bits. Here is the translation into SSE2:
t = pmulhuw (a, b);
a = pmullw (a, b);
b = psrlw (a, 15);
t = paddw (t, b);
b = pxor (t, 0x7fff);
a = pcmpgtw (a, b);
a = psubw (t, a);
This is not only shorter, it also makes use of the full width of the SSE registers, computing eight results at a time.
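For readers who prefer intrinsics to instruction mnemonics, here is a compact C sketch of the second version; the function name and the sample inputs are mine, and the loop at the end simply compares each lane against the reference formula from the top of the post:

#include <emmintrin.h>
#include <stdint.h>
#include <stdio.h>

/* Eight lanes of r = (a * b + 0x7fff) / 65535 at once. */
static __m128i mul_un16_x8(__m128i a, __m128i b)
{
    __m128i hi = _mm_mulhi_epu16(a, b);             /* pmulhuw */
    __m128i lo = _mm_mullo_epi16(a, b);             /* pmullw  */
    __m128i t  = _mm_srli_epi16(lo, 15);            /* psrlw   */
    hi = _mm_add_epi16(hi, t);                      /* paddw   */
    t  = _mm_xor_si128(hi, _mm_set1_epi16(0x7fff)); /* pxor    */
    lo = _mm_cmpgt_epi16(lo, t);                    /* pcmpgtw: 0xffff where lo > t */
    return _mm_sub_epi16(hi, lo);                   /* psubw: subtracting -1 adds 1 */
}

int main(void)
{
    uint16_t a[8] = { 0, 1, 0x7fff, 0x8000, 0xabcd, 0xffff, 12345, 54321 };
    uint16_t b[8] = { 0xffff, 0xffff, 2, 3, 0x1234, 0xffff, 54321, 12345 };
    uint16_t r[8];

    __m128i va = _mm_loadu_si128((const __m128i *) a);
    __m128i vb = _mm_loadu_si128((const __m128i *) b);
    _mm_storeu_si128((__m128i *) r, mul_un16_x8(va, vb));

    for (int i = 0; i < 8; i++) {
        uint16_t expect = (uint16_t) (((uint32_t) a[i] * b[i] + 0x7fff) / 65535);
        printf("%5u * %5u -> %5u (reference: %5u)\n", a[i], b[i], r[i], expect);
    }
    return 0;
}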
Unfortunately SSE2 doesn’t have 8 bit variants of pmulhuw, pmullw, and psrlw, so we can’t use this trick for the more common case where pixels have 8 bits per component.
Exercise: Why does the second version work?
Syndicated 2011-07-03 12:11:20 (Updated 2011-09-04 01:42:00) from Søren Sandmann Pedersen
|
{"url":"http://advogato.org/person/ssp/diary.html?start=13","timestamp":"2014-04-18T10:53:22Z","content_type":null,"content_length":"7273","record_id":"<urn:uuid:51ee6bd7-b8a1-4c42-9260-fc948c1537e4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Semi-analytical solution for the optimal low-thrust deflection of near-Earth objects
Colombo, C. and Vasile, M. and Radice, G. (2009) Semi-analytical solution for the optimal low-thrust deflection of near-Earth objects. Journal of Guidance, Control and Dynamics, 32 (3). pp. 796-809.
ISSN 0731-5090
This paper presents a semi-analytical solution of the asteroid deviation problem when a low-thrust action, inversely proportional to the square of the distance from the sun, is applied to the
asteroid. The displacement of the asteroid at the minimum orbit interception distance from the Earth's orbit is computed through proximal motion equations as a function of the variation of the
orbital elements. A set of semi-analytical formulas is then derived to compute the variation of the elements: Gauss planetary equations are averaged over one orbital revolution to give the secular
variation of the elements, and their periodic components are approximated through a trigonometric expansion. Two formulations of the semi-analytical formulas, latitude and time formulation, are
presented along with their accuracy against a full numerical integration of Gauss equations. It is shown that the semi-analytical approach provides a significant savings in computational time while
maintaining a good accuracy. Finally, some examples of deviation missions are presented as an application of the proposed semi-analytical theory. In particular, the semi-analytical formulas are used
in conjunction with a multi-objective optimization algorithm to find the set of Pareto-optimal mission options that minimizes the asteroid warning time and the spacecraft mass while maximizing the
orbital deviation.
Item type: Article
ID code: 14571
Notes: COPYRIGHT OWNED BY ALL AUTHORS
Keywords: pareto optimum, optimization, multiobjective programming, numerical integration, gaussian process, orbital element, equation of motion, proximal, minimal distance, spacecraft, solid
dynamic, satellite, interception, orbit, asteroid, thrust, minimum time, Mechanical engineering and machinery, Motor vehicles. Aeronautics. Astronautics
Subjects: Technology > Mechanical engineering and machinery
Technology > Motor vehicles. Aeronautics. Astronautics
Department: Faculty of Engineering > Mechanical and Aerospace Engineering
Depositing user: Strathprints Administrator
Date deposited: 17 Feb 2010 12:15
Last modified: 24 Jun 2013 03:12
URI: http://strathprints.strath.ac.uk/id/eprint/14571
|
{"url":"http://strathprints.strath.ac.uk/14571/","timestamp":"2014-04-25T09:38:06Z","content_type":null,"content_length":"26268","record_id":"<urn:uuid:f15225c5-b3e6-4e6b-82a1-efdec962e3ef>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MathGroup Archive: January 2004 [00647]
RE: Question: formatting text, sections
• To: mathgroup at smc.vnet.net
• Subject: [mg45938] RE: [mg45903] Question: formatting text, sections
• From: "David Park" <djmp at earthlink.net>
• Date: Fri, 30 Jan 2004 04:16:10 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com
To my mind, this is the one flaw in Automatic Grouping. (Still Automatic
Grouping is infinitely better than manual grouping.)
You can obtain your objective by:
1) adding the Text cell in the last subsection.
2) Use Shift-Ctrl-E (or menu/Format/Show Expression) to see the underlying cell expression.
3) Add the option
CellGroupingRules->{"SectionGrouping", 40}
at the end of the Cell statement.
4) Use Shift-Ctrl-E to return to the normal display.
The Text cell will now be grouped with the Section and not with the Subsection.
I've pasted a sample notebook at the end of this posting.
You can look up CellGroupingRules in
HelpBrowser/FrontEnd/Cell Options/Cell Properties/CellGroupingRules
David Park
djmp at earthlink.net
From: Jason Miller [mailto:millerj at truman.edu]
To: mathgroup at smc.vnet.net
I have a section in a Mathematica notebook that has several
subsections. I want to end the section with some text, but that text
gets automagically included in the last subsection's cell bracket. I'd
like it to occur outside (below) that cell bracket so that when I close
the subsection cell (to hide its contents), the last bit of text is
still visible to the reader. Is there a slick/easy way to do this?
Jason E. Miller, Ph.D.
Associate Professor of Mathematics
Truman State University
Kirksville, MO
(************** Content-type: application/mathematica **************
CreatedBy='Mathematica 5.0'
Mathematica-Compatible Notebook
This notebook can be used with any Mathematica-compatible
application, such as Mathematica, MathReader or Publicon. The data
for the notebook starts with the line containing stars above.
To get the notebook into a Mathematica-compatible application, do
one of the following:
* Save the data starting with the line of stars above into a file
with a name ending in .nb, then open the file inside the
* Copy the data starting with the line of stars above to the
clipboard, then use the Paste menu command inside the application.
Data for notebooks contains only printable 7-bit ASCII and can be
sent directly in email or through ftp in text mode. Newlines can be
CR, LF or CRLF (Unix, Macintosh or MS-DOS style).
NOTE: If you modify the data for this notebook not in a Mathematica-
compatible application, you must delete the line below containing
the word CacheID, otherwise Mathematica-compatible applications may
try to use invalid cache data.
For more information on notebooks and Mathematica-compatible
applications, contact Wolfram Research:
web: http://www.wolfram.com
email: info at wolfram.com
phone: +1-217-398-0700 (U.S.)
(*CacheID: 232*)
(*NotebookOptionsPosition[ 2404, 94]*)
(*NotebookOutlinePosition[ 3047, 116]*)
(* CellTagsIndexPosition[ 3003, 112]*)
Cell["Section One", "Section"],
Cell["Subsection 1", "Subsection"],
\(1 + 1\)], "Input"],
\(2\)], "Output"]
}, Open ]]
}, Closed]],
Cell["Subsection 2", "Subsection"],
\(2 + 2\)], "Input"],
\(4\)], "Output"]
}, Open ]]
}, Closed]],
Cell["This is a text cell to close out Section One", "Text",
CellGroupingRules->{"SectionGrouping", 40}]
}, Open ]],
Cell["Section Two", "Section"],
Cell["Continuining on....", "Text"]
}, Closed]]
FrontEndVersion->"5.0 for Microsoft Windows",
ScreenRectangle->{{0, 1280}, {0, 941}},
WindowSize->{492, 740},
WindowMargins->{{0, Automatic}, {Automatic, 0}}
Cached data follows. If you edit this Notebook file directly, not
using Mathematica, you must remove the line containing CacheID at
the top of the file. The cache data will then be recreated when
you save this file from within Mathematica.
Cell[1776, 53, 30, 0, 73, "Section"],
Cell[1831, 57, 34, 0, 38, "Subsection"],
Cell[1890, 61, 38, 1, 30, "Input"],
Cell[1931, 64, 35, 1, 29, "Output"]
}, Open ]]
}, Closed]],
Cell[2015, 71, 34, 0, 30, "Subsection"],
Cell[2074, 75, 38, 1, 30, "Input"],
Cell[2115, 78, 35, 1, 29, "Output"]
}, Open ]]
}, Closed]],
Cell[2177, 83, 106, 1, 30, "Text",
CellGroupingRules->{"SectionGrouping", 40}]
}, Open ]],
Cell[2320, 89, 30, 0, 73, "Section"],
Cell[2353, 91, 35, 0, 33, "Text"]
}, Closed]]
End of Mathematica Notebook file.
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2004/Jan/msg00647.html","timestamp":"2014-04-17T15:36:03Z","content_type":null,"content_length":"39887","record_id":"<urn:uuid:95007604-e597-48e7-87f7-3e6b8c94bf0b>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CMU Summer School Recap, Part 5 (Ashlagi and Conitzer)
October 1, 2012 by timroughgarden
This is the fifth and final post in a series recapping the talks at the CMU summer school on algorithmic economics (the first four: #1, #2, #3, #4). See the summer school site for abstracts, slides,
and (eventually) videos.
Itai Ashlagi
Itai Ashlagi from MIT discussed two domains in which mechanism design theory is playing an important role: matching residents to hospitals, and kidney exchange. He began with the cautionary tale of
the Boston School Choice Mechanism, which illustrates the consequences of poor mechanism design. This was a mechanism for assigning students to schools, and it worked as follows. Every student
submitted a ranked list of schools. The mechanism went through the students in some order, assigning a student to his/her first choice provided space was still available. The mechanism then performed
an iteration using the unassigned students’ second choices, an iteration with the unassigned students’ third choices, and so on. This may sound natural enough, but it has some serious problems — do
you see any? The key issue is that if a student’s first choice is full, he/she is skipped and not considered again until the second iteration, at which point his/her second choice might be full, and
so on. For this reason, there can be an incentive to rank first a school that is not your first choice, such as one that is pretty good and also has a large capacity. Such strategic behavior was
observed empirically. The school choice mechanism has since been changed to have better incentive properties. One natural improvement: when a student’s first choice is already full, proceed to the
next school on that student’s list, rather than immediately skipping to the next student.
Matching residents to hospitals is the canonical application of the Gale-Shapley stable marriage algorithm. Itai discussed the complications introduced by couples constraints, where two residents
want to be matched to nearby hospitals. A lot of the nice theory about stable matchings, including the main existence theorem and incentive properties, break down when couples constraints are added.
Nevertheless, the current heuristic algorithms for matching with couples constraints generally find a stable matching. To provide a theoretical explanation for this empirical success, Itai advocated
a “large random market” approach. He proposed a model with n residents with random preferences, and O(n) hospitals with arbitrary preferences. The goal (see this paper) is to prove that, provided the
number of couples grows sublinearly with n, there exists a stable matching with high probability (as n grows large). The gist of the algorithmic proof is: first match all of the singles (using
Gale-Shapley), and then introduce the couples one-by-one. A new couple is allowed to evict previously assigned singles, which are then re-assigned by resuming the Gale-Shapley algorithm. This
algorithm successfully computes a stable matching as long as no couple evicts another couple. To prove that this is unlikely, Itai defined a subtle notion of conflict between couples and showed that,
if there are not too many couples, the conflict graph is acyclic with high probability. Thus, for an appropriate ordering of the couples, the algorithm terminates with a stable matching.
Kidney exchange, in its simplest form, involves two patient-donor pairs, (A,A’) and (B,B’). A’ wants to give a kidney to A but unfortunately they are incompatible (e.g., because of a blood type
mismatch). Similarly for B’ and B. But if B’ is compatible with A and A’ with B, then we can transplant compatible kidneys to both of the patients. (Generally, A and A’ have never met B or B’. The
two surgeries are always done simultaneously — do you see why?) Itai discussed the frontier of the theory and practice of kidney exchange (see here for details). One issue is that the real players in
the “kidney exchange game” are hospitals, rather than individual pairs of patients. (A single hospital can have many patient-donor pairs.) The objective function of a single hospital (save as many of
their own patients, do as many surgeries in-house as possible, etc.) is not the same as the global objective (save as many lives as possible). Simplistic mechanisms incentivize hospitals to keep
secret their patient-donor pairs that can be matched in-house, reporting only their difficult-to-match pairs to the central exchange. Itai discussed better exchange mechanisms that do not suffer from
such incentive problems. The conjecture that hospitals are disproportionately reporting difficult-to-match pairs could also explain why long chains of exchanges seem needed to maximize the number of
matched patients, in contrast to the predictions of the early models of kidney exchange (see here for details).
Vincent Conitzer
Vince Conitzer from Duke gave two lectures on social choice from a computer science perspective. Recall that a social choice function takes preferences as input (e.g., a ranked list of candidates
from each voter) and outputs a notion of the “collective preferences” (e.g., a single ranked list). Vince pointed out that there are at least two different reasons to invoke such a function: to
recover some notion of “ground truth”; and to compromise among subjective preferences when no “ground truth” exists. He began by reviewing the basics on the latter topic. With only two alternatives,
the majority rule satisfies all the properties you might want. When there are at least three alternatives, Arrow’s Impossibility Theorem says that every voting rule has its problems — most commonly,
failing to satisfy “independence of irrelevant alternatives” and hence, via the Gibbard-Satterthwaite theorem, being manipulable by tactical voting.
Moving on to the use of social choice functions to recover the “ground truth”, Vince discussed a very cool question: which social choice functions arise as maximum likelihood estimators (MLEs) with
respect to some noise model? (See here for details.) For example, suppose there are only two alternatives, one “right” and one “wrong”. Assume that the votes are random and i.i.d., with each being
“right” with some probability larger than .5. Then, the majority rule gives the MLE. When there are more than two alternatives and a “correct” ranking of them, the Kemeny rule gives the MLE under a
natural noise model (based on independently flipping each pair of alternatives). A neat way to prove that certain voting rules cannot provide a MLE for any noise model is to exhibit a violation of
the following consistency property: if the output of a rule is the same with votes V_1 and V_2, then it must also be the same with the votes V_1+V_2.
Vince’s second lecture focused on computational aspects of social choice. The Gibbard-Satterthwaite theorem states that non-trivial voting rules are manipulable. But perhaps there are non-trivial and
computationally tractable rules for which manipulation is computationally intractable? Such rules are known for worst-case intractability (i.e., NP-hardness), but these rules are generally easy to
manipulate “on average” (see this survey paper). Thus, the Gibbard-Satterthwaite theorem seems equally damning in the computationally bounded world. Vince wrapped up by discussing the communication
complexity of eliciting preferences (i.e., how many questions of a given form do you need to ask everybody in order to evaluate a given voting rule) and compact representations for preferences for
settings where the number of alternatives is ridiculously large.
Thanks to my co-organizer Ariel, all of the speakers, and especially to the students — both for their enthusiastic participation and for the unexpected gift at the end of the summer school!
|
{"url":"http://agtb.wordpress.com/2012/10/01/cmu-summer-school-recap-part-5-ashlagi-and-conitzer/","timestamp":"2014-04-20T11:25:46Z","content_type":null,"content_length":"55817","record_id":"<urn:uuid:209dd12f-39ea-4081-adef-edaf5ae2cc9e>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An Evolutionary Framework for Association Testing in Resequencing Studies
PLoS Genet. Nov 2010; 6(11): e1001202.
An Evolutionary Framework for Association Testing in Resequencing Studies
Jonathan Marchini, Editor
Sequencing technologies are becoming cheap enough to apply to large numbers of study participants and promise to provide new insights into human phenotypes by bringing to light rare and previously
unknown genetic variants. We develop a new framework for the analysis of sequence data that incorporates all of the major features of previously proposed approaches, including those focused on allele
counts and allele burden, but is both more general and more powerful. We harness population genetic theory to provide prior information on effect sizes and to create a pooling strategy for
information from rare variants. Our method, EMMPAT (Evolutionary Mixed Model for Pooled Association Testing), generates a single test per gene (substantially reducing multiple testing concerns),
facilitates graphical summaries, and improves the interpretation of results by allowing calculation of attributable variance. Simulations show that, relative to previously used approaches, our method
increases the power to detect genes that affect phenotype when natural selection has kept alleles with large effect sizes rare. We demonstrate our approach on a population-based re-sequencing study
of association between serum triglycerides and variation in ANGPTL4.
Author Summary
Studies correlating genetic variation to disease and other human traits have examined mostly common mutations, partly because of technological restrictions. However, recent advances have resulted in
dramatically declining costs of obtaining genomic sequence data, which provides the opportunity to detect rare genetic variation. Existing methods of analysis designed for an earlier era of
technology are not optimal for discovering links to rare mutations. We take advantage of 1) the advanced theoretical understanding of evolutionary mechanics and 2) genome-wide evidence about
evolutionary forces on the human genome to suggest a framework for understanding observed correlations between rare genetic variation and modern traits. The model leads to a powerful test for genetic
association and to an improved interpretation of results. We demonstrate the new method on previously confirmed results in a gene related to high blood cholesterol levels.
Over the past 20 years, positional cloning guided by linkage analysis and genome wide association studies (GWAS) have identified many loci relevant to human disease and other quantitative phenotypes
such as height, body mass index, and serum lipid composition. However, in most cases the total amount of phenotypic variance explained is small compared to the heritability observed in twin or
adoption studies [1]. Some authors note the possibility that low-frequency genetic variation, which is not measured on standard single nucleotide polymorphism (SNP) arrays, may contribute to this
missing heritability [2]–[7]. The rapidly decreasing cost of obtaining DNA sequence has prompted several groups to test this hypothesis by sequencing candidate genes in participants of cohort or
case-control studies hoping to discover either 1) rare or previously unknown SNPs with large detectable effect sizes, or 2) a correlation between overall number of rare SNPs and phenotype [8]–[15].
This research is rapidly approaching a new phase as investigators use next-generation sequencing technology to measure all variation in the exome and wider genome [16], [17]. Several authors have
shown that rare variation is particularly relevant in the case that natural selection has acted to keep variants with large effects rare, and that without action by purifying selection rare variants
have effect sizes comparable to common ones [2], [3], [6].
There are three signatures of association in a resequencing study which we want to use to assess candidate genes. Some SNPs could have effect sizes large enough that they have individually noticeable
impact on phenotype; this is the information underlying regression procedures, like those put forward by Hoggart et al [18] and Kwee et al [19]. This approach is very similar to current tag-SNP based
procedures and was not designed with resequencing data in mind, since the effects of rare SNPs will not be easy to discern. Depending on the role natural selection has played in the history of the
phenotype, two other signatures of association may exist. Second, rare SNPs may tend to have effect sizes in the same direction (e.g. inducing risk), so a measure of overall rare-variant burden could
correlate to phenotype; this is the information exploited in allele-count [20] and rare-variant-burden [21] type methods. That signature may be present if either selection has favored the phenotype
(or a correlate) in a particular direction, or if purifying selection has been weak and derived alleles tend to be deleterious to the phenotype. Finally, rare SNPs could tend to have effect sizes
which are larger than common ones. This could be the case if selection has tended to stabilize the phenotype. The method of Kwee et al [19] can allow for that possibility, but does not contain
guidance on what the structure of the frequency - effect size relationship should be.
We present a method capable of detecting all three signatures of association. Our method generalizes allele count and rare-variant-burden methods by explicitly constructing a model relating disease
impact, selective pressure, and SNP frequency in a candidate gene. By doing so, we will be able to provide intuitive interpretations to detected associations, allowing investigators to answer
additional questions with their data. Our approach will yield substantially more power if the model is close to correct without introducing bias or sacrificing much efficiency when our assumptions
are not met.
We propose to estimate the evolutionary fitness burden of each SNP using its observed frequency and population genetic parameters inferred by other authors. That estimate of fitness burden will act
as prior information on the variant effect, acting like a burden function [21]. The same estimate will structure the variability of SNP-phenotype correlations, replacing arbitrary weights [19], and
provide robust estimates even if there is no relationship between fitness and effect magnitude. We recognize that for a quantitative trait measured in a prospective cohort, a well-justified
approximation of the full model can be fit using a fast and general statistical technique, mixed linear models, and provide software routines to estimate parameters and conduct hypothesis tests. We
have named the approach EMMPAT (Evolutionary Mixed Model for Pooled Association Testing).
In what follows, we will briefly introduce the population genetics ideas which underly our approach. Next, we construct our statistical model and discuss estimation and testing within it. Finally, we
illustrate the method both in simulation studies and on a real candidate gene resequencing study examining serum triglyceride levels in a multi-ethnic prospective community-based sample [8], [12].
Relating SNP Frequency, Fitness, and Disease Effect
Several authors have reviewed the potential contribution of low frequency alleles to variation in phenotypes [2]–[7]. Absent a change in the properties of new mutations during recent history, which
we find implausible, systematic differences between SNPs of varying frequencies must be mediated by natural selection. Since the early 20th century, much work has explicated the evolutionary dynamics
of quantitative traits, reviewed by Barton and Johnson [22], [23]. Below we will posit a model of pleiotropic selection whereby the trait under study or a trait with a correlated genetic basis is
under purifying selection. More detailed connection and contrast to the existing work on the genetic basis of quantitative traits is found in Text S1.
In Figure 1, we illustrate direct and apparent selection scenarios which give rise to a correlation between fitness effects and phenotype effects. In Figure 1A, the phenotype itself is under
selective pressure; for example, disease leading to propensity to childhood mortality. Figure 1B shows apparent selection by pleiotropy; variants which disrupt an unconstrained role of a gene also
tend to disrupt another role which is under selection; for example, variation which increases Alzheimer's Disease risk after reproductive age may relate to other brain function which is relevant for
individuals still reproducing.
Hypotheses relating SNP effect and fitness effect.
Hartl and Clark [24] carefully constructs and interprets the concept of fitness-effects in classical population genetics. Briefly, in an idealized population, the relative reproductive advantage of
an individual is the product of the fitness effects of each variant that person carries, an additive approximation with no dominance or epistasis. We parameterize the problem in terms of the log of
multiplicative fitness effects. That is, the fitness of the [24].
Rather than assume that all variants in the region have the same distribution of fitness effects (DFE). Just as a fixed [25]–[34]. Boyko et al [33] found a combination of a point mass at neutrality (not under selection) and a gamma distribution for deleterious differences from neutrality to be a good fit for the DFE of non-synonymous mutations.
With these facts in mind, in what follows we will use fitness effects to operationalize the construct of functional status for each SNP. Whereas Johnson and Barton [23] worked directly with the joint
distribution of fitness and phenotype effects, we will use an existing DFE estimate [33] as a marginal distribution for fitness effects and construct the conditional distribution of phenotype
effects. Since we do not know the true fitness effects of SNPs, we will estimate them with observed SNP frequency, which is statistically ancillary to phenotype-SNP correlation, using a simulation
methodology described below.
Model for SNP Effects on Phenotype
Assume the context of a simple random cross-sectional sample of
We can write a regression model for person
Using standard least-squares regression to estimate such a model will pose several problems. First, because there will be many rare variants,
To overcome these problems, we need to make more assumptions and model the
In applied problems,
and combine the two uncorrelated error terms to yield
The first term in (4), j's effect on phenotype from the average of SNPs with the same observed frequency. The variance of
Equation (4) asserts that phenotype-effect and fitness-effect are linearly related; that seems correct for the scenario in Figure 1A and a good starting place for the other possibilities. In future
work we will be able to empirically examine this assumption by graphical diagnostics and comparing fits using other functional forms. Further discussion of nonlinear relationships is found in Text S1
, and we will demonstrate the impact of an incorrect assumption of linearity in our simulation studies.
Our model is quite general in that existing methods correspond to submodels of (4). An allele count method tests the model with only [21] where
When [19] with all variants given the same weight. That is, regardless of frequency all SNPs have the same likelihood of having large effect sizes, and regardless of frequency SNP effects have zero
mean. As a result, our method will be robust to the case that fitness and phenotype effects are unrelated by estimating [18]. Their approach corresponds to
Model Interpretation
The specification of equations (1), (4), and (6) yields a natural interpretation to the fitted model. After estimating the population parameters of phenotype effects, we will be able to jointly
estimate individual SNP effects i's phenotype and what we would expect were there no effects of this gene. As a result we can empirically estimate the overall phenotypic variability due to observed
genetic variants, Figure 2, when using the genome-wide distribution of fitness effects for non-synonymous SNPs, common variation is nearly neutral so
By calculating Text S1. We can use the same technique to compare classes of SNPs, for example non-coding vs missense, by jointly fitting separate
Relationship between sampled frequency and mean fitness.
An important consideration is how to interpret the results when multiple ethnic groups are analyzed simultaneously. Because some genetic variation is fixed between ethnic groups in the sample, the
average effect of single-population variation will be absorbed into the fitted mean for that group. As a result, the interpretation for “total explained variation” is actually “total explained
within-ethnic-group variation;” genetic variation may explain some of the phenotypic difference between groups, but we do not include it in our estimate because of confounding between environmental
exposures and ethnic background.
Another point requiring clarification is the assumption that genotype effects are independent. In the context of GWAS, nearby SNPs often are thought to have correlated effects because they mutually
tag a functional variant. Additionally, estimates of SNP effects will be correlated due to LD making their true separate effects difficult or impossible to identify. However, in the underlying data
generating mechanism true genotype effects are independent. Because sequencing identifies all the variation within the region and eliminates much of the correlation due to untyped alleles, we believe
that the independence assumption is a useful approximation in this case. Non-independence of the true effects could be accommodated by imposing a covariance structure on SNP effects, for example
using their spatial distance in the genome or folded protein. Alternatively, the phylogenetic approach of TreeLD [35] estimates the degree of probable overlap of untyped SNPs.
Computing Fitness Effects
Model (4) relies on a prediction
1. Take as given the fitted distributional form of fitness effects and population history since out-of-Africa
2. Use existing software SFS_CODE to simulate new polymorphisms in the gene under study many times, creating pseudo-samples containing true variant-level fitness.
3. For each variant in the real dataset, find variants in the pseudo-data with the same sampled frequency, and calculate the mean and variance of their fitness effects.
To reduce computational requirements, steps 2 and 3 above can be replaced by simulating a smaller number of large populations and calculating the expected mean and variance of fitness using simple
random sampling. Figure 2 depicts the relationship of [33]. Because much of the variation discovered in our multi-ethnic example dataset is confined to one ethnicity, we use the ethnicity-specific
frequency and pseudo-data. Because of admixture in our sample, we use the highest observed frequency (the most skeptical about its being rare) to assign an ethnicity of origin to SNPs appearing in
multiple groups.
An advantage of this method is that because it refers to a feature of genetic history rather than a phenotype, it need only be done once for any trait under study on the same cohort. While the
fitness - phenotype relationship will be different for all traits, that is modeled by the fitted parameter Text S1. Discussion of the quality of the existing DFE estimates are also included in Text
S1. We have used the observed frequency to estimate the fitness effect, but there are many other potential predictors of functional status. Discussion of including them in our model is found in Text
Model Fitting and Estimation
Our model fitting procedure will be likelihood-based, so we will use a standard hypothesis testing method: likelihood ratio tests. To improve robustness, our examples will use permutation p-values
obtained by comparing the likelihood ratio of the fitted model to that generated under the null hypothesis by randomly swapping genotype vectors between members of the same ethnicity. Permuting
genotype labels simulates the null hypothesis that no relationship exists between any genotype and any aspect of the response, which in our parametric setup is equivalent to
For numerical convenience and statistical robustness, we will use only the first two moments of the model in equations (1), (4), and (6), and assume a working normal model [37]. In matrix notation, where each participant is a row and effects are column vectors, this yields the mean and variance specification of equations (8)–(9) below.
We allow the procedure to exploit the possibility that individuals with a high burden of rare alleles not only show a drift in their mean phenotype because of the fitness-phenotype correlation, but also an inflated phenotypic variance.
We will fit the mixed effects model (8)–(9) using modified Newton-Raphson optimization of the implied likelihood. The linear mixed effects approach is equivalent to assuming normality for the error
terms [38], [39], and that correct specification of mean and variance models produces correct inference robust to additional details of structure. Assumptions which better match the data at hand will
lead to more power, but they will tend to require dramatically more computational effort. For our current example we have considered a single sequenced candidate gene where computational speed is not
crucial, but we expect that methods similar to ours will be required for whole-genome or whole-exome resequencing efforts where computational resources will be a limiting factor. Additionally,
popular methods such as Markov Chain Monte Carlo and EM which can use arbitrary distributions of residuals and random effects require accurate initial estimates to perform well; MCMC also benefits
enormously from a good proposal distribution. Mixed effect regression is a reasonable way to generate these initializations. Using only the first two moments for estimation, however, is optimal only under the normality assumptions of [37, chapter 6], which we relied upon in “model interpretation”.
As discussed above, we will be interested in fitting distinct effects for the different classes of SNPs; for this we use the SAS mixed procedure [40] to estimate the model parameters and to check our custom software. Example code implementing this use is maintained at the authors' website. We generate confidence intervals using the standard asymptotic arguments in McCulloch and Searle [37, chapter 6], which are built into SAS.
Alternatively, for a single class of SNPs, the model can be estimated in R [41] using optim to maximize the likelihood; code for this is posted at the authors' web site: http://home.uchicago.edu/~crk8e/papersup.html
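To give a feel for what such a fit involves, a single genetic variance component can be estimated by maximizing a Gaussian likelihood directly. This is a generic sketch with our own parameterization, not the authors' posted code:

import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, y, X, K):
    # mean X @ beta, covariance exp(lg) * K + exp(le) * I
    p = X.shape[1]
    beta, lg, le = theta[:p], theta[p], theta[p + 1]
    V = np.exp(lg) * K + np.exp(le) * np.eye(len(y))
    r = y - X @ beta
    _, logdet = np.linalg.slogdet(V)
    return 0.5 * (logdet + r @ np.linalg.solve(V, r))

# usage sketch: K could be G @ G.T for a genotype matrix G
# res = minimize(neg_loglik, theta0, args=(y, X, K), method="L-BFGS-B")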
Bayesian interpretation
Our model is easily recast in a purely Bayesian framework. One would need to write priors for the model parameters (see Text S1), conditioning on the observed values of the genotype data; an arbitrary fraction of the explained variation can then be assigned to each source and back-calculated to find the square of the corresponding parameter.
The Bayesian analyst could continue to use our normal approximation of the distribution of the latent fitness effects.
Dallas Heart Study: ANGPTL4
Description of dataset
About 3500 prospectively sampled individuals from the population in Dallas, Texas, were sequenced at a candidate gene for dyslipidemia: ANGPTL4 (Ensembl Acc:16039). These individuals come primarily from three ethnic backgrounds: non-Hispanic white, non-Hispanic black, and Hispanic. The study design [42], its metabolic phenotypes [43], and the sequencing methods and discovered genetic variation [8], [12] have been described previously. We
grouped all missense and nonsense mutations into a single category which we label “non-synonymous” in the tables and figures, and we grouped all synonymous and non-coding region mutations into a
single category labeled “non-coding.” Table 1 shows the number of discovered SNPs in each category in each ethnic group. We consider age, sex, ethnicity, diabetes status, and self-reported ethanol
consumption as adjuster covariates. For age, we use a flexible linear spline model with knots at every ten years to allow for nonlinearity in response. We include all interactions between ethnicity
and gender and ethnicity-gender interactions with other covariates. Because statin use is an endogenous variable indicating diagnosed dyslipidemia, we do not adjust for it. We fit models 1) ignoring
statin use and 2) increasing triglyceride levels 25% in the treated to approximate their untreated level. Because we obtained qualitatively similar results, we present only the latter.
Genetic variation in ANGPTL4.
Model estimates
Table 2 presents model summaries and point estimates with asymptotic standard errors for model parameters, stratified by ethnicity and pooled using ethnicity as an adjuster; the results shown fix the offset term at a single value.
For ANGPTL4, we observe a p-value of .006 on 10,000 permutations versus the strong null hypothesis that no SNPs have any effect. Previous authors [12] observed a p-value for a net surplus of
non-synonymous variants in low triglyceride participants of .016 and a minimum variant-at-a-time p-value of .019 for E40K corrected for multiple testing. The improvement to the model fit from including a fitness component is modest (Table 2): a glimmer of a fitness component is seen only in the non-coding variation, and the explained variance is very small. However, to illustrate the interpretation of the plots which our approach generates, we'll take the parameter estimates at face value below.
Interpretation of diagnostic plot
Figure 3 shows the observed SNPs and estimated effect sizes (non-synonymous in black and non-coding in red) rank ordered by observed frequency (in blue). Variant-at-a-time ordinary least squares
(OLS) estimates of effect size are overlaid in green. Figure 3 displays several interesting features of the data; first there are two low-frequency non-synonymous variants with a strong effect
reducing triglyceride levels; the first is E40K, found mainly in non-Hispanic whites [12]. However, adjusted for E40K we see that another, more common variant, R278Q, almost exclusive to non-Hispanic blacks, shows a similar triglyceride-lowering effect. Previous work [12] also noted an excess of rare non-synonymous variants in those with low triglyceride levels. The rare non-coding variation appears to have the opposite sign of effect; it increases
triglyceride levels. Referring to Table 2 we see that a fitness-related component of variability (of about the same scale as the change in mean) was detected; this gives rise to the wider spread of
point estimates and wider confidence intervals in non-coding variation.
Frequency versus estimated effect size in ANGPTL4 with ordinary least squares estimates.
An interesting data point in Figure 3 is a single 5% frequency non-coding variant (directly before R278Q) whose OLS effect estimate is quite large (and nominally significant) but whose model-based
effect estimate is small. Examining that variant more closely, we found that it is in strong LD with R278Q. Because E40K (which is not strongly correlated to any other variation) had a large effect
and non-synonymous variants tended to decrease triglycerides, the model judged non-synonymous variation as more likely to contain non-rare variants with large negative effect sizes and therefore assigned the effect to R278Q. Similarly, perfectly correlated rare variants have their combined effect split evenly.
We can understand this model fit by looking at the green OLS estimates in Figure 3. Visually, the estimates for non-synonymous variation tend to be below zero. Comparing the non-synonymous to
non-coding singletons, we see more variable estimates in the non-coding singletons as well as a different mean. The model fit identifies this as opposite signs of effect in the two classes. From Figure 2 we see that common variation is essentially neutral with respect to fitness, and as a result non-zero effects in non-rare variants force the fitness-independent component to absorb them.
Evolutionary interpretation
An interesting potential story about natural selection on ANGPTL4 activity emerges from Figure 3. First, non-synonymous mutation tended to decrease the effectiveness of ANGPTL4 and decrease serum
triglyceride levels [8], [44], [45]. We see no evidence of selection against those mutations; variants which decreased triglycerides rose above rare frequencies in both the African and European lineages,
and we see no excess of large effects in rare SNPs. On the other hand, non-coding mutations which may alter the regulation of ANGPTL4 on average increased triglycerides. Variants with large effect
sizes were preferentially rare, and the apparent selective force was stronger in the non-European lineage, as the demographic history would predict. This meshes well with the finding that ANGPTL4
experienced a Europe specific relaxation of purifying selection [12]. We do not suggest that serum triglyceride levels in themselves were the target of purifying selection; effect on triglycerides
may only be correlated to effect on a selected function.
Simulation Studies
Population parameters
In order to determine the power and robustness of our procedure, we simulated variation in a gene with the exon structure of the gene ANGPTL4 in a study population using SFS_CODE [36] and fitted
demographic and DFE parameters [31], [33]. We used 4 cM/Mb for the local recombination rate and no recombination hotspots.
Model parameters
We chose several levels of the phenotype parameters to correspond to potential cases of interest while keeping the total fraction of variation explained by the gene about the same: a weak mean
variant effect, a strong fitness-related component of the phenotype, and a strong fitness-independent component of the phenotype. We chose the baseline values accordingly; Table 3 contains the chosen
phenotype parameter values for each set of simulations and the resulting expected percent of variance explained by the SNPs due to fitness-phenotype correlation and percent of variance explained
independent of that correlation.
Two additional batches of simulation examine the robustness of our procedure to incorrect assumptions. First we created violations of the assumed population model. We mis-specified the assumed DFE in
our analysis, making the scale parameter a factor of 5 too large or too small and keeping the truth the same. We also simulated violation of our demographic assumptions using a population which
experienced an additional 100 fold exponential growth over the last 11% of generations since out-of-Africa. Second we created violations of the assumed statistical model. We simulated three scenarios
violating the linearity assumption.
Power comparisons
To compare power with existing methods, we included several proposed methods of analysis. First, we test the method of Bonferroni corrected minimum p-value of SNPs with minor allele frequency >1% or
>5%. Other proposed methods using allele counts like CAST [46] CMC [20] and weighted sums [21] were created for case-control studies, so we alter those methods to be fair in a cohort
quantitative-trait context. Our representative of CAST-like analysis is regression with the number of rare variants carried by each participant as a covariate; CMC-like analysis is the same with
non-rare SNPs (frequency greater than 1% or 5%) treated as free regression parameters. P-values are then generated by ANOVA against the nested model consisting only of fitting the mean response. Our representative of weighted-sum type methods is a similar regression analysis where rare variants are collapsed to a burden score weighted as proposed in [21]. Because the simulated response is actually normal, we do not use a rank transformation. We also used the same burden function for only the low-frequency SNPs and treated common SNPs as regression parameters. P-values are again obtained
by ANOVA versus a nested model with no genetic effects.
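For concreteness, the CAST-like comparison amounts to an ordinary F-test of a burden covariate against the intercept-only model. The following is a schematic numpy version with our own variable names and an assumed 0/1/2 genotype coding:

import numpy as np
from scipy import stats

def burden_ftest(y, G, rare_cutoff=0.01):
    freq = G.mean(axis=0) / 2                       # allele frequency under 0/1/2 coding
    burden = G[:, freq < rare_cutoff].sum(axis=1)   # rare-allele count per participant
    X0 = np.ones((len(y), 1))
    X1 = np.column_stack([X0, burden])
    rss0 = np.sum((y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]) ** 2)
    rss1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
    f = (rss0 - rss1) / (rss1 / (len(y) - 2))
    return stats.f.sf(f, 1, len(y) - 2)             # p-value for the burden term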
To demonstrate the gain (or loss) in information by considering the marginal variance, we apply a similar regression with an optimal mean model, that is (8) either for all SNPs or treating common
SNPs as free. We tested our model both with a single set of genetic parameters shared by all SNPs and with separate parameters for each SNP class.
Table 4 summarizes the power comparisons in each case. Our model is as or more powerful than the existing methods, even when there is substantial violation of its assumptions. The only scenario in
which our model loses some power is when there is absolutely no fitness-phenotype correlation. Even in that case, the relative loss is small, much smaller than the gain when such a correlation is present.
Simulation study power results.
We propose a novel method, EMMPAT, for association between sequenced genes and phenotype which utilizes population genetic theory to pool information among rare variants. Our method generalizes
allele-count and allele-burden techniques, and presents several advantages. Of greatest importance to the practicing scientist will be increased power and interpretability. As shown above, our method
allows us to leverage allele frequency as auxiliary data related to SNP effects and to substantially increase power to detect association in many scenarios. The availability of a well motivated
pooling strategy allows an omnibus test which incorporates common and rare variation simultaneously. Our approach provides clear interpretations for the fitted model, such as the attributable
variance in phenotype due to all polymorphisms observed in a gene, particular types of SNPs, or only the rare variation. Furthermore it facilitates tests of meaningful parameters (such as mean
derived allele burden) and group differences (such as non-synonymous versus non-coding). The regression toolbox allows model checking and exploration, such as in Figure 3 which presents the data in
an informative format. Additional model checking proceeds as usual in linear mixed models, and posterior predictive checks are similarly possible.
A relevant question is how important our method will be for diseases which have not been strongly selected against. There are three answers to consider. First, when selection and disease effect are
completely independent, common SNPs will tend to have just as large effect sizes as rare SNPs and explain much of the heritable variation in phenotype [2], [3]. We believe that most investigators
conducting resequencing studies assume rare variation to have larger effect sizes, since that is the best-justified scenario for the expense of sequencing. Second, our method allows for this
possibility in the form of an estimated parameter. Third, as discussed in the Introduction and Text S1, direct selection against disease is not a necessary condition for correlation between fitness and phenotype; as long as the disease-related gene is under selective pressure in any of its functions, we expect a correlation.
We have planned several extensions to this method. In addition to improved techniques of estimating fitness effects, we need to incorporate evidence for adaptive selection. Signatures of positive
selection [47]–[49] can be used to prioritize genes for study which may have been more important in differentiating humans from our ancestors and hence contribute to modern phenotypes. We expect
positively selected variants to have very different phenotype effects from neutral alleles, but it is not clear a-priori what that relationship should be or if it will be possible to reliably
identify positively selected SNPs [50], [51]. Second, for mathematical and numerical convenience we have developed this method in the context of a prospective probability sample measuring a
quantitative trait. Both these assumptions need to be relaxed for the setting of most resequencing projects. Disease phenotypes are frequently non-normal, binary, or censored such as time-to-event
from clinical trials, requiring a generalized linear mixed model. The prospective sampling assumption will also require work to relax. Retrospective sampling such as in case-control designs and
extreme-phenotype-based sampling [13], [52] is well known to distort random effect distributions [53]. Third, in our example and simulations we assume that all variation in the region has been observed; where untyped variation remains, approaches such as TreeLD [35] will be important. Our model has not
included dominance or epistasis between SNPs or genes, the structure of which is probably not simple, although progress has been made on determining the impact of these features to quantitative
traits [54], [55]. Finally, because our example dataset comes from high-quality Sanger sequencing, we have ignored nonrandom missing data issues. Future work involving second generation sequencing or
beyond must address the complex nature of library coverage, alignment error, and genotyping error inherent in those technologies.
Supporting Information
Text S1
Supplementary methods and discussion.
(0.05 MB PDF)
We would like to thank Dara Torgerson and Ryan Hernandez for their assistance with using SFS_CODE and insightful thoughts on population genetic models and software. We would like to thank Helen
Hobbs and Jonathan Cohen for access to the Dallas Heart Study dataset. We are grateful to Nancy Cox and anonymous reviewers for comments on a draft of the paper.
The authors have declared that no competing interests exist.
CRK was supported by Medical Scientist National Research Service Award T 32 GM07281 and 1F30HL103105-01. PJR was supported by R21 MH086099-01. DLN was supported in part by 1RC1HL099619-01 and
1RC2HL101651-01. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Maher B. Personal genomes: The case of the missing heritability. Nature. 2008;456:18–21. [PubMed]
Pritchard JK, Cox NJ. The allelic architecture of human disease genes: common disease-common variant… or not? Hum Mol Genet. 2002;11:2417–2423. [PubMed]
Pritchard JK. Are rare variants responsible for susceptibility to complex diseases? American Journal of Human Genetics. 2001;69:124–137. [PMC free article] [PubMed]
Manolio TA, Collins FS, Cox NJ, Goldstein DB, Hindorff LA, et al. Finding the missing heritability of complex diseases. Nature. 2009;461:747–753. [PMC free article] [PubMed]
Eyre-Walker A. Evolution in health and medicine sackler colloquium: Genetic architecture of a complex trait and its implications for fitness and genome-wide association studies. Proceedings of the
National Academy of Sciences. 2010;107:1752–1756. [PMC free article] [PubMed]
Gorlov IP, Gorlova OY, Sunyaev SR, Spitz MR, Amos CI. Shifting paradigm of association studies: Value of rare Single-Nucleotide polymorphisms. American Journal of Human Genetics. 2008;82:100–112. [PMC
free article] [PubMed]
Li B, Leal SM. Discovery of rare variants via sequencing: Implications for the design of complex trait association studies. PLoS Genet. 2009;5:e1000481. doi: 10.1371/journal.pgen.1000481. [PMC free
article] [PubMed]
Romeo S, Yin W, Kozlitina J, Pennacchio LA, Boerwinkle E, et al. Rare loss-of-function mutations in ANGPTL family members contribute to plasma triglyceride levels in humans. The Journal of Clinical
Investigation. 2009;119:70–79. [PMC free article] [PubMed]
Paisn-Ruiz C, Washecka N, Nath P, Singleton AB, Corder EH. Parkinson's disease and low frequency alleles found together throughout LRRK2. Annals of Human Genetics. 2009;73:391–403. [PubMed]
Cohen JC, Pertsemlidis A, Fahmi S, Esmail S, Vega GL, et al. Multiple rare variants in NPC1L1 associated with reduced sterol absorption and plasma low-density lipoprotein levels. Proceedings of the
National Academy of Sciences of the United States of America. 2006;103:1810–1815. [PMC free article] [PubMed]
Cohen JC, Boerwinkle E, Mosley TH, Hobbs HH. Sequence variations in PCSK9, low LDL, and protection against coronary heart disease. N Engl J Med. 2006;354:1264–1272. [PubMed]
Romeo S, Pennacchio LA, Fu Y, Boerwinkle E, Tybjaerg-Hansen A, et al. Population-based resequencing of ANGPTL4 uncovers variations that reduce triglycerides and increase HDL. Nat Genet. 2007;39
:513–516. [PMC free article] [PubMed]
Kotowski IK, Pertsemlidis A, Luke A, Cooper RS, Vega GL, et al. A spectrum of PCSK9 alleles contributes to plasma levels of Low-Density lipoprotein cholesterol. The American Journal of Human
Genetics. 2006;78:410–422. [PMC free article] [PubMed]
Cohen JC, Kiss RS, Pertsemlidis A, Marcel YL, McPherson R, et al. Multiple rare alleles contribute to low plasma levels of HDL cholesterol. Science. 2004;305:869–872. [PubMed]
Wang J, Cao H, Ban MR, Kennedy BA, Zhu S, et al. Resequencing genomic DNA of patients with severe hypertriglyceridemia (MIM 144650). Arterioscler Thromb Vasc Biol. 2007;27:2450–2455. [PubMed]
Kryukov GV, Shpunt A, Stamatoyannopoulos JA, Sunyaev SR. Power of deep, all-exon resequencing for discovery of human trait genes. Proceedings of the National Academy of Sciences. 2009;106:3871–3876.
[PMC free article] [PubMed]
Roach JC, Glusman G, Smit AFA, Huff CD, Hubley R, et al. Analysis of genetic inheritance in a family quartet by Whole-Genome sequencing. Science. 2010;328:636–639. [PMC free article] [PubMed]
Hoggart CJ, Whittaker JC, Iorio MD, Balding DJ. Simultaneous analysis of all SNPs in Genome-Wide and Re-Sequencing association studies. PLoS Genet. 2008;4:e1000130. doi: 10.1371/journal.pgen.1000130.
[PMC free article] [PubMed]
Kwee LC, Liu D, Lin X, Ghosh D, Epstein MP. A powerful and flexible multilocus association test for quantitative traits. American Journal of Human Genetics. 2008;82:386-397. [PMC free article] [PubMed]
Li B, Leal S. Methods for detecting associations with rare variants for common diseases: Application to analysis of sequence data. The American Journal of Human Genetics. 2008;83:311–321. [PMC free
article] [PubMed]
Barton NH, Keightley PD. Understanding quantitative genetic variation. Nat Rev Genet. 2002;3:11–21. [PubMed]
Johnson T, Barton N. Theoretical models of selection and mutation on quantitative traits. Philosophical Transactions of the Royal Society B: Biological Sciences. 2005;360:1411–1425. [PMC free article
] [PubMed]
Hartl DL, Clark AG. Principles of population genetics. Sunderland, MA, USA: Sinauer; 1997.
Eyre-Walker A, Woolfit M, Phelps T. The distribution of fitness effects of new deleterious amino acid mutations in humans. Genetics. 2006;173:891–900. [PMC free article] [PubMed]
Eyre-Walker A, Keightley PD. The distribution of fitness effects of new mutations. Nat Rev Genet. 2007;8:610–618. [PubMed]
Keightley PD, Eyre-Walker A. Joint inference of the distribution of fitness effects of deleterious mutations and population demography based on nucleotide polymorphism frequencies. Genetics. 2007;177
:2251–2261. [PMC free article] [PubMed]
Welch JJ, Eyre-Walker A, Waxman D. Divergence and polymorphism under the nearly neutral theory of molecular evolution. Journal of Molecular Evolution. 2008;67:418–426. [PubMed]
Kryukov GV, Pennacchio LA, Sunyaev SR. Most rare missense alleles are deleterious in humans: Implications for complex disease and association studies. American Journal of Human Genetics. 2007;80
:727–739. [PMC free article] [PubMed]
Yampolsky LY, Kondrashov FA, Kondrashov AS. Distribution of the strength of selection against amino acid replacements in human proteins. Hum Mol Genet. 2005;14:3191–3201. [PubMed]
Gutenkunst RN, Hernandez RD, Williamson SH, Bustamante CD. Inferring the joint demographic history of multiple populations from multidimensional SNP frequency data. PLoS Genet. 2009;5:e1000695. doi:
10.1371/journal.pgen.1000695. [PMC free article] [PubMed]
Nielsen R, Hubisz MJ, Hellmann I, Torgerson D, Andrés AM, et al. Darwinian and demographic forces affecting human protein coding genes. Genome Research. 2009;19:838–849. [PMC free article] [PubMed]
Boyko AR, Williamson SH, Indap AR, Degenhardt JD, Hernandez RD, et al. Assessing the evolutionary impact of amino acid mutations in the human genome. PLoS Genet. 2008;4:e1000083. doi: 10.1371/
journal.pgen.1000083. [PMC free article] [PubMed]
Torgerson DG, Boyko AR, Hernandez RD, Indap A, Hu X, et al. Evolutionary processes acting on candidate cis-Regulatory regions in humans inferred from patterns of polymorphism and divergence. PLoS
Genet. 2009;5:e1000592. doi: 10.1371/journal.pgen.1000592. [PMC free article] [PubMed]
Zollner S, Wen X, Pritchard JK. Association mapping and fine mapping with TreeLD. Bioinformatics. 2005;21:3168–3170. [PMC free article] [PubMed]
Hernandez RD. A flexible forward simulator for populations subject to selection and demography. Bioinformatics. 2008;24:2786–2787. [PMC free article] [PubMed]
McCulloch CE, Searle SR. Generalized, Linear, and Mixed Models. Hoboken, NJ, USA: John Wiley & Sons, Inc; 2000.
Wedderburn RWM. Quasi-likelihood functions, generalized linear models, and the Gauss–Newton method. Biometrika. 1974;61:439–447.
Heyde CC. Quasi-likelihood and its application. Springer; 1997.
Littel RC, Milliken GA, Stroup WW, Wolfinger RD. SAS system for mixed models. SAS Inst; 1996.
R Development Core Team. R: A Language and Environment for Statistical Computing. 2009. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org. ISBN 3-900051-07-0.
Victor RG, Haley RW, Willett DL, Peshock RM, Vaeth PC, et al. The dallas heart study: a population-based probability sample for the multidisciplinary study of ethnic differences in cardiovascular
health. The American Journal of Cardiology. 2004;93:1473–1480. [PubMed]
Browning JD, Szczepaniak LS, Dobbins R, Nuremberg P, Horton JD, et al. Prevalence of hepatic steatosis in an urban population in the united states: impact of ethnicity. Hepatology (Baltimore, Md)
2004;40:1387–1395. [PubMed]
hon Yau M, Wang Y, Lam KSL, Zhang J, Wu D, et al. A highly conserved motif within the NH2-terminal coiled-coil domain of angiopoietin-like protein 4 confers its inhibitory effects on lipoprotein
lipase by disrupting the enzyme dimerization. The Journal of Biological Chemistry. 2009;284:11942–11952. [PMC free article] [PubMed]
Yin W, Romeo S, Chang S, Grishin NV, Hobbs HH, et al. Genetic variation in ANGPTL4 provides insights into protein processing and function. The Journal of Biological Chemistry. 2009;284:13213–13222. [
PMC free article] [PubMed]
Morgenthaler S, Thilly WG. A strategy to discover genes that carry multi-allelic or mono-allelic risk for common diseases: a cohort allelic sums test (CAST). Mutation Research. 2007;615:28–56. [PubMed]
Zeng K, Mano S, Shi S, Wu C. Comparisons of site- and Haplotype-Frequency methods for detecting positive selection. Mol Biol Evol. 2007;24:1562–1574. [PubMed]
Pickrell JK, Coop G, Novembre J, Kudaravalli S, Li JZ, et al. Signals of recent positive selection in a worldwide sample of human populations. Genome Research. 2009;19:826–837. [PMC free article] [PubMed]
Voight BF, Kudaravalli S, Wen X, Pritchard JK. A map of recent positive selection in the human genome. PLoS Biol. 2006;4:e72. doi: 10.1371/journal.pbio.1000072. [PMC free article] [PubMed]
Pritchard JK, Pickrell JK, Coop G. The genetics of human adaptation: hard sweeps, soft sweeps, and polygenic adaptation. Current Biology: CB. 2010;20:R208–215. [PMC free article] [PubMed]
Ahituv N, Kavaslar N, Schackwitz W, Ustaszewska A, Martin J, et al. Medical sequencing at the extremes of human body mass. American Journal of Human Genetics. 2007;80:779–791. [PMC free article] [PubMed]
Neuhaus JM, Jewell NP. The effect of retrospective sampling on binary regression models for clustered data. Biometrics. 1990;46:977–990. [PubMed]
Barton NH, Turelli M. Effects of genetic drift on variance components under a general model of epistasis. Evolution; International Journal of Organic Evolution. 2004;58:2111–2132. [PubMed]
Hill WG, Goddard ME, Visscher PM. Data and theory point to mainly additive genetic variance for complex traits. PLoS Genet. 2008;4:e1000008. doi: 10.1371/journal.pgen.1000008. [PMC free article] [PubMed]
Articles from PLoS Genetics are provided here courtesy of Public Library of Science
|
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2978703/?tool=pubmed","timestamp":"2014-04-17T10:51:52Z","content_type":null,"content_length":"180076","record_id":"<urn:uuid:beee1580-704d-4c41-a14d-b87bfecf12f7>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hartsdale Math Tutor
Find a Hartsdale Math Tutor
...I have taught Math, ELA, Science, Social Studies, and Music for every grade from K through 5th. Additionally, three of my current Wyzant Students and two of my private students are in
elementary school. I have been a very active tutor in excellent standing with WyzAnt (see my ratings and review...
29 Subjects: including algebra 1, algebra 2, English, SAT math
...I have worked with elementary students for over 15 years in all subjects. I can teach phonics because I have a degree in elementary education and special education. I have taught reading to
students in the past and I'm familiar with phonics-based programs like Recipe for Reading.
32 Subjects: including algebra 1, ESL/ESOL, English, prealgebra
...SAT writing is one of the standardized testing areas in which I have the most confidence, and where I've been able to raise students' scores most dramatically. I received my MFA in Film from
Tisch School of the Arts at NYU. My short films have screened at Sundance, Tribeca, SXSW and many other festivals.
36 Subjects: including probability, calculus, algebra 1, ACT Math
...Yet, given my academic background, I am most proficient in Math. I've been fingerprinted by the Department of Education, Department of Health and State Clearance Registry and I've been cleared
by all three departments. I am certified to administer CPR and Advanced First Aid.
9 Subjects: including prealgebra, grammar, geometry, algebra 2
Hello, My name's Vera, I'm originally from Brazil, but I've been in the USA for over 20 years. I graduated in Education in Brazil, and in Psychology in the US. I've been a teacher for over 15
years, and I really enjoy helping students to improve and achieve their goals.
21 Subjects: including algebra 1, algebra 2, calculus, differential equations
|
{"url":"http://www.purplemath.com/Hartsdale_Math_tutors.php","timestamp":"2014-04-16T07:51:37Z","content_type":null,"content_length":"23674","record_id":"<urn:uuid:f85ccda7-dad6-42a3-83c0-9bad0eefb75b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Exponential Functions
by Amy Burke and Emily Groenink
- The big difference about exponential functions is that the variable is the exponent rather than the base. For example, rather than looking at f(x)=x^2, exponential functions deal with things such as g(x)=2^x. Evaluating an exponential function works the same way as for other functions: pick values for x, plug them in, and simplify. Exponential functions tend to start very small and then grow very quickly.
- For example, we can look at g(x)=2^x in Figure 1 more closely. First, we can make a table. From the table, you can see that the more negative the x values get, the closer the function gets to zero. Additionally, even though the x values only increase by increments of one, the function increases much more rapidly.
- After viewing the first table, one can see that this function only approaches 0 and is never negative. This is not always the case; the function can be translated so that it takes negative values. To translate the function vertically along the y-axis, you add or subtract a value that is not in the exponent. To translate the function horizontally, the added or subtracted value goes in the exponent with the variable. For example, if you change the first function to h(x)=(2^x)-2, the new table would look just as Figure 2 does.
- As with many functions, there can also be horizontal translations. For a horizontal translation, the original function could be changed to k(x)=2^(x+2). Since the addition of 2 is in the parentheses with the x in the exponent, it applies directly to the x rather than to the y, and translates the graph horizontally. The new table would look like Figure 3.
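If you want to reproduce the three tables yourself, a few lines of Python will do it (the functions simply mirror the ones above):

xs = range(-3, 4)

def g(x): return 2 ** x            # Figure 1: g(x) = 2^x
def h(x): return 2 ** x - 2        # Figure 2: h(x) = 2^x - 2  (vertical shift)
def k(x): return 2 ** (x + 2)      # Figure 3: k(x) = 2^(x+2)  (horizontal shift)

for x in xs:
    print(x, g(x), h(x), k(x))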
From these tables, if you were to infer how they would be represented graphically, you could imagine they'd look something like this: *(Figure 1a directly correlates to the function and table of
Figure 1, as does Figure 2a to Figure 2 and Figure 3a to Figure 3)
Please note that due to the translations, each graph has a different scale, too. The first graph has increments of 1 while the second two increase by 2's. Also, in order to have the best view of each
graph, they were manipulated within the viewing box to best fit the function.
Nayland College Mathematics http://maths.nayland.school.nz/
- A property of these functions that you may not be used to is that their range does not run from negative infinity to positive infinity. As you can see, there is always a horizontal asymptote. In the original function, the asymptote is at y=0. This makes sense because a positive number raised to ANY power (even a negative number or a fraction) still gives a positive number, no matter how close the result gets to 0.
- In Figure 2a, the function was translated vertically, so the horizontal asymptote is now at y=-2. In Figure 3a, the horizontal asymptote is still at y=0, since there was no vertical translation.
- According to Wolfram|Alpha, an asymptote is a line or curve that the function approaches but never crosses. An asymptote can be vertical, horizontal, or oblique along a linear function, but right now we are only dealing with horizontal asymptotes. You can also visit Purple Math for more information regarding this example.
A logarithmic function is the inverse of an exponential function. Many times, a "log" is used to manipulate the variable and get it out of the exponent; this is possible precisely because of that inverse relationship. By applying the log, one is able to work with the variable and solve for it much more easily. Since the log is the inverse of the exponential functions we looked at before,
this is an image of how the two functions are graphed side by side:
For more information: http://people.richland.edu/james/lecture/m116/logs/logs.html
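A quick numeric check of that inverse relationship, in Python (purely for illustration):

import math

for x in [-2, -1, 0, 1, 2, 3]:
    y = 2 ** x
    print(x, y, math.log2(y))      # log base 2 undoes 2^x, so the last column equals x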
*More information on Logarithms can be found later under Real World Applications (and
Extended Practice
Michigan Standards Related to Exponential Functions:
Common Core Standards
• Construct and compare linear, quadratic, and exponential models and solve problems.
□ Distinguish between situations that can be modeled with linear functions and with exponential functions.
☆ Prove that linear functions grow by equal differences over equal intervals, and that exponential functions grow by equal factors over equal intervals.
☆ Recognize situations in which one quantity changes at a constant rate per unit interval relative to another.
☆ Recognize situations in which a quantity grows or decays by a constant percent rate per unit interval relative to another.
□ Construct linear and exponential functions, including arithmetic and geometric sequences, given a graph, a description of a relationship, or two input-output pairs (include reading these from
a table).
□ Observe using graphs and tables that a quantity increasing exponentially eventually exceeds a quantity increasing linearly, quadratically, or (more generally) as a polynomial function.
□ For exponential models, express as a logarithm the solution to ab^(ct) = d where a, c, and d are numbers and the base b is 2, 10, or e; evaluate the logarithm using technology.
Michigan Merit Curriculum Standards:
Standard A3: Families of Functions
• A3.2 Exponential and Logarithmic Functions
□ A3.2.2 Interpret the symbolic forms and recognize the graphs of exponential and logarithmic functions.
□ A3.2.3 Apply properties of exponential and
logarithmic functions.
Real World Applications (and Extended Practice)
It is important not only to know how to solve exponential and logarithmic equations but also to be able to apply that knowledge to real-world examples. The following site does a great job of giving examples that use exponential functions and logarithms to solve problems involving interest rates, mortgages, population growth, radioactive decay and earthquakes. It also gives step-by-step solutions to each of the given problems, along with more examples that you can try on your own. Using real-world applications makes a difficult topic such as this one more interesting and gives a reason for understanding exponential functions and logarithms.
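For instance, two of the standard applications mentioned above can be computed directly. The numbers in this Python sketch are invented purely for illustration:

import math

# compound interest with continuous compounding: A = P * e^(r*t)
P, r, t = 1000.0, 0.05, 10
print(P * math.exp(r * t))

# radioactive decay: N(t) = N0 * (1/2)^(t / half_life)
N0, half_life, t = 80.0, 5730.0, 2000.0      # carbon-14 half-life in years
N = N0 * 0.5 ** (t / half_life)
print(N)

# recovering the elapsed time requires a logarithm: t = half_life * log(N/N0) / log(1/2)
print(half_life * math.log(N / N0) / math.log(0.5))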
Exponential Word Problem Practice Logarithmic Problem Practice Back to Top of Page
Helpful Videos and Images:
This video from ThinkWell goes over the same example that has been shown above: f(x)=2^x. The professor in this video does a great job of explaining how to set up a table, plot points, and graph this
exponential function. He also compares f(x)=2^x to other functions such as g(x)=3^x. He discusses the concept of asymptotes and looks at patterns found in the functions he graphs. He then relates
these patterns to the function h(x)=(1/2)^x and shows how h(x) is similar to f(x). (See the
ThinkWell webpage
YouTube page
for more tutorials from this company.)
Exponential Function Concept Map Back to Top of Page
Citations for Pictures
- Picture provided twice
- Provided by k0re
Figures 1, 2, 3, 1a, 2a, 3a - Created and Provided by Amy Burke
Concept Map - Created and Provided by Amy Burke and Emily Groenink
|
{"url":"http://hs-mathematics.wikispaces.com/Exponential+Functions?responseToken=09d74122e35556090d635f82d55ed8827","timestamp":"2014-04-25T02:31:15Z","content_type":null,"content_length":"70932","record_id":"<urn:uuid:e0797943-5c16-4d4e-b33f-d356c6fb6386>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What Is Marginal Costing?
Marginal costing is the amount that the variable cost of producing an item goes up for each unit manufactured. For instance, if 20 cars are built at a cost of $10,000 US Dollars (USD), and another is
built raising the cost by $4,000 USD for a total of $14,000 USD, the marginal costing for one car would be $4,000 USD. Mathematical formulas are used to figure the exact amount, and various cost
considerations, such as payroll, are excluded since it is not normally increased by simply increasing the production amount. This is the reason that simply dividing the total cost by the number of
units made will not give an accurate marginal costing rate. Each variable must be accounted for, or eliminated, in order to produce the actual amount of the increase for the unit made.
The basic formula for figuring marginal costing can vary depending upon the industry and the specific context, but basically it is: marginal costing equals the cost of labor plus the cost of materials to make the one unit, plus any extra expenses and overhead costs incurred by building the one unit. To show this concept in a working context, the previous example can be extended. In this case, the materials and overhead for the extra car come to $4,000 USD and an extra hour of labor costs the company $200 USD, so the marginal costing for the one extra car being built would be $4,200 USD.
The marginal costing of a company helps to determine the total amount of profit made for a given period of time. To put this idea into basic terms, the income earned from one unit minus the variable cost of the unit equals the contribution amount for that unit. This per-unit contribution is then multiplied by the number of units sold to give the total contribution amount. That number is then placed into another formula: total contribution minus total fixed costs equals the profit earned.
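Written out as a tiny calculation (Python; the figures are invented for illustration only):

def marginal_cost(labor, materials, extra_overhead):
    # cost added by producing one more unit
    return labor + materials + extra_overhead

def profit(price_per_unit, variable_cost_per_unit, units, fixed_costs):
    contribution_per_unit = price_per_unit - variable_cost_per_unit
    return contribution_per_unit * units - fixed_costs

print(marginal_cost(labor=200, materials=4000, extra_overhead=0))    # 4,200 as in the car example
print(profit(price_per_unit=12000, variable_cost_per_unit=4200, units=21, fixed_costs=50000))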
Many different numbers are used in the accounting equation to formulate exactly how much a company makes within a set period of time. Marginal costing is one such number that indicates the increase
in cost, including all variables, incurred for making that one unit. Companies not only use this number to figure their ending profits or losses, but to also analyze the cost of each unit in an
attempt to lower total debits and increase profits for the business.
|
{"url":"http://www.wisegeek.com/what-is-marginal-costing.htm","timestamp":"2014-04-21T08:39:31Z","content_type":null,"content_length":"64927","record_id":"<urn:uuid:770f6be0-6627-4d7a-9466-7dbf3e2cd60e>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[1] Axel Gandy and Georg Hahn. MMCTest - a safe algorithm for implementing multiple Monte Carlo tests. Scandinavian Journal of Statistics, 2014. Early View. Also available via arXiv:1209.3963
[stat.ME]. [ bib | DOI | http ]
[2] F. Din-Houn Lau and Axel Gandy. Optimality of non-restarting cusum charts. Sequential Analysis, 32(4):458-468, 2013. [ bib | DOI ]
[3] Ioannis Phinikettos and Axel Gandy. An omnibus cusum chart for monitoring time to event data. Lifetime Data Analysis, pages 1-14, 2013. [ bib | DOI ]
Keywords: Omnibus; CUSUM; Control chart; Kolmogorov–Smirnov
[4] Marc Henrion, David J. Hand, Axel Gandy, and Daniel J. Mortlock. CASOS: a subspace method for anomaly detection in high dimensional astronomical databases. Statistical Analysis and Data Mining,
6(1):53-72, 2013. [ bib | DOI ]
[5] Axel Gandy and Patrick Rubin-Delanchy. An algorithm to compute the power of Monte Carlo tests with guaranteed precision. Annals of Statistics, 41(1):125-142, 2013. [ bib | DOI ]
[6] Axel Gandy and F. Din-Houn Lau. Non-restarting CUSUM charts and control of the false discovery rate. Biometrika, 100, 2013. [ bib | DOI ]
[7] Axel Gandy and Jan Terje Kvaløy. Guaranteed conditional performance of control charts via bootstrap methods. Scandinavian Journal of Statistics, 40:647-668, 2013. [ bib | DOI ]
[8] Axel Gandy. Performance monitoring of credit portfolios using survival analysis. International Journal of Forecasting, 28:139-144, 2012. [ bib | DOI ]
[9] Axel Gandy and Luitgard A. M. Veraart. The effect of estimation in high-dimensional portfolios. Mathematical Finance, 2012. [ bib | DOI ]
[10] Marc Henrion, Daniel J. Mortlock, David J. Hand, and Axel Gandy. A Bayesian approach to star-galaxy classification. Monthly Notices of the Royal Astronomical Society, 412(4):2286-2302, 2011. [
bib | DOI ]
Keywords: methods: statistical, surveys
[11] Ioannis Phinikettos and Axel Gandy. Fast computation of high-dimensional multivariate normal probabilities. Computational Statistics & Data Analysis, 55(4):1521 - 1529, 2011. [ bib | DOI ]
Keywords: Multivariate normal distribution
[12] A. Gandy, J. T. Kvaloy, A. Bottle, and F. Zhou. Risk-adjusted monitoring of time to event. Biometrika, 97(2):375-388, 2010. [ bib | DOI ]
Recently there has been interest in risk-adjusted cumulative sum charts, CUSUMS, to monitor the performance of e.g. hospitals, taking into account the heterogeneity of patients. Even though
many outcomes involve time, only conventional regression models are commonly used. In this article we investigate how time to event models may be used for monitoring purposes. We consider
monitoring using CUSUMS based on the partial likelihood ratio between an out-of-control state and an in-control state. We consider both proportional and non-proportional alternatives, as
well as a head start. Against proportional alternatives, we present an analytic method of computing the expected number of observed events before stopping or the probability of stopping
before a given observed number of events. In a stationary set-up, the former is roughly proportional to the average run length in calendar time. Adding a head start changes the threshold
only slightly if the expected number of events until hitting is used as a criterion. However, it changes the threshold substantially if a false alarm probability is used. In simulation
studies, charts based on survival analysis perform better than simpler monitoring schemes. We present one example from retail finance and one medical application.
[13] Axel Gandy. Sequential implementation of Monte Carlo tests with uniformly bounded resampling risk. Journal of the American Statistical Association, 104(488):1504-1511, 2009. [ bib | DOI ]
This paper introduces an open-ended sequential algorithm for computing the p-value of a test using Monte Carlo simulation. It guarantees that the resampling risk, the probability of a
different decision than the one based on the theoretical p-value, is uniformly bounded by an arbitrarily small constant. Previously suggested sequential or nonsequential algorithms, using a
bounded sample size, do not have this property. Although the algorithm is open-ended, the expected number of steps is finite, except when the p-value is on the threshold between rejecting
and not rejecting. The algorithm is suitable as standard for implementing tests that require (re)sampling. It can also be used in other situations: to check whether a test is conservative,
iteratively to implement double bootstrap tests, and to determine the sample size required for a certain power. An R-package implementing the sequential algorithm is available online.
[14] Axel Gandy and Uwe Jensen. Model checks for Cox-type regression models based on optimally weighted martingale residuals. Lifetime Data Analysis, 15(4):534-557, 2009. [ bib | DOI ]
We introduce directed goodness-of-fit tests for Cox-type regression models in survival analysis. 'Directed' means that one may choose against which alternatives the tests are particularly
powerful. The tests are based on sums of weighted martingale residuals and their asymptotic distributions. We derive optimal tests against certain competing models which include Cox-type
regression models with different covariates and/or a different link function. We report results from several simulation studies and apply our test to a real dataset.
[15] Axel Gandy, Terry M. Therneau, and Odd O. Aalen. Global tests in the additive hazards regression model. Statistics in Medicine, 27:831-844, 2008. [ bib | DOI ]
In this article, we discuss testing for the effect of several covariates in the additive hazards regression model. Bhattacharyya and Klein (Statist. Med. 2005; 24(14):2235-2240) note that an
ad hoc weight function suggested by Aalen (Statist. Med. 1989; 8:907-925) is inconsistent when used as a global test for comparing groups since the test statistic depends on which group is
used as the baseline group. We will suggest a simple alternative test that does not exhibit this problem. This test is a natural extension of the logrank test. We shall also discuss an
alternative covariance estimator. The tests are applied to a data set and a simulation study is performed.
Keywords: survival analysis;additive model;logrank test
[16] Axel Gandy, Patrick Jäger, Bernd Bertsche, and Uwe Jensen. Decision support in early development phases - a case study from machine engineering. Reliability Engineering & System Safety, 92
(7):921-929, 2007. [ bib | DOI ]
[17] Axel Gandy and Uwe Jensen. On goodness of fit tests for Aalen's additive risk model. Scand. J. Statist., 32:425-445, 2005. [ bib | DOI | .pdf ]
This is an electronic version of an article published in Scandinavian Journal of Statistics. Complete citation information for the final version of the paper, as published in the print edition of Scandinavian Journal of Statistics, is available on the Blackwell Synergy online delivery service, accessible via the journal's website at http://www.blackwellpublishing.com
[18] Axel Gandy and Uwe Jensen. Checking a semiparametric additive risk model. Lifetime Data Anal., 11(4):451-472, 2005. [ bib | DOI ]
[19] Axel Gandy, Uwe Jensen, and Constanze Lütkebohmert. A Cox model with a change-point applied to an actuarial problem. Brazilian Journal of Probability and Statistics, 19(2):93-109, 2005. [ bib ]
Available at: http://www.redeabe.org.br/bjpspublishedpapers_volume19_2_pp093-109.pdf
[20] Axel Gandy. Effects of uncertainties in components on the survival of complex systems with given dependencies. In Alyson Wilson, Sallie Keller-McNulty, Nikolaos Limnios, and Yvonne Armijo,
editors, Mathematical and Statistical Methods in Reliability. World Scientific, Singapore, 2005. Proceedings of the Conference “Mathematical Models in Reliability” in Santa Fe, NM, USA, 2004. [
bib ]
[21] Axel Gandy and Uwe Jensen. A nonparametric approach to software reliability. Appl. Stoch. Models Bus. Ind., 20:3-15, 2004. [ bib | DOI ]
[1] Mei-Ling Ting Lee, Mitchell Gail, Ruth Pfeiffer, Glen Satten, Tianxi Cai, and Axel Gandy, editors. Risk Assessment and Evaluation of Predictions. Lecture Notes in Statistics 210. Springer, 2013.
[ bib ]
[2] Axel Gandy and Roberto Trotta. Special issue on astrostatistics - editorial. Statistical Analysis and Data Mining, 6(1):1-2, 2013. [ bib | DOI ]
[3] Axel Gandy and Jan Terje Kvaløy. Contribution to the discussion on the article “Statistical methods for healthcare regulation: rating, screening and surveillance” by D. Spiegelhalter, C.
Sherlaw-Johnson, M. Bardsley, I. Blunt, C. Wood and O. Grigg. Journal of the Royal Statistical Society, Series A, 175:Early View, 2012. [ bib ]
[4] Axel Gandy. Contribution to the discussion on the article “Stability selection” by N. Meinshausen and P. Bühlmann. Journal of the Royal Statistical Society, Series B, 72:458-459, 2010. [ bib ]
[5] Axel Gandy and Ian W McKeague. Aalen's additive risk model. Encyclopedia of Statistical Sciences, 2008. [ bib ]
[6] Axel Gandy. Contribution to the discussion on the article “Semiparametric analysis of case series data” by C. P. Farrington and H. J. Whitaker. Journal of the Royal Statistical Society, Series C
, 55:589-590, 2006. [ bib ]
[7] Axel Gandy. Directed Model Checks for Regression Models from Survival Analysis. Logos Verlag, Berlin, 2006. Dissertation, Universität Ulm. [ bib | .pdf ]
Copyright Logos Verlag Berlin, http://www.logos-verlag.de/cgi-bin/engbuchmid?isbn=1144&lng=deu&id=
Dissertation prize (Promotionspreis) of the Universitätsgesellschaft Ulm; see http://vts.uni-ulm.de/docs/2006/5683/vts_5683_7512.pdf, pp. 4, 7.
[8] Axel Gandy and Uwe Jensen. Checking a semi-parametric additive risk model. Technical Report. University of Hohenheim, 2005. [http://www.uni-hohenheim.de/ jensen/veroeffent.html]. [ bib ]
[9] Axel Gandy, Thilo Köder, Uwe Jensen, and Wolfgang Schinköthe. Ausfallverhalten bürstenbehafteter Kleinantriebe. Mechatronik F&M, 11-12:14-17, 2005. [ bib ]
[10] Patrick Jäger, Michael Wedel, Axel Gandy, Bernd Bertsche, Peter Göhner, and Uwe Jensen. Zuverlässigkeitsbewertung softwareintensiver mechatronischer Systeme in frühen Entwicklungsphasen. In
Mechatronik 2005 - Innovative Produktentwicklung, number 1892 in VDI-Berichte, pages 873-898. VDI Verlag, 2005. Contribution to the VDI conference, 1-2 June 2005, Wiesloch near Heidelberg. [
bib ]
[11] Axel Gandy. A nonparametric additive risk model with applications in software reliability. Diplomarbeit, Universität Ulm, 2002. [ bib ]
|
{"url":"http://www2.imperial.ac.uk/~agandy/lit.html","timestamp":"2014-04-19T09:37:41Z","content_type":null,"content_length":"23733","record_id":"<urn:uuid:80ca63d1-f40b-4c96-9316-fb9f61e78673>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
|
perplexus.info :: Probability : A blue taxi
A crime has occurred in Carborough, involving a taxi. The police interviewed an eyewitness, who stated that the taxi involved was blue.
The police know that 85% of taxis in Carborough are blue, the other 15% being green. They also know that statistically witnesses in these situations tend to be correct 80% of the time - which means
they report things wrong the other 20% of the time.
What is the probability that the taxi involved in the crime was actually blue?
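One way to check your answer is a direct Bayes' rule calculation. The short Python below assumes the witness's 80% accuracy is the same whichever color the taxi really was:

p_blue, p_green = 0.85, 0.15
p_report_blue_given_blue = 0.80
p_report_blue_given_green = 0.20

posterior = (p_report_blue_given_blue * p_blue) / (
    p_report_blue_given_blue * p_blue + p_report_blue_given_green * p_green)
print(round(posterior, 3))    # about 0.958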
|
{"url":"http://perplexus.info/show.php?pid=77&cid=251","timestamp":"2014-04-21T12:08:46Z","content_type":null,"content_length":"12716","record_id":"<urn:uuid:baade4f3-5f0a-4164-8389-fdeb927ed384>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Desoto Algebra Tutor
Find a Desoto Algebra Tutor
...Thanks to WyzAnt, I can help kids around me in this area; I hope I can reach you too. I have always worked with kids and enjoy working with children. So, please feel free to let me know if I can help you.
13 Subjects: including algebra 1, algebra 2, calculus, autism
I have taught Mathematics at the High School level for the previous three years. Classes that I have taught include Algebra 2, Geometry, Precalculus, Calculus and a couple of Engineering courses.
The way that I tutor is by building students' confidence in their abilities, starting with basic proble...
13 Subjects: including algebra 1, algebra 2, chemistry, calculus
...As a medical student and former paramedic I have numerous real world examples and a unique knack for relating complex biological topics with simple metaphors. I received an A in both semesters
of chemistry from the University of Texas at Dallas and have become exceedingly familiar with its basic...
15 Subjects: including algebra 1, algebra 2, chemistry, physics
...With students in grades 4-6, I have used the word study sections of the Spectrum Phonics series. Over the past three and a half years, as a tutor with C2 Education, I have prepared several
students for the Lower Level and Middle Level of the ISEE. These students ended up doing very well on the test and were admitted to the private school of their choice.
25 Subjects: including algebra 2, algebra 1, reading, English
...Here I have the opportunity to give you a brief overview of my academic credentials, my experience tutoring and teaching, and my overall approach as a tutor. I graduated high school with Honors
in Dallas in 2001, having completed AP Calculus. I then attended Stephen F.
6 Subjects: including algebra 1, geometry, elementary math, linear algebra
|
{"url":"http://www.purplemath.com/Desoto_Algebra_tutors.php","timestamp":"2014-04-21T02:44:49Z","content_type":null,"content_length":"23787","record_id":"<urn:uuid:87c30e3c-d776-406e-b732-78307bfc9ad9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
|
CGTalk - scripted controller - shortest distance to splineShape (knots)
11-18-2010, 05:47 AM
For some reason search is down... if this has been covered, I apologize.
Inspired by the hexagon screen forms here. (http://www.notcot.com/archives/2010/11/mercedes-benz-sculpture-experi.php) And by the work of Ali Torabi (http://www.torabiarchitect.com/blog/?page_id=190)
I wrote a scripted controller that finds the shortest distance to a splineShape. In my crude method, I added a Normalize Spl. Mod to the shape then collapsed it to work with the splineShape. The
script makes an array of all the distances, sorts the array then uses the first value. I'm sure there is a better/faster way of doing this. And maybe a way to not use the Normalize Spline mod.
Is there a way to get an array of points evenly spaced along a spline without Normalize Spl. mod?
Is there a faster way to find the shortest distance?
This controller goes into hundreds of objects to control properties in proximity to the shape.
theMin, theMax, theFactor are constants.
nPos is the position track of the object, an ngon in this case.
-- gather the distance from nPos to every knot on the shape
theDistances = #()
for s = 1 to (numSplines theShape) do
(
    for k = 1 to (numKnots theShape s) do
    (
        knt = getKnotPoint theShape s k
        append theDistances (distance nPos knt)
    )
)
sort theDistances -- smallest distance ends up in theDistances[1]
-- map the shortest distance into the [theMin, theMax] range
if theDistances[1] < theFactor then
(
    if (theDistances[1]/theFactor) < theMin then theMin else (theDistances[1]/theFactor)
)
else theMax
|
{"url":"http://forums.cgsociety.org/archive/index.php/t-935828.html","timestamp":"2014-04-18T21:06:37Z","content_type":null,"content_length":"9166","record_id":"<urn:uuid:be749552-ffdb-4e6b-9617-3b62f0d0967a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Which sentence is correct? a) It's the wonderfulest recipe. b) It's the more better method. c) He's the shortest of the twins. d) Joe deals more honestly than his partner.
C) He's the shortest of the twins..... I think that one's right.
Actually, I disagree with MetalPen. It's d) Joe deals more honestly than his partner. This one you can solve through process of elimination. A) Wonderfulest isn't a word, so that's obviously out. B) More better is not proper English, so that's also obviously out. C is tricky. Usually, shortest is correct, but in this case he is one of two. The ending "-est" is usually used with a group of 3 or more. Therefore it should be "He's the shortER of the twins." Thus, d is correct.
I agree with lperkows, the correct answer is D. Shouldn't use the superlative (shortest) with just two (twins). That's really tricky and requires taking into account more than just the grammar!
|
{"url":"http://openstudy.com/updates/4f04e1a9e4b075b56651f69f","timestamp":"2014-04-20T13:44:23Z","content_type":null,"content_length":"33019","record_id":"<urn:uuid:8c1a24ac-2567-44d6-b8d5-1f68aa84fcf9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Foundations of Algebra
Chapter 12, Lesson 9
12-9.A find surface areas of familiar solid figures
The side of the square base of a prism is 4.5 cm and the height of the prism is 9.5 cm. Find the surface area
of the prism.
12-9.B find volumes of familiar solid figures
The diameter of a sphere is 1 foot. Find the volume of the sphere correct to the nearest tenth of a cubic inch.
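A worked sketch of both problems, using the standard formulas (treating the prism as a right prism with a square base; the numbers are taken from the statements above):

Surface area: SA = 2s^2 + 4sh = 2(4.5)^2 + 4(4.5)(9.5) = 40.5 + 171 = 211.5 cm^2

Volume: the diameter is 1 ft = 12 in, so r = 6 in and V = (4/3)πr^3 = (4/3)π(6)^3 = 288π ≈ 904.8 in^3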
|
{"url":"http://www.sadlier-oxford.com/math/mc_prerequisite.cfm?sp=student&grade=8&id=1871","timestamp":"2014-04-18T23:15:19Z","content_type":null,"content_length":"15080","record_id":"<urn:uuid:368a4915-330d-495c-a45e-c57b0fa9f263>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
April 30th 2008, 04:02 PM #1
Apr 2008
help please asap
2 quick questions:
Find the equation of the line tangent to the curve y^2+xy+x^3=7 at the point x=2.
Given the function f(x)=ln(3+sinx) on the interval [0,7]
a. at which values of x does f have local max and local min
do you know how to differentiate implicitly? you can do that and solve for y'. you can use that to find the slope. then the equation of the tangent line is given by $y - y_1 = m(x - x_1)$, just
solve for y. Here, $m$ is the slope, given by the derivative that you found earlier, $(x_1,y_1)$ is a point the line passes through. it will be $(2,y_1)$ you can solve for $y_1$ using the initial
equation, you'll need to do this anyway to get the slope.
Given the function f(x)=ln(3+sinx) on the interval [0,7]
a. at which values of x does f have local max and local min
find $f'(x)$ and $f''(x)$.
To find the critical points, set $f'(x) = 0$ and solve for $x$ (only take the values between 0 and 7 inclusive.
To find whether the point you found above are max's or min's, check the following:
If $f''(a) < 0$ then we have a local max
If $f''(a) > 0$ then we have a local min
i know how to do both the problems. i wanted to see someone else do the work to see if my answers for the problem were correct. so if you could show your work i would greatly apperciate it.
for the first question i got y= -6x+12
and for the second answer i got local max @ x=2 and local min @ x=5
i dont know if these are correct or not
when $x = 2$, we have:
$y^2 + 2y + 8 = 7$
$\Rightarrow y^2 + 2y + 1 = 0$
$\Rightarrow (y + 1)^2 = 0$
$\Rightarrow y = -1$
So that we are concerned with the point $(2,-1)$, this will be our $(x_1,y_1)$.
Now, $y^2 + xy + x^3 = 7$
$\Rightarrow 2y~y' + y + x~y' + 3x^2 = 0$
$\Rightarrow (2y + x)y' = -(3x^2 + y)$
$\Rightarrow y' = - \frac {3x^2 + y}{2y + x}$
at $(2,-1)$, the slope is undefined. So that the tangent line is just $x = 2$
Given the function f(x)=ln(3+sinx) on the interval [0,7]
a. at which values of x does f have local max and local min
$f(x) = \ln (3 + \sin x)$
$\Rightarrow f'(x) = \frac {\cos x}{3 + \sin x}$ ............by the chain rule
$\Rightarrow f''(x) = \frac {-3 \sin x - \sin^2 x - \cos^2 x }{(3 + \sin x)^2} = \frac {-3 \sin x - 1}{(3 + \sin x)^2}$
$f'(x) = 0 \implies \cos x = 0 \implies x = \frac {\pi}2,~\frac {3 \pi}2$ for $0 \le x \le 7$
$f''\left( \frac {\pi}2 \right) < 0$, so we have a local max at $x = \frac {\pi}2$
$f'' \left( \frac {3 \pi}2 \right) > 0$, so we have a local min at $x = \frac {3 \pi}2$
thank you very much
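(Editorial sketch, not part of the original thread.) Both answers above can be cross-checked numerically, for instance with SymPy; treat the snippet as illustrative rather than as the thread's own method:

import sympy as sp

x, y = sp.symbols("x y")

# Implicit curve y^2 + x*y + x^3 = 7.  dy/dx = -F_x/F_y is undefined where
# 2y + x = 0, which happens at (2, -1), so the tangent there is the line x = 2.
F = y**2 + x*y + x**3 - 7
dydx = -sp.diff(F, x) / sp.diff(F, y)      # equals -(3x^2 + y)/(2y + x)
print(dydx.subs({x: 2, y: -1}))            # zoo -> slope undefined, vertical tangent

# f(x) = ln(3 + sin x): second-derivative test at the critical points pi/2, 3pi/2.
f = sp.log(3 + sp.sin(x))
fpp = sp.diff(f, x, 2)
print(fpp.subs(x, sp.pi / 2))              # -1/4 < 0 -> local max
print(fpp.subs(x, 3 * sp.pi / 2))          # 1/2  > 0 -> local min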
|
{"url":"http://mathhelpforum.com/calculus/36723-help-please-asap.html","timestamp":"2014-04-19T16:14:13Z","content_type":null,"content_length":"55557","record_id":"<urn:uuid:f3a0f2b4-1c31-4854-8532-def4c3776381>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Help: Poisson-based problem
September 17th 2009, 05:27 PM #1
Sep 2009
Help: Poisson-based problem
I have a Poisson-based question that I am not sure how to approach and solve. A processor receives groups of data bits with a Poisson arrival rate of L. The probability of an error in receiving
an erroneous bit is p. The number of bits in a group of bits is Poisson with mean M. If there is no error correction (meaning retransmission) allowed, at what rate do groups of bits get to the
My initial thought is to multiply L and p together to get a basic "success" rate, then divide by M, but there's no way this is that simple. I am not sure what the correct way is to solve this
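One way to set this up (a sketch, not a checked answer to the exercise): a group gets through only if every bit in it is received correctly. If a group has N bits with N ~ Poisson(M), then P(group error-free) = E[(1-p)^N] = e^(-Mp), using the Poisson probability generating function E[z^N] = e^(M(z-1)) evaluated at z = 1-p. Thinning the Poisson arrival stream by this probability then gives an error-free group rate of L·e^(-Mp).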
|
{"url":"http://mathhelpforum.com/advanced-statistics/102886-help-poisson-based-problem.html","timestamp":"2014-04-20T07:07:06Z","content_type":null,"content_length":"31378","record_id":"<urn:uuid:38c1c62a-6805-462b-8540-f0cba3e667f0>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Diablo 3 Skills
Monster Power Analysis by Drothvader
When I first heard about the Monster Power system with 1.0.5 I was pretty excited. Like really excited.
Not only is the game going to be more challenging, but there would be better rewards... right?
I keep seeing people say "MP1 is the most efficient way to farm" and I didn't really understand why. I kind of dismissed it as people just like killing stuff faster.
However, the more I looked at it, the more I realize this... Blizzard, out of all the things you could have possibly screwed up, this one takes the cake. I mean, this system is pretty bad...
So I'm going to quote Bashiok here.
Aw man... Damn. Yeah the description in the blog this morning is missing a crucial "for increased rewards" in that sentence. That's my fault. Thanks for the call out.
We'll of course have more info in a proper announcement piece on exactly how it'll work, but I'm digging the speculation!
So I thought... "Hey, this could be pretty neat! Increased Difficulty for Increased Rewards!"
However, today I looked at the numbers and ran an analysis... Blizzard, there's this nifty spreadsheet calculation program called Excel, it might be helpful to open it every once in a while... Just
saying guys...
What I found, is that playing anything other than MP1 was HORRIBLY inefficient. Not even just slightly inefficient, but like HORRIBLE! Like, I don't even want to say that I wasted the hour making up
these spreadsheets...
- Explaining the Math -
Alright, so the main argument here about efficiency is the speed at which you can kill monsters. Based on the Monster Health increases for Inferno, I am going to assume that if it takes 10 seconds to
kill an elite in MP0, it will take 343.9 seconds to kill it in MP10. (Not really considering how much movement I'll have to make, but for the sake of simplicity I'm going to leave out these variables
such as movement speed and time spent NOT DPSing...)
I'm also assuming that if it says 150% Health it actually has 50% more health not 2.5 times as much (150% More) If it actually is 2.5 times more HP then I'll have to redo the spreadsheet... but be
warned that it will NOT work in Blizzard's favor.
A Monster with 2000 HP takes about 10 seconds to kill doing 200 DPS.
Likewise, a Monster with 68,780 (2,439% more HP) takes about 343.9 seconds to kill at 200 DPS.
So DPS is going to be a constant in this example.
Obviously you're going to play at whatever level you're comfortable with, but just know for the example I'm calculating your efficiency at whatever DPS you currently have.
One other thing you need to know before I start is this formula.
Probability of at least one success over independent events:
1 - ( Chance to NOT get an item ^ Number of Items Rolled )
Each roll is independent of the last (no drop ever becomes 'due' - that would be the gambler's fallacy), but across many rolls this formula gives the chance of getting the item at least once, which is what I need to calculate efficiency.
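To make that concrete, here is a tiny sketch of how the formula plays out against kill speed (Python; the drop chance and kill times are made-up numbers in the same spirit as the hypothetical values used below):

def at_least_one(p, n):
    # chance of seeing at least one drop across n independent kills with per-kill chance p
    return 1 - (1 - p) ** n

base_drop = 0.0005            # hypothetical legendary chance per elite kill
kills_mp0 = 3600 / 10         # 10 s per elite at MP0  -> 360 kills per hour
kills_mp10 = 3600 / 343.9     # same DPS at MP10       -> ~10.5 kills per hour

print(at_least_one(base_drop, kills_mp0))        # ~0.165 per hour at MP0
print(at_least_one(base_drop * 5, kills_mp10))   # ~0.026 even with a 5x drop bonus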
- Analysis -
Alright, now for the good stuff.
Looking at this chart, you'll see a bunch of random stuff. I'll try and explain it as best I can.
First, I have all my base numbers used in my calculations.
Base Magic Find - Magic Find before MP
Base Gold Find - Gold Find before MP
Base Legendary Drop Rate - A Made up number for efficiency calculations.
Average Gold Drop - A Made up number for efficiency calculations.
Time to Kill Elites (MP0 in Seconds) - A Made up number for efficiency calculations.
Hypothetical XP per Monster - A Made up number for efficiency calculations.
So what I did was I took these numbers, then calculated how many elites can be killed in an hour, and what kind of gold / XP / loot you'll see in an hour.
Now, this is assuming 100% of your time is spent killing elites. No stopping DPS, no doing anything else. So just know there's other factors that come into play, but they will impact this chart even
more negatively. For right now I'm just calculating your time spent killing elites, nothing more.
If you notice a trend, EVERYTHING about Monster Power is just a lie... There's no such thing as increased rewards when you factor in this little thing called TIME. The rewards should be proportional
to the TIME spent...
Now let's bring you up to full Paragon Level 100. (375% Magic and Gold Find which includes NV)
Just for more emphasis, I have included some efficiency charts in there too.
Now, I was going to go out of my way to give suggestions on how to balance this system... but I'm not going to waste time doing that... IF you would like me to hammer out numbers to rebalance this
system I will, but only if I know the feedback will actually be read. =/
I am just utterly disappointed... like really disappointed...
Here, let me really blow your mind.
15750% Magic Find at MP10 to be as efficient as MP0.
EDIT: I know that higher MP's are more efficient when you get to the point where you're oneshotting monsters. You can quit acting like I'm stupid...
I'm simply just saying it's poor game design to make a more "challenging" mode then have the entire point be to try and one shot it to maintain the same efficiency you would have 3-4 monster power
levels prior.
The one and only point to this thread is that the only way to maintain farming efficiency is to get to the point where you're oneshotting everything.
"More guts more glory" is a really really inaccurate statement. As long as you're not oneshotting everything, Moving up in Monster Power is not worth your time.
|
{"url":"http://www.diablo3skills.net/","timestamp":"2014-04-18T18:11:19Z","content_type":null,"content_length":"89557","record_id":"<urn:uuid:1b554f4e-8bf1-4eba-aef4-40782dbb60b3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Inductance of Composite Conductor Lines
We will consider a single phase 2 wire system. It consists of two conductors say P and Q which are composite conductors. The arrangement of conductors is shown in the Fig. 1.
Conductor P consists of x identical, parallel filaments, each carrying a current of I/x. Conductor Q consists of y filaments, each carrying a current of -I/y. Conductor Q carries a total current of I amps in the opposite direction to the current in conductor P, as it forms the return path.
The flux linkages of filament say a due to all currents in all the filaments is given by
The inductance of filament a is given by,
The inductance of filament b is given by,
The average inductance of the filaments of conductor P is
Conductor P consists of x parallel filaments. If all the filaments had equal inductances, the inductance of the conductor would be 1/x times the inductance of one filament. In fact the filaments have different inductances, but the inductance of all of them in parallel is 1/x times the average inductance.
Inductance of conductor P is given by,
Substituting the values of L_a, L_b, ..., L_x in the equation and simplifying the expression, we have
In the above expression, the numerator of the argument of the logarithm is the xy-th root of the product of xy terms. These terms are the products of the distances from all the x filaments of conductor P to all the y filaments of conductor Q.
For each filament in conductor P there are y distances to filaments in conductor Q, and there are x such filaments, so the product contains xy distances in all. The xy-th root of the product of these xy distances is called the geometric mean distance between conductors P and Q. It is denoted D_m or GMD and is called the mutual GMD between the conductors.
The denominator of the above expression is the x²-th root of x² terms. There are x filaments, and for each filament there are x terms consisting of r' (denoted D_aa, D_bb, etc.) for that filament times the distances from that filament to every other filament in conductor P.
If we consider the distance D_aa, it is the distance of the filament from itself, which is also written r_a'. This r' of a separate filament is called the self GMD of the filament. It is also called the geometric mean radius, GMR, and is denoted D_s.
Thus the above expression now becomes
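In the notation introduced above, the standard form of this result is

L_P = 2 × 10^-7 ln(D_m / D_s)  H/m

where D_m is the mutual GMD between conductors P and Q and D_s is the self GMD (GMR) of conductor P.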
Comparing this equation with the expression obtained for the inductance of a single phase two wire line, the distance between the solid conductors of the single-conductor line is replaced by the GMD between the conductors of the composite conductor line. Similarly, the GMR (r') of the single conductor is replaced by the GMR of the composite conductor.
The composite conductors are made up of a number of strands which are in parallel. The inductance of composite conductor Q is obtained in a similar manner. Thus the inductance of the line is
L = L_P + L_Q
|
{"url":"http://yourelectrichome.blogspot.com/2011/11/inductance-of-composite-conductor-lines.html","timestamp":"2014-04-20T21:21:35Z","content_type":null,"content_length":"82174","record_id":"<urn:uuid:a6053f04-1103-4137-b918-1f00b58c9196>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Precedence for Idiots
Please note that this tutorial is undergoing revision and expansion, so the comments that follow it may apply to an earlier version. This version is dated: 5-Dec-2006
The Basics - Getting Your Sums Right
If, like me, you don't come from a comp-sci background, then precedence-awareness of operators probably only goes as far as knowing that 2+3*4 means 2+(3*4), and that if you want (2+3)*4, then you'd
better damn well say so.
Beyond that, 5-1+2 might have you scratching your head - which has precedence - the '-' or the '+'? The answer is 'whichever comes first' - they have equal precedence but left associativity, so 5-1+2
is (5-1)+2, but 5+1-2 is (5+1)-2 (although you'll have fun proving that last one).
... and it's worth mentioning for the comp-sci challenged that left-associativity means "for a sequence of operators of equal precedence, give the left-most operator the precedence".
Right-associativity means the reverse. For example, ** (the 'to-the-power-of' operator) has right-associativity, so 2**3**4 is 2**(3**4), not (2**3)**4.
So far, it's all pretty straight-forward. Whether or not you know what the rules for precedence are for the basic maths operators, you are aware that they exist and need to exist, and, if in doubt,
or if you just want to make things clearer for yourself or the code-maintainer, you can always use brackets to make the order of operations explicit.
First Among Equals
So far, so good - that is until we get to the numeric-equality test, '==' and the assignment operator, '='.
The first thing to note (or at least remember) about these is that don't really have anything in common with each other. Nor do either have any strict equivalent in maths (unlike, say, '*' and '/',
It may be tempting to think otherwise, since $x = 2*4 (Perl) seems to behave a bit like X = 2 x 4 (maths). However, since we can use '=' to assign just about anything to $x, including "hello world",
it really doesn't have anything to do with numbers.
In Perl, '==', and its evil-twin, '!=', are perhaps a bit closer to the maths-class meaning of '=', since all are associated with the numeric equality of the calculations on either side - however, in
maths if the two sides don't match the operator, then you've probably made a mistake, whereas in Perl if the two sides don't match the operator, then you've just performed a valid test.
Nevertheless, the notion of precedence for these operators is somewhat confusing - if precedence is important, does that mean that we have to write ($x+$y) == (12/3) to avoid something like $x+($y ==
12/3) happening? And what would that mean anyway?
By and large, you don't need to worry. Both '=' and '==' have such low precedence that they will almost always behave as you expect (and certainly as far as any maths-based functions go), without any
need for parenthesis.
Logical Questions
However, there are some traps when we start combining '==' and '=' with the various logical operators, such as 'and' and 'or', and their alternatives, '&&' and '||', as these do have lower precedence.
For example, (5 or 2 == 12) doesn't mean "does 5 or 2 equal 12?" (which would be false), instead it translates to 5 or (2 == 12), or "if 5 is true or if 2 equals 12" (which is true - 5 is a 'true'
To add to the confusion, '&&' and '||' have a higher precedence than '=', whereas 'and' and 'or' have a lower precedence. This means that $x = 4==5 || 5==5 has quite a different meaning than $x = 4==5 or 5==5 - the first will set $x to 1 ('true') if either 4 or 5 is equal to 5, and will set $x to false if they are not. The second version will set $x to true or false purely on the basis of whether 4 is equal to 5 (and, because that assignment yields a false value, will then go on to evaluate whether 5 equals 5 - without assigning the result to $x).
Below is a short table that will hopefully make all of this a little clearer.
│Function               │Meaning                     │$x is now..│
│$x = 5 == 6 or 5 == 5  │($x = (5 == 6)) or (5 == 5) │FALSE      │
│$x = (5 == 6 or 5 == 5)│$x = ((5 == 6) or (5 == 5)) │TRUE       │
│$x = 5 == 6 || 5 == 5  │$x = ((5 == 6) || (5 == 5)) │TRUE       │
│($x = 5 == 6) || 5 == 5│($x = (5 == 6)) || (5 == 5) │FALSE      │
│$x = 5 || 6 == 6       │$x = (5 || (6 == 6))        │5          │
│$x = (5 || 6) == 6     │$x = ((5 || 6) == 6)        │FALSE      │
│$x = 5 or 6 == 6       │($x = 5) or (6 == 6)        │5          │
│$x = 1 == 2 && 3       │$x = ((1 == 2) && 3)        │FALSE      │
│$x = 1 == 2 || 3       │$x = ((1 == 2) || 3)        │3          │
The real lesson here is that when you start mixing '==' or '=' with any logical operators, get into the habit of using parenthesis... and just to rub that in, let's take a look at another logical
operator, the slightly obscure, but extremely useful '?:' - and a particular trap you can fall into due to making unwarranted assumptions about the behavior of '='.
?: - If If/Else fails...
The '?:' operator is probably the least-known operator, so let's take a quick look at what it does.
The basic syntax is: <test>?<value to return if test is true>:<value to return if test is false>
Now, the "?:" construct is very useful - basically, it means that we can replace the following code:
$y = $x ? 1 : 0;
Which is all well and good - unless you make the mistake of writing:
$x ? $y=1 : $y=0;
If you run the above code, you will find that, whatever value you assign to $x, you are always told that, apparently, $x was false (i.e. $y is set to 0).
So how did that happen, why was it confusing (IMHO), and what can you do about it?
Well, to illustrate what happened, let's write an alternative version that doesn't exhibit the problem, but looks pretty much identical (using a reg-ex substitution instead of '='):
$x ? $y=1 : $y=~s/.*/0/;
This time, we get the result we expect. So what happened in the bad version that didn't happen here? Well the first thing to notice in the operator-precedence table is that '=~' has a higher
precedence than '?:', but '=' has a lower precedence. So what? All that means, presumably, is that we decide on the truth or falsehood of our initial condition before we assign any value to $y (which
sounds like a good thing).
Well... no. What precedence conceptually means in this context is "where is the boundary of our false expression?" and the answer is "it's when we hit an operator with a lower precedence than '?:'"
So $x ? $y=1 : $y=0 can be expressed as ($x ? $y=1 : $y)=0 - which, if $x is false, leads to ($y)=0 (correct), but if $x is true, leads to ($y=1)=0 (uh-oh - we did set $y to 1, but then immediately
reset it to 0).
Now, when we replace a false expression such as $y=0 with $y=~s/.*/0/, the higher precedence of '=~' means that Perl evaluates this as:
$x ? $y=1 : ($y=~s/.*/0/)
which is probably what we (the comp-sci challenged) expected in the first example.
Bottom line, '?:' can benefit from parenthesis just as much as (2+3)*5 - here is the bad code made good:
$x ? $y=1 : ($y=0);
As a small side-note, really we ought to be writing $x ? ($y=1) : ($y=0);, but Perl 'knows' that the function between '?' and ':' must be our 'true' function and is kind enough to add the virtual
parenthesis for us...
...and, as noted before, we can avoid the need for parenthesis, and save a few key-strokes, by writing:
$y = $x ? 1 : 0;
... which is really what we should have done in the first place - there is an Meditation discussing the use of '?:' at ?: = Obfuscation?.
A Final Word
This is not meant to be an exhaustive look at precedence and operators - I haven't mentioned the bit-wise operators for example. However, I hope I've covered the issues likely to fox the comp-sci
challenged (basically, if you're using bit-wise operators, I assume you know what you're doing).
Also, I'm half-tempted (well, 25% tempted) to replace this tutorial with just the one sentence "USE LOTS OF PARENTHESIS" - it's certainly the bottom line. They will make your code more readable, and
you will avoid most of the traps associated with precedence.
That said, don't go over the top:
$x = ((((((1 * 2) * 3) * (4 ** 2)) * 5) * 6) * 7)
is not really helping anyone....
Tom Melly, pm@tomandlu.co.uk
|
{"url":"http://www.perlmonks.org/index.pl?node_id=587193","timestamp":"2014-04-16T14:59:32Z","content_type":null,"content_length":"46453","record_id":"<urn:uuid:0c521b21-b505-40f0-9df0-a84360f41a20>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lines, Line Segments and Rays
So, you think you know what a line is. Well, your 2nd or 3rd grader just might challenge your definition. Lines, line segments, and rays are all second and third grade math concepts, and although
they are similar, they are not the same. What exactly do these geometry terms mean? Read on to find out.
What is it?
A line is defined as an infinite set of points forming a straight path, extending in both directions. It does not have a beginning or an end.
A line segment is a part of a line, defined by two endpoints.
A ray is a part of line that has only one endpoint, and extends indefinitely in one direction.
We typically name a line, line segment or ray using uppercase letters. For example:
Line AB
What’s the Difference?
You may have noticed that a line has arrows on both ends. This is to indicate that the line goes on forever in both directions. There are 2 points on the line that are labelled with uppercase letters.
A line segment has an endpoint on both ends, indicating that there is a distinct beginning and end. Each endpoint is labelled with an uppercase letter.
A ray has one endpoint, and an arrow on the other end. It too is labelled with uppercase letters on each end.
• As your child learns about lines, line segments and rays, he will also be introduced to third and second grade math skill vocabulary words such as intersect and parallel .
• Intersecting lines are lines that cross one another.
• Parallel lines are two lines that are an equal distance from one another (if the two lines were extended infinitely, they would never cross).
• Your child will likely be asked to draw lines, line segments and rays at school. A typical question would look like this:
Draw ray AB (written with a small arrow over "AB") so that it intersects line CD (written with a double-headed arrow over "CD"). This would read: "Draw ray AB that intersects with line CD". Your child would draw:
a ray starting at A and passing through B, with line CD drawn so that it crosses ray AB. (The drawing should form a cross shape.)
What To Watch For
• As your child is naming lines and line segments, be sure he notices that they can be read in either direction. Using the examples from the Naming Lines, Lines Segments and Rays above, the line
could read either “line AB” or “line BA”. The line segment could read either “line segment EF” or “line segment FE.”
• The ray is the only one that doesn't follow this rule. A ray must be read in the direction of the arrow. So, using the example of a ray with its endpoint at G and its arrow passing through H, this ray would be read "ray GH", and not "ray HG".
This is one of the most common mistakes students make.
Want More?
• As always, playing math games at home is a great way to reinforce math skills learned in school.
• Have questions or ideas about this story?
• Need help or advice about your child’s learning?
• Have ideas for future Parent Homework Help stories?
Go to “Leave a Reply” at the bottom of this page. I’d love to help!
Leave a Comment
|
{"url":"http://www.parent-homework-help.com/2011/05/17/lines-line-segments-and-rays/","timestamp":"2014-04-18T10:34:57Z","content_type":null,"content_length":"28949","record_id":"<urn:uuid:04cd6304-17ea-4969-95e3-e1a72a65ab0b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus (Intermediate Value Theorem)
Number of results: 28,459
Verify that the Intermediate Value theorem applies to the indicated interval and find the value of c guaranteed by the theorem. f(x) = x^2 - 6x + 8, [0,3], f(c) = 0 I have no idea how to use the
theorem :(
Monday, September 27, 2010 at 8:09pm by Jack
Math Calculus
The Image Theorem: The image theorem, a corollary of the intermediate value theorem, expresses the property that if f is continuous on the interval [a, b], then the image (the set of y-values) of f
on [a,b] is all real numbers between the minimum of f(x) on [a,b], inclusive. ...
Wednesday, September 24, 2008 at 6:17pm by Desperate
Math - Calculus
Show that the equation x^3-15x+c=0 has at most one root in the interval [-2,2]. Perhaps Rolle's Theorem, Mean Value Theorem, or Intermediate Value Theorem hold clues? ...Other than simply using my
TI-84, I have no idea how to accomplish this.
Monday, February 28, 2011 at 10:05pm by William
Math - Calculus
Show that the equation x^3-15x+c=0 has at most one root in the interval [-2,2]. Perhaps Rolle's Theorem, Mean Value Theorem, or Intermediate Value Theorem hold clues? ...Other than simply using my
TI-84, I have no idea how to accomplish this.
Monday, February 28, 2011 at 10:05pm by William
Use the intermediate value theorem to find the value of c such that f(c) = M. f(x) = x^2 - x + 1 on [-1,12]; M = 21
Friday, June 7, 2013 at 2:44pm by Tee
intermediate value thorem
Use intermediate value theorem to show that the polynomial function has a zero in the given interval. f(-3)= value of 0=
Thursday, May 2, 2013 at 2:32pm by lynn
verify the Intermediate Value Theorem if F(x)=squre root of x+1 and the interval is [3,24].
Friday, December 9, 2011 at 8:02pm by piyatida
Verify the hypothesis of the mean value theorem for each function below defined on the indicated interval. Then find the value C referred to by the theorem. Q1a) h(x)=√(x+1 ) [3,8] Q1b) K(x)=(x-1)/
(x+1) [0,4] Q1c) Explain the difference between the Mean Value Theorem ...
Saturday, November 3, 2012 at 11:57pm by Daniella
Use the Intermediate Value Theorem to show that there is a root in the equation x^(1/3)=1-x in the interval (0,1).
Thursday, January 21, 2010 at 1:15pm by Gabe
"use the intermediate value theorem to prove that the curves y=x^2 and y=cosx intersect"
Tuesday, July 13, 2010 at 5:53pm by teri
use the intermediate value theorem to determine whether there is a zero f(x) = -3^3 - 6x^2 + 10x + 9 ; [-1,0]
Thursday, September 22, 2011 at 8:14pm by M
Calculus (Intermediate Value Theorem)
If f(x)= x^3-x+3 and if c is the only real number such that f(c)=0, then c is between ______?
Monday, March 12, 2012 at 7:37pm by Student
Suppose f(x) = x ^ 4 4x ^ 2 + 6, and g(x) = 3x ^ 3 8x. Prove, via the Intermediate Value Theorem, that the functions intersect at least twice between x = 2 and x = 4.
Wednesday, November 3, 2010 at 6:50pm by Juana
Use the intermediate value theorem to determine whether or not f(x)=x^2+7x-7 and g(x)=4x+21 intersects on [-4,-1]. If applicable, find the point of intersection on the interval.
Sunday, December 4, 2011 at 4:47pm by arial
Use the intermediate value
Use the intermediate value theorem to show that the polynomial function has a zero in the given interval. f(x)=9x^4-3x^2+5x-1;[0,1]
Tuesday, March 26, 2013 at 10:53am by Jenn
Use the intermediate value
Use the intermediate value theorem to show that the polynomial function has a zero in the given interval. f(x)=4x^3+6x^2-7x+1; [-4,-2] f(-4)=
Tuesday, March 26, 2013 at 12:44pm by Ashley
College Algebra
1. Use the Intermediate Value Theorem to show that the polynomial function has a zero in the given interval. f(x) = 13x^4 - 5x^2 +7x -1; [3,0] Enter the value of (-3). 2. Use the Intermediate Value
Theorem to show that the polynomial function has a zero in the given interval. ...
Thursday, August 9, 2012 at 8:37pm by Kameesha
Let f be a twice-differentiable function such that f(2)=5 and f(5)=2. Let g be the function given by g(x)= f(f(x)). (a) Explain why there must be a value c for 2 < c < 5 such that f'(c) = -1. (b)
Show that g' (2) = g' (5). Use this result to explain why there must be a ...
Monday, February 7, 2011 at 10:53pm by Leanna
Use the Intermediate Value Theorem to show that there is a root of the given equation in the specified interval. cos x = x. How do I begin this problem? According to the theorem, a=0, b=1 and N=x?
Sunday, September 2, 2012 at 12:00am by KC
Consider the function f(x)=8.5x−cos(x)+2 on the interval 0 ≤ x ≤ 1. The Intermediate Value Theorem guarantees that there is a value c such that f(c)=k for which values of c and k? Fill in the
following mathematical statements, giving an interval with non-zero length in ...
Saturday, February 5, 2011 at 11:17pm by Abigail
Calculus (Please Check)
Show that the equation x^5+x+1 = 0 has exactly one real root. Name the theorems you use to prove it. I.V.T. *f(x) is continuous *Lim x-> inf x^5+x+1 = inf >0 *Lim x-> -inf x^5+x+1 = -inf <0 Rolles *f
(c)=f(d)=0 *f(x) is coninuous *f(x) is differentiable f'(x) = 5x^4...
Thursday, October 18, 2012 at 11:34am by Anonymous
Sorry... Consider the function f(x) = 8.5 x − cos(x) + 2 on the interval 0 ≤ x ≤ 1. The Intermediate Value Theorem guarantees that there is a value c such that f(c) = k for which values of c and k?
Fill in the following mathematical statements, giving an interval with ...
Sunday, February 6, 2011 at 3:36pm by Abigail
Use the Intermediate Value Theorem to show that there is a root of the given equation in the specified interval. x^4+x-3=0, interval (1,2). According the to theorem, I found that a is 1, b is 2 and N
is 0. f(1)= 2 and f(2) = 17. Is the root (1.16,0)?
Saturday, September 1, 2012 at 11:51pm by KC
Use the Intermediate Value Theorem to prove that the equation has a solution. Then use a graphing calculator or computer grapher to solve the equation. 2x^3-2x^2-2x+1=0 i am completely lost & have no
idea where to start.
Thursday, February 3, 2011 at 12:40am by Kelly
Use the intermediate theorem to show that the polynomial function value has a zero in the given interval f(x)=x^5-x^4+8x^3-7x^2-17x+7; [1.6,1.8] Find the value of f(1.6) Find the value of f(1.8)
Sunday, March 31, 2013 at 7:28pm by Alley
Use the Fundamental Theorem of Calculus to find the area of the region bounded by the x-axis and the graph of y = 4x^3 − 4x. Answer: (1) Use the Fundamental Theorem of Calculus to find the average
value of f(x) = e^(0.9x) between x = 0 and x = 2. Answer: (2) Draw the ...
Monday, December 6, 2010 at 11:42pm by Erika
Let f(x) = (x+1)/(x-1). Show that there are no vlue of c such that f(2)-f(0) =f'(c)(2-0). Why does this not contradict the Mean Value Theorem? I plugged 2 and 0 into the original problem and got 3
and -1 . Then I found the derivative to be ((x-1)-(x+1))/(x-1)^2. Whould would I...
Sunday, December 17, 2006 at 5:37pm by Jamie
verify that the function satisfies the hypothesis of the mean value theorem on the given interval. then find all numbers c that satisfy the conclusion of the mean value theorem. f(x) = x/(x+2) ,
Sunday, November 7, 2010 at 8:17pm by Sasha
use intermediate value theorem to show f(x) has a zero f(x)= x^5 - 4x^4- 7x^2 - 6x; [-0.7, -0.6]
Sunday, September 25, 2011 at 2:53pm by M
use intermediate value theorem to show f(x) has a zero f(x)= x^5 - 4x^4- 7x^2 - 6x; [-0.7, -0.6]
Sunday, September 25, 2011 at 2:53pm by M
Use intermediate value theorem to show there is a root to 2x^3 + x^2 - 2 = 0 on [0,1]
Wednesday, March 13, 2013 at 5:54am by Kyle
Referring to the Mean Value Theorem and Rolle's Theorem, how can I tell if f is continuous on the interval [a,b] and differentiable on (a,b).
Sunday, November 7, 2010 at 4:17pm by Jason
Use the intermediate value theorem to verity that x^4+X-3=0 has a solution in the interval(1,2)
Sunday, October 10, 2010 at 4:36pm by Anonymous
Calculus I
Suppose that f and g are two functions both continuous on the interval [a, b], and such that f(a) = g(b) = p and f(b) = g(a) = q where p does not equal to q. Sketch typical graphs of two such
functions . Then apply the intermediate value theorem to the function h(x) = f(x) - g...
Sunday, September 18, 2011 at 12:51am by Kaiden
1) GDP does not include intermediate goods because a. that would understate the true size of GDP. b. intermediate goods are not useful to consumers. c. that would count the value of intermediate
goods twice. d. intermediate goods are not valuable. 2) The dollar value of an ...
Sunday, June 6, 2010 at 12:36pm by Bob
AP Calculus
Show that the equation x^3 - 15x + c = o has exactly one real root. All I know is that it has something to do with the Mean Value Theorem/Rolle's Theorem.
Monday, November 29, 2010 at 9:31pm by cel
For f(x) = x^3 4x 7, use the Intermediate Value Theorem to determine which interval must contain a zero of f.
Sunday, July 26, 2009 at 9:01pm by Crystal
use the intermediate value theorem to show that f(x) has a zero in the given interval. f(x) = -x^5 -2x^4 + 5x^3 + 4; [-0.9, -0.8] Stuck!
Friday, December 9, 2011 at 4:38pm by james
use the intermediate value theorem to prove that every real number has a cubic root. That is, prove that for any real number a there exists a number c such that c^3=a
Sunday, October 21, 2012 at 11:50pm by not so master
usethe intermediate theorem to show that the polynomial function has a zero in the given interval f(x)=18x^4-8x^2+9x-1;[0,3) can you please me how you got the answer
Saturday, December 22, 2012 at 11:32pm by Caylan
college algebra
Use the intermediate value theorem to determine whether the polynomial function has a zero in the given interval. f(x)=8x^5-4x^3-9x^2-9;[1,2]
Saturday, October 29, 2011 at 8:12pm by julez
Verify that the hypotheses of the Mean-Value Theorem are satisfied on the given interval, and find all values of c in that interval that satisfy the conclusion of the theorem. f(x)=x^2-3x; [-2,6]
Sunday, August 1, 2010 at 11:15am by Mely
Verify that the hypotheses of the Mean-Value Theorem are satisfied for f(x) = √(16-x^2 ) on the interval [-4,1] and find all values of C in this interval that satisfy the conclusion of the theorem.
Monday, November 29, 2010 at 2:26pm by Ronnie
determine whether the mean value theorem can be applied to f on the closed interval [a,b]. If the Mean Value Theorem can be applied, find all values of c in the open interval (a,b) such that f(c) =f
(b) - f(a) / b - a
Sunday, December 9, 2012 at 1:16am by Anonymous
Consider the function f(x)=65x−cos(x)+2 on the interval 0 less than or equal to x less than or equal to 1. The Intermediate Value Theorem guarantees that there is a value c such that f(c)=k for which
values of c and k? Fill in the following mathematical statements, ...
Thursday, February 2, 2012 at 2:36am by lauren
use the intermediate value theorem to show the polynominal function has a zero in the given interval f(x)=x^5-x^4+3x^3-2x^2-11x+6; [1.5,1.9] x= -2.33 y=10.19 after i plugged in the 1.5 and 1.9 i just
want to know if my x and y are correct
Sunday, August 26, 2012 at 12:40pm by ash
On which interval does the Intermediate Value Theorem guarantee that the polynomial x^4 + 7x^2 − 9x − 1 has a root? A. (-1/2,0) B. (1/2,1) C. (0,1/2) D. (-1,-1/2)
Friday, December 14, 2012 at 5:34pm by Anonymous
On which interval does the Intermediate Value Theorem guarantee that the polynomial x^4 + 7x^2 − 9x − 1 has a root? A. (-1/2,0) B. (1/2,1) C. (0,1/2) D. (-1,-1/2)
Friday, December 14, 2012 at 4:31pm by Amy
On which interval does the Intermediate Value Theorem guarantee that the polynomial x^4 + 7x^2 − 9x − 1 has a root? A. (-1/2,0) B. (1/2,1) C. (0,1/2) D. (-1,-1/2)
Friday, December 14, 2012 at 6:18pm by Anonymous
college algebra--need help please!!
use the intermediate value theorem to show that the polynomial function has a zero in the given interval. f(x)=x^5-x^4+9x^3-5x^2-16x+5;[1.3,1.6] f(x)1.3= ? simplify answer f(x)1.6= ? "
Friday, November 23, 2012 at 2:55pm by ladybug
suppose that 3 ≤ f'(x) ≤ 5, for all values x. show that 18 ≤ f(8)-f(2) ≤ 30 (the ≤ signs mean less than or equal to)... im supposed to apply mean value theorem or rolle's theorem... i dont understand
neither so i cant do the question! please help!
Saturday, November 10, 2007 at 7:19pm by Matthew
Consider the function f(x)=6x-cos(x)+5 on the interval 0 is less than or equal to x, and x is less than or equal to 1. The Intermediate Value Theorem guarantees that there is a value c such that f(c)
=k for which values of c and k? Fill in the following mathematical statements...
Wednesday, February 1, 2012 at 12:14am by Anonymous
Calculus I Theorem
I factored and simplified dy/dx of 192x^5 + 96x^3 + 12x all the way down to x^2 = u = (-1/2) and (-1/6). How does the result illustrates part 1 of the Calculus Fundamental Theorem?
Friday, May 14, 2010 at 1:09pm by John
let f(x)= (x-3)^-2 Show that there is no value of c in (1,4) such that f(4)-f(1)= (f prime of c)(4-1). Why doesn't this contradict the mean value theorem.
Tuesday, November 2, 2010 at 8:17pm by Anonymous
let f(x)= 2 - |2x-1|. Show that there is no value of c such that f(3)- f(0) = f'(c)(3-0). Why does this not contradict the mean value theorem.
Sunday, November 7, 2010 at 8:22pm by Sasha
Use the intermediate value theorem to show that the polynomial function has a zero in the given interval f(x)=x^5-x^4+8x^3-5x^2-14x+5; [1.4, 1.5] find the value of f(1.4) f(1.4)= find the value of f(1.5)
Thursday, December 27, 2012 at 7:10pm by Brock
Use the Intermediate Value Theorem to check whether the equation x^3 3x+2.1=0 has a root in the interval (0,1) answer: yes or no ? i have no idea how to answer to go about solving this question,
thanks for the help!
Wednesday, January 25, 2012 at 6:36pm by UCI STUDENT
calculus help
Does the function satisfy the hypotheses of the Mean Value Theorem on the given interval? f(x)= ln(x) , [1,6] If it satisfies the hypotheses, find all numbers c that satisfy the conclusion of the
Mean Value Theorem. (Enter your answers as a comma-separated list. If it does not...
Wednesday, April 16, 2014 at 4:15pm by Tom
Calculus Help Please!!!
Does the function satisfy the hypotheses of the Mean Value Theorem on the given interval? f(x) = 2x^2 − 5x + 1, [0, 2] If it satisfies the hypotheses, find all numbers c that satisfy the conclusion
of the Mean Value Theorem. (Enter your answers as a comma-separated list...
Tuesday, April 1, 2014 at 10:08pm by Layla
Calculus Help Please!!!
does the function satisfy the hypotheses of the Mean Value Theorem on the given interval? f(x) = 2x^2 − 5x + 1, [0, 2] If it satisfies the hypotheses, find all numbers c that satisfy the conclusion
of the Mean Value Theorem. (Enter your answers as a comma-separated list...
Friday, April 4, 2014 at 11:12pm by Uygur
Calculus--Pythagorean Theorem
Use the Pythaogorean Theorem to determine the exact length of AB. Express the answer as A) an exact value in simplest mixed radical form B) A decimal to the nearest hundredth The picture is right
here, I uploaded it of the diagram. h t t p : //imageshack . us/photo/my-images/...
Sunday, September 25, 2011 at 8:28pm by -Untamed-
For f(x) = x^3 4x 7, use the Intermediate Value Theorem to determine which interval must contain a zero of f. A. Between 0 and 1 B. Between 1 and 2 C. Between 2 and 3 D. Between 3 and 4
Sunday, July 26, 2009 at 9:55pm by Breanna
For f (x) = x4 2x2 7, use the Intermediate Value Theorem to determine which interval Must contain a zero of f. A. Between 0 and 1 B. Between 1 and 2 C. Between 2 and 3 D. Between 3 and 4
Monday, November 22, 2010 at 6:20am by Help please
Use the intermediate value theorem to show that f(x) has a zero in the given interval. Please show all of your work. f(x) = 3x^3 + 8x^2 - 5x - 11; [-2.8, -2.7] I'm not understanding all this
Wednesday, June 1, 2011 at 9:02pm by CheezyReezy
college algebra, Please help!!
use the intermediate value theorem to show that the polynomial function has a zero in the given interval. f(x)=4x^3+3x^2-8x+7;[-5,-2] please show all work
Sunday, November 18, 2012 at 11:31pm by ladybug
Apply mean value theorem f(x)=7-(6/x) on [1,6]
Sunday, June 24, 2012 at 5:25pm by Jody
use mean value theorem: f(x)= 7- 2^x, [0,4], c=?
Wednesday, November 28, 2012 at 6:50pm by Ashley
use the intermediate value theorem to show that the polynomial function has a zero in the given interval f(x)=x^5-x^4+8x^3-5x^2-14x+5; [1.4, 1.5] find the value of f (1.4)= find the value of f (1.5)=
show work, thanks
Sunday, December 30, 2012 at 6:21am by Julienne
[Mean Value Theorem] f(x)=-3x^3 - 4x^2 - 2x -3 on the closed interval [0,8]. Find the smallest value of c that satisfies the conclusion of the Mean Value Theorem for this function defined on the
given interval. I got 8 - sqrt(5696) / -18 = 3.748436059 but it's not right.
Friday, October 23, 2009 at 12:57am by Z32
I finished the first question with no problem. It was something like: Find the points at which f(x) [insert three equations for left right and in between here] is discontinuous. At each of these
points, is f cont. from the right or the left? As I said, I solved that one. I am ...
Saturday, May 22, 2010 at 4:12pm by Elisabeth
Using the mean value theorem; F'(x) = f(b)-f(a) / b-a f(x)=x^2-8x+3; interval [-1,6]
Friday, November 30, 2012 at 11:17am by Kasie
Calculus I
Section The Fundamental Theorem of Calculus: Use Part I of the Fundamental Theorem to compute each integral exactly: ∫ from 0 to 4 of 4/(1 + x^2) dx
Saturday, April 14, 2012 at 11:20am by Sandra Gibson
verify that the function satisfies the hypothesis of the mean value theorem on the given interval. then find all numbers c that satisfy the conclusion of the mean value theorem. f(x) = x/(x+2) ,
Sunday, November 7, 2010 at 9:29pm by help
For f(x) = x^3 4x 7, use the Intermediate Value Theorem to determine which interval must contain a zero of f. A. Between 0 and 1 B. Between 1 and 2 C. Between 2 and 3 D. Between 3 and 4 I am
leaning towards Choice A. What does everyone think? I would appreciate some feed ...
Thursday, July 23, 2009 at 8:23pm by Tammie
Use Mean Value Theorem and find all numbers c in (a,b) 1.x+(4/x) [1,4] Help me!
Tuesday, October 20, 2009 at 11:08am by A-tan
Find the number c that satisfies the conclusion of the Mean Value Theorem. f(x) = x/(x + 4) [1, 8]
Sunday, March 7, 2010 at 9:33pm by Erin
Use the Evaluation Theorem to find the exact value of the integral ∫ from 1 to 7 of 1/(5x) dx
Wednesday, April 17, 2013 at 5:52pm by Penelope
Let f(x)=x^(3)+x-1. Find ech number c in (1,2) that satisfies the conclusion of the Mean Value Theorem.
Wednesday, December 5, 2007 at 11:54am by Anonymous
mean value theorem prove sq root 9.1 is less than or equal to 3+1/60
Tuesday, January 1, 2013 at 2:05pm by jen
I have three questions I'm having a terrible time with: 1)Find, if possible, the absolute maximum value and where it occurs for f(x)=ln(xe^-x) on (0,infinity). 2)Find the value(s) of "c" guaranteed
by the Mean Value Theorem for the function f(x)=ln(x^2) on the interval [1,e]. ...
Sunday, March 23, 2008 at 5:55pm by Chelsea
What are two conditions that must be met before the Extreme Value Theorem may be applied?
Tuesday, April 8, 2014 at 11:03pm by bex
Find the values of c that satisfy the Mean Value Theorem for f(x)=6/x-3 on the interval [-1,2]. Is it no value of c in that interval because the function is not continuous on that interval???
Thursday, December 18, 2008 at 11:58pm by Theresa
Find the values of c that satisfy the Mean Value Theorem for f(x)=6/x-3 on the interval [-1,2]. Is it no value of c in that interval because the function is not continuous on that interval???
Friday, December 19, 2008 at 12:20am by Theresa
Use the Intermediate Value Theorem and a graphing utility to find intervals of length 1 in which the polynomial is guaranteed to have a zero. Use the root feature of a graphing utility to approximate
the zeros of the function. h(x)=x^4-10x^2+2
Sunday, December 26, 2010 at 11:13am by janet
Determine if Rolle's Theorem applies to the given function f(x)=2 cos(x) on [0, pi]. If so, find all numbers c on the interval that satisfy the theorem.
Sunday, March 6, 2011 at 7:56pm by Ky
Please help me with this problem: Find the number c that satisfies the conclusion of the Mean Value Theorem. f(x) = x/(x + 4) [1, 8] i got to f'(x)= 4/(x+4)^2=(-1/60).
Monday, March 8, 2010 at 12:02am by Sarah
Find a point c satisfying the conclusion of the Mean Value Theorem for the following function and interval. f(x)=x^−1 [1,9]
Sunday, November 6, 2011 at 1:09am by saud
use the mean value theorem to find the c's on the open interval (a,b) such that fprime(c)= (f(b)-f(a))/(b-a) f(x)= 3xlog(base 2)x , [1,2]
Wednesday, October 17, 2012 at 4:40pm by Ashley
Consider f(x)=x^3-x over the interval [0,2]. Find all the values of C that satisfy the Mean Value Theorem (MVT)
Tuesday, November 13, 2012 at 11:42pm by Daniella
Determine whether F satisfies the hypotheses of the mean value theorem on [a,b], and if so, find all numbers c in (a,b). f(x)=X^2/3 [-8,8] why this answer is f is not differantible?
Tuesday, October 20, 2009 at 12:13am by A-tan
Use the Evaluation Theorem to find the exact value of the integral ∫ from 1/2 to 0 of a/√(1−x^2) dx. The answer should involve the parameter a.
Wednesday, April 17, 2013 at 5:50pm by Stacey
i am on "rolles and the mean value theorem" and was just wondering, when i am doing rolles, do i really need to find the exact value of x where f'(c) = 0? for example: f(x) = (x+4)^2 (x-3) on [-4,3]
i get to: 3x^2+10x-8=7 then i dont know if i then need to find the exact value...
Tuesday, January 28, 2014 at 11:51pm by eric
1. Determine whether Rolle's Theorem applied to the function f(x)=((x-6)(x+4))/(x+7)^2 on the closed interval[-4,6]. If Rolle's Theorem can be applied, find all numbers of c in the open interval
(-4,6) such that f'(c)=0. 2. Determine whether the Mean Value Theorem applied to ...
Wednesday, October 31, 2012 at 12:11am by Rudy
Verify that f(x) = x^3 − 2x + 6 satisfies the hypothesis of the Mean-Value Theorem over the interval [-2, 3] and find all values of C that satisfy the conclusion of the theorem.
Sunday, December 19, 2010 at 12:18pm by Ronnie
Math calculus
an automobile starts from rest and travel 4 miles along a straight road in 5 minutes. Use the mean value theorem
Tuesday, November 13, 2012 at 4:01pm by Bernard
Calculus-Mean Value Theorem
Find the function G(x) whose graph passes through (pi/38,-12)and has f(x) as its derivative: G(x)= I already found which is: F(x)=76(1/-19)cos(19x)+C
Wednesday, October 23, 2013 at 5:30pm by Sara
math - very urgent !
Verify that f(x) = x^3 − 2x + 6 satisfies the hypothesis of the Mean-Value Theorem over the interval [-2, 3] and find all values of C that satisfy the conclusion of the theorem.
Sunday, December 19, 2010 at 11:12pm by Carla
Use the Evaluation Theorem to find the exact value of the integral ∫ from 2 to 6 of (2x+1) dx. 1) What is the antiderivative? 2) What are the upper and lower limits? 3) Give the final answer.
Friday, April 12, 2013 at 2:23pm by Sasha
Verify that the hypotheses of Rolle s Theorem are satisfied for f(x)=6cosx on the interval [9pi/2,11pi/2] and find all values of c in this interval that satisfy the conclusion of the theorem.
Sunday, August 1, 2010 at 11:13am by Mely
In the viewing rectangle [-4, 4] by [-20, 20], graph the function f(x) = x3 - 3x and its secant line through the points (-3, -18) and (3, 18). Find the values of the numbers c that satisfy the
conclusion of the Mean Value Theorem for the interval [-3, 3].
Wednesday, October 27, 2010 at 5:07pm by Danielle
|
{"url":"http://www.jiskha.com/search/index.cgi?query=Calculus+(Intermediate+Value+Theorem)","timestamp":"2014-04-25T06:12:49Z","content_type":null,"content_length":"38673","record_id":"<urn:uuid:16a2e14b-3581-4637-8034-e7ab616c7ffe>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bayesian game
In game theory, a Bayesian game is one in which information about characteristics of the other players (i.e. payoffs) is incomplete. Following John C. Harsanyi's framework, a Bayesian game can be
modelled by introducing Nature as a player in a game. Nature assigns a random variable to each player which could take values of types for each player and associating probabilities or a probability
density function with those types (in the course of the game, nature randomly chooses a type for each player according to the probability distribution across each player's type space). Harsanyi's
approach to modelling a Bayesian game in such a way allows game of incomplete information to become games of imperfect information (in which the history of the game is not available to all players).
The type of a player determines that player's payoff function and the probability associated with the type is the probability that the player for whom the type is specified is that type. In a
Bayesian game, the incompleteness of information means that at least one player is unsure of the type (and so the payoff function) of another player.
Such games are called Bayesian because of the probabilistic analysis inherent in the game. Players have initial beliefs about the type of each player (where a belief is a probability distribution
over the possible types for a player) and can update their beliefs according to Bayes' Rule as play takes place in the game, i.e. the belief a player holds about another player's type might change on
the basis of the actions they have played. The lack of information held by players and modelling of beliefs mean that such games are also used to analyse imperfect information scenarios.
Specification of games
The normal form representation of a non-Bayesian game with perfect information is a specification of the strategy spaces and payoff functions of players. A strategy for a player is a complete plan of
action that covers every contingency of the game, even if that contingency can never arise. The strategy space of a player is thus the set of all strategies available to a player. A payoff function
is a function from the set of strategy profiles to the set of payoffs (normally the set of real numbers), where a strategy profile is a vector specifying a strategy for every player.
In a Bayesian game, it is necessary to specify the strategy spaces, type spaces, payoff functions and beliefs for every player. A strategy for a player is a complete plan of actions that covers every
contingency that might arise for every type that player might be. A strategy must not only specify the actions of the player given the type that he is, but must specify the actions that he would take
if he were of another type. Strategy spaces are defined as above. A type space for a player is just the set of all possible types of that player. The beliefs of a player describe the uncertainty of
that player about the types of the other players. Each belief is the probability of the other players having particular types, given the type of the player with that belief (i.e. the belief is p
(types of other players|type of this player)). A payoff function is a 2-place function of strategy profiles and types. If a player has payoff function U(x,y) and he has type t, the payoff he receives
is U(x*,t), where x* is the strategy profile played in the game (i.e. the vector of strategies played).
A signalling example
Signalling games constitute an example of Bayesian games. In such a game, the informed party (the agent) knows their type, whereas the uninformed party (the principal) does not know the (agent's)
type. In some such games, it is possible for the principal to deduce the agent's type based on the actions the agent takes (in the form of a signal sent to the principal) in what is known as a
separating equilibrium. A more specific example of a signalling game is a model of the job market. The players are the applicant (agent) and the employer (principal). There are two types of
applicant, skilled and unskilled, but the employer does not know which the applicant is, but he does know that 90% of applicants are unskilled and 10% are skilled (the type 'skilled' occurs with 10%
chance and unskilled with 90% chance). The employer will offer the applicant a contract based on how productive he thinks he will be. Skilled workers are very productive (generating a large payoff
for the employer) and unskilled workers are unproductive (generating a low payoff for the employer). The payoff of the employer is determined thus by the skill of the applicant (if the applicant
accepts a contract) and the wage paid.
The applicant's action space comprises two actions, take a university education or do not. It is less costly for the skilled worker to do so (because he does not pay extra tuition fees, finds classes
less taxing, etc.). The employer's action space is the set of (say) natural numbers, which represents the wage of the applicant (the applicant's action space might be extended to include acceptance
of a wage, in which case it would be more appropriate to talk of his strategy space). It might be possible for the employer to offer a wage that would compensate a skilled applicant sufficiently for
acquiring a university education, but not an unskilled applicant, leading to a separating equilibrium where skilled applicants go to university and unskilled applicants do not, and skilled applicants
(workers) command a high wage, whereas unskilled applicants (workers) receive a low wage.
Crucially in the game sketched above, the employer chooses his action (the wage offered) according to his belief about how skilled the applicant is and this belief is determined, in part, by the
signal sent by the applicant. The employer starts the game with an initial belief about the applicant's type (unskilled with 90% chance), but during the course of the game this belief may be updated
(depending on the payoffs of the different types of applicants) to 0% unskilled if he observes a university education or 100% unskilled if he does not.
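A minimal sketch of the incentive check behind such a separating equilibrium (Python; the wage and education-cost numbers are invented for illustration - only the skilled/unskilled setup comes from the example above):

# Separating equilibrium requires that the skilled type prefers to get the degree
# while the unskilled type prefers not to, given the wage each choice earns.
w_with_degree, w_without_degree = 100, 40          # hypothetical wage offers
education_cost = {"skilled": 20, "unskilled": 70}  # hypothetical costs by type

def prefers_degree(worker_type):
    return w_with_degree - education_cost[worker_type] > w_without_degree

separating = prefers_degree("skilled") and not prefers_degree("unskilled")
print(separating)   # True: education is worth it only for the skilled type here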
Bayesian Nash equilibrium
In a non-Bayesian game, a strategy profile is a Nash equilibrium if every strategy in that profile is a best response to every other strategy in the profile, i.e. there is no strategy that a player
could play that would yield a higher payoff, given all the strategies played by the other players. In a Bayesian game (where players are modeled as risk-neutral), rational players are seeking to
maximize their expected payoff, given their beliefs about the other players (in the general case, where players may be risk averse or risk-loving, the assumption is that players are expected
utility-maximizing). A Bayesian Nash equilibrium is defined as a strategy profile and beliefs specified for each player about the types of the other players that maximizes the expected payoff for
each player given their beliefs about the other players' types and given the strategies played by the other players.
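As a rough illustration of the expected-payoff comparison behind this definition, the sketch below scores two candidate actions for one player against a belief over the other player's types; the action labels and payoff numbers are made-up placeholders, not values taken from the job-market example.

# Expected payoff of each candidate action for a risk-neutral player, given a belief
# over the opponent's types and the action each type plays. All numbers are hypothetical.
belief = {"skilled": 0.10, "unskilled": 0.90}                  # P(opponent's type)
type_action = {"skilled": "educate", "unskilled": "no_educate"}

# payoff[(my_action, opponent_action, opponent_type)]
payoff = {
    ("offer_high", "educate",    "skilled"):    5,
    ("offer_high", "no_educate", "unskilled"): -3,
    ("offer_low",  "educate",    "skilled"):    1,
    ("offer_low",  "no_educate", "unskilled"):  2,
}

def expected_payoff(my_action):
    return sum(belief[t] * payoff[(my_action, type_action[t], t)] for t in belief)

best = max(["offer_high", "offer_low"], key=expected_payoff)
print(best, expected_payoff(best))   # the best response to this belief and these strategies

A profile is a Bayesian Nash equilibrium when every player's chosen strategy comes out of this kind of comparison as a maximizer, given the equilibrium strategies of the others.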
This solution concept yields implausible equilibria in dynamic games, where no further restrictions are placed on players' beliefs. This makes Bayesian Nash equilibrium a flawed tool with which to
analyse dynamic games of incomplete information.
Perfect Bayesian equilibrium
Bayesian Nash equilibrium results in some implausible equilibria in dynamic games, where players take turns sequentially rather than simultaneously. Some implausible equilibria might result from the
fact that in a dynamic game, players might reasonably change their beliefs as the game progresses. No procedure for doing so is available in a Bayesian Nash equilibrium. Similarly, implausible
equilibria might arise in the same way that implausible Nash equilibria arise in games of perfect and complete information, such as incredible threats and promises. Such equilibria might be
eliminated in perfect and complete information games by applying subgame perfect Nash equilibrium. However, it is not always possible to avail oneself of this solution concept in incomplete
information games because such games contain non-singleton information sets and since subgames must contain complete information sets, sometimes there is only one subgame - the entire game - and so
every Nash equilibrium is trivially subgame perfect. Even if a game does have more than one subgame, the inability of subgame perfection to cut through information sets can result in implausible
equilibria not being eliminated.
To refine the equilibria generated by the Bayesian Nash solution concept or subgame perfection, one can apply the Perfect Bayesian equilibrium solution concept. PBE is in the spirit of subgame
perfection in that it demands that subsequent play be optimal. However, it places player beliefs on decision nodes, which enables moves in non-singleton information sets to be dealt with more satisfactorily.
So far in discussing Bayesian games, it has been assumed that information is perfect (or if imperfect, play is simultaneous). In examining dynamic games, however, it might be necessary to have the
means to model imperfect information. PBE affords this means: players place beliefs on nodes occurring in their information sets, which means that the information set can be generated by nature (in
the case of incomplete information) or by other players (in the case of imperfect information).
Belief systems
The beliefs held by players in Bayesian games can be approached more rigorously in PBE. A belief system is an assignment of probabilities to every node in the game such that the sum of probabilities
in any information set is 1. The beliefs of a player are exactly those probabilities of the nodes in all the information sets at which that player has the move (a player belief might be specified as
a function from the union of his information sets to [0,1]). A belief system is consistent for a given strategy profile if and only if the probability assigned by the system to every node is computed
as the probability of that node being reached given the strategy profile, i.e. by Bayes' rule.
Sequential rationality
The notion of sequential rationality is what determines the optimality of subsequent play in PBE. A strategy profile is sequentially rational at a particular information set for a particular belief
system if and only if the expected payoff of the player whose information set it is (i.e. who has the move at that information set) is maximal given the strategies played by all the other players. A
strategy profile is sequentially rational for a particular belief system if it satisfies the above for every information set.
A perfect Bayesian equilibrium is a strategy profile and a belief system such that the strategies are sequentially rational given the belief system and the belief system is consistent, wherever
possible, given the strategy profile.
It is necessary to stipulate the 'wherever possible' clause because some information sets might not be reached with a non-zero probability given the strategy profile and hence Bayes' rule cannot be
employed to calculate the probability at the nodes in those sets. Such information sets are said to be off the equilibrium path and any beliefs can be assigned to them.
An example
[Figure: a Bayesian game with imperfect information, represented in extensive form]
Information in the game on the left is imperfect since player 2 does not know what player 1 does when he comes to play. If both players are rational and both know that both players are rational and
everything that is known by any player is known to be known by every player (i.e. player 1 knows player 2 knows that player 1 is rational and player 2 knows this, etc. ad infinitum - common
knowledge), play in the game will be as follows according to perfect Bayesian equilibrium:
Player 2 cannot observe player 1's move. Player 1 would like to fool player 2 into thinking he has played U when he has actually played D so that player 2 will play D' and player 1 will receive 3. In
fact in the second game there is a perfect Bayesian equilibrium where player 1 plays D and player 2 plays U', and player 2 holds the belief that player 1 will definitely play D (i.e. player 2 places a
probability of 1 on the node reached if player 1 plays D). In this equilibrium, every strategy is rational given the beliefs held and every belief is consistent with the strategies played. In this
case, the perfect Bayesian equilibrium is the only Nash equilibrium.
|
{"url":"http://psychology.wikia.com/wiki/Bayesian_game","timestamp":"2014-04-20T13:06:14Z","content_type":null,"content_length":"86930","record_id":"<urn:uuid:69fdb44c-e80b-458b-b76b-f2f1fb4f84b1>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bellflower, CA Algebra Tutor
Find a Bellflower, CA Algebra Tutor
...I am a Long Beach Poly PACE alumna and recent graduate from UCI (double majored in cognitive science and philosophy) willing to help out students K-12 with their math skills, especially those
struggling in Prealgebra, Algebra, Geometry, and Algebra 2/Trig. Math has always been a strong point of ...
22 Subjects: including algebra 1, algebra 2, Spanish, reading
I am an experienced tutor in math and science subjects. I have an undergraduate and a graduate degree in electrical engineering and have tutored many students before. I am patient and will always
work with students to overcome obstacles that they might have.
37 Subjects: including algebra 2, algebra 1, chemistry, English
...I am dedicated to my job as an educator. I look forward to providing tutoring services for you. Thank you, GeorgeI am qualified to teach in this subject because I am a certified teacher that
has taught at an elementary school for the last nine years.
8 Subjects: including algebra 1, reading, prealgebra, grammar
...I have been a substitute teacher in mathematics in the Huntington Beach High School District. And I have taken all the preparatory coursework to getting my teaching certificate. I have a M.S.
in mathematics.
24 Subjects: including algebra 1, algebra 2, chemistry, statistics
...I think that's important to note: At an Ivy League college, I was paid to tutor other ultra-high-achieving Ivy League students. I enjoy tutoring because I truly can positively impact the lives
of my students in measurable ways. I've improved grades within weeks; I've shown students how to comprehend the great novels they're reading; I've made sense of complicated formulas.
35 Subjects: including algebra 1, algebra 2, English, reading
|
{"url":"http://www.purplemath.com/Bellflower_CA_Algebra_tutors.php","timestamp":"2014-04-20T08:38:03Z","content_type":null,"content_length":"24019","record_id":"<urn:uuid:6ea43b28-b6f3-424b-8a3c-14dac50f4d94>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00390-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Which equation has an axis of symmetry of x = 1/5? y = −5x² − x + 2, y = −5x² + x + 2, y = −5x² + 2x − 9, y = −5x² − 2x − 9
|
{"url":"http://openstudy.com/updates/50f1b38ae4b0abb3d86fa7f3","timestamp":"2014-04-17T18:40:10Z","content_type":null,"content_length":"47554","record_id":"<urn:uuid:17612309-f00e-4917-80a6-81a1cff2fa76>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
|
New Unger's Bible Handbook
Author(s): Merrill F. F. Unger, Gary Larson
ISBN: 0802490492 ISBN-13: 9780802490490
Format: Hardcover Pages: 752 Edition: Revised
Pub. Date: 1984-07-08 Publisher: Moody Publishers
List Price: $34.99
Book Description
New Unger's Bible Handbook
A new edition featuring revised text and hundreds of color pictures, making this volume an indispensable guide to understanding the Bible. (More than 500,000 in print)
|
{"url":"http://www.alldiscountbooks.net/_0802490492_i_.html","timestamp":"2014-04-18T09:19:27Z","content_type":null,"content_length":"33951","record_id":"<urn:uuid:9f33e54a-1cf1-40c5-bb46-6b8aa3cb67dc>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
|
(All Articles will appear in a Special Issue of Journal of Engineering Mathematics in 2007)
Subharmonic Resonance of a Trapped Wave Near a Vertical Cylinder by Narrow-Banded
Strip Theory for Underwater Vehicles in Water of Finite Depth
Free-Surface Wave Interaction with a Thick Flexible Dock or Very Large Floating Platform
Penetration of Flexural Waves Through a Periodically Constrained Thin Elastic Plates in
The Influence of Gravity on the Performance of Planing Vessels in Calm Water
Optimal Control Theory Applied to Ship Maneuvering in Restricted Waters
A 3D Numerical Model for Computing Non-Breaking Wave Forces on Slender Piles
Wave Drift Force in a Two-Layer Fluid of Finite Depth
On the Accuracy of Finite Difference Solutions for Nonlinear Water Waves
|
{"url":"http://web.mit.edu/flowlab/newmanbook.html","timestamp":"2014-04-19T14:32:47Z","content_type":null,"content_length":"28861","record_id":"<urn:uuid:ff3cffe4-7766-4e31-a346-66b4b2bcdb86>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
|
vegan News
• metaMDS argument noshare = 0 is now regarded as a numeric threshold that always triggers extended dissimilarities (stepacross), instead of being treated as synonymous with noshare = FALSE which
always suppresses extended dissimilarities.
• Nestedness discrepancy index nesteddisc gained a new argument that allows user to set the number of iterations in optimizing the index.
• oecosimu displays the mean of simulations and describes alternative hypothesis more clearly in the printed output.
• Implemented adjusted R-squared for partial RDA. For partial model rda(Y ~ X1 + Condition(X2)) this is the same as the component [a] = X1|X2 in variance partition in varpart and describes the
marginal (unique) effect of constraining term to adjusted R-squared.
• Added Cao dissimilarity (CYd) as a new dissimilarity method in vegdist following Cao et al., Water Envir Res 69, 95–106 (1997). The index should be good for data with high beta diversity and
variable sampling intensity. Thanks to consultation to Yong Cao (Univ Illinois, USA).
• Function meandist could scramble items and give wrong results, especially when the grouping was numerical. The problem was reported by Dr Miguel Alvarez (Univ. Bonn).
• metaMDS did not reset tries when a new model was started with a previous.best solution from a different model.
• Function permatswap for community null models using quantitative swap never swapped items in a 2 by 2 submatrix if all cells were filled.
• The result from permutest.cca could not be updated because of a ‘NAMESPACE’ issue.
• R 2.14.0 changed so that it does not accept using sd() function for matrices (which was the behaviour at least since R 1.0-0), and several vegan functions were changed to adapt to this change
(rda, capscale, simulate methods for rda, cca and capscale). The change in R 2.14.0 does not influence the results but you probably wish to upgrade vegan to avoid annoying warnings.
• monoMDS: a new function for non-metric multidimensional scaling (NMDS). This function replaces MASS::isoMDS as the default method in metaMDS. Major advantages of monoMDS are that it has ‘weak’
(‘primary’) tie treatment which means that it can split tied observed dissimilarities. ‘Weak’ tie treatment improves ordination of heterogeneous data sets, because maximum dissimilarities of 1
can be split. In addition to global NMDS, monoMDS can perform local and hybrid NMDS and metric MDS. It can also handle missing and zero dissimilarities. Moreover, monoMDS is faster than previous
alternatives. The function uses Fortran code written by Peter Minchin.
• MDSrotate a new function to replace metaMDSrotate. This function can rotate both metaMDS and monoMDS results so that the first axis is parallel to an environmental vector.
• eventstar finds the minimum of the evenness profile on the Tsallis entropy, and uses this to find the corresponding values of diversity, evenness and numbers equivalent following Mendes et al. (
Ecography 31, 450-456; 2008). The code was contributed by Eduardo Ribeira Cunha and Heloisa Beatriz Antoniazi Evangelista and adapted to vegan by Peter Solymos.
• fitspecaccum fits non-linear regression models to the species accumulation results from specaccum. The function can use new self-starting species accumulation models in vegan or other
self-starting non-linear regression models in R. The function can fit Arrhenius, Gleason, Gitay, Lomolino (in vegan), asymptotic, Gompertz, Michaelis-Menten, logistic and Weibull (in base R)
models. The function has plot and predict methods.
• Self-starting non-linear species accumulation models SSarrhenius, SSgleason, SSgitay and SSlomolino. These can be used with fitspecaccum or directly in non-linear regression with nls. These
functions were implemented because they were found good for species-area models by Dengler (J. Biogeogr. 36, 728-744; 2009).
• adonis, anosim, meandist and mrpp warn on negative dissimilarities, and betadisper refuses to analyse them. All these functions expect dissimilarities, and giving something else (like
correlations) probably is a user error.
• betadisper uses restricted permutation of the permute package.
• metaMDS uses monoMDS as its default ordination engine. Function gains new argument engine that can be used to alternatively select MASS::isoMDS. The default is not to use stepacross with monoMDS
because its ‘weak’ tie treatment can cope with tied maximum dissimilarities of one. However, stepacross is the default with isoMDS because it cannot handle these tied maximum dissimilarities adequately.
• specaccum gained predict method which uses either linear or spline interpolation for data between observed points. Extrapolation is possible with spline interpolation, but may make little sense.
• specpool can handle missing values or empty factor levels in the grouping factor pool. Now also checks that the length of the pool matches the number of observations.
|
{"url":"http://www.icesi.edu.co/CRAN/web/packages/vegan/news.html","timestamp":"2014-04-18T13:12:33Z","content_type":null,"content_length":"15512","record_id":"<urn:uuid:102d42d4-1980-49cb-86e5-2caf63753220>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Am I right? Medal.
|
{"url":"http://openstudy.com/updates/52dc0d50e4b05a53debd1dff","timestamp":"2014-04-18T18:26:50Z","content_type":null,"content_length":"35689","record_id":"<urn:uuid:4449fafd-47f6-43e3-88d4-d57189aad973>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Continuity Definition
March 8th 2007, 03:11 PM #1
Here is something that I found contradictory (or appears to be contradictory).
I attached it as LaTeX to make it more readable.
I do not understand what you wrote. I never studied topology. (Is topology a generalization of real analysis?)
Some time ago I was reading something on topology and I remember they defined continuous in terms of metric spaces. It is the same thing, d(x,x_0) instead of |x-x_0|, which seems to agree with
the "well-defined" definition.
|
{"url":"http://mathhelpforum.com/calculus/12340-continuity-definition.html","timestamp":"2014-04-17T20:03:44Z","content_type":null,"content_length":"37610","record_id":"<urn:uuid:c924cdf7-99ad-401e-9428-d1e0d32fb34e>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Counter for items in lists in lists?
Ahhhh...thank you all!
Actually, I wanted to be able to count the items in the list to check
my thinking on a problem, and as it turns out, my thinking was
incorrect. So I have a follow up question now
Some background:
I was given a problem by a friend. He is in a tennis group that has 6
members. They play once a week for 10 weeks. They were having
trouble generating a schedule that was random but fair (IE everyone
gets to play a base number of times, and the mod is evenly and
randomly distributed).
I decided that I wanted to abstract as much of the problem as possible
so that it would be reusable for other groups (IE to solve it for
games with N players, Y times over X weeks). And then I decided I
wanted this to be my second program in python, so forgive my inexperience.
My first pass through was short and simple: Figure out the total
number of games that will be played, and then use something like this:
gamePlayers=random.sample(templist, players_per_game)
to fill up all the game slots. Neat, simple.
The problem I found with this solution was that it didn't give me a
weighted list for remainder. The same person could get in all of the
"extra" games.
So instead I decided to fill games from my list of players by removal
until I had a number of players left less than a full game, and then
have a back-up list. The back up list would act like a queue, but if
you were already in the game, it would take the next guy. If you got
into a game from the back-up list, you were sent to the end of the
line. My 2 lines grew to 50 plus
Sadly, this isn't quite working either. Now that I can print out the
players, I see that generally things work, but every once in a while,
I get an unexpected result. For example, with 6 players, 10 weeks, 2
games per week, I would expect combinations of:
{'a': 13, 'c': 14, 'b': 14, 'e': 13, 'd': 13, 'f': 13}
(4 13s and 2 14s)
But sometimes I get:
{'a': 13, 'c': 14, 'b': 14, 'e': 14, 'd': 12, 'f': 13}
(2 13s, 3 14s and a 12)
That 12 breaks the planned even distribution.
I suspect the problem comes from the random pulling in the first part,
but I'm not sure. I also feel that some sections (especially the
print) don't have a "python-grace", so I would love some suggestions
to make them more...slithery?
To make a long story longer, here's the code:
#! /usr/bin/env python
#A program to randomly fill a tennis schedule
#The original theory looked like this:
#   gamePlayers=random.sample(templist, players_per_game)
#   print gamePlayers
#But that didn't give a weighted list for extra games

import random

#Eventually these will get set dynamically
#(values follow the setup described above: 6 players, 10 weeks, 2 games of 4 per week)
players = ['a', 'b', 'c', 'd', 'e', 'f']
number_of_weeks = 10
games_per_week = 2
players_per_game = 4
games = number_of_weeks * games_per_week
#this will be used to pull off "extra game" players
backupList = players[:]
#a templist so we can modify it.
templist = players[:]
#our finished product:
finishedList = []

while len(finishedList) != games:
    if len(templist) >= players_per_game:
        # enough players left this round: draw a full game at random
        gamePlayers = []
        while len(gamePlayers) != players_per_game:
            randomNumber = random.randint(0, len(templist) - 1)
            potentialPlayer = templist.pop(randomNumber)
            gamePlayers.append(potentialPlayer)
        finishedList.append(gamePlayers)
    else:
        # not enough players left: start from the leftovers and top up from the back-up queue
        gamePlayers = templist
        print "I am the leftover game players", gamePlayers
        print "I am the list of backup players", backupList
        count = 0
        while len(gamePlayers) != players_per_game:
            print "I am a potential player"
            potentialPlayer = backupList[count]
            print potentialPlayer
            print "checking to see if I'm in the game"
            if potentialPlayer not in gamePlayers:
                print "I do not think the player is in the game"
                print "I am the back-up list", backupList
                potentialPlayer = backupList.pop(count)
                gamePlayers.append(potentialPlayer)
                backupList.append(potentialPlayer)   # used from the queue, so sent to the end of the line
                print "I am the back-up list after reorder", backupList
                print "I am the gamePlayers after test and insertion", gamePlayers
            else:
                print "I think that player is in the game"
                count += 1
        finishedList.append(gamePlayers)
        templist = players[:]

#count the list (thank you, Steve!)
def count(item):
    if not isinstance(item, list):
        return {item: 1}
    counts = {}
    for i in item:
        for key, ct in count(i).items():
            counts[key] = counts.get(key, 0) + ct
    return counts

def printList(weeks, games, list):
    x = 0
    while x != weeks:
        y = 0
        print "Week: ", x + 1
        while y < games:
            print "Game ", y + 1, " players are ", list[y]
            y += 1
        x += 1

#printing out and counting the final list
printList(number_of_weeks, games_per_week, finishedList)
print count(finishedList)
print finishedList.count("a")
|
{"url":"http://www.velocityreviews.com/forums/t336213-counter-for-items-in-lists-in-lists.html","timestamp":"2014-04-24T21:37:36Z","content_type":null,"content_length":"77279","record_id":"<urn:uuid:64105f53-f667-42a1-a93c-8c979b38dcdd>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cracking Random Number Generators - Part 4
In Part 3 of this series, we investigated the Mersenne Twister, and saw how with 624 consecutive integers obtained from it, we can predict every subsequent integer it will produce. In this part, we
will look at how to calculate previous integers that it has produced. We will also discuss how we might go about determining the internal state of the Mersenne Twister if we are unable to get 624
consecutive integers.
Breaking it down
Before we go on, let's look again at the algorithm for generating the next value for state[i]:
int y = (state[i] & 0x80000000) + (state[(i + 1) % 624] & 0x7fffffff);
int next = y >>> 1;
if ((y & 1L) == 1L) {
next ^= 0x9908b0df;
next ^= state[(i + 397) % 624];
We can see that there are 3 numbers from the previous state that are involved here, the old state[i], state[(i + 1) % mod 624], and state[(i + 397) % mod 624]. So, taking these 3 numbers, let's have
a look at what this looks like in binary:
1. 11100110110101000100101111000001 // state[i]
2. 10101110111101011001001001011111 // state[i + 1]
3. 11101010010001001010000001001001 // state[i + 397]
// y = state[i] & 0x80000000 | state[i + 1] & 0x7fffffff
4. 10101110111101011001001001011111 // y
5. 01010111011110101100100100101111 // next = y >>> 1
6. 11001110011100100111100111110000 // next ^= 0x9908b0df
7. 00100100001101101101100110111001 // next ^= state[i + 397]
If we work backwards from i = 623, then the pieces of information from the above equation is the end result (7), state[i + 1] and state[i + 397]. Starting from the result, the easiest step to unapply
is the last one, undoing an xor is as simple as applying that same xor again. So we can get from 7 to 6 by xoring with state[i + 397].
To get from 6 to 5 depends on whether y was odd, if it wasn't, then no operation was applied. But we can also see from the bitshift to the right applied from 4 to 5, that the first bit at 5 will
always be 0. Additionally, the first bit of the magic number xored at 6 is 1. So, if the first bit of the number at 6 is 1, then the magic number must have been applied, otherwise it hasn't. Hence,
we can conditionally unapply step 5.
At this point, we have the first bit of the old state[i] calculated, in addition to the middle 30 bits of state[i + 1] calculated. We can also infer the last bit of state[i + 1], it is the same as
the last bit of y, and if y was odd, then the magic number was applied at step 6, otherwise it wasn't. We've already worked out whether the magic number was applied at step 6, so if it was, the last
bit of state[i + 1] was 1, or 0 otherwise.
However, as we work backwards through the state, we will already have calculated state[i + 1], so determining its last 31 bits is not useful to us. What we really want is to determine the last 31
bits of state[i]. To do this we can apply the same transformations listed above to state[i - 1]. This will give us the last 31 bits of state[i].
Putting it all together, our algorithm looks like this:
for (int i = 623; i >= 0; i--) {
int result = 0;
// first we calculate the first bit
int tmp = state[i];
tmp ^= state[(i + 397) % 624];
// if the first bit is odd, unapply magic
if ((tmp & 0x80000000) == 0x80000000) {
tmp ^= 0x9908b0df;
// the second bit of tmp is the first bit of the result
result = (tmp << 1) & 0x80000000;
// work out the remaining 31 bits
tmp = state[(i - 1 + 624) % 624];
tmp ^= state[(i + 396) % 624];
if ((tmp & 0x80000000) == 0x80000000) {
tmp ^= 0x9908b0df;
// since it was odd, the last bit must have been 1
result |= 1;
// extract the final 30 bits
result |= (tmp << 1) & 0x7fffffff;
state[i] = result;
Dealing with non consecutive numbers
What happens if, while we are collecting 624 numbers from the application, some other web request comes in at the same time and obtains a number? Our 624 numbers won't be consecutive. Can we detect that, and what can we do about it? Detecting it is simple: having collected 624, we can predict the 625th; if it doesn't match the next number, then we know we've missed some.
Finding out how many numbers we've missed is the next task. Let's say we only missed the 624th number. This is fairly straightforward to detect: we would find that our state[623] is equal to what we
were expecting to be the new state[0]. We would then know that we've missed one number, and by continuing to extract numbers from the application, and comparing that with the results we were
expecting, we can narrow down which one.
A generalised algorithm for doing this is beyond the scope of these blog posts. But it should be clear from the reverse engineering of the steps that if most of the values are correct, but only a few
are missing, determining what they were will be a fairly simple process.
We now know how to determine the internal state and how to go backwards as well as forwards in two of the most popular PRNG algorithms. In Part 5, we will look at what developers can do to ensure
that their applications are safe against PRNG attacks.
Re: Cracking Random Number Generators - Part 4
Hi James,
I am looking at your blog and I noticed that you are a very well informed and talented developer.
I wonder, can it really be disassembled a 32-bit RNG? Can you make a program or web app(running from a web page having subscribers) to collect a particular set of numbers as an input and then to
output and predict the next set of numbers in the right order they will come out of a particular RNG?
I know it is tough to do this because you need a RNG scrambler to try scramble bits in the 32-bit code used in the attacked RNG and then triger the generator an x amount of times and then compare the
results with the set of numbers you have earlier input to the application. Likewise a seed scrabler can be used and then do as described above, maybe. Once the program seeks and finds a matching
sequence then to attempt to give predictions of the next RNG's outputs.
I know the bits and seeds change periodically in sophisticated RNG apps and
I know there are a bunch of other things involved in it that make it a real challenge, worth of trying.
Also there could be quite profitable task if succeeded. In gambling for instance having your subscribers play for you and share profits making lists of output numbers that will play, with winning AND
losing bets (on purpose, so not to disturb casinos) but more winning ones of course to ensure profits.
Please do not consider the second part of my comment unethical because it isn't. If the gambling industry uses a computer program to generate pseudorandom number sequences I have the right to use my
computer program to help me calculate and advise me to play accordingly. Of course if you're winning constantly or you win large amounts of money, they 'll kick you out of the casino and that's
I 'll be at your disposal if you consider to reply at my thoughts.
Re: Cracking Random Number Generators - Part 4
The topic of predicting gambling websites is nothing new, it has been well covered by this paper:
Personally, I prefer to make money from contributing to society, beating an online gambling site does nothing to contribute to society.
|
{"url":"http://jazzy.id.au/default/2010/09/25/cracking_random_number_generators_part_4.html","timestamp":"2014-04-19T14:59:20Z","content_type":null,"content_length":"33383","record_id":"<urn:uuid:8c68b9f7-b017-4fcd-b55a-fe011738da29>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A question about groups of intermediate growth, II
This question arose in the comments of A question about groups of intermediate growth. I think it might be interesting to put it more in evidence.
Let $G$ be a f.g. group with a fixed symmetric set of generators $S$ and denote by $B(n)$ the ball of radius $n$ about the identity w.r.t. the word metric induced by $S$.
Fix an integer $k\geq1$ and define $\overline\zeta_k(G)=\lim\sup_n\frac{|B(nk+k)|}{|B(nk)|}$.
Observe that
1. If $G$ has polynomial growth, then $\overline\zeta_k(G)=1$, for all $k$.
2. If $\overline\zeta_k(G)=1$ for all $k$, then $G$ has sub-exponential growth.
General question: What can we say about $\overline\zeta_k(G)$ if $G$ has intermediate growth?
Martin Kassabov, in the comment to my question, suspects that it should be always (or most of the times) equals to $1$, but I cannot find even a single examples of a group of intermediate growth for
which it is equal to $1$. I have to say that my knowledge about groups of intermediate growth is very little and I just tried to apply Corollary 1.3 in http://arxiv.org/PS_cache/arxiv/pdf/1108/
1108.0262v1.pdf, but, as already observed by Martin, it is not strong enough to give an example of groups of intermediate growth whose $\overline\zeta_k(G)=1$.
Particular question: Is there an example of group of intermediate growth for which $\overline\zeta_k(X)=1$, for all $k$?
Thanks in advance,
1 there is not "easy proof" of the observation 1, and for observation 2 it is enough to have it for some $k$. – kassabov Nov 15 '11 at 16:41
Yes, I was sure about the second point... – Valerio Capraro Nov 15 '11 at 19:32
1 Answer
It seems that if $\bar \zeta_k > 1$ for some $k$ then $\bar \zeta_k > 1$ for all $k$. Also it is clear $\lim_k \sqrt[k]{\bar \zeta_k} = 1$.
My guess is that most known groups of intermediate growth satisfy $\bar \zeta_k=1$, but proving this requires very careful estimates for the size of the balls. Notice that until recently the growth type of any group of intermediate growth had not been computed, and that requires only a "crude" estimate of the growth type. You can see the recent papers of Bartholdi and Eschler: http://front.math.ucdavis.edu/1110.3650 and http://front.math.ucdavis.edu/1011.5266, where they have computed the growth type of many groups, but I feel that their estimates are far from what you need to get $\bar\zeta_k=1$.
the second claim (about the limit) is clear and follows from your argument in the other question. Can you please give me some details about the first claim: if $\overline\zeta_k >1$ for
some $k$, then $\overline\zeta_k>1$, for all $k$? Thank you very much in advance. – Valerio Capraro Nov 15 '11 at 22:27
Using that $B(n)$ is an increasing function one can easily see that $\sqrt{\zeta_k} \leq \zeta_l$ for $k<l$ -- if the ratio $B(n+k)/B(n)$ is large for some $n$ we can find $m$ such that
$m < n < n+k < m +l$ which implies that $B(m+l)/B(m)$ is large. However it might not be possible to ensure that $m$ is divisible by $l$, in this case we can pick $m$ between $n$ and $n+k$
and one of the ratios $B(m+l)/B(m)$ and $B(m)/B(m-l)$ will be large. – kassabov Nov 16 '11 at 11:20
Again using that $B(n)$ is an increasing we have $\zeta_k \leq \zeta_{nk} \leq (\zeta_k)^n$ -- just break the interval of size $nk$ into $n$ intervals of size $k$. These inequalities give
that if one $\zeta_k =1$ then all others are also equal to $1$. – kassabov Nov 16 '11 at 11:22
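Written as one display, the two comments above say that for $k < l$ and $n \ge 1$ (this is only a restatement of the estimates given there): $\sqrt{\bar\zeta_k} \le \bar\zeta_l$ and $\bar\zeta_k \le \bar\zeta_{nk} \le (\bar\zeta_k)^n$, so $\bar\zeta_k = 1$ for a single $k$ forces $\bar\zeta_k = 1$ for every $k$, and $\bar\zeta_k > 1$ for a single $k$ likewise forces it for every $k$.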
thank you very much! – Valerio Capraro Nov 22 '11 at 19:39
|
{"url":"http://mathoverflow.net/questions/80900/a-question-about-groups-of-intermediate-growth-ii","timestamp":"2014-04-21T01:22:27Z","content_type":null,"content_length":"58925","record_id":"<urn:uuid:d6d7578e-2a11-4482-9d0f-92ea597758b7>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Logarithms and Ropes (as found in Mathematician's Delight)
I recently got a copy of Mathematician's Delight, by W. W. Sawyer. I had loved his book Vision in Elementary Mathematics, so I knew I'd like this one. I found out about it through a blog post I stumbled upon in my wanderings, where the blogger included this:*
Nearly every subject has a shadow, or imitation. It would, I suppose, be quite possible to teach a deaf ... child to play the piano. ... [The child] would have learnt an imitation of music, and
would fear the piano exactly as most students fear what is supposed to be mathematics.
What is true of music is also true of other subjects. One can learn imitation history - kings and dates, but not the slightest idea of the motives behind it all; imitation literature - stacks of
notes on Shakespeare's phrases, and a complete destruction of the power to enjoy Shakespeare.
I think that idea, of a shadow subject, will stick with me, and become more powerful for me over time.
I've told my students logarithms were invented in a time when calculators didn't exist, and scientists were looking at lots of data about the planets, trying to discover patterns. Napier invented a
way to do multiplication by adding and division by subtracting, a second application of which allows powers and roots to also become questions of addition and subtraction. I don't think this is
enough of an introduction to this strange concept.
How did Napier dream this up? Sawyer gives us a glimmering of the sort of inspiration Napier might have had, with this marvelously concrete model for logarithms:
We are all familiar with machines which [we] use to multiply [our] own strength - pulleys, levers, gears, etc. Suppose you are fire-watching on the roof of a house, and have to lower an injured
comrade by means of a rope. It would be natural to pass the rope round some object, such as a post, so that the friction of the rope on the post would assist you in checking the speed of your
friend's descent. In breaking-in horses the same idea is used: a rope passes round a post, one end being held by a person, the other fastened to the horse. To get away, the horse would have to
pull many times harder than the person.
The effect of such an arrangement depends on the roughness of the rope. Let us suppose that we have a rope and a post which multiply one's strength by ten, when the rope makes one complete turn.
What will be the effect if we have a series of such posts? A pull of 1 pound at A is sufficient to hold 10 pounds at B, and this will hold 100 pounds at C, or 1000 pounds at D.
Thus, 10^8 will represent the effect of 8 posts. ... The number of turns required to get any number is called the logarithm of that number. ... So far we have spoken of whole turns. But the same
idea would apply to incomplete turns. ... Accordingly, 10^1/2 will mean the magnifying effect of half a turn. ... The logarithm of 2 will be that fraction of a turn which is necessary to magnify
your pull 2 times. (page 70)
I had to put the book down here, to ask myself why half a turn wouldn't magnify the pull 5 times - half of ten. As I thought about that, I wanted to know if there would be an easy way, either a
thought experiment or a very simple physical experiment (i.e., no special equipment), to prove that this relationship must be multiplicative. That is, how do we know the friction of the rope doesn't
add to our pulling force, so that a certain amount is added at each turn? (Can anyone help me with this?)
If we've decided that the relationship must be multiplicative, then we know that two half turns must multiply to have the effect of one whole turn, and that would mean we need the number that
multiplied by itself gives ten. To get to this thought, I had to imagine two posts near one another, with the rope halfway around one, and then halfway around the next.
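Spelling out that arithmetic: 10^{1/2} · 10^{1/2} = 10, so 10^{1/2} = √10 ≈ 3.16. Half a turn multiplies the pull by about 3.16, not by 5.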
Why haven't I seen this before?!
I haven't read any more of the book yet, because I keep needing to think more about this cool idea. I look forward to more pedagogical delights as I keep reading this book, and maybe others he wrote.
(One list is at the bottom of this page.)
*W. W. Sawyer wrote this book in 1943, long before feminists began to analyze the effect of using the male for the generic. Although Sawyer uses 'man' and 'he' in a generic sense in other sections
(which I've taken the liberty of changing in the second quote I've used), perhaps he was trying to avoid that in this story by calling the deaf child of his music example 'it'. I had real trouble
with that, and didn't know how to fix it without messing with his meaning, so I left the meat of the example out. You can go here to see it.
13 comments:
1. Hmm, neat!
But also: Why should two half turns (on different ropes) have the same effect as a whole turn?
2. I'm picturing one rope, similar to the situation pictured. Two half turns would give the same amount of surface for friction as one whole turn, right?
3. Well, I just don't understand these things. It makes sense for surface area to be involved, but I also figure force is involved, and maybe angles make their way into that... makes sense that it
would all work out the way you said, but I don't understand these things.
4. I am actually highly skeptical that the physical relationship being described here really is multiplicative. It works as a thought experiment but I feel like it's fudging the physical reality.
You could get an honestly multiplicative model with pulleys, but it wouldn't have the virtue of supporting the fractional exponents.
I haven't experimented with it myself but it actually seems to me that your historical-motivation intro to logs has the potential to be enormously powerful. In order to have this idea deliver
fully, I think you'd need to figure out a sequence of questions you could ask that would lead your students to generate the idea that multiplication can be reduced to addition, or at least to
become incipiently aware of it without needing to be told directly. For example, giving them an extensive set of multiplication problems involving only powers of 10? And helping them realize that
they can solve them quickly just by amalgamating the powers? Like, "what's 100 x 1,000? what's 1,000 x 100,000?" etc. And then highlighting for them the fact that in order to solve these problems
really they are solving addition problems. And then maybe another sequence of problems involving powers of 2? So that 32 x 8 becomes a simple matter of 5 + 3? Or something. I'm just brainstorming
here. But the idea would be to get them to recognize, in the context of whole-number multiplication they can already do other ways, that if you can see numbers as powers of a certain base, then
you can multiply them by adding.
5. I agree that this may be difficult to model theoretically. The only thing left is experiment. I will try this out next week and see what I get.
6. Rhett has done the experiment, and posted at his blog. (Thanks, Rhett!!)
It looks like it's probably multiplicative. Ben, I have no good instincts for physics. I want this to be either additive or multiplicative, and the data come closer to multiplicative. Does it
feel to you like it might be more complex? Can you say why?
Rhett, you said, "The normal model for friction says that the frictional force is proportional only to the force the two surfaces are pushing against each other. Not sure if that works here." Can
you say more about how friction works?
7. I was convinced it was multiplicative by imagining what happens with a rope going 0 turns around. Obviously that multiplies the force by 1, not 0.
But then I thought, wait a minute, that's assuming it's multiplicative already!
Maybe it's clear enough that the change in tension in the rope as it wraps around a bit of pole is proportional to the tension?
8. There's a good treatment of this with free body diagrams and all that physics stuff over at http://www.leancrew.com/all-this/2010/04/aye-aye-capstan/
Joshua writes: Maybe it's clear enough that the change in tension in the rope as it wraps around a bit of pole is proportional to the tension?
Yeah, this is the key to the argument that the relationship must be multiplicative. The friction from each bit of pole is what allows the tension to change along the length of the rope (otherwise
the rope would relax so as to equalize the tension). But the friction is proportional to the normal force between the rope and the pole, which is proportional to the tension. And a function which
is proportional to its own derivative is exponential.
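Written out, that argument is just the capstan equation: with μ the coefficient of friction and θ the angle of wrap, dT/dθ = μT, so T(θ) = T_0 e^{μθ}. One full turn therefore multiplies the holding power by the fixed factor e^{2πμ}, and half a turn gives the square root of that factor rather than half of it, which is exactly the multiplicative behavior the post asks about.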
9. Thanks, Orawnzva. I'm going to print out the post at that site, and study it.
10. Incidentally, my LJ handle is related to my real name by a well-known substitution cipher.
I'm a CS graduate student interested in teaching, having been so far blessed and challenged by my work as a teaching assistant and frustrated and underwhelmed by my work as a research assistant.
I added your blog (and other math teaching blogs) to my reading list recently, so you'll probably be seeing more comments from me on scattered posts, current and past, until I catch up.
11. Hi, I'm the author of the post Orawnzva linked to. As you figured, the behavior is multiplicative, and Orawnzva's explanation is quite good.
It's actually not that difficult to model theoretically; you just have to have some experience with the notion of slicing objects up into differential chunks and applying Newton's laws—and in
this case, Coulomb's law—to a typical chunk. Once you've done that, it's just a differential equations problem.
Although my original article was about a single pole, it's easy to extend the analysis to several poles, which I just did.
12. Hope to have one copy of that Math Delight book to be able to see how I could use it to explore more my knowledge in math. Thanks for the link.
13. The last comment just brought me back here. I've just now printed out Dr. Drang's post to study it. I would really like to understand this, and I don't yet.
|
{"url":"http://mathmamawrites.blogspot.com/2010/05/logarithms-and-ropes-as-found-in.html","timestamp":"2014-04-18T08:03:46Z","content_type":null,"content_length":"135004","record_id":"<urn:uuid:2bc6b809-4875-4b1f-9228-f64d38d4e425>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MTMS: April 2005, Volume 10, Issue 8
Every Story Tells a Picture
John Maus
A connection between literature and mathematics through a story-graphing project. Students learn how to transfer ideas from story graphs to make them mathematical in nature. Teachers will learn how
to teach students this skill and to implement it in the classroom.
John Henry - The Steel Driving Man
David Murphy, Laura Gulley
The story of John Henry provided the setting for sixth-grade class to participate in a John Henry Day of mathematics experiments. The students collected data from experiments where students competed
against machines and technology. The student analyzed the data by comparing two box plots, a box plot of human data, and a box plot of machine or technology data.
Poetry in Motion: Using Shel Silverstein's Works to Engage Students in Mathematics
Jennifer Bay-Williams
Four Shel Silverstein works and how they can be used to launch mathematics investigations across the curriculum. Different activities that involve literature and mathematics are showcased so that
teachers may implement these activities in the classroom.
The Power of Two: Linking Mathematics and Literature
Ron Zambo
A series of mathematics activities based on the book One Grain of Rice. Multiple mathematical explorations are discussed with activities that can be implemented in the classroom, integrating many
subject areas.
Harry Potter and the Magic of Mathematics
Betsy McShea, Judith Vogel, Maureen Yarnevich
How teachers can use the Harry Potter book series to teach linear modeling and probability to their students. Different activities focusing on different mathematical concepts are discussed and can be
used in the middle grades classroom.
Making a Million Meaningful
Kim Ellett
A variety of activities that can be used with middle school students to investigate the size of the number 1,000,000. The activities focus on real-life applications of mathematical proportions.
You Read Me a Story, I Will Read You a Pattern
Charyl Pace
A unit for seventh-grade students, using children's literature to teach visual, auditory, and algebraic patterns. Teachers will learn to use a variety of literature materials to introduce different
kinds of patterns and thus, different kinds of mathematics.
|
{"url":"http://www.nctm.org/publications/toc.aspx?jrnl=MTMS&mn=4&y=2005","timestamp":"2014-04-24T20:51:45Z","content_type":null,"content_length":"45512","record_id":"<urn:uuid:f31ab197-40e4-473e-8ab9-dceecdac0f1a>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Wrong proof in textbook of optics?
where k, kr and kt are wave vectors of incident, reflected and transmitted wave.
... so what is ##\vec{r}## ?
I think you may have missed part of the proof out though... merely showing that three vectors have the same extent against a fourth vector does not show that they are all co-planar. All you have to
do is rotate one of the first three about the fourth to see this.
|
{"url":"http://www.physicsforums.com/showpost.php?p=4265427&postcount=2","timestamp":"2014-04-20T05:48:32Z","content_type":null,"content_length":"7988","record_id":"<urn:uuid:60b7974f-1ed1-42a5-9627-9d08e8977ca8>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Diminishing Marginal Utility of Hoarding: Why Being a Stingy Miser Doesn’t Pay
The other day I was playing around with an Excel spreadsheet I made. I was looking at how much you need to save to reach your retirement goals. At one point, I thought to myself, “What happens if you
save more than necessary? How much will it increase your chances of reaching your retirement goal?” The answer I found was very interesting and backs up a verse in Proverbs. But before I can tell you
what I discovered, I’ll have to explain what I was doing and how I was doing it.
Monte Carlo Analysis
In financial planning, we use Monte Carlo analysis to simulate random stock market returns among other things. When you’re planning your retirement, it doesn’t make sense to assume you’re going to
get an 8% return every single year. The stock market just doesn’t work that way. Monte Carlo analysis gives a more realistic, though not perfect, representation of how the stock market actually
delivers returns.
Monte Carlo analysis uses the assumptions you give it to randomly pick numbers within a specific range. I used Monte Carlo analysis to pick random stock market returns within a range based on
historic performance results. I looked at how a person’s savings would grow as they invest for 45 years to reach their retirement goals.
Each 45 year investment period is called a “trial”. The success or failure of a trial depends on whether or not you reach your goal at the end of the period. I was testing to see whether you would
have accumulated enough money to retire at the end of 45 years based on your retirement goals.
Running just one trial isn’t enough. To get a meaningful result, you have to run thousands of trials. To figure out your success rate, you divide the number of successful trials by the total number
of trials you ran. In my example, I ran 5,000 trials (that means 5,000 sets of 45 year investment periods). An 80% success rate would mean that 4,000 out of my 5,000 trials were successful.
Monte Carlo analysis is useful because it incorporates the uncertainty of the stock market into your retirement planning. It has some limitations, but it’s the best we can do for trying to predict
the future. The stock market doesn’t work exactly the way the model works, and there’s also the question of what a good result should be. Traditionally, a success rate of 80% or higher is “good”
because there are so many assumptions built in to the model. Trying to go for a higher success rate means you’re placing much more importance on your assumptions being correct.
I found that saving 20% of your desired retirement income and increasing it by inflation each year would give you an 82% success rate to reach your retirement goals. That’s pretty good, but then I
wondered what would happen if you saved even more. How much would your success rate increase if you saved even more?
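For readers who want to tinker, here is a minimal Python sketch of this kind of trial. Every number in it (the 8% mean return, 15% volatility, 3% inflation, and the 25-times-income target) is an illustrative assumption of mine, not the exact setup behind the spreadsheet described above.

import random

def run_trial(years=45, income_goal=50000, savings_rate=0.20,
              mean_return=0.08, sd_return=0.15, inflation=0.03):
    # Save a fixed share of the desired retirement income, growing with inflation,
    # while the balance earns a random market return each year.
    balance = 0.0
    contribution = savings_rate * income_goal
    for _ in range(years):
        balance = balance * (1 + random.gauss(mean_return, sd_return)) + contribution
        contribution *= 1 + inflation
    # "Success": enough to fund the (inflated) income goal at a 4% withdrawal rate.
    target = 25 * income_goal * (1 + inflation) ** years
    return balance >= target

trials = 5000
successes = sum(run_trial() for _ in range(trials))
print(f"Success rate: {successes / trials:.1%}")

Changing savings_rate and re-running shows the same flattening pattern discussed in the results below.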
The Results: The Diminishing Marginal Utility of Hoarding
So I proceeded to run a Monte Carlo analysis at different savings rates. I started at 0% and increased it by 5% for each new analysis all the way up to 100%. Here’s a graph of my results:
As you can see, saving 0% gives you a 0% chance of reaching retirement – which makes sense, right? Saving 10% gives you about a 60% success rate, and saving 20% gives you an 82% success rate. But do
you notice the interesting part? As you begin to save more than 20%, your chances of successfully reaching your retirement goals go up less and less. After you hit that 20% savings mark you don’t get
very much bang for your buck.
It might be easier to see what I’m talking about using this chart:
So when you go from saving 0% for retirement to saving 5%, you increase your chances of success by 37%. If you go from 5% to 10%, you increase your success rate by another 23% giving you a success
rate of 60%. From 10% to 15% increases your chances of success by 13%, and from 15% to 20% gives you another 9% increase. Once you get to 20% though, saving another 5% only increases your success
rate by 3%. Every little bit more that you save gives you a smaller and smaller increase in your chances of success.
This shows what I call “the diminishing marginal utility of hoarding”. In economics, the law of diminishing marginal utility says that for each additional unit you use you get less satisfaction than
you did with the last one. For example, eating one chocolate bar tastes good. A second one right after doesn’t taste quite as good, the third a little less so, and so on. Eating seven chocolate bars
in a row just gives you a sick stomach.
What we’re seeing here is the law of diminishing marginal utility applied to saving. Saving money for retirement is good. But once you get to a certain point (which depends on how long you have until
retirement and how much you have already saved), saving more and more doesn’t increase your chances of success quite as much as it did before.
How Can This Be True?
Because Monte Carlo analysis is looking at thousands of possible scenarios, you’re going to have some scenarios where the stock market loses money for several years in a row. While that’s (hopefully)
not as likely in real life, saving more and more isn’t going to help you much if that happens. You’ll just keep losing the money, and the impact is even greater if you already have a lot saved. So
this phenomenon is partly due to the method we’re using, but it also illustrates a fundamental truth – being stingy doesn’t help you quite as much as you might think it will.
What God Has to Say about It
I was so excited to see these results because they help illustrate some of God’s wisdom about giving:
^24 There is one who scatters, and increases yet more. There is one who withholds more than is appropriate, but gains poverty. ^25 The liberal soul shall be made fat. He who waters shall be
watered also himself.
Proverbs 11:24-25 (WEB)
Maybe you’re thinking I didn’t really prove that point, and you’d be right if you’re only thinking about dollars and cents. When Jesus talked about giving our money to the poor, He never said that it
would make us rich in this life. When we give to honor God, we store up treasures in Heaven. This is precisely how one person can give away a lot of his money and become wealthier while another is
stingy but becomes poor.
Being a stingy miser won’t give you a better chance of reaching your retirement goals. Once you’re saving enough, you have to be content that you’re doing what you should and hand the rest over to
God. Hoarding money for yourself doesn’t help you that much in this life, and it will severely impoverish you in the next.
So how do you know when you’re saving enough? To find out, sign up for free updates to Provident Planning. I’ll be examining that question and many more that will help you prepare for a retirement
that honors God and live a life that glorifies His name.
|
{"url":"http://www.providentplan.com/330/the-diminishing-marginal-utility-of-hoarding-why-being-a-stingy-miser-doesnt-pay/","timestamp":"2014-04-20T13:18:59Z","content_type":null,"content_length":"49108","record_id":"<urn:uuid:830febae-fc10-4ad1-bdef-32c5d412e580>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
|
“RECOMB-AB brings together leading researchers in the mathematical, computational, and life sciences to discuss interesting, challenging, and well-formulated open problems in algorithmic biology.”
As someone working in the field of “algorithmic biology” (which, I guess, could be defined as the application of techniques from computer science, discrete mathematics, combinatorial optimization and
operations research to computational biology problems) I was, predictably, immediately enthusiastic about the conference.
However, what really caught my attention was the following paragraph:
“The discussion panels at RECOMB-AB will also address the worrisome proliferation of ill-formulated computational problems in bioinformatics. While some biological problems can be translated into
well-formulated computational problems, others defy all attempts to bridge biology and computing. This may result in computational biology papers that lack a formulation of a computational problem
they are trying to solve. While some such papers may represent valuable biological contributions (despite lacking a well-defined computational problem), others may represent computational
'pseudoscience.' RECOMB-AB will address the difficult question of how to evaluate computational papers that lack a computational problem formulation.”
Calls-for-participation rarely strike such a negative tone. However, in this case I think the conference organizers have highlighted an extremely important point. Problems arising in computational
biology are inherently complex and this entails a bewildering number of parameters and degrees of freedom in the underlying models. Furthermore, it is commonplace for computational biology articles
to utilize a large number of intermediate algorithms and software packages to perform auxiliary processing, and this further compounds the number of unknowns (and the inaccuracies) in the system.
All this is, to a certain extent, inevitable. However, this complexity sometimes seems to have become an end in itself. This would be harmless except for the fact that scientists subsequently attempt
to draw biological conclusions from this mass of data. Rarely is the question asked: is there actually any “biological signal” left amongst all those numbers? Would we have obtained similar results
if we had just fed random noise into the system?
The fact that these questions are not posed, is directly linked to the lack of a clear and explicitly articulated optimization criterion. In other words: just what are we trying to optimize exactly?
What makes one solution “better” than another? What, at the end of the day, is the question that we are trying to answer? This is exactly what RECOMB-AB is getting at with the sentence, “This may
result in computational biology papers that lack a formulation of a computational problem they are trying to solve”. The articulation might be slightly formal, but the point they raise is
nevertheless fundamental.
It remains to be seen what kind of a role phylogenetic networks will play at RECOMB-AB, if any. For sure, the field of phylogenetic networks continues to generate a vast number of fascinating open
algorithmic problems. However, are the underlying biological models precise enough to allow us to say that we are actually producing biologically-meaningful output? Overall, I think the answer is
still no. However, I think that there is reason for optimism. The field is young and evolving and it is likely that both biologists and algorithmic scientists will have a significant role in shaping
its future. Hopefully this interplay will allow us to move forward on the biological front without losing sight of the need for explicit optimization criteria.
|
{"url":"http://phylonetworks.blogspot.com/2012/03/recomb-ab.html","timestamp":"2014-04-21T03:05:18Z","content_type":null,"content_length":"103869","record_id":"<urn:uuid:cda51390-0999-4d5e-a04e-a9f7e5b4618a>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
|
b-Metric Space Endowed with Graph
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 967132, 9 pages
Research Article
Some Fixed Point Theorems in b-Metric Space Endowed with Graph
^1Centre for Advanced Mathematics and Physics, National University of Sciences and Technology, Sector, H-12, Islamabad, Pakistan
^2Department of Mathematics, Quaid-i-Azam University, Islamabad, Pakistan
^3Department of Mathematics, King Abdulaziz University, Jeddah, Saudi Arabia
Received 6 July 2013; Revised 5 September 2013; Accepted 6 September 2013
Academic Editor: Douglas Anderson
Copyright © 2013 Maria Samreen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Linked References
1. I. A. Bakhtin, “The contraction mapping principle in almost metric spaces,” Journal of Functional Analysis, vol. 30, pp. 26–37, 1989.
2. J. Heinonen, Lectures on Analysis on Metric Spaces, Springer, New York, NY, USA, 2001.
3. N. Bourbaki, Topologie Generale, Herman, Paris, France, 1974.
4. S. Czerwik, “Nonlinear set-valued contraction mappings in $b$-metric spaces,” Atti del Seminario Matematico e Fisico dell'Università di Modena, vol. 46, no. 2, pp. 263–276, 1998.
5. M. Bota, A. Molnár, and C. Varga, “On Ekeland's variational principle in $b$-metric spaces,” Fixed Point Theory, vol. 12, no. 1, pp. 21–28, 2011.
6. M. F. Bota-Boriceanu and A. Petruşel, “Ulam-Hyers stability for operatorial equations,” Analele Stiintifice ale Universitatii “Al. I. Cuza” din Iasi, vol. 57, supplement 1, pp. 65–74, 2011.
7. M. Păcurar, “A fixed point result for $\phi$-contractions on $b$-metric spaces without the boundedness assumption,” Polytechnica Posnaniensis. Institutum Mathematicum. Fasciculi Mathematici, no. 43, pp. 127–137, 2010.
8. V. Berinde, “Sequences of operators and fixed points in quasimetric spaces,” Studia Universitatis Babeş-Bolyai, vol. 41, no. 4, pp. 23–27, 1996.
9. S. Banach, “Sur les operations dans les ensembles abstraits et leur application aux equations integrales,” Fundamenta Mathematicae, vol. 3, pp. 133–181, 1922.
10. J. Jachymski, “The contraction principle for mappings on a metric space with a graph,” Proceedings of the American Mathematical Society, vol. 136, no. 4, pp. 1359–1373, 2008.
11. A. Petruşel and I. A. Rus, “Fixed point theorems in ordered l-spaces,” Proceedings of the American Mathematical Society, vol. 134, no. 2, pp. 411–418, 2006.
12. J. Matkowski, “Integrable solutions of functional equations,” Dissertationes Mathematicae, vol. 127, p. 68, 1975.
13. D. O'Regan and A. Petruşel, “Fixed point theorems for generalized contractions in ordered metric spaces,” Journal of Mathematical Analysis and Applications, vol. 341, no. 2, pp. 1241–1252, 2008.
14. R. P. Agarwal, M. A. El-Gebeily, and D. O'Regan, “Generalized contractions in partially ordered metric spaces,” Applicable Analysis, vol. 87, no. 1, pp. 109–116, 2008.
15. I. A. Rus, Generalized Contractions and Applications, Cluj University Press, Cluj-Napoca, Romania, 2001.
16. V. Berinde, Contractii Generalizate si Aplicatii, vol. 22, Editura Cub Press, Baia Mare, Romania, 1997.
17. V. Berinde, “Generalized contractions in quasimetric spaces,” Seminar on Fixed Point Theory, vol. 3, pp. 3–9, 1993.
18. T. P. Petru and M. Boriceanu, “Fixed point results for generalized $\phi$-contraction on a set with two metrics,” Topological Methods in Nonlinear Analysis, vol. 33, no. 2, pp. 315–326, 2009.
19. R. P. Agarwal, M. A. Alghamdi, and N. Shahzad, “Fixed point theory for cyclic generalized contractions in partial metric spaces,” Fixed Point Theory and Applications, vol. 2012, article 40, 11 pages, 2012.
20. A. Amini-Harandi, “Coupled and tripled fixed point theory in partially ordered metric spaces with application to initial value problem,” Mathematical and Computer Modelling, vol. 57, no. 9-10, pp. 2343–2348, 2012.
21. H. K. Pathak and N. Shahzad, “Fixed points for generalized contractions and applications to control theory,” Nonlinear Analysis: Theory, Methods & Applications, vol. 68, no. 8, pp. 2181–2193, 2008.
22. S. M. A. Aleomraninejad, Sh. Rezapour, and N. Shahzad, “Some fixed point results on a metric space with a graph,” Topology and Its Applications, vol. 159, no. 3, pp. 659–663, 2012.
23. M. Samreen and T. Kamran, “Fixed point theorems for integral G-contractions,” Fixed Point Theory and Applications, vol. 2013, article 149, 2013.
24. J. J. Nieto and R. Rodríguez-López, “Contractive mapping theorems in partially ordered sets and applications to ordinary differential equations,” Order, vol. 22, no. 3, pp. 223–239, 2005.
25. A. C. M. Ran and M. C. B. Reurings, “A fixed point theorem in partially ordered sets and some applications to matrix equations,” Proceedings of the American Mathematical Society, vol. 132, no. 5, pp. 1435–1443, 2004.
26. G. Gwóźdź-Łukawska and J. Jachymski, “IFS on a metric space with a graph structure and extensions of the Kelisky-Rivlin theorem,” Journal of Mathematical Analysis and Applications, vol. 356, no. 2, pp. 453–463, 2009.
27. G. E. Hardy and T. D. Rogers, “A generalization of a fixed point theorem of Reich,” Canadian Mathematical Bulletin, vol. 16, pp. 201–206, 1973.
28. R. Kannan, “Some results on fixed points,” Bulletin of the Calcutta Mathematical Society, vol. 60, pp. 71–76, 1968.
29. S. K. Chatterjea, “Fixed-point theorems,” Doklady Bolgarskoĭ Akademii Nauk, vol. 25, pp. 727–730, 1972.
30. W. A. Kirk, P. S. Srinivasan, and P. Veeramani, “Fixed points for mappings satisfying cyclical contractive conditions,” Fixed Point Theory, vol. 4, no. 1, pp. 79–89, 2003.
31. M. Păcurar and I. A. Rus, “Fixed point theory for cyclic $\phi$-contractions,” Nonlinear Analysis: Theory, Methods & Applications, vol. 72, no. 3-4, pp. 1181–1187, 2010.
32. F. Bojor, “Fixed point theorems for Reich type contractions on metric spaces with a graph,” Nonlinear Analysis: Theory, Methods and Applications, vol. 75, no. 9, pp. 3895–3901, 2012.
33. M. A. Petric, “Some remarks concerning Ćirić-Reich-Rus operators,” Creative Mathematics and Informatics, vol. 18, no. 2, pp. 188–193, 2009.
|
{"url":"http://www.hindawi.com/journals/aaa/2013/967132/ref/","timestamp":"2014-04-17T13:22:10Z","content_type":null,"content_length":"36024","record_id":"<urn:uuid:7b334c7d-699f-41fa-a04f-5c8476598490>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can anyone help me with a couple of questions about rate of change?
Cathy lives in a state where speeders are fined $11 for each mile per hour over the speed limit. Cathy was given a ticket for $99 for speeding on a road where the speed limit is 65 miles per
hour. How fast was Cathy driving? 71 mph 79 mph 74 mph 78 mph
Let x be the number of "speeding" miles per hour Cathy is going. 11x <-- this is $11 for each "speeding" mile. So if Cathy was given a $99 ticket, we could write an equation like this: 11x = 99 ($11 per mile over the limit equals 99 total dollars). Then let's solve for x! Hmm, since the speed limit is 65, our final step will be to add x to 65.
@zepdrix what's x
74 mph got it.
Yay gj \c:/
Forrest Lumber purchased a table saw for $810. After 2 years the saw had a depreciated value of $630. What is the amount of yearly depreciation? @zepdrix can you help me with this? $90 $75 $180
|
{"url":"http://openstudy.com/updates/50771326e4b02f109be3b6c4","timestamp":"2014-04-17T16:04:34Z","content_type":null,"content_length":"40065","record_id":"<urn:uuid:5ebc27c7-bd0d-4630-945a-e2211b23e136>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear logic, coherence and dinaturality
Results 11 - 20 of 24
- In Proc. Category Theory and Computer Science (CTCS'99), Electron , 1999
"... We introduce a family of explicit substitution type theories as internal languages for autonomous (or symmetric monoidal closed) and -autonomous categories, in the same sense that the
simply-typed -calculus with surjective pairing is the internal language for cartesian closed categories. We show tha ..."
Cited by 7 (2 self)
We introduce a family of explicit substitution type theories as internal languages for autonomous (or symmetric monoidal closed) and -autonomous categories, in the same sense that the simply-typed
-calculus with surjective pairing is the internal language for cartesian closed categories. We show that the eight equality and three commutation congruence axioms of the -autonomous type theory
characterise -autonomous categories exactly. The associated rewrite systems are all strongly normalising; modulo a simple notion of congruence, they are also confluent. As a corollary, we solve a
Coherence Problem a la Lambek [12]: the equality of maps in any -autonomous category freely generated from a discrete graph is decidable. 1 Introduction In this paper we introduce a family of type
theories which can be regarded as internal languages for autonomous (or symmetric monoidal closed) and -autonomous categories, in the same sense that the standard simply-typed -calculus with
surjective pairing is...
- Linear Logic in Computer Science , 2004
"... This paper presents an introduction to category theory with an emphasis on those aspects relevant to the analysis of the model theory of linear logic. With this in mind, we focus on the basic
definitions of category theory and categorical logic. An analysis of cartesian and cartesian closed categori ..."
Cited by 7 (1 self)
This paper presents an introduction to category theory with an emphasis on those aspects relevant to the analysis of the model theory of linear logic. With this in mind, we focus on the basic
definitions of category theory and categorical logic. An analysis of cartesian and cartesian closed categories and their relation to intuitionistic logic is followed by a consideration of symmetric
monoidal closed, linearly distributive and ∗-autonomous categories and their relation to multiplicative linear logic. We examine nonsymmetric monoidal categories, and consider them as models of
noncommutative linear logic. We introduce traced monoidal categories, and discuss their relation to the geometry of interaction. The necessary aspects of the theory of monads is introduced in order
to describe the categorical modelling of the exponentials. We conclude by briefly describing the notion of full completeness, a strong form of categorical completeness, which originated in the
categorical model theory of linear logic. No knowledge of category theory is assumed, but we do assume knowledge of linear logic sequent calculus and the standard models of linear logic, and modest
familiarity with typed lambda calculus. 0
- UNDER CONSIDERATION FOR PUBLICATION IN MATH. STRUCT. IN COMP. SCIENCE , 2007
"... We discuss a general way of defining contexts in linear logic, based on the observation that linear universal algebra can be symmetrized by assigning an additional variable to represent the
output of a term. We give two approaches to this, a syntactical one based on a new, reversible notion of term, ..."
Cited by 6 (2 self)
We discuss a general way of defining contexts in linear logic, based on the observation that linear universal algebra can be symmetrized by assigning an additional variable to represent the output of
a term. We give two approaches to this, a syntactical one based on a new, reversible notion of term, and an algebraic one based on a simple generalization of typed operads. We relate these to each
other and to known examples of logical systems, and show new examples, in particular discussing the relationship between intuitionistic and classical systems. We then present a general framework for
extracting deductive systems from a given theory of contexts, and give a generic proof that all these systems have cut-elimination.
, 1998
"... We introduce a family of type theories as internal languages for autonomous (or symmetric monoidal closed) and -autonomous categories, in the same sense that simply-typed -calculus (augmented by
appropriate constructs for products and the terminal object) is the internal language for cartesian clos ..."
Cited by 5 (4 self)
We introduce a family of type theories as internal languages for autonomous (or symmetric monoidal closed) and -autonomous categories, in the same sense that simply-typed -calculus (augmented by
appropriate constructs for products and the terminal object) is the internal language for cartesian closed categories. The rules are presented in the style of Gentzen's Sequent Calculus. A key
feature is the systematic treatment of naturality conditions by explicitly representing the categorical composition, or cut in the type theory, by explicit substitution, and the introduction of new
let-constructs, one for each of the three type constructors ?;\Omega and (, and a Parigot-style ¯-abstraction to give expression to the involutive negation. The commutation congruences of these
theories are precisely those imposed by the naturality conditions. In particular the type theory for -autonomous categories may be regarded as a term assignment system for the multiplicative (\Omega
; (;?;?)-fragmen...
, 1996
"... In this paper, we introduce the system CMLL of sequent calculus and establish its correspondence with compact closed categories. CMLL is equivalent in provability to the system MLL of classical
linear logic with the tensor and par connectives identified. We show that the system allows a fairly simpl ..."
Cited by 5 (0 self)
In this paper, we introduce the system CMLL of sequent calculus and establish its correspondence with compact closed categories. CMLL is equivalent in provability to the system MLL of classical
linear logic with the tensor and par connectives identified. We show that the system allows a fairly simple cut-elimination, and the proofs in the system have a natural interpretation in compact
closed categories. However, the soundness of the cut-elimination procedure in terms of the categorical interpretation is by no means evident. We answer to this question affirmatively and establish
the soundness by using the coherence result on compact closed categories by Kelly and Laplaza. 1 Introduction In this paper, we introduce the system CMLL of sequent calculus and establish its
correspondence with compact closed categories. CMLL is equivalent in provability to the system MLL of classical linear logic with the tensor ffl and par O connectives identified. Compact closed
categories are abundant in ...
, 1994
"... It has been observed by several people that, in certain contexts, the free symmetric algebra construction can provide a model of the linear modality ! . This construction arose independently in
quantum physics, where it is considered as a canonical model of quantum field theory. In this context, the ..."
Cited by 3 (0 self)
It has been observed by several people that, in certain contexts, the free symmetric algebra construction can provide a model of the linear modality ! . This construction arose independently in
quantum physics, where it is considered as a canonical model of quantum field theory. In this context, the construction is known as (bosonic) Fock space. Fock space is used to analyze such quantum
phenomena as the annihilation and creation of particles. There is a strong intuitive connection to the principle of renewable resource, which is the philosophical interpretation of the linear
modalities. In this paper, we examine Fock space in several categories of vector spaces. We first consider vector spaces, where the Fock construction induces a model of the\Omega ; &; ! fragment in
the category of symmetric algebras. When considering Banach spaces, the Fock construction provides a model of a weakening cotriple in the sense of Jacobs. While the models so obtained model a smaller
fragment, it is cl...
- TCS , 1999
"... Many different notions of "program property", and many different methods of verifying such properties, arise naturally in programming. We present a general framework of Specification Structures
for combining different notions and methods in a coherent fashion. We then apply the idea of spe ..."
Cited by 3 (1 self)
Many different notions of "program property", and many different methods of verifying such properties, arise naturally in programming. We present a general framework of Specification
Structures for combining different notions and methods in a coherent fashion. We then apply the idea of specification structures to concurrency in the setting of Interaction Categories. As a specific
example, a certain specification
"... We introduce a family of type theories as internal languages for autonomous (or symmetric monoidal closed) and -autonomous categories, in the same sense that the simply-typed -calculus with
surjective pairing is the internal language for cartesian closed categories. The rules for the typing judgeme ..."
Cited by 1 (0 self)
We introduce a family of type theories as internal languages for autonomous (or symmetric monoidal closed) and -autonomous categories, in the same sense that the simply-typed -calculus with
surjective pairing is the internal language for cartesian closed categories. The rules for the typing judgements are presented in the style of Gentzen's Sequent Calculus. A notable feature is the
systematic treatment of naturality conditions by expressing the categorical composition, or cut in the type theory, by explicit substitution. We use let-constructs, one for each of the three type
constructors ?;\Omega and (, to witness the left-introduction rules, and a Parigot-style ¯-abstraction to express the involutive negation ?. We show that the eight equality and three commutation
congruence axioms of the -autonomous type theory characterise -autonomous categories exactly. More precisely we prove that there is a canonical interpretation of the (-autonomous) type theories in
-autonomous categorie...
, 1999
"... this article, we attack the converse problem of explaining semantics as an artifact of syntax, in other words, of extracting the meaning of a program from syntactical considerations on its
dynamics, or the way it interacts with the environment. We start the analysis with a very simple slogan, where ..."
Cited by 1 (0 self)
this article, we attack the converse problem of explaining semantics as an artifact of syntax, in other words, of extracting the meaning of a program from syntactical considerations on its dynamics,
or the way it interacts with the environment. We start the analysis with a very simple slogan, where we use module to mean procedure, in the fashion of (Girard 1987b):
, 2002
"... system for finite products and sums, and proved decidability of equality of morphisms. The question remained ∗ as to whether one can present free categories with finite products and sums in a
canonical way, i.e., as a category with morphisms and composition defined directly, rather than modulo equiv ..."
Cited by 1 (1 self)
system for finite products and sums, and proved decidability of equality of morphisms. The question remained ∗ as to whether one can present free categories with finite products and sums in a
canonical way, i.e., as a category with morphisms and composition defined directly, rather than modulo equivalence relations. This paper shows that the non-empty case (i.e., omitting initial and
final objects) can be treated in a surprisingly simple way: morphisms of the free category can be viewed as certain binary relations, with composition the usual composition of binary relations. In
particular, there is a forgetful functor into the category Rel of sets and binary relations. The paper ends by relating these binary relations to proof nets. 1
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1369735&sort=cite&start=10","timestamp":"2014-04-21T16:34:13Z","content_type":null,"content_length":"38326","record_id":"<urn:uuid:ed5ddfcd-acf2-46cd-ad78-9655f7e1864a>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00115-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Page:O. F. Owen's Organon of Aristotle Vol. 1 (1853).djvu/195
This page has been proofread, but needs to be validated.
necessity be either true or false. From true propositions then we cannot infer a falsity, but from false premises we may infer the truth, except that not the why, but the mere that (is inferred),
since there is not a syllogism of the why from false premises, and for what reason shall be told hereafter.
First then, that we cannot infer the false from true premises, appears from this: if when A is, it is necessary that B should be, when B is not it is necessary that A is not, if therefore A is true,
B is necessarily true, or the same thing (A) would at one and the same time be and not be, which is impossible. Neither must it be thought, because one term, A, is taken, that from one certain thing
existing, it will happen that something will result from necessity, since this is not possible, for what results from necessity is the conclusion, and the fewest things through which this arises are
three terms, but two intervals and propositions. If then it is true that with whatever B is A also is, and that with whatever C is B is, it is necessary that with whatever C is A also is, and this
cannot be false, for else the same thing would exist and not exist at the same time. Wherefore A is laid down as one thing, the two propositions being co-assumed. It is the same also in negatives,
for we cannot show the false from what are true; but from false propositions we may collect the truth, either when both premises are false, or one only, and this not indifferently, but the minor, if
it comprehend the whole false, but if the whole is not assumed to be false, the true may be collected from either. Now let A be with the whole of C, but with no B, nor B with C,
|
{"url":"http://en.wikisource.org/wiki/Page%3AO._F._Owen's_Organon_of_Aristotle_Vol._1_(1853).djvu/195","timestamp":"2014-04-17T19:50:07Z","content_type":null,"content_length":"24502","record_id":"<urn:uuid:94e30a89-18a7-4091-884c-35bf10bc1ec6>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Is randomness native to computer science
- FUNDAMENTA INFORMATICAE , 2002
"... We consider the notion of algorithmic randomness relative to an oracle. We prove that the probability # that a program for infinite computations (a program that never halts) outputs a cofinite
set is random in the second jump of the halting problem. Indeed, we prove that # is exactly as random as ..."
Cited by 5 (2 self)
We consider the notion of algorithmic randomness relative to an oracle. We prove that the probability # that a program for infinite computations (a program that never halts) outputs a cofinite set is
random in the second jump of the halting problem. Indeed, we prove that # is exactly as random as the halting probability of a universal machine equipped with an oracle for the second jump of the
halting problem, in spite of the fact that # is defined without considering oracles.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1407431","timestamp":"2014-04-16T06:23:11Z","content_type":null,"content_length":"13640","record_id":"<urn:uuid:31f38234-ff71-4fa3-a764-30c9cf23e57f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hi to everyone
Real Member
Re: Hi to everyone
Something wrong? Something right too no doubt?
I see clearly now, the universe have the black dots, Thus I am on my way of inventing this remedy...
Re: Hi to everyone
hello guys i'm new here in this forum...but i'm glad that i found you guys..hopefully i will injoy here in this room..
Re: Hi to everyone
Hi ted15700;
Welcome to the forum!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hi to everyone
thank you bobbym for wellcoming me here..
Re: Hi to everyone
No problem, was my pleasure.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hi to everyone
bobbym wrote:
No problem, was my pleasure.
bob do you know how to make web application?
Re: Hi to everyone
Hi ted15700;
I am sorry Ted but I don't do that. Maybe MathsisFun can help you.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hi to everyone
I watched voyager all seven seasons in two weeks :-). It was amazing.
I never thought that it would be better than next generation. Next, I will watch deep space 9 than stargate buttttttttttttt .......one minute.......let me finish my exam first.
amazing forum.... people here are nicer than many math teachers.......
Re: Hi to everyone
Hi lakeheadca;
I liked voyager too.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hi to everyone
bobbym wrote:
Hi ted15700;
Welcome to the forum. I am sorry Ted but I don't do that. Maybe MathsisFun can help you.
it's ok bob,tnx for the info?
Re: Hi to everyone
No problem and your welcome!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hi to everyone
Hi edkaini753;
Welcome to the forum!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Full Member
Re: Hi to everyone
Hmm bobbym it seems your introduction seems to be one big introduction for everyone new here.
The best thing about life is you don't know what to expect
Re: Hi to everyone
Yea, that wasn't my idea at all. I have no idea why that is happening, except that mine is at the top?!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hi to everyone
Hi fseoer2010;
Welcome to the forum. Funny stories you say, must be my stuff.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hi to everyone
Hi fseoer2010,
Welcome to the forum! I hope you visit again! Thanks for the nice words!
Character is who you are when no one is looking.
Star Member
Re: Hi to everyone
Hi Bobby, a latent salutation to you!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
igloo myrtilles fourmis
Re: Hi to everyone
Hi John;
Jane was the first person to greet me and you are the last. Seems like a hundred years...
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hi to everyone
Hi paris12;
Welcome to the forum!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hi to everyone
Im new here too!
Thank you! Nice too meet with you!
well... little about me:
My name is Alex
I like extreme sport!
I have a dog and prefer big cars!
My hobby is collecting coins and play on guitar!
Re: Hi to everyone
Hi paris12 and san4os,
Welcome to the forum!
Character is who you are when no one is looking.
Re: Hi to everyone
Hi Alex;
Welcome to the forum!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Hi to everyone
Hi titamisu12;
Welcome to the forum!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Real Member
Re: Hi to everyone
Hi bobbym,
Welcome to the forum!
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Hi to everyone
Thanks, I really feel like I know the place.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=149127","timestamp":"2014-04-17T15:58:35Z","content_type":null,"content_length":"34129","record_id":"<urn:uuid:0237e08e-90f5-4c39-9a13-9d7901f858a4>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
|
We'll do this example twice, once with each sort of notation.
Using prime notation, take
u = x
v' = e^x
Then u' = 1 and v = e^x. We plug all this stuff into the formula:
∫ x e^x dx = x e^x – ∫ e^x dx
Since the integral of e^x is e^x + C, we have
∫ x e^x dx = x e^x – e^x + C
We write + C instead of – C since either way we're describing the same family of functions.
Using fraction notation, take
u = x
dv = e^xdx
Then du = dx and v = e^x. We plug all this stuff into the formula:
∫ x e^x dx = x e^x – ∫ e^x dx
Since the integral of e^x is e^x + C, we have
∫ x e^x dx = x e^x – e^x + C
Much to everyone's dismay, there's no set of absolute rules for determining which function should be u. There are some guidelines, though. The whole point of integration by parts is that if you don't
know how to integrate
∫ u v' dx
you can apply the integration-by-parts formula to get the expression
u v – ∫ u' v dx
and hopefully this second integral will be easier to integrate than the original integral.
If you pick u and v' incorrectly the first time, you'll probably realize it soon.
Sample Problem
If we try to integrate
∫ x e^x dx
by parts, and we choose
u = e^x
v' = x
then v = x^2/2 and u' = e^x. Putting everything into the formula, we get
∫ x e^x dx = (x^2/2) e^x – ∫ (x^2/2) e^x dx
The new integral is
∫ (x^2/2) e^x dx
which is worse than the original. This means we didn't pick u correctly!
Thankfully, if we chose poorly the first time, all it means is we have to start over.
When picking u and v', keep these guidelines in mind:
• u ' should be simpler than u (or at least not worse!)
• v should be simpler than v' (or at least not worse!)
• You need to be able to find v from v'
• The new integral should be easier to integrate than the original integral.
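If you have Python with SymPy available, you can double-check the worked example above in a couple of lines (this check is an addition here, not part of the original lesson):

from sympy import symbols, exp, integrate

x = symbols('x')
print(integrate(x * exp(x), x))   # x*e^x - e^x (SymPy may print it as (x - 1)*exp(x)); remember to add + C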
|
{"url":"http://www.shmoop.com/indefinite-integrals/indefinite-integral-parts-examples.html","timestamp":"2014-04-16T22:15:40Z","content_type":null,"content_length":"29125","record_id":"<urn:uuid:6714af44-dd11-48ac-abcb-446dd9a7b9ec>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maths in a minute: The missing pound
Here's a well-known conundrum: suppose I need to buy a book from a shop that costs £7. I haven't got any money, so I borrow £5 from my brother and £5 from my sister. I buy the book and get £3 change.
I give £1 back to each my brother and sister and I keep the remaining £1. I now owe each of them £4 and I have £1, giving £9 in total. But I borrowed £10. Where's the missing pound?
The answer is that the £10 are a red herring. There's no reason why the money I owe after the whole transaction and the money I still have should add up to £10. Rather, the money I owe minus the
change I got should come to the price of the book, that is £7. Giving a pound back to each my brother and sister just re-distributes the amounts. The money I still owe is reduced to £8 and the money
I still have to £1. Rather than having £10-£3=£7, we now have £8-£1=£7. Mystery solved!
|
{"url":"http://plus.maths.org/content/maths-minute-missing-pound","timestamp":"2014-04-17T09:58:53Z","content_type":null,"content_length":"18851","record_id":"<urn:uuid:1ee34f8d-a999-47a8-8a43-4b78ddbb1b4a>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Most Perplexing Mystery
May 6, 2013
The discrete log and the factoring problem
Antoine Joux is a crypto expert at Versailles Saint-Quentin-en-Yvelines University. He is also one of the crypto experts at CryptoExperts, having joined this startup company last November. His work
is featured in all three of the company’s current top news items, though the asymptotic breakthrough on the exponent of finding discrete logs in small-characteristic fields which we covered last
month is not among them. In its place are concrete results on two fields of medium characteristic (between a million and a billion) whose elements have bit-size 1,175 and 1,425. The news release on
this concludes (emphasis in original):
[We] recommend to all cryptographic users to stop using medium prime fields.
Today I want to talk about a mystery, which I find the most puzzling problem in all of complexity theory, but which Ken thinks is “only” a sign of youth of the field.
Not the ${\mathsf{P}=\mathsf{NP}}$ question, not what is the power of quantum computers, not graph isomorphism, nor any other number of great puzzles. The single strangest problem, in my opinion, is
the relationship between the discrete log problem and the integer factoring problem.
History shows that every improvement to one of these problems seems to yield rather quickly a corresponding improvement to the other. This works for quantum and classical algorithms alike—recall
that Peter Shor’s famous paper solved both problems at one stroke. As related here:
Historically, it has been the case that an algorithmic advance in either problem, factoring or discrete logs, was then applied to the other. This suggests that the degrees of difficulty of both
problems are closely linked, and that any breakthrough, either positive or negative, will affect both problems equally.
The Problems
Let me restate the two problems for comparison, where ${p,q}$ are primes.
• The discrete log problem is: Given ${y=a^{x} \bmod p}$, find the value of ${x}$.
• The factoring problem is: Given ${y=p\cdot q}$, find the value of ${p,q}$.
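Stated as brute-force code, just to fix notation (these naive loops are exponentially slow in the bit-length of the inputs, and are an illustration added here, not part of the post):

def discrete_log(y, a, p):
    # Find x with a^x = y (mod p) by exhaustive search.
    t = 1
    for x in range(p):
        if t == y % p:
            return x
        t = (t * a) % p

def factor(y):
    # Find the factors p, q of y = p*q by trial division.
    d = 2
    while y % d:
        d += 1
    return d, y // d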
What possible relationship is there between these problems? One concerns the structure of the finite field modulo some large prime ${p}$. The other is about the multiplicative structure of the integers.
There is no known reduction from one to the other, at least none known to me. This is the great mystery.
However, they have a common parent. Consider the finite ring of integers relatively prime to ${y = pq}$, with operations modulo ${y}$. Every element ${a}$ has an order ${x}$, meaning a least integer
${x > 0}$ giving ${a^x = 1}$ (mod ${y}$), and this is the period of the powers of ${a}$. In essence, ${x}$ is the discrete logarithm of ${1}$ in base ${a}$ in this ring. Shor’s algorithm gives a
high-enough-to-amplify probability that a randomly chosen ${a}$ has a findable period ${x}$ that helps produce a factor of ${y}$.
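Classically, the post-processing that Shor’s algorithm relies on looks like this (a toy sketch added here; the brute-force order-finding loop is exactly the part the quantum computer replaces):

from math import gcd
import random

def order(a, y):
    # Brute-force multiplicative order of a mod y -- the step done quantumly in Shor's algorithm.
    x, t = 1, a % y
    while t != 1:
        t = (t * a) % y
        x += 1
    return x

def factor_via_order(y, attempts=50):
    for _ in range(attempts):
        a = random.randrange(2, y)
        g = gcd(a, y)
        if g > 1:
            return g                    # lucky draw: a already shares a factor with y
        x = order(a, y)
        if x % 2 == 0:
            s = pow(a, x // 2, y)
            if s != y - 1:              # then a^(x/2) is a nontrivial square root of 1 mod y
                g = gcd(s - 1, y)
                if 1 < g < y:
                    return g
    return None

print(factor_via_order(3599))           # 3599 = 59 * 61, so this prints 59 or 61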
That the same idea works for discrete logarithms in fields indicates that this common parent problem captures at least some techniques that work for both factoring and discrete log. This MathOverflow
item points out the importance of this being in turn a case of the hidden-subgroup problem for Abelian groups. Ken feels that only recently is the area finding techniques that really separate the
problems from their parent, an indication of the field shedding its youth. This paper by Joux is an example, as we explain next.
The Breakthrough
The new wrinkle—a sign of maturity—is that the improvement in the running time depends on the characteristic of the field, and is greatest when it is fixed. Recall that in a finite field ${\mathbb{F}
_{p^k}}$, ${p}$ is the characteristic and ${k}$ is the degree of extension from the prime field ${\mathbb{F}_p}$. The prime field is the same as the field ${\mathbb{Z}_p}$ of integers modulo ${p}$,
but ${\mathbb{F}_{p^k}}$ should not be confused with the integers mod ${p^k}$, which do not form a field. The idea of characteristic is also a distinction from ${\mathbb{Z}_{pq}}$.
Running times of algorithms for factoring and discrete log have heretofore been expressed in terms of the “${L}$-function,” whose general form is defined by
$\displaystyle L_Q(\beta,c) = \exp((c+o(1))(\log Q)^\beta(\log\log Q)^{1-\beta}).$
Here we may have ${Q = p^k}$. The length of the input is essentially ${n = O(\log Q) = O(k\log p)}$. The backbone of this formula is ${\exp(cn^\beta)}$, while the ${(\log\log Q)^{1-\beta}}$ factor
acts as a counterweight. The main focus is the exponent ${\beta}$. If taken only thus far, the characteristic ${p}$ seems subservient to the degree ${k}$ in the time, since the formula takes the
logarithm of ${p}$.
The relation between ${c}$ and ${\beta}$ partly depends on how two phases of the algorithms are balanced, a “sieving phase” in which information about specific field elements is compiled, and a
“linear algebra phase” for the main computation. Once other parameters are chosen to effect the balance, the ${c}$ part is often bounded and ignorable, yielding the simpler designation ${L(\beta)}$
for the formula. The game plan is to improve the axis of the ${c}$-vs.-${\beta}$ tradeoff, to get lower ${\beta}$.
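To get a rough feel for what lowering ${\beta}$ means, one can evaluate just the backbone of the ${L}$-function with the ${c + o(1)}$ part suppressed, so the numbers below only illustrate growth rates rather than predict runtimes (this comparison is an addition here):

from math import exp, log

def L(q_bits, beta):
    # exp((log Q)^beta * (log log Q)^(1 - beta)), ignoring the constant c
    logq = q_bits * log(2)
    return exp(logq ** beta * log(logq) ** (1 - beta))

for bits in (1024, 2048):
    print(bits, f"L(1/3) ~ {L(bits, 1/3):.2e}", f"L(1/4) ~ {L(bits, 1/4):.2e}")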
The breakthrough by Joux is to do this when ${p}$ is bounded. This is important while bootstrapping the algorithm through extension fields ${\mathbb{F}_{q^k}}$ and ${\mathbb{F}_{q^{2k}}}$, where ${q}
$ itself is a power ${p^\ell}$ making ${p^\ell \approx k}$. The result is:
Theorem: For bounded ${p}$, discrete logarithms in fields of characteristic ${p}$ can be computed in time ${L(1/4 + o(1))}$.
His paper itself emphasizes two new ideas, which we express in terms we’ve described earlier on this blog.
Many “sieving” algorithms for factoring and discrete log begin with what strikes us as the opposite idea to the famous Sieve of Eratosthenes. That sieve works to find primes by crossing off numbers
with factors. The newer sieves have the opposite goal: they want to find integers with lots of small factors.
A big reason lots of small factors are useful is that they help build Chinese Remainder representations for large numbers while keeping their own arithmetic simple. More generally put, the small
factors help generate a lot of congruences—and the large numbers with those factors serve as common zero-points for those congruences. As a general strategy point, the more small factors with known
multiples, the better. Here is where we mention one idea from our old post that is already presumed by Joux:
1. Integers ${\longrightarrow}$ polynomials: As we wrote before, integers share many properties with polynomials over the integers, but polynomials are usually much easier to handle. For example,
the analogue of the famous Goldbach conjecture is solved for polynomials but is still open for integers: Every nontrivial polynomial over the integers can be written as a sum of two irreducible
polynomials. For another example, the deterministic primality algorithm of Manindra Agrawal, Neeraj Kayal, and Nitin Saxena begins by introducing polynomials over one variable.
In Joux’s case, polynomials are used to construct extensions of finite fields to begin with. Joux operates further on these extension polynomials. Adding one or a few variables creates more ways
to define small factors. Thus using polynomials already constitutes a way to amplify, but Joux found a new way to carry this further.
2. Amplifying Linearity: If ${g(x)}$ is a polynomial known to split into many small factors, then for any field constant ${a}$, the linearly transformed polynomial ${g(ax)}$ also has this property.
This was previously known and exploited as far as it can go. Joux noticed that certain ostensibly-nonlinear transformations could wind up having the same effect. Well, a transformation of the form
$\displaystyle h(x) = \frac{ax+b}{cx+d}.$
is sometimes called a “fractional linear transformation,” and in complex analysis is known as a Möbius transformation. Now ${g(h(x))}$ is not a polynomial, but it becomes a polynomial ${g'}$
again upon multiplication by ${(cx+d)^D}$ where ${D}$ is the degree of ${g}$.
Joux proves that if you take a polynomial ${g}$ that splits into small factors over the base field ${\mathbb{F}_q}$, and take any ${a,b,c,d}$ in an extension field ${\mathbb{F}_{q^k}}$ such that
${ad \neq bc}$, then the resulting polynomial ${g'}$ over ${\mathbb{F}_{q^k}}$ likewise splits into small factors over ${\mathbb{F}_{q^k}}$ (though irreducibility and degrees may not be preserved
from the corresponding factors of ${g}$).
The amplifying advantage is that whereas the ${g(ax)}$ transformation gives at most the size of the field number of new polynomials, the ${g'}$ transform gives approximately the cube of that
size—after eliminating redundancies and trivialities that happen when ${a,b,c,d}$ are all in the base field ${\mathbb{F}_q}$, for example. He also employs transforms by ratios of two quadratic polynomials.
3. Seeding More than Sieving: The cubic advantage from the last step is so great that the algorithm can dispense with sieving steps needed to build a base of small factors via search. Instead we can
“seed” the base with some polynomials known to have many small factors in advance. Joux starts with the simple case of ${x^q - x}$ splitting into linear factors over ${\mathbb{F}_q}$. A further
fact of particular importance is:
Lemma: A polynomial ${x^{n+1} + a x^{n} + b x + c}$ splits (into linear factors) over ${\mathbb{F}_p}$ if and only if
$\displaystyle x^{(n+1)} + x^{n} + b a^{-n} x + c a^{-n-1}$
also splits.
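A quick brute-force sanity check of this lemma over a small prime field is easy to run; this is my own check using SymPy's factoring over ${\mathbb{F}_p}$, not something from Joux's paper:

from sympy import symbols, Poly, GF

x = symbols('x')
p, n = 7, 3

def splits(coeffs):
    # True if the monic polynomial with these coefficients factors into linear pieces over GF(p).
    f = Poly(coeffs, x, domain=GF(p))
    return all(g.degree() == 1 for g, _ in f.factor_list()[1])

ok = True
for a in range(1, p):
    for b in range(p):
        for c in range(p):
            ainv = pow(a, -1, p)
            lhs = splits([1, a, 0, b, c])                               # x^4 + a*x^3 + b*x + c
            rhs = splits([1, 1, 0, b * pow(ainv, n, p) % p,
                          c * pow(ainv, n + 1, p) % p])                 # x^4 + x^3 + b*a^-n*x + c*a^-(n+1)
            ok = ok and (lhs == rhs)
print(ok)   # True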
The Future
When I saw Dan Boneh at the RSA conference, as I first related here, he felt very confident that Joux’s work would soon lead to an improvement in factoring. We will see if this happens, but his feeling
needs to be taken quite seriously since he is an expert in computational number theory. I mentioned this to Lance Fortnow the other day and he immediately offered to make a bet with me. We have yet
to work out the details, but I believe we will firm up the bet shortly.
Open Problems
Would you like to take sides on the bet? Will factoring get improved too? What about the mystery? Is there even some heuristic that explains why these problems are related, in a more detailed way
than their common parentage?
1. May 6, 2013 7:50 pm
Just a thought, more loose than lucid most likely —
There is another kind of “discrete logarithm” that I used to call the “vector logarithm” of a positive integer $n.$ Consider the primes factorization of $n$ and write the exponents of the primes
in order as a coordinate tuple $(e_1, \ldots, e_k, 0, 0, 0, \ldots),$ where $e_j = 0$ for any prime $p_j$ not dividing $n$ and where the exponents are all $0$ after some finite point. Then
multiplying two positive integers maps to adding the corresponding vectors.
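(An aside on the comment above, not part of it: the idea is easy to play with, for example using sympy's factorint, which is assumed in this small sketch.)

```python
# "Vector logarithm" sketch: the exponent vector of a prime factorization.
# Multiplying integers corresponds to adding these vectors. Uses sympy.factorint.
from sympy import factorint

def vlog(n, primes):
    exps = factorint(n)
    return [exps.get(p, 0) for p in primes]

primes = [2, 3, 5, 7]
m, n = 12, 45                          # 12 = 2^2 * 3, 45 = 3^2 * 5
v_m, v_n = vlog(m, primes), vlog(n, primes)

assert vlog(m * n, primes) == [a + b for a, b in zip(v_m, v_n)]
print(v_m, v_n, vlog(m * n, primes))   # [2, 1, 0, 0] [0, 2, 1, 0] [2, 3, 1, 0]
```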
2. May 7, 2013 1:16 pm
have always wondered about this vaguely myself somewhat, the discrete log/factoring link (but of course not on this same level of sophistication!) wondering if there is some [complexity theory?]
problem equivalent yet less complex than the hidden subgroup problem that is rather involved to state, and moreover while there are remarkable/at other times exotic individual cases [eg
barringtons thm, or the symmetric group] — its never seemed to me that group theory is highly central or fundamental to TCS. this seems somewhat surprising, given its paramount importance in
theoretical math, that there is not some deeper link. maybe a missing important bridge thm? have long suspected there could maybe be a link to automated theorem proving, eg/ie maybe a group
structure in automated proofs, etc., or maybe some key link to certain types of languages/grammars, etc.
□ May 7, 2013 9:08 pm
Groups are inversely related to complexity. Any time you have symmetry in a problem space it reduces the complexity of the problem — finding one solution in an orbit gives you all the rest by
way of relatively simple transformations.
☆ May 8, 2013 2:54 pm
And yet nobody seems to work on or care to read an application of group theory to NP=?P issue.
□ May 8, 2013 11:02 am
By the way, the inverse relationship between symmetry and diversity — that we see, for example, in the lattice-inverting map of a Galois correspondence — is a variation on an old theme of
logic called the “inverse proportionality of comprehension and extension”.
C.S. Peirce, in his probings of the “laws of information”, found this principle to be a special case of a more general formula, saying that the reciprocal relation holds only when the
quantity of information is constant.
Here is a reading —
□ May 13, 2013 2:00 pm
Here are my ongoing if slightly sporadic notes on Peirce’s information theory and its relation to the logic of science —
3. May 7, 2013 7:25 pm
In the post “Factoring Could be Easy” of this blog an idea is described, which could more or less be stated as:
If n! mod m could be calculated in polynomial time for integers n,m then Factoring is in P.
Isn’t this a nice example of a technique which would work essentially for both factoring and discrete log?
As “equally” a fast way of testing if
(a^1 - y)*(a^2 - y)*...*(a^n - y) = 0 mod p
would let you calculate y = a^x mod p efficiently.
4. May 7, 2013 10:09 pm
dude… huh?! wow!! thx for the reaction/musing, but…. have no idea where you got this idea. any refs? it strikes me unfortunately, as not “not even wrong” but as spectacularly wrong! there is an
extraordinary, staggering inherent/fundamental complexity in group theory eg given that the classification of the finite simple groups took something like ~1Century, dozens of mathematicians and
1000s of pages! also the monster group is an extraordinary object surely with some deeper meaning/link in TCS…. theres gotta be something to all this…. but it would seem in TCS we’re still
“flying blind” in a big way….
□ May 8, 2013 12:33 pm
oops fyi threaded this wrong! its in reply to this comment asserting “Groups are inversely related to complexity.”….
□ May 8, 2013 1:28 pm
Well, I do love a good spectacle, but here I am just using the word complexity to describe what we might otherwise call a measure of diversity or variety in a problem space, the extent to
which we have to treat problem instances as isolated cases without any class or rule that relates them to others.
5. May 8, 2013 2:57 pm
” … Lance Fortnow the other day and he immediately offered to make a bet …”.
I wonder how much of bet Lance would be willing to make on NP[FP] = #P.
6. May 9, 2013 2:49 am
If $n!$ in $log^{c}(n)$ is to factoring $N: N > n > \sqrt{n}$, then $X$ is discrete logarithm over $\mathbb{F}_{p}: p$ satisfies some property of $X$. What is $X$?
7. May 9, 2013 7:54 pm
Logarithms were invented to simplify multiplications, so it seems just natural that they be also involved in factoring, the inverse of multiplication.
However, I’d be interested to know if the aforementioned bet went as far as the polynomiality of factoring…
8. May 12, 2013 10:34 pm
In ‘Applied Cryptography’ p.262, Bruce Schneier wrote: “Computing discrete logarithms is closely related to factoring. If you can solve the discrete logarithm problem, then you can factor. (The
converse has never been proven to be true.)”
□ May 16, 2013 8:28 am
The base is a generator. So the log is the totient of the modulus. Knowing that the latter is the product of two primes p and q, the log will then be (p-1)(q-1). So yes, you can factor with
the discrete log. Got this solution while waking up. In fact, I’m typing this from my bed in California. I hope it’s not the kind of nonsense I can come up with in the morning. At least, I’ll
have the feeling of having solved a mystery for a couple of hours. Not sure anyone reads these old threads anyway.
☆ May 16, 2013 8:31 am
I mean, the log of 1.
☆ May 16, 2013 9:29 am
Well, that’s if we’re given the generator. It is supplied in the discrete log problem, but not in the factoring problem. So it all depends on the complexity of finding a generator for the
modulus’ multiplicative group. I have no idea if it’s in P. I guess I’ll have to read Schneier’s bibliography.
☆ May 16, 2013 3:26 pm
Well it would be nice if we had uniqueness of the log of x. We don’t, but I guess the solution Schneier alluded to could be along these lines. What’s clear to me however is that there’s
no abyssmal disconnect between the two problems; far from it.
9. June 19, 2013 1:42 pm
Further progress on the discrete log:
“A quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic”
Lecture 21: Eigenvalues and eigenvectors
OK. So this is the first lecture on eigenvalues and eigenvectors, and that's a big subject that will take up most of the rest of the course.
It's, again, matrices are square and we're looking now for some special numbers, the eigenvalues, and some special vectors, the eigenvectors. And so this lecture is mostly about what are these
numbers, and then the other lectures are about how do we use them, why do we want them.
OK, so what's an eigenvector? Maybe I'll start with eigenvector. What's an eigenvector? So I have a matrix A. OK.
What does a matrix do? It acts on vectors.
It multiplies vectors x. So the way that matrix acts is in goes a vector x and out comes a vector Ax. It's like a function. With a function in calculus, in goes a number x, out comes f(x).
Here in linear algebra we're up in more dimensions.
In goes a vector x, out comes a vector Ax.
And the vectors I'm specially interested in are the ones the come out in the same direction that they went in. That won't be typical. Most vectors, Ax is in -- points in some different direction.
But there are certain vectors where Ax comes out parallel to x. And those are the eigenvectors.
So Ax parallel to x. Those are the eigenvectors.
And what do I mean by parallel? Oh, much easier to just state it in an equation. Ax is some multiple -- and everybody calls that multiple lambda -- of x.
That's our big equation. We look for special vectors -- and remember most vectors won't be eigenvectors -- that -- for which Ax is in the same direction as x, and by same direction I allow it to be
the very opposite direction, I allow lambda to be negative or zero. Well, I guess we've met the eigenvectors that have eigenvalue zero. Those are in the same direction, but they're -- in a kind of
very special way. So this -- the eigenvector x. Lambda, whatever this multiplying factor is, whether it's six or minus six or zero or even some imaginary number, that's the eigenvalue.
So there's the eigenvalue, there's the eigenvector.
Let's just take a second on eigenvalue zero.
From the point of view of eigenvalues, that's no special deal. That's, we have an eigenvector.
If the eigenvalue happened to be zero, that would mean that Ax was zero x, in other words zero. So what would x, where would we look for -- what are the x-s? What are the eigenvectors with eigenvalue
zero? They're the guys in the null space, Ax equals zero.
So if our matrix is singular, let me write this down.
If, if A is singular, then that -- what does singular mean? It means that it takes some vector x into zero. Some non-zero vector, that's why -- will be the eigenvector into zero.
Then lambda equals zero is an eigenvalue.
But we're interested in all eigenvalues now, lambda equals zero is not, like, so special anymore.
OK. So the question is, how do we find these x-s and lambdas? And notice -- we don't have an equation Ax equal B anymore.
I can't use elimination. I've got two unknowns, and in fact they're multiplied together.
Lambda and x are both unknowns here.
So, we need to, we need a good idea of how to find them. But before I, before I do that, and that's where determinant will come in, can I just give you some matrices? Like here you go.
Take the matrix, a projection matrix.
OK. So suppose we have a plane and our matrix P is -- what I've called A, now I'm going to call it P for the moment, because it's -- I'm thinking OK, let's look at a projection matrix.
What are the eigenvalues of a projection matrix? So that's my question. What are the x-s, the eigenvectors, and the lambdas, the eigenvalues, for -- and now let me say a projection matrix. My, my
point is that we -- before we get into determinants and, and formulas and all that stuff, let's take some matrices where we know what they do. We know that if we take a vector b, what this matrix
does is it projects it down to Pb.
So is b an eigenvector in, in that picture? Is that vector b an eigenvector? No. Not so, so b is not an eigenvector c- because Pb, its projection, is in a different direction. So now tell me what
vectors are eigenvectors of P? What vectors do get projected in the same direction that they start? So, so answer, tell me some x-s.
In this picture, where could I start with a vector b or x, do its projection, and end up in the same direction? Well, that would happen if the vector was right in that plane already. If the vector x
was -- so let the vector x -- so any vector, any x in the plane will be an eigenvector. And what will happen when I multiply by P, when I project a vector x -- I called it b here, because this is our
familiar picture, but now I'm going to say that b was no good for, for the, for our purposes. I'm interested in a vector x that's actually in the plane, and I project it, and what do I get back? x,
of course.
Doesn't move. So any x in the plane is unchanged by P, and what's that telling me? That's telling me that x is an eigenvector, and it's also telling me what's the eigenvalue, which is -- just compare
it with that. The eigenvalue, the multiplier, is just one. Good.
So we have actually a whole plane of eigenvectors.
Now I ask, are there any other eigenvectors? And I expect the answer to be yes, because I would like to get three, if I'm in three dimensions, I would like to hope for three independent eigenvectors,
two of them in the plane and one not in the plane.
OK. So this guy b that I drew there was not any good. What's the right eigenvector that's not in the plane? The, the good one is the one that's perpendicular to the plane. There's an, another good x,
because what's the projection? So these are eigenvectors.
Another guy here would be another eigenvector.
But now here is another one. Any x that's perpendicular to the plane, what's Px for that, for that, vector? What's the projection of this guy perpendicular to the plane? It is zero, of course.
So -- there's the null space. Px and n- for those guys are zero, or zero x if we like, and the eigenvalue is zero.
So my answer to the question is, what are the eigenvalues for a projection matrix? There they are. One and zero.
OK. We know projection matrices.
We can write them down as that A, A transpose, A inverse, A transpose thing, but without doing that from the picture we could see what are the eigenvectors.
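(A quick numerical aside, not part of the lecture: the projection eigenvalues can be checked with numpy, which is assumed here; the particular plane is just an illustrative choice.)

```python
# A projection onto a plane in R^3 has eigenvalues 1, 1, 0.
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # columns span a plane in R^3
P = A @ np.linalg.inv(A.T @ A) @ A.T  # projection matrix onto that plane

vals, vecs = np.linalg.eig(P)
print(np.round(np.sort(vals.real), 6))   # approximately [0. 1. 1.]
```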
OK. Are there other matrices? Let me take a second example. How about a permutation matrix? What about the matrix, I'll call it A now.
Zero one, one zero. Can you tell me a vector x -- see, we'll have a system soon enough, so I, I would like to just do these e- these couple of examples, just to see the picture before we, before we
let it all, go into a system where that, matrix isn't anything special. Because it is special.
And what, so what vector could I multiply by and end up in the same direction? Can you spot an eigenvector for this guy? That's a matrix that permutes x1 and x2, right? It switches the two components
of x.
How could the vector with its x2 x1, with -- permuted turn out to be a multiple of x1 x2, the vector we start with? Can you tell me an eigenvector here for this guy? x equal -- what is -- actually,
can you tell me one vector that has eigenvalue one? So what, what vector would have eigenvalue one, so that if I, if I permute it it doesn't change? There, that could be one one, thanks.
One one. OK, take that vector one one.
That will be an eigenvector, because if I do Ax I get one one. So that's the eigenvalue is one. Great.
That's one eigenvalue. But I have here a two by two matrix, and I figure there's going to be a second eigenvalue.
And eigenvector. Now, what about that? What's a vector, OK, maybe we can just, like, guess it. A vector that the other -- actually, this one that I'm thinking of is going to be a vector that has
eigenvalue minus one.
That's going to be my other eigenvalue for this matrix.
It's a -- notice the nice positive or not negative matrix, but an eigenvalue is going to come out negative.
And can you guess, spot the x that will work for that? So I want a, a vector. When I multiply by A, which reverses the two components, I want the thing to come out minus the original. So what shall I
send in in that case? If I send in negative one one.
Then when I apply A, I get I do that multiplication, and I get one negative one, so it reversed sign.
So Ax is -x. Lambda is minus one.
Ax -- so Ax was x there and Ax is minus x here.
Can I just mention, like, jump ahead, and point out a special little fact about eigenvalues.
n by n matrices will have n eigenvalues.
And it's not like -- suppose n is three or four or more.
It's not so easy to find them. We'd have a third degree or a fourth degree or an n-th degree equation.
But here's one nice fact. There, there's one pleasant fact. That the sum of the eigenvalues equals the sum down the diagonal.
That's called the trace, and I put that in the lecture content specifically. So this is a neat fact, the fact that the sum of the lambdas, add up the lambdas, equals the sum -- what would you like me to, shall I write that down? What I want to say in words is the sum down the diagonal of A.
Shall I write a11+a22+...+ ann. That's add up the diagonal entries. In this example, it's zero. In other words, once I found this eigenvalue of one, I knew the other one had to be minus one in this
two by two case, because in the two by two case, which is a good one to, to, play with, the trace tells you right away what the other eigenvalue is. So if I tell you one eigenvalue, you can tell me
the other one.
We'll, we'll have that -- we'll, we'll see that again. OK.
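(Another aside with numpy assumed: a numerical check of the permutation example and of the trace fact.)

```python
# Eigenvalues of the 2x2 permutation matrix are 1 and -1, with eigenvectors
# proportional to (1,1) and (-1,1); the eigenvalues sum to the trace.
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
vals, vecs = np.linalg.eig(A)
print(vals)                                  # [ 1. -1.]
print(vecs)                                  # columns ~ (1,1) and (-1,1), normalized
print(np.isclose(vals.sum(), np.trace(A)))   # True: eigenvalues sum to the trace
```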
Now can I -- I could give more examples, but maybe it's time to face the, the equation, Ax equal lambda x, and figure how are we going to find x and lambda.
OK. So this, so the question now is how to find eigenvalues and eigenvectors.
How to solve, how to solve Ax equal lambda x when we've got two unknowns both in the equation. OK. Here's the trick. Simple idea. Bring this onto the same side. Rewrite.
Bring this over as A minus lambda times the identity x equals zero. Right? I have Ax minus lambda x, so I brought that over and I've got zero left on the, on the right-hand side.
OK. I don't know lambda and I don't know x, but I do know something here.
What I know is if I, if I'm going to be able to solve this thing, for some x that's not the zero vector, that's not, that's a useless eigenvector, doesn't count. What I know now is that this matrix
must be what? If I'm going to be -- if there is an x -- I don't -- right now I don't know what it is.
I'm going to find lambda first, actually.
And -- but if there is an x, it tells me that this matrix, this special combination, which is like the matrix A with lambda -- shifted by lambda, shifted by lambda I, that it has to be singular. This
matrix must be singular, otherwise the only x would be the zero x, and the zero x doesn't count. OK.
So this is singular. And what do I now know about singular matrices? Their determinant is zero.
So I've -- so from the fact that that has to be singular, I know that the determinant of A minus lambda I has to be zero.
And that, now I've got x out of it.
I've got an equation for lambda, that the key equation -- it's called the characteristic equation or the eigenvalue equation. And that -- in other words, I'm now in a position to find lambda first.
So -- the idea will be to find lambda first.
And actually, I won't find one lambda, I'll find N different lambdas. Well, n lambdas, maybe not n different ones.
A lambda could be repeated. A repeated lambda is the source of all trouble in 18.06. So, let's hope for the moment that they're not repeated. There, there they were different, right? One and minus
one in that, in that, for that permutation. OK.
So and after I found this lambda, can I just look ahead? How I going to find x? After I have found this lambda, the lambda being this -- one of the numbers that makes this matrix singular. Then of
course finding x is just by elimination. Right? It's just -- now I've got a singular matrix, I'm looking for the null space. We're experts at finding the null space. You know, you do elimination, you
identify the, the, the pivot columns and so on, you're -- and, give values to the free variables.
Probably there'll only be one free variable. We'll give it the value one, like there.
And we find the other variable. OK.
So let's -- find the x second will be a doable job. Let's go, let's look at the first job of finding lambda. OK. Can I take another example? And let's, let's work that one out.
OK. So let me take the example, say, let me make it easy. Three three one and one. So I've made it easy.
I've made it two by two. I've made it symmetric.
And I even made it constant down the diagonal.
So that -- so the more, like, special properties I stick into the matrix, the more special outcome I get for the eigenvalues. For example, this symmetric matrix, I know that it'll come out with real
eigenvalues. The eigenvalues will turn out to be nice real numbers. And up in our previous example, that was a symmetric matrix. Actually, while we're at it, that was a symmetric matrix. Its
eigenvalues were nice real numbers, one and minus one. And do you notice anything about its eigenvectors? Anything particular about those two vectors, one one and minus one one? They just happen to
be -- no, I can't say they just happen to be, because that's the whole point, is that they had to be -- what? What are they? They're perpendicular. The vector, when I -- if I see a vector one one and
a one -- and a minus one one, my mind immediately takes that dot product.
It's zero. Those vectors are perpendicular. That'll happen here too.
Well, let's find the eigenvalues.
Actually, oh, my example's too easy.
My example is too easy. Let me tell you in advance what's going to happen. May I? Or shall I do the determinant of A minus lambda, and then point out at the end? Will you remind me at the -- after
I've found the eigenvalues to say why they were -- why they were easy from the, from the example we did? OK, let's do the job here. Let's compute determinant of A minus lambda I. So that's the determinant of three minus lambda, one, one, three minus lambda.
And what's, what is this thing? It's the matrix A with lambda removed from the diagonal.
So the diagonal matrix is shifted, and then I'm taking the determinant. OK.
So I multiply this out. So what is that determinant? Do you notice, I didn't take lambda away from all the entries. It's lambda I, so it's lambda along the diagonal.
So I get three minus lambda squared and then minus one, right? And I want that to be zero.
Well, I'm going to simplify it. And what will I get? So if I multiply this out, I get lambda squared minus six lambda plus what? Plus eight.
And that I'm going to set to zero.
And I'm going to solve it.
So and it's, it's a quadratic equation.
I can use factorization, I can use the quadratic formula. I'll get two lambdas.
Before I do it, tell me what's that number six that's showing up in this equation? It's the trace. That number six is three plus three. And while we're at it, what's the number eight that's showing
up in this equation? It's the determinant. That our matrix has determinant eight. So in a two by two case, it's really nice. It's lambda squared minus the trace times lambda -- the trace is the
linear coefficient -- and plus the determinant, the constant term.
OK. So let's -- can, can we find the roots? I guess the easy way is to factor that as something times something.
If we couldn't factor it, then we'd have to use the old b^2-4ac formula, but I, I think we can factor that into lambda minus what times lambda minus what? Can you do that factorization? Four and two?
Lambda minus four times lambda minus two.
So the, the eigenvalues are four and two. So the eigenvalues are -- one eigenvalue, lambda one, let's say, is four. Lambda two, the other eigenvalue, is two. The eigenvalues are four and two. And
then I can go for the eigenvectors. You see I got the eigenvalues first. Four and two.
Now for the eigenvectors. So what are the eigenvectors? They're these guys in the null space when I take away, when I make the matrix singular by taking four I or two I away. So we're -- we got to do
those separately. I'll -- let me find the eigenvector for four first. So I'll subtract four, so A minus four I is -- so taking four away will put minus ones there. And what's the point about that
matrix? If four is an eigenvalue, then A minus four I had better be a what kind of matrix? Singular. If that matrix isn't singular, the four wasn't correct. But we're OK, that matrix is singular. And
what's the x now? The x is in the null space.
So what's the x1 that goes with, with the lambda one? So that A -- so this is -- now I'm doing A x1 is lambda one x1.
So I took A minus lambda one I, that's this matrix, and now I'm looking for the x1 in its null space, and who is he? What's the vector x in the null space? Of course it's one one.
So that's the eigenvector that goes with that eigenvalue.
Now how about the eigenvector that goes with the other eigenvalue? Can I do that with, with erasing? I take A minus two I.
So now I take two away from the diagonal, and that leaves me with a one and a one. So A minus two I has again produced a singular matrix, as it had to.
I'm looking for the null space of that guy.
What vector is in its null space? Well, of course, a whole line of vectors.
So when I say the eigenvector, I'm not speaking correctly.
There's a whole line of eigenvectors, and you just -- I just want a basis.
And for a line I just want one vector.
But -- You could, you're -- there's some freedom in choosing that one, but choose a reasonable one. What's a vector in the null space of that? Well, the natural vector to pick as the eigenvector
with, with lambda two is minus one one.
If I did elimination on that vector and set that, the free variable to be one, I would get minus one and get that eigenvector. So you see then that I've got eigenvector, eigenvalue, eigenvector,
eigenvalue for this, for this matrix? And now comes that thing that I wanted to be reminded of.
What's the relation between that problem and -- let me write just above what we found here.
A equals zero one one zero, that had eigenvalue one and minus one and eigenvectors one one and eigenvector minus one one. And what do you notice? What's -- how is this matrix related to that matrix?
How are those two matrices related? Well, one is just three I more than the other one, right? I just took that matrix and I -- I took this matrix and I added three I.
So my question is, what happened to the eigenvalues and what happened to the eigenvectors? That's the, that's like the question we keep asking now in this chapter.
If I, if I do something to the matrix, what happens if I -- or I know something about the matrix, what's the what's the conclusion for its eigenvectors and eigenvalues? Because -- those eigenvalues
and eigenvectors are going to tell us important information about the matrix. And here what are we seeing? What's happening to these eigenvalues, one and minus one, when I add three I? It just added
three to the eigenvalues.
I got four and two, three more than one and minus one. What happened to the eigenvectors? Nothing at all.
One one is -- and minus -- and one -- and minus one one are -- is still the eigenvectors.
In other words, simple but useful observation.
If I add three I to a matrix, its eigenvectors don't change and its eigenvalues are three bigger. Let's, let's just see why. Let me keep all this on the same board. Suppose I have a matrix A, and Ax
equal lambda x. Now I add three I to that matrix. Do you see what happens? If Ax equals lambda x, then for this, this other new matrix, I just have an Ax, which is lambda x, and I have a three x from the three I, so it's just -- I mean, it's just sitting there.
Lambda plus three x. So if they, if this had eigenvalue lambda, this has eigenvalue lambda plus three. And x, the eigenvector, is the same x for both matrices. OK. So that's, great.
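(Numerical aside, numpy assumed: this verifies both the four-and-two example and the shift-by-three-I observation.)

```python
# [[3,1],[1,3]] is the permutation matrix plus 3I: the eigenvalues shift from
# 1, -1 to 4, 2, while the eigenvectors (1,1) and (-1,1) stay the same.
import numpy as np

B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
A = B + 3 * np.eye(2)                    # [[3,1],[1,3]]

vals_B, vecs_B = np.linalg.eig(B)
vals_A, vecs_A = np.linalg.eig(A)
print(vals_B, vals_A)                    # [1,-1] and [4,2]
print(np.allclose(np.sort(vals_A), np.sort(vals_B) + 3))   # True
print(vecs_B, vecs_A)                    # same columns, up to ordering and sign
```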
Of course, it's special. We got the new matrix by adding three I. Suppose I had added another matrix. Suppose I know the eigenvalues and eigenvectors of A. So this is, this, this little board here is
going to be not so great.
Suppose I have a matrix A and it has an eigenvector x with an eigenvalue lambda. And now I add on some other matrix. So, so what I'm asking you is, if you know the eigenvalues of A and you know the
eigenvalues of B, let me say suppose B -- so this is if -- let me put an if here. If Ax equals lambda x, fine, and B has, eigenvalues, has eigenvalues -- what shall we call them? Alpha, alpha one and
alpha -- let's say -- I'll use alpha for the eigenvalues of B for no good reason.
What a- you see what I'm going to ask is, how about A plus B? Let me, let me give you the, let me give you, what you might think first. OK.
If Ax equals lambda x and if B has an eigenvalue alpha, then I allowed to say -- what's the matter with this argument? It's wrong. What I'm going to write up is wrong. I'm going to say Bx is alpha x.
Add those up, and you get A plus B x equals lambda plus alpha x. So you would think that if you know the eigenvalues of A and you knew the eigenvalues of B, then if you added you would know the
eigenvalues of A plus B. But that's false.
A plus B -- well, when B was three I, that worked great. But this is not so great.
And what's the matter with that argument there? We have no reason to believe that x is also an eigenvector of B. B has some eigenvalues, but it's got some different eigenvectors normally. It's a
different matrix.
I don't know anything special. If I don't know anything special, then as far as I know, it's got some different eigenvector y, and when I add I get just rubbish. I mean, I get -- I can add, but I
don't learn anything. So not so great is A plus B.
Or A times B. Normally the eigenvalues of A plus B or A times B are not eigenvalues of A plus eigenvalues of B. Ei- eigenvalues are not, like, linear. Or -- and they don't multiply.
Because, eigenvectors are usually different and, and there's just no way to find out what A plus B does to affect it. OK.
So that's, like, a caution.
Don't, if B is a multiple of the identity, great, but if B is some general matrix, then for A plus B you've got to find -- you've got to solve the eigenvalue problem. OK. Now I want to do another
example that brings out a, another point about eigenvalues. Let me make this example a rotation matrix. OK. So here's another example. So a rotate -- oh, I'd better call it Q. I often use Q for, for,
rotations because those are the, like, very important examples of orthogonal matrices.
Let me make it a ninety degree rotation.
So -- my matrix is going to be the one that rotates every vector by ninety degrees. So do you remember that matrix? It's the cosine of ninety degrees, which is zero, the sine of ninety degrees, which
is one, minus the sine of ninety, the cosine of ninety. So that matrix deserves the letter Q. It's an orthogonal matrix, very, very orthogonal matrix. Now I'm interested in its eigenvalues and
eigenvectors. Two by two, it can't be that tough. We know that the eigenvalues add to zero. Actually, we know something already here. The eigen- what's the sum of the two eigenvalues? Just tell me
what I just said. Zero, right.
From that trace business. The sum of the eigenvalues is, is going to come out zero. And the product of the eigenvalues, did I tell you about the determinant being the product of the eigenvalues? No.
But that's a good thing to know. We pointed out how that eight appeared in the, in the quadratic equation.
So let me just say this. The trace is zero plus zero, obviously. And that's the sum, that's lambda one plus lambda two.
Now the other neat fact is that the determinant, what's the determinant of that matrix? One. And that is lambda one times lambda two.
In our example, the one we worked out, we -- the eigenvalues came out four and two. Their product was eight. That -- it had to be eight, because we factored into lambda minus four times lambda minus
two. That gave us the constant term eight. And that was the determinant.
OK. What I'm leading up to with this example is that something's going to go wrong.
Something goes wrong for rotation because what vector can come out parallel to itself after a rotation? If this matrix rotates every vector by ninety degrees, what could be an eigenvector? Do you see
we're, we're, we're going to have trouble.
eigenvectors are -- Well. Our, our picture of eigenvectors, of, of coming out in the same direction that they went in, there won't be any. And with, and with eigenvalues we're going to have trouble.
From these equations. Let's see.
Why I expecting trouble? The, the first equation says that the eigenvalues add to zero.
So there's a plus and a minus. But then the second equation says that the product is plus one.
We're in trouble. But there's a way out.
So how -- let's do the usual stuff.
Look at determinant of Q minus lambda I.
So I'll just follow the rules, take the determinant, subtract lambda from the diagonal, where I had zeros, the rest is the same. Rest of Q is just copied.
Compute that determinant. OK, so what does that determinant equal? Lambda squared minus minus one, which is lambda squared plus one. There's my equation. My equation for the eigenvalues is lambda squared plus one
equals zero.
What are the eigenvalues lambda one and lambda two? They're I, whatever that is, and minus it, right.
Those are the right numbers. They fail to be real numbers even though the matrix was perfectly real. So this can happen.
Complex numbers are going to -- have to enter eighteen oh six at this moment. Boo, right.
All right. If I just choose good matrices that have real eigenvalues, we can postpone that evil day, but just so you see -- so I'll try to do that. But it's out there.
That a matrix, a perfectly real matrix could have, give a perfectly innocent-looking quadratic thing, but the roots of that quadratic can be complex numbers. And of course you -- everybody knows
that they're -- what, what do you know about the complex numbers? So, so now -- Let's just spend one more minute on this bad possibility of complex numbers.
We do know a little information about the, the two complex numbers. They're complex conjugates of each other. If, if lambda is an eigenvalue, then when I change, when I go -- you remember what
complex conjugates are? You switch the sign of the imaginary part. Well, this was only imaginary, had no real part, so we just switched its sign.
So that eigenvalues come in pairs like that, but they're complex. A complex conjugate pair.
And that can happen with a perfectly real matrix.
And as a matter of fact -- so that was my, my point earlier, that if a matrix was symmetric, it wouldn't happen.
So if we stick to matrices that are symmetric or, like, close to symmetric, then the eigenvalues will stay real. But if we move far away from symmetric -- and that's as far as you can move, because
that matrix is -- how is Q transpose related to Q for that matrix? That matrix is anti-symmetric.
Q transpose is minus Q. That's the very opposite of symmetry. When I flip across the diagonal I get -- I reverse all the signs.
Those are the guys that have pure imaginary eigenvalues.
So they're the extreme case. And in between are, are matrices that are not symmetric or anti-symmetric but, but they have partly a symmetric part and an anti-symmetric part.
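(Aside, numpy assumed: the ninety-degree rotation really does give a complex conjugate pair, and the product of the eigenvalues matches the determinant.)

```python
# The 90-degree rotation matrix is real but has eigenvalues i and -i.
import numpy as np

Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])
vals, _ = np.linalg.eig(Q)
print(vals)                                           # [0.+1.j 0.-1.j]
print(np.isclose(np.prod(vals), np.linalg.det(Q)))    # True: product = det = 1
```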
OK. So I'm doing a bunch of examples here to show the possibilities.
The good possibilities being perpendicular eigenvectors, real eigenvalues. The bad possibilities being complex eigenvalues. We could say that's bad.
There's another even worse. I'm getting through the, the bad things here today. Then, then the next lecture can, can, can be like pure happiness.
OK. Here's one more bad thing that could happen. So I, again, I'll do it with an example. Suppose my matrix is, suppose I take this three three one and I change that guy to zero. What are the
eigenvalues of that matrix? What are the eigenvectors? This is always our question.
Of course, the next section we're going to show why are, why do we care. But for the moment, this lecture is introducing them.
And let's just find them. OK.
What are the eigenvalues of that matrix? Let me tell you -- at a glance we could answer that question.
Because the matrix is triangular.
It's really useful to know -- if you've got properties like a triangular matrix. It's very useful to know you can read the eigenvalues off.
They're right on the diagonal. So the eigenvalue is three and also three. Three is a repeated eigenvalue. But let's see that happen. Let me do it right.
The determinant of A minus lambda I, what I always have to do is this determinant.
I take away lambda from the diagonal.
I leave the rest. I compute the determinant, so I get a three minus lambda times a three minus lambda.
And nothing. So that's where the triangular part came in. Triangular part, the one thing we know about triangular matrices is the determinant is just the product down the diagonal.
And in this case, it's this same, repeated -- so lambda one is one -- sorry, lambda one is three and lambda two is three.
That was easy. I mean, no -- why should I be pessimistic about a matrix whose eigenvalues can be read off right away? The problem with this matrix is in the eigenvectors. So let's go to the
eigenvectors. So how do I find the eigenvectors? I'm looking for a couple of eigenvectors. So I take the eigenvalue.
What do I do now? You remember, I solve A minus lambda I x equals zero.
And what is A minus lambda I x? So, so take three away.
And I get this matrix zero one, zero zero, right? Times x is supposed to give me zero, right? That's my big equation for x.
Now I'm looking for x, the eigenvector. So I took A minus lambda I x, and what kind of a matrix I supposed to have here? Singular, right? It's supposed to be singular. And then it's got some vectors
-- which it is. So it's got some vector x in the null space. And what, what's the, what's -- give me a basis for the null space for that guy.
Tell me, what's a vector x in the null space, so that'll be the, the eigenvector that goes with lambda one equals three. The eigenvector is -- so what's in the null space? One zero, is it? Great.
Now, what's the other eigenvector? What's, what's the eigenvector that goes with lambda two? Well, lambda two is three again. So I get the same thing again.
Give me another vector -- I want it to be independent.
If I'm going to write down an x2, I'm never going to let it be dependent on x1. I'm looking for independent eigenvectors, and what's the conclusion? There isn't one. This is a degenerate matrix. It's
only got one line of eigenvectors instead of two.
It's this possibility of a repeated eigenvalue opens this further possibility of a shortage of eigenvectors.
And so there's no second independent eigenvector x2.
So it's a matrix, it's a two by two matrix, but with only one independent eigenvector. So that will be -- the matrices that -- where eigenvectors are -- don't give the complete story. OK.
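(One last aside, numpy assumed: the shortage of eigenvectors for this degenerate matrix is visible numerically.)

```python
# [[3,1],[0,3]] has the repeated eigenvalue 3 but only one independent
# eigenvector: the eigenvector columns numpy returns are essentially parallel.
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 3.0]])
vals, vecs = np.linalg.eig(A)
print(vals)                                   # [3. 3.]
print(vecs)                                   # both columns essentially (1, 0)
print(np.linalg.matrix_rank(vecs, tol=1e-8))  # 1, not 2: a shortage of eigenvectors
```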
My lecture on Monday will give the complete story for all the other matrices. Thanks.
Have a good weekend. A real New England weekend.
Probability and statistics blog
31 Mar 14
• Are you a fan of Wes Anderson? Revoluntion Analytics shares some ideas on how you can bring his style to your own R charts, by making use of these Wes Anderson inspired palettes.
• Given 3 random variables X, Y and Z with known distributions, can you calculate cov(X, Y) from cov(X, Z) and cov(Y, Z)?
• Some useful R tips this week are: Filtering Data with L1 Regularisation, quickly calculating summary statistics from a data frame, A Simple Introduction to the Graphing Philosophy of ggplot2, and
Visualizing principal components with R and Sochi Olympic Athletes.
• Xi’an reviews Bayesian Data Analysis by Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Don Rubin.
• And finally, Nathan Yau of FlowingData presents some visuals from a study on smoking prevalence from 1996 to 2012, and concludes that smoking rate is inversely proportional to income level.
17 Mar 14
• R 3.0.3 is released (with installation and upgrading instructions and a list of updates, bug fixes and changes).
• Suppose a company has 5 servers, and there is a 1% chance that each server will be down. What is the probability that at least 3 servers are down?
• Mikio L. Braun, a PostDoc in machine learning at TU Berlin and co-founder and chief data scientist at streamdrill, discusses the difficulties of data analysis.
• Xi’an comments on a new paper by his PhD student called Approximate Integrated Likelihood via ABC methods.
• How people really read and share online.
• Joseph Rickert of Revolution Analytics publishes his R “meta” book, a collection of 14 books (all available online for free) that covers useful topics including basic probability and statistics,
regressions, experimental design, survival analysis, times series analysis and forecasting, machine learning, bioinformatics, structural equation models and credit scoring.
• And finally, Flavio Barros compiles a list of MOOC courses on R.
Image registration
In computer vision, sets of data acquired by sampling the same scene or object at different times, or from different perspectives, will be in different coordinate systems. Image registration is the process of transforming the different sets of data into one coordinate system. Registration is necessary in order to be able to compare or integrate the data obtained from different measurements.
Medical image registration (e.g. for data of the same patient taken at different points in time) often additionally involves elastic (or nonrigid) registration to cope with deformation of the subject
(due to breathing, anatomical changes, etc.). Nonrigid registration of medical images can also be used to register a patient's data to an anatomical atlas, such as the Talairach atlas for neuroimaging.
Algorithm classifications
Area-based vs Feature-based
Image registration algorithms fall within two realms of classification: area based methods and feature based methods. The original image is often referred to as the reference image and the image to be mapped onto the
reference image is referred to as the target image. For area based image registration methods, the algorithm looks at the structure of the image via correlation metrics, Fourier properties and other
means of structural analysis. Alternatively, most feature based methods, instead of looking at the overall structure of images, fine tune their mappings to the correlation of image features: lines,
curves, points, line intersections, boundaries, etc.
Transformation model
Image registration algorithms can also be classified according to the transformation model used to relate the reference image space with the target image space. The first broad category of transformation models includes linear transformations, which are a combination of translation, rotation, global scaling, shear and perspective components. Linear transformations are global in nature, thus not being able to model local deformations. Usually, perspective components are not needed for registration, so that in this case the linear transformation is an affine one.
The second category includes 'elastic' or 'nonrigid' transformations. These transformations allow local warping of image features, thus providing support for local deformations. Nonrigid
transformation approaches include polynomial warping, interpolation of smooth basis functions (thin-plate splines and wavelets), and physical continuum models (viscous fluid models and large
deformation diffeomorphisms).
Search-based vs direct methods
Image registration methods can also be classified in terms of the type of search that is needed to compute the transformation between the two image domains. In search-based methods the effect of different image
deformations is evaluated and compared.
Spatial-domain methods
Many image registration methods operate in the spatial domain, using features, structures, and textures as matching criteria. In the spatial domain, images look 'normal' as the human eye might perceive them. Some of the feature matching algorithms are outgrowths of traditional techniques for performing manual image registration, in which operators choose matching sets of control points (CPs) between images. When the number of control points exceeds the minimum required to define the appropriate transformation model, iterative algorithms like RANSAC are used to robustly estimate the best solution.
Frequency-domain methods
Other algorithms use the properties of the frequency domain to directly determine shifts between two images. Applying the phase correlation method to a pair of overlapping images produces a third image which contains a single peak. The location of this peak corresponds to the relative translation between the two images. Unlike many spatial-domain algorithms, the phase correlation method is resilient to noise, occlusions, and other defects typical of medical or satellite images. Additionally, the phase correlation uses the fast Fourier transform to compute the cross-correlation between the two images, generally resulting in large performance gains. The method can be extended to determine rotation and scaling between two images by first converting the images to log-polar coordinates. Due to properties of the Fourier transform, the rotation and scaling parameters can be determined in a manner invariant to translation. This single feature makes phase-correlation methods highly attractive vs. typical spatial methods, which must determine rotation, scaling, and translation simultaneously, often at the cost of reduced precision in all three.
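A minimal sketch of the phase correlation idea for pure translation is given below; numpy is assumed, and real implementations add windowing, sub-pixel peak estimation, and the log-polar step for rotation and scale.

```python
# Phase correlation sketch: recover a circular translation between two images
# from the peak of the inverse FFT of the normalized cross-power spectrum.
import numpy as np

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shift = (5, 12)
target = np.roll(ref, shift, axis=(0, 1))      # circularly shifted copy of ref

F_ref, F_tgt = np.fft.fft2(ref), np.fft.fft2(target)
cross_power = np.conj(F_ref) * F_tgt
cross_power /= np.abs(cross_power) + 1e-12     # keep only the phase
corr = np.fft.ifft2(cross_power).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)                                    # (5, 12): the applied shift
```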
Image nature
Another useful classification is between single-modality and multi-modality registration algorithms. Single-modality registration algorithms are those intended to register images of the same modality
(i.e. acquired using the same kind of imaging device), while multi-modality registration algorithms are those intended to register images acquired using different imaging devices.
There are several examples of multi-modality registration algorithms in the medical imaging field. Examples include registration of brain CT/MRI images or whole body PET/CT images for tumor
localization, registration of contrast-enhanced CT images against non-contrast-enhanced CT images for segmentation of specific parts of the anatomy and registration of ultrasound and CT images for
prostate localization in radiotherapy.
Other classifications
Further ways of classifying an algorithm consist of the amount of data it is optimized to handle, the algorithm's application, and the central theory the algorithm is based around. Image registration has applications in remote sensing (cartography updating), medical imaging (change detection, tumor monitoring), and computer vision. Due to the vast applications to which image registration can be applied, it's impossible to develop a general algorithm optimized for all uses.
Image similarity-based methods
Image similarity-based methods are broadly used in medical imaging. A basic image similarity-based method consists of a transformation model, which is applied to reference image coordinates to locate their corresponding coordinates in the target image space, an image similarity metric, which quantifies the degree of correspondence between features in both image spaces achieved by a given transformation, and an optimization algorithm, which tries to maximize image similarity by changing the transformation parameters.
The choice of an image similarity measure depends on the nature of the images to be registered. Common examples of image similarity measures include cross-correlation, mutual information, sum of
square differences and ratio image uniformity. Mutual information and its variant, normalized mutual information, are the most popular image similarity measures for registration of multimodality
images. Cross-correlation, sum of square differences and ratio image uniformity are commonly used for registration of images of the same modality.
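As a toy illustration of these ingredients, the sketch below uses a translation-only transformation model, the sum of squared differences as the similarity metric, and a brute-force search as the optimizer; numpy is assumed, and real registration uses richer models, metrics such as mutual information, and proper optimizers.

```python
# Toy similarity-based registration: translation model + SSD metric + exhaustive search.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.random((32, 32))
true_shift = (3, -4)
target = np.roll(reference, true_shift, axis=(0, 1))

def ssd(a, b):
    return np.sum((a - b) ** 2)

candidates = [(dy, dx) for dy in range(-8, 9) for dx in range(-8, 9)]
best = min(
    candidates,
    key=lambda s: ssd(np.roll(target, (-s[0], -s[1]), axis=(0, 1)), reference),
)
print(best)   # (3, -4): the translation that best aligns target to reference
```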
Submitted to: Journal of Dairy Science
Publication Type: Abstract Only
Publication Acceptance Date: March 10, 2008
Publication Date: July 7, 2008
Silva Del Rio, N., Broderick, G.A., Fricke, P.M. 2008. Mathematical simulation to assess the validity of Bonnier's equation for estimating the frequency of monozygous twinning in a population of
Holstein cattle [abstract]. Journal of Dairy Science. 91(suppl. 1):241.
Technical Abstract:
Twin calving records (n = 96,069) collected from 1996 to 2004 were extracted from Minnesota Dairy Herd Improvement archives to estimate the incidence of monozygous (MZ) twinning in a population of
Holstein cattle and to evaluate how varying the twin sex ratio and frequency of same-sex twins affects MZ estimates made using Bonnier's equation. Bonnier's equation, m = (2npq - n2) / [2pq(n - n2)], estimates
the proportion of MZ twins among same-sex twins (m) based on total opposite-sexed twin pairs (n2) and the observed proportions of male (p) and female (q=1-p) calves among all twin births. Bonnier’s
equation assumes the sex of one twin is independent of the other; therefore, similar proportions of same- and opposite-sex twin pairs would be expected in the absence of MZ twinning. We hypothesized
a dramatic decrease in Bonnier’s estimate of MZ twinning if same-sex twins comprise a smaller proportion of a population than expected. Based on our study population, 56.4% of twin pairs were
same-sex (30.1% MM; 26.3% FF) and 51.9% of twin calves were male, resulting in an estimated MZ twin frequency of 11.6% using Bonnier’s equation. The estimates of MZ twinning were calculated by
simulating a reduction of same-sex twins of 5% (54.2% same-sex twins) or 10% (52.0% same-sex twins), whereas the proportion of male calves born as twins was 51.9% (observed) or simulated to be 50%.
Based on our study population, the estimates of MZ twinning were greater than expected, based on observed outcomes of MZ twinning (Silva del Rio et al., Therio. 66:1292;2006). We concluded that
slight changes in the percentage of same-sex twins in a study population dramatically affect MZ estimates using Bonnier’s equation, whereas the percentage of male calves born as twins has a minimal
impact. Thus, if factors other than MZ twinning affect the proportion of same-sex twins in a study population, Bonnier’s equation will inaccurately estimate the frequency of MZ twins.
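A rough sketch of this kind of simulation is given below; Python with numpy is assumed, the parameter values are illustrative rather than those of the study, and the code is not the authors'.

```python
# Simulate twin pairs with a known monozygous (MZ) fraction and apply Bonnier's
# equation m = (2npq - n2) / (2pq(n - n2)) to the simulated counts.
import numpy as np

rng = np.random.default_rng(42)
n = 96_069                  # number of twin pairs
mz_fraction = 0.10          # fraction of all pairs that are MZ (illustrative)
p_male = 0.519

is_mz = rng.random(n) < mz_fraction
sex1 = rng.random(n) < p_male
sex2 = np.where(is_mz, sex1, rng.random(n) < p_male)   # MZ co-twins share sex

n2 = int(np.sum(sex1 != sex2))                  # opposite-sex pairs
p = (sex1.sum() + sex2.sum()) / (2 * n)         # proportion of male calves
q = 1 - p

m_est = (2 * n * p * q - n2) / (2 * p * q * (n - n2))
m_true = is_mz.sum() / np.sum(sex1 == sex2)     # actual MZ share of same-sex pairs
print(m_est, m_true)                            # the two should roughly agree
```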
South African Journal of Science
Print version ISSN 0038-2353
S. Afr. j. sci. vol.108 no.3-4 Pretoria 2012
NEWS AND VIEWS
Solved problems in classical mechanics
Nithaya Chetty
Physics Department, University of Pretoria, Pretoria, South Africa
Book Title: Solved problems in classical mechanics - Analytical and numerical solutions with comments
Authors: O. de Lange, J. Pierrus
ISBN: 9780199582518
Publisher: Oxford University Press, Oxford; 2010, R783.00*
Review Title: Solved problems in classical mechanics
Newtonian physics has been tried and tested for more than three centuries, and classical mechanics is deeply entrenched in standard introductory courses of physics. High school physics the world over
begins with a discussion of Newton's three laws of motion, often followed by Newton's law of gravitation. Introductory university physics continues to build on Newton's laws by considering more
complexity - usually involving dynamics, for example, the motion of rigid bodies and studies of rotational motion. Classical mechanics is often, but not always in South Africa, taught at more senior
levels at universities, and here the focus tends to be on Hamilton's principle and on Lagrange's formulation of classical mechanics. The applications at this level are invariably more complex -
applications to coupled systems, chaotic systems, and so on. It is no exaggeration to say that there are several hundred excellent textbooks on classical mechanics available that have been published
over the past century or so. The question is whether there is room for yet another textbook on classical mechanics in the world.
What I find remarkable about the book of de Lange and Pierrus, is that the authors, who are academics at the University of KwaZulu-Natal, even dared to embark on this hefty project (about 600 pages),
in a field that is clearly mature and very well established. Did they have the foresight to carve out a niche that makes their contribution unique and useful in a very crowded subject area?
The answer is yes and emphatically so. The authors make significant contributions to classical mechanics by considering more complex - and hence more realistic - problems, many of which are only
tractable on the computer. They use Mathematica, which is a useful symbolic manipulation package, to solve their problems. They give excerpts of their computer code, which is very readable. By
presenting their computational methodology in such detail, the authors are helping the reader to understand the algorithmic structures of their solutions which are readily transferable to other
programming languages. This approach enables any competent computational physics student to use their favourite computing language (such as fortran, C, C++ and java) to develop their own coded
solutions to the same problems. In these respects, the book is enormously pedagogical and useful. It is a very good resource for teaching standard theoretical and computational classical mechanics.
Considering that classical mechanics is basic to both physics and practically all the engineering disciplines, there is potentially a very wide readership.
The range of topics within the book is very impressive. The authors cover problems in one, two and three dimensions, as well as problems involving linear oscillations, energy and potentials, momentum
and angular momentum, multiparticle systems, rigid bodies, non-linear oscillations, reference frames and the relativity principle. The book is written in the form of problems with solutions and with
comments. The solutions are often accompanied by graphical representations. Students would do well to turn some of these solutions to graphical animations using visualisation tools. The authors could
perhaps continue to be involved in this project by making their visualisation resources available on the web. Their comments are very insightful and often point to something new that can and should
be explored further by the reader. This exploration in turn encourages the reader to build on the programming solutions provided by the authors.
I found the final chapter, 'The relativity principle and some of its consequences', to be especially elucidating, and the problems very instructive and even suggestive. If one asserts that the laws
of physics are equally valid in all inertial frames, then the notion of a universal speed emerges very naturally, and the mystery that is often accompanied by Einstein's theory of special relativity
dissipates quite quickly. With physics, however, it is very often easier to argue the case retrospectively in elegant ways as is done here. But the genius of Einstein will always remain a mystery.
Postal address:
Physics Department
University of Pretoria
Pretoria 0001, South Africa
Email: Nithaya.Chetty@up.ac.za
* Book price at time of review.
|
{"url":"http://www.scielo.org.za/scielo.php?script=sci_arttext&pid=S0038-23532012000200007&lng=en&nrm=iso","timestamp":"2014-04-18T03:56:16Z","content_type":null,"content_length":"18797","record_id":"<urn:uuid:372ef40f-fd71-49dc-aa14-5fdbadc8df09>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Video Library
Since 2002 Perimeter Institute has been recording seminars, conference talks, and public outreach events using video cameras installed in our lecture theatres. Perimeter now has 7 formal presentation
spaces for its many scientific conferences, seminars, workshops and educational outreach activities, all with advanced audio-visual technical capabilities. Recordings of events in these areas are all
available On-Demand from this Video Library and on Perimeter Institute Recorded Seminar Archive (PIRSA). PIRSA is a permanent, free, searchable, and citable archive of recorded seminars from relevant
bodies in physics. This resource has been partially modelled after Cornell University's arXiv.org.
The final part of the 04-05 Public Events series turns the spotlight on you. It's your chance to ask a panel of Perimeter researchers for their thoughts on a wide variety of scientific topics.
Heisenberg, uncertainty principle, discrete theory, space-time, Thiemann, quantum, relativity, special relativity, quantum theory, Emerson, coherent superpositions, Schrödinger, Sorkin, clock, Freidel, gravity, Romelsberger, Burgess, Einstein, string theory, quantum entanglement
In this talk I'll discuss recent joint work with Raymond Laflamme, David Poulin and Maia Lesosky in which a unified approach to quantum error correction is presented, called "operator quantum error
correction". This scheme relies on a generalized notion of noiseless subsystems and includes the known techniques for the error correction of quantum operations --i.e., the standard model, the method
of decoherence-free subspaces, and the noiseless subsystem method--as special cases. Correctable codes in this approach take the form of operator algebras and operator semigroups.
|
{"url":"http://www.perimeterinstitute.ca/video-library?title=&page=688&qt-videos=0","timestamp":"2014-04-21T11:53:57Z","content_type":null,"content_length":"65596","record_id":"<urn:uuid:0f37c5fa-026c-4090-9d64-644a017e527f>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculated fraction of modifiers associated with network former A as a function of ΔH/kT_f, where ΔH = H_B − H_A, k is Boltzmann's constant, and T_f is the fictive temperature of the glass. Here systems are considered where the modifier concentration equals half the total concentration of network formers, i.e., [M]/([A] + [B]) = 0.5. The concentrations of network formers A and B are adjusted to four different proportions, keeping the total concentration of network formers constant. All of the modifiers not associated with network former A are associated with network former B, such that ⟨p_A⟩ + ⟨p_B⟩ = 1, where ⟨p_A⟩ is given by the curves in the figure and ⟨p_B⟩ is obtained by 1 − ⟨p_A⟩.
Expectation value of the fraction of modifiers (⟨p_A⟩) associated with network former A as a function of its molar fraction, [A]/([A] + [B]). Here it is assumed that the modifier concentration is maintained at half the total concentration of network formers, i.e., [M]/([A] + [B]) = 0.5. Calculations are performed for varying levels of ΔH/kT_f. The mixed network former effect is clearly visible for ΔH/kT_f > 0, where there is a nonlinear scaling of p_A with [A]/([A] + [B]).
Calculated fraction of modifiers associated with network former A as a function of ΔH/kT_f for systems with different modifier concentrations and a fixed ratio of network formers, [A]/([A] + [B]) = 0.25. The modifiers are distributed proportionally between network formers A and B in the limits of ΔH/kT_f = 0 or [M] = [A] + [B]. All other cases will lead to a mixed network former effect.
Calculated fraction of modifiers associated with network former A as a function of ΔH/kT_f for systems with different modifier concentrations and an equal fraction of network formers A and B, i.e., [A]/([A] + [B]) = 0.5. As in Fig. 3, the modifiers are distributed proportionally between the two network formers in the limits of ΔH/kT_f = 0 and [M] = [A] + [B].
Fraction of modifiers associated with each of three network formers (A, B, and C) as a function of (H_C − H_A)/kT_f. Here we have set (H_B − H_A)/kT_f = 1 and fixed the concentrations of network formers and modifiers as indicated in the figure. In this system having three network formers, a complicated mixed network former effect can be observed due to (i) the difference in enthalpies of formation associated with bonding between the network modifier and each of the network formers, and (ii) the fact that the modifier concentration is less than the total concentration of network formers.
Probability density of p_A, the fraction of modifiers associated with network former A. Probability density functions are shown for four different levels of ΔH/kT_f and fixed concentrations of network formers and modifiers as indicated in the figure. In the limit of ΔH/kT_f → 0, the modifier statistics follow a hypergeometric distribution. In the limit of high ΔH/kT_f, the distribution collapses to a Dirac delta function.
Relaxation of network speciation from a high fictive temperature (ΔH/kT_f = 1) to a low fictive temperature (ΔH/kT_f = 4). Here it is assumed that structural relaxation follows a stretched exponential decay with a relaxation time τ and stretching exponent β = 3/7.
|
{"url":"http://scitation.aip.org/content/aip/journal/jcp/138/12/10.1063/1.4773356","timestamp":"2014-04-17T08:10:57Z","content_type":null,"content_length":"80289","record_id":"<urn:uuid:898cb88a-61f1-40ac-bc14-00662c0671d1>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A simple implicit measure of the effective bid-ask spread in an efficient market
Results 1 - 10 of 157
- REVIEW OF FINANCIAL STUDIES , 1988
"... In this article we test the random walk hypothesis for weekly stock market returns by comparing variance estimators derived from data sampled at different frequencies. The random walk model is
strongly rejected for the entire sample period (1962--1985) and for all subperiod for a variety of aggrega ..."
Cited by 226 (13 self)
In this article we test the random walk hypothesis for weekly stock market returns by comparing variance estimators derived from data sampled at different frequencies. The random walk model is
strongly rejected for the entire sample period (1962--1985) and for all subperiods for a variety of aggregate returns indexes and size-sorted portfolios. Although the rejections are due largely to
the behavior of small stocks, they cannot be attributed completely to the effects of infrequent trading or time-varying volatilities. Moreover, the rejection of the random walk for weekly returns does
not support a mean-reverting model of asset prices.
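To illustrate the idea of comparing variance estimators across sampling frequencies, here is a bare-bones variance-ratio sketch in Python; it omits the overlapping-sample bias corrections and heteroskedasticity-robust test statistics used in the actual study.

```python
import numpy as np

def variance_ratio(prices, q):
    """Simple variance ratio: variance of q-period log returns divided by
    q times the variance of 1-period log returns; ~1 under a random walk."""
    r = np.diff(np.log(np.asarray(prices, dtype=float)))        # 1-period returns
    rq = np.array([r[i:i + q].sum() for i in range(len(r) - q + 1)])  # overlapping q-period returns
    return rq.var(ddof=1) / (q * r.var(ddof=1))
```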
, 2002
"... Abstract: Questions remain as to whether results from experimental economics games are generalizable to real decisions in non-laboratory settings. I conduct a survey and two experiments, a Trust
game and a Public Goods game, to measure social capital. Social capital purports to provide incentives to ..."
Cited by 97 (15 self)
Abstract: Questions remain as to whether results from experimental economics games are generalizable to real decisions in non-laboratory settings. I conduct a survey and two experiments, a Trust game
and a Public Goods game, to measure social capital. Social capital purports to provide incentives to individuals to abide by otherwise difficult to enforce contracts. I then examine whether behavior
in the games predicts repayment of loans to a Peruvian group lending microfinance program. I find that individuals identified as "trustworthy " by the Trust game are more likely to repay their loans.
Individuals identified as "trusting " default more and save less, suggesting that those who “trust ” more in the game are prone to taking bad risks. Behavior in a public goods game does not predict
any future decisions in this context. Those with more positive attitudes towards society, as measured by three questions identical to those used in the General Social Survey, are more likely to repay
their loans.
, 2004
"... We model a dynamic limit order market as a stochastic sequential game. Since the model is analytically intractable, we provide an algorithm based on Pakes and McGuire (2001) to find a stationary
Markov-perfect equilibrium. Given the stationary equilibrium, we generate artificial time series and p ..."
Cited by 92 (5 self)
We model a dynamic limit order market as a stochastic sequential game. Since the model is analytically intractable, we provide an algorithm based on Pakes and McGuire (2001) to find a stationary
Markov-perfect equilibrium. Given the stationary equilibrium, we generate artificial time series and perform comparative dynamics. As we know the data generating process, we can compare transaction
prices to the true value of the asset, as well as explicitly determine the welfare gains accruing to investors.
, 2006
"... Given the cross-sectional and temporal variation in their liquidity, emerging equity markets provide an ideal setting to examine the impact of liquidity on expected returns. Our main liquidity
measure is a transformation of the proportion of zero daily firm returns, averaged over the month. We find ..."
Cited by 53 (8 self)
Given the cross-sectional and temporal variation in their liquidity, emerging equity markets provide an ideal setting to examine the impact of liquidity on expected returns. Our main liquidity
measure is a transformation of the proportion of zero daily firm returns, averaged over the month. We find that it significantly predicts future returns, whereas alternative measures such as turnover
do not. Consistent with liquidity being a priced factor, unexpected liquidity shocks are positively correlated with contemporaneous return shocks and negatively correlated with shocks to the dividend
yield. We consider a simple asset pricing model with liquidity and the market portfolio as risk factors and transaction costs that are proportional to liquidity. The model differentiates between
integrated and segmented countries and time periods. Our results suggest that local market liquidity is an important driver of expected returns in emerging markets, and that the liberalization
process has not fully eliminated its impact.
- Journal of Finance , 1998
"... It is a common view that private information in the foreign exchange market does not exist. We provide evidence against this view. The evidence comes from the introduction of trading in Tokyo
over the lunch-hour. Lunch return variance doubles with the introduction of trading, which cannot be due to ..."
Cited by 33 (5 self)
It is a common view that private information in the foreign exchange market does not exist. We provide evidence against this view. The evidence comes from the introduction of trading in Tokyo over
the lunch-hour. Lunch return variance doubles with the introduction of trading, which cannot be due to public information since the flow of public information did not change with the trading rules.
Having eliminated public information as the cause, we exploit the volatility pattern over the whole day to discriminate between the two alternatives: private information and pricing errors. Three key
results support the predictions of private-information models. First, the volatility U-shape flattens: greater revelation over lunch leaves a smaller share for the morning and afternoon. Second, the
U-shape tilts upward, an implication of information whose private value is transitory. Finally, the morning exhibits a clear U-shape when Tokyo closes over lunch, and it disappears when trading is
- Journal of Finance , 2009
"... The effective cost of trading is usually estimated from transaction-level data. This study proposes a Gibbs estimate that is based on daily closing prices. In a validation sample, the daily
Gibbs estimate achieves a correlation of 0.965 with the transactionlevel estimate. When the Gibbs estimates ar ..."
Cited by 32 (1 self)
The effective cost of trading is usually estimated from transaction-level data. This study proposes a Gibbs estimate that is based on daily closing prices. In a validation sample, the daily Gibbs
estimate achieves a correlation of 0.965 with the transactionlevel estimate. When the Gibbs estimates are incorporated into asset pricing specifications over a long historical sample (1926 to 2006),
the results suggest that effective cost (as a characteristic) is positively related to stock returns. The relation is strongest in January, but it appears to be distinct from size effects.
INVESTIGATIONS INTO THE ROLE of liquidity and transaction costs in asset pricing must generally confront the fact that while many asset pricing tests make use of U.S. equity returns from 1926 onward,
the high-frequency data used to estimate trading costs are usually not available prior to 1983. Accordingly, most studies either limit the sample to the post-1983 period of common coverage or use the
longer historical sample with liquidity proxies estimated from daily data. This paper falls into the latter group. Specifically, I propose a new approach to estimating the effective cost of trading
and the common variation in this cost. These estimates are then used in conventional asset pricing specifications with a view to ascertaining the role of trading costs as a characteristic in
explaining expected returns. 1
, 2001
"... Estimates of daily volatility are investigated. Realized volatility can be computed from returns observed over time intervals of different sizes. For simple statistical reasons, volatility
estimators based on high-frequency returns have been proposed, but such estimators are found to be strongly bi ..."
Cited by 29 (4 self)
Estimates of daily volatility are investigated. Realized volatility can be computed from returns observed over time intervals of different sizes. For simple statistical reasons, volatility estimators
based on high-frequency returns have been proposed, but such estimators are found to be strongly biased as compared to volatilities of daily returns. This bias originates from microstructure effects
in the price formation. For foreign exchange, the relevant microstructure effect is the incoherent price formation, which leads to a strong negative first-order autocorrelation ρ_1 ≈ −40% for
tick-by-tick returns and to the volatility bias. On the basis of a simple theoretical model for foreign exchange data, the incoherent term can be filtered away from the tick-by-tick price series.
With filtered prices, the daily volatility can be estimated using the information contained in highfrequency data, providing a high-precision measure of volatility at any time interval.
, 2010
"... This paper examines the illiquidity of corporate bonds and its asset-pricing implications. Using transaction-level data from 2003 through 2009, we show that the illiquidity in corporate bonds is
substantial, significantly greater than what can be explained by bidask spreads. We establish a strong li ..."
Cited by 25 (11 self)
This paper examines the illiquidity of corporate bonds and its asset-pricing implications. Using transaction-level data from 2003 through 2009, we show that the illiquidity in corporate bonds is
substantial, significantly greater than what can be explained by bidask spreads. We establish a strong link between bond illiquidity and bond prices, both in aggregate and in the cross-section. In
aggregate, changes in the market level illiquidity explain a substantial part of the time variation in yield spreads of high-rated (AAA through A) bonds, over-shadowing the credit risk component. In
the cross-section, the bond-level illiquidity measure explains individual bond yield spreads with large economic significance.
, 2002
"... This paper explores the private equity (PE) and venture capital (VC) markets and demonstrates that unavoidable principal-agent problems result in equilibrium competitive equity prices that are
decreasing in the amount of idiosyncratic risk. The structure of information in these markets means that id ..."
Cited by 20 (0 self)
This paper explores the private equity (PE) and venture capital (VC) markets and demonstrates that unavoidable principal-agent problems result in equilibrium competitive equity prices that are
decreasing in the amount of idiosyncratic risk. The structure of information in these markets means that idiosyncratic risk will be priced even if investors can fully diversify and the private
capital markets are competitive. VCs are agents who help investors (the principals) find positive NPV projects. To ensure that VCs screen properly, they must receive compensation based on the
performance of their recommendations. Significant time is required to determine if a project is NPV positive, which means that VCs will identify only a small number of investments, exposing them to
idiosyncratic risk. Furthermore, VC compensation represents a significant fraction of their wealth. Therefore, they demand returns for the risk they hold. As a result, we show that VC investments
have positive alphas while investors in VC funds earn zero alphas. In addition, some positive NPV projects with significant idiosyncratic risk will not be financed. Furthermore, projects or funds that
have more idiosyncratic risk will earn higher returns. This last result can be used to empirically distinguish our idea from fixed compensation or a lack of competition. The
- Journal of Financial Economics , 2005
"... Emerging markets are characterized by volatile, but substantial returns that can easily exceed 75 % per annum. Balancing these lofty returns are the liquidity concerns of trading in emerging
markets. Adopting the model of security returns developed by Lesmond, Ogden, and Trzcinka (1999), liquidity m ..."
Cited by 20 (1 self)
Emerging markets are characterized by volatile, but substantial returns that can easily exceed 75 % per annum. Balancing these lofty returns are the liquidity concerns of trading in emerging markets.
Adopting the model of security returns developed by Lesmond, Ogden, and Trzcinka (1999), liquidity measures are estimated for all securities and time periods (63798 firm-years) for which daily prices
are available in 31 emerging markets from 1991 to 2000. Significant cross-sectional differences and time-series variations typify the liquidity measure over the period 1991 to 2000. The liquidity
estimates are over 80 % correlated with the proportional bid-ask spread, where available, and regression tests show high association between the proportional bid-ask spread and the liquidity
estimate. Additionally, as trade difficulty increases, proxied by price, volume, or market capitalization, the proposed liquidity measure increases consistent with the observed proportional bid-ask
spread. Multivariate regression tests show that the proposed liquidity measure remains significant regardless of controlling for all of the trade difficulty variables, as well as turnover.
Additionally, the proposed liquidity measure is found to be superior to the trade difficulty variables or turnover at explaining the spread-plus-commission costs in the majority of the 23 emerging
markets. Emerging markets are experiencing explosive growth. Not only did the total value of shares traded increase from $15 billion in 1991 to over $200 billion in 2000, but the total market
capitalization rose from $306 billion in 1991 to over $1.4 trillion in 2000.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=686749","timestamp":"2014-04-18T19:24:11Z","content_type":null,"content_length":"40785","record_id":"<urn:uuid:948a4758-610f-4c25-a335-a1161fc2580e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00575-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to find the orbit time of this satellite?
October 3rd 2009, 03:53 AM
How to find the orbit time of this satellite?
A satellite is in a circular orbit H km above the earth surface.
In how many hours does the satellite complete one orbit?
The radius of the earth is 6370 km.
H[km] = 289;
Could someone please show me how to solve this
October 3rd 2009, 04:01 AM
The gravitational attraction between the satellite and the earth is the same as the centripetal force of the satellite, so
$\frac{GMm}{(R+H)^2} = m\omega^2 (R+H)$
Let M be the mass of the earth and m be the mass of the satellite.
Note that $\omega = \frac{2\pi}{T}$
Rearranging,
$T = 2\pi\sqrt{\frac{(R+H)^3}{GM}}$
Now you know R, H and G (the gravitational constant). The mass of the earth is not given, but you can check it in your book since I have forgotten it.
Then substitute .
October 3rd 2009, 04:19 AM
Converting to hours
To convert to hours I will just divide T by 360?
October 3rd 2009, 04:26 AM
Divide by $60 \times 60 = 3600$
October 4th 2009, 01:41 AM
Kepler's Third law tells us that for a circular orbit:
$\frac{\tau^2}{r^3}=\text{constant}$
where $\tau$ is the period of the orbit and $r$ its radius.
For a geostationary orbit $\tau_{geo}=24$ hours and $r_{geo}\approx 42164$ km (an altitude of about 35786 km above the surface).
So the period $\tau_1$ that we seek satisfies:
$\frac{\tau_1^2}{\tau_{geo}^2}=\frac{6659^3}{r_{geo}^3}$
and the rest is arithmetic.
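As a quick numerical check of both approaches (the thread does not include one), here is a short Python sketch; the values of G and M are standard textbook constants that are not given in the thread.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of the earth, kg
R = 6370e3         # radius of the earth, m (as given in the problem)
H = 289e3          # altitude of the satellite, m

r = R + H
# Newton: T = 2*pi*sqrt(r^3 / (G*M))
T_newton = 2 * math.pi * math.sqrt(r**3 / (G * M))
# Kepler scaling against a 24 h geostationary orbit of radius ~42164 km
T_kepler = 24 * 3600 * math.sqrt((r / 42164e3)**3)

print(T_newton / 3600, T_kepler / 3600)   # both give roughly 1.5 hours
```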
|
{"url":"http://mathhelpforum.com/algebra/105814-how-find-orbit-time-satellite-print.html","timestamp":"2014-04-18T21:40:17Z","content_type":null,"content_length":"8652","record_id":"<urn:uuid:d240dcac-d3cc-4123-9be4-3237a330a8ff>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Passyunk, PA Precalculus Tutor
Find a Passyunk, PA Precalculus Tutor
...However, I really found myself seeking a higher pursuit and many people close to me encouraged me to teach. I thought back on my first teaching experience back when I was in college. I took
part in this program where students from my university taught a group of public school children how to make a model rocket and how it worked.
16 Subjects: including precalculus, Spanish, physics, calculus
...I favor the Socratic Method of teaching, asking questions of the student to help him/her find her/his own way through the problem rather than telling what the next step is. This way the
student not only learns how to solve a specific proof, but ways to approach proofs that will work on problems ...
58 Subjects: including precalculus, reading, chemistry, calculus
...My approach is tailored specifically to the student, so no two programs are alike. My expertise allows me to quickly identify students' problem areas and most effectively address these in the
shortest amount of time possible. For the SAT, each student receives a 95-page spiral-bound book of str...
19 Subjects: including precalculus, calculus, statistics, geometry
...My experience includes tutoring math and reading to elementary students as well as junior high and high school students. One student who attended college needed assistance for her realtor
exam. Upon seeking my assistance, she was able to increase her understanding of how to calculate interest as well as other lessons.
21 Subjects: including precalculus, English, reading, algebra 1
I've had many math professors throughout undergrad and graduate school and I've found that, although they all know what they're talking about and are all very intelligent, what makes a math
teacher great is understanding that not everyone understands things the same way he/she does. Throughout my y...
19 Subjects: including precalculus, calculus, trigonometry, statistics
|
{"url":"http://www.purplemath.com/Passyunk_PA_Precalculus_tutors.php","timestamp":"2014-04-17T16:10:04Z","content_type":null,"content_length":"24492","record_id":"<urn:uuid:fb0fc9a4-1eab-45b1-87bf-bd52457c63ec>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Locally Discriminative Coclustering
Different from traditional one-sided clustering techniques, coclustering makes use of the duality between samples and features to partition them simultaneously. Most of the existing co-clustering
algorithms focus on modeling the relationship between samples and features, whereas the intersample and interfeature relationships are ignored. In this paper, we propose a novel coclustering
algorithm named Locally Discriminative Coclustering (LDCC) to explore the relationship between samples and features as well as the intersample and interfeature relationships. Specifically, the
sample-feature relationship is modeled by a bipartite graph between samples and features, and we apply local linear regression to discover the intrinsic discriminative structures of both sample
space and feature space. For each local patch in the sample and feature spaces, a local linear function is estimated to predict the labels of the points in this patch. The intersample and
interfeature relationships are thus captured by minimizing the fitting errors of all the local linear functions. In this way, LDCC groups strongly associated samples and features together, while
respecting the local structures of both sample and feature spaces. Our experimental results on several benchmark data sets have demonstrated the effectiveness of the proposed method.
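The abstract does not give enough detail to reproduce LDCC itself; as a hedged illustration of the sample-feature bipartite graph it builds on, the sketch below implements the classical bipartite spectral co-clustering of Dhillon (2001) with NumPy and scikit-learn. It is a baseline, not the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_cocluster(A, k, random_state=0):
    """Bipartite spectral co-clustering (Dhillon, 2001): rows are samples,
    columns are features, and A[i, j] >= 0 is the sample-feature affinity."""
    d1 = np.maximum(A.sum(axis=1), 1e-12)            # sample (row) degrees
    d2 = np.maximum(A.sum(axis=0), 1e-12)            # feature (column) degrees
    An = A / np.sqrt(d1)[:, None] / np.sqrt(d2)[None, :]
    U, _, Vt = np.linalg.svd(An, full_matrices=False)
    l = max(1, int(np.ceil(np.log2(k))))             # singular vectors used (2nd onwards)
    Z = np.vstack([U[:, 1:1 + l] / np.sqrt(d1)[:, None],
                   Vt.T[:, 1:1 + l] / np.sqrt(d2)[:, None]])
    labels = KMeans(n_clusters=k, n_init=10,
                    random_state=random_state).fit_predict(Z)
    return labels[:A.shape[0]], labels[A.shape[0]:]  # sample labels, feature labels
```

In this classical scheme both samples and features are embedded jointly via the singular vectors of the degree-normalized affinity matrix and then clustered together; LDCC additionally regularizes this with local linear regression over patches of the sample and feature spaces.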
Zhengguang Chen, Deng Cai, Jiawei Han, Jiajun Bu, Chun Chen, Lijun Zhang, "Locally Discriminative Coclustering," IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 6, pp. 1025-1035,
June 2012, doi:10.1109/TKDE.2011.71
|
{"url":"http://www.computer.org/csdl/trans/tk/2012/06/ttk2012061025-abs.html","timestamp":"2014-04-20T07:00:02Z","content_type":null,"content_length":"63643","record_id":"<urn:uuid:49abc250-e0f0-4d63-9457-034236872519>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fuzzy Based Multi Criteria Contingency Ranking for Voltage Stability Assessment
Electric Power Systems (EPS) have become very complex from the operation, control and stability maintenance standpoints. The voltage stability problem deserves special attention, since power systems
have been operating dangerously close to their stability limits. Voltage collapse and energy rationing occurrences have been reported worldwide.
Special care with transmission capacity expansion and the development of efficient operation techniques to best use the system's capabilities is crucial. The power industry restructuring process has introduced a number of factors that have increased the possible sources of system disturbances, leading to a less robust, more unpredictable system as far as the operation is concerned (Morison et al., 2004).
The lack of new transmission facilities, cutbacks in system maintenance, research force downsizing, unpredicted power flow patterns, just to name a few, are some of the important factors that affect the security of power systems. The mainstream philosophy of the restructured sector is to minimize investments, minimize costs and maximize the equipment's utilization. The regulatory agencies
usually define minimum voltage stability security margins for both normal operating conditions and contingency situations. Under normal operating conditions, the minimum margin should be a bit larger
depending on the demand. Figure 1 shows the idea of the Voltage Stability Margin (VSM) in a very simple way based on the well-known PV curve.
Fig. 1: Illustration of Voltage Stability Margin (VSM)
Consider the load flow equations given by:
λP^s − P^c(θ, V) = 0 and λQ^s − Q^c(θ, V) = 0 (1)
where superscripts s and c stand for scheduled (generation minus consumption) and calculated quantities. In this study a constant power factor model is used for the loads. Nonetheless, the inclusion of voltage dependent load models is straightforward. λ^bc and λ^max correspond, respectively, to the current and the maximum loading factors for normal operating conditions (base case).
Fig. 2: Impact of contingencies on VSM
VSM is computed from both loading levels, being a measure of the distance between them. For instance, VSM can be given by:
VSM = λ^max − λ^bc (2)
The occurrence of contingencies results in variation of the VSM as shown in Fig. 2. Contingency 1 results in a smaller VSM as compared to the base case condition. Contingency 2 presents a more severe
impact on VSM since its post-contingency maximum loading is less than the base case loading. This constitutes an infeasible contingency case. Of course this analysis is valid in case the load and
generation patterns and control settings remain unchanged after the contingency occurs. A well-accepted contingency analysis procedure is based on splitting the process into different stages. The first stage is usually referred to as contingency ranking: contingencies from a predefined list are analyzed and ranked by using a simple, computationally fast method. Then, the top-ranked (most severe) contingencies are selected and analyzed in a later stage, now using more complete, powerful and accurate methods. This stage is usually referred to as contingency analysis or evaluation. For instance, the voltage stability condition of the most severe contingencies may be evaluated by continuation methods (Ajjarapu and Christy, 1992). Finally, preventive and/or corrective control strategies must be obtained to deal with contingencies that result in insecure or emergency operating situations. The procedure just described has been discussed by Stott et al. (1987) for ranking and analyzing contingencies for MW flow overloads and voltage magnitude violations. This study tackles the contingency ranking problem, that is, the one of correctly ranking contingencies with regard to
their impacts on VSM. Contingencies with the smallest VSM are the most severe and ranked at the top of the list. The evaluation of the impact of contingencies on the VSM has been dealt with in
several research works found in the literature. A crucial point related to this problem is that a proposed ranking method must be efficient from the computational effort standpoint while keeping
acceptable accuracy. Also, it should be suited for use in many power system analysis process environments, from operation planning to real time operation. Finally an appropriate treatment for the
infeasible contingency cases is also proposed.
Many research works on contingency ranking for voltage stability found in the literature assume that the operating state at base case is known for both normal operation and the maximum loading
condition. The idea here is to propose a method for ranking contingencies based on information from the base case and the post-contingency operating states. Thus, only one load flow is required for
each contingency. Therefore, the method is intended to be computationally fast and suited for real time operation application where decisions must be taken within a small time frame. The ranking
method is based on the computation of performance indices for each contingency. In their turn, the performance indices are based on the post-contingency operating state. Some of these indices include
the computation of voltage stability indices which are in turn obtained from the equations of power flows through branches (transmission lines and transformers). These voltage stability indices are
multiplied by weight factors which are based on the outage branch pre-contingency apparent power flow and on the voltage (magnitude and phase angle) variations.
The high-order performance index methods have the drawback of longer computation times, and performance index methods also suffer from the masking phenomenon. Many attempts have been made to use fuzzy logic techniques in contingency analysis (Hsu and Kuo, 1992; Lo and Abdelall, 2000; Ozdemir and Singh, 2001), which are based on separate consideration of state variables such as line flows and voltage deviations. However, a rank list based on only one variable may give many mis-rankings. In this study, a fuzzy-logic-based combined rank list is developed considering the performance indices of each contingency for different single outage cases. They are combined with other performance indices based on the relationship between the branch current and the maximum apparent power flows.
Voltage stability index: Figure 3 shows a branch that connects buses i and j. The real and reactive power flows through branch i-j are:
Fig. 3: Branch i-j of a power system
where θ_ij^eq = θ_i − θ_j + φ_ij is the angle spread at branch i-j. In the case of a transmission line, a_ij = 1 and φ_ij = 0. For a transformer, b^sh_ij = 0 and φ_ij = 0. For pure phase shifters, b^sh_ij = 0 and a_ij = 1. For phase shifters, b^sh_ij = 0. From Eq. 3 and 4 one gets:
Equation 5 has real solutions when b^2 − 4ac ≥ 0. Finally, a voltage stability index is defined as follows:
VSI tends to zero as the load increases and the system approaches its voltage stability limit. Even though VSI was originally derived for a radial system, it can also be applied to meshed systems since it provides a good approximation of the system's voltage stability condition.
Performance indices: One of the main points discussed in this study is related to the definition of appropriate Performance Indices (PI) that reflect the actual post-contingency operating conditions
regarding voltage stability.
Five PIs have been chosen and were used in the proposed ranking method. The choice of using more than one PI is based on the following ideas. First, no voltage stability index alone is able to reflect the actual post-contingency VSM in an accurate way, due to the nonlinearities of the problem. All voltage stability indices, VSI included, present some degree of inaccuracy due to the simplified assumptions they are based on. Therefore, it is expected that a PI defined in terms of voltage stability indices would also carry some degree of inaccuracy. It is important that the voltage stability index be associated with other quantities to compensate for such inaccuracies and minimize errors. Secondly, it was also found that each definition of a PI favors the identification of certain severe contingencies.
One of the contributions of this study is to show that each one of the different PIs may be able to identify a number of severe contingencies and the union of the PIs may result in almost all severe contingencies identified. The definition of the PIs and their association with other quantities were based on exhaustive tests. In this study, the following five performance indices were defined:
Weighting factors α_i were added to the first three performance indices to improve their accuracy. VSI_min is the smallest post-contingency voltage stability index among the branches connected to the bus with the smallest base case voltage magnitude.
Weight α_1 is the base case apparent power through the outage branch km. Weight α_2 is the largest nodal phase angle variation from the base case to the contingency case. Weight α_3 is the largest nodal voltage magnitude variation from the base case to the contingency case. Note that the factors α_i are based on relevant system physical quantities and their variations are closely related to the voltage stability phenomenon. S_l^max is calculated by:
where branch l connects buses i and j and φ = ∠(V_i^2 / S_ij^*) (Albuquerque and Castro, 2003).
The contingencies causing line flow overloads may not necessarily cause bus voltage problems (Ozdemir and Singh, 2001) and vice versa; line flow problems and voltage limit violation problems must be dealt with separately by associated performance indices.
Fig. 4: Fuzzy surface of proposed model for IEEE-14 bus system
Fig. 5: Fuzzy surface of proposed model for practical indian 75 bus system
In the proposed approach, a new method is used to combine the five different rankings using fuzzy inference.
In this method the five rankings are taken as inputs to the fuzzy toolbox (available in MATLAB) and a fuzzy coefficient is generated as the output based on predefined if-then rules. The fuzzy model used in this approach is tested in the MATLAB 7.0 Fuzzy Toolbox.
Fuzzy Inference System (FIS) for combined performance index ranking: The FIS structure given below is tested in the MATLAB 7.0 Fuzzy Toolbox. Figure 4 shows the fuzzy surface of the proposed model for the IEEE-14 bus system.
Type: Mamdani, No. of inputs: 5, No. of outputs: 1, No. of rules: 7, AndMethod = min, OrMethod = max, ImpMethod = min, AggMethod = max, Defuzzification Method = centroid.
Figure 5 shows the fuzzy surface of the proposed model for the practical Indian 75-bus system. For the practical Indian 75-bus system: Type: Mamdani, No. of inputs: 5, No. of outputs: 1, No. of rules: 17, AndMethod = min, OrMethod = max, ImpMethod = min, AggMethod = max, Defuzzification Method = centroid.
Table 1: Fuzzy rules for IEEE-14 bus system
Fuzzy membership values: Fuzzy sets for the inputs to the combined contingency ranking are defined over the performance indices used for ranking. Here the membership functions for all the linguistic terms are taken as triangular functions. The values are shown in Table 1.
The developed fuzzy inference rule matrix for the IEEE-14 bus system consists of seven rules and is shown in Table 1. Figure 6 shows the membership functions of the performance indices for the IEEE-14 bus system.
Fuzzy IF-THEN rules: The output and input membership functions used to evaluate the severity of a post-contingency quantity are divided into three categories using fuzzy set notation: low, moderate and high (Fig. 7).
Based on these rules and the corresponding contingency ranking membership values, an area is selected during fuzzification. This area is then defuzzified, which gives the fuzzy coefficient. The following rules are implemented to obtain the fuzzy coefficient for the IEEE-14 bus system and the practical 75-bus Indian system (Tables 2 and 3).
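Tables 1-3 give the rule base only in linguistic form, so as an illustration of the inference mechanics just described, here is a minimal Mamdani sketch in Python (triangular memberships, min for AND, max for aggregation, centroid defuzzification). The breakpoints and the two rules below are invented for the example and are not the authors' tuned FIS, which uses 7 and 17 rules respectively.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_coefficient(pi_ranks, n_lines):
    """Combine five PI-based ranks (1 = most severe) into one coefficient.
    Membership breakpoints and the two rules are illustrative assumptions."""
    x = np.array(pi_ranks, dtype=float) / n_lines       # normalise ranks to [0, 1]
    low = tri(x, -0.5, 0.0, 0.5)                        # near the top of a rank list
    high = tri(x, 0.5, 1.0, 1.5)                        # near the bottom of a rank list
    # Mamdani rules (AND = min, aggregation = max):
    #   R1: if all five ranks are LOW  -> coefficient SMALL (severe)
    #   R2: if all five ranks are HIGH -> coefficient LARGE (mild)
    w_small = np.min(low)
    w_large = np.min(high)
    y = np.linspace(0.0, 1.0, 201)                      # output universe of discourse
    agg = np.maximum(np.minimum(w_small, tri(y, -0.5, 0.0, 0.5)),
                     np.minimum(w_large, tri(y, 0.5, 1.0, 1.5)))
    if agg.sum() == 0.0:                                # no rule fired
        return 0.5
    return float((y * agg).sum() / agg.sum())           # centroid defuzzification
```

A lower returned coefficient corresponds to a more severe contingency, matching the convention used for Tables 8 and 11.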
The multi-criteria based contingency rank is prepared using the following flow chart. The additional performance indices of the entire system can be ranked to generate rank lists.
The additional multi-criteria based ranking approach generally gives an idea about contingency planning in the deregulated environment. The flow chart shown in Fig. 8 shows the steps involved in preparing a contingency ranking list.
Fig. 6: Membership functions of the performance indices
Fig. 7: Membership functions of the performance indices for practical indian 75 bus system
Table 2: Fuzzy membership values for practical Indian 75 bus system
Table 3: Fuzzy rules for practical Indian 75 bus system
Fig. 8: Flow chart for fuzzy based multi criteria contingency ranking
The proposed fuzzy inference system is tested on the IEEE-14 bus system and on a practical Indian 75-bus system. Obtain the post-contingency VSM for each contingency of the list by using some known method, for example the continuation method, or by successive load flow computations for gradually increasing load until load flow solutions are no longer found.
Table 4: Contingency ranking by loading parameter (λ) for IEEE 14 bus system
Table 5: Fuzzy membership values for IEEE-14 bus system
The latter is of course very time consuming; however, its results are acceptable from a practical point of view, and they were used as a reference in this study. Rank contingencies according to their VSM computed using continuation power flow, obtaining list N. Perform the multi-criteria contingency ranking and obtain five ordered lists corresponding to the five PIs. Obtain list P, which corresponds to the union of the five ordered lists (five PIs).
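A minimal sketch of the successive-load-flow estimate of the maximum loading factor described above is given below; run_load_flow is a hypothetical stand-in for a full AC load flow solver that returns True when it converges at loading factor lam, and it is not part of the paper.

```python
def estimate_vsm(run_load_flow, lam_base=1.0, step=0.01, tol=1e-4):
    """Estimate VSM = lam_max - lam_base by stepping the loading factor up
    until the load flow stops converging, then bisecting the last interval.
    run_load_flow(lam) is a user-supplied (hypothetical) solver."""
    if not run_load_flow(lam_base):
        return None                      # infeasible post-contingency case (lam_max < lam_base)
    lam = lam_base
    while run_load_flow(lam + step):     # coarse march towards the nose point
        lam += step
    lo, hi = lam, lam + step             # last convergent / first divergent loading
    while hi - lo > tol:                 # refine the bracket by bisection
        mid = 0.5 * (lo + hi)
        if run_load_flow(mid):
            lo = mid
        else:
            hi = mid
    return lo - lam_base                 # VSM as the loading margin
```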
IEEE-14 bus system: For the sake of illustration, a detailed description of a simulation for the IEEE 14-bus, 20-branch system is shown. Table 4 shows the ranking of the ten most severe single
contingencies. They were ranked according to the respective value of the loading factor as shown in Table 5. Note that MVAR limits at generation units have been enforced in all simulations shown
here. The outage of branch 1 (connecting buses 1 and 2) results in an infeasible operating state and λ_max < 1 (negative VSM). On the other hand, contingencies may also result in a small impact on the system's maximum loading. In fact, this is the usual case for realistic systems.
Table 4 shows the ranking of the twenty single contingencies. They were ranked according to the respective value of the loading parameter. Line 1 (1-2) has the lowest value of λ_max, which is why it is ranked No. 1; this means the outage of line 1 is the most severe.
Table 6: Performance indices of IEEE 14 bus system for single line outage
Table 7: Contingency ranking for IEEE-14 bus system based on PI values
Table 6 shows the performance indices and the VSI_min value for each contingency. The outages of lines 1 and 14 give negative values of VSI, which indicates infeasible cases; therefore their VSI values and the respective PI values are set to 0. Table 7 shows the contingency ranking based on the performance indices. The line outage for which the PI value is lowest (for the first three PIs) is ranked 1, i.e., most severe, whereas for the last two the line with the highest PI value is ranked 1 as most severe. Line 1 is ranked No. 1 in all PI rankings, which shows that the outage of line 1 is the most severe.
Using the fuzzy approach, a fuzzy coefficient is generated by combining the five contingency rankings, as shown in Table 8. The line outage whose fuzzy coefficient is lowest is ranked highest. Here also line 1 is ranked as the most severe contingency, which shows the effectiveness of this method.
Table 8: Fuzzy based contingency ranking for IEEE 14 bus system
Table 9: Contingency ranking by loading parameter (λ) for practical Indian 75 bus system
Table 10: Contingency ranking for Indian utility 75 bus system based on PI values. Figures in parentheses show the line connection, i.e., from one bus to the other.
Table 11: Fuzzy based contingency ranking for Indian utility 75 bus system
Practical 75 bus Indian system: The practical Indian 75-bus system has 15 generators including one slack bus, 60 load buses and 24 transformer lines. Table 9 shows the ranking of the single contingencies. They were ranked according to the respective value of the loading parameter. Line 1 (19-20) has the lowest value of λ_max, which is why it is ranked No. 1; this means the outage of line 1 is the most severe.
Table 10 shows the contingency ranking based on the performance indices. The line outage for which the PI value is lowest (for the first three PIs) is ranked 1, i.e., most severe, whereas for the last two the line with the highest PI value is ranked 1 as most severe.
Using the fuzzy approach, a fuzzy coefficient is generated by combining the five contingency rankings, as shown in Table 11. The line outage whose fuzzy coefficient is lowest is ranked highest. Here line No. 64 has the minimum fuzzy coefficient, which shows that it is the most severe contingency of this system.
The contingency ranking method for voltage stability applied in this project shows great potential to be used as a tool for real-time operation. This project demonstrated that the various performance indices could not reliably capture all the unstable cases individually.
No single index can rank the severity of contingencies for different systems under different conditions, but the combination of indices can give an overall evaluation from different aspects of the system.
Results on the two test systems showed that the combination of indices (CI) with the use of fuzzy inference provides a better ranking of the worst cases.
As evidenced by the results for the IEEE 14-bus system and the practical Indian 75-bus system, the approach used can provide the user with those outages that may cause immediate loss of load or islanding at a certain bus. This kind of information is very helpful to system operators.
An overall severity index is given for each outage case. These severity indices can be used as a guideline for deciding whether corrective control actions should be taken.
The researchers would like to thank the management and faculty of VIT University for their kind support and encouragement throughout this research.
|
{"url":"http://www.medwelljournals.com/fulltext/?doi=ijscomp.2010.185.193","timestamp":"2014-04-20T08:15:31Z","content_type":null,"content_length":"56033","record_id":"<urn:uuid:16566b01-b808-4cfb-9150-b3d44b812c8f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
|
linear algebra with transformations and eigenvectors
Please help me solve the problem (see the attachment); a step-by-step solution would help me understand it. Thanks.
Apply T to the vectors of B and write the result as a linear combination of the vectors in C, and then take the transpose of the coefficients matrix. For example: $T\left(\begin{array}{c}3\\2\\3\end{array}\right)=\left(\begin{array}{c}3\\2\end{array}\right)=\underline{\underline{1}}\left(\begin{array}{c}1\\0\end{array}\right)+\underline{\underline{2}}\left(\begin{array}{c}1\\1\end{array}\right)$ $\Longrightarrow$ the first COLUMN (from the left) of the wanted matrix is $\left(\begin{array}{c}1\\2\end{array}\right)$. Now you do something similar with the second element of B and find the second column of the wanted matrix for T. Tonio
|
{"url":"http://mathhelpforum.com/advanced-algebra/114345-linear-algebra-transformations-eigenvectors.html","timestamp":"2014-04-21T07:35:56Z","content_type":null,"content_length":"35435","record_id":"<urn:uuid:f49569a1-c224-4ac5-8c58-6230f0bbe6fe>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cauchy Riemann Equations
January 2nd 2009, 07:33 PM #1
Oct 2008
Cauchy Riemann Equations
In deriving the necessary conditions for a function $f(z) = u(x,y) + iv(x,y)$ to be differentiable at a point $z_0$, why do we get $f'(z_0) = v_{y}(x_{0}, y_{0}) - iu_{y}(x_{0}, y_{0})$? Wouldn't
it be $f'(z_0) = u_{y}(x_{0}, y_{0}) + iv_{y}(x_0, y_0)$?
Because the Cauchy Riemann equations are:
$\color{red}\frac{\partial{u}}{\partial{x}} = \frac{\partial{v}}{\partial{y}}$
$\color{red}\frac{\partial{v}}{\partial{x}} = -\frac{\partial{u}}{\partial{y}}$
$i\frac{\partial{f}}{\partial{x}} = \frac{\partial{f}}{\partial{y}}$
$\frac{\partial{f}}{\partial{\bar{z}}} = 0$ where $\bar{z}$ is the complex conjugate of z = x + iy.
Where u and v are the real and imaginary parts of:
$f(x + iy) = u(x,y) + iv(x,y)$
Last edited by Aryth; January 3rd 2009 at 06:34 AM.
Because the Cauchy Riemann equations are:
$\color{red}\frac{\partial{u}}{\partial{x}} = \frac{\partial{v}}{\partial{y}}$
$\color{red}\frac{\partial{v}}{\partial{x}} = \frac{\partial{u}}{\partial{y}}$
$i\frac{\partial{f}}{\partial{x}} = \frac{\partial{f}}{\partial{y}}$
$\frac{\partial{f}}{\partial{\bar{z}}} = 0$ where $\bar{z}$ is the complex conjugate of z = x + iy.
Where u and v are the real and imaginary parts of:
$f(x + iy) = u(x,y) + iv(x,y)$
No, the second one is $\frac{\partial v}{\partial x}={\color{red}-} \frac{\partial u}{\partial y}$
You're right... I edited it.
I'm surprised that advertising like this would be allowed.
Fundamentally, the answer is the "chain rule". Since $z = x + iy$, $\frac{dv}{dz}= \frac{\partial v}{\partial y}\frac{\partial y}{\partial z}$ and $\frac{\partial y}{\partial z}= \frac{1}{\frac{\partial z}{\partial y}}= \frac{1}{i}= -i$
Crucial point: 1/i= -i.
Another way of looking at the Cauchy-Riemann equations is this: With f(z)= f(x+ iy)= u(x,y)+ iv(x,y), the derivative, at $z= z_0$, is given by the limit $\lim_{h\rightarrow 0} \frac{f(z_0+h)- f(z_0)}{h}$ just as for real numbers. And, just like limits for real numbers, in order that the limit exist, we must get the same result approaching $z_0$ along any path. In particular, if we
approach $z_0$ along a line parallel to the real axis, h is real so we have
$\lim_{h\rightarrow 0}\frac{u(x_0+h,y_0)+ iv(x_0+ h,y_0)- u(x_0,y_0)- iv(x_0,y_0)}{h}$
$\lim_{h\rightarrow 0}\frac{u(x_0+h,y_0)- u(x_0,y_0)}{h}+ \lim_{h\rightarrow 0}\frac{iv(x_0+h,y_0)- iv(x_0,y_0)}{h}$
$= \frac{\partial u}{\partial x}+ i\frac{\partial v}{\partial x}$
Approaching instead along a line parallel to the imaginary axis, h is imaginary and we can use "ih", with h real, instead. Now we have
$\lim_{h\rightarrow 0}\frac{u(x_0,y_0+ h)+ iv(x_0,y_0+h)- u(x_0,y_0)- iv(x_0,y_0)}{ih}$
$\lim_{h\rightarrow 0}\frac{u(x_0, y_0+ h)- u(x_0, y_0)}{ih}+ \lim_{h\rightarrow 0}\frac{iv(x_0,y_0+ h)- iv(x_0,y_0)}{ih}$
and the important difference is that "i" in the denominator. It will cancel "i" in the v part and remember that 1/i= -i so
$= -i\lim_{h\rightarrow 0}\frac{u(x_0,y_0+h)- u(x_0,y_0)}{h}+ \lim_{h\rightarrow 0}\frac{v(x_0,y_0+h)- v(x_0,y_0)}{h}$
$= -i\frac{\partial u}{\partial y}+ \frac{\partial v}{\partial y}$
In order that the limit exist, those two must be the same. Equating real and imaginary parts,
$\frac{\partial u}{\partial x}= \frac{\partial v}{\partial y}$ and
$\frac{\partial u}{\partial y}= -\frac{\partial v}{\partial x}$
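As a quick sanity check (added here, not part of the original derivation): for $f(z)=z^2=(x^2-y^2)+i(2xy)$ we have $u_x=2x=v_y$ and $u_y=-2y=-v_x$, so both equations hold at every point, as expected for an entire function.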
|
{"url":"http://mathhelpforum.com/calculus/66628-cauchy-riemann-equations.html","timestamp":"2014-04-16T17:35:09Z","content_type":null,"content_length":"61338","record_id":"<urn:uuid:bc3c212b-e168-4449-b20f-85c790a0da28>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vector on slope
Posts: 196
Joined: 2002.04
I have a vector like 1.0, 0.0, 0.0 . I want to find a new vector that follows the slope of a plane that is not parallel to the old vector. How do I change the vector to the angle of the slope? Also
how can I dampen or increase the vector's speed at different angles?
Posts: 1,403
Joined: 2005.07
say you have this plane
ax + by + cz = d
you know the normal of the plane (vector perpendicular to it) is (a, b, c), so to find a vector that lies on the plane pick any vector not parallel to (a, b, c) and find the cross product of it and the
plane's normal.
A vector doesn't have speed, but you could possibly increase the magnitude, i.e. (a, b, c) -> (k*a, k*b, k*c) would make it k times 'faster'.
Sir, e^iπ + 1 = 0, hence God exists; reply!
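Rather than the cross-product route above, another common option is to project the desired direction onto the plane. A minimal sketch, with the struct and function names invented for illustration (it assumes the normal n is non-zero and ignores the degenerate case where d is parallel to n):

#include <cmath>

struct Vec3 { float x, y, z; };

// Remove d's component along the plane normal n, then normalise.
// The result is a unit vector lying in the plane, as close to d as possible.
Vec3 directionAlongPlane(Vec3 d, Vec3 n) {
    float nLen2 = n.x*n.x + n.y*n.y + n.z*n.z;          // |n|^2
    float dot   = d.x*n.x + d.y*n.y + d.z*n.z;          // d . n
    Vec3 p = { d.x - dot*n.x/nLen2,
               d.y - dot*n.y/nLen2,
               d.z - dot*n.z/nLen2 };                    // d projected onto the plane
    float len = std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z);
    if (len > 0.0f) { p.x /= len; p.y /= len; p.z /= len; }
    return p;
}

Scaling the returned unit vector by some factor k then gives the slower or faster movement along the slope that the original question asks about.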
Posts: 196
Joined: 2002.04
Ok I can't get anything to work now. I'm using the Ax + By + Cz + D = S to find my distance to the plane (I found the formula here
. The only problem is that the collision plane is in a different area than the actual plane that I'm drawing. Why is there a shift?
Here's my code:
To move the point into the plane just use the w,s,a,and d keys and you can rotate everything with the t,g,f,h keys
I realized that it looked like the plane was being shifted in the opposite direction so I used this formula: Ax + By + Cz -D = S and it worked. Now I'm wondering why is Paul's formula wrong? I still
can't get point to follow the slope. Instead it flys off in a different direction. Can you add it to my collision code so I can see what you're talking about? I want the point to follow the slope
like this mario game:
|
{"url":"http://idevgames.com/forums/thread-4273-post-33249.html","timestamp":"2014-04-19T10:02:42Z","content_type":null,"content_length":"17051","record_id":"<urn:uuid:811322b4-b98f-465f-bc51-578c0ac84db4>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Detecting Collisions of Two Cubes
I'm trying to figure out how to detect collisions between two cubes in OpenGL. This is my code so far which doesn't work at all:
void CCharacter::CheckCollision(CActiveObject *object)
int w = 0;
if ( ( ( GetX() + GetW() ) > ( object->GetX() - object->GetW() ) ) &&
( ( GetX() - GetW() ) < ( object->GetX() + object->GetW() ) ) &&
( ( GetY() + GetH() ) > ( object->GetY() - object->GetH() ) ) &&
( ( GetY() - GetH() ) < ( object->GetY() + object->GetH() ) ) &&
( ( GetZ() + GetD() ) > ( object->GetZ() - object->GetD() ) ) &&
( ( GetZ() - GetD() ) < ( object->GetZ() + object->GetD() ) ) )
if ( GetX() < object->GetX() ) { w += LEFT; }
if ( GetX() > object->GetX() ) { w += RIGHT; }
if ( GetY() < object->GetY() ) { w += BOTTOM; }
if ( GetY() > object->GetY() ) { w += TOP; }
if ( GetZ() < object->GetZ() ) { w += BACK; }
if ( GetZ() > object->GetZ() ) { w += FRONT; }
if ( object->GetType() == BUILDING_TYPE ) { CollideBuilding(object,w); }
if ( object->GetType() == LADDER_TYPE ) { CollideLadder(object,w); }
if ( object->GetType() != LADDER_TYPE ) { mOnLadder = false; }
All the code should be fairly self explanatory. GetX, GetY, and GetZ return the coordinates of the CENTER of the cube. GetW, GetH, and GetD return the dimensions as the distance from the center to a
side, not from one side to the opposite. LEFT, RIGHT, BOTTOM, etc are all enumerated constants. GetType() returns a defined constant telling this code which more specific code to run.
If you can help me out, I would appreciate it.
Posts: 157
Joined: 2002.12
not an easy answer for this one.
if the cubes do not rotate, all you have to do is check that the cubes' extents overlap in (x, y, z).
if they do rotate it becomes really tricky and involves a lot of math.
Did you try google for "box collision detection"?
Posts: 177
Joined: 2002.08
There should be 3 separate tests:
-do they overlap in X?
-do they overlap in Y?
-do they overlap in Z?
If all of them are true, then they hit.
Also, each test is going to have to consist of 2 separate checks, since you have to check each cube against the other.
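For reference, the three overlap tests collapse into a few lines when each box is described by its center and half-size per axis (the same convention as GetX()/GetW() in the code above). This is a generic sketch with invented type and function names, not a drop-in replacement for the CheckCollision method, and it ignores rotation and fast-moving objects:

#include <cmath>

struct Box { float cx, cy, cz;    // center
             float hw, hh, hd; }; // half width / height / depth

bool overlaps(const Box& a, const Box& b) {
    // Boxes overlap exactly when they overlap on every axis.
    return std::fabs(a.cx - b.cx) <= (a.hw + b.hw) &&
           std::fabs(a.cy - b.cy) <= (a.hh + b.hh) &&
           std::fabs(a.cz - b.cz) <= (a.hd + b.hd);
}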
Posts: 157
Joined: 2002.12
if you will have lots of boxes to check.you could do a pre-process to find box pairs to do the collision test.
If you have 3 boxes [A,B,C] you would end up with an array of pairs that would look like this.
pairs [ [A,B], [A,C], [B,C] ]
Posts: 509
Joined: 2002.05
Mark Levin Wrote:There should be 3 separate tests:
-do they overlap in X?
-do they overlap in Y?
-do they overlap in Z?
If all of them are true, then they hit.
Also, each test is going to have to consist of 2 separate checks, since you have to check each cube against the other.
Not always...
/ \
\ /
\/ /\
/ \
\ /
Overlapping but not colliding
Posts: 916
Joined: 2002.10
Jake Wrote:Not always...
/ \
\ /
\/ /\
/ \
\ /
Overlapping but not colliding
I thought he was using axis aligned bounding boxes
Posts: 1,560
Joined: 2003.10
Keeping everything axis-aligned will make your life much easier. As Mark said, if the X, Y, and Z all overlap, there's a collision. However, if you have very small collision boxes and/or very
fast-moving objects, this system will fail from time to time if one object moves far enough in one framestep to go all the way past something it was supposed to collide with.
I account for this in Water Tower 3D by incorporating 3 bounding boxes into my collision tests: The player's current bounding box, the player's new intended bounding box (with velocity added), and
the bounding box of the object to collide with. For each edge on the collision object {left, right, bottom, top, back, front}, I check to see if the opposing edge on the player's current bounding box
{right, left, top, bottom, front, back} is on one side of the edge, and the opposing edge on the new intended bounding box is on the other side of that edge. If both of those are true, I know a
collision has occurred. For subsequent collision tests in the same frame, the new intended bounding box gets adjusted.
This system isn't perfect, but it seems to do that job pretty well in my case. There are a few gotchas I should tell you about if you decide to go this route. Let me know if you want me to elaborate;
I'm at work at the moment, but I can post a more detailed message after I get home.
Alex Diener
cubes are easy. rectangular prisms are not.
D = the distance between the box centers.
W = (widthA+widthB)/2.
if (D < W) then collide.
else if (D > sqrt(3)*W) then not collided.
else if (any point in boxA is inside boxB or any point of boxB is inside boxA) then collide.
15.4" MacBook Pro revA
Kelvin, can you explain why W = (widthA+widthB)/2 and why if (D > sqrt(3)*W) there is no collision? I also assume this will not work will rectangular objects. Am I correct?
SimReality/Nick Wrote:Kelvin, can you explain why W = (widthA+widthB)/2 and why if (D > sqrt(3)*W) there is no collision? I also assume this will not work will rectangular objects. Am I correct?
W is just precalculation on my part. It just happens that the distance formula simplifies down and that's one of the terms. sqrt(3)*W = the distance from the center of cubeA to a corner + the
distance from the center of cubeB to a corner. So, basically this tests if the distance is greater than the nearest distance required for two corners to touch.
Rectangles can be done in a similar fashion, but the math doesn't simplify down nearly as cleanly, and there are a lot of edge cases that you need to account for after the last check for the cubes.
If you want to do irregular shaped objects, I'd suggest using Capsules, Cylinders, and Spheres for collision checking as it will probably be the easiest to represent and implement.
15.4" MacBook Pro revA
Thanks for that tip. I'll see if I can fit it in to my code and get it to work. Unfortunately most of my objects come out as rectangles so I won't be able to use this too much.
to do the corner check for rectangles, it's 0.5*(sqrt(wA^2+hA^2+dA^2)+sqrt(wB^2+hB^2+dB^2)). Which takes quite a bit longer since you now have 2 square roots and several multiplies where as you only
have a multiply by a constant sqrt(3) for cubes which can be calculated ahead of time.
15.4" MacBook Pro revA
|
{"url":"http://idevgames.com/forums/thread-6325-post-15216.html","timestamp":"2014-04-18T18:37:21Z","content_type":null,"content_length":"41284","record_id":"<urn:uuid:33d507ed-e6ad-4d9f-80b3-8efed07405af>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
|
guess the combination number!
Re: guess the combination number!
1) Iowa
2) Alabama
3) Texas
Edit: Of course London and Paris are also in Texas!
Last edited by noelevans (2012-12-18 16:32:44)
Writing "pretty" math (two dimensional) is easier to read and grasp than LaTex (one dimensional).
LaTex is like painting on many strips of paper and then stacking them to see what picture they make.
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=244332","timestamp":"2014-04-19T17:29:08Z","content_type":null,"content_length":"11849","record_id":"<urn:uuid:973805b3-a4d5-43e6-874f-ee1bd174d4d2>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hartshorne's proof of the birational invariance of the geometric genus
up vote 2 down vote favorite
I am confused about a couple of steps in the proof of the birational invariance of the geometric genus (Theorem II.8.19 in Hartshorne's Algebraic Geometry).
I shall sketch the proof and highlight my doubts.
Let $X,X'$ be two birationally equivalent nonsingular projective varieties over a field k. Hence there is a birational map $X\dashrightarrow X'$ represented by a morphism $f:V\rightarrow X'$ for some largest open
subset $V\subset X$.
Along taxonomic lines, the proof goes like this:
1. We first prove that $f$ induces an injective map $f^{\ast}:\Gamma(X',\omega_{X'})\rightarrow \Gamma(V,\omega_V)$
2. Then we prove that the restriction map $\Gamma(X,\omega_X)\rightarrow \Gamma(V,\omega_V)$ is bijective, using the valuative criterion of properness.
From this it follows that $\rho_g(X')\leq \rho_g(X)$, and the reverse inequality follows by symmetry.
In the proof of step 1: the map $f$ induces an isomorphism $U\cong f(U)$ for some open subset $U\subset V\subset X$ and then Hartshorne claims that this implies that f induces an isomorphism $\omega_
{V|U}\cong \omega_{X'|f(U)}$. Why is that?
In the proof of step 2: from the valuative criterion of properness it follows that $\textrm{codim }(X\setminus V,X)\geq 2$. In order to prove that $\Gamma(X,\omega_X)\rightarrow \Gamma(V,\omega_V)$
is bijective it suffices to prove it on open sets $U\subset X$ trivializing the canonical sheaf $\omega_{X|U}\cong \mathcal{O}$, namely that $\Gamma(U,\mathcal{O}_U)\rightarrow \Gamma(U\cap V,\mathcal{O}_{U\cap V})$ is bijective.
Since $X$ is nonsingular, from the first remark in the previous paragraph we have that $\textrm{codim }(U\setminus U\cap V,U)\geq 2$ and then Hartshorne claims that the result (bijectivity) follows
immediately from the fact that for an integrally closed Noetherian domain $A$, we have $A=\bigcap_{\textrm{ht } \mathfrak{p}=1} A_{\mathfrak{p}}$. I do not see this either.
Thanks in advance for any insight.
Regarding your item 1, the point is that $\omega_V|_U= \omega_U$ and that the canonical sheaf is an isomorphism invariant. Item 2 has to do with what is sometimes viewed as the algebraist's
Hartogs theorem: A function defined on the complement of codim $\ge 2$ set of a normal variety extends. This can be reduced to the affine case, where it amounts to the equality $A=\cap A_p$ (over
ht 1 primes). I may expand this later, if no one else gives you a more detailed answer. – Donu Arapura Mar 15 '12 at 13:07
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged ag.algebraic-geometry or ask your own question.
|
{"url":"http://mathoverflow.net/questions/91256/hartshornes-proof-of-the-birational-invariance-of-the-geometric-genus","timestamp":"2014-04-24T04:00:37Z","content_type":null,"content_length":"48608","record_id":"<urn:uuid:47bc8a67-b734-47f7-bacb-9bee01ea976c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
|
a challenge question - who dares?
September 13th 2009, 08:31 AM
a challenge question - who dares?
My first every post (Hi), I have a quick one... here it comes...(Smirk)
if we divide 5000 over 1250 we will get the whole integer number "4". (Talking)
if we multiply 5000 by 3.68 we will get the whole integer number "18400". (Surprised)
Now, if we divide 15000 over 1250 we will get the whole integer number "12"
then when we multiply 15000 by 3.68 we will definitely get a whole integer number, in this case 55200
Now the question is: what is the relation between the multiplied number (in our case 3.68) and the divided by number (in our case 1250) in which if we change the first number we change the second
number and the result is always an integer whole number in both cases.
does it make sense? (Sleepy)
|
{"url":"http://mathhelpforum.com/algebra/102046-challenge-question-who-dares-print.html","timestamp":"2014-04-18T11:08:44Z","content_type":null,"content_length":"3941","record_id":"<urn:uuid:ab74133e-e4d1-4ea7-96a3-0c36c7ba6074>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
|
LaGrange Multiplier Problem
May 26th 2010, 04:55 PM
LaGrange Multiplier Problem
I'm trying to find the minimum and maximum values of
$f(x,y,z) = 4x - \tfrac{1}{2}y + \tfrac{27}{2}z$
on the surface of $g=x^4+y^4+z^4=1$
I have already put $\bigtriangledown f = \lambda \bigtriangledown g$
Getting the following 4 equations... Where do I go from here?
$4\lambda x^3=4$
$4\lambda y^3=-1/2$
$4\lambda z^3=27/2$
May 26th 2010, 05:15 PM
May 26th 2010, 05:19 PM
I know I need to solve...
There are 4 unknowns (counting λ) but variables x, y, z are raised to the power of 4.
I don't know how to solve this system... Any help?
May 26th 2010, 05:33 PM
$4\lambda x^3=4\implies x^3= \frac{4}{4\lambda }\implies x^3= \frac{1}{\lambda }\implies x= \frac{1}{\sqrt[3]{\lambda}}$
Do this for the next 2 equations
Then $x^4+y^4+z^4=1$ becomes $\left( \frac{1}{\sqrt[3]{\lambda}}\right)^4+y^4+z^4=1$
Your work will replace $y$ and $z$ with a function of $\lambda$
You will then have a solution for $\lambda$ and use this to find $x,y,z$
May 27th 2010, 02:17 AM
I'm trying to find the minimum and maximum values of
$f(x,y,z) = 4x - \tfrac{1}{2}y + \tfrac{27}{2}z$
on the surface of $g=x^4+y^4+z^4=1$
I have already put $\bigtriangledown f = \lambda \bigtriangledown g$
Getting the following 4 equations... Where do I go from here?
$4\lambda x^3=4$
$4\lambda y^3=-1/2$
$4\lambda z^3=27/2$
Since the value of $\lambda$ is not part of the solution, I often find it best to eliminate $\lambda$ first by dividing one equation by another.
Dividing the first equation by the second gives $\frac{4\lambda x^3}{4\lambda y^3}= \frac{4}{-1/2}$ or $\frac{x^3}{y^3}= -8$ so $x= -2y$.
Dividing the third equation by the second gives $\frac{4\lambda z^3}{4\lambda y^3}= \frac{27/2}{-1/2}= \frac{z^3}{y^3}= -27$ so $z= -3y$.
Putting x= -2y and z= -3y into the last equation gives a single equation for y.
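Carrying that substitution through (a worked finish added here, not part of the original reply): $16y^4+y^4+81y^4=98y^4=1$, so $y=\pm 98^{-1/4}$. Taking $y=-98^{-1/4}$ gives $x=2\cdot 98^{-1/4}$ and $z=3\cdot 98^{-1/4}$, and then $f=\left(8+\tfrac{1}{2}+\tfrac{81}{2}\right)98^{-1/4}=49\cdot 98^{-1/4}\approx 15.6$ is the maximum; the opposite signs give the minimum $\approx -15.6$.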
|
{"url":"http://mathhelpforum.com/calculus/146575-lagrange-multiplier-problem-print.html","timestamp":"2014-04-19T01:56:49Z","content_type":null,"content_length":"13087","record_id":"<urn:uuid:64f963b3-b7e0-48b0-b233-e34c284f5600>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
|
JAFO: <<<IOW, benefits are (and have been tipped) in favor of low wage earners.>>>
GH: {{{The "tip" is quite modest,}}}
"just curious ...and numbers on how much of a 'tip'?"
Almost 3:1 in favor of bottom from top to bottom and 2:1 in favor of bottom from middle to bottom and 1.5:1 in favor of middle from top to middle, all based upon percentage of income replaced.
From my earlier post:
"Social Security is expected to replace about 40 percent of pre-retirement earnings of average earners; 80 percent for the lowest earners; and 27 percent for those at the maximum taxable wage base of
$80,400, [this from some years ago] according to the Social Security Administration." [but relative percentages have not changed since then]
"The benefit formula is a three step formula based on AIME (Average Indexed Monthly Earnings). The annual earnings on which this average earnings figure is based have the same caps as were used on
the tax side. The formula for the benefit is then 90% of the first $x of AIME plus 32% of the next $y of AIME plus 15% of the balance of AIME. x and y are indexed yearly."
http://ssa.gov/pubs/10070.html [Looks like 2010 numbers to me]
"Step 5:
Multiply the first $761 in Step 4 by 90%. . $__________________
Multiply the amount in Step 4 over $761 and less than or equal to $4,586 by 32%. . . . . . . . . . . . . . . .$__________________
Multiply the amount in Step 4 over $4,586 by 15%. $__________________ "
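A hedged translation of that three-bracket formula into code; the bend points below are the year-specific $761 / $4,586 figures quoted above, so treat them as illustrative only since they change every year:

#include <algorithm>

// Sketch of the quoted three-step benefit formula (monthly benefit from AIME).
double monthlyBenefit(double aime) {
    const double bend1 = 761.0, bend2 = 4586.0;           // quoted bend points
    double benefit = 0.90 * std::min(aime, bend1);         // 90% bracket
    if (aime > bend1) benefit += 0.32 * (std::min(aime, bend2) - bend1); // 32%
    if (aime > bend2) benefit += 0.15 * (aime - bend2);    // 15% bracket
    return benefit;
}
// e.g. monthlyBenefit(4583.33) comes out to about 1908, matching the
// figure worked out below.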
AIME of $4,586 equates to annual earned income of $55k+ a little
And maximum AIME was $8,900 (monthly equivalent of $106,800 annual cap)
If I did the math right, if monthly AIME were 4,583.33 ($55k equivalent), benefits would be $1908.05/month, whereas if monthly AIME were $8,900 ($106.8k equivalent) benefits would be $2,547.15.
The latter person paid almost twice as much SS tax (1.94x or 94% more) but receives only 33.9% more benefits (647.10/1908.05). 94/33.9 is a 2.77x tip.
That might be "modest" in GH's world, but I do not consider it a "modest tip".
Regards, JAFO
|
{"url":"http://boards.fool.com/MessagePrint.aspx?mid=29028439","timestamp":"2014-04-16T17:08:58Z","content_type":null,"content_length":"6763","record_id":"<urn:uuid:a6c629a7-e8a3-4ab5-81c0-4757da2542da>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gradient of each side of a quadrilateral
September 17th 2011, 11:43 AM #1
Super Member
Oct 2008
Bristol, England
Gradient of each side of a quadrilateral
I need to find the gradient of each side of of a quadrilateral which has vertices at the points A(-7,0), B(2,3), C(5,0) and D(-1,-6).
I have drawn the quadrilateral but I don't know how I would find the gradient of each side of the quadrilateral. This is where I require some help please!
Re: Gradient of each side of a quadrilateral
I need to find the gradient of each side of of a quadrilateral which has vertices at the points A(-7,0), B(2,3), C(5,0) and D(-1,-6).
I have drawn the quadrilateral but I don't know how I would find the gradient of each side of the quadrilateral. This is where I require some help please!
$m = \frac{\Delta y}{\Delta x} = \frac{y_2-y_1}{x_2-x_1}$
... use the slope formula above for each side.
Re: Gradient of each side of a quadrilateral
So would the gradient of side AB = (3 - 0)/(2 - (-7)) = 3/9?
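For reference, applying the formula to all four sides (a worked check added here, not from the original thread): $m_{AB}=\frac{3-0}{2-(-7)}=\frac{3}{9}=\frac{1}{3}$, so yes; $m_{BC}=\frac{0-3}{5-2}=-1$; $m_{CD}=\frac{-6-0}{-1-5}=1$; and $m_{DA}=\frac{0-(-6)}{-7-(-1)}=-1$. Note that BC and DA have equal gradients, so those two sides are parallel.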
Re: Gradient of each side of a quadrilateral
|
{"url":"http://mathhelpforum.com/algebra/188196-gradient-each-side-quadrilateral.html","timestamp":"2014-04-16T11:36:25Z","content_type":null,"content_length":"40754","record_id":"<urn:uuid:cd7fad6b-2d6b-4e07-b034-811c7325e45e>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ventivish (Senior Manager; Joined: 20 Feb 2008; Location: Bangalore, India; Schools: R1: Cornell, Yale, NYU; R2: Haas, MIT, Ross) wrote:
A rectangle is inscribed in a circle of radius 5. Is the area of the rectangle bigger than 48 ?
1. The ratio of the lengths of sides of the rectangle is 3:4
2. The difference between the lengths of sides of the rectangle is smaller than 3
Source: GMAT Club Tests
- hardest GMAT questions
GMAT TIGER (CEO; Joined: 29 Aug 2007) wrote:
ventivish wrote:
A rectangle is inscribed in a circle of radius 5. Is the area of the rectangle bigger than 48 ?
1. The ratio of the lengths of sides of the rectangle is 3:4
2. The difference between the lengths of sides of the rectangle is smaller than 3

A. Very tricky.
1. The sides of the rectangle have to be 6 and 8, because the diagonal of the rectangle is 10, which is the diameter of the circle. If we change (decrease) a side of the rectangle, the other side will also change (increase) and vice versa.
2. Really does not help.
ventivish (Senior Manager; Joined: 20 Feb 2008; Bangalore, India) wrote:
Thanks a lot GMAT TIGER for your answers.
However, the question does not mention that the diagonal of the rectangle passes through the centre of the circle.
Is this something that must be assumed for a rectangle inscribed in a circle?
GMAT TIGER (CEO; Joined: 29 Aug 2007) wrote:
ventivish wrote:
Thanks a lot GMAT TIGER for your answers. However, the question does not mention that the diagonal of the rectangle passes through the centre of the circle. Is this something that must be assumed for a rectangle inscribed in a circle?

It has to be. You cannot make any rectangle that is inscribed in a circle with a diagonal not equal to the diameter of the circle. No matter how you draw the rectangle, its diagonal is the diameter of the circle.
chuckberry007 (Joined: 12 Jun 2009) wrote:
If the diameter = 10 and the radius is 5, then the hypotenuse of the mini triangle gives a 3:4:5 ratio (you need to draw it to visualise).
Hence, the sides of the triangle are 6 (3x2) and 8 (4x2). This is sufficient to answer the question.
ventivish wrote:
A rectangle is inscribed in a circle of radius 5. Is the area of the rectangle bigger than 48 ?
1. The ratio of the lengths of sides of the rectangle is 3:4
2. The difference between the lengths of sides of the rectangle is smaller than 3
Source: GMAT Club Tests - hardest GMAT questions

bangalorian2000 (Senior Manager; Joined: 01 Feb 2010) wrote:
1 is enough:
(3x)^2+(4x)^2 = 10^2
x=2
area = 6*8 = 48, hence sufficient to answer
2 is not enough,
2l^2-6*l+9 = 100
so not enough to solve.
Hence A.
vshrivastava (Intern; Joined: 11 Jan 2010) wrote:
Suppose the sides of the rectangle are a and b.
Then, given: a^2 + b^2 = 10^2 (the diagonal of the rectangle is the same as the diameter of the circle)
=> a^2 + b^2 = 100
Question: Is ab > 48 ?
S1: a = 3k and b = 4k
(3k)^2 + (4k)^2 = 100 => k=2
Thus, a = 6 and b = 8 => ab = 48
Therefore, ab > 48 is not true.
Result: S1 is sufficient to answer the question.
S2: b - a < 3
Squaring both the sides of the inequality, gives:
a^2 + b^2 - 2ab < 9 => 100 - 2ab < 9
Solving the inequality gives: ab > 45.5
Therefore, ab > 48 may or may not be true.
Result: S2 is NOT sufficient to answer the question.
My answer is A.
chuckberry007 wrote:
If the diameter = 10 and radius is 5, then the hypo of the mini triangle is 3:4:5 (you need to draw to visualise).
Hence, the side of the triangle is 6 (3X2) and 8 (4X2). This is sufficient to answer the question..
I initially agreed with you but on 2nd thought...
the pythagorean triples only hold for integers. The stimulus never states that the sides of the rectangle were integers.
if a & b are the sides of the rectangle and c is the diagonal
then the pyth theorem states
a^2 + b^2 = c^2
c = 10
lets set a =1
now the equation becomes:
1 + b^2 = 100
b^2 = 99
b = sqrt(99) = approximately 9.95
the area of the rectangle is a*b = 1*9.95 = 9.95, which is less than 48
we can also use the triples 6,8,10 to get the area = 48.
Therefore we need more information than just the stimulus to answer the question.
1) if the ratio of the sides is 3:4 and the diagonal is 10, then the sides must equal 6 & 8 and the area equals 48
2) try 6,8,10 :
8-6 = 2 which meets the criteria. Area = 48
try 7, x, 10:
x^2 = 100-49 = 51
x=sqrt(51)= a lil bit greater than 7
area = 7 * x = 7 * (a lil greater than 7) = something greater than 48.
We have shown that the area can equal 48 as well as be greater than 48 with the difference between the sides of the rectangle less than 3.
If you like my post, a kudos is always appreciated
xyztroy (Manager; Joined: 05 Dec 2009) wrote:
A rectangle with the maximum area is possible only when the diagonal is the diameter, but we can have a square as well with diagonal 10, resulting in each side being 5√2 and area = 50. So we just need to make sure that it is not a square; since statement A says that the sides are in a 3:4 ratio, the largest such rectangle has sides 6 and 8, diagonal 10, and area = 48.
With B we could end up with both sides of the same length and area = 50, so it is not enough...
Ans is A.
ventivish wrote:
A rectangle is inscribed in a circle of radius 5. Is the area of the rectangle bigger than 48 ?
1. The ratio of the lengths of sides of the rectangle is 3:4
2. The difference between the lengths of sides of the rectangle is smaller than 3
Source: GMAT Club Tests - hardest GMAT questions

(Senior Manager; Joined: 13 Dec 2009) wrote:
The diagonal of the rectangle is 10, i.e. the diameter.
stmt1: the sides are 3x and 4x, so 9x^2+16x^2 = 100 => x^2 = 4 => x = 2 => 6 and 8 are the sides and the area is 48, so sufficient
stmt2: l-b < 3 and we have l^2+b^2 = 100; we cannot find l or b, so insufficient
hence A
My debrief: done-and-dusted-730-q49-v40
LM (Director; Joined: 03 Sep 2006) wrote:
GMAT TIGER wrote:
ventivish wrote:
Thanks a lot GMAT TIGER for your answers. However, the question does not mention that the diagonal of the rectangle passes through the centre of the circle. Is this something that must be assumed for a rectangle inscribed in a circle?

It has to be. You cannot make any rectangle that is inscribed in a circle with a diagonal not equal to the diameter of the circle. No matter how you draw the rectangle, its diagonal is the diameter of the circle.

The reason is that a rectangle has all its angles equal to 90 degrees, and an angle inscribed in a semicircle is always 90 degrees. This means the diagonal is exactly the diameter of the circle.
isiadeolumide33 (Intern; Joined: 08 Dec 2009; Lagos, Nigeria) wrote:
Hello all,
Just trying to confirm this very question: is the area of the rectangle within the circle bigger than 48?
For both statements: 1) AREA = 48 !! Not > 48
2) Not sufficient
Please kindly let me know whether I am right.
IMO: E
Easy does it, an extra effort does not hurt...
gmatbull (Director; Joined: 21 Dec 2009) wrote:
I have gone through the previous posts, and I agree with them: the diagonal of an inscribed rectangle is always the diameter of the circle.
(1) 3k : 4k : 10 => (3k)^2 + (4k)^2 = 100 ... k = 2; 6 x 8 (area of rect.) = 48 ... Sufficient.
(2) 6 x 8 (difference is 2) = 48; 7 x sqrt(51) (difference is < 1) > 48 ... Insufficient.
OR: a - b < 3 (given), where a^2 + b^2 = 10^2
=> a^2 + b^2 - 2ab < 9
=> 100 - 2ab < 9, i.e. ab > 45.5; hence ab may or may not be > 48 ... Insufficient.
Clearly understood... OA is thus A.
(Intern; Joined: 15 Nov 2009; Moscow, Russia) wrote:
If you change the 2nd statement as follows:
2. The difference between the lengths of sides of the rectangle is smaller than 2
then the area will be bigger than 48. So, the 1st statement is sufficient to determine that the area is 48 and not bigger, and the modified second statement is sufficient to determine that the area is bigger than 48. What is the answer in this case?
(Manager; Joined: 12 Jul 2010) wrote:
The explanation for A:
1) l : b is defined, so the values of l and b can be calculated.
2) l - b < 3 => l^2 + b^2 - 2lb < 9, or lb > 45.5, so insufficient to say whether lb > 48 or not.
sahilkhurana06 (Joined: 06 Sep 2010) wrote:
Answer is D (correct me if I am wrong).
Statement 1: Everybody knows that this is sufficient.
Statement 2: It says that the difference between the lengths of the sides of the rectangle is smaller than 3, so l - b < 3. Also, the diagonal of the rectangle is 10 (since the radius of the circle is 5). Now, considering these values (l^2 + b^2 = 100 and l - b < 3), we have only one possibility for a diagonal of 10 units, i.e. sides 8 and 6. Hence, this statement is also sufficient to answer.
Let me know what you guys think....
Bunuel (Math Expert; Joined: 02 Sep 2009) wrote:
sahilkhurana06 wrote:
Answer is D (Correct me if I am wrong)
Statement 1 : Everybody knows that this is sufficient.
Statement 2 : It says that the difference between the lengths of sides of the rectangle is smaller than 3, so that means l-b < 3. Also the diagonal of the rectangle is 10 (since the radius of the circle is 5). Now, considering these values (l^2 + b^2 = 100 and l-b<3), we have only one possibility when the diagonal can be of 10 units, i.e. when we have sides 8 and 6. Hence, this statement is also sufficient to answer.
Let me know what you guys think....

Statement (2) is not sufficient, see the algebraic solutions on the previous page or consider the following examples:
If $a=6$ and $b=8$ ($a-b=2<3$) then $area=ab=48$ and the answer to the question "is $area>48$" is NO;
If $a=b=\sqrt{50}$ ($a-b=0<3$) then $area=ab=50>48$ and the answer to the question "is $area>48$" is YES. Note that in this case the inscribed figure is a square, which is a special type of rectangle.
Two different answers, not sufficient.
Answer: A.
I think that the problem with your solution is that you assumed with no ground for it that the lengths of the sides of the rectangle are integers.
Hope it's clear.
(Joined: 06 Feb 2012) wrote:
I am new on this forum and would like to know if the GMAT Club Tests provide 'official' answer explanations. Having 10-15 posts to answer a question provides some interesting back and forth and sometimes multiple angles to solve it, but it can unfortunately be confusing as well. Adding an 'official' explanation to the correct answer - maybe as a second show/hide section - would in my opinion provide added value.
(Intern; Joined: 03 Apr 2009) wrote:
1. The ratio of the lengths of the sides of the rectangle is 3:4: this gives an area of the rectangle equal to 48, hence sufficient.
2. The difference between the lengths of the sides of the rectangle is smaller than 3: if a, b are the two sides then a:b = 3:4, or a = 3k, b = 4k where k is any constant. Difference = 4k - 3k = k < 3, hence k = 0, 1, 2. k cannot be equal to 0 or 1, as if k = 1 then a = 3, b = 4, which is not possible since the diameter of the circle is 10. If k = 2 then a = 6, b = 8 and area = 48, hence sufficient.
So IMO D.
harshavmrg (Manager; Joined: 25 Feb 2010; Schools: Johnson '15) wrote:
Bunuel wrote:
sahilkhurana06 wrote:
Answer is D (Correct me if I am wrong)
Statement 1 : Everybody knows that this is sufficient.
Statement 2 : It says that the difference between the lengths of sides of the rectangle is smaller than 3,so that means l-b < 3..Also it has a fact that the diagonal
of the rectangle is 10 (since radius of circle is 5).Now,considering these values (l^2 + b^2 = 100 and l-b<3),we have only one possibility when the diagonal can be
of 10 units i.e. when we have sides 8 and 6.Hence,this statement is also sufficient to answer.
Let me know what you guys think....
Statement (2) is not sufficient, see algebraic solutions on previous page or consider the following examples:
If $a=6$ and $b=8$ ($a-b=2<3$) then $area=ab=48$ and the answer to the question "is $area>48$" is NO;
If $a=b=\sqrt{50}$ ($a-b=0<3$) then $area=ab=50>48$ and the answer to the question "is $area>48$" is YES. Note that in this case the inscribed figure is a square, which is a special type of rectangle.
Two different answers, not sufficient.
Answer: A.
Bunuel....Can u please explain the solution..( both options A and B)... i am quite confused about this problem of how is A sufficient and how is B insufficient
Note: Give me kudos if my approach is right , else help me understand where i am missing.. I want to bell the GMAT Cat
Satyameva Jayate - Truth alone triumphs
|
{"url":"http://gmatclub.com/forum/m22-73309.html","timestamp":"2014-04-16T08:09:08Z","content_type":null,"content_length":"224477","record_id":"<urn:uuid:8be8fbb1-932e-435f-a461-436ec916a2a2>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jamaica Beach, TX Math Tutor
Find a Jamaica Beach, TX Math Tutor
...I can tutor in any math subject K-8 as well as Pre-Algebra, Algebra I and II (college algebra as well), Geometry, and even Chemistry. I also have worked with home-schooled students on some of
these subjects so I can help them out, if necessary. Being a younger tutor, I certainly can relate to the kids and teenagers better.
8 Subjects: including algebra 1, algebra 2, chemistry, geometry
...A lot of students have difficulties with math and the math found in other subjects such as physics. Math is an interesting subject with a myriad of techniques for finding an answer. Math is
progressive, that is one needs to have a firm grasp on previous math instruction in order to progress successfully in later math courses.
29 Subjects: including calculus, ACT Math, algebra 1, algebra 2
...I have experience tutoring in phonics, reading, reading comprehension and math, including elementary, pre-algebra, algebra & geometry. I possess a special talent for making learning fun by
utilizing creative ways in which each student can relate. I also have experience tutoring for the Texas St...
12 Subjects: including prealgebra, English, geometry, algebra 1
I was born in Taiwan. I graduated from No.1 university in Taiwan, majored in Economics and came to the USA to pursue an MBA at Lamar University in 1988. I am a loving and patient Christian mom of
three children.
12 Subjects: including algebra 1, algebra 2, vocabulary, grammar
...I show students how to identify requirements, design and document large software solutions, develop data structures and algorithms, and connect their code to user interfaces and databases. I
introduce students to iterative software life cycles. Finally, I show students how to test their code and evaluate the effectiveness of their solutions from a user's perspective.
30 Subjects: including statistics, Java, SQL, ADD/ADHD
Related Jamaica Beach, TX Tutors
Jamaica Beach, TX Accounting Tutors
Jamaica Beach, TX ACT Tutors
Jamaica Beach, TX Algebra Tutors
Jamaica Beach, TX Algebra 2 Tutors
Jamaica Beach, TX Calculus Tutors
Jamaica Beach, TX Geometry Tutors
Jamaica Beach, TX Math Tutors
Jamaica Beach, TX Prealgebra Tutors
Jamaica Beach, TX Precalculus Tutors
Jamaica Beach, TX SAT Tutors
Jamaica Beach, TX SAT Math Tutors
Jamaica Beach, TX Science Tutors
Jamaica Beach, TX Statistics Tutors
Jamaica Beach, TX Trigonometry Tutors
Nearby Cities With Math Tutor
Angleton Math Tutors
Bayou Vista, TX Math Tutors
Bonney, TX Math Tutors
Danbury, TX Math Tutors
El Lago, TX Math Tutors
Hillcrest, TX Math Tutors
Iowa Colony, TX Math Tutors
Jones Creek, TX Math Tutors
Liverpool, TX Math Tutors
Oyster Creek, TX Math Tutors
Port Bolivar Math Tutors
Quintana, TX Math Tutors
Surfside Beach, TX Math Tutors
Tiki Island, TX Math Tutors
West Galveston, TX Math Tutors
|
{"url":"http://www.purplemath.com/jamaica_beach_tx_math_tutors.php","timestamp":"2014-04-20T02:02:25Z","content_type":null,"content_length":"24246","record_id":"<urn:uuid:e5edb7bc-5855-46d8-bce6-9e84188dc0fa>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Nature of Statistics
What is statistics
Types of data
Scales of measurement
Copyright 1993-97 Thomas P. Sturm
What is Statistics
Statistics is:
The SCIENCE of
a. COLLECTING,
b. Classifying / Presenting / Tabulating / Describing, and
c. INTERPRETING,
NUMERICAL Data
All three areas will be covered:
Collecting - Chapter 3
Describing - Chapters 1 and 2
Interpreting - bulk of the course
Course Goal:
To produce good "statistical consumers"
Collecting Data
Data must be collected with a purpose - to find information about a designated group of people/places/things/events
POPULATION - the collection of ALL objects that are of interest
- must be carefully defined
- must be able to determine under all circumstances whether something is in the population or not
e.g. employees - current? fired? retired? part-time?
Problem: It's usually just too expensive (or impossible) to get the information for all objects in a population (a CENSUS)
SAMPLE - a subset of the population used to find information about the entire population
- more economical
- with care, can obtain an accurate picture of the population
So, to get information about the population, we take a sample and find information about the things in the sample
Variables in Statistics
An attribute that is relevant for all things in the population (and therefore the sample)
e.g. height, weight, color, result of casting a die, beauty
Any characteristic than can be measured for all things in the population
e.g. height (in inches), weight (in pounds), color (a word), # of spots on a die
A VALUE for a variable is assigned through a process of MEASUREMENT
e.g. use a ruler to MEASURE a VALUE of 6'4" as the OBSERVED height of a basketball player
values that COULD be obtained
e.g. 0 to 100% on an exam
values that are actually obtained in the current instance
e.g. 97%, 92%, 84%, 63% in a class of 4 students
Types of Data
QUALITATIVE (ATTRIBUTE or CATEGORICAL) data
useful only to place individuals into categories
(e.g. Earthlings, Martians)
QUANTITATIVE data
DISCRETE - a finite set of values
e.g. number of students
CONTINUOUS - an infinite set of values in a bounded range
e.g. height of students
But statistics only deals with NUMERICAL data, (and MEASUREMENT assigns a numerical value to a VARIABLE,) so, for QUALITATIVE data, part of the measurement process is to assign a number to each
attribute value
e.g. SEX - 1=male, 2=female, etc.
Thus, as part of the measurement process, everything gets a number. But what can you DO with those numbers ???
Scales of Measurement
Nominal Scale (Qualitative data)
e.g. 1=male, 2=female
come from qualitative (attribute) data
can only count how many of each value you have to obtain FREQUENCY data
cannot sort, add, subtract, multiply, or divide the numbers
Ordinal Scale (Ordinal data)
e.g. 1=never, 2=occasionally, 3=frequently, 4=always
come from a condensation of quantitative data where asking for specific numbers would not be accurate
can sort in addition to count, 1 < 2
cannot add, subtract, multiply, or divide the numbers
Interval Scale (Metric data)
e.g. temperature in Fahrenheit
come from quantitative values that are measured against arbitrary starting points
can subtract in addition to sorting and counting, 24 outside, 72 inside, 48 degrees warmer inside
cannot add, multiply, or divide the numbers
Ratio Scale (Metric data)
e.g. number of courses taken, any FREQUENCY data, rates
come from quantitative values that have "natural" zeroes
0 is meaningful, Pat took 6 courses, Chris took 2 courses, Pat took 3 times as many courses as Chris
can perform all operations
In general, not all of the measurements yield the same value. This could be because of different measurements of the same thing or measurement of different members of a sample. This is called VARIATION.
The values of the data have some sort of a DISTRIBUTION which characterizes where in the range of POSSIBLE values the OBSERVED values most frequently fall.
Much of descriptive statistics deals with finding simple ways (perhaps as simple as a single number) of describing the distribution.
Nominal and ordinal data allow the least amount of mathematical manipulation, so the description of nominal and ordinal data is limited to counting the frequencies of the observations (and sorting
the observations if on an ordinal scale) and then presenting the counts.
|
{"url":"http://courseweb.stthomas.edu/tpsturm/private/notes/qm220/NATURE.html","timestamp":"2014-04-19T22:05:43Z","content_type":null,"content_length":"21285","record_id":"<urn:uuid:b4fed2fb-90d1-4afe-b135-399bd898ab96>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: non trivial constant folding
Chris F Clark <cfc@world.std.com>
6 Jan 2001 22:13:51 -0500
From comp.compilers
| List of all articles for this month |
From: Chris F Clark <cfc@world.std.com>
Newsgroups: comp.compilers
Date: 6 Jan 2001 22:13:51 -0500
Organization: Compilers Central
References: 01-01-015
Keywords: arithmetic, optimize
Posted-Date: 06 Jan 2001 22:13:51 EST
> My problem is that I do not manage to find a clean solution
> to optimise this:
> eval(1+var+1)
What you are looking for is sometimes called "canonicalizing" (or
sorting). You order the leaves of each expression (of commutative
operators) so that the constants are first (or last) and then move
them down to the lowest level (of associative operators)--this part is
often called reassociation. You can also apply the distributive law
(or DeMorgan's theorem or any other algebraic identity that holds(*))
to further collect the constants. For example, Fred Chow, organized
his expressions to "tower" to the left (or perhaps it was right) to
minimize register usage.
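As a hedged illustration of the reassociation idea (this is not code from the referenced books; the tiny Expr type below is invented for the sketch, and it deliberately ignores the overflow and floating-point caveats in the footnote):

#include <memory>
#include <vector>

// Fold the constant leaves of a commutative, associative '+' node, so that
// an expression like 1 + var + 1 canonicalizes to var + 2.
struct Expr {
    bool isConst = false;
    long value = 0;                               // valid when isConst
    std::vector<std::unique_ptr<Expr>> operands;  // children of a '+' node
};

void foldAddConstants(Expr& add) {
    long sum = 0;
    std::vector<std::unique_ptr<Expr>> rest;
    for (auto& op : add.operands) {
        if (op->isConst) sum += op->value;        // gather constant leaves
        else             rest.push_back(std::move(op));
    }
    if (sum != 0 || rest.empty()) {               // append one folded constant
        auto c = std::make_unique<Expr>();
        c->isConst = true;
        c->value = sum;
        rest.push_back(std::move(c));
    }
    add.operands = std::move(rest);
}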
You can find information in Advanced Compiler Design and
Implementation, by Steven S. Muchnick (ISBN 1-55860-320-4) pp.333-355
where it discusses Algebraic Simplifications and Reassociation. It is
also covered in Building an Optimizing Compiler by Robert Morgan (ISBN
*) One common issue is that computer arithmetic (especially floating
point) does not obey the normal arithmetic identities. Computing
operations in a different order can often cause over- or underflows
that change the numeric value. In addition, some machines have guard
bits on the registers that allow them to hold "more accurate" values
in registers than they do in memory, introducing round-off errors.
Imagine how upset your user will be when after testing (in a program)
the discriminant of a quadratic equation for a negative value (and
getting false), and then computing the root and getting an incorrect
imaginary number. (This actually happened in a FORTRAN compiler
after we did some local optimizations without paying close enough
attention to the ramifications.)
Although, the problem is commonly recognized in floating point, the
same problem can occur with integer arithmetic, especially when the
arithmetic overflows. Depending on the hardware, you can cause
exceptions to be raised (or not to be raised), if you cavalierly
reorder integer arithmetic that can overflow (and use the
instructions that trap overflows).
Hope this helps,
Chris Clark Internet : compres@world.std.com
Compiler Resources, Inc. Web Site : http://world.std.com/~compres
3 Proctor Street voice : (508) 435-5016
Hopkinton, MA 01748 USA fax : (508) 435-4847 (24 hours)
Post a followup to this message
Return to the comp.compilers page.
Search the comp.compilers archives again.
|
{"url":"http://compilers.iecc.com/comparch/article/01-01-029","timestamp":"2014-04-19T17:10:48Z","content_type":null,"content_length":"8702","record_id":"<urn:uuid:25445ea6-3546-485a-9832-37634af216b6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Littleton, CO Prealgebra Tutor
Find a Littleton, CO Prealgebra Tutor
...Sometimes, it is just a matter of presenting the material in a different way that allows a student to progress through the previously trouble-some material. I teach reading by finding out the
students main interests and then choosing reading material that they are going to be eager to read. Cho...
15 Subjects: including prealgebra, reading, geometry, algebra 1
...I attended a private college prep high school (College Prep: math, English, history, science, music, art). I earned an AP or Advanced Placement score of 3 or higher in high school for European
history (as a high school sophomore), American history, music literature, and chemistry. I went to Ric...
30 Subjects: including prealgebra, chemistry, calculus, physics
...Hands on learning is best to nurture a passion to learn, especially with Science and Math.I taught Middle School Life Science at Rishel M.S. Denver, CO Feb.-2007 to June-2008. Completed
University of Denver course in Pathophysiology in 2005.
31 Subjects: including prealgebra, chemistry, physics, biology
I have a B.S. degree in Mathematics & Computer Science. I'm a Certified Microsoft Office Specialist. I'm currently a Math Fellow at Denver Public School.
9 Subjects: including prealgebra, geometry, ESL/ESOL, algebra 1
...I have also been certified as an ACT tutor by both The Princeton Review and Studypoint, and I'm familiar with many ACT testing strategies than can help students increase their scores. In
addition, I have a 3 year substitute authorization for Colorado public schools and am certified for grades K - 12. This certification was issued August 9th, 2012.
30 Subjects: including prealgebra, reading, English, writing
Related Littleton, CO Tutors
Littleton, CO Accounting Tutors
Littleton, CO ACT Tutors
Littleton, CO Algebra Tutors
Littleton, CO Algebra 2 Tutors
Littleton, CO Calculus Tutors
Littleton, CO Geometry Tutors
Littleton, CO Math Tutors
Littleton, CO Prealgebra Tutors
Littleton, CO Precalculus Tutors
Littleton, CO SAT Tutors
Littleton, CO SAT Math Tutors
Littleton, CO Science Tutors
Littleton, CO Statistics Tutors
Littleton, CO Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Arvada, CO prealgebra Tutors
Aurora, CO prealgebra Tutors
Bow Mar, CO prealgebra Tutors
Centennial, CO prealgebra Tutors
Cherry Hills Village, CO prealgebra Tutors
Columbine Valley, CO prealgebra Tutors
Denver prealgebra Tutors
Englewood, CO prealgebra Tutors
Greenwood Village, CO prealgebra Tutors
Highlands Ranch, CO prealgebra Tutors
Lakewood, CO prealgebra Tutors
Littleton City Offices, CO prealgebra Tutors
Sheridan, CO prealgebra Tutors
Westminster, CO prealgebra Tutors
Wheat Ridge prealgebra Tutors
|
{"url":"http://www.purplemath.com/Littleton_CO_prealgebra_tutors.php","timestamp":"2014-04-18T03:40:24Z","content_type":null,"content_length":"24243","record_id":"<urn:uuid:30e1b4f3-cb2e-49b5-9f5a-994afd2ac8ca>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Severe Weather Help Page
** Severe Weather Options **
** DEFINITIONS **
[CPR] = Combined probability (Prob_Field_1 x Prob_Field_2 x ... x
[MAX] = Maximum from any member at each grid point
[MD] = Median
[MDXN] = Median value contoured. The dashed red line is the union of
all members at the lowest contour value plotted (i.e., where
at least one of the 16 members exceeds the value of the first
median contour). The dashed blue line is the intersection of
all members at the lowest contour value plotted (i.e., where
all 16 members exceed the value value of the first median
[MIN] = Minimum from any member at each grid point
[MN] = Mean
[MNSD] = Mean and Standard Deviation
[PM] = Probability Matched Mean Value
[PR] = Probability (percentage of members meeting/exceeding some
[SP] = Spaghetti
** SPECIFIC INFORMATION ON AVAILABLE FIELDS **
[MDXN]:CravenBrooks_SigSvr ->Craven-Brooks significant severe is
the product of MUCAPE x 0-6 KM shear.
Median contours begin at 10000 m^3/s^3
with the union (dashed red) and
intersection (dashed blue) at 10000
m^3/s^3 shown.
[MDXN]:SigTor_Param ->Signficant tornado parameter. Median contours
begin at 1 with the union (dashed red)
and intersection (dashed blue) at 1 shown.
[MDXN]:Supercell_Comp_Param ->Signficant tornado parameter. Median
contours begin at 1 with the union (dashed
red) and intersection (dashed blue) at 1
[MNSD]:CravenBrooks_SigSvr ->Mean and standard deviation of Craven-
Brooks Significant Severe (MUCAPE X 6 KM
[MNSD]:SigTor_Param ->Mean and standard deviation of significant
tornado parameter
[CPR]:ConPcpn_1000MUCAPE_30EffShr ->Product of the probabilities:
Convective precip >= .01"; MUCAPE
>= 1000 J/kg; Effective Shear >=
30 kts
[CPR]:ConPcpn_1000MUCAPE_40EffShr ->As above but MUCAPE >= 1000 J/kg
and effective shear >= 40 kts
[CPR]:ConPcpn_2000MUCAPE_30EffShr ->As above but MUCAPE >= 2000 J/kg
and effective shear >= 30 kts
[CPR]:ConPcpn_2000MUCAPE_40EffShr ->As above but MUCAPE >= 2000 J/kg
and effective shear >= 40 kts
[CPR]:ConPcpn_3000MUCAPE_20EffShr ->As above but MUCAPE >= 3000 J/kg
and effective shear >= 20 kts
[CPR]:ConPcpn_500MUCAPE_30EffShr ->As above but MUCAPE >= 500 J/kg
and effective shear >= 30 kts
[CPR]:ConPcpn_500MUCAPE_40EffShr ->As above but MUCAPE >= 500 J/kg
and effective shear >= 40 kts
[PR]:CravenBrooks_SigSvr>=20000 ->Probability Craven-Brooks significant
severe (MUCAPE X 0-6 KM shear) >=
20000 m^3/s^3
[PR]:CravenBrooks_SigSvr>=40000 ->As above but for 40000 m^3/s^3
[PR]:SigTor_Param>=1 ->Probability significant tornado parameter >=
[PR]:SigTor_Param>=3 ->As above but for significant tornado parameter
>= 3
[PR]:Supercell_Comp_Param>=1 ->Probability supercell composite
parameter >= 1
[PR]:Supercell_Comp_Param>=3 ->As above but for supercell composite
parameter >= 3
[SP]:CravenBrooks_SigSvr_10000 ->Spaghetti plot of Craven-Brooks
significant severe parameter at
10000 m^3/s^3
[SP]:CravenBrooks_SigSvr_20000 ->As above but for Craven-Brooks
index of 20000 m^3/s^3
[SP]:CravenBrooks_SigSvr_40000 ->As above but for Craven-Brooks
index of 40000 m^3/s^3
[SP]:SigTor_Param_1 ->Spaghetti plot of significant tornado index
of 1
[SP]:Supercell_Comp_Param_1 ->Spaghetti plot supercell composite
parameter of 1
All SREF information presented herein is for guidance purposes
only and should not be confused with official SPC operational
forecasts. Links to official SPC operational products are found
at the bottom of every menu.
Many of the SREF products were developed for testing the utility
of SREF analysis in the prediction of SPC mission critical items,
including: thunderstorms and severe thunderstorms, large scale
critical fire weather conditions, and mesoscale areas of hazardous
winter weather. In order to accomplish this goal and test new and
unique methods of SREF application in an operational environment,
many fields not produced in the NCEP postprocessor were calculated
locally for SPC purposes.
|
Theodosius Sphaerica iii 4
Return to Vignettes of Ancient Mathematics
Since the figure is somewhat complex, two view points are given, from the left and from right of arc GED.
General strategy: the theorem will prove that arc GE > arc ED. The strategy will be to show that because
1. straight-line HN = straight-line MZ (since HL = LZ, while triangle BLN = triangle ALM).
2. angle DHZ is obtuse
3. angle GNH = angle DHL
4. Therefore, angle GNH is acute.
5. Therefore (by 2, 4), angle GNH < angle DMZ.
6. Therefore (by 1, 5) arc GH < arc DZ
7. But, arc HGE = arc EDZ
8. Hence, EG = EH - GH > EZ - DZ = ED.
(general diagram: right view, left view)
circular-arc between the point and the plane that does not fall onto common section is larger than the circular-arc of the same circle between the point and plane that does fall on it.
(diagram 1) For let two great circles, AEB, GED, intersect one another at point E, (diagram 2) and from one of them equal circular-arcs, AE, EB, are taken in succession on each side of point E,
(diagram 3, i) solid right view, ii) truncated right view, iii) right view with plane, iv) clear right view with plane, v)truncated left view, vi) left view of some planes, vii) left view with other
and through points A, B let parallel planes be drawn AD, GB where AD falls on the common section of AEB, GED outside the surface of the sphere at point E, (diagram 4: left view, right view) and let
one of the equal circular-arcs AE, EB be larger than each of GE, ED. I say that circular-arc GE is larger than circular-arc ED.
(general diagram: right view, left view)
(diagram 5: right view, left view, plain right view, plain left view) For the circle inscribed with pole E and distance EA will have come through B and will fall beyond points G, D since each of AE,
EB is larger than each of GE, ED. Let it come and let it be as AHBZ, and let the circles be completed, let circle AD meet circle AHBZ at point Q, and circle BG at point K,(diagram 6: right view, left
view) and let AB and HZ be common sections of AHBZ with AEB and HEZ, (diagram 7: right view, left view) and let AQ be the common section of circle ADQ and AHBZ, and let KB be the common section of
KGB and AHBZ, (diagram 8: right view, left view) and let DM be the common section of HEZ and ADQ, and let GN be the common section of KGB and HEZ. (diagram 9: right view, left view) And since plane
AD meets the common section of planes HEZ, AEB, that is EL, it is outside the surface of the sphere at point E, let it meet it at X, and let EL be extended to X.(diagram 10: right view, left view)
Therefore, point X is on plane ADQ, but also on HEZ, but points D, M are on both planes DQ, HEZ. (diagram 11: right view, left view) Therefore MD meets it outside the surface of the sphere where
point E is. In fact, it meets it at X.
(general diagram: right view, left view)
(diagram 12: right view, left view) And since a great circle in a sphere, AEB, intersects through the poles one of these circles in the sphere, AHBZ, it bisects it and at right angles. Therefore, AB
is a diameter of circle AHBZ. Similarly we will prove that HZ is also a diameter of circle AHBZ. Therefore, L is its center. (diagram 13: right view, left view) And since two parallel planes KGB, ADQ
are cut by a plane, AHBZ, therefore their common sections are parallel. Therefore KB is parallel to AQ. (diagram 14: right view, left view) Again, since two parallel planes, KGB, ADQ are cut by a
plane, HEZ, their common sections are therefore parallel. Therefore GN is parallel to DM. (diagram 15: right view, left view) And since each of planes AEB, HEZ are at right angles to plane AHBZ,
therefore their common section is also perpendicular to plane AHBZ. But their common section is EL. Therefore, EL is also perpendicular to plane AHBZ. Thus it will make right angles with all the
straight-lines touching it in the plane of circle AHBZ. But each of AB, HZ, which are in the plane of circle AHBZ, touches EL. Therefore, EL is perpendicular to each of AB, HZ.
(diagram 15 (again): right view, left view) And since angle XLN is an outside angle of triangle XLM, it is larger than the opposite and interior angle XML. But XLN is right. Therefore XML is acute.
Therefore, XMZ is obtuse.
(diagram 16: right view, left view) And since GN is parallel to DM, and HZ falls across them, therefore angle GNH is equal to angle XML. But XML is acute. Therefore angle GNH is also acute.
(diagram 17: right view, left view) And AM is parallel to NB, and two lines have been drawn through them, AB, MN, and AL is equal to LB. Therefore NL is also equal to LM. But the whole HL is also
equal to the whole LZ. Therefore remainder HN is equal to remainder MZ.
(diagram 19: right view, left view) And so since HEZ is a segment of a circle, and equal straight-lines HN, MZ have been taken out, and parallels GN, DM have been drawn, with GNH being acute and DMZ
being obtuse, therefore circular-arc HG is smaller than circular-arc DZ. And so, since the whole circular-arc HE is equal to the whole circular-arc ZE, where HG is smaller than DZ, therefore the
remainder, circular-arc GE, is larger than the remainder, circular-arc ED, Q.E.D.
Only two propositions in Theodosius end in 'Q.E.D.', this one and the introductory i 2. Perhaps, he is making a statement about the complexity of this lemma.
|
Accuplacer practice exam help
Hello board. I'm trying to solve the problems on the accuplacer practice exam online and am having trouble with one of the questions. It is:
What is the value of the expression 2x² + 3xy – 4y² when x = 2 and y = - 4?
The answer I keep coming up with is -4. The only choices are -80, 80, -32, and 32.
What am I doing wrong?
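A quick numerical check of the arithmetic (just a sketch, not part of the original thread's replies) shows which of the listed choices the expression actually gives:

    x, y = 2, -4
    print(2 * x**2 + 3 * x * y - 4 * y**2)   # 2*4 + 3*2*(-4) - 4*16 = 8 - 24 - 64 = -80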
|
Here's the question you clicked on:
Electrical Engineering: Microelectronics: Assuming a MOSFET circuit is operating in the saturation/active region, if gm = 3.2 mA/V at ID = 120 uA, then what is gm when ID = 820 uA?
In saturation, the current in a MOSFET can be described as \[Id = (u*Cox/2)*(Vgs-Vth)^2 * W/L\]. gm is the derivative of Id with respect to Vgs. In this situation, you are using the same device for both cases, hence mobility (u), Cox, W and L must be the same; use the relation \[gm=\sqrt{2*u*Cox*W/L*Id}\] to calculate gm when ID = 820 uA. Can you please tell me what you mean by "active region" in a MOSFET? This term is mainly used for BJTs.
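Numerically, the square-root relation above means gm scales as sqrt(Id) for the same device, so the new value can be estimated in a couple of lines (a sketch of that scaling argument only; the square-law model is the assumption):

    import math

    gm1, id1, id2 = 3.2e-3, 120e-6, 820e-6        # A/V, A, A
    gm2 = gm1 * math.sqrt(id2 / id1)              # gm proportional to sqrt(Id)
    print(f"{gm2 * 1e3:.2f} mA/V")                # roughly 8.4 mA/V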
|
Tatamy Algebra 1 Tutor
Find a Tatamy Algebra 1 Tutor
...I am very reliable, responsible and dependable. I understand schedules need to be flexible due to our busy lifestyles - with work and our children's activities. I can accommodate to your needs
and simply ask for a four-hour cancellation notice if an emergency or situation arises and plans need to be changed.
24 Subjects: including algebra 1, reading, English, GED
...For the past ten years, I have spent my summers teaching at summer camps for gifted students. Please contact me to learn more about how I can help your student.I am the physics teacher and
science department chair of a local college preparatory school. I have a masters degree in physics and have nine years experience teaching at either the high school or college level.
7 Subjects: including algebra 1, physics, algebra 2, prealgebra
...While at Kutztown, I spent one year as a Supplemental Instructor and one-on-one tutor for introductory physics classes. I also have one semester experience as a teaching assistant at Cornell. I
love teaching math and physics as well as helping students obtain a deeper understanding of the material.
16 Subjects: including algebra 1, chemistry, calculus, algebra 2
...I favor a dual approach, focused on both understanding concepts and going through practice problems. Let me know what concepts you're struggling with before our session, so I can streamline the
session as much as possible! In my free time, I like to play with my pet chickens, play Minecraft, code up websites, and write sci-fi creative stories.
26 Subjects: including algebra 1, English, calculus, physics
...I have seen many very good Algebra 2 and Pre-Calculus students flounder in Trigonometry. My one-on-one method is to show the student that Trig is quite understandable and not as overwhelming as
they might believe. I am a certified PA math teacher and have taught all levels of Math, including Algebra I, Geometry and Algebra II.
12 Subjects: including algebra 1, calculus, geometry, ASVAB
|
Making the Most of Your First Grading Period
There is something about the first of every school year that has always frustrated me. It seems to me that the first two or three weeks that we spend reviewing topics from previous years is such a
waste of time. Students either enter Algebra I with a good foundation on things like integer operations, distributive property, collecting like terms or they don't. Two or three weeks of reviewing
these topics seems to do nothing for those who are lacking these skills and only seems to bore the rest of the students.
Last summer, I heard a fantastic speaker at CAMT (Conference for the Advancement of Mathematics Teaching) and she challenged us to begin our year differently. She talked about how many students will
give their maximum effort in the first grading period of the year, yet we waste this prime learning time teaching old material. She urged us to make the most of this maximum effort that most students
bring to a new school year by beginning the year with challenging material and fill in any gaps as we went.
When I came back to my campus and said I wanted to skip our "Algebra Foundations" unit also known as Chapter One in your textbook, most the teachers were very opposed to the idea. It took a little
convincing, but we finally decided to begin the year with the topic of functions. We jumped right in to rich mathematics and never looked back.
Challenging the students with new material from the first week of school on seemed to help with the complacency that some students seem to experience when presented with material they have already
mastered. I want them to get the idea early in the year that they will need to put forth effort in order to be successful.
So how did this new approach affect the struggling learners? Surprisingly, most of the students were able to fill in their gaps as we went. After an entire year of solving linear equations,
inequalities, and systems etc, they became quite proficient in the areas they were weak in when they entered algebra I. I admit, they are probably still behind their peers when it comes to being
prepared for the next level, but I don't think spending three weeks teaching them how to add and subtract signed numbers would have increased their knowledge base.
So, think about your first few weeks of school carefully. What can you do to challenge your students and make the most out of the first few weeks?
6 comments:
Wow - Thank you so much for making me think this morning! Great post!
Mrs. H, I like that you were able to convince your department to go with this. For me to convince my colleagues, I'd need more details. Can you tell us the speaker's name at that conference? Do
you know of any research that suggests students work harder in the first grading period? (I teach college. I wonder if that still holds...)
It does seem plausible, and I, too, am eager to skip weeks of review. I'd rather build the review into the new units. I would love to think more about the ways in which this better serves
@Sue, I can't remember the speaker's name, but I do know she was not an educational expert by any means. Just an ordinary classroom teacher who was basing her opinion off of past observations. It
rang true with me as I have observed it in my own classroom.
I think it would ring true in college also. I hate to admit it, but I tend to start my graduate classes strong at the beginning of the semester and then tend lose steam at the end of the semester
when I have many things competing for my attention.
Well, I may not be able to convince my colleagues, but I'll bring it up in relation to our pre-calc course. I used to feel like that course was impossible to teach well because it had too much
material. Now I do just 3 sections from the first chapter - lines (clearly review), circles, and inequalities. And I skip the second chapter of our text, which is on functions. I figure I can
help them learn about functions through the other topics we cover.
I'll be skipping review in calculus too, and expecting to review as we go.
Sue, I'm not sure how I would handle this situation at the college level. In my classes, I generally used warm-ups to review a skill I knew students would be needing that day. It seemed a good
way to integrate review as we went along.
I know calculus teachers everywhere complain about their student's algebra skills. Mine sure did. Not sure that algebra review helps them that much. I think we really learn a skill when we begin
to apply it in a meaningful context and that seems what calculus is all about to me.
Yep, I agree. (I think most of us 'know' that using a skill in a meaningful and/or higher-level application is the best way to really get it, but I guess it's still hard to see that we can skip
the review when we know the students aren't 'there' yet.)
|
Regarding my return
Hello everyone. As you may recall, I am Shivam and have been fairly inactive in the recent times (especially due to the whole university transfer process). I am now successfully placed in the
Massachusetts Institute of Technology (Rogers Web Key still works in the US). Nowadays, I seem to have more time and plan to be more active on this forum.
Last edited by ShivamS (2014-03-15 02:04:56)
I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat
Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes
Young man, in mathematics you don't understand things. You just get used to them. - Neumann
Re: Regarding my return
hi Shivam
Welcome back.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Regarding my return
Hi Shivam
Great to hear from you again!
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Regarding my return
Hi Shivamcoder3013;
How have you been?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Regarding my return
Hello Bob, Bobbym and Stefy. Been faring fairly well I suppose. The midterm exam is coming soon, so preparing has begun!
Last edited by ShivamS (2014-03-15 02:05:09)
I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat
Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes
Young man, in mathematics you don't understand things. You just get used to them. - Neumann
Re: Regarding my return
Study hard for them. Nothing beats being prepared.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Regarding my return
Unless you have an absentminded professor allowing you to cheat.
I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat
Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes
Young man, in mathematics you don't understand things. You just get used to them. - Neumann
Re: Regarding my return
I know you are kidding. Regarding cheating, I was the laziest and worst student that ever stood upright and resembled a human being. And even I never cheated.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Regarding my return
I seem to think that you are somewhat in proximity of 92 years of age. Considering that, with the fact that you have near 50000 posts related to advanced mathematical, scientific and
humanities-related concepts, (plus about 5000 other communicative posts) in merely the time span from 2010-2013, you do not seem to have been at any point of time a lazy and/or bad student. As for
resembling a human being, I cannot comment on physical appearance alone.
Last edited by Shivamcoder3013 (2013-02-11 07:18:23)
I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat
Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes
Young man, in mathematics you don't understand things. You just get used to them. - Neumann
Re: Regarding my return
That is a good point. I am not really lazy. I like the forum so I am here. I did not like school so I was somewhere else.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Regarding my return
I do not believe that you can be active on this forum, while gaining the respect of being an intelligent man, without receiving a proper education from school (I would imply that in your youth, the
sheer length of technological advancement would be fairly low, hence leaving no chance for being self-educated through the Internet (Moreover, Tim Berners Lee was probably not alive at that point).
Books could have been a source of intelligence, however it would not fully equip you with the maturity received through school.
I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat
Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes
Young man, in mathematics you don't understand things. You just get used to them. - Neumann
Re: Regarding my return
That is interesting but I disagree with much of that.
If I gave you two scenarios, one in which I had a first rate education or another in which I am totally self taught, you would choose?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Regarding my return
The generation is what I would be most dependent upon. Currently, I would pick self-taught. That is because there are ~15 hours in a day where we are inactive, if we had no educational establishment
to visit. 15 hours a day reading educational books, productive online articles et would most definitely make you more intelligent then school. In the past however, schools were more rigorous, there
were fewer educational resources etc. However of course, combining both 4 hours of school and ~8 hours of self-teaching would make you an intelligent person of likes not seen by anyone. And as you
know, if there were fewer officially educated persons (being educated from schools), we would have become extinct a long, long time ago, at a galaxy close, close at hand.
I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat
Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes
Young man, in mathematics you don't understand things. You just get used to them. - Neumann
Re: Regarding my return
In the past however, schools were more rigorous, there were fewer educational resources etc.
First of all this self taught person would need to come from a family that had a father who was brilliant. He would have to sit around the table from when he was around 4 - 5 years old and have
discussions with his father and grandfather.
He would have needed to have been raised differently then the other kids of his time and he needed to not go to a US public school.
This kid would have spent much of his time in the library reading. When he was home he was experimenting with chemistry sets, microscopes and electronics. When other kids were given toys he was given
books to read and chess sets to play with the adults.
He would have needed a high IQ, over 150. That era was perfect for producing many children like that. Now the question is how could anyone confuse that kid with me?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Regarding my return
Shivamcoder3013 wrote:
The generation is what I would be most dependent upon. Currently, I would pick self-taught. That is because there are ~15 hours in a day where we are inactive, if we had no educational
establishment to visit. 15 hours a day reading educational books, productive online articles et would most definitely make you more intelligent then school. In the past however, schools were more
rigorous, there were fewer educational resources etc. However of course, combining both 4 hours of school and ~8 hours of self-teaching would make you an intelligent person of likes not seen by
anyone. And as you know, if there were fewer officially educated persons (being educated from schools), we would have become extinct a long, long time ago, at a galaxy close, close at hand.
First of all, studying 15 hours a day, online or not, would most surely make your head explode.
Second, I do not agree that the combination is surely going to produce a more capable (intentionally not using the word intelligent) person. Look at bobby. He is self-taught and he turned out very
good mathematics-wise.
Third, I am not sure what you mean by an educated person. We managed to survive as cavemen, so eventually, even if we did have less educated persons than we do now, that would eventually change.
Fourth, I must say that I am getting very existential as of late. From time to time, I wonder if anything anyone does makes any sense, knowing that in a billion years (a long time, indeed, but real)
the Sun will "explode", bringing the existence of life, and humans with it, to an end. Even if we do survive that, somehow, eventually we will die out and all we did will be lost. Nothing is forever.
Sorry for the morbid thoughts, but I've had that thought for a short while now.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Regarding my return
I am opposed to that viewpoint. I first heard of it in a book by Robert Ringer. It is a modern ploy to justify any course of action because it does not matter what we do. As if our sun is what we
I am not saying this is your opinion but I have noticed that certain unscrupulous lizards have been spreading that to the young in the hopes of influencing them.
That bobby fellow sure sounds impressive and I would like to meet him someday.
Last edited by bobbym (2013-02-12 00:13:25)
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Regarding my return
Well, I think him and I have different views. He wants to justify any action. I am thinking more that any action is just insignificant, but not justifiable.
I am sure you will meet bobby. Maybe if you heat up some sand...
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Regarding my return
Your actions are not insignificant. If you remember the basis for the butterfly effect you will see that even the tiny fluctuations of a butterflies wings in Beijing can make storms in New York City.
Heat up some sand?
Look at bobby. He is self-taught and he turned out very good mathematics-wise.
No one is self taught. He had lots of good teachers. Most of them were not in classrooms.
If he appears at times good at math it is an illusion. Remember, he is all smoke and mirrors, this is a consequence of the Feynman effect.
Last edited by bobbym (2013-02-11 08:48:39)
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Super Member
Re: Regarding my return
Shivamcoder3013 wrote:
The generation is what I would be most dependent upon. Currently, I would pick self-taught. That is because there are ~15 hours in a day where we are inactive, if we had no educational
establishment to visit. 15 hours a day reading educational books, productive online articles et would most definitely make you more intelligent then school. In the past however, schools were more
rigorous, there were fewer educational resources etc. However of course, combining both 4 hours of school and ~8 hours of self-teaching would make you an intelligent person of likes not seen by
anyone. And as you know, if there were fewer officially educated persons (being educated from schools), we would have become extinct a long, long time ago, at a galaxy close, close at hand.
|
Math Help
March 10th 2010, 07:50 PM #1
Junior Member
Jan 2010
Related Rates
As a circular griddle is being heated, its diameter changes at a rate of 0.01 cm/min. When the diameter is 30 cm, at what rate is the area of the griddle changing?
I know my formula is V=PI*r^2
and i know since i am given a diameter i would use (h/2) for my radius.
but once i take my derivative of the area formula and plug in my diameter and my DD/DR as .01 cm/min, i am given .942...
the answer is .471
what am i doing wrong?
As a circular griddle is being heated, its diameter changes at a rate of 0.01 cm/min. When the diameter is 30 cm, at what rate is the area of the griddle changing?
I know my formula is V=PI*r^2
and i know since i am given a diameter i would use (h/2) for my radius.
but once i take my derivative of the area formula and plug in my diameter and my DD/DR as .01 cm/min, i am given .942...
the answer is .471
what am i doing wrong?
Your formula is: $A = \pi r^2 = \frac {\pi D^2}4$
And so, by differentiating implicitly with respect to time, you get: $\frac {dA}{dt} = \frac {\pi D}2 \cdot \frac {dD}{dt}$
Now, you want $\frac {dA}{dt}$ when $D = 30$ and $\frac {dD}{dt} = 0.01$, just plug those in and compute.
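As a quick check of that last step (simply substituting the given values into the formula above):

    import math

    D, dD_dt = 30, 0.01                     # cm, cm/min
    dA_dt = (math.pi * D / 2) * dD_dt
    print(round(dA_dt, 3))                  # 0.471 cm^2/min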
it was so obvious lol thanks man
|
"Trellis graph" is it standard term in graph theory ? What are its properties ?
In coding theory (convolutional codes) the graph called a "trellis diagram" is used to visualize something.
I wonder, is it a standard term in graph theory? The corresponding Wikipedia article is not convincing.
If it is a well-known graph - what are its properties (chromatic polynomial, spectrum, etc.)? What is known about it? What are the references?
Let me recall the definition of a trellis graph - it depends on an alphabet, say {0,1}, and on two integers "k" (the number of states = (alphabet size)^p for some "p") and "n".
The graph has k*n vertices. Each vertex corresponds to a word of length "p" over the alphabet. Say p=2; we have 4 words
00 00 ........ 00
01 01 ........ 01
10 10 ........ 10
11 11 ........ 11
These words are copied "n" times. Now (alphabet size) edges go out from each vertex. They connect the words X and Y if X stands in the l-th column, Y in the (l+1)-th copy, and Y can be obtained
by adding one symbol to the word "X" from the left and deleting the rightmost symbol of X.
E.g. X = 00 will be connected with Y=00 and Y=10
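To make the construction concrete, here is a small sketch (the function name and representation are my own choices, not standard notation) that builds the vertices and edges of such a trellis for a binary alphabet:

    from itertools import product

    def trellis(alphabet=("0", "1"), p=2, n=4):
        """Vertices are (column, word); edge (l, X) -> (l+1, Y) whenever
        Y = s + X[:-1] for some symbol s (prepend s, drop the last symbol)."""
        words = ["".join(w) for w in product(alphabet, repeat=p)]
        vertices = [(col, w) for col in range(n) for w in words]
        edges = [((col, x), (col + 1, s + x[:-1]))
                 for col in range(n - 1) for x in words for s in alphabet]
        return vertices, edges

    verts, edges = trellis()
    print([b[1] for a, b in edges if a == (0, "00")])   # ['00', '10']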
graph-theory coding-theory
IIRC David Forney was the first to draw these graphs in order to make the correctness of the Viterbi decoding algorithm clear to all and sundry. He also coined the term "trellis". At least that's
the way the history was once told to me by another big name in coding theory. – Jyrki Lahtonen Jul 27 '12 at 21:35
1 Answer
Graph theory and trellis graphs are related, for example, in Chapter 2 of this MSc. Thesis, M. Stylianou, Evaluating the Network Survivability issue of K-best Paths through Graph
Theoretic Techniques, Univ. Cyprus, June, 2005.
Another interesting work is S.D. Nikolopoulos, Addressing network survivability issues by finding the K-best paths through a trellis graph, Proc. Sixteenth Annual Joint Conference of the
IEEE Computer and Communications Societies - INFOCOM '97, IEEE, Vol.1, pp. 370-377, 1997.
From the abstract,
*In this paper we aim to offer a solution in the selection of the K-best disjoint paths through a network by using graph theoretic techniques. The basic approach is to map an arbitrary
network graph into a trellis graph which allows the application of computationally efficient methods to find disjoint paths. *
In the Appendix, the author presents an algorithm named Net-to-Trellis ...with a help of an example the processes of transforming a network G(V,E, c) into a A trellis graph. ...
Thanks for the answer ! Some problem with link on "thesis" :( – Alexander Chervov Jul 28 '12 at 9:34
Sorry. Corrected. – Papiro Jul 28 '12 at 10:18
|
MathGroup Archive: July 2011 [00167]
Re: Numerical accuracy/precision - this is a bug or a feature?
• To: mathgroup at smc.vnet.net
• Subject: [mg120104] Re: Numerical accuracy/precision - this is a bug or a feature?
• From: "Oleksandr Rasputinov" <oleksandr_rasputinov at hmamail.com>
• Date: Fri, 8 Jul 2011 04:54:28 -0400 (EDT)
• References: <ius5op$2g7$1@smc.vnet.net> <ius7b6$30t$1@smc.vnet.net>
On Thu, 07 Jul 2011 12:33:49 +0100, Richard Fateman
<fateman at cs.berkeley.edu> wrote:
> On 7/6/2011 2:39 AM, Oleksandr Rasputinov wrote:
>> Precision (which, as defined by Mathematica, means relative uncertainty)
>> and accuracy (absolute uncertainty) are expressed as annotations after
>> the
>> number. In the special case of a number with a decimal point and no
>> annotation, the number is taken to be a machine precision real. I agree
>> that these definitions and the notational convention chosen by
>> Mathematica
>> are strange. However, there is nothing "improper" about it as a choice
>> of
>> formalism--at least, this is no worse a design choice than for
>> Mathematica
>> to have standardized on decimal notation for input and presentation of
>> numerical values rather than binary as it uses internally.
>> The purpose of this approach is as a crude but often adequate
>> approximation to interval arithmetic, whereby these (approximations of)
>> errors are carried through arithmetic operations using first-order
>> algebraic methods. When functions (such as N) that pay attention to
>> Precision and Accuracy (by Mathematica's definitions) see them
>> decreasing,
>> they increase the working precision so as to avoid numerical instability
>> being expressed in the final result. This is by no means intended to be
>> rigorous; it is merely a heuristic, but one that comes at little cost
>> and
>> works in many cases. Of course, if a user's own code treats this
>> approximation as somehow sacrosanct and ignores the precision
>> adjustments
>> necessary during the calculation while taking the final answer as
>> correct,
>> it is more likely that the approximation will have fallen apart
>> somewhere
>> down the line.
>> If you don't like significance arithmetic, you have (at least) two other
>> options at hand: either work in fixed precision ($MinPrecision
>> $MaxPrecision = prec) or use interval arithmetic. These have their own
>> drawbacks, of course (most notably that Mathematica tacitly assumes all
>> intervals are uncorrelated), but your hand isn't forced either way and
>> you
>> may even use all three methods simultaneously if you wish. Alternatively
>> you may program your own, more accurate algebraic or Monte Carlo error
>> propagation methods if you prefer.
>> ....
> This is an excellent summary of Mathematica's approach to arithmetic on
> numbers. Unfortunately many people come to use Mathematica with their
> own notions of numbers, accuracy, precision, and equality. These words
> are redefined in a non-standard way in Mathematica, leading to
> unfortunate situations sometimes. "unexplainable" behavior. confusion.
> Or worse, erroneous results silently delivered and accepted as true by a
> user, who "knows" about precision, accuracy, floating-point arithmetic,
> etc.
Apart from the curious definitions given to Precision and Accuracy (one
imagines ApproximateRelativeError and ApproximateAbsoluteError were
considered too verbose), I do not think Mathematica's way of doing things
is particularly arbitrary or confusing in the broader context of
multiprecision arithmetic. Mathematically, finite-precision numbers
represent distributions over finite fields, and they therefore possess
quantized upper and lower bounds, as well as quantized expectation values.
Strictly, then, any two such distributions cannot be said to be equal if
they represent numbers of different precision: they are then distributions
over two entirely different fields, irrespective of whether their
expectation values may be equal.
However, this definition is not very useful numerically and we are usually
satisfied in practice that two finite-precision numbers equal if the
expectation values are equal within quantization error. Note that the
question of true equality for numbers of different precisions, i.e. the
means of the distributions being equal, is impossible to resolve in
general given that the means (which represent the exact values) are not
available to us. Heuristically, the mean should be close, in a relative
sense, to the expectation value, hence the tolerance employed by Equal;
the exact magnitude of this tolerance may perhaps be a matter for debate
but either way it is set using Internal`$EqualTolerance, which takes a
machine real value indicating the number of decimal digits' tolerance that
should be applied, i.e. Log[2]/Log[10] times the number of least
significant bits one wishes to ignore. This setting has been discussed in
this forum at least once in the past: see e.g.
Note that if one wishes to be more rigorous when determining equality,
SameQ operates in a similar manner to Equal for numeric comparands, except
that its tolerance is 1 (binary) ulp. This is also adjustable, via
In regard to erroneous results: undoubtedly it is a possibility. However,
one would expect that an approximate first order method for dealing with
error propagation should at least be better in the majority of cases than
a zeroth-order method such as working in fixed precision. As stated
previously, if one desires more accurate approximations then one is in any
case free to implement them, although given the above it should be clear
that all that is generally possible within the domain of finite-precision
numbers is a reasonable approximation unless other information is
available from which to make stronger deductions. I will also note that
none of the example "problems" in this present topic are anything directly
to do with significance arithmetic; they instead represent a combination
of confusion due to Mathematica's (admittedly confusing) choice of
notation, combined with an apparent misunderstanding of concepts related
to multiprecision arithmetic in general.
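To illustrate the distinction being drawn here (a toy Python sketch of my own, not Mathematica's implementation; the class and its propagation rule are purely illustrative), a first-order method carries an error estimate through each operation, whereas fixed precision simply ignores it:

    class Approx:
        """A value with a crude first-order absolute error estimate."""
        def __init__(self, value, err):
            self.value, self.err = value, err
        def __add__(self, other):
            # first order: absolute errors add
            return Approx(self.value + other.value, self.err + other.err)
        def __mul__(self, other):
            # d(xy) ~ |y| dx + |x| dy, with the second-order term ignored
            return Approx(self.value * other.value,
                          abs(self.value) * other.err + abs(other.value) * self.err)
        def __repr__(self):
            return f"{self.value} +/- {self.err:g}"

    x, y = Approx(1.000, 0.001), Approx(0.999, 0.001)
    print(x + y)         # 1.999 +/- 0.002
    print((x + y) * x)   # the error estimate grows with each operation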
> WRI argues that this is a winning proposition. Perhaps Wolfram still
> believes that someday all the world will use Mathematica for all
> programming purposes and everyone will accept his definition of terms
> like Precision and Accuracy, and that (see separate thread on how to
> write a mathematical paper) it will all be natural and consistent.
> (or that people who want to hold to the standard usage will be forced to
> use something like SetGlobalPrecision[prec_]:=
> $MaxPrecision=MinPrecision=prec.
> I believe this is routinely used by people who find Mathematica's
> purportedly "user-friendly" amateurish error control to be hazardous.
> )
> .........
> 'When I use a word,' Humpty Dumpty said, in rather a scornful tone, 'it
> means just what I choose it to mean - neither more nor less.'
> 'The question is,' said Alice, 'whether you can make words mean so many
> different things.'
> 'The question is,' said Humpty Dumpty, 'which is to be master - that's
> all.'
|
Math Forum Discussions
Topic: Euler's Equation and the Reality of Nature.
Replies: 2 Last Post: Feb 17, 2013 10:04 AM
Euler's Equation and the Reality of Nature.
Posted: Feb 14, 2013 2:51 AM
Mr. Dexter Sinister wrote:
? I understand Euler's Identity,
and I know what it means, and I know how to prove it,
there's nothing particularly mystical about it,
it just demonstrates that exponential, trigonometric,
and complex functions are related.
Given what we know of mathematics it shouldn't surprise
anyone that its various bits are connected.
It would be much more surprising if they weren't, that would
almost certainly mean something was badly wrong somewhere."
Mr. Gary wrote:
Mathematics is NOT science.
Science is knowledge of the REAL world.
Mathematics is an invention of the mind.
Many aspects of mathematics have found application
in the real world, but there is no guarantee.
Any correlation must meet the ultimate test:
does it explain something about the real world?
As an electrical engineer I used the generalized
Euler's equation all the time in circuit analysis:
exp(j*theta) = cos(theta) + j*sin(theta).
So it works at that particular level in electricity.
Does it work at other levels, too?
Logic cannot prove it.
It must be determined by experiment, not by philosophizing.
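As a purely numerical aside (my own quick check, not part of either poster's argument), the generalized form quoted above is easy to verify to machine precision:

    import cmath, math

    theta = 0.7
    print(abs(cmath.exp(1j * theta) - complex(math.cos(theta), math.sin(theta))) < 1e-12)  # True
    print(cmath.exp(1j * math.pi) + 1)   # ~0 up to floating-point error (Euler's identity)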
Thinking about their posts I wrote a brief article:
Euler's Equation and Reality.
Euler's Equation as a mathematical reality.
Euler's identity is "the gold standard for mathematical beauty'.
Euler's identity is "the most famous formula in all mathematics".
". . . this equation is the mathematical analogue of Leonardo
da Vinci's Mona Lisa painting or Michelangelo's statue of David"
"It is God's equation.", "It is a mathematical icon."
. . . . etc.
Euler's Equation as a physical reality.
"it is absolutely paradoxical; we cannot understand it,
and we don't know what it means, . . . . ."
"Euler's Equation reaches down into the very depths of existence"
"Is Euler's Equation about fundamental matters?"
"It would be nice to understand Euler's Identity as a physical process
using physics."
"Is it possible to unite Euler's Identity with physics, quantum
physics?"
My aim is to understand the reality of nature.
Can Euler's equation explain me something about reality?
To give the answer to this question I need to bind
Euler's equation with an object - particle.
Can it be math- point or string- particle or triangle-particle?
No, Euler's formula has quantity (pi) which says me that
the particle must be only a circle .
Now I want to understand the behavior of circle - particle and
therefore I need to use special relativity and quantum theories.
These two theories tell me that the reason for the circle - particle's
movement is its own inner impulse (h) or (h*=h/2pi).
Using its own inner impulse (h) circle - particle moves
( as a wheel) in a straight line with constant speed c = 1.
We call such particle - ?photon?.
From Earth ? gravity point of view this speed is maximally.
From Vacuum point of view this speed is minimally.
In this movement quantum of light behave as a corpuscular (no charge).
Using its own inner impulse / intrinsic angular momentum
( h* = h / 2pi ) circle - particle rotates around its axis.
In such movement particle has charge, produce electric waves
( waves property of particle) and its speed ( frequency) is : c>1.
We call such particle - ? electron? and its energy is: E=h*f.
In this way I (as a peasant ) can understand the reality of nature.
I reread my post.
My God, that is a naïve peasant's explanation.
It is absolutely not scientific, not professor's explanation.
Would a learned man adopt such simple and naive explanation?
Hmm, . . . problem.
In any way, even Mr. Dexter Sinister and Mr. Gary
wouldn't agree with me, I want to say them
' Thank you for emails and cooperation?
Best wishes.
Israel Sadovnik Socratus.
'They would play a greater and greater role in mathematics -
and then, with the advent of quantum mechanics in the twentieth
century, in physics and engineering and any field that deals with
cyclical phenomena such as waves that can be represented by
complex numbers. For a complex number allows you to represent
two processes such as phase and wavelength simultaneously -
and a complex exponential allows you to map a straight line
onto a circle in a complex plane.'
/ Book: The great equations. Chapter four.
The gold standard for mathematical beauty.
Euler's equation. Page 104. /
Euler's e^(i*Pi) + 1 = 0 is an amazing equation, not in-and-of itself,
but because it sharply points to our utter ignorance of the
simplest mathematical and scientific fundamentals.
The equation means that in flat Euclidean space, e and Pi happen
to have their particular values to satisfy any equation that relates
their mathematical constructs. In curved space, e and Pi vary.
/ Rasulkhozha S. Sharafiddinov . /
|
Here's the question you clicked on:
Please explain the types of these metals and Non - Metals
|
11.4 Other problems
Once sufficiently many relations between a collection of generators of the group G has been found, we have seen above that one can ``read off'' many of the properties of G such as its isomorphism
class. The above algorithms for finding relations can be applied directly to solving other problems or questions regarding the group G.
To find the order of the group G we use the above techniques to find (in succession) the order n[i] of a randomly chosen element g[i] in the group G/ < g[1],..., g[i - 1] >. This probabilistically
determines the order of G as the product of the n[i].
Given h and g in G and the fact that h is a power of g we determine this power (the Discrete Log problem) by finding a minimal relation between g and h.
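The "minimal relation" technique itself is not spelled out in this section, so purely as an illustration of the kind of computation involved, here is a sketch of the standard baby-step giant-step method for the Discrete Log problem in a group whose order m is known (the helper names are my own):

    from math import isqrt

    def group_pow(g, k, mul, identity):
        """g^k by repeated squaring under the group product `mul`."""
        result, base = identity, g
        while k:
            if k & 1:
                result = mul(result, base)
            base = mul(base, base)
            k >>= 1
        return result

    def discrete_log(g, h, m, mul, identity):
        """Return x with g^x = h, assuming h is a power of g and g^m = identity."""
        s = isqrt(m) + 1
        baby, e = {}, identity
        for j in range(s):                           # baby steps: remember g^j
            baby.setdefault(e, j)
            e = mul(e, g)
        giant = group_pow(g, m - s, mul, identity)   # g^(-s), since g^m = identity
        e = h
        for i in range(s + 1):                       # giant steps: h * g^(-i*s)
            if e in baby:
                return i * s + baby[e]
            e = mul(e, giant)
        return None

    # Example: the multiplicative group mod 101, which has order 100
    mul = lambda a, b: (a * b) % 101
    print(discrete_log(2, pow(2, 47, 101), 100, mul, 1))   # 47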
Kapil Hari Paranjape 2002-10-20
|
Xin Zhou
Address: Department of Mathematics. Stanford University. Stanford, CA 94305-2125
E-mail: xzhou08 "at" math.stanford.edu
Office: Mathematics Department, Building 380, 381B
Curriculum Vitae
I am a fifth year PhD student in Math department at Stanford University. My advisor is Richard M. Schoen.
Research Interest:
Geometric analysis; Minimal surfaces; Mathematical General Relativity.
Min-max minimal hypersurface in $(M^{n+1}, g)$ with $Ric_{g}>0$ and $2\leq n\leq 6$. arXiv:1210.2112.
Mass angular momentum inequality for axisymmetric vacuum data with small trace. submitted, arXiv:1209.1605.
Convexity of reduced energy and mass angular momentum inequalities.(jointed with R. Schoen) submitted, arXiv:1209.0019.
On the existence of min-max minimal surfaces of genus g≥ 2. submitted, arXiv:1111.6206.
On the existence of min-max minimal torus. J. Geom. Anal.(2010)20,1026-1055.
Fall 2012. Teaching Assistant. Math 51: Linear Algebra and Multivariable Calculus.
Winter 2012. Teaching Assistant. Math 53: Ordinary Differential Equations with Linear Algebra.
Fall 2010. Teaching Assistant. Math 51: Linear Algebra and Multivariable Calculus.
Winter 2010. Teaching Assistant. Math 51: Linear Algebra and Multivariable Calculus.
Fall 2011. Course Assistant. Math 161: Set theory.
Summer 2011. Course Assistant. Math 19: Calculus.
Winter 2011. Course Assistant. Math 173: Theory of Partial Differential Equations.
Spring 2010. Course Assistant. MATH 217A: Differential Geometry.
Spring 2009. Course Assistant. MATH 132: Partial Differential Equations II.
Fall 2008. Course Assistant. Math 220: Partial Differential Equations of Applied Mathematics.
|
the encyclopedic entry of binary search
A binary search algorithm (or binary chop) is a technique for locating a particular value in a sorted list of values. To cast this in the frame of the guessing game (see Example below), realize that we seek to guess the index, or numbered place, of the value in the list. The method makes progressively better guesses, and closes in on the location of the sought value by selecting the middle element in the span (which, because the list is in sorted order, is the median value), comparing its value to the target value, and determining if the selected value is greater than, less than, or equal to the target value. A guessed index whose value turns out to be too high becomes the new upper bound of the span, and if its value is too low that index becomes the new lower bound. Only the sign of the difference is inspected: there is no attempt at an interpolation search based on the size of the difference. Pursuing this strategy iteratively, the method reduces the search span by a factor of two each time, and soon finds the target value or else determines that it is not in the list at all. A binary search is an example of a dichotomic divide and conquer search algorithm.
Finding the index of a specific value in a sorted list is useful because, given the index, other data structures will contain associated information. Suppose a data structure containing the classic
collection of name, address, telephone number and so forth has been accumulated, and an array is prepared containing the names, numbered from one to N. A query might be: what is the telephone number
for a given name X. To answer this the array would be searched and the index (if any) corresponding to that name determined, whereupon the associated telephone number array would have X's telephone
number at that index, and likewise the address array and so forth. Appropriate provision must be made for the name not being in the list (typically by returning an index value of zero), indeed the
question of interest might be only whether X is in the list or not.
If the list of names is in sorted order, a binary search will find a given name with far fewer probes than the simple procedure of probing each name in the list, one after the other in a linear
search, and the procedure is much simpler than organizing a hash table. However, once created, searching with a hash table may well be faster, typically averaging just over one probe per lookup. With
a non-uniform distribution of values, if it is known that some few items are much more likely to be sought for than the majority, then a linear search with the list ordered so that the most popular
items are first may do better than binary search. The choice of the best method may not be immediately obvious. If, between searches, items in the list are modified or items are added or removed,
maintaining the required organisation may consume more time than the searches.
Number guessing game
This rather simple game begins something like "I'm thinking of an integer between forty and sixty inclusive, and to your guesses I'll respond 'High', 'Low', or 'Yes!' as might be the case." Supposing
N is the number of possible values (here, twenty-one as "inclusive" was stated), then at most $\lceil \log_2 N \rceil$ questions are required to determine the number, since each question halves the search space. Note that one less question (iteration) is required than for the general algorithm, since the number is
already constrained to be within a particular range.
Even if the number we're guessing can be arbitrarily large, in which case there is no upper bound N, we can still find the number in at most $2\lceil \log_2 k \rceil$ steps (where k is the (unknown)
selected number) by first finding an upper bound by repeated doubling. For example, if the number were 11, we could use the following sequence of guesses to find it: 1, 2, 4, 8, 16, 12, 10, 11
One could also extend the technique to include negative numbers; for example the following guesses could be used to find −13: 0, −1, −2, −4, −8, −16, −12, −14, −13
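A sketch of that doubling-then-bisecting strategy (sometimes called exponential or galloping search; the predicate-based formulation here is just one way to write it):

    def guess_number(is_greater_than):
        """Find an unknown positive integer k using only "is k greater than x?" queries."""
        hi = 1
        while is_greater_than(hi):      # 1, 2, 4, 8, ... until hi >= k
            hi *= 2
        lo = hi // 2                    # k now lies in (lo, hi]
        while lo + 1 < hi:
            mid = (lo + hi) // 2
            if is_greater_than(mid):
                lo = mid
            else:
                hi = mid
        return hi

    k = 11
    print(guess_number(lambda x: k > x))   # 11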
Word lists
People typically use a mixture of the binary search and interpolative search algorithms when searching a telephone book: after the initial guess we exploit the fact that the entries are sorted and can rapidly find the required entry. For example when searching for Smith, if Rogers and Thomas have been found, one can
flip to a page about halfway between the previous guesses. If this shows Samson, we know that Smith is somewhere between the Samson and Thomas pages so we can bisect these.
Even if we do not know a fixed range the number k falls in, we can still determine its value by asking simple yes/no questions of the form "Is k greater than x?" for some number x. As a simple consequence of this, if you can answer the question "Is this integer property greater than a given value?" in some amount of time then you can find the value of that property in the same amount of time with an added factor of log2 of that value. This is called a reduction, and it is because of this kind of reduction that most complexity theorists concentrate on decision problems, algorithms that produce a simple yes/no answer.
For example, suppose we could answer "Does this n × n matrix have determinant larger than k?" in O(n^2) time. Then, by using binary search, we could find the (ceiling of the) determinant itself in O(n^2 log d) time, where d is the determinant; notice that d is not the size of the input, but the size of the output.
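As a small worked instance of this kind of reduction (an added example, not from the original text): the integer square root of n can be recovered purely from answers to "is m·m greater than n?":

// Binary search on the answer: invariant lo*lo <= n < hi*hi.  Assumes n >= 0;
// for very large n the product mid*mid would need care to avoid overflow.
static long integerSqrt(long n) {
    long lo = 0, hi = n + 1;
    while (lo + 1 < hi) {
        long mid = lo + (hi - lo) / 2;
        if (mid * mid > n) hi = mid; else lo = mid;
    }
    return lo;                       // the largest integer whose square does not exceed n
}

The number of "greater than" questions asked is proportional to log2 of n, which is the added factor mentioned above.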
The method
In order to discuss the method in detail, a more formal description is necessary. The basic idea is that there is a data structure represented by array A in which individual elements are identified
as A(1), A(2),…,A(N) and may be accessed in any order. The data structure contains a sub-element or data field called here Key, and the array is ordered so that the successive values A(1).Key ≤ A
(2).Key and so on. The requirement is that given some value x, find an index p (not necessarily the one and only) such that A(p).Key = x.
To begin with, the span to be searched is the full supplied list of elements, as marked by variables L and R, whose values are changed with each iteration of the search process. Note that the division by two is integer division, with any remainder lost, so that 3/2 comes out as 1, not 1½. The search finishes either because the value has been found or because, the span having shrunk to nothing, the specified value is not in the list.
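A minimal sketch of this method in Java, assuming the keys are simply the array elements themselves (so A(p).Key is a[p]) and using element 1 as the first entry, element 0 being unused:

// Exclusive bounds: the span still to be searched is (L+1) .. (R-1).
// Returns the index of a match, or 0 for "Not found".
static int binarySearch(int[] a, int n, int x) {   // a[1..n] sorted ascending; a[0] ignored
    int L = 0, R = n + 1;
    while (true) {
        int p = (R - L) / 2;                       // integer division, remainder lost
        if (p <= 0) return 0;                      // empty span: report "Not found"
        p = L + p;                                 // probe position, strictly between L and R
        if (x < a[p]) R = p;                       // x, if present, lies before position p
        else if (x > a[p]) L = p;                  // x, if present, lies after position p
        else return p;                             // found at position p
    }
}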
That it works
The method relies on and upholds the notion that
If x is to be found, it will be amongst elements (L + 1) to (R − 1) of the array.
The initialisation of L and R to 0 and N + 1 makes this merely a restatement of the supplied problem, that elements 1 to N are to be searched, so the notion is established to begin with. The first
step of each iteration is to check that there is something to search, which is to say whether there are any elements in the search span (L + 1) to (R − 1). The number of such elements is (R − L − 1)
so computing (R − L) gives (number of elements + 1); halving that number (with integer division) means that if there was one element (or more) then p = 1 (or more), but if none p = 0, and in that
case the method terminates with the report "Not found". Otherwise, for p > 0, the search continues with p:=L + p, which by construction is within the bounds (L + 1) to (R − 1). That this position is
at or adjacent to the middle of the span is not important here, merely that it is a valid choice.
Now compare x to A(p).Key. If x = A(p).Key then the method terminates in success. Otherwise, suppose x < A(p).Key. If so, then because the array is in sorted order, x will also be less than all later
elements of the array, all the way to element (R − 1) as well. Accordingly, the value of the right-hand bound index R can be changed to be the value p, since, by the test just made, x < A(p).Key and
so, if x is to be found, it will be amongst elements earlier than p, that is (p − 1) and earlier. And contrariwise, for the case x > A(p).Key, the value of L would be changed. Thus, whichever bound
is changed the ruling notion is upheld, and further, the span remaining to be searched is reduced. If L is changed, it is changed to a higher number (at least L + 1), whereas if R is changed, it is
to a lower number (at most R − 1) because those are the limits for p.
Should there have been just one value remaining in the search span (so that L + 1 = p = R − 1), and x did not match, then depending on the sign of the comparison either L or R will receive the value
of p and at the start of the next iteration the span will be found to be empty.
Accordingly, with each iteration, if the search span is empty the result is "Not found", otherwise either x is found at the probe point p or the search span is reduced for the next iteration. Thus
the method works, and so can be called an Algorithm.
That it is fast
The interval being searched is successively halved (actually slightly better than halved) in width with each iteration, so the number of iterations required is at most the base-two logarithm of N – zero for empty lists. More precisely, each probe both removes one element from further consideration and selects one or the other half of the interval for further searching.
Suppose that the number of items in the list is odd. Then there is a definite middle element at p = (N + 1)/2 – this is proper division without discarding remainders. If that element's key does not
match x, then the search proceeds either with the first half, elements 1 to p − 1 or the second, elements p + 1 to N. These two spans are equal in extent, having (N − 1)/2 elements. Conversely,
suppose that the number of elements is even. Then the probe element will be p = N/2 and again, if there is no match one or the other of the subintervals will be chosen. They are not equal in size;
the first has N/2 − 1 elements and the second (elements p + 1 to N as before) has N/2 elements.
Now supposing that it is just as likely that N will be even as odd in general, on average an interval with N elements in it will become an interval with (N − 1)/2 elements.
Working in the other direction, what might be the maximum number of elements that could be searched in p probes? Clearly, one probe can check a list with one element only (to report a match, or, "not
found") and two probes can check a list of three elements. This is not very impressive because a linear search would only require three probes at most for that. But now the difference increases
exponentially. Seven elements can be checked with three probes, fifteen with four, and so forth. In short, to search N elements requires at most p probes, where p is the smallest integer such that $2^p > N$. Taking the binary logarithm of both sides gives $p > \log_2(N)$; equivalently, $\lfloor \log_2(N) \rfloor + 1$ is the maximum number of probes required to search N elements.
Average performance
There are two cases: for searches that will fail because the value is not in the list, the search interval must be successively halved until no more elements remain, and this process will require at most the number of probes just defined, or one less. This latter occurs because the search interval is not in fact exactly halved, and, depending on the value of N and which elements of the list the absent value falls between, the interval may be closed early.
For searches that will succeed because the value is in the list, the search may finish early because a probed value happens to match. Loosely speaking, half the time the search will finish one
iteration short of the maximum and a quarter of the time, two early. Consider then a test in which a list of N elements is searched once for each of the N values in the list, and determine the number
of probes n for all N searches.
N   :  1    2    3    4    5     6     7     8     9    10     11     12     13
n/N :  1   3/2  5/3  8/4  11/5  14/6  17/7  21/8  25/9  29/10  33/11  37/12  41/13
    ≈  1   1.5  1.66 2    2.2   2.33  2.43  2.63  2.78  2.9    3      3.08   3.15
In short, $\log_2(N) - 1$ is about the expected number of probes in an average successful search, and the worst case is $\log_2(N)$, just one more probe. If the list is empty, no probes at all are made.
Suppose the list to be searched contains N even numbers (say, 2, 4, 6, 8 for N = 4) and a search is done for each of the values 1, 2, 3, 4, 5, 6, 7, 8, and 9. The even numbers will be found, and the average number of iterations can be calculated as described. The odd numbers will not be found, and this collection of test values probes every possible position (with regard to the numbers that are in the list) at which a value might be "not found", so an average can again be calculated. The maximum value is, for each N, the greatest number of iterations required amongst the various trial searches of those N elements. The first plot shows the iteration counts for N = 1 to 63 (with N = 1, all results are 1), and the second plot is for N = 1 to 32767.
Thus binary search is a logarithmic algorithm and executes in O(log N) time. In most cases it is considerably faster than a linear search. It can be implemented using iteration (as shown above) or recursion. In some languages it is more elegantly expressed recursively; however, in some C-based languages tail recursion is not eliminated and the recursive version requires more stack space.
Binary search can interact poorly with the memory hierarchy (i.e. caching), because of its random-access nature. For in-memory searching, if the interval to be searched is small, a linear search may
have superior performance simply because it exhibits better locality of reference. For external searching, care must be taken or each of the first several probes will lead to a disk seek. A common
technique is to abandon binary searching for linear searching as soon as the size of the remaining interval falls below a small value such as 8 or 16 or even more in recent computers. The exact value
depends entirely on the machine running the algorithm.
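A sketch of that hybrid in Java (zero-based indexing; the cutoff of 16 is only the kind of machine-dependent figure mentioned above, not a recommendation):

// Bisect until the remaining span is small, then finish with a linear scan.
static int hybridSearch(int[] a, int x) {          // a sorted ascending
    int lo = 0, hi = a.length - 1;
    while (hi - lo > 16) {                         // invariant: if x is present it is in a[lo..hi]
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < x) lo = mid + 1; else hi = mid;
    }
    for (int i = lo; i <= hi; i++)                 // at most a handful of elements left to scan
        if (a[i] == x) return i;
    return -1;                                     // not found
}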
Notice that for multiple searches with a fixed value for N, then (with the appropriate regard for integer division), the first iteration always selects the middle element at N/2, and the second
always selects either N/4 or 3N/4, and so on. Thus if the array's key values are in some sort of slow storage (on a disc file, in virtual memory, not in the cpu's on-chip memory), keeping those three
keys in a local array for a special preliminary search will avoid accessing widely separated memory. Escalating to seven or fifteen such values will allow further levels at not much cost in storage.
On the other hand, if the searches are frequent and not separated by much other activity, the computer's various storage control features will more or less automatically promote frequently-accessed
elements into faster storage.
When multiple binary searches are to be performed for the same key in related lists, fractional cascading can be used to speed up successive searches after the first one.
There is no particular requirement that the array being searched has the bounds 1 to N. It is possible to search a specified range, elements first to last, instead of 1 to N. All that is necessary is that the initialisation be L:=first − 1 and R:=last + 1; then all proceeds as before.
In more complex contexts, it might be that the data structure has many sub-fields, such as a telephone number along with the name. An indexing array such as xref could be introduced so that A(xref(1)).Telephone ≤ A(xref(2)).Telephone ≤ … ≤ A(xref(N)).Telephone; "viewed through" array xref, the array can then be regarded as being sorted on the telephone number, and a search would be to find a given telephone number. In this case, A(i).Key would be replaced by A(xref(i)).Telephone and all would be as before. Thus, with auxiliary xref arrays, an array can be treated as if it is sorted in different ways without it having to be re-sorted for each different usage.
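A sketch of searching through such an index, with hypothetical field names (the record array is reduced here to a plain telephone array for brevity):

// xref[i] holds record numbers ordered so that telephone[xref[0]] <= telephone[xref[1]] <= ...
// Returns the record number whose telephone number matches, or -1 if none does.
static int findByTelephone(long[] telephone, int[] xref, long wanted) {
    int lo = 0, hi = xref.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        long key = telephone[xref[mid]];           // the A(xref(p)).Telephone of the text
        if (key < wanted) lo = mid + 1;
        else if (key > wanted) hi = mid - 1;
        else return xref[mid];
    }
    return -1;
}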
When a search returns the result "Not found", it may be helpful to have some indication as to where the missing value would be located so that the list can be augmented. A possible approach would be
to return the value −L (rather than just −1 or 0, say), the negative indicating failure. This however can conflict with the array indexing protocol, if it includes zero as a valid index (since of
course −0 = 0, and 0 would be a findable result) so caution is needed.
The elements of the list are not necessarily all unique. If one searches for a value that occurs multiple times in the list, the index returned will be of the first-encountered equal element, and
this will not necessarily be that of the first, last, or middle element of the run of equal-key elements but will depend on the positions of the values. Modifying the list even in seemingly unrelated
ways such as adding elements elsewhere in the list may change the result. To find all equal elements an upward and downward linear search can be carried out from the initial result, stopping each
search when the element is no longer equal. Thus, e.g. in a table of cities sorted by country, we can find all cities in a given country.
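A sketch of that expansion in Java, using the library's binary search to obtain some matching position first:

// Returns the inclusive index range [first, last] of elements equal to x, or null if absent.
static int[] equalRange(int[] a, int x) {          // a sorted ascending
    int hit = java.util.Arrays.binarySearch(a, x); // some (unspecified) matching position
    if (hit < 0) return null;
    int first = hit, last = hit;
    while (first > 0 && a[first - 1] == x) first--;            // walk down to the first equal key
    while (last < a.length - 1 && a[last + 1] == x) last++;    // and up to the last equal key
    return new int[] { first, last };
}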
A list of pairs (p,q) can be sorted based on just p. Then the comparisons in the algorithm need only consider the values of p, not those of q. For example, in a table of cities sorted on a column
"country" we can find cities in Germany by comparing country names with "Germany", instead of comparing whole rows. Such partial content is called a sort key.
Several algorithms closely related to or extending binary search exist. For instance, noisy binary search solves the same class of problems as regular binary search, with the added complexity that
any given test can return a false value at random. (Usually, the number of such erroneous results are bounded in some way, either in the form of an average error rate, or in the total number of
errors allowed per element in the search space.) Optimal algorithms for several classes of noisy binary search problems have been known since the late seventies, and more recently, optimal algorithms
for noisy binary search in quantum computers (where several elements can be tested at the same time) have been discovered.
There are many variations of the basic method, and they are easily confused.
Exclusive or inclusive bounds
The most significant differences are between the "exclusive" and "inclusive" forms of the bounds. This description uses the "exclusive" bound form, in which the span to be searched is (L + 1) to (R − 1); this may seem clumsy when the span to be searched could be described in the "inclusive" form, as L to R. The inclusive form may be attained by replacing all appearances of "L" by "(L − 1)" and "R" by "(R + 1)" and then rearranging. Thus, the initialisation L:=0 becomes (L − 1):=0, that is L:=1, and R:=N + 1 becomes (R + 1):=N + 1, that is R:=N. So far so good, but note now that the changes to L and R no longer simply transfer the value of p as appropriate: they must now be (R + 1):=p, that is R:=p − 1, and (L − 1):=p, that is L:=p + 1.
Thus, the gain of a simpler initialisation, done once, is lost to a more complex calculation that is done for every iteration. If that is not enough, the test for an empty span is also more complex, compared with the simplicity of checking that the value of p is zero. Nevertheless, this is the form found in many publications, such as Donald Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition.
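For comparison, the inclusive-bound form looks like this in Java (a sketch using zero-based indexing, so the initial span is 0 to length − 1, and −1 reports "not found"):

static int binarySearchInclusive(int[] a, int x) {
    int L = 0, R = a.length - 1;                   // span to search is L .. R inclusive
    while (L <= R) {                               // the emptiness test is now L > R
        int p = L + (R - L) / 2;
        if (x < a[p]) R = p - 1;                   // note the extra -1 relative to R := p
        else if (x > a[p]) L = p + 1;              // and the extra +1 relative to L := p
        else return p;
    }
    return -1;
}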
Locate the middle in one step
The other main variation is to combine the two-step calculation of the probe position into one step: that is, rather than
p:=(R − L)/2; p:=p + L;
use
p:=(L + R)/2;
which is indeed less work, but the saving is lost because rather than
if p <= 0
(which tests the just-computed value of p directly) the test becomes
if p <= L
and this second form requires the subtraction of the value of L. Pseudo machine code might read somewhat as follows:
Load R
Subtract L
IntDiv 2 Do not store this intermediate result into p yet.
JumpZN NotFound if result is Zero or Negative, Jump to NotFound.
Add L
Store p
That is, the (human) compiler has recognised that in the three statements
p:=(R − L)/2; if p <= 0 return(−L); p:=p + L;
the value of p is already in the working register and need not be stored and retrieved until the end, where it is stored once. The two-statement version,
p:=(L + R)/2; if p <= L return(−L);
would become
Load L
Add R
IntDiv 2
Store p Thus p:=(L + R)/2;
Subtract L Compare p to L for if p <= L
JumpZN NotFound
This version has the slight disadvantages of an unnecessary store to p in the NotFound case, and of the value of p no longer being in the working register ready to index the array in the A(p).Key reference of the next statement; but ordinarily it will involve the same number of actions, though computer compilers may not produce such code. Thus a tie: there is no advantage in locating the middle in one step. However, there is a very good reason not to use the two-statement form, due to the risk of overflow described below.
Deferred detection of equality
Because of the syntax difficulties discussed below (distinguishing the three states <, =, and > would otherwise have to be done with two comparisons), it is possible to use just one comparison per iteration and, at the end, once the span has been reduced to zero, test for equality. The comparison inside the loop then distinguishes only < from >=.
Midpoint and width
An entirely different variation involves abandoning the pointers L and R in favour of a current position p and a width w, where at each iteration p is adjusted by + or − w and w is halved. Professor Knuth remarks "It is possible to do this, but only if extreme care is paid to the details" – Section 6.2.1, page 414 of The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition, outlines such an algorithm, with the further remark "Simpler approaches are doomed to failure!"
Computer usage
"Although the basic idea of binary search is comparatively straightforward, the details can be surprisingly tricky…" — Professor
Donald Knuth
When Jon Bentley assigned it as a problem in a course for professional programmers, he found that an astounding ninety percent failed to code a binary search correctly after several hours of working
on it, and another study shows that accurate code for it is only found in five out of twenty textbooks (Kruse, 1999). Furthermore, Bentley's own implementation of binary search, published in his 1986
book Programming Pearls, contains an error that remained undetected for over twenty years.
Careful thought is required. The first issue is minor to begin with – how to signify "Not found". If the array is indexed 1 to N, then a returned index of zero is an obvious choice. However, some
computer languages (notably C et al) insist that arrays have a lower bound of zero. In such a case, the array might be indexed 0 to N − 1 and so a negative result would be chosen for "Not found".
Except that this can interfere with the desire to use unsigned integers for indexing. If the plan is to return −L for "not found", then unsigned integers cannot be used at all.
Numerical difficulties
More serious are the limitations of computer arithmetic. Variables have limited size; for instance, the (very common) sixteen-bit two's complement signed integer can only hold values from −32768 to +32767. (Exactly the same problems arise with unsigned or other sizes of integer, except that the digit strings are longer.) If the array is to be indexed with such variables, then the values first − 1 and last + 1 must both be representable; that is, first must be at least −32767 and last at most 32766. Using the "inclusive" form of the method won't help, because although R might safely hold the value 32767, should the sought value follow the last element of the array then eventually the search will probe that last element (with p safely holding 32767) and then attempt to store p + 1 into L, and fail. Similarly, the lower bound may be zero (for arrays whose indexing starts at zero), in which case a value of −1 must be representable, which precludes the use of unsigned integers. General-purpose testing is unlikely to present a test with these boundaries exercised, and so the detail can be overlooked. Formal proofs often do not attend to differences between computer arithmetic and mathematics.
It is of course unlikely that if the collections being searched number around thirty thousand that sixteen bit integers will be used, but a second problem arises much sooner. A common variation
computes the midpoint of the interval in one step, as p:=(L + R)/2; this means that the sum must not exceed the sixteen-bit limit for all to be well, and this detail is easily forgotten. The problem
may be concealed on some computers, which use wider registers to perform sixteen-bit arithmetic so that there will be no overflow of intermediate results. But on a different computer, perhaps not.
Thus, when tested and working code is transferred from a 16-bit environment (in which there were never more than about fifteen thousand elements to search) to a 32-bit environment and the problem sizes steadily inflate, the forgotten limit can suddenly become relevant again. This is the mistake that went unnoticed for decades, and it is found in many textbooks – they concentrate on the description of the method, in a context where integer limits are far away.
To put this in simple terms, if the computer variables can hold a value of 0 to max then the binary search method will only work for N up to max − 1, not for all possible values of N. Reducing a
limit from max to max − 1 is not an onerous constraint, however, if the variant form p:=(L + R)/2 is used, then should a search wander into indices beyond max/2 it will fail. Losing half the range
for N is worth avoiding.
Syntax difficulties
Another difficulty is presented by the absence in most computer languages of a three-way result from a comparison, which forces a comparison to be performed twice. The form is somewhat as follows:
if a < b then action1
else if a > b then action2
else action3;
About half the time, the first test will be true so that there will be only one comparison of a and b, but the other half of the time it will be false, and a second comparison is forced. This is so grievous that some versions are recast so as not to make a second test at all, thus not determining equality until the span has been reduced to zero, and thereby forgoing the possibility of early termination – remember that about half the time the search will happen upon a matching value one iteration short of the limit. The problem is exacerbated for floating-point variables that offer the special value NaN, which violates even the notion of equality: x = x is false if x has the value NaN.
Since Fortran does offer a three-way test – the arithmetic IF, which performs a GO TO to one of three nominated labels according to the sign of its arithmetic expression, the labels being numbers such as 1, 2, 3 and 4 at the start of statements – a version for searching an array of floating-point numbers can be written with a single three-way test per iteration. The flow chart of such a routine corresponds to the flow chart of the proven working method above, and so the code should work.
Well, yes and no… Leaving aside the problem of integer bounds, it remains possible that the routine might be presented with perverse parameters. For instance, N < 0 would cause trouble, and for this
reason the test is if (P <= 0) rather than if (P = 0) as it can be performed with no extra effort. Similarly, the values in array A might not in fact be in sorted order, or the actual array size
might be smaller than N. To check that the array is sorted requires inspecting every value, and this vitiates the whole reason for searching with a fast method. The proof of correctness relies on its
presumption that the array is sorted, etc., and not meeting these requirements is not the fault of the method. Deciding how much checking and what to do is a troublesome issue.
The most straightforward implementation is recursive: it searches the subrange dictated by the comparison, and is invoked with initial values that cover the whole array (for example, lo = 0 and hi = N − 1 in an inclusive, zero-based form). The tail recursion can then be eliminated, converting it to an iterative implementation.
Java Sample:
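The original sample is not reproduced here; the following is a minimal sketch of how the recursive and iterative forms might look in Java (zero-based, inclusive bounds; −1 means "not found"):

// Recursive form: searches a[lo..hi]; invoked as search(a, x, 0, a.length - 1).
static int search(int[] a, int x, int lo, int hi) {
    if (lo > hi) return -1;                        // empty span
    int mid = lo + (hi - lo) / 2;
    if (x < a[mid]) return search(a, x, lo, mid - 1);
    if (x > a[mid]) return search(a, x, mid + 1, hi);
    return mid;
}

// Iterative form, obtained by eliminating the tail recursion above.
static int search(int[] a, int x) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (x < a[mid]) hi = mid - 1;
        else if (x > a[mid]) lo = mid + 1;
        else return mid;
    }
    return -1;
}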
Single comparison per iteration
Some implementations do not include the early-termination branch, preferring to check only at the end of the search whether the value was found, as sketched below. Checking for the value during the search (as opposed to only at the end) may seem a good idea, but it adds computation to every iteration, and with an array of length N the probability of actually finding the value on the first probe is only 1/N, while the probability of finding it on any other probe before the end remains small. The end-checking form has two other advantages. At the end of the loop the lower index points to the first entry greater than or equal to x, so a new entry can be inserted if no match is found. Moreover, it requires only one comparison per iteration, which can be significant for complex keys in languages that do not allow the result of a comparison to be saved.
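A sketch of this single-comparison form in Java; the negative return value encodes the insertion point in the style of the Java library (an illustrative choice, not part of the original text):

// Only a[mid] < x is tested inside the loop; equality is checked once, at the end.
// On exit lo is the first position whose element is >= x.
static int searchDeferred(int[] a, int x) {        // a sorted ascending, zero-based
    int lo = 0, hi = a.length;                     // half-open span [lo, hi)
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < x) lo = mid + 1;              // the single comparison per iteration
        else hi = mid;
    }
    if (lo < a.length && a[lo] == x) return lo;    // found
    return -(lo + 1);                              // not found; -(insertion point) - 1
}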
In practice, one frequently uses a three-way comparison instead of two comparisons per loop. Also, real implementations using fixed-width integers with modular arithmetic need to account for the possibility of overflow. One frequently used technique is to compute the midpoint as mid := low + (high − low)/2, so that two smaller numbers are ultimately added:
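In Java this might look as follows (the unsigned-shift alternative is a well-known variant, mentioned here only for completeness):

static int midpoint(int low, int high) {           // assumes 0 <= low <= high
    return low + (high - low) / 2;                 // never exceeds high, so cannot overflow
}
// By contrast, (low + high) / 2 can overflow once low + high exceeds Integer.MAX_VALUE.
// For non-negative ints the unsigned shift (low + high) >>> 1 also yields the correct midpoint.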
Language support
Many standard libraries provide a way to do binary search.
C++'s Standard Template Library provides the algorithm functions binary_search, lower_bound and upper_bound.
Java offers a set of overloaded binarySearch() static methods in the classes Arrays and Collections for performing binary searches on Java arrays and on Lists, respectively; the arrays must hold primitives, or the arrays or Lists must contain elements of a type that implements the Comparable interface, or a custom Comparator object must be supplied (a short usage sketch is given below).
The .NET Framework 2.0 offers static generic versions of the binary search algorithm in its collection base classes; an example is System.Array's method BinarySearch<T>(T[] array, T value).
Python provides the bisect module, whose functions locate insertion points in a sorted list by bisection.
COBOL can perform binary search on sorted internal tables using the SEARCH ALL statement.
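The Java support mentioned above might be exercised as in the following small sketch (the printed values assume the little arrays shown):

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class LibraryDemo {
    public static void main(String[] args) {
        int[] a = {2, 5, 8, 13, 21};                                 // already sorted
        System.out.println(Arrays.binarySearch(a, 13));              // prints 3
        System.out.println(Arrays.binarySearch(a, 7));               // prints -3: -(insertion point) - 1

        List<String> names = Arrays.asList("Ann", "Bob", "Eve");     // already sorted
        System.out.println(Collections.binarySearch(names, "Bob"));  // prints 1
    }
}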
References
• Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89685-0. Section 6.2.1: Searching an Ordered Table, pp.409–426.
• Kruse, Robert L.: "Data Structures and Program Design in C++", Prentice-Hall, 1999, ISBN 0-13-768995-0, page 280.
• Netty van Gasteren, Wim Feijen. The Binary Search Revisited, AvG127/WF214, 1995. (investigates the foundations of the Binary Search, debunking the myth that it applies only to sorted arrays)
Circle - line intersection point
May 11th 2010, 08:45 PM #1
I have a circle with radius "r", centre (x1, y1). I have a straight line with starting point (x2, y2) and end point (x3, y3). I want to know whether this line intersects the circle, and if so, what the intersection point(s) are.
Can I find out the same even for a rectangle, or some irregular shape?
With regards,
Last edited by watashi; May 12th 2010 at 05:24 PM.
May 11th 2010, 11:52 PM #2
I have a circle with radius "r" , centre (x1, y1). I have a straight line with starting point (x2, y2) end point (x3, y3). I want to know whether this line intersects with the point? If yes what
is/are the intersection points?
Can I find out the same even for rectangle? and some irregular shape??
With regards,
Hi, watashi,
I assume the word "point" in red above is supposed to be "circle."
We can write the equation for the circle and equation for the line, and now we are solving a system of two equations in two variables, where one equation is quadratic and the other linear.
For example, a unit circle centered at the origin, whose equation is $x^2+y^2=1$, and the points $(-1,-2)$ and $(1,2)$ defining the line $y=2x$.
Substitute the value of $y$ into the first equation to get $x^2 + (2x)^2 = 1$, that is, $5x^2 = 1$.
Now we have a quadratic equation in one variable, and we can solve for $x$.
The reasoning for rectangles or other shapes is similar, except that we might not have a tidy equation to work with. Depending on the circumstances, we might elect to use approximation techniques.
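More generally (a sketch added here, using the notation of the first post): write the segment parametrically as $(x, y) = (x_2 + t(x_3 - x_2),\; y_2 + t(y_3 - y_2))$ with $0 \le t \le 1$, and substitute into the circle $(x - x_1)^2 + (y - y_1)^2 = r^2$. With $d_x = x_3 - x_2$, $d_y = y_3 - y_2$, $f_x = x_2 - x_1$, $f_y = y_2 - y_1$ this gives the quadratic
$(d_x^2 + d_y^2)t^2 + 2(f_x d_x + f_y d_y)t + (f_x^2 + f_y^2 - r^2) = 0.$
If the discriminant is negative the line misses the circle; if it is zero the line is tangent (one intersection point); if it is positive there are two intersection points, each obtained by putting a root $t$ back into the parametric form. For the finite segment, only roots with $0 \le t \le 1$ count. For a rectangle the same idea can be applied edge by edge; for an irregular shape there is usually no tidy formula, and an approximation technique is used, as noted above.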
setting up Constraints for a Standard Maximization problem with multiple variables
December 11th 2007, 05:26 PM #1
i am asked to consider the following scenario:
making a trail mix with particular ingredients: Peanuts, Raisins, M&Ms, Mini-pretzels (x1, x2, x3, x4 respectively)
(table listed is for a serving size of 1 cup)
Calories (kcal): 855 ; 435 ; 1024 ; 162
Protein (g): 34.57 ; 4.67 ; 9.01 ; 3.87
Fat (g): 72.5 ; .67 ; 43.95 ; 1.49
Carbs (g): 31.4 ; 114.74 ; 148.12 ; 33.86
make at most 10 cups, using all the ingredients. here's the catch.. so that the mix isn't dominated by any one ingredient, each ingredient will contribute at least (>=) 10% of the total volume of the mix made
more constraints: the mix x1+x2+x3+x4 must come to <= 7000 calories
Objective function: Max Carbs 31.4x1 + 114.74x2 + 184.12x3 + 33.86x4
considering this... how would i write the constraints when forming a Simplex tableau??
i am using excel for this problem.. but i think i am missing some constraints
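One way the constraints might be set up (a sketch only; x1..x4 are cups of peanuts, raisins, M&Ms and pretzels as above):
Volume: $x_1 + x_2 + x_3 + x_4 \le 10$.
At least 10% of the total volume from each ingredient: $x_i \ge 0.1(x_1 + x_2 + x_3 + x_4)$ for each i, which rearranges to, for example, $0.9x_1 - 0.1x_2 - 0.1x_3 - 0.1x_4 \ge 0$ (and similarly for $x_2$, $x_3$, $x_4$).
Calories: since the calorie figures are per cup, the calorie limit reads $855x_1 + 435x_2 + 1024x_3 + 162x_4 \le 7000$.
Non-negativity: $x_i \ge 0$.
The objective stays as the carbohydrate total given above. For a standard-form simplex tableau, the "$\ge$" rows are usually multiplied by −1 (or handled with surplus variables) so that every constraint is a "$\le$" row before slack variables are added.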
CWU math students calculate what no mathematician has before
Math students at Central Washington University say they’ve broken a 37-year-old world record for the largest weird number — a figure that stretches 226 digits long.
Weird numbers — yes, that’s actually a mathematically accepted term — are numbers whose proper divisors add up to more than the number itself, yet no combination of those divisors adds up to exactly the original number.
Read more of this story in the Yakima Herald Republic.
Squares and Rectangles
Perimeter refers to the entire length of a figure or the distance around it. If the figure is a circle, the length is referred to as the circumference. Such lengths are always measured in linear
units such as inches, feet, and centimeters. Area refers to the size of the interior of a planar (flat) figure. Area is always measured in square units such as square inches (in^2), square feet (ft^
2), and square centimeters (cm^2), or in special units such as acres or hectares.
For all polygons, you find perimeter by adding together the lengths of all the sides. In this article, P is used to stand for perimeter, and A is used to stand for area.
Figures 5.1(a) and 5.1(b) show the perimeter formulas for squares and rectangles: for a square with side s, P = 4s, and for a rectangle with length l and width w, P = 2(l + w).
Figure 1 Perimeter of a square and perimeter of a rectangle.
Area formulas for squares and rectangles are formed by simply multiplying any pair of consecutive sides together: A = s^2 for the square and A = lw for the rectangle. Refer to Figures 5.1(a) and 5.1(b).
Example 1: Find the perimeter and area of Figure 2.
Figure 2 Finding the perimeter and area of a square.
This is a square.
Example 2: Find the perimeter and area of Figure 3.
Figure 3 Finding the perimeter and area of a rectangle.
This is a rectangle.
Example 3: If the perimeter of a square is 36 ft, find its area.
Since P = 4s, the side is s = 36/4 = 9 ft, so A = s^2 = 81 ft^2. The area of the square would be 81 square feet.
Example 4: If a rectangle with length 9 in has an area of 36 in^2, find its perimeter.
Since A = lw, the width is w = 36/9 = 4 in, so P = 2(l + w) = 2(9 + 4) = 26 in. The perimeter of the rectangle would be 26 inches.