Hyperbolic tangent
From Wikipedia, the free encyclopedia
In mathematics, hyperbolic functions are analogs of the ordinary trigonometric, or circular, functions. The basic hyperbolic functions are the hyperbolic sine "sinh" (typically pronounced /ˈsɪntʃ/ or
/ˈʃaɪn/), and the hyperbolic cosine "cosh" (typically pronounced /ˈkɒʃ/), from which are derived the hyperbolic tangent "tanh" (typically pronounced /ˈtæntʃ/ or /ˈθæn/), etc., in analogy to the
derived trigonometric functions. The inverse hyperbolic functions are the area hyperbolic sine "arsinh" (also called "asinh", or sometimes by the misnomer of "arcsinh"^[1]) and so on.
Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the equilateral hyperbola. Hyperbolic functions occur in the solutions of some
important linear differential equations, for example the equation defining a catenary, and Laplace's equation in Cartesian coordinates. The latter is important in many areas of physics, including
electromagnetic theory, heat transfer, fluid dynamics, and special relativity.
The hyperbolic functions take real values for a real argument called a hyperbolic angle. In complex analysis, they are simply rational functions of exponentials, and so are meromorphic.
Hyperbolic functions were introduced in the 18th century by the Swiss mathematician Johann Heinrich Lambert.
Standard algebraic expressions
The hyperbolic functions are:
$\sinh x = \frac{e^x - e^{-x}}{2}$
$\cosh x = \frac{e^{x} + e^{-x}}{2}$
$\tanh x = \frac{\sinh x}{\cosh x} = \frac {\frac{1}{2}(e^x - e^{-x})} {\frac{1}{2}(e^x + e^{-x})} = \frac{e^{2x} - 1} {e^{2x} + 1}$
$\coth x = \frac{\cosh x}{\sinh x} = \frac {\frac{1}{2}(e^x + e^{-x})} {\frac{1}{2}(e^x - e^{-x})} = \frac{e^{2x} + 1} {e^{2x} - 1}$
$\operatorname{sech}\,x = \frac{1}{\cosh x} = \frac {2} {e^x + e^{-x}}$
$\operatorname{csch}\,x = \frac{1}{\sinh x} = \frac {2} {e^x - e^{-x}}$
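These exponential definitions are easy to check numerically; the following Python sketch compares them against the standard library's implementations:

```python
import math

def sinh(x):
    # sinh x = (e^x - e^-x) / 2
    return (math.exp(x) - math.exp(-x)) / 2

def cosh(x):
    # cosh x = (e^x + e^-x) / 2
    return (math.exp(x) + math.exp(-x)) / 2

def tanh(x):
    # tanh x = (e^{2x} - 1) / (e^{2x} + 1)
    e2x = math.exp(2 * x)
    return (e2x - 1) / (e2x + 1)

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert math.isclose(sinh(x), math.sinh(x))
    assert math.isclose(cosh(x), math.cosh(x))
    assert math.isclose(tanh(x), math.tanh(x))
```

The remaining three functions follow as the reciprocals shown above.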
Via complex numbers the hyperbolic functions are related to the circular functions as follows:
$\sinh x = - {\rm{i}} \sin {\rm{i}}x \!$
$\cosh x = \cos {\rm{i}}x \!$
$\tanh x = -{\rm{i}} \tan {\rm{i}}x \!$
$\coth x = {\rm{i}} \cot {\rm{i}}x \!$
$\operatorname{sech}\,x = \sec { {\rm{i}} x} \!$
$\operatorname{csch}\,x = {\rm{i}}\,\csc\,{\rm{i}}x \!$
where ${\rm{i}} \,$ is the imaginary unit defined as ${\rm{i}} ^2=-1\,$.
The complex forms in the definitions above derive from Euler's formula.
Note that, by convention, sinh^2 x means (sinh x)^2, not sinh(sinh x); similarly for the other hyperbolic functions when used with positive exponents. Another notation for the hyperbolic cotangent function is $\operatorname{ctnh}\,x$, though coth x is far more common.
Useful relations
$\sinh(-x) = -\sinh x\,\!$
$\cosh(-x) = \cosh x\,\!$
$\tanh(-x) = -\tanh x\,\!$
$\coth(-x) = -\coth x\,\!$
$\operatorname{sech}(-x) = \operatorname{sech}\, x\,\!$
$\operatorname{csch}(-x) = -\operatorname{csch}\, x\,\!$
It can be seen that ${\rm{cosh}}\,x\,$ and ${\rm{sech}}\,x\,$ are even functions; the others are odd functions.
$\operatorname{arsech}\,x=\operatorname{arcosh} \frac{1}{x}$
$\operatorname{arcsch}\,x=\operatorname{arsinh} \frac{1}{x}$
$\operatorname{arcoth}\,x=\operatorname{artanh} \frac{1}{x}$
Hyperbolic sine and cosine satisfy the identity
$\cosh^2 x - \sinh^2 x = 1\,$
which is similar to the Pythagorean trigonometric identity.
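This identity can be spot-checked numerically; a minimal Python sketch:

```python
import math

# cosh^2 t - sinh^2 t = 1 for every real t
for t in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    assert math.isclose(math.cosh(t)**2 - math.sinh(t)**2, 1.0)
```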
The hyperbolic tangent is the solution to the nonlinear boundary value problem^[2]:
$\frac 1 2 f'' = f^3 - f \qquad ; \qquad f(0) = f'(\infty) = 0$
It can also be shown that the area under the graph of cosh x from A to B is equal to the arc length of cosh x from A to B.
Inverse functions as logarithms
$\operatorname {arsinh} \, x=\ln \left( x+\sqrt{x^{2}+1} \right)$
$\operatorname {arcosh} \, x=\ln \left( x+\sqrt{x^{2}-1} \right);x\ge 1$
$\operatorname {artanh} \, x=\tfrac{1}{2}\ln \frac{1+x}{1-x} ;\left| x \right|<1$
$\operatorname {arsech} \, x=\ln \frac{1+\sqrt{1-x^{2}}}{x} ;0<x\le 1$
$\operatorname {arcsch} \, x=\ln \left( \frac{1}{x}+\frac{\sqrt{1+x^{2}}}{\left| x \right|} \right)$
$\operatorname {arcoth} \, x=\tfrac{1}{2}\ln \frac{x+1}{x-1} ;\left| x \right|>1$
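The logarithmic forms can likewise be verified against the standard library; a sketch for arsinh and artanh (the other four follow the same pattern):

```python
import math

def arsinh(x):
    # arsinh x = ln(x + sqrt(x^2 + 1))
    return math.log(x + math.sqrt(x * x + 1))

def artanh(x):
    # artanh x = (1/2) ln((1 + x) / (1 - x)), for |x| < 1
    return 0.5 * math.log((1 + x) / (1 - x))

for x in [-2.0, 0.0, 1.5, 10.0]:
    assert math.isclose(arsinh(x), math.asinh(x), abs_tol=1e-12)
for x in [-0.9, 0.0, 0.5]:
    assert math.isclose(artanh(x), math.atanh(x), abs_tol=1e-12)
```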
$\frac{d}{dx}\sinh x = \cosh x \,$
$\frac{d}{dx}\cosh x = \sinh x \,$
$\frac{d}{dx}\tanh x = 1 - \tanh^2 x = \hbox{sech}^2 x = 1/\cosh^2 x \,$
$\frac{d}{dx}\coth x = 1 - \coth^2 x = -\hbox{csch}^2 x = -1/\sinh^2 x \,$
$\frac{d}{dx}\ \hbox{csch}\,x = - \coth x \ \hbox{csch}\,x \,$
$\frac{d}{dx}\ \hbox{sech}\,x = - \tanh x \ \hbox{sech}\,x \,$
$\frac{d}{dx}\left( \sinh^{-1}x \right)=\frac{1}{\sqrt{x^{2}+1}}$
$\frac{d}{dx}\left( \cosh^{-1}x \right)=\frac{1}{\sqrt{x^{2}-1}}$
$\frac{d}{dx}\left( \tanh^{-1}x \right)=\frac{1}{1-x^{2}}$
$\frac{d}{dx}\left( \operatorname{csch}^{-1}x \right)=-\frac{1}{\left| x \right|\sqrt{1+x^{2}}}$
$\frac{d}{dx}\left( \operatorname{sech}^{-1}x \right)=-\frac{1}{x\sqrt{1-x^{2}}}$
$\frac{d}{dx}\left( \coth ^{-1}x \right)=\frac{1}{1-x^{2}}$
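The derivative formulas above can be sanity-checked with a central-difference approximation; a short Python sketch:

```python
import math

def deriv(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for x in [-1.0, 0.3, 2.0]:
    assert math.isclose(deriv(math.sinh, x), math.cosh(x), rel_tol=1e-6)
    assert math.isclose(deriv(math.cosh, x), math.sinh(x), rel_tol=1e-6)
    assert math.isclose(deriv(math.tanh, x), 1 - math.tanh(x)**2, rel_tol=1e-6)
    assert math.isclose(deriv(math.asinh, x), 1 / math.sqrt(x*x + 1), rel_tol=1e-6)
```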
Standard Integrals
For a full list of integrals of hyperbolic functions, see list of integrals of hyperbolic functions
$\int\sinh ax\,dx = \frac{1}{a}\cosh ax + C$
$\int\cosh ax\,dx = \frac{1}{a}\sinh ax + C$
$\int \tanh ax\,dx = \frac{1}{a}\ln(\cosh ax) + C$
$\int \coth ax\,dx = \frac{1}{a}\ln(\sinh ax) + C$
$\int{\frac{du}{\sqrt{a^{2}+u^{2}}}}=\sinh ^{-1}\left( \frac{u}{a} \right)+C$
$\int{\frac{du}{\sqrt{u^{2}-a^{2}}}}=\cosh ^{-1}\left( \frac{u}{a} \right)+C$
$\int{\frac{du}{a^{2}-u^{2}}}=\frac{1}{a}\tanh ^{-1}\left( \frac{u}{a} \right)+C; u^{2}<a^{2}$
$\int{\frac{du}{a^{2}-u^{2}}}=\frac{1}{a}\coth ^{-1}\left( \frac{u}{a} \right)+C; u^{2}>a^{2}$
$\int{\frac{du}{u\sqrt{a^{2}-u^{2}}}}=-\frac{1}{a}\operatorname{sech}^{-1}\left( \frac{u}{a} \right)+C$
$\int{\frac{du}{u\sqrt{a^{2}+u^{2}}}}=-\frac{1}{a}\operatorname{csch}^{-1}\left| \frac{u}{a} \right|+C$
In the above expressions, C is called the constant of integration.
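As a numerical spot check of the first antiderivative, the integral of sinh x from 0 to 1 should equal cosh(1) - cosh(0); a simple trapezoidal-rule sketch in Python:

```python
import math

def trapezoid(f, a, b, n=10_000):
    # composite trapezoidal rule for the integral of f over [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# The antiderivative of sinh is cosh, so the integral equals cosh(1) - cosh(0)
approx = trapezoid(math.sinh, 0.0, 1.0)
exact = math.cosh(1.0) - math.cosh(0.0)
assert abs(approx - exact) < 1e-6
```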
Taylor series expressions
It is possible to express the above functions as Taylor series:
$\sinh x = x + \frac {x^3} {3!} + \frac {x^5} {5!} + \frac {x^7} {7!} +\cdots = \sum_{n=0}^\infty \frac{x^{2n+1}}{(2n+1)!}$
$\cosh x = 1 + \frac {x^2} {2!} + \frac {x^4} {4!} + \frac {x^6} {6!} + \cdots = \sum_{n=0}^\infty \frac{x^{2n}}{(2n)!}$
$\tanh x = x - \frac {x^3} {3} + \frac {2x^5} {15} - \frac {17x^7} {315} + \cdots = \sum_{n=1}^\infty \frac{2^{2n}(2^{2n}-1)B_{2n} x^{2n-1}}{(2n)!}, \left |x \right | < \frac {\pi} {2}$
$\coth x = \frac {1} {x} + \frac {x} {3} - \frac {x^3} {45} + \frac {2x^5} {945} + \cdots = \frac {1} {x} + \sum_{n=1}^\infty \frac{2^{2n} B_{2n} x^{2n-1}} {(2n)!}, 0 < \left |x \right | < \pi$ (Laurent series)
$\operatorname {sech}\, x = 1 - \frac {x^2} {2} + \frac {5x^4} {24} - \frac {61x^6} {720} + \cdots = \sum_{n=0}^\infty \frac{E_{2 n} x^{2n}}{(2n)!} , \left |x \right | < \frac {\pi} {2}$
$\operatorname {csch}\, x = \frac {1} {x} - \frac {x} {6} +\frac {7x^3} {360} -\frac {31x^5} {15120} + \cdots = \frac {1} {x} + \sum_{n=1}^\infty \frac{ 2 (1-2^{2n-1}) B_{2n} x^{2n-1}}{(2n)!} , 0 < \left |x \right | < \pi$ (Laurent series)
where $B_n \,$ is the nth Bernoulli number and $E_n \,$ is the nth Euler number.
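Truncating these series gives a practical way to compute sinh and cosh; a Python sketch of the partial sums:

```python
import math

def sinh_series(x, terms=20):
    # sum_{n=0}^{terms-1} x^(2n+1) / (2n+1)!
    return sum(x**(2*n + 1) / math.factorial(2*n + 1) for n in range(terms))

def cosh_series(x, terms=20):
    # sum_{n=0}^{terms-1} x^(2n) / (2n)!
    return sum(x**(2*n) / math.factorial(2*n) for n in range(terms))

for x in [-2.0, 0.5, 3.0]:
    assert math.isclose(sinh_series(x), math.sinh(x), rel_tol=1e-12)
    assert math.isclose(cosh_series(x), math.cosh(x), rel_tol=1e-12)
```

Both series converge for every real x, so twenty terms are already far more than enough at these arguments.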
Similarities to circular trigonometric functions
A point on the hyperbola xy = 1 with x > 1 determines a hyperbolic triangle in which the side adjacent to the hyperbolic angle is associated with cosh while the side opposite is associated with sinh.
However, since the point (1,1) on this hyperbola is a distance √2 from the origin, the normalization constant 1/√2 is necessary to define cosh and sinh by the lengths of the sides of the hyperbolic triangle.
Just as the points (cos t, sin t) define a circle, the points (cosh t, sinh t) define the right half of the equilateral hyperbola x^2 − y^2 = 1. This is based on the easily verified identity
$\cosh^2 t - \sinh^2 t = 1 \,$
and the property that cosh t ≥ 1 for all t.
The hyperbolic functions are periodic with complex period 2πi (πi for hyperbolic tangent and cotangent).
The parameter t is not a circular angle, but rather a hyperbolic angle which represents twice the area between the x-axis, the hyperbola and the straight line which links the origin with the point
(cosh t, sinh t) on the hyperbola.
The function cosh x is an even function, that is symmetric with respect to the y-axis.
The function sinh x is an odd function, that is −sinh x = sinh(−x), and sinh 0 = 0.
The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule ^[3] states that one can convert any trigonometric identity into
a hyperbolic identity by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a
product of 2, 6, 10, 14, ... sinhs. This yields for example the addition theorems
$\sinh(x+y) = \sinh x \cosh y + \cosh x \sinh y \,$
$\cosh(x+y) = \cosh x \cosh y + \sinh x \sinh y \,$
$\tanh(x+y) = \frac{\tanh x + \tanh y}{1 + \tanh x \tanh y} \,$
the "double angle formulas"
$\sinh 2x\ = 2\sinh x \cosh x \,$
$\cosh 2x\ = \cosh^2 x + \sinh^2 x = 2\cosh^2 x - 1 = 2\sinh^2 x + 1 \,$
and the "half-angle formulas"
$\cosh^2 \tfrac{1}{2} x = \tfrac{1}{2}(\cosh x + 1)$ Note: This corresponds to its circular counterpart.
$\sinh^2 \tfrac{1}{2} x = \tfrac{1}{2}(\cosh x - 1)$ Note: This is equivalent to its circular counterpart multiplied by −1.
$\tanh ^{2}x=1-\operatorname{sech}^{2}x$
$\coth ^{2}x=1+\operatorname{csch}^{2}x$
The derivative of sinh x is cosh x and the derivative of cosh x is sinh x; this is similar to trigonometric functions, albeit the sign is different (i.e., the derivative of cos x is −sin x).
The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers.
The graph of the function a cosh(x/a) is the catenary, the curve formed by a uniform flexible chain hanging freely under gravity.
Relationship to the exponential function
From the definitions of the hyperbolic sine and cosine, we can derive the following identities:
$e^x = \cosh x + \sinh x\!$
$e^{-x} = \cosh x - \sinh x.\!$
These expressions are analogous to the expressions for sine and cosine, based on Euler's formula, as sums of complex exponentials.
Hyperbolic functions for complex numbers
Since the exponential function can be defined for any complex argument, we can extend the definitions of the hyperbolic functions also to complex arguments. The functions sinh z and cosh z are then holomorphic.
Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers:
$e^{i x} = \cos x + i \;\sin x$
$e^{-i x} = \cos x - i \;\sin x$
$\cosh ix = \tfrac12(e^{i x} + e^{-i x}) = \cos x$
$\sinh ix = \tfrac12(e^{i x} - e^{-i x}) = i \sin x$
$\tanh ix = i \tan x \,$
$\cosh x = \cos ix \,$
$\sinh x = -i \sin ix \,$
$\tanh x = -i \tan ix \,$
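These relations can be confirmed with complex arithmetic; a quick sketch using Python's cmath module:

```python
import cmath
import math

for x in [-1.5, 0.0, 0.7, 2.0]:
    z = 1j * x  # the purely imaginary argument ix
    assert cmath.isclose(cmath.cosh(z), complex(math.cos(x), 0.0), abs_tol=1e-12)
    assert cmath.isclose(cmath.sinh(z), complex(0.0, math.sin(x)), abs_tol=1e-12)
    assert cmath.isclose(cmath.tanh(z), complex(0.0, math.tan(x)), abs_tol=1e-12)
```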
Hyperbolic functions in the complex plane
Plots in the complex plane: $\operatorname{sinh}(z)$, $\operatorname{cosh}(z)$, $\operatorname{tanh}(z)$, $\operatorname{coth}(z)$, $\operatorname{sech}(z)$, $\operatorname{csch}(z)$.
Notes
1. ^ Some examples of using arcsinh found in Google Books.
2. ^ Eric W. Weisstein. "Hyperbolic Tangent". MathWorld. http://mathworld.wolfram.com/HyperbolicTangent.html. Retrieved 2008-10-20.
3. ^ G. Osborn, Mnemonic for hyperbolic formulae, The Mathematical Gazette, p. 189, volume 2, issue 34, July 1902
Annapolis, MD Science Tutor
Find an Annapolis, MD Science Tutor
...I have tutored students in elementary, middle, and high grades. I have developed fun activities for students to actually have fun while they are learning. I look forward to helping your child
become a huge success.
18 Subjects: including biology, anatomy, physiology, reading
...I have been tutoring chemistry for more than 10 years. I started as an undergrad while at Denison University (Granville, Ohio). While there I worked as a one-on-one and group tutor as well as
a teaching assistant in the general chemistry labs. After graduation I spent a year teaching general ch...
10 Subjects: including ACT Science, SAT math, chemistry, ACT Math
I am currently a Chemistry Professor. I have a PhD in Organic Chemistry and over 10 years tutoring experience. I also offer study skills for sciences and maths.
7 Subjects: including organic chemistry, chemistry, algebra 1, prealgebra
...Tutoring subjects include but not limited to chemistry, pre-algebra, algebra, calculus trigonometry, environmental science, biology etc. I take great pride and joy into discovering the gaps in
understanding for my students and I believe it is this ability that sets me apart from other tutors. P...
13 Subjects: including biology, anatomy, organic chemistry, chemistry
...I look forward to working with you to help you achieve your academic goals! I taught human anatomy and physiology at UMBC for three years. I ran a lab in which we worked with models and animal
dissections to understand the anatomy of the human body.
17 Subjects: including physiology, genetics, TEAS, algebra 1
Naughton's magic trick
February 3, 2006 -- This is the solution to Andy Naughton's trick at http://trunks.secondfoundation.org/files/psychic.swf. Congratulations on a well-conceived one! (And thanks to Terri for bringing it to my attention.)
You choose a number AB (where A is the first digit and B the second digit). The result of your calculation will be:
(10A + B) - (A + B) = 9A.
This means that, whatever B you choose, your answer can only be one of the following 9 numbers (given A between 1 and 9): 9, 18, 27, 36, 45, 54, 63, 72, 81. If you check the table of numbers and
symbols on the magic page, you can observe that the symbols change at each round in order to confuse you, but in any given round, the symbol is always the same for these 9 numbers. Your
calculation necessarily gives a result associated with the unique symbol assigned to the 9 possible solutions.
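The whole argument can be verified by brute force over all two-digit numbers; a short Python check:

```python
# For a two-digit number with tens digit A and units digit B,
# (10A + B) - (A + B) = 9A, so only nine results are possible.
results = set()
for a in range(1, 10):
    for b in range(10):
        n = 10 * a + b
        results.add(n - (a + b))

print(sorted(results))  # [9, 18, 27, 36, 45, 54, 63, 72, 81]
```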
"Mathematics is a language."
-- J. Willard Gibbs
Here's the question you clicked on:
Find all solutions of the equation. x^4 − 6x^2 + 5 = 0
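One way to attack this (a sketch, not from the original discussion): substitute u = x², solve the resulting quadratic, and take square roots.

```python
import math

# x^4 - 6x^2 + 5 = 0: substitute u = x^2, giving u^2 - 6u + 5 = 0
a, b, c = 1, -6, 5
disc = b * b - 4 * a * c                                           # 16
u_roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]  # [5.0, 1.0]
x_roots = sorted(r for u in u_roots for r in (math.sqrt(u), -math.sqrt(u)))
print(x_roots)  # x = -sqrt(5), -1, 1, sqrt(5)
```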
[Numpy-discussion] Re: indexing problem
Tim Hochberg tim.hochberg at cox.net
Tue Feb 14 08:58:03 CST 2006
David M. Cooke wrote:
>Tim Hochberg <tim.hochberg at cox.net> writes:
>>David M. Cooke wrote:
>>>Gary Ruben <gruben at bigpond.net.au> writes:
>>>>Tim Hochberg wrote:
>>>>>However, I'm not convinced this is a good idea for numpy. This would
>>>>>introduce a discontinuity in a**b that could cause problems in some
>>>>>cases. If, for instance, one were running an iterative solver of
>>>>>some sort (something I've been known to do), and b was a free
>>>>>variable, it could get stuck at b = 2 since things would go
>>>>>nonmonotonic there.
>>>>I don't quite understand the problem here. Tim says Python special
>>>>cases integer powers but then talks about the problem when b is a
>>>>floating type. I think special casing x**2 and maybe even x**3 when
>>>>the power is an integer is still a good idea.
>>>Well, what I had done with Numeric did special case x**0, x**1,
>>>x**(-1), x**0.5, x**2, x**3, x**4, and x**5, and only when the
>>>exponent was a scalar (so x**y where y was an array wouldn't be). I
>>>think this is very useful, as I don't want to microoptimize my code to
>>>x*x instead of x**2. The reason for just scalar exponents was so
>>>choosing how to do the power was lifted out of the inner loop. With
>>>that, x**2 was as fast as x*x.
>>This is getting harder to object to since, try as I might I can't get
>>a**b to go nonmontonic in the vicinity of b==2. I run out of floating
>>point resolution before the slight shift due to special casing at 2
>>results in nonmonoticity. I suspect that I could manage it with enough
>>work, but it would require some unlikely function of a**b. I'm not
>>sure if I'm really on board with this, but let me float a slightly
>>modified proposal anyway:
>> 1. numpy.power stays as it is now. That way in the rare case that
>>someone runs into trouble they can drop back to power. Alternatively
>>there could be rawpower and power where rawpower has the current
>>behaviour. While the name rawpower sounds cool/cheesy, power is used
>>infrequently enough that I doubt it matters whether it has these
>>special case optimazations.
>> 2, Don't distinguish between scalars and arrays -- that just makes
>>things harder to explain.
>Makes the optimizations better, though.
Ah, Because you can hoist all the checks for what type of optimization
to do, if any, out of the core loop, right? That's a good point. Still
I'm not keen on a**b having different performance *and* different
results depending on whether b is a scalar or matrix. The first thing to
do is to measure how much overhead doing the optimization element by
element is going to add. Assuming that it's signifigant that leaves us
with the familiar dilema: fast, simple or general purpose; pick any two.
1. Do what I've proposed: optimize things at the c_pow level. This is
general purpose and relatively simple to implement (since we can steal
most of the code from complexobject.c). It may have a significant speed
penalty versus 2 though:
2. Do what you've proposed: optimize things at the ufunc level. This is
fast and relatively simple to implement. It's more limited in scope and
a bit harder to explain than 1.
3. Do both. This is straightforward, but adds a bunch of extra code
paths with all the attendant required testing and possibility for bugs.
So, fast, general purpose, but not simple.
>> 3. Python itself special cases all integral powers between -100 and
>>100. Beg/borrow/steal their code. This makes it easier to explain
>>since all smallish integer powers are just automagically faster.
>> 4. Is the performance advantage of special casing a**0.5
>>significant? If so, use the above trick to special case all half
>>integral and integral powers between -N and N. Since sqrt probably
>>chews up some time, the cutoff probably shifts somewhat if we're
>>optimizing half integral as well as integral powers. Perhaps N
>>would be 32 or 64.
>>The net result of this is that a**b would be computed using a
>>combination of repeated multiplication and sqrt for real integral and
>>half integral values of b between -N and N. That seems simpler to
>>explain and somewhat more useful as well.
>>It sounds like a fun project although I'm not certain yet that it's a
>>good idea.
>Basically, my Numeric code looked like this:
>#define POWER_UFUNC3(prefix, basetype, exptype, outtype) \
>static void prefix##_power(char **args, int *dimensions, \
> int *steps, void *func) { \
> int i, cis1=steps[0], cis2=steps[1], cos=steps[2], n=dimensions[0]; \
> int is1=cis1/sizeof(basetype); \
> int is2=cis2/sizeof(exptype); \
> int os=cos/sizeof(outtype); \
> basetype *i1 = (basetype *)(args[0]); \
> exptype *i2=(exptype *)(args[1]); \
> outtype *op=(outtype *)(args[2]); \
> if (is2 == 0) { \
> exptype exponent = i2[0]; \
> if (POWER_equal(exponent, 0.0)) { \
> for (i = 0; i < n; i++, op += os) { \
> POWER_one((*op)) \
> } \
> } else if (POWER_equal(exponent, 1.0)) { \
> for (i = 0; i < n; i++, i1 += is1, op += os) { \
> *op = *i1; \
> } \
> } else if (POWER_equal(exponent, 2.0)) { \
> for (i = 0; i < n; i++, i1 += is1, op += os) { \
> POWER_square((*op),(*i1)) \
> } \
> } else if (POWER_equal(exponent, -1.0)) { \
> for (i = 0; i < n; i++, i1 += is1, op += os) { \
> POWER_inverse((*op),(*i1)) \
> } \
> } else if (POWER_equal(exponent, 3.0)) { \
> for (i = 0; i < n; i++, i1 += is1, op += os) { \
> POWER_cube((*op),(*i1)) \
> } \
> } else if (POWER_equal(exponent, 4.0)) { \
> for (i = 0; i < n; i++, i1 += is1, op += os) { \
> POWER_fourth((*op),(*i1)) \
> } \
> } else if (POWER_equal(exponent, 0.5)) { \
> for (i = 0; i < n; i++, i1 += is1, op += os) { \
> POWER_sqrt((*op),(*i1)) \
> } \
> } else { \
> for (i = 0; i < n; i++, i1 += is1, op += os) { \
> POWER_pow((*op), (*i1), (exponent)) \
> } \
> } \
> } else { \
> for (i=0; i<n; i++, i1+=is1, i2+=is2, op+=os) { \
> POWER_pow((*op), (*i1), (*i2)) \
> } \
> } \
>#define POWER_UFUNC(prefix, type) POWER_UFUNC3(prefix, type, type, type)
>#define FTYPE float
>#define POWER_equal(x,y) x == y
>#define POWER_one(o) o = 1.0;
>#define POWER_square(o,x) o = x*x;
>#define POWER_inverse(o,x) o = 1.0 / x;
>#define POWER_cube(o,x) FTYPE y=x; o = y*y*y;
>#define POWER_fourth(o,x) FTYPE y=x, s = y*y; o = s * s;
>#define POWER_sqrt(o,x) o = sqrt(x);
>#define POWER_pow(o,x,n) o = pow(x, n);
>POWER_UFUNC(FLOAT, float)
>POWER_UFUNC3(FLOATD, float, double, float)
>plus similiar definitions for float, double, complex float, and
>complex double. Using the POWER_square, etc. macros means the complex
>case was easy to add.
>The speed comes from the inlining of how to do the power _outside_ the
>inner loop. The reason x**2, etc. are slower currently is there is a
>function call in the inner loop. Your's and mine C library's pow()
>function mostly likely does something like I have above, for a single
>case: pow(x, 2.0) is calculated as x*x. However, each time through it
>has do decide _how_ to do it.
Part of our difference in perspective comes from the fact that I've just
been staring at the guts of complex power. In this case you always have
function calls at present, even for s*s. (At least I'm fairly certain
that doesn't get inlined although I haven't checked). Since much of the
work I do is with complex matrices, it's appropriate that I focus on this.
Have you measured the effect of a function call on the speed here, or is
that just an educated guess? If it's an educated guess, it's probably
worth determining how much of a speed hit the function call actually
causes. I was going to try to get a handle on this by comparing
multiplication of Complex numbers (which requires a function call plus
more math) with multiplication of Floats, which does not. Perversely, the
Complex multiplication came out marginally faster, which is hard to
explain whichever way you look at it.
>>> timeit.Timer("a*b", "from numpy import arange; a = arange(10000)+0j; b = arange(10000)+0j").timeit(100)
>>> timeit.Timer("a*b", "from numpy import arange; a = arange(10000); b = arange(10000)").timeit(100)
>That's why I limited the optimization to scalar exponents: array
>exponents would mean it's about as slow as the pow() call, even if the
>checks were inlined into the loop. It would probably be even slower
>for the non-optimized case, as you'd check for the special exponents,
>then call pow() if it fails (which would likely recheck the exponents).
Again, here I'm thinking of the complex case. In that case at least, I
don't think that the non-optimized case would take a noticeable speed
hit. I would put it into pow itself, which already special cases a==0
and b==0. For float pow it might, but that's already slow, so I doubt
that it would make much difference.
>Maybe a simple way to add this is to rewrite x.__pow__() as something
>like the C equivalent of
>def __pow__(self, p):
> if p is not a scalar:
> return power(self, p)
> elif p == 1:
> return p
> elif p == 2:
> return square(self)
> elif p == 3:
> return cube(self)
> elif p == 4:
> return power_4(self)
> elif p == 0:
> return ones(self.shape, dtype=self.dtype)
> elif p == -1:
> return 1.0/self
> elif p == 0.5:
> return sqrt(self)
>and add ufuncs square, cube, power_4 (etc.).
It sounds like we need to benchmark some stuff and see what we come up
with. One approach would be for each of us to implement this for one
type (say float) and see how the approaches compare speed-wise. That's
not entirely fair as my approach will do much better at complex than
float I believe, but it's certainly easier.
Pelham, NH Geometry Tutor
Find a Pelham, NH Geometry Tutor
I can help your child gain confidence in math. Although math is not everyone's best subject, it doesn't have to be so stressful. Anxiety and discouragement often get in the way of students reaching their full potential in math. See my blog on anxiety, working memory, and math for more information.
15 Subjects: including geometry, algebra 1, algebra 2, precalculus
...I will construct an individualized study program for you based on your needs and skills. I find that I work best in concert with the student, when we can communicate about your needs and how
we can achieve your goals. I am flexible about hours and locations, and my first goal is to make you passionate about learning.
34 Subjects: including geometry, English, chemistry, biology
...As I entered teaching in 1972 I decided to dedicate my career to Dr. R., and never forget to teach his way. Never worry about COVERING material, but instead worry about getting students to
understand how each concept complements others and capitalize on prior UNDERSTANDING to teach new topics.
6 Subjects: including geometry, algebra 1, algebra 2, precalculus
...I am also a musician who plays several instruments including guitar, bass, drums, and a little piano. I have 15 years experience playing guitar, 8 years on drums. And, I am also a video editor
and producer who has designed promotional videos for businesses and non-profit organizations.
10 Subjects: including geometry, calculus, algebra 1, algebra 2
...I have a Master's degree in Special Education. I am a Special Education teacher at Whittier Vocational Technical High School. I teach inclusion math, science, English and history classes.
21 Subjects: including geometry, reading, English, writing
Finding remainder of polynomial division
October 29th 2008, 09:46 AM #1
May 2008
Finding remainder of polynomial division
Find the remainder if the polynomial f(x) = 3x^100 + 5x^87 - 4x^40 + 2x^21 - 6 divided by p(x) = x + 1
The thing is, I can't use synthetic OR long division BECAUSE I'd have a VERY VERY long equation (since you have to fill in the missing degrees: 99,98,97,96,95......etc).
How would you solve this?
Find the remainder if the polynomial f(x) = 3x^100 + 5x^87 - 4x^40 + 2x^21 - 6 divided by p(x) = x + 1
The thing is, I can't use synthetic OR long division BECAUSE I'd have a VERY VERY long equation (since you have to fill in the missing degrees: 99,98,97,96,95......etc).
How would you solve this?
Use the Remainder theorem and find f(-1).
The remainder theorem: When a polynomial P(x) is divided by x - a , the remainder is P(a)!
Your divisor is (x + 1). This means a = -1. Now, find P(a)...which in your case is P(-1).
$P(-1)=3(-1)^{100} + 5(-1)^{87} - 4(-1)^{40} + 2(-1)^{21} - 6$
This should yield the remainder you seek.
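For a quick sanity check, the same evaluation can be scripted (a sketch):

```python
# f(x) = 3x^100 + 5x^87 - 4x^40 + 2x^21 - 6, divided by x + 1
def f(x):
    return 3 * x**100 + 5 * x**87 - 4 * x**40 + 2 * x**21 - 6

# Remainder theorem: the divisor x + 1 gives a = -1, so the remainder is f(-1)
remainder = f(-1)
print(remainder)  # 3 - 5 - 4 - 2 - 6 = -14
```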
you should use it this way:
so let's say that:
and $q(x)$ is the unknown quotient
so, rewriting the division according to the remainder theorem:
$p(x)=(x+1)q(x)+r(x)$ where r(x) is the remainder
so $p(-1)=r(-1)$
knowing the remainder then use synthetic division to do the operation which is easier to do.
Last edited by Black Kawairothlite; October 29th 2008 at 10:25 AM. Reason: late xD
I finally got it, thanks!
Book states f(x) = p(x) * q(x) + r(x), with r(x) being the remainder
where did you get the zero in (0)q(-1)?
when you replace x in (x + 1) with -1, then (-1 + 1) = (0)
btw I saw that on the link you put there
Not Enough Information about ... Ellipses
Date: 12/01/2010 at 13:17:46
From: Jane
Subject: Would you consider 3.666.... a rational number?
Would you consider this a rational number even though you do not really
know what the next digit is?
It could be 3.666789....
I would say that if the digit repeats, it should have a line over the last digit.
Thank you
Date: 12/01/2010 at 13:40:00
From: Doctor Peterson
Subject: Re: Would you consider 3.666.... a rational number?
Hi, Jane.
If I saw this number on a calculator that showed only the digits 3.666 and
implied that there are more digits after it, I would have no idea what the
number actually is, or whether it is rational.
The ellipsis "..." is used in two slightly different ways. Sometimes it
just means "there's more here that I left out" (as in the calculator
example). Other times it means "continue the pattern you see." The latter
is a little ambiguous, since you really don't know what the pattern is;
but it's generally used only in contexts where it is understood what kind
of pattern to expect. One of those is to indicate a repeating decimal.
So you're right that, in general, you can't tell whether 3.666... is
rational or not; but in some contexts it would be somewhat of a quibble to
insist that it can't be understood as a repeating decimal, even though the
explicit notation is a far better way to say it.
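For instance (an illustration of my own, not part of the original exchange), if the 6 really does repeat forever, the usual algebra shows the number is rational:

  x = 3.666...  =>  10x = 36.666...  =>  10x - x = 33  =>  x = 33/9 = 11/3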
I suspect that often the ellipsis is used for a repeating decimal just
because (a) the bar notation may not be understood by the reader, or (b)
it is too hard to typeset the bar, so they use the next best thing. I like
the notation 3.(6), which overcomes the second issue, though few people
would understand it.
- Doctor Peterson, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/76214.html","timestamp":"2014-04-16T22:27:22Z","content_type":null,"content_length":"6833","record_id":"<urn:uuid:5d4a97f1-e8ae-4355-8a6b-0f102b346196>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00201-ip-10-147-4-33.ec2.internal.warc.gz"} |
Parametric Cubic Curve Middle Coordinates
October 11th 2009, 01:59 PM
Parametric Cubic Curve Middle Coordinates
If you have the two end points (x0, y0, z0) and (x1, y1, z1) of a parametric cubic curve, together with their known derivatives, how do you find its middle coordinates (x0.5, y0.5, z0.5)?
Any help that would be great thanks.
October 14th 2009, 07:56 AM
First, write the general formula...
For vector variables $x,a,b,c$
Given vectors $s,t,u,v$, you should be able to easily solve this system for $a,b,c,d$...
All of these are vectors, so you now have your general parametric equation $x(t)=at^3+bt^2+ct+d$
Plug and chug. | {"url":"http://mathhelpforum.com/advanced-applied-math/107408-parametric-cubic-curve-middle-coordinates-print.html","timestamp":"2014-04-21T12:41:03Z","content_type":null,"content_length":"6452","record_id":"<urn:uuid:892c73a5-fe12-4497-a682-1d8b1190fa60>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00496-ip-10-147-4-33.ec2.internal.warc.gz"} |
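A concrete sketch of that plug-and-chug (my own code, not the poster's; it assumes the common Hermite setup with the curve parameterized on t in [0,1], so p0 = x(0), p1 = x(1), d0 = x'(0), d1 = x'(1)):

```python
def cubic_midpoint(p0, p1, d0, d1):
    """Midpoint x(1/2) of the cubic x(t) = a t^3 + b t^2 + c t + d fixed by
    x(0)=p0, x(1)=p1, x'(0)=d0, x'(1)=d1. Solving that 4x4 system for a,b,c,d
    and evaluating at t = 1/2 collapses to a closed form, applied per coordinate:
        x(1/2) = (p0 + p1)/2 + (d0 - d1)/8
    """
    return tuple((q0 + q1) / 2 + (e0 - e1) / 8
                 for q0, q1, e0, e1 in zip(p0, p1, d0, d1))

# sanity check with x(t) = t^3 in 1-D: x(0)=0, x(1)=1, x'(0)=0, x'(1)=3
print(cubic_midpoint((0.0,), (1.0,), (0.0,), (3.0,)))  # (0.125,)
```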
Framingham Algebra 2 Tutor
Find a Framingham Algebra 2 Tutor
...I am also a certified lifeguard and lifesaver PRO. I acted as Head Guard for the town of Dennis on Cape Cod for two years. I am a recent graduate from Northeastern University's mechanical
engineering program.
11 Subjects: including algebra 2, physics, calculus, algebra 1
...I favor teaching to the student's unique needs, rather than just laying out the subject matter. Teaching elementary science for me is an opportunity to help create a sense of wonder about the
world around us. I love to explain how science develops, how scientists think, and how science can make everyday life better.
55 Subjects: including algebra 2, English, reading, algebra 1
I am a licensed and currently working teacher in Beverly, Massachusetts. I hold two licenses; one in Mathematics (8-12), and one in American Sign Language (5-12). I work at Beverly High School as
an ASL and mathematics teacher. I have a decade of experience in tutoring math despite my young age.
9 Subjects: including algebra 2, geometry, algebra 1, SAT math
...My unique approach to tutoring involves working with each student according to their individual learning style. My background in communication allows me to help students understand their
course material in a way that makes sense to them as an individual. If you're a student struggling in a part...
27 Subjects: including algebra 2, reading, writing, English
...Also, two of my three children are in speech therapy, so I am familiar with speech difficulties and delays, but these two children are also part of an accelerated learning program in our
district due to their aptitude and drive. Additionally, I have experience in most of the other hard sciences ...
66 Subjects: including algebra 2, reading, chemistry, writing
Nearby Cities With algebra 2 Tutor
Ashland, MA algebra 2 Tutors
Brookline, MA algebra 2 Tutors
Cambridge, MA algebra 2 Tutors
Holliston algebra 2 Tutors
Hopkinton, MA algebra 2 Tutors
Marlborough, MA algebra 2 Tutors
Natick algebra 2 Tutors
Needham, MA algebra 2 Tutors
Newton, MA algebra 2 Tutors
Roxbury, MA algebra 2 Tutors
Sherborn algebra 2 Tutors
Somerville, MA algebra 2 Tutors
Waltham, MA algebra 2 Tutors
Wayland, MA algebra 2 Tutors
Wellesley algebra 2 Tutors | {"url":"http://www.purplemath.com/framingham_algebra_2_tutors.php","timestamp":"2014-04-18T08:50:23Z","content_type":null,"content_length":"23941","record_id":"<urn:uuid:0f9b901d-773f-4c05-b7e4-e9578dc239b7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00536-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATH 293-294 Problems
A compilation of prelim and exam problems.
Sorted by subject. Only a few solutions.
Last revised Jan 24, 2001, all problems Copyright Cornell University 2000, 2001
These problems from Math 293 and Math 294 at Cornell have been collected and organized by undergraduates Prapun Suksompong, Metha Jeeradit, and Thu Dong. The project was initiated and supervised by
Andy Ruina.
Tex sources are available for experts. As Chapters 1, 3 , 4 were done by a one person and Chapters 2, 5, 6 were done by another person, they have organized their *.tex files differently. Hence,
Chapters 1, 3, and 4 tex files are in one zip file per chapter while Chapter 2, 5, and 6 tex files have a tex file for each section.
Click on the pdf file or solution file next to the topic of your interest.
Chapter 1
Beginning Linear Algebra Chapter 1 tex (zip file)
1.1 Introduction to Linear Systems and Row Reduction pdf solution
1.2 Solution Sets of Ax=b pdf solution
1.3 Vector and Matrix Equations pdf solution
1.4 Linear Transformation I pdf solution
1.5 Linear Independence pdf solution
1.6 Matrix Operations pdf solution
1.7 Special Matrices pdf solution
1.8 Matrix Inverse pdf solution
Chapter 2
More Linear Algebra
2.1 Determinants pdf tex solution
2.2 Introduction to Bases pdf tex solution
2.3 Vector Spaces pdf tex solution
2.4 Coordinates pdf tex solution
2.5 Spaces of a Matrix and Dimension pdf tex solution
2.6 Finite Difference Equations pdf tex solution
2.7 Eigen-stuff pdf tex solution
2.8 Linear Transformation II pdf tex solution
2.9 Orthogonality pdf tex solution
2.10 Orthogonal Projection pdf tex solution
2.11 Inner Product Spaces pdf tex solution
Chapter 3
Ordinary Differential Equations Chapter 3 tex (zip file)
3.1 1^st Order ODEs pdf solution
3.2 2^nd and Higher Order ODEs pdf solution
3.3 System of ODEs pdf solution
Chapter 4
Multi-Variable Integral Calculus Chapter 4 tex (zip file)
4.1 General 2D Integrals pdf solution
4.2 Line Integrals pdf solution
4.3 Green pdf solution
4.4 General 3D Integrals pdf solution
4.5 Surface Area pdf solution
4.6 3D Flux and Divergence pdf solution
4.7 Stokes pdf solution
Chapter 5
Fourier Series and Partial Differential Equations
5.1 Fourier pdf tex solution
5.2 General PDEs pdf tex solution
5.3 Laplace Equation pdf tex solution
5.4 Heat Equation pdf tex solution
5.5 Wave pdf tex solution
Chapter 6
6.1 Some Geometry and Kinematics pdf tex solution
The rest of the files are zipped and can be downloaded here.
A giant collection of files that includes the whole data set and all source material used to generate this data set is here (60 megabytes). | {"url":"http://ruina.tam.cornell.edu/Courses/Math293294Problems/Web/index.htm","timestamp":"2014-04-20T08:27:39Z","content_type":null,"content_length":"44308","record_id":"<urn:uuid:a058154f-692c-44d8-bd87-d41fa996bde2>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00253-ip-10-147-4-33.ec2.internal.warc.gz"} |
Engenharia Agrícola
Services on Demand
Related links
Print version ISSN 0100-6916
Eng. Agríc. vol.32 no.3 Jaboticabal May/June 2012
TECHNICAL PAPER
SOIL AND WATER ENGINEERING
Economic viability of retrofitting emitters in center pivot irrigation systems
Viabilidade econômica da troca de emissores em sistemas de irrigação tipo pivô central
Cornélio A. Zolin^I; Rubens D. Coelho^II; Janaina Paulino^III; Marcos V. Folegatti^IV
^IPesquisador da Embrapa Agrossilvopastoril, Sinop - MT, área de Manejo, Conservação e Uso de Recursos Hídricos
^IIProfessor, Departamento de Engenharia de Biossistemas, ESALQ/USP, Piracicaba - SP
^IIIProfessora temporária, Universidade Federal de Mato Grosso - UFMT, Sinop - MT
^IVProfessor, Departamento de Engenharia de Biossistemas, ESALQ/USP, Piracicaba - SP
Although several studies have been conducted to evaluate the uniformity of water application under center pivot irrigation systems, few have examined such a coefficient from an economic perspective. The aim of this study is to present a methodology for an economic analysis that supports the decision to retrofit emitters in center pivot irrigation systems, and to attribute an economic meaning to the uniformity coefficient of water application, taking into account the crop yield response function to the amount of water applied and the sale price of the crops. In the hypothetical calculation example, comparing the revenue of a potato crop irrigated with uniformity coefficients of 90% and 70%, it was verified that irrigating at 90% uniformity increased income by BR$ 1,992.00 over an area of about 1.0 ha. Thus, it can be concluded that the methodology presented met the objectives proposed in the study and made it possible to attribute an economic meaning to the uniformity coefficient of water application.
Keywords: Heermann's uniformity coefficient, production function, sale price of crops.
Embora vários estudos tenham sido conduzidos para a avaliação da uniformidade de aplicação de água por equipamentos de irrigação do tipo pivô central, são escassos os trabalhos que tenham analisado
de um ponto de vista econômico o significado de tal coeficiente. Objetivou-se com o presente trabalho apresentar uma metodologia de análise econômica como auxílio na tomada de decisão para troca de
emissores de sistemas de irrigação tipo pivô central e atribuir um significado econômico ao coeficiente de uniformidade de aplicação de água, levando-se em consideração a função de resposta da
cultura à lâmina de água aplicada e o preço de venda dos produtos agrícolas. No exemplo hipotético de cálculo, considerando-se a diferença na renda obtida com a cultura da batata irrigada com
coeficiente de uniformidade de 90% e 70%, respectivamente, verificou-se que, para um pivô de aproximadamente 1,0 ha, a irrigação com 90% de uniformidade poderia incrementar em R$ 1.992,00 a renda do
produtor. Conclui-se, portanto, que a metodologia apresentada atendeu aos objetivos propostos no trabalho e possibilitou atribuir um valor econômico ao coeficiente de uniformidade de aplicação de
Palavras-chave: coeficiente de uniformidade de Heermann, função de produção, preço de venda das culturas.
With the increased emphasis on the concept of uniformity of water distribution in irrigation systems in recent years due to the increasing need for conservation of water resources and competition for
them, high cost of energy and other supplies and lack of guarantees concerning prices of agricultural products, the choice and proper use of irrigation systems, and the adoption of appropriate
methods of water management should always be considered (SANDRI & CORTEZ, 2009).
According to BERNARDO et al. (2008) uniform distribution and efficiency of water application of sprinkler irrigation systems are essential parameters to express the quality of irrigation. Thus,
assessments of center pivots, beyond the application efficiency, should consider the uniformity of water distribution along the side of the system.
Moreover, the uniformity of water distribution and yield of irrigated crops are directly related, and the high uniformity of application can reduce percolation losses, resulting in economic and
environmental benefits.
According to FARIA et al. (2009) sprinkler irrigation systems should apply water as evenly as possible, since an uneven water application reduces the economic return and increases the environmental
impact of irrigation, due to the reduction in the productivity of irrigated crops and waste of water, energy and fertilizers. Therefore, it is of great importance that investments are made in
maintenance and manpower to improve irrigation systems in order to promote a rational water management.
Amongst the methods of sprinkler irrigation, the center pivot has shown significant expansion in domestic irrigated agriculture. This is due, among other aspects, to the fact that it is a mechanized system that can be used even in large areas with uneven topography; it offers the potential for fertirrigation; it does not interfere with agricultural practices; and it enables the application of small water depths at high frequency.
In a study published by PAULINO et al. (2011), which studied the situation of irrigated agriculture in Brazil and presented the expressiveness and evolution of the different methods and irrigation
systems, it is possible to see the potential of center-pivot systems. According to the authors, 840 000 hectares are currently irrigated by center pivot, which corresponds to 19% of the total
irrigated area in Brazil.
For analysis and evaluation of a system of center pivot irrigation, it is necessary, among other things, to have information on the distribution of the water applied by the equipment, since low
distribution uniformity can result in unsatisfactory results in homogeneous areas of management, reducing the efficiency of application (RODRIGUES et al., 2001).
FRIZZONE et al. (2007), studying the grain yield under different uniformity of water distribution on soil surface and subsurface, concluded that the quality of irrigation affected the uniformity of
soil moisture and influenced the variables of production of beans crop.
PINTO et al. (2006), studying the influence of climatic variables and the hydraulic performance of a center pivot system in western Bahia, noted that over the first 340 m from the pivot center the water depth applied was higher than the average, except in the stretch from 60 to 90 m from the center, which represented only 1.8% of the irrigated area. By contrast, in the last 164 m, which represented 45.5% of the total irrigated area, the authors observed depths below the average. It is noteworthy that these problems could probably be avoided by replacing the system's emitters with more appropriate ones, provided the diameter of the system's lateral pipe is properly sized.
In research by CASTIBLANCO (2009) on energy savings in center pivot irrigation due to improved uniformity of water distribution, the author considered different values of the uniformity coefficient of water application and compared the net revenues obtained under five different values for the m³ of water consumed and for the sale price of the bean crop.
To analyze the results, the author considered full irrigation and supplemental irrigation of 50% and 75% in dry and wet periods, and found that higher uniformity coefficients provided greater net revenues and greater energy savings at higher product prices. It was also noted that supplemental irrigation enabled higher profits and greater energy savings, especially for irrigation carried out in the wet period.
Although there are several studies in the literature regarding the uniformity of water application, in view of the importance of this parameter there are few that analyze the relationship between the uniformity coefficient and crop yield from an economic point of view.
It is known that certain values of the uniformity coefficient of water application are considered suitable, such as 90% for center pivot systems and 95% for drip systems, but this means little in economic terms. That is: what is the economic loss in terms of production if a producer irrigates a particular crop with a uniformity coefficient of 80%? And what would be the increase in profit if the producer opted to manage irrigation with a uniformity coefficient of 90%, refitting the system's emitters?
The aim of this study is to present an analysis methodology to help in the decision making with regard to the exchange of emitters used in center pivot irrigation, and assign an economic meaning to
the uniformity coefficient of water application based on the rainfall during the crop cycle, the production function (water) and the sale price of agricultural products.
The methodology to assess the economic viability of the exchange of emitters used in center pivot irrigation considers the following steps described below.
The equation proposed by Heermann & Hein (1968), adapted from Christiansen, is used to calculate the uniformity coefficient of water application (CUC[H]):
In which,
CUC[H] - uniformity coefficient of water application, %;
MP - weighted average water depth from sampled collectors, mm;
w[i] - water depth colected at collector (i), mm;
A[i] - area represented by collector (i), ha, and
N - number of collectors.
To calculate the wetted area represented by each collector i, according to its importance over the pivot, a serial number is assigned to each of them, where the first collector receives the number 1
and so on until the last collector, its representative area being calculated according to the following equation:
In which,
R - distance from the collector (i) to the center pivot, m, and
S[l] - spacing between collectors, m.
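As an illustration (my own sketch, not code from the paper), here is the Heermann & Hein computation implied by the definitions above. It assumes equally spaced collectors, so that the area weight A[i] of collector i is proportional to its serial number i, consistent with the area relation defined above:

```python
def cuc_h(depths):
    """Heermann & Hein (1968) uniformity coefficient, in %, for a
    center-pivot catch test. `depths` holds the depths w_i collected along
    the lateral, ordered from the pivot center outward; collector i
    (1-based) is weighted by i, since the annular area it represents
    grows linearly with distance from the center."""
    weights = range(1, len(depths) + 1)
    # weighted average water depth MP
    mp = sum(w * s for w, s in zip(depths, weights)) / sum(weights)
    # weighted sum of absolute deviations from MP
    dev = sum(s * abs(w - mp) for w, s in zip(depths, weights))
    return 100.0 * (1.0 - dev / (mp * sum(weights)))

print(round(cuc_h([8.0, 10.0, 12.0]), 1))  # 87.5 for this made-up catch series
```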
To simulate the viability of refitting the pivot's emitters, it is necessary to select a response function of the crop being evaluated to the water depth applied, in order to account for the amount that could be gained through the emitters' refitting. The total number of emitters along the pivot is calculated using the following equation:
In which,
N[Te] - total number of emitters, and
S[e] - spacing between emitters, m.
The total amount for the issuers is obtained by multiplying the total number of emitters (N[Te]) by its unit value.
Response functions to applied irrigation water, used for calculating the viability of the exchange of emitters, can be generically represented by a quadratic polynomial equation as follows:
In which,
Y - crop yield, Mg ha^-1;
A, B and C - constants of the equation (dimensionless), and
w - water depth applied, mm.
To obtain the optimum water depth (w*) to be applied, the crop response function to applied water is differentiated and set equal to zero; the maximum yield (Y*) is obtained by substituting the optimal depth into its respective function, and the maximum production (P*) is obtained as the product of the maximum yield and the total wetted area of the pivot.
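Spelled out for completeness (assuming the quadratic form $Y = A + Bw + Cw^2$ implied by the constants defined above): setting $\frac{dY}{dw} = B + 2Cw = 0$ gives $w^* = -\frac{B}{2C}$ (valid for $C < 0$), and substituting back, $Y^* = A - \frac{B^2}{4C}$.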
The water depth estimated to be applied over the production cycle in the area represented by collector i is obtained by the following equation:
In which,
we[i] - estimated water depth to be applied over production cycle for collector i, mm, and
C - precipitation occurred over crop production cycle, mm.
The estimated yield for the area represented by collector i (Y[i]) is obtained by substituting the value of we[i] into the production function, and the estimated total average productivity (Y[MT]) is calculated by the following equation:
Average production estimated for the total wet area of the equipment (P[ME]) can be obtained by eq.(8), where:
In which:
P[ME] - estimated average production for the total wetted area of the equipment, Mg.
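A form for Y[MT] and P[ME] consistent with the definitions above (area-weighted averaging over the collectors, as in the uniformity calculation — my reconstruction, not the published equations verbatim) is $Y_{MT} = \sum_{i=1}^{N} Y_i A_i \big/ \sum_{i=1}^{N} A_i$ and $P_{ME} = Y_{MT} \sum_{i=1}^{N} A_i$, with $Y_i$ in Mg ha^-1 and $A_i$ in ha.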
The estimated profit from the exchange of emitters is obtained by the following equation:
In which,
LTE - profit from emitters refitting, BR$;
V - crop sale price, BR$ kg^-1, and
V[e] - emitters price per unit, BR$.
Regarding eq.(9), it is important to note that the profit from the exchange of emitters can be estimated for CUC[H] values below 100%, and therefore for production below the maximum (P*) obtained with the optimal water depth. In this case, the calculation of the profit gained with the exchange of emitters must use the estimated productions under the initial and final conditions, namely before and after the emitters' exchange, as detailed in the calculation example presented below.
Hypothetical example of calculation
To exemplify the application of this methodology, two hypothetical calculation examples were performed, the first considering a CUC[H] of 70% and the second a CUC[H] of 90% (Tables 1 and 2, respectively). The examples used hypothetical water depth values collected at each collector (i) to obtain CUC[H] values of 70% and 90%. The response function of the potato crop to the water depth applied, according to Duarte (1989) cited by COELHO et al. (1998), was used in the calculation examples and is represented by the following equation:
The calculation examples considered a 50 mm precipitation during the crop cycle and the spacing between emitters and collectors as being 2.5 and 3 m respectively. The unit value of emitters to be
replaced was considered (hypothetically) to be BR$ 20.00 and the sale price per kg of potato crop was considered to be the average deflated price for the year 2007 according to the AGRIANUAL (2008).
Having the amounts shown in Tables 1 and 2, the profit made with the exchange of emitters was estimated from the difference between the estimated average yields with CUC[H] 90% and 70% respectively,
as follows.
It is important to note that the profit from the exchange of emitters obtained in the examples relates to the area irrigated by the equipment, which corresponds to approximately 1.0 ha; therefore, the larger the irrigated area covered by the center pivot system, the higher the profit will be. It is also noted that improving CUC[H] can contribute substantially to achieving higher incomes, corroborating the findings of CASTIBLANCO (2009).
Another important observation regarding the methodology involves the inclusion of precipitation in the yield calculation: the larger this variable is during the cycle, the smaller the difference in production and, consequently, the profit from the emitters' refitting.
The methodology presented met the purposes of this study, since it made it possible to quantify the monetary value of an increase or decrease in the uniformity coefficient of water application, thereby attributing an economic significance to this coefficient, an aspect that has scarcely been studied.
The authors thank the Ministry of Science and Technology (MCT), the National Council for Scientific and Technological Development (CNPq) and São Paulo Research Foundation (FAPESP), for financially
supporting this research through the National Institute of Science and Technology in Irrigation Engineering (INCTEI).
AGRIANUAL 2008. Anuário da agricultura brasileira. 13.ed. São Paulo: FNP, 2008. 504 p. [ Links ]
BERNARDO, S.; SOARES, A. A.; MANTOVANI, E. C. Manual de irrigação. 8. ed. Viçosa: UFV, 2008. 625 p. [ Links ]
CASTIBLANCO, C.J.M. Economia de energia em irrigação por pivô central em função da melhoria na uniformidade da distribuição de água. 2009. Dissertação (Mestrado em Irrigação e Drenagem) - Escola
Superior de Agricultura "Luiz de Queiroz", Piracicaba, 2009. [ Links ]
COELHO, R.D.; FOLEGATTI, M. V.; FRIZZONE, J.A. Simulação da produtividade de batata em função da regulagem do aspersor (sistema portátil). Revista Brasileira de Engenharia Agrícola e Ambiental,
Campina Grande, v.2, n.3, 1998. Disponível em: <http://www.agriambi.com.br /revista/v2n3/273.pdf>. Acesso em: 12 abr. 2010. [ Links ]
FARIA, L.C.; COLOMBO, A.; OLIVEIRA, H.F.E.; PRADO, G. Simulação da uniformidade da irrigação de sistemas convencionais de aspersão operando sob diferentes condições de vento. Engenharia Agrícola.
Jaboticabal, v.29, n.1, 2009. Disponível em: <http://www.scielo.br/scielo. php?script=sci_arttext&pid=S0100-69162009000100003&lng=pt&nrm=iso >. Acesso em: 9 abr. 2010. [ Links ]
FRIZZONE, J.A.; REZENDE, R.; GONÇALVES, A.C.A.; HELBEL, J.C. Produtividade do feijoeiro sob diferentes uniformidades de distribuição de água na superfície e na subsuperfície do solo. Engenharia
Agrícola, Jaboticabal, v.27, n.2, 2007. Disponível em: <http://www.scielo.br/scielo. php?script=sci_arttext&pid=S0100-69162007000300010&lng=pt&nrm=iso>. Acesso em: 9 abr. 2010. [ Links ]
HEERMANN, D.F.; HEIN, P.R. Performance characteristics of self-propelled center-pivot sprinklers irrigation systems. Transactions of the ASAE, St. Joseph, v. l, n.11, p.11-5, 1968. [ Links ]
PAULINO, J.; FOLEGATTI, M.V.; ZOLIN, C.A.; SÁNCHEZ-ROMÁN, R.M.; JOSÉ, J.V. Situação da agricultura irrigada no brasil de acordo com o censo agropecuário 2006. Irriga, Botucatu, v.16, n. 2, 2011.
Disponível em: <http://200.145.140.50/index.php/irriga/ article/viewFile/201/113>. Acesso em: 5 fev. 2012. [ Links ]
PINTO, J.M.; SILVA, C.L.; OLIVEIRA, C.A.S. Influência de variáveis climáticas e hidráulicas no desempenho da irrigação de um pivô central no oeste baiano. Engenharia Agrícola, Jaboticabal, v.26, n.1,
2006. Disponível em: <http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0100-69162006000100009&lng=pt&nrm=iso >. Acesso em: 9 abr. 2010. [ Links ]
RODRIGUES, T.R.I.; BATISTA, H.S.; CARVALHO, J.M.; ALEXANDRE, O.; GONÇALVES, A.O,; MATSURA, E.E. Uniformidade de distribuição de água em pivô central, com a utilização da técnica TDR na superfície e
no interior do solo. Revista Brasileira de Engenharia Agrícola e Ambiental, Campina Grande, v.5, n.2, 2001. Disponível em: <http://www.scielo.br/scielo. php?script=sci_arttext&pid=
S1415-43662001000200002&lng=en&nrm=iso&tlng=pt>. Acesso em: 20 mar. 2010. [ Links ]
SANDRI, D.; CORTEZ, D.A. Parâmetros de desempenho de dezesseis equipamentos de irrigação por pivô central. Ciência e Agrotecnologia, Lavras, v.33, n.1, p.271-278, 2009. [ Links ]
Recebido pelo Conselho Editorial em: 23-4-2010
Aprovado pelo Conselho Editorial em: 7-2-2012 | {"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0100-69162012000300019&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-19T03:32:41Z","content_type":null,"content_length":"49883","record_id":"<urn:uuid:ef6b2802-daea-4219-a2a5-cb3d6da3d421>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00446-ip-10-147-4-33.ec2.internal.warc.gz"} |
{-# LANGUAGE MultiParamTypeClasses, ScopedTypeVariables, FlexibleContexts #-}
module Math.Root.Finder where
import Control.Monad.Instances ()
import Data.Tagged
-- |General interface for numerical root finders.
class RootFinder r a b where
    -- |@initRootFinder f x0 x1@: Initialize a root finder for the given
    -- function with the initial bracketing interval (x0,x1).
    initRootFinder :: (a -> b) -> a -> a -> r a b

    -- |Step a root finder for the given function (which should generally
    -- be the same one passed to @initRootFinder@), refining the finder's
    -- estimate of the location of a root.
    stepRootFinder :: (a -> b) -> r a b -> r a b

    -- |Extract the finder's current estimate of the position of a root.
    estimateRoot :: r a b -> a

    -- |Extract the finder's current estimate of the upper bound of the
    -- distance from @estimateRoot@ to an actual root in the function.
    -- Generally, @estimateRoot r@ +- @estimateError r@ should bracket
    -- a root of the function.
    estimateError :: r a b -> a

    -- |Test whether a root finding algorithm has converged to a given
    -- relative accuracy.
    converged :: (Num a, Ord a) => a -> r a b -> Bool
    converged xacc r = abs (estimateError r) <= abs xacc

    -- |Default number of steps after which root finding will be deemed
    -- to have failed. Purely a convenience used to control the behavior
    -- of built-in functions such as 'findRoot' and 'traceRoot'. The
    -- default value is 250.
    defaultNSteps :: Tagged (r a b) Int
    defaultNSteps = Tagged 250
-- |@traceRoot f x0 x1 mbEps@ initializes a root finder and repeatedly
-- steps it, returning each step of the process in a list. When the algorithm
-- terminates or the 'defaultNSteps' limit is exceeded, the list ends.
-- Termination criteria depends on @mbEps@; if it is of the form @Just eps@
-- then convergence to @eps@ is used (using the @converged@ method of the
-- root finder). Otherwise, the trace is not terminated until subsequent
-- states are equal (according to '=='). This is a stricter condition than
-- convergence to 0; subsequent states may have converged to zero but as long
-- as any internal state changes the trace will continue.
traceRoot :: (Eq (r a b), RootFinder r a b, Num a, Ord a) =>
(a -> b) -> a -> a -> Maybe a -> [r a b]
traceRoot f a b xacc = go nSteps start (stepRootFinder f start)
    where
        Tagged nSteps = (const :: Tagged a b -> a -> Tagged a b) defaultNSteps start
        start = initRootFinder f a b

        -- lookahead 1; if tracing with no convergence test, apply a
        -- naive test to bail out if the root stops changing. This is
        -- provided because that's not always the same as convergence to 0,
        -- and the main purpose of this function is to watch what actually
        -- happens inside the root finder.
        go n x next
            | maybe (x==next) (flip converged x) xacc = [x]
            | n <= 0    = []
            | otherwise = x : go (n-1) next (stepRootFinder f next)
-- |@findRoot f x0 x1 eps@ initializes a root finder and repeatedly
-- steps it. When the algorithm converges to @eps@ or the 'defaultNSteps'
-- limit is exceeded, the current best guess is returned, with the @Right@
-- constructor indicating successful convergence or the @Left@ constructor
-- indicating failure to converge.
findRoot :: (RootFinder r a b, Num a, Ord a) =>
(a -> b) -> a -> a -> a -> Either (r a b) (r a b)
findRoot f a b xacc = go nSteps start
    where
        Tagged nSteps = (const :: Tagged a b -> a -> Tagged a b) defaultNSteps start
        start = initRootFinder f a b

        go n x
            | converged xacc x = Right x
            | n <= 0    = Left x
            | otherwise = go (n-1) (stepRootFinder f x)
-- |A useful constant: 'eps' is (for most 'RealFloat' types) the smallest
-- positive number such that @1 + eps /= 1@.
eps :: RealFloat a => a
eps = eps'
    where eps' = encodeFloat 1 (1 - floatDigits eps')
Overbrook Hills, PA Prealgebra Tutor
Find an Overbrook Hills, PA Prealgebra Tutor
...I studied French in high school and I am now studying both Greek and Latin. In addition, I've spent time as an SAT tutor where I taught students vocabulary and tips for memorizing vocabulary.
While studying Latin, I've developed a greater understanding of English grammar.
10 Subjects: including prealgebra, algebra 1, vocabulary, grammar
...Know the penalties for wrong answers. My personal notes will help the student master the basics, then expand to harder problems. It's not necessary to do all the problems, but you MUST get the
easy and intermediate ones right!
35 Subjects: including prealgebra, English, reading, chemistry
...I find that when students are really engaged in their writing, they often write much better. I studied English and World Literature at Schreyer Honors College at Penn State. I am certified to
teach English at the secondary level, but I am willing to tutor students of all ages.
12 Subjects: including prealgebra, reading, English, writing
...These are the courses whose knowledge I utilize every day as part of my job. I currently have my BS in Biology and am continuing my education, seeking a master's in Environmental Engineering. In
going back to school for my master's I have had to pick up several more calculus classes and many more engineering classes, so I have a strong knowledge of science and math.
16 Subjects: including prealgebra, physics, geometry, biology
...I am an artist and graphic designer looking to share some of the knowledge and skills I have acquired over the years. I earned a BFA in Visual Communication Design with an emphasis in
Illustration from the University of Dayton in 2001. In 2005 I completed an MFA in Painting at Northern Illinois University.
15 Subjects: including prealgebra, reading, writing, geometry
Related Overbrook Hills, PA Tutors
Overbrook Hills, PA Accounting Tutors
Overbrook Hills, PA ACT Tutors
Overbrook Hills, PA Algebra Tutors
Overbrook Hills, PA Algebra 2 Tutors
Overbrook Hills, PA Calculus Tutors
Overbrook Hills, PA Geometry Tutors
Overbrook Hills, PA Math Tutors
Overbrook Hills, PA Prealgebra Tutors
Overbrook Hills, PA Precalculus Tutors
Overbrook Hills, PA SAT Tutors
Overbrook Hills, PA SAT Math Tutors
Overbrook Hills, PA Science Tutors
Overbrook Hills, PA Statistics Tutors
Overbrook Hills, PA Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Bala, PA prealgebra Tutors
Belmont Hills, PA prealgebra Tutors
Bywood, PA prealgebra Tutors
Carroll Park, PA prealgebra Tutors
Cynwyd, PA prealgebra Tutors
Drexelbrook, PA prealgebra Tutors
Kirklyn, PA prealgebra Tutors
Llanerch, PA prealgebra Tutors
Merion Park, PA prealgebra Tutors
Merion Station, PA prealgebra Tutors
Merion, PA prealgebra Tutors
Oakview, PA prealgebra Tutors
Penn Valley, PA prealgebra Tutors
Penn Wynne, PA prealgebra Tutors
Westbrook Park, PA prealgebra Tutors
Who named Catalan numbers?
The question. A year ago, on this blog, I investigated Who computed Catalan numbers. Short version: it’s Euler, but many others did a lot of interesting work soon afterwards. I even made a
Catalan Numbers Page with many historical and other documents. But I always assumed that the dubious honor of naming them after Eugène Catalan belongs to Netto. However, recently I saw this site
which suggested that it was E.T. Bell who named the sequence. This didn’t seem right, as Bell was both a notable combinatorialist and mathematical historian. So I decided to investigate who did
the deed.
First, I looked at Netto's Lehrbuch der Combinatorik (1901). Although my German is minuscule and based on my knowledge of English and Yiddish (very little of the latter, to be sure), it was clear
that Netto simply preferred counting of Catalan's brackets to triangulations and other equivalent combinatorial interpretations. He did single out Catalan's work, but mentioned Rodrigues's work as
well. In general, Netto wasn't particularly careful with the references, but in fairness neither were most of his contemporaries. In any event, he never specifically mentioned "Catalan numbers".
Second, I checked the above-mentioned 1938 Bell paper in the Annals. As I suspected, Bell mentioned "Catalan's numbers" only in passing, and not in a way to suggest that Catalan invented them.
In fact, he used the term "Euler-Segner sequence" and provided careful historical and more recent references.
Next on my list was John Riordan's Math Review MR0024411 of this 1948 paper of Motzkin. The review starts with "The Catalan numbers…", and indeed might have been the first time this name was
introduced. However, it is naive to believe that this MR moved many people to use this expression over the arguably more cumbersome "Euler-Segner sequence". In fact, Motzkin himself is very careful
to cite Euler, Cayley, Kirkman, Liouville, and others. My guess is this review was immediately forgotten, but was a harbinger of things to come.
Curiously, Riordan does this again in 1964, in a Math Review on an English translation of a popular mathematics book by A.M. Yaglom and I.M. Yaglom (published in Russian in 1954). The book mentions
the sequence in the context of counting triangulations of an n-gon, without calling it by any name, but Riordan recognizes them and uses the term "Catalan numbers" in the review.
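For reference, the sequence everyone has been naming and renaming is C_n = C(2n, n)/(n+1), and the triangulations of a convex n-gon studied by Euler and Segner number C_{n-2}. A quick check of the first values (standard formula, not taken from any of the reviews discussed):

```python
from math import comb

def catalan(n: int) -> int:
    """n-th Catalan number, C_n = C(2n, n) // (n + 1)."""
    return comb(2 * n, n) // (n + 1)

print([catalan(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
# Triangulations of a convex hexagon: C_{6-2} = C_4 = 14
print(catalan(4))
```
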
The answer. To understand what really happened, see this Ngram chart. It clearly shows that the term “Catalan numbers” took off after 1968. What happened? Google Books immediately answers –
Riordan’s Combinatorial Identities was published in 1968 and it used “the Catalan numbers”. The term took off and became standard within a few years.
What gives? It seems people really like to read books. Intentionally or unintentionally, monographs tend to standardize the definitions, notations, and names of mathematical objects. In his notes
on Mathematical writing, Knuth mentions that the term "NP-complete problem" became standard after it was used by Aho, Hopcroft and Ullman in their famous Data Structures and Algorithms textbook.
Similarly, Macdonald's Symmetric Functions and Hall Polynomials became a standard source of names for everything in the area, just as Stanley predicted in his prescient review.
The same thing happened to Riordan's book. Although it may now be viewed as tedious, somewhat disorganized and unnecessarily simplistic (Riordan admitted to disliking differential equations, complex
analysis, etc.), back in the day there was nothing better. It was lauded as "excellent and stimulating" in P.R. Stein's review, which continued: "Combinatorial identities is, in fact, a book
that must be read, from cover to cover, and several times." We are guessing it had a tremendous influence on the field and cemented the terminology and some notation.
In conclusion. We don't know why Riordan chose the term "Catalan numbers". As Motzkin's paper shows, he clearly knew of Euler's pioneering work. Maybe he wanted to honor Catalan for his early
important work on the sequence. Or maybe he just liked the way it sounds. But Riordan clearly made a conscious decision to popularize the term back in 1948, and eventually succeeded.
UPDATE (Feb. 8, 2014) Looks like Henry Gould agrees with me (ht. Peter Luschny). He is, of course, the author of a definitive bibliography of Catalan numbers. Also, see this curious argument
against naming mathematical terms after people (ht. Reinhard Zumkeller).
1. February 12, 2014 at 2:33 pm |
In spite of priority, certainly there is some virtue to calling them Catalan numbers because “Eulerian numbers” would be very ambiguous! Terms like “Catalan objects”, “rational Catalan
combinatorics”, etc., become immediately comprehensible thanks to this historical (mis)attribution.
2. February 12, 2014 at 6:47 pm |
I agree. The point of this post was not to express grievances about mis-named objects, but to point out that using modern tools like Ngram one can indeed figure out exactly who named them. And
even for the notoriously famous sequences like Catalan numbers, the answer is surprising (at least, it was surprising to me).
Westvern, CA Trigonometry Tutor
Find a Westvern, CA Trigonometry Tutor
...I began programming in high school, so the first advanced math that I did was discrete math (using Knuth's book called Discrete Mathematics). I have also participated in high school math
competitions (ie AIME) and a college math competition (the Putnam) for several years, and in both cases the ma...
28 Subjects: including trigonometry, Spanish, chemistry, French
...I am always on time and in case of emergency: I notify the student as early as possible. Tutoring is my passion and I always look for an opportunity to aid a student, to improve his or her
skills, and to bring out his or her talent. Albert Einstein said once: "It is the supreme art of the teac...
11 Subjects: including trigonometry, chemistry, algebra 2, geometry
...I have been proofreading student essays for years, both in person and online. I was the coordinator of Biola University's Writing Center, where I worked one on one with undergraduate and
graduate students to improve their essays so their voices came through. I am well acquainted with test preparation skills.
22 Subjects: including trigonometry, English, ACT Reading, ACT Math
...Having received a B.A. in Music Theory/History from Oberlin College, I am officially trained to teach you about music. In particular I love giving music lessons on flute and saxophone, but
also basic voice lessons and aural skills. I also have a great background in arranging music (I took a few...
18 Subjects: including trigonometry, chemistry, calculus, algebra 2
...I have worked hard to overcome my own problems with getting distracted and having test anxiety. I work by helping students to build their vocabulary and to extract the critical information
from the reading passages. I have been programming in C# actively for the past 3 years and have programmed in a similar language, Java, since the year 2000.
42 Subjects: including trigonometry, reading, Spanish, chemistry
Related Westvern, CA Tutors
Westvern, CA Accounting Tutors
Westvern, CA ACT Tutors
Westvern, CA Algebra Tutors
Westvern, CA Algebra 2 Tutors
Westvern, CA Calculus Tutors
Westvern, CA Geometry Tutors
Westvern, CA Math Tutors
Westvern, CA Prealgebra Tutors
Westvern, CA Precalculus Tutors
Westvern, CA SAT Tutors
Westvern, CA SAT Math Tutors
Westvern, CA Science Tutors
Westvern, CA Statistics Tutors
Westvern, CA Trigonometry Tutors
Nearby Cities With trigonometry Tutor
Broadway Manchester, CA trigonometry Tutors
Cimarron, CA trigonometry Tutors
Dockweiler, CA trigonometry Tutors
Dowtown Carrier Annex, CA trigonometry Tutors
Foy, CA trigonometry Tutors
Green, CA trigonometry Tutors
La Tijera, CA trigonometry Tutors
Lafayette Square, LA trigonometry Tutors
Miracle Mile, CA trigonometry Tutors
Pico Heights, CA trigonometry Tutors
Preuss, CA trigonometry Tutors
Rimpau, CA trigonometry Tutors
View Park, CA trigonometry Tutors
Wagner, CA trigonometry Tutors
Windsor Hills, CA trigonometry Tutors
Twelve 10" NHT Subwoofer Build.
post #1 of 274
8/27/08 at 7:27pm
Thread Starter
Well, I'm about to start my basement's final subwoofer design and I might need a little bit of advice along the way with this one.
I've decided to go a slightly different way than two 12" or 15" subs.
I just ordered 12 of NHT's 10" subwoofers (new) that used to be used in their "SubTwo" boxes. My understanding is that they perform pretty good, so I figure why not overkill the whole thing and just
use 10-12 to be on the safe side.
Anyway, here are the specs: I have to be honest, I'm in the learning stage here, so I don't understand most of these numbers just yet.
Re: 9.58
Qms: 2.952
Qes: .444
Qts: .386
Vas: 103.84
Cms: .00561
Mms: 97.8
BL: 16.87
SPL: 85.47
Zmax: 73.2
Sd: .03631
Xmax: 12.5
Hg: 8
Hvc: 33
Photo of the old NHT SubTwo:
Their product sheet says:
Frequency Response:21Hz - 180Hz, +/-3dB
If anyone has any recommendations on box shape, or cubic ft per sub, it would be much appreciated. I have read that they should be in a sealed box, but I'm not sure of the size just yet. Power will
be from an EP1500 or EP2500.
Because I listen to music about 30-35%, I thought that this idea might make some sense.
What if I make 4 separate boxes, each holding 3 subs. Each box would be wired at 4 Ohms.
Could I put 2 boxes in the front, and 2 boxes in the back of the room, then wire the fronts together as 8 Ohms for one amp channel, and wire the backs together for the other channel. That way I could
just use channel one for listening to music, but run both channels at the same time when I watch movies?
I simply don't know how loud these things will be and I'm not sure if I will need more than the 450 watts per channel that the EP2500 does at 8 Ohms.
Plus would having a fairly common 4 Ohm wired box may be better later down the road if they get split up?
I was thinking about 10's just the other day.
So I calculated a pair of eD 18's vs. eight eD 10's. Came out to be almost the same output, with the two 18's having just slightly more output.
I decided to stick with my plan of two (or four) 18's. Especially when I factored in that two eD 18's would be under $400 while eight 10's would be more than twice as expensive.
Your having four separate boxes should smooth out the room response if used in separate corners, according to what I've read in this forum.
Originally Posted by Erich H
Because I listen to music about 30-35%, I thought that this idea might make some sense.
What if I make 4 separate boxes, each holding 3 subs. Each box would be wired at 4 Ohms.
Could I put 2 boxes in the front, and 2 boxes in the back of the room, then wire the fronts together as 8 Ohms for one amp channel, and wire the backs together for the other channel. That way I could
just use channel one for listening to music, but run both channels at the same time when I watch movies?
I simply don't know how loud these things will be and I'm not sure if I will need more than the 450 watts per channel that the EP2500 does at 8 Ohms.
Plus would having a fairly common 4 Ohm wired box may be better later down the road if they get split up?
If you're going with an EP2500 and four boxes @ 4ohms each, then you'll want to run the amp in mono. Wire up pairs of boxes in 8 ohms, then bring the pairs together to present a 4ohm mono load for
the EP2500. This would use max output from the amp. Just make sure to use something like 10guage speaker wire going to each box, if you're planning on having them far away from the amp.
I built 2 sealed boxes each with 2 drivers and powered by Oaudio 500w amps-2cuft each box. Never measured them but performed well for HT under my mains. The Dual 10 ported end table version in
6.5cuft blew them away on HT but didn't like sub 20hz content-the subsonic filter in the amp wasn't steep enough to control excursion and the drivers would bottom out. I've since disabled both
versions and I'm trying these out in H frame dipole alignment as bass bins crossed high around 200hz below OB MMTs. I think six dual driver towers placed around your room would be pretty intense.
Maybe 2 up front with a higher Q and 2 sides and 2 back in around 3 cubes each and crossed a bit lower-say 40hz. Either way sounds like fun. Good luck
The 10" subwoofers came to about $400 shipped. So that's pretty good I thought. I can make sealed enclosures and keep the box height low enough to go under my screen. I have low ceilings in my basement.
I'm not sure how much power these 10's will need, but the originals were 500 watts with 2 in each box. I think the singles had 250 watts?
If that's the case, wouldn't running the EP2500 in 4 Ohm bridged be way more than needed? Well, I mean wouldn't running it at 8 Ohms and 2 channels be plenty and not over work the subs or the amp?
I guess I'm thinking of how nice it would be if I could run just the front set for music if needed.
Maybe it just won't work. I have never used an amp like the EP2500.
12 each subs @ 250 watts each = 3000 watts.
EP2500 rated at 2400 watts 4ohm mono or 2x1200 watts @ 2ohms.
So you could alternatively run two boxes at 2ohms on one channel, and 2 at 2ohms on the other channel.
Twelve of those 10s should be great!
I actually have 8 of these sitting in boxes waiting for me to finalize a design. I am planning on building 2 sealed boxes with 4 subwoofers each (probably at opposite ends of the box). When wired in
parallel, each box will be about 2.5 ohms and each will be running on a channel of an EP2500.
I certainly won't be turning the gain up all the way because according to WinISD my cones are launching across the room if I give them too much power. The boxes are probably going to be about 4 cubic
feet. On the TS parameters sheet, Jack recommends 1.278 cubic foot per subwoofer, but 1 cubic foot per subwoofer increases power handling without getting Q too high.
As for your build, there are obviously many combinations possible. You could do the four boxes you mentioned, which seems like the magical subwoofer number. You could also build two boxes, six in
each, and put these between your main speakers or corner loaded at the front (or rear) of the room.
Have you played around with WinISD and these subwoofers much?
It's best not to wire the boxes in series if they are separated by long wire runs.
What I have in mind would all be based on the woofers being wired in series pairs, 2 or 4 per enclosure.
Each driver is 12Ω nominal and when wired in series pairs they will handle the full output of the EP2500 running bridged(~96v), in an appropriate box.
The power will be evenly distributed across the woofers ~200W@.
Two towers ~full room height up font in the corners loaded with 4 drivers each, and two boxes with 2 drivers each distributed to even out room modes further.
I do believe going with the front towers over distributing boxes around the perimeter will help lock in the bass to sound like it's part of the font stage. A dual vertical array should also help with
room modes vs lower perimeter placement
Like so:
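Dan's "~200 W per driver" figure checks out with simple V²/R arithmetic. The sketch below treats each driver as a flat 12 Ω nominal load and assumes roughly 96 V from the bridged EP2500 (both assumptions; real impedance varies with frequency, so this is only ballpark):

```python
V_BRIDGED = 96.0  # approx. bridged EP2500 output voltage (assumed)
Z_DRIVER = 12.0   # nominal driver impedance, ohms
N_DRIVERS = 12

z_pair = 2 * Z_DRIVER               # series pair: 24 ohms
n_pairs = N_DRIVERS // 2
z_total = z_pair / n_pairs          # six pairs in parallel: 4 ohms
p_total = V_BRIDGED ** 2 / z_total  # total power into the array
p_per_driver = p_total / N_DRIVERS  # evenly split across identical drivers

print(z_total, round(p_total), round(p_per_driver))  # 4.0 2304 192
```
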
Dan, I think that's a really slick idea. Would they cause issues sitting directly behind my front speakers? Heck, I could always get something to replace those ported rti-8's.
I have also considered shrinking down my screen size a bit if needed.
What if I built the box so that the front is 13" wide. That would allow me to have the option of an array standing up on a small pedestal in the corner, or removing the pedestal and laying them
along the front floor if needed for future placement issues. Maybe something like 13"x60" housing 4 subs each? I could make the pedestal and/or cap height whatever is needed to fill the corners.
I did try, but I'm not sure how well I did. I need to work more on that.
THEN, today I got an e-mail from the guy at NHT that is selling these. He said he made a slight mistake and they didn't have 12, only 7.
So, he actually upgraded my order to the 083 model at no extra cost. I thought that was pretty nice as the unit is $10 more.
They are a little bit different, but not much. Here is the link to the actual specs. Honestly, the vast majority of this is a bit Greek to me right now.
In his email, he mentioned that if I went sealed, the recommendation was for 2 cu ft per driver. He also said "This will give you an anechoic -3dB point of 30Hz. Room gain will drop the -3dB
frequency even lower."
Obviously I have multiple ways to wire them, but he did mention this, which I believe is along the same line as Neo Dan mentioned.
"wire the drivers 6 in parallel, then both groups in series ( 3.17ohms at DC) or 4 drivers in parallel then all 3 groups in series (7.15ohms at DC). The actual impedance will be much higher than
these values."
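Those DC figures are straight series/parallel arithmetic. In the sketch below, Re of about 9.5 Ω per driver is an assumption inferred back from NHT's quoted numbers, not a published spec for the 083:

```python
def groups_resistance(re_driver: float, per_group: int, n_groups: int) -> float:
    """DC resistance of n_groups identical groups wired in series,
    where each group is per_group drivers in parallel."""
    r_group = re_driver / per_group
    return r_group * n_groups

RE = 9.5  # assumed DC resistance per driver, ohms

# 6 in parallel, both groups in series: close to NHT's quoted 3.17 ohms
print(round(groups_resistance(RE, 6, 2), 2))
# 4 in parallel, all 3 groups in series: close to NHT's quoted 7.15 ohms
print(round(groups_resistance(RE, 4, 3), 2))
```
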
I'd like to get started on these boxes this weekend if I have the time.
I have read about people porting these 083 models, is there any reason for doing that if I have 12??? Or should I just keep it fairly simple with sealed and lots of power?
I'm looking at the page linked in the above post... which set of T/S parameters should I be modeling with to model this driver?
There are T/S parameters listed for:
Small Signal
I started with the Large+warm (thinking this was large signal when the driver's warm) and things looked pretty reasonable until I noticed the FS of 10Hz... does that seem reasonable?
Any help would be appreciated!
4 per box 8'³ sealed is good
4 per box 10'³ @ ~15.75Hz via 1-6" x 28" port would be nice
Going ported would require a high pass filter the Behringer MIC2200 can do this and convert your RCA out to balanced line to drive the amp. It's goes for ~$100 or less.
Look in the excel file:
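The ~15.75 Hz tuning for a 6" x 28" port in 10 ft³ is consistent with the standard Helmholtz resonator formula, fb = (c / 2π) * sqrt(A / (V * L_eff)). The sketch below assumes one flanged and one free port end for the end correction, so it lands a bit under 16 Hz rather than exactly on that number:

```python
import math

def port_tuning_hz(box_ft3: float, d_in: float, l_in: float) -> float:
    """Helmholtz tuning frequency for one round port.
    Assumes one flanged + one free end (end correction ~0.73 * diameter)."""
    c = 343.0                         # speed of sound, m/s
    v = box_ft3 * 0.0283168           # box volume, m^3
    d = d_in * 0.0254                 # port diameter, m
    area = math.pi * (d / 2) ** 2     # port cross-section, m^2
    l_eff = l_in * 0.0254 + 0.73 * d  # physical length + end correction
    return (c / (2 * math.pi)) * math.sqrt(area / (v * l_eff))

print(round(port_tuning_hz(10, 6, 28), 1))  # in the ~15-16 Hz ballpark
```
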
Please reread the 2nd paragraph on the sale page. For T/S modeling you need to use the T/S parameters located in the Excel spreadsheet.
I've been thinking about this build for a few days now and I appreciate the extra help guys, I really do. One goal I was hoping for was to be able to run a lesser number of these for music. But I'm
not so sure this will be possible because of the Ohms and the EP2500.
I suppose there isn't much difference whether I go 2 x 1200 at 2 Ohms or 1 x 2400 at 4Ohms?
If there is no difference, then I keep trying to get the fronts on one channel, and the rest on the other channel.
My only thought was this, which is taking a few ideas here and putting them together.
Using Neo Dan's method for the front array. But only using 3 per box giving a 4 Ohm load to each. One on each side, wired to give me 2 Ohms.
Then have another set of 3 directly in the middle under the center channel, and the last box behind the seating area.
Each sub would be getting 200 watts, and I would have the ability to use the front channel for music, or both channels at the same time for theater.
I know it's more building, but that's okay.
Would that idea make any sense at all? I think I would feel a bit more comfortable being able to control 2 channels and have 4 Ohm boxes.
you could do it but I wouldn't recommend it.
what AVR are you using?
Do you have a balanced line converter or EQ that has balanced lines out?
Why do you want to use less subwoofers when listening to music?
I have a Denon 987. I will be getting some type of EQ at least for the subs. The Denon does have preamp outs and line inputs per channel. It also has built in EQ's for each speaker as well. I haven't
used that yet.
Jack, I will eventually be building my own fronts once I get a better handle on all of the lingo and learn more. Likely using some more of your speakers.
But right now I have Polk rt1-8's in the front, a CSi-3 for the center, and FXi-A4's for surround. I don't think they can keep up with all 12 subs for music.
Maybe this is just my own perception, but for music I seem to like the bass coming from up front by the mains. I've been moving subs around in my room for about 6 months and while the back is
definitely stronger bass in my room, once I put two 12" subs on the front wall, it just sounded right to me.
Because of the ridiculous number of times I have moved subs, I'm a little worried about building this system and not having many options once built. I'd be pretty well locked down because of the
unusual Ohm load for the different boxes and they would all need to be wired exactly one way for the amp. I'm not sure I could make much change in the room without rebuilding every single box.
One question I have never been able to find an answer for is pretty simple: Will I get the same solid music kick from 12 speakers running at 25%, or would it sound more solid running 6 at 50%? I
would think that running all of them at a much lower volume would give you more of a muddy bass sound without the extra punch.
Does that Denon allow custom settings depending upon which source you select? If so, you could stay with the suggested four corner approach for HT via the DVD player, and then just the front corners
with your mains for stereo music listening via the CD player. Not as easily selected if you are listening to SACD or DVD-A from the DVD player though, but those are normally enjoyed as surround sound
Where do you crossover now (subwoofer x-over in the AVR)? Do you run the mains fullrange?
The Denon does have 3 "User Modes". I have one set up for CD music, one for HT, and the other is for the tuner.
I have an amp that allows me to switch between 2 pairs of speakers (A+B). I have the A's set up with 2 15" sonotubes for HT and the B is set up for 2 12" ED 13ov.2's for music. Yes, I know I'm a sub
So right now I just turn on the reciever, hit the user mode to whichever, then pick the proper set of subs.
Having only the 15's on in the back while listening to music (or any sub in the back) makes it seem directional or disconnected because they are 22' away from the fronts; not very good at all for
music. But not a big deal for movies for some reason....sounds okay.
I guess that's why I am worried about not being able to turn off the rear subs for music. Maybe I don't need to worry. But if there's chance for issues, I won't know until after the build.
All speakers are set to small. Crossover is set to 80.
Dan, you mentioned that you don't recommend the four 4 Ohm boxes. Is this because of amp issues between 4 Ohm mono versus 2 channel at 2 Ohms?
Will I be heating my basement with this thing running at 2 Ohms??
Chances are, I won't be able to turn this thing up anywhere near what it can do. It would start buckling the plaster walls upstairs!
You should be comparing the difference between the responses of the two locations with the same subwoofers by moving the eD subs back to the rear and measuring/listening for the difference.
If you MUST have 4 boxes buy 4 more drivers, still going with 4 drivers per box wired series parallel. I'd still run all or most of it up front, maximum one box in the rear. Most likely in the center
position. probably down firing.
With your low SPL requirements I would consider getting a bassis, with the 12 or more drivers you should be able to tweak in a nice low end.
With subs up front you should be able to raise the x-over up to 120Hz or so, giving you more punch from the subs and taking the workload off the Polk towers.
Have you ever considered moving the Polk's out into the room further, and maybe a tad closer to each other. It may seriously improve your sound-stage, and allow you to run without the center channel.
A huge improvement, imagine the sound coming out of the screen.
I will try to to move the Polks out farther from the wall then and see how that goes. I have been thinking that things are just too spaced out.
I have tried the ED subs in the back and I do get much more bass, but it doesn't sound as nice. There seems to be just one sweet spot in that back area, but it also sounds disconnected and very
obvious to pinpoint its location. So yeah, your idea of keeping most up front is definitely going to happen.
I believe I have a serious spike somewhere in the back. I can easily hit 115dB with some quickly made 15" Sonotubes hooked up to an amp feeding 100 watts to each. Amp is set around the 10:00
location, not even half way up. Denon sub level is at '0'. They are 800-1000 watt subs. And I've got a decent amount of acoustic panels and traps up as well.
Just to be sure.
The idea of 1 box of 3 in each corner as you drew out earlier (sitting on a base to get it up higher).....then one box of 3 right in the middle under the screen......total of 9 up front....then one
downfiring box of 3 in the back.......do you think that would be okay? I know it might not be ideal, but will I notice a huge difference with either of these methods:
9 up front, 3 in the back
or 8 up front, 4 in the back
Not that I'm all for looks on this, but the 3 per box trio up front might look pretty cool staring back at you too!
Tough decisions.
P.S.- I ordered the EP2500 today.
Originally Posted by NEO Dan
4 per box 8'³ sealed is good
4 per box 10'³ @ ~15.75Hz via 1-6" x 28" port would be nice
Going ported would require a high pass filter the Behringer MIC2200 can do this and convert your RCA out to balanced line to drive the amp. It's goes for ~$100 or less.
Look in the excel file:
Thanks for the link!
My math on figuring resistance is a bit sketchy (especially running 3 drivers per box). What are the wiring options for boxes of three drivers and what resistances do you end up with? I'm running
some models in Unibox and need to know how much power each driver would be seeing.
I've been doing the measurements for boxes of 3, 4, and even 6 per box to try and make this work the best (as in appearance too).
My reasoning for the 3 per box deal was also based on the thought that I might be moving in a couple years. Making a box that is roughly 80" tall might not fit at the next place. Granted I can just
rebuild at that point.
I was using WinISD, but it was the free version and not giving me much data. So I did download Unibox and I'm currently running numbers in that. I seem to be getting an odd "max power input" figure.
It seems very low per sub. I'll keep at it though.
I think my subwoofers and amp will be arriving tomorrow.
I tried to do a few quick designs for the boxes and would like to get some feedback. I just started using SketchUp (as you'll soon notice) and tried coming up with some different layouts. The cu/ft
listed would be before drivers and bracing.
Any ideas on these?
4 in a sealed box:
4 in a ported box:
6 in a sealed box:
Or another idea I was thinking about:
Now that last one is interesting to me. But I'm not quite ready for the whole fullrange crossover deal just yet. However I thought that I could build a small recessed rectangle where the mids and
tweeter would be.....big enough to slide a center channel or bookshelf unit right into it. Maybe somehow allowing it to swivel and be directional? Okay, that might be a bit odd, it was just a thought.
Any comments are appreciated. And feel free to poke fun at my Sketch Up work too!
Looks good, Erich.
I modeled your boxes in WinISD and I would shrink the sealed boxes and make the vented box bigger.
The Q for the sealed boxes is around .714 if you take the 4 subwoofer box down to 6 cubic feet and take the 6 subwoofer box down to 9 cubic feet. The vented box gives you a better f3 if you take it
to 14 cubic feet and tune it to 20 Hz.
See what NEODan has to say!
One thing to note...if you do 2 boxes of six you can put each box on a channel of the amp. If you do 3 boxes of 4, you'll be doing two boxes on one channel of the amp and 1 box on the other
channel...it's not too big of a deal but something to think about.
I think you are modeling the npt-11-075-2 woofers. Erich is using the npt-11-083-x woofers, which have a softer spider and therefore slightly different T/S parameters. The 083 should have a Qtc of
about 0.7 in 2 cuft.
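The Qtc claims above follow from the closed-box relation Qtc = Qts * sqrt(1 + Vas/Vb). The check below uses the 075-series parameters listed at the top of the thread (Qts 0.386, Vas 103.84 litres); the 083's softer spider gives slightly different numbers, so treat it as illustrative:

```python
import math

def qtc_sealed(qts: float, vas_l: float, vb_l: float) -> float:
    """Total system Q of a driver in a sealed box of net volume vb_l (litres)."""
    return qts * math.sqrt(1.0 + vas_l / vb_l)

QTS, VAS_L = 0.386, 103.84
L_PER_FT3 = 28.3168  # litres per cubic foot

# Qtc at 2 ft^3 per driver, as NHT suggested (comes out a bit under 0.7):
print(round(qtc_sealed(QTS, VAS_L, 2 * L_PER_FT3), 2))
# Box volume per driver needed for Qtc = 0.707:
vb = VAS_L / ((0.707 / QTS) ** 2 - 1.0)
print(round(vb / L_PER_FT3, 2), "ft^3 per driver")
```
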
Thanks guys. Those are sort of approximate values on size. I can't go much taller than 80". I could go a bit wider and I suppose a little bit deeper.
Sealed was my initial thought and it seems to be what the programs are recommending. But I would take the time to do ported as well, then again, box size may be a factor unless I stuff 3 in that 10
cu ft box versus 4.
I'm pretty excited about getting these things going!
Define a Function
November 12th 2005, 06:21 PM #1
I was trying to generalize the harmonic function: define H(n) = 1 + 1/2 + 1/3 + ... + 1/n
for integral values of n. Now how can H(x) be defined in such a way that it will be continuous and H(x) = H(n) for integral values of x? This is analogous to the Gamma function as a generalization of the factorial.
Thus given:
H(x) is continuous for x ≥ 1.
Find a possible H.
A linear interpolant (or spline interpolant) will do. To find something special, you probably have to impose extra conditions, such as logarithmic convexity in the case of the Gamma function.
Something based on the logarithm might work; for example, it is known that lim(H(n)-ln(n),n->infinity) is a constant, called the Euler-Mascheroni constant and denoted by gamma.
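Something based on the logarithm does in fact work: the standard interpolation is H(x) = ψ(x+1) + γ, where ψ is the digamma function. Equivalently (a stdlib-only sketch I'm adding for illustration, not something from the thread), one can use the series H(x) = Σ_{k≥1} x/(k(k+x)), which converges for x > -1 and telescopes to 1 + 1/2 + … + 1/n at positive integers:

```python
def H(x: float, terms: int = 200_000) -> float:
    """Continuous extension of the harmonic numbers via the series
    H(x) = sum_{k>=1} x / (k * (k + x)), valid for x > -1.
    Truncated after `terms` terms, so the error is roughly x / terms."""
    return sum(x / (k * (k + x)) for k in range(1, terms + 1))

# At a positive integer n the series telescopes to 1 + 1/2 + ... + 1/n:
harmonic_5 = 1 + 1/2 + 1/3 + 1/4 + 1/5
print(abs(H(5) - harmonic_5) < 1e-4)  # True
```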
{"url":"http://mathhelpforum.com/calculus/1282-define-function.html","timestamp":"2014-04-17T14:10:08Z","content_type":null,"content_length":"35354","record_id":"<urn:uuid:e4f12354-d2aa-4949-819f-acfb9e2e1b93>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
Question: How do we calculate the quantity of cement, sand and aggregates in 1 m3 of M30 grade concrete?
Submitted by: Nsewani
Answer #1 (Ratul) — 159 up / 109 down:
1:1:2
Answer #2 (Sjtbehera, Hari Machines) — 133 up / 192 down:
In the case of M30 concrete we calculate by design mix, but in the case of a nominal mix we can use 1:1:2.
Calculating cement, sand and coarse aggregate:
total ratio = 1 + 1 + 2 = 4
cement = 15.4/4 = 3.85 cum = 11 bags
sand = 3.85 × 1 = 3.85 cum
chips = 3.85 × 2 = 7.7 cum
Note: why do we divide 15.4? It is difficult to assess exactly the amount of each material required to produce 1 cum of wet concrete when deposited in place. To find the volumes of cement, sand and coarse aggregate, divide the numerical factor 15.4 (variable up to 15.7 according to proportioning and water-cement ratio) by the sum of the proportions of the ingredients used, and then multiply the result thus obtained by their respective proportions.
Answer #3 (Sachdeva D.P.) — 115 up / 47 down:
M30: cement = 1/4 × 1.54 = 0.385 cum; 0.385 × 28.8 = 11 bags
sand = 0.385 × 1 = 0.385 cum
aggregate = 0.385 × 2 = 0.77 cum
Note: why do we multiply by 1.54? Dry density = 1.54 × wet density in concrete work.
Answer #4 (Sateesh Patil) — 46 up / 150 down:
If you consider M25 grade concrete, the nominal mix design is 1:1:2 (1 cement : 1 sand : 2 aggregates).
Volume of dry mix = 1.5 × volume of wet mix.
If you consider 1 cum of concrete:
Cement = 1/5 × 1.5(1) = 0.3 cum (1 bag of cement = 50 kg = 1.235 cft = 0.035 cum)
Then 0.3/0.035 = 8.57 bags = 428 kg
Sand = 1/5 × 1.5(1) = 0.3 cum = 10.584 cft
Aggregates = 2/5 × 1.5(1) = 0.6 cum = 21.168 cft
Answer #5 (Randeepchaudhary) — 38 up / 65 down:
If we consider concrete grade M20 (1:2:3):
1 + 2 + 3 = 6
wet volume = 1.58 cum
cement = 1/6 × 1.58 = 0.26 cum
volume of one bag of cement = 0.033 cum
total bags of cement for one cum of concrete = 0.26/0.033 = 7.87 bags
7.87 × 50 kg = 393 kg
Answer #6 (Rajarshi Basu) — 68 up / 21 down:
The approach by Sjtbehera is basically correct but has some little silly mistakes; I have tried to give a rectified version.
In the case of an M30 design mix, the quantities of cement, sand and aggregates cannot be calculated directly, as they are variable quantities. It depends upon the mix designer how much cement, sand and aggregate he will employ; his only intention is to design a mix whose specified characteristic compressive strength at 28 days = 30 N/sq.mm.
But in the case of a nominal mix we can use 1:1:2.
Let us consider a volume of 10 cu. m (wet concrete). It is difficult to assess exactly the amount of each material required to produce 10 cu. m of wet concrete when deposited in place. Hence, to convert the wet volume into dry volume, increase by 54% to account for shrinkage and wastage; thus it becomes 15.4 cu. m (variable up to 15.70).
Calculating cement, sand and aggregates in the mix:
Total summation of proportions = 1 + 1 + 2 = 4
Cement = 15.4/4 = 3.85 cu. m = 3.85/0.0347 = 110 bags
Sand = 15.4/4 = 3.85 cu. m
Aggregates (20 mm to 6 mm) = 15.4 × (2/4) = 7.70 cu. m
Hope you have understood, and the mistake is now clarified!
Answer #7 (Rahul Mehta, J.E.) — 40 up / 31 down:
Take a ratio of 1:1.5:3.
Now 1 + 1.5 + 3 = 5.5.
Now divide the wet volume, 1.52, by this: 1.52/5.5 = 0.276.
Now multiply this by 30, because 1 cum of cement contains about 30 bags dry:
0.276 × 30 = 8.29 bags, or 414 kg of cement.
Answer #8 (Subilash S., civil engineer) — 33 up / 25 down:
For designing 1 m3 of concrete of M25 mix, the ratio is 1:1:2.
For calculating cement, sand and coarse aggregate:
cement = 1/(1+1+2) = 0.25
density of cement = 1440 kg/m3
cement required for this mix = 1440 × 0.25 = 360 kg = 7 bags + 10 kg
coarse aggregate = 1 − 0.5 = 0.5 m3
This method of designing is known as volume batching; this method can be used to design mixes up to M25.
Answer #9 (Samsulhoque, Jadavpur University) — 19 up / 18 down:
For M30 grade we have to calculate the quantity of all ingredients of the concrete by design mix as per IS 10262-2009; it depends upon whether an admixture is used or not.
Answer #10 (Ranga) — 160 up / 27 down:
Take the ratio 1:1.5:3:
cement = 1, sand = 1.5, metal = 3
So the total of the mixing proportions is 1 + 1.5 + 3 = 5.5.
The dry volume for 1 cum of wet cement concrete is 1.54 to 1.57.
The unit weight of cement is 1440 kg/cum, so one bag of cement = 50/1440 = 0.034722 cum.
Solution:
Cement = 1.54/5.5 = 0.28 cum
= 0.28/0.0347 = 8.069 (8 bags)
One bag of cement = 50 kg, so 8.069 × 50 kg = 403.45 kg.
Sand = 0.28 × 1.5 = 0.42 cum (1.5 is the mixing proportion)
Metal = 0.28 × 3 = 0.84 cum (3 is the mixing proportion)
For 1 cum of cement concrete at the 1:1.5:3 ratio:
cement = 403.45 kg
sand = 0.42 cum
metal = 0.84 cum
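The volume-batching recipe repeated in several of the answers above can be collected into a short script. This is a sketch using the figures quoted in the answers (dry-volume factor 1.54, cement bulk density 1440 kg/m3, 50 kg bags); an actual job should use design-mix values per IS 10262:

```python
def nominal_mix_quantities(ratio, wet_volume=1.0, dry_factor=1.54):
    """Volume batching for a nominal mix, following the method in Answer #10.

    ratio:      (cement, sand, aggregate) mix proportions, e.g. (1, 1.5, 3)
    wet_volume: finished concrete volume in m^3
    Returns (cement bags of 50 kg, sand m^3, aggregate m^3).
    """
    bag_volume = 50 / 1440          # volume of one 50 kg bag of cement, m^3
    dry = wet_volume * dry_factor   # allow for shrinkage and voids
    total = sum(ratio)
    cement_m3 = dry * ratio[0] / total
    return (cement_m3 / bag_volume,   # bags of cement
            dry * ratio[1] / total,   # sand, m^3
            dry * ratio[2] / total)   # aggregate, m^3

bags, sand, agg = nominal_mix_quantities((1, 1.5, 3))
print(round(bags, 2), round(sand, 2), round(agg, 2))  # 8.06 0.42 0.84
```

This reproduces Answer #10's figures: about 8 bags (403 kg) of cement, 0.42 cum of sand and 0.84 cum of aggregate per cum of 1:1.5:3 concrete.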
{"url":"http://www.allinterview.com/showanswers/95351.html","timestamp":"2014-04-18T08:07:00Z","content_type":null,"content_length":"51961","record_id":"<urn:uuid:415b45b8-ff22-4159-bba9-3ebc844b0cf4>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
Number Bunching
On January 4, 2012, as part of an MAA special session on the Mathematics of Sudoku and other Logic Puzzles at the Joint Mathematical Meetings in Boston, I gave a talk about critical sets in Futoshiki
squares. I wouldn’t say I generated any groundbreaking results, but I showed that a new problem has some interesting aspects to it, and if it gets people thinking more about puzzle-related
combinatorics, mission accomplished.
Slides (plus a couple of bonus puzzles I prepared for the session) can be found on this page under “The Mathematics of Puzzles.” I’m very open to talking about this and related problems, so do get in
touch with me if the slides inspire anything.
Note: If you happen upon this entry after July 2012 and the link above doesn’t lead anywhere, chances are I’ve completed my move to Brown but haven’t updated old entries. If that’s the case, look for
me on the Brown website, and the slides are likely linked from my web page there. (If you happen upon this entry after July 2112, greetings from the 21st century! Sorry about that whole global
warming thing. We tried our best, but fossil fuels are just so convenient…)
Strange days indeed.
The genesis of this blog was an effort to reinvigorate my research program. I have always considered myself a teacher above a researcher, but I have also been holding on to a tenure track position
for three years, and in such a position, even at a teaching-focused liberal arts school like Guilford, there is an expectation to produce measurable research progress. So I attempted to post here
more often and try to get my ideas in writing, in the hopes that it would keep me motivated and help draw collaborators.
Except after a few months, the blog petered out. And a few things of note happened.
1) After a rather unproductive research summer, I found myself doing an about-face and deciding that at this phase of my career, making independent research a focus was not an efficient use of my skills, and thus I applied to a number of non-tenure-track teaching positions.
2) Laura Taalman and Jason Rosenhouse, in coordination with the release of their excellent (and reasonably priced!) book Taking Sudoku Seriously, organized a session on Sudoku and other pencil
puzzles at the Joint Meetings. Given my extracurricular interests, I felt that if there was ever a Joint Meetings session at which I should give a talk, it was this one.
3) While I was waiting to hear from schools I applied to at the end of Item 1, Brown University decided to institute a brand new Lecturer position, which I was offered and accepted for Fall 2012. The
aim of this position, among other things, is to add consistency, stability, and vision to the undergraduate calculus program. In addition to teaching two courses a semester, I will be organizing and
supporting up to seven calculus courses across the curriculum.
All of this leaves me in certainly an exciting position, but a slightly awkward one when it comes to research… I have now obtained a position where research is not a requirement, which was my goal,
but ironically, having prepared my talk for Boston, I’m probably more enthusiastic about research than I have been in some time! But on the other hand, I haven’t really done much actively so far this
year, so it’s not as if that enthusiasm is leading toward a paper. But I have a number of people who have expressed interest in collaborating and who have shared ideas, and I find research much
easier to approach when it’s not actively being demanded. I’ll probably have a lot less time for research in the new job, but when I do have time, I may be more inclined to work on it than I am now.
The thrust of all of this is that I can’t guarantee this blog will be updated frequently. Certainly when I have research ideas/progress that I think is worth sharing, I’ll drop by and post, but those
moments may be few and far between. Really the main reason I showed up now was to post my slides from the Boston talk (see next post), but I figured I should say something about where I’ve been for
6+ months. So after that post, I may be back in less than 6+ months. Or maybe not. Time will tell.
Also, I think I might write a book. But perhaps that’s getting ahead of myself…
Naturally, immediately after posting about who I’m trying to reach with this blog, I stopped trying to reach them for over a month. I blame a complex combination of the statistics course I’m
teaching, the National Puzzlers’ League convention, and the lethargy-inducing summer heat for my posting hiatus.
I’ve said this is intended to be primarily a research blog, but with my course coming to an end, teaching is more on my mind (as it often is), so I figure I’ll bring up a pedagogical issue. This
summer, I taught Guilford’s Elementary Statistics course for the fourth time. The first time I taught it, I was not in love with the textbook we used (Brase & Brase), and I spearheaded a shift to one
of several texts available by Sullivan. This summer, I continued to use Sullivan, but I changed the source of my homework assignments.
Previously I assigned odd-numbered exercises as uncollected practice problems, and even-numbered exercises as homework to be turned in. (Convincing students to complete the uncollected problems is a
battle worth discussing in its own future post.) For this course, I still used practice problems from the text, but I started writing my own problem sets to replace the even-numbered problems. There
are pros and cons to this approach, some of which I expected, and some of which caught me by surprise.
Flexibility: When choosing assignments from the book, I often need to comb the exercises carefully to make sure I have a representative sample of the types of obstacles I want my students to be able
to navigate (For example, making sure to include a certain number of two-tailed tests, intervals using all different test statistics, probability questions that involve independent and conditional
events, and so forth). Some of these then need to be trimmed to create assignments that reflect the concepts I want to teach (do problem #34, but you may ignore this instruction which applies to a
concept we had to skip due to time constraints). When writing my own assignments, I can create problems that deal with exactly what I want to cover, no more, no less.
Compatibility with Exams: I have, in the past, received complaints that the style of problems I pose in my exams is different than the problems students have practiced from the book. This argument
arises in most of my classes, but in stats more than others, where the problems tend to be wordy and the language is flexible. This semester, since the homework assignments were also written in my
style, students no longer expressed frustration about the “language gap” on the exams (although a few complained about the differences in wording between practice problems from the book and the
problem sets).
Improved Performance?: The average homework grade in this iteration of the course was higher (by about 3 percent) than in either of the past two semesters. However, there are too many lurking
variables to assume the assignments are responsible: the current course is taught in the summer when many students have fewer distractions, we meet twice a week rather than once a week in the fall/
spring evening sections, and of course, every class has a different population of students. But while the grade increase can’t automatically be attributed to the different style of homework, at least
it didn’t have an obvious negative effect.
Academic Honesty: While Guilford has an honor code, it should surprise no one to hear that it isn’t always followed to the letter. I’ve taught at two schools, and at both I’ve had at least one
incident in which a student got their hands on the textbook solutions, which has left me in a paranoid state where every time a student does well, I have to prove to myself that they’re not copying
from an external source. Assigning problems that have not previously seen the light of day very effectively stops the paranoia, although to sustain this benefit, the assignments would need to be
heavily altered from semester to semester.
Time: The obvious disadvantage to assembling these assignments is that it takes a lot of time to create, typeset, and distribute the problems. This was not hard to fit in this summer when I was teaching
one class, but as my three-course load for the fall approaches, trying to prep original handouts for all three courses is daunting. It would presumably be easier to update these assignments once I’ve
taught for a long time and I've developed a library of problems. But as noted above, keeping the assignments fresh is important if one wants to ensure the solutions aren't floating around the student body.
Lack of Flexibility: Yes, I know this appears to contradict my pro above. I have a tendency to try to plan in advance as much as possible, so that I have material planned on a week-to-week basis as
the course begins, and homework selected a few weeks in advance. Of course, sometimes things don’t go quite according to schedule, and some adaptation is needed… When I was assigning problem numbers
from the book, it was trivial to bring up the website and move Problem 48 from Week 5 to Week 6. When I show up for class with a handout already, that’s a bit more etched in stone. This problem
solves itself if I’m willing to do less advance planning, but since that’s something I find comfortable, this counts as a con for me.
Distance from Textbook: I already had some issues with students not doing the practice problems (which is fine with me if they don’t need to in order to learn, but most students in the course do),
and those incidents seemed to increase this semester, possibly because having the collected problems originate from an entirely different source left some students thinking they didn't need to open
the textbook to succeed. I frequently tried to dissuade them of this notion, but it’s still worth mentioning.
So what to do going forward? (Now that I’ve spent enough time on this entry that I could have already written an assignment?) My fall courses are Statistics, Calculus I, and Multivariable. My current
plan is to keep using original homework for stats, and to use book problems for Multivariable, since interesting problems in that course are harder to write, and I have fewer academic dishonesty
issues in that course. I’m still very much on the fence about Calculus I. Feel free to push me toward either side.
I’m still working on finding a voice for this blog, and I find myself repeatedly asking the same question when writing: Who is this blog for? One can ask that question in terms of mathematical
maturity, area of expertise, or simply why they might be reading this.
Well, one member of the target audience is me, in the sense that when writing about my own research ideas and questions, putting them in writing (especially in a public forum where there’s some
accountability) helps give me focus. Reading the posts back later is often useful as well. But it becomes something of a useless exercise if I don’t have readers, and so I have to pick a certain
level to pitch the content at. Some types of readers may have looked at some of my previous entries and stumbled upon terms they didn’t understand… others might have been mildly disgusted that I took
the time to (not very rigorously) define a Cayley graph. So let me say a little about where I’m aiming.
It’s important to me to make this blog accessible to people who are not Ph.D-level researchers, and certainly not necessarily in my area. As a dedicated undergraduate teacher, and someone who would
like to direct more undergraduate research, I want to be able to send students here so that they can initiate conversations with me about posts they find interesting. If at some point I find myself
on the job market, I’d like this to be a place where search committees can get more information about me, and I don’t assume those committee members are number theorists. And thinking in a very
general sense, if something is available to everyone on the internet, I like the idea that a random netizen might stumble upon it and be able to read it. So to reach a wide audience, I try to explain
things in simpler terms than one would see in the literature. It’s sort of a “popular science” voice without the “popular.”
When I summarize papers and talks, I intentionally don’t give details of the results. This is simply because I don’t think it’s my place to do so, since the findings aren’t mine. I do try to give a
feel for the flavor of the arguments, and enough information so that if a reader is interested, they can figure out where to go. On the other hand, when discussing my own work, I tend to be more
specific, since I have both the knowledge and the authority (using that term pretty loosely) to do so. If I’m talking about something I’m working on, it means that I’m open to feedback and/or
collaboration, so if anything piques your interest, don’t hesitate to comment on it or contact me privately.
As I try to pitch things to this rather broad target audience, I hope people won’t make judgments based on my choice of tone; if, for example, I take more space than strictly necessary to explain a
concept, it doesn’t mean that I would require that sort of explanation to grasp it myself. Being a researcher and being a teacher are both complex and difficult arts; just as I have to juggle both as
an academic, I have to juggle both when writing here. I hope that if the result is not at the optimal level for you as a reader, you’ll bear with me.
I’ve recently (okay, longer than recently) wondered whether my interest in combinatorial number theory is actually an interest in combinatorics, filtered by a somewhat stubborn background in number
theory. To put it differently, growing up I studied a large quantity of number theory and very little combinatorics, and now I find myself quite interested in both. The fact that combinatorics hooked
me with a lot less exposure might mean that that’s what I should really be doing…
Anyway, with this in mind, I’ve been making a point to scan the titles regularly on math.CO, which feels like wandering through an unfamiliar neighborhood within your city. I find a lot of the
abstracts intriguing, though my repertoire of theorems and terminology is a bit lacking. Notably, however, my most recent pass turned up this paper on sum-product estimates. How in the world is this
not cross-listed on math.NT?
A latin square is an n x n arrangement of numbers from 1 to n such that every row and column contains each of the n symbols exactly once. A gerechte framework is a partition of the n x n square into
n sets of n cells. A gerechte design for such a framework is a Latin square in which each set in the partition also contains each of the n symbols once. A multiple gerechte framework (and design) is
one in which multiple partitions are specified (and satisfied).
The name “gerechte design” was coined by W.U. Behrens, with “gerechte” meaning “fair” in German. Note that if n=9 and the gerechte framework is nine 3×3 subsquares, a gerechte design is a valid
Sudoku configuration.
In an attempt to yank things closer to the number theory world, I propose an additional object: A sum-gerechte design is one in which the sets in the framework do not necessarily contain all symbols
from 1 to n, but the sum of the symbols must be n(n+1)/2. Note that unlike in the case of gerechte designs, this setting requires the symbols to actually be the integers from 1 to n; in the first
paragraph, we could have used any alphabet with n letters without changing the fundamental nature of the definitions.
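To make the definitions concrete, here is a small checker — my own illustration, not code from any of the cited papers. A framework is encoded by labeling each cell with the index of its set, and the sum-gerechte variant relaxes "contains each symbol" to "sums to n(n+1)/2":

```python
def is_gerechte_design(square, framework):
    """True if `square` is a Latin square on symbols 1..n that also gives
    each framework set every symbol exactly once (a gerechte design)."""
    n = len(square)
    symbols = set(range(1, n + 1))
    rows_ok = all(set(row) == symbols for row in square)
    cols_ok = all({square[r][c] for r in range(n)} == symbols for c in range(n))
    sets = {}
    for r in range(n):
        for c in range(n):
            sets.setdefault(framework[r][c], set()).add(square[r][c])
    return rows_ok and cols_ok and all(s == symbols for s in sets.values())

def is_sum_gerechte_design(square, framework):
    """Rows/columns Latin, but each framework set only needs to SUM to n(n+1)/2."""
    n = len(square)
    symbols = set(range(1, n + 1))
    if not all(set(row) == symbols for row in square):
        return False
    if not all({square[r][c] for r in range(n)} == symbols for c in range(n)):
        return False
    totals = {}
    for r in range(n):
        for c in range(n):
            totals[framework[r][c]] = totals.get(framework[r][c], 0) + square[r][c]
    return all(t == n * (n + 1) // 2 for t in totals.values())

# A 4x4 framework of four 2x2 subsquares (the "mini-Sudoku" case):
framework = [[0, 0, 1, 1],
             [0, 0, 1, 1],
             [2, 2, 3, 3],
             [2, 2, 3, 3]]
square = [[1, 2, 3, 4],
          [3, 4, 1, 2],
          [2, 1, 4, 3],
          [4, 3, 2, 1]]
print(is_gerechte_design(square, framework))      # True
print(is_sum_gerechte_design(square, framework))  # True (gerechte implies sum-gerechte)
```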
Some of what’s known: E.R. Vaughan showed that determining whether a gerechte framework admits a gerechte design is NP-complete. Vaughan and Courtiel proved that a framework
consisting entirely of rectangles of equal size (though not necessarily oriented the same way) always admits a gerechte design. And as I mentioned in an earlier post, R.A. Bailey, Peter Cameron, and
Robert Connelly fully characterized a multiple gerechte design which they called “symmetric Sudoku.” The specific case of Sudoku has been studied by various mathematicians due to its popularity, with
many brute-force approaches yielding numerical data, but with many questions remaining open.
Here are some questions I’ve come up with, some of which I’ve thought about, some of which are off the top of my head. If you have insights into them, have seen work on them elsewhere, or you’d
simply like to discuss the questions further, don’t hesitate to get in touch with me.
1(a). Given a fixed n, for what numbers m is there an n x n gerechte framework that admits exactly m gerechte designs?
1(b). Given a fixed n, for what numbers m is there an n x n gerechte framework that admits exactly m sum-gerechte designs?
1(c). Given a fixed n, for what numbers m is there an n x n gerechte framework that admits exactly m sum-gerechte designs but no gerechte designs?
1(a) was the first question I asked, in an earlier post on this blog. My by-hand (incomplete) evaluation of 4×4 gerechte designs revealed frameworks admitting 0, 1, 4, 6, 8, 12, and 24 colorings, all
of which are factors of the number of possible Latin squares; however, for n = 5 there is at least one framework admitting 20 designs, which is not a factor of the number of Latin squares. I’ve
downloaded GAP and the GRAPE and DESIGN packages to try to analyze cases more efficiently, but I haven't figured out how to use them yet. (And my lack of a UNIX system may be limiting for that.)
A Bailey-Kunert-Martin paper from 1991 looks at randomization amongst gerechte designs, which should relate to the number of available designs; however, the paper almost exclusively deals with
rectangular frameworks, and the approach doesn’t seem to generalize well.
As for 1(c), for n = 5, there is always at least one framework that admits a sum-gerechte design but not a gerechte design; in fact, one can force this to be the case using only three of the rows,
and the rest of the framework is unconstrained (but will affect the number of sum-gerechte designs).
2. If each of the sets in a gerechte framework is contiguous (that is, it is an unbroken collection of adjacent cells), does the framework necessarily admit a gerechte design?
Vaughan also asks this question in the final section of his paper, The complexity of constructing gerechte designs. I had conjectured that the answer was yes, and hoped there might be an inductive method to build a design in this situation; however, I spoke to Vaughan, and he found a simple 6×6 framework with contiguous regions that cannot be filled. He is still, however, interested in the question of whether the decidability problem is NP-complete when restricting to frameworks of contiguous regions.
3. Given a “Latin gerechte framework” (one in which no two cells in the same set are in the same row or column), what can we say about the set of sum-gerechte designs?
This is a generalization of the traditionally studied area of orthogonal latin squares, which are gerechte designs from Latin frameworks. For example, when n=6, no such gerechte designs exist, but
perhaps there are sum-gerechte designs.
4. In the Vaughan-Courtiel paper, the authors quote a result by Hilton that says an (S,T,U)-outline Latin square can always be lifted to a Latin square. Is there a reliable way to determine how
many such Latin squares amalgamate to a given (S,T,U)-outline Latin square?
An (S,T,U)-outline Latin square essentially involves condensing the rows, columns, and/or alphabet into fewer than n classes, and then placing multiple symbols per cell to compensate. The Hilton
paper is from 1987, and I haven’t actually checked to see whether he considers the counting problem. It’s possible this question has already been answered.
5(a). What multiple gerechte frameworks admit gerechte designs? Unique gerechte designs? Sum-gerechte designs?
5(b). In the gerechte case, each set has to contain {1,…,n}, and in the sum-gerechte case each set can contain any set of numbers that sum to n(n+1)/2. Suppose we specify another set of acceptable
n-tuples. Are there constraints that lead to interesting problems?
5(a) and 5(b) are both extremely general, perhaps too much so to yield any interesting results. But I figured they were worth cataloguing here.
My plan to post blog entries from my hotel room in New York didn’t work out as intended, mainly because I brought my iPad instead of my laptop. While the tablet makes a good substitute for a computer
in a lot of ways, I'm still not comfortable writing long documents on it. It's now a few days later, and I'm going to do my best to translate my massive sheaves of notes into some comments about the
workshop. I was originally going to try to say something about all of the talks, but to maximize the possibility that this post actually gets finished, I’m going to narrow it down to some of the
pieces I found most interesting/accessible.
In my last post, I talked about reading some of Giorgis Petridis‘s papers so that I would be able to follow parts 3 and 4 of his series of his talks; the material he presented was almost exclusively
from those papers, so I don't have much to mention contentwise that I didn't already say. I will add that Giorgis was a very eloquent speaker, and I enjoyed speaking with him throughout the workshop.
Both of the days I attended started off with talks involving factorization theory (which is discussed in Chapter 1 of Geroldinger/Ruzsa’s Combinatorial Number Theory and Additive Group Theory, but
ironically I skipped to Chapter 2). One of the main ideas in this area is generalizing the concept of a unique factorization domain to deal with possible factorization lengths. For example, if E is
the set of all even numbers, E doesn’t have unique factorization, but any two factorizations of the same element into irreducibles results in the same number of irreducibles. Other rings don’t have
this property, and one can look at the elasticity of elements (largest factorization length divided by smallest) and of the ring (supremum over all element elasticities).
Scott Chapman talked about block monoids, algebraic objects that arise from studying factorization in number fields. Studying the structure of these objects allows one to determine the Davenport
constant, which provides a route into the elasticity of the ring of integers in the corresponding number field. On Friday, Paul Baginski presented some results on multiplicatively closed arithmetic
progressions, like the even-numbers example above. He also defined the terms accepted elasticity (the supremum occurs as the elasticity of an element) and full elasticity (every rational number below
the supremum occurs). Accepted elasticity obviously fails if the elasticity is infinite or irrational, but interestingly, Paul mentioned an example, {4+6x}, in which the elasticity is 2 but it is not accepted.
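In the even-numbers example, the irreducibles of (2ℕ, ×) are exactly the numbers 2m with m odd, so every factorization of n into irreducibles has length v₂(n) — the monoid is half-factorial, i.e. has elasticity 1. A brute-force check of this (my own illustration, not from the talks):

```python
def factorizations(n, min_factor=2):
    """All nondecreasing tuples of irreducibles of the even-number
    monoid (2N, *) whose product is n.

    An even number is irreducible here iff it is not a product of two
    even numbers, i.e. iff it equals 2 * (odd)."""
    def irreducible(k):
        return k % 2 == 0 and (k // 2) % 2 == 1

    if irreducible(n):
        yield (n,)
    for d in range(min_factor, int(n ** 0.5) + 1):
        if n % d == 0 and irreducible(d):
            for rest in factorizations(n // d, d):
                yield (d,) + rest

# 72 = 2^3 * 9: every factorization uses exactly v_2(72) = 3 irreducibles.
facs = sorted(set(factorizations(72)))
print(facs)                    # [(2, 2, 18), (2, 6, 6)]
print({len(f) for f in facs})  # {3}
```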
Alex Kontorovich presented some material on a conjecture by Zaremba that has come up in his work on Apollonian circle packing with Jean Bourgain. The initial question is whether, given a prime d and
a primitive root b (mod d), the points $\left( \left\{\frac{b^n}{d} \right\}, \left\{\frac{b^{n+1}}{d} \right\} \right)$ are equidistributed on [0,1]^2. One might expect the answer to be yes, but
Alex showed two sets of data where the answer was yes in one case, and very much no in another. As it turns out, there is a bound for the discrepancy which is based on the largest number that appears
in the continued fraction expansion of (b/d).
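This setup is easy to play with numerically. The sketch below (my own illustration, not Alex’s code) finds a primitive root of a prime d by brute force and generates the pairs ({b^n/d}, {b^(n+1)/d}); plotting them, or comparing box counts against the uniform expectation, shows how evenly (or unevenly) they fill the unit square for different choices of d and b:

```python
def primitive_root(p):
    # smallest primitive root of the prime p, by brute force
    for g in range(2, p):
        if len({pow(g, k, p) for k in range(1, p)}) == p - 1:
            return g

def pair_points(b, d):
    # the points ({b^n/d}, {b^(n+1)/d}) for n = 1, ..., d-1
    return [(pow(b, n, d) / d, pow(b, n + 1, d) / d) for n in range(1, d)]

d = 101
b = primitive_root(d)
pts = pair_points(b, d)
# crude equidistribution check: fraction of points in [0, 1/2) x [0, 1/2)
frac = sum(1 for x, y in pts if x < 0.5 and y < 0.5) / len(pts)
print(b, round(frac, 3))  # a uniform distribution would give about 0.25
```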
This means it’s of interest to know which denominators occur among fractions whose continued fraction expansions have bounded entries. Zaremba’s conjecture states that once you raise the bound to A=5,
any natural number can occur as a denominator. It is known that A=4 isn’t sufficient, as no fraction with denominator 54 has a continued fraction expansion with all entries at most 4. Alex went
on to share a conjecture by Hensley that ties the bound to Hausdorff dimension of the set of fractions with bounded expansions, a counterexample to that conjecture, and some new bounds involving
Hausdorff dimension that do work.
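The denominator-54 claim can be checked directly. In the short Python sketch below (my own check, using the standard Euclidean expansion), cf computes the continued fraction of b/d, and best_bound finds the smallest achievable bound on the partial quotients over all numerators coprime to d:

```python
from math import gcd

def cf(b, d):
    # continued fraction expansion [a0; a1, a2, ...] of b/d
    # via the Euclidean algorithm
    a = []
    while d:
        a.append(b // d)
        b, d = d, b % d
    return a

def best_bound(d):
    # smallest possible maximum partial quotient of b/d
    # over all numerators b coprime to d
    return min(max(cf(b, d)[1:]) for b in range(1, d) if gcd(b, d) == 1)

print(cf(17, 54))      # [0, 3, 5, 1, 2] -- every entry is at most 5
print(best_bound(54))  # 5: no numerator does better, so A=4 fails for 54
```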
Steven J Miller asked me not to go into detail about his results in this blog, as a lot of his current research is done with undergraduates, and he didn’t want me to cause anyone to beat his students
to the punch. Suffice it to say that Steve and his students are continuing to find interesting results about MSTD (More Sums Than Differences) sets. These are sets in which, contrary to expectation,
the difference set A – A is smaller than the sumset A + A. Most of the initial results about these sets were probabilistic in nature, as one can fix the extreme portions of the sets and then show
that a positive proportion of the possible choices for the middle result in an MSTD set. But the new progress involves construction of explicit sets, and some exploration of (kA+kA) vs. (kA-kA).
In memory of the recent passing of Yahya Ould Hamidoune, Mel Nathanson (the conference host) gave a series of talks discussing Hamidoune’s graph-theoretic approaches to additive combinatorics
questions. The approach involves Cayley graphs (graphs constructed from a group G, with edges determined by pairs differing by elements of a subset S). Vertex sets that are sufficiently large,
sufficiently separated from other large vertex sets, and minimal with respect to these constraints are known as k-atoms, and these strong requirements yield useful theorems that can be applied to
them. In the final lecture, Mel used these theorems to prove the Cauchy-Davenport theorem for sumsets mod p. There is a more detailed treatment of Hamidoune’s work over at Terry Tao’s
blog in this entry.
These are some of the main talks I wanted to mention… I may add some more later, but for now I’m going to cut myself off.
The Great Gun Fight
A REASON ONLINE debate
More guns mean less crime. That's the essential thesis of John R. Lott Jr.'s path-breaking book, appropriately titled More Guns, Less Crime: Understanding Crime and Gun Control Laws (University of
Chicago Press, 2000), which looked at the relationship between liberalized gun laws and criminal activity. In both the original 1998 and revised 2000 editions, Lott, a senior research scholar at Yale
Law School, used national gun and crime data to perform an unprecedentedly thorough study of the issue. On the face of it, his claim makes sense: If criminals assume that potential victims may be
armed, they'll be less likely to act. (See "Cold Comfort," January 2000.)
Not so fast, says George Mason University physicist Robert Ehrlich. In his new book, Nine Crazy Ideas in Science (A Few May Even Be True) (Princeton University Press), Ehrlich argues that the data
are in fact inconclusive and that Lott is massaging the results to fit his theory. Ehrlich, a gun owner himself, concludes that liberalized gun laws have had no appreciable effect one way or another.
So which is it? We invited Ehrlich and Lott to debate the issue on Reason Online from May 21-24. Each was allowed to make two contributions and, after the initial salvo, each had to respond within
hours of the other's posting. Readers interested in more information can visit the debate, which includes links to many of the sources mentioned below, including both Ehrlich's and Lott's books.
Robert Ehrlich
More Guns Mean More Guns
Why John Lott is wrong
John Lott's book, More Guns, Less Crime, contains many points with which I agree. For example, I believe that many criminals are leery of approaching potential victims who may be armed -- an idea at
the core of his deterrence theory that guns help to prevent crime. I also believe that violent criminals are not typical citizens, and that the possession of a gun by a law-abiding citizen is
unlikely to turn him into a crazed killer. Additionally, Lott has a point when he speaks of the media's overreporting of gun violence by and against kids and the corresponding underreporting of the
defensive use of guns to prevent crime.
As a gun owner myself, I was quite prepared to accept Lott's thesis that the positive deterrent effect of guns exceeds their harmful effects on society, but as a scientist I have to be guided by what
the data actually show, and Lott simply hasn't made his case. Here's why:
Lott misrepresents the data. His main argument that guns reduce crime is based on the impact on various violent crime rates of "concealed carry laws," which allow legal gun owners to carry concealed
weapons. Since these laws were passed at different dates in different states, he looks at how the crime rates change at t=0, the date of the law's passage in each state. Lott's book displays a series
of very impressive-looking graphs that show dramatic and in some cases immediate drops in every category of violent crime at time t=0. The impact on robberies is particularly impressive, where a
steeply rising robbery rate suddenly turns into a steeply falling rate right at t=0 -- almost like the two sides of a church steeple. As they say, when something looks too good to be true, it
probably is. Lott neglects to tell the reader that all his plots are not the actual FBI data (downloadable from their Web site), but merely his fits to the data.
The actual data are much more irregular with lots of ups and downs, and they show nothing special happening at time t=0. Lott has used the data from 10 states in his book. When we look at changes in
the robbery rate state by state, only two of the states (West Virginia and Georgia) show decreases at t=0, while the other eight show increases. Overall, averaging the 10 states, there is a small but
not statistically significant increase in the robbery rate at t=0, certainly not the dramatic decrease Lott's fits show. In fact, Lott's method of doing his fits is virtually guaranteed to produce an
"interesting" result at time t=0. What he does is to fit a smooth curve (actually a parabola) to the data earlier than t=0, and a separate curve to the data later than t=0.
Given a completely random set of data, Lott's fitting procedure is virtually guaranteed to yield either a drop or a rise near time t=0. Only if the data just happened to lie on a single parabola on
both sides of t=0 would the fits show nothing special at that time. Since random data would show a drop or a rise equally often at t=0, we have a 50 percent chance of finding a drop -- not a very
good argument for the drop being real. The fact that all categories of violent crime (murder, rape, assault, robbery) show drops is also not particularly surprising, since the causes of violent crime
(whatever they are) probably affect the rates in all the separate categories. Similarly, it is no more mysterious that when the overall stock market rises or falls dramatically the individual sectors
(industrials, utilities, etc.) are more likely than not to move in the same direction.
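Ehrlich's point about the two-parabola fit can be demonstrated by simulation. The sketch below (my own illustration, not code from either book) generates random-walk "crime rate" series with no break at all, fits separate parabolas before and after an arbitrary break point, and counts how often the fitted curves show a drop there:

```python
import random

def fit_parabola(ts, ys):
    # least-squares fit y = c0 + c1*t + c2*t^2 via the 3x3 normal equations
    S = [sum(t ** k for t in ts) for k in range(5)]
    T = [sum(y * t ** k for t, y in zip(ts, ys)) for k in range(3)]
    M = [[S[i], S[i + 1], S[i + 2], T[i]] for i in range(3)]
    for col in range(3):  # Gauss-Jordan elimination with partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def jump_at_break(ys, brk):
    # difference between the post-break and pre-break fits, evaluated at the break
    ts = list(range(len(ys)))
    pre = fit_parabola(ts[:brk], ys[:brk])
    post = fit_parabola(ts[brk:], ys[brk:])
    val = lambda c, t: c[0] + c[1] * t + c[2] * t * t
    return val(post, brk) - val(pre, brk)

random.seed(0)
TRIALS = 500
drops = 0
for _ in range(TRIALS):
    y, ys = 0.0, []
    for _ in range(30):
        y += random.gauss(0, 1)  # random walk: no real break anywhere
        ys.append(y)
    if jump_at_break(ys, 15) < 0:
        drops += 1
print(drops / TRIALS)  # close to 1/2: pure noise "drops" at the break half the time
```

Whether a drop or a rise appears at the break is essentially a coin flip, which is the sense in which a fitted drop at t=0 is, by itself, weak evidence.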
Lott's results are not consistent. Taking Lott's fits at face value, we find they give inconsistent results. For example, he shows murders, rapes, and robberies each declining sharply and immediately
at t=0, the year of passage of the laws, but the aggravated assault rate rises slightly and doesn't start its descent until three years after the law's passage. Presumably, the same sorts of folks
are committing murders and assaults, so this difference is very puzzling. Similarly, Lott shows the rate of multiple public shootings declining dramatically (by 100 percent) only two years after t=0.
But using follow-up data in a more recent paper, Lott shows multiple shootings rising precipitously the year before t=0 and then declining right at t=0. It's difficult enough to understand why the
impact of the laws should be so much greater on multiple shootings by crazed killers than ordinary murders (which drop only 10 percent), but figuring out how the laws could work in reverse time on
the thinking of these psychos is a real challenge.
Lott's results cannot account for all the relevant variables. Recognizing that violent crime rates can depend on all sorts of factors aside from the passage of concealed carry laws, Lott includes
many variables when he runs his multiple linear regressions to disentangle the impact of each factor. Many of these variables, such as arrest rates, percentage of African Americans, and population
density, account for a far greater percentage of the variation in violent crime than the mere 1 percent he attributes to passage of the laws. However, with such a small dependence on the one factor
he is looking for, only if Lott has included all the relevant variables that could affect the rate of violent crime can he hope to see the residual amount due to the effect of that one factor. In
answer to this criticism, Lott says OK -- tell me what variable I've left out and I'll include it. But the list of plausible variables that could affect violent crime rates over time is virtually endless.
Here, for example, are 14 that Lott didn't include: (1) amount of alcohol sold, (2) price of alcohol, (3) amount of drugs sold, (4) price of drugs, (5) number of police on the beat, (6) number of
police brutality complaints, (7) average summer temperature, (8) number of convicted felons on the streets, (9) average age of convicted felons on the streets, (10) percentage of teenagers living in
two-parent households, (11) high school dropout rate, (12) dollars spent on crime prevention programs, (13) minimum wage rate, (14) amount of media violence. I'm sure readers could come up with many
more plausible factors, any one of which could mask the true dependence on the concealed carry laws.
Lott doesn't properly compute statistical significance. Another very serious problem with Lott's method is how he calculates the statistical significance of his results. He essentially asks, What is
the probability of getting the observed variation of the crime rate on either side of t=0 based on changes in the various socio-demographic variables and random variations? If that computed
probability is very small, he regards his hypothesis that the concealed carry laws made the difference as being proven.
But that's not right. He needs to look at the probability of a change in the crime rate for years t= -3, -2, -1, 0, 1, 2, 3, etc. Only if the probability is very much less for year zero than the
other years can he consider his results meaningful. It seems very likely, however, that Lott would find similarly low probabilities for all these other years, because only if the violent crime rate
were static over time would there be no significant variation on either side of year t=0, or any other given year. In fact, John Donahue, a law professor at Stanford, analyzed Lott's data and found
that the most significant turning point for the robbery rates occurs before t=0.
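Ehrlich's proposed check is essentially a placebo test: compute the same break-point statistic at years where no law changed and see whether t=0 stands out. Here is a minimal sketch of the idea (my own construction, using a simple Chow-style F statistic for a change in a fitted line, not the specification from either book):

```python
import random

def sse_line(ts, ys):
    # residual sum of squares of the least-squares line through (ts, ys)
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    stt = sum((t - mt) ** 2 for t in ts)
    sty = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    slope = sty / stt
    return sum((y - (my + slope * (t - mt))) ** 2 for t, y in zip(ts, ys))

def break_F(ts, ys, brk):
    # Chow-style F statistic: does allowing a separate line after t = brk
    # reduce the residual sum of squares by more than chance would?
    i = ts.index(brk)
    sse_split = sse_line(ts[:i], ys[:i]) + sse_line(ts[i:], ys[i:])
    sse_pool = sse_line(ts, ys)
    return ((sse_pool - sse_split) / 2) / (sse_split / (len(ts) - 4))

# a random walk with no true break: the "treatment" year rarely stands out
random.seed(1)
ts = list(range(-10, 11))
walk, ys = 0.0, []
for _ in ts:
    walk += random.gauss(0, 1)
    ys.append(walk)
for brk in (-3, -2, -1, 0, 1, 2, 3):
    print(brk, round(break_F(ts, ys, brk), 2))
```

On noisy trend data, several placebo years typically produce F statistics comparable to the one at t=0, which illustrates why Ehrlich wants the probability at t=0 to be much smaller than at the neighboring years before calling the break real.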
Lott has correctly observed that, by passing concealed carry laws in various states in various years, the U.S. has been in effect conducting an extremely interesting social experiment. That
experiment, in principle, can give us an empirical answer to the relationship between easing restrictions on gun-carrying permits and crime. However, his one-sided analysis of the data inspires
little confidence that we can count on him to tell us the true results of this experiment. From all indications it seems that the concealed carry laws probably have had almost no effect, one way or
the other.
John R. Lott Jr.
Taking Stock
Less gun control means less violent crime
Robert Ehrlich's review of the first edition of my book, More Guns, Less Crime, is well-written, and it is interesting to know that he owns a gun despite his concerns about research on the benefits
of doing so. Unfortunately, however, his discussion is incomplete and simply inaccurate. Below are responses to the more important claims he makes.
"Lott neglects to tell the reader that all his plots are not the actual FBI data...but merely his fits to the data." There are several places in my book that discuss how the diagrams show how crime
rates change before and after right-to-carry laws are adopted once other factors have been taken into account. It is important to distinguish not just whether there was a decline in crime rates, but
whether there was a decline relative to other states that did not adopt the right-to-carry laws. The second edition of More Guns, Less Crime, which was published in 2000, was also clear on this
point, and its graphs showed the changes in crime relative to other states that did not change their laws and were in the same region of the country.
"Lott has used the data from 10 states in his book." I used data from the entire United States. The first edition used state-level data from all the states and the District of Columbia, as well as
county-level data for the entire country from 1977 through 1992 (and, in some estimates, up to 1994). The second edition of the book not only updated the county and state data through 1996, but also
used city-level data for the largest 2,000 cities. Possibly what Ehrlich means here is that only 10 states (with a total of 718 counties) adopted right-to-carry laws during the 1977-1992 period. The
point of examining all counties in all the states was to make a year-by-year comparison of how the crime rates had changed in the counties with the right-to-carry laws relative to the counties in
states without the laws. In the second edition of my book, a total of 20 states, representing 1,432 counties, adopted right-to-carry laws between 1977 and 1996.
"The actual data are much more irregular with lots of ups and downs, and they show nothing special happening at time t=0." My book reports the year-to-year changes in crime rates, and these results
are consistent with the before-and-after trends. One of the benefits of examining the change in trends is that there are straightforward statistical tests to see if the change is statistically significant.
"Overall, averaging the 10 states, there is a small but not statistically significant increase in the robbery rate at t=0, certainly not the dramatic decrease Lott's fits show." Ehrlich has examined
state-level robbery rates for the 10 states that had adopted right-to-carry laws between 1977 and 1992, using data extended up until 1995 for the four years on either side of adoption. He finds that
there is no statistically significant change in before-and-after trends. He claims to use data up until 1997, but that is not possible since he limited the sample to only four years after adoption
and the first full year these states had the law in effect was 1992. I have tried to replicate his results, but have been unable to do so: Robbery rates are declining after adoption relative to how
they were changing prior to adoption.
Yet even if his data analysis had been correct, his approach has a lot of problems. The main difficulty is that there is no comparison of what is going on in the states that do not adopt
right-to-carry laws. When such a comparison is made, the drop in crime is about twice as large in right-to-carry states and twice as statistically significant. Accounting for other factors (e.g., the
arrest rate for robbery) also increases the statistical significance of the drop. Many aspects of what he did are unclear, such as whether he weighted each state equally or weighted them by
population (as is normally done). But neither approach altered the final result.
"What [Lott] does is to fit a smooth curve (actually a parabola) to the data earlier than t=0, and a separate curve to the data later than t=0." This is only one of several different approaches
reported in my book. The first edition also presented actual data on the number of permits issued per county over time for several states where the data were available. The second edition further
examined whether differences in right-to-carry laws can affect the number of people who get permits (e.g., the permitting fees, the length of the training requirement, and how many years the law has
been in effect), and whether this in turn can explain the changes in crime rates.
"Given a completely random set of data, Lott's fitting procedure is virtually guaranteed to yield either a drop or a rise near time t=0." This is not literally true. Besides a flat line, other
possibilities very obviously include the crime rate first rising and then falling after adoption -- or falling and then rising. The question is not only whether there is a change in trends, but also
whether those changes are statistically significant.
"Similarly, Lott shows the rate of multiple public shootings declining dramatically (by 100 percent) only two years after t=0. But using follow-up data in a more recent paper, Lott shows multiple
shootings rising precipitously the year before t=0 and then declining right at t=0." There are no inconsistencies. This paper, co-authored with William M. Landes, examined whether the results were
sensitive to removing observations from the year of adoption, as well as the two years prior to adoption. We found that the results remained essentially unchanged.
"It's difficult enough understanding why the impact of the laws should be so much greater on multiple shootings by crazed killers than ordinary murders (which drop only 10 percent), but figuring out
how the laws could work in reverse time on the thinking of these psychos is a real challenge." It is all too easy to dismiss mass murderers as totally irrational. But individuals who go on shooting
sprees are often motivated by goals such as fame. Making it difficult to obtain those goals may discourage some from engaging in their attacks. There is also the issue of stopping attacks that do
still occur. Suppose that a right-to-carry law deters crime primarily by raising the probability that a perpetrator will encounter a potential victim who is armed. In a single-victim crime, this
probability is likely to be very low. Hence the deterrent effect of the law -- though negative -- might be relatively small.
Now consider a shooting spree in a public place. In a crowd, the likelihood that one or more potential victims or bystanders are armed would be very large even though the probability that any
particular individual is armed is very low. This suggests a testable hypothesis: A right-to-carry law will have a bigger deterrent effect on shooting sprees in public places than on more conventional crimes.
To illustrate, let the probability (p) that a single individual carries a concealed handgun be .05. Assume further that there are 10 individuals in a public place. Then the probability that at least
one of them is armed is about .40 (= 1 - (.95)^10). Even if (p) is only .025, the probability that at least one of 10 people will be armed is .22 (= 1 - (.975)^10).
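Lott's arithmetic here checks out: the chance that at least one person is armed is the complement of nobody being armed. In Python, the calculation reads:

```python
def prob_at_least_one_armed(p, n):
    # probability that at least one of n independent individuals is armed,
    # when each carries with probability p
    return 1 - (1 - p) ** n

print(round(prob_at_least_one_armed(0.05, 10), 2))   # 0.4
print(round(prob_at_least_one_armed(0.025, 10), 2))  # 0.22
```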
Ehrlich claims that I fail to account for all relevant variables. Sure, there could possibly be still other variables out there, though I doubt it. The data used in the first edition of the book have
been made available to academics at 45 different universities. I know of no study that has attempted to account for as many factors as I have, but if Ehrlich thinks that other factors are important,
he is perfectly free to see whether including them alters the results. Other academics have tried different variables -- for example, Bruce Benson at Florida State University tried including other
variables for private responses to crime, and Carl Moody at the College of William & Mary used additional variables to account for law enforcement -- but so far none of these other variables has
altered the results.
However, the variable list that I attempted to account for is much more extensive than Ehrlich indicates. Among the factors that I accounted for in the first and second editions of my book are: the
execution rate for the death penalty; conviction rates; prison sentence lengths; number of police officers; different types of policing policies (community policing, problem-oriented policing,
"broken window" strategies); hiring rules for police; poverty; unemployment; four different measures of income; many different types of gun control and enforcement; cocaine prices; the most detailed
demographic information on the different age, sex, and racial breakdowns of the population used in any study; and many other factors.
Discovering some left-out variable is more difficult than simply saying that other factors affect the crime rate. This left-out factor must be changing in the different states at the same time that
the right-to-carry laws are being adopted. In addition, crime rates are declining as more permits are issued in a county, so the left-out variable must similarly be changing over time. Other evidence
that I presented in my book indicates that just as crime rates are declining in counties with right-to-carry laws, adjacent counties on the other side of state borders in states without these laws
are experiencing an increase in violent crime. The more similar these adjacent counties, the larger the spillover. Right-to-carry laws also reduce crime rates where the criminal and the victim come
into direct contact with each other relative to those crimes where there is no such contact. To alter the results, these left-out factors would have to vary systematically to coincide with all these
different results.
One of the reasons I graphed the before-and-after trends as well as the year-to-year variations in crime rates was to allow readers to judge for themselves whether the adoption of right-to-carry laws
coincided with changes in crime rates. For a general audience, I thought that this graphical approach was the most straightforward.
As to the appropriateness of a particular statistical test, the answer depends upon what question one is asking. The one test that Ehrlich questions asked whether there was a statistically
significant change in the slopes in crime rates before and after the laws are adopted. For that question, the F-test that I used is the appropriate test.
Research by Florenz Plassman and Nicolaus Tideman that is forthcoming in the October 2001 issue of the Journal of Law and Economics breaks down crime data by each state and by individual years before
and after the adoption of the right-to-carry law. They find that for all 10 states that adopted such laws between 1977 and 1992, murder, rape, and robbery rates fell after adoption. If Ehrlich were
to identify the statistical test which he says shows a significant turning point for robbery before the adoption of right-to-carry laws, I would be happy to comment on it.
It is flattering that my research is the first topic that Ehrlich discusses in his book, Nine Crazy Ideas in Science. My research, however, is not alone in studying this issue. A large number of
academics have examined the data. While a few academic articles have been critical of some of the methodology, not even these critics have found a bad effect from right-to-carry laws. In fact, the
vast majority of academics have found benefits as large or larger than the ones I report.
What is also interesting is how little criticism there is of the other gun control topics that my book addressed. For example, no academics have found significant evidence that waiting periods or
background checks reduce violent crime rates. Unfortunately, what I have found is that many of these gun control laws actually lead to more crime and more deaths.
In his book, Ehrlich awards "cuckoos" to the ideas he discusses, with one cuckoo meaning "Why not?" and four cuckoos meaning "certainly false." He gives my work three cuckoos, but there are a lot of
academics who must then be in the same boat as I am. More important, his criticisms are based upon either an incomplete or inaccurate reading of my work.
Robert Ehrlich
Data Distortion
Lott's numbers don't tell us anything
I reply below to the main criticisms of John Lott -- at least those which I have understood.
Lott doesn't deny that he misleads the reader by neglecting to mention that his plots are fits to the data, because he can't. His graphs are in fact labeled "number of violent crimes" per 100,000
population, and I find no statement in his book that the graphs are fits, rather than actual data. In his reply, Lott justifies displaying fits by noting that it is important to show
"adjusted" crime rates after other variables (aside from the laws) have been taken into account.
Lott is correct that I was using the first edition of his book when I made the comment about only 10 states changing their right-to-carry laws in the stipulated time period.
Lott claims that I "used data up until 1997, but that is not possible since he limited the sample to only four years after adoption [of the laws]...." Clearly, he is mistaken, since my plots show
data extending 10 years before the law's adoption.
My statement about the changes in slope in the various states was based on simple linear fits to the data two years on either side of t=0, without weighting the states by population. However, without
doing any statistical analysis whatsoever, a mere glance at the graphs for the 10 states should allow readers to decide for themselves whether the data for the 10 states actually show anything
particular happening at time t=0. (The data for robbery can be found plotted in my book or downloaded from the FBI Web site.)
Lott claims that his fitting procedure is not biased, because by using random data one is not virtually guaranteed to find a drop or a rise at t=0, as I claimed. Instead, he points out that the random
data might show an abrupt change in the slope, not the actual level, at t=0 (e.g., first rising then falling, or first falling then rising). But Lott's correction to my statement actually makes my
basic point even stronger, since a decrease in slope is exactly what might be expected if Lott were right. Thus, if his fitting procedure would force random data to show a change in slope at t=0 --
equally often an increase or decrease -- we can't have too much confidence that any observed decrease in slope validates his theory.
It's difficult to find anything about mass murder amusing, but I find Lott's calculation for the greater deterrent effect of easing concealed-carry laws on multiple shootings very humorous.
Essentially, he is saying that after concealed-carry laws are eased, mass murderers really are more deterred than ordinary murderers, because the chances are much greater that someone in a large
group is actually armed. Now, I don't think mass murderers are totally irrational. But I find this type of probability calculation more revealing of Lott's thinking than that of mass murderers, some
of whom I imagine would relish the idea of going out in a blaze of glory, in case someone in the group were armed. ("Suicide by police" seems to be a fairly common act by some psychos.)
In Lott's rebuttal on this same issue he fails to address the other inconsistency in his results: How could the laws act in reverse time, causing a big spurt of mass shootings the year before the
laws were enacted? He also neglects to answer my question on how his analysis can show the murder rate dropping immediately after the laws are passed, but the aggravated assault rate not starting its
drop until four years later.
Lott is right in pointing out that the omitted variables would need to change systematically in a way correlated with the dates of passing the laws. But given that the laws (according to him) account
for such a tiny fraction of the change in crime rates, and given an extremely long list of possible variables, it seems likely that some of them could fit the bill. If Lott's claim that he really has
accounted for all the key variables that affect violent crime rates were correct, then he really should be able to predict how the crime rates will change in the future in each state, based on all
these variables. Moreover, if his predictions fail to be borne out in any state it would show that he has left out some factor. (We are all used to hearing about why the stock market did what it did
on any given day, after the fact. But the failure to make such accurate predictions ahead of time tells us that maybe we really don't fully understand all the variables that make the market do what
it does, any more than we understand the variation in crime rates.)
I am not alone in questioning Lott's statistical analysis -- see, for example, work by Daniel Webster, Jens Ludwig, Daniel Black, and Daniel Nagin. Lott notes that his F-test is the appropriate one
to answer the question of whether there was a statistically significant change in the slope in crime rates at t=0. I don't dispute that the change in the slope of crime rates may be statistically
significant at t=0. After all, there might have been a real change at that point in time for reasons unrelated to the laws.
However, I claim that the slope will probably also be found to change by statistically significant amounts at most other years as well, and that would show that there's nothing special happening at t
=0, the year the laws were passed. The real test of whether it was the liberalized gun laws that made the difference is that a statistically significant change in slope was found at t=0 and only at t=0.
To see this basic flaw in Lott's statistical analysis, let's imagine that some lunatic has a theory that the Nasdaq drops every full moon. Presumably, according to Lott, the way to test this theory
would be to do a linear regression involving as many extraneous variables as we can think of that might affect the Nasdaq -- and not to worry too much that we may not have gotten them all. Then using
the regression, we need to see if the Nasdaq had a statistically significant drop on days when the moon was full. It very well might show a statistically significant drop on those days. Why not?
However, I expect that the Nasdaq would also show drops (and rises) having comparable statistical significance for other lunar phases as well -- thereby proving exactly nothing.
Prof. Lott, wouldn't you agree that a finding of a statistically significant change in the crime rates at years before t=0 would invalidate your results? Will you tell us what your analysis shows for
the statistical significance of changes in slope at years other than t=0?
John R. Lott Jr.
The Effect Is Clear
Disarming law-abiding citizens leads to more crime
To Prof. Ehrlich, the "basic flaw" in my statistical analysis is that concealed handgun laws are likely to be just accidentally related to changes in the crime rate. He takes a simple example of
explaining how the stock market changes over time. Obvious variables to include would be the interest rate and the expected growth in the economy, but many other variables -- many of dubious
importance -- could possibly also be included. The problem arises when such variables are correlated to changes in stock prices merely by chance.
An extreme case would be including the prices of various grocery store products. A store might sell thousands of items, and the price of one -- say, peanut butter -- might happen to be highly
correlated with the stock prices over the particular period examined. We know that there is little theoretical reason for peanut butter to explain overall stock prices, but if you go through enough
grocery store prices, it just might happen that one of them accidentally moves up and down with the movements in the stock market over a particular period of time. Similar problems can occur with
other obviously unrelated variables, such as the incidence of full moons or sunspots.
There are ways to protect against this "dubious variable" problem. One is to expand the original sample period. If no true causal relationship exists between the two variables, this coincidence is
unlikely to keep occurring in future years. And this is precisely what I did as more data became available: Originally, I looked at data through 1992, then extended it to 1994, then up until 1996,
and then, in recent working papers, up through 1998. If Ehrlich understood this, he would realize that this is equivalent to his request that I should try to "predict how the crime rates will change
in the future."
Another approach guarding against the "dubious variable" problem is to replicate the same test in many different places. Again, this is exactly what I have done here: I have studied the impact of
right-to-carry laws in different states at different times, and I have included new states as more and more states have adopted these laws as the time period has been extended.
As I discussed previously, I have also provided many qualitatively different tests, linking not only the changes in gun laws to changes in crime rates but also the actual issuance of permits; the
changes in different types of crimes; rates of murders in public and private places; and comparisons of border counties in states with and without right-to-carry laws. Even if I accidentally found a
variable that just happened to be related to crime in one of these dimensions, it seems unlikely that you would get consistent results across all these different tests.
In any case, as far as I know, no one except Ehrlich is arguing that testing whether right-to-carry laws affect crime is the theoretical equivalent of including as variables such things as full
moons. Whatever one's views on the topic, there are legitimate questions over whether these laws increase or decrease crime -- and the only way that we can test that is to include them as a variable
in the regressions.
However, the bottom line is clear: If Ehrlich believes that there is a particular variable that has been left out and that corresponds with all these changes, I have given him the data set; instead
of speculating about what might be, he should actually do the work to see if his concerns are valid. No previous study has accounted for even a fraction of the alternative explanations for changing
crime rates as I have and, more important, my regressions explain over 95 percent of the variation in crime rates over time.
His concerns about using before-and-after trends make little sense to me because I report the results in many different ways: linear and nonlinear trends before and after, year-to-year changes, and
before-and-after averages. Readers of my book can view the graphs with the year-to-year changes and judge for themselves when the change in trends occurs.
As I explicitly note in my book (pages 146-7 in the first edition), my graphs showing the nonlinear trends before and after the change in laws are constructed similarly to how other economists have
analyzed crime data. No explanation is offered for why I shouldn't have focused on whether there was a decline in crime relative to other states that did not adopt the right-to-carry laws.
Ehrlich might find it amusing that deterrence does work, but the data on guns and crime consistently shows that the greater the likelihood that a person can defend himself, the greater the
deterrence. William M. Landes and I point to evidence that perpetrators of multiple victim shootings are disproportionately psychotic, deranged, or irrational. Ehrlich and others claim that a law
permitting individuals to carry concealed weapons would therefore not deter shooting sprees in public places (though it might reduce the number of people killed or wounded). Yet a right-to-carry law
will both raise the potential perpetrator's cost (he is more likely to be wounded or killed or apprehended if he acts) and lower his expected benefit (he will do less damage if he encounters armed
resistance). Even those bent on suicide may refrain from attacking if the harm that they can do is sufficiently limited. Although not all offenders will alter their behavior in response to the law,
some individuals might refrain from a shooting spree.
Instead of so casually dismissing our result as "very humorous," Ehrlich and others should rise to the challenge to examine the data and see if they can offer a better explanation for the large drops
in multiple-victim public shootings when states adopt right-to-carry laws. These crimes have seriously shocked the nation, and finding ways to reduce such incidents is very important.
Finally, in both editions of my book, I respond to the critics of my work that Ehrlich mentions in his last dispatch. (I direct interested readers to chapters 7 and 9 of More Guns, Less Crime.)
This debate has focused on just my findings dealing with right-to-carry laws, but just as important are the overall effects of gun control laws. Despite the best of intentions, law-abiding citizens,
not criminals, are most likely to obey the different restrictions that are imposed. Disarming the law-abiding relative to criminals has one consequence: more crime. | {"url":"http://reason.com/archives/2001/08/01/the-great-gun-fight/print","timestamp":"2014-04-20T14:11:41Z","content_type":null,"content_length":"46698","record_id":"<urn:uuid:78bb6377-693d-42b6-9878-5fe1aa8646d8>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00189-ip-10-147-4-33.ec2.internal.warc.gz"} |
Family of ellipses with eccentricities {0.25, 0.36, 0.46, 0.57, 0.67, 0.78, 0.88, 0.99} in order of light to dark shade. The left family share vertexes, the right are confocal.
Mathematica Notebook for Ellipse
See the History section of Conic Sections page.
Ellipse is a one-parameter family of curves. Together with hyperbola and parabola, they make up the conic sections. Ellipse is also a special case of hypotrochoid.
Ellipse is commonly defined as the locus of points P such that the sum of the distances from P to two fixed points F1, F2 (called foci) is constant. That is, distance[P,F1] + distance[P,F2] == 2 a,
where a is a positive constant.
Tracing an Ellipse. Ellipse Family
The eccentricity of an ellipse, denoted e, is defined as e := c/a, where c is half the distance between foci. Eccentricity is a number that describes the degree of roundness of the ellipse. For any ellipse, 0 ≤ e ≤ 1. The smaller the eccentricity, the rounder the ellipse. If e == 0, it is a circle and F1, F2 are coincident. If e == 1, then it's a line segment, with foci at the two end points.
Vertexes of the ellipse are defined as the intersections of the ellipse and a line passing through the foci. The distance between the vertexes is called the major axis or focal axis. A line passing the center and perpendicular to the major axis is the minor axis. Half the length of the major axis is called the semimajor axis. Half the length of the minor axis is called the semiminor axis.
Let an ellipse's sum of distances be “2*a”, center to a focus be c, semiminor axis be b, and eccentricity be e. These values are related by the formulas b^2+c^2==a^2 and e*a==c, and the center-to-vertex distance is “a”. These are very useful formulas when trying to solve for an unknown.
The formula for ellipse can be derived in many ways. By the definition of sum of distances distance[P,F1] + distance[P,F2] == 2*a, setting focuses to be {±c,0} we easily find the Cartesian equation
to be:
Sqrt[(x-c)^2+(y-0)^2] + Sqrt[(x- (-c))^2+(y-0)^2] == 2 *a
After getting rid of square roots, we have:
16*a^4 - 16*a^2*c^2 + (-16*a^2 + 16*c^2)*x^2 - 16*a^2*y^2 == 0
which has the form of a 2nd-degree polynomial. Now, dividing both sides by a constant, we have the form:
x^2/A^2 + y^2/B^2 == 1
for some A and B, where the vertexes of the ellipse happen to be {±A,0} and the minor-axis endpoints {0,±B}. To derive the parametric formula, notice that the x component or y component oscillates. So, if we replace y by B*Sin[t] and solve for x, we easily find a parametric formula corresponding to x^2/A^2+y^2/B^2==1 to be:
{A* Cos[t], B* Sin[t]}
ellipse_standard.gcf ellipse_sum_dist.gcf ellipse_eq_derive.nb.zip
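The derivation above can be spot-checked numerically. This sketch (with arbitrarily chosen values a = 5, c = 4, so b = 3) samples points on the ellipse and confirms both the distance-sum definition and the squared-out polynomial:

```python
import math

# Spot-check of the derivation above (a = 5, c = 4, so b = 3): sample
# points on the ellipse and confirm both the distance-sum definition
# and the squared-out 2nd-degree polynomial.
a, c = 5.0, 4.0
b = math.sqrt(a**2 - c**2)              # semiminor axis, b^2 + c^2 == a^2

for k in range(12):
    t = 2 * math.pi * k / 12
    x, y = a * math.cos(t), b * math.sin(t)
    d1 = math.hypot(x - c, y)           # distance to focus (c, 0)
    d2 = math.hypot(x + c, y)           # distance to focus (-c, 0)
    assert abs(d1 + d2 - 2 * a) < 1e-9  # the defining property
    poly = (16 * a**4 - 16 * a**2 * c**2
            + (-16 * a**2 + 16 * c**2) * x**2 - 16 * a**2 * y**2)
    assert abs(poly) < 1e-6             # the squared-out equation
```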
Similarly, we can derive other forms. The following gives ellipses with eccentricity as parameter, with vertexes at {±1,0} and foci at {±e, 0}.
• Parametric: {Cos[t], Sqrt[1-e^2] Sin[t]}.
• Cartesian: x^2 + y^2/(1 - e^2) == 1 ellipse_plot.gcf
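A short numeric check of these eccentricity-parametrized forms (foci at {±e, 0}, distance sum 2 since a = 1):

```python
import math

# Check the eccentricity-parametrized forms: vertexes at (±1, 0),
# foci at (±e, 0), distance sum 2 (since a = 1).
for e in (0.25, 0.5, 0.8, 0.99):
    for k in range(8):
        t = 2 * math.pi * k / 8
        x, y = math.cos(t), math.sqrt(1 - e**2) * math.sin(t)
        assert abs(x**2 + y**2 / (1 - e**2) - 1) < 1e-9        # Cartesian form
        dist_sum = math.hypot(x - e, y) + math.hypot(x + e, y)
        assert abs(dist_sum - 2) < 1e-9                        # sum of distances
```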
Point Construction
This method is obvious from the parametric formula for ellipses {a Cos[t], b Sin[t]}, where a and b are the radii of the circles. The figure below shows an ellipse with a:=5, b:=3, and e==8/10.
Point Construction, Point-wise Construction
Ellipse's Property of inscribed and circumscribed circles and center
From the point of view of a given ellipse, this is a property that relates a point on the ellipse to its circumscribed circle, inscribed circle, and center. That is: given an ellipse and a point P on the curve, let there be lines passing through P and perpendicular to the ellipse's major and minor axes. These 2 perpendicular lines will intersect the circumscribed and inscribed circles at 2 points, if we consider only intersections that lie in the same quadrant as P. Now, a line passing through these two points will also pass through the ellipse's center.
This theorem leads to the Trammel of Archimedes below.
Trammel of Archimedes
Ellipse is the glissette of a fixed point on a line segment of constant length whose endpoints lie on two mutually orthogonal lines. That is, it is the trace of a fixed point on a segment of constant length as its endpoints glide freely along two mutually orthogonal lines. This mechanism is called the Trammel of Archimedes. The envelope of the moving segment itself generates an astroid.
Trammel in Motion
Ellipse as Trammel
Proof: Assume the ellipse to be in a position where its major axis is aligned with the x axis and its minor axis with the y axis. Let A be the center of the ellipse. Let P be a point on the ellipse. Let there be a circumscribed circle r, and an inscribed circle q. Now, let there be a line thru P, parallel to the minor axis. Let the intersection of this line and the circumscribed circle be Q (pick the intersection that lies in the same quadrant as P). Let there be a line thru P, parallel to the major axis. Let the intersection of this line and the inscribed circle be R. By the point-wise construction theorem, the points A, R, Q are collinear. Let F be the midpoint of RQ. Let F be the center of a rectangle with lower left corner at A. Label the upper left corner D, upper right C, lower right E.
Now, consider triangle[Q,P,R]. Note that dist[Q,R] is constant, angle[Q,P,R]==90°, and F bisects QR. Therefore distance[F,P] is constant. Now, consider the rectangle DCEA. The diagonals of the rectangle are constant, since AF is constant. (AF is constant because F is the midpoint of RQ, R and Q lie on the circumscribed/inscribed circles, and A, R, Q are collinear.) The distance[D,E] is constant because AC is twice AE, and AE is constant because F lies midway between the radii of the circumscribed and inscribed circles. P lies on DE by symmetry arguments with center F. (This proof is badly worded and verbose. Exercise: write a better proof.)
Tangent Construction
Ellipse Tangent Construction
Givens: A circle k with center F1, a point F2 inside the circle, and a point P on the circle. Now, let t be the line that perpendicularly bisects Line[P,F2]. Let Q be the intersection between t and Line[P,F1]. As P moves around the circle, the trace of Q is an ellipse with foci F1 and F2 and dist[F1,P] (the radius of the circle) being its distance sum, and the line t is its tangent at Q.
Proof: We want to prove that dist[F1,Q]+dist[F2,Q]==dist[F1,P]. Since Q lies on the line t, and t perpendicularly bisects line[P,F2], we have dist[Q,P]==dist[Q,F2]. Since Q also lies on segment[P,F1], dist[F1,Q] + dist[Q,P] == dist[F1,P]. Combining the above equations shows dist[F1,Q]+dist[F2,Q]==dist[F1,P]. To show that t is the tangent at Q, note that the line perpendicular to t and passing through Q bisects the angle[F1,Q,F2]. (detail omitted here)
Note that the midpoint of P and F2 traces out a circle. This leads to the following theorem: the pedal of an ellipse with respect to a focus is a circle.
The pedal of an ellipse with respect to a focus is a circle; conversely, the negative pedal of a circle with respect to a point inside the circle is an ellipse. This fact can be used to draw ellipses by an envelope of lines.
1. Start with a circle and a point F1 inside the circle.
2. Draw a line j from a point P on the circle to F1.
3. Draw a line k perpendicular to j and passing P.
4. Repeat steps 2 and 3 for other points P on the circle.
5. The envelope of the lines k is an ellipse with a focus at F1 and two vertexes touching the circle.
Ellipse as Circle's Negative Pedal
Ellipse as Hypotrochoid
Ellipse can be generated as a hypotrochoid. Let the parameters of the hypotrochoid be {A,B,H}, where A is the radius of the fixed circle, B is the radius of the rolling circle, and H is the distance
from the tracing point to the center of the rolling circle. If A/2 == B and H ≠ B, then it's an ellipse with semimajor axis a==B+H and semiminor axis b==Abs[B-H]. Eccentricity is then e:=c/a == Sqrt[a^2-b^2]/a == Sqrt[(B+H)^2-(B-H)^2]/(B+H)
Given A, B:=1/2 A, and e, we may want to find H such that the hypotrochoid {A,B,H} generates an ellipse with eccentricity e. This is easily solved with the above information. The solution is:
h == ( -B (-2+e^2) ± 2 B Sqrt[(1-e)*(1+e)] ) / e^2
If we let A:=1, B:=1/2, e:=8/10, we get h==1/8 or 2. Two ellipses of eccentricity 8/10 as hypotrochoids with parameters {1,1/2,1/8} and {1,1/2,2} are shown below.
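A quick Python check of this worked example (values taken from the text) confirms both roots and their eccentricity:

```python
import math

# Check of the worked example: B = 1/2, target eccentricity e = 8/10.
B, e = 0.5, 0.8
roots = [(-B * (-2 + e**2) + sgn * 2 * B * math.sqrt((1 - e) * (1 + e))) / e**2
         for sgn in (1, -1)]
assert sorted(round(h, 9) for h in roots) == [0.125, 2.0]   # h == 1/8 or 2
for H in roots:
    a_, b_ = B + H, abs(B - H)          # semimajor, semiminor axes
    assert abs(math.sqrt(a_**2 - b_**2) / a_ - e) < 1e-9    # eccentricity 8/10
```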
Ellipse Tracing Ellipse Tracing
Optical Property
Light rays from one focus will reflect to the other focus. If the radiant point is not at a focus, a caustic curve forms.
Ellipse's Caustic
Ellipse's inversion with respect to a focus is a dimpled limacon of Pascal.
Ellipse Inversion
Cylinder Slice
The intersection of a right circular cylinder and a plane is an ellipse. One can see this by tilting a cup or a cone-shaped paper cup filled with liquid. Let r be the radius of the base circle of the cylinder, and α be the angle formed by the cutting plane and the plane of the base circle of the right circular cylinder. The intersection will be an ellipse with semi-major axis r/Cos[α] and semi-minor axis r.
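Since the semiaxes are r/Cos[α] and r, the eccentricity of the slice works out to Sin[α]; a short numeric check:

```python
import math

# With semiaxes a = r/cos(alpha) and b = r, the slice's eccentricity
# reduces to sin(alpha):  sqrt(a^2 - b^2)/a == sqrt(1 - cos^2 alpha).
r = 1.0
for alpha in (0.1, 0.5, 1.0, 1.4):      # cutting angles in radians
    a_ = r / math.cos(alpha)            # semi-major axis
    b_ = r                              # semi-minor axis
    ecc = math.sqrt(a_**2 - b_**2) / a_
    assert abs(ecc - math.sin(alpha)) < 1e-9
```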
Related Web Sites
See: Websites on Plane Curves, Printed References On Plane Curves.
Robert Yates: Curves and Their Properties.
See: Websites on Conic Sections.
blog comments powered by | {"url":"http://xahlee.info/SpecialPlaneCurves_dir/Ellipse_dir/ellipse.html","timestamp":"2014-04-17T12:29:53Z","content_type":null,"content_length":"19966","record_id":"<urn:uuid:167218d2-0c92-40f5-8428-19ea4366be40>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00151-ip-10-147-4-33.ec2.internal.warc.gz"} |
Don't understand use of auxiliary function in proving theorems
November 17th 2010, 09:09 PM
Don't understand use of auxiliary function in proving theorems
How was the auxiliary function to be used in proving the theorem below determined? I don't understand why this function was chosen, and how the author arrived at the decision of choosing this
function? This is the first time I came across the use of auxiliary functions in proving a theorem, so I am a bit confused whether the function is chosen by pure intuition or is there a method
behind this.
Thanks a lot for reading.
Attachment 19753
November 17th 2010, 11:41 PM
The important thing is that $F$ works and you understand why it works. All the same, in this case, analyze:
1.- $G(x)=f(x)-f(a)$ translates vertically the graph of $f$ in such a way that $G(a)=0$ .
2.- The equation of the straight line containing the endpoints of the $G$ graph is $y=\dfrac{f(b)-f(a)}{b-a}\,(x-a)$ (the chord from $(a,0)$ to $(b,\,f(b)-f(a))$).
Fernando Revilla | {"url":"http://mathhelpforum.com/calculus/163636-dont-understand-use-auxiliary-function-proving-theorems-print.html","timestamp":"2014-04-17T22:31:36Z","content_type":null,"content_length":"6321","record_id":"<urn:uuid:6bf432b0-e7e2-4601-a994-ca1f1910b740>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00059-ip-10-147-4-33.ec2.internal.warc.gz"} |
FIG. 1.
Partial configuration of a Cu nanofoam created from a [100] single crystal. Shock loading is along the x-axis or [100].
FIG. 2.
The x – t diagram of for , showing different regimes: unshocked, elastic precursor, plastic wave, and elastic release.
FIG. 3.
The shock velocity–particle velocity ( ) plot for elastic and plastic shocks. HEL: Hugoniot elastic limit. (2.5km s^−1) and C [0] (0.16km s^−1) are the intercepts of the straight lines with the
vertical axis.
FIG. 4.
The stress ( )–specific volume (V) curves obtained from MD simulations, fit to MD results, experiments on porous and full density Cu, ^ ^6,31 and predicted compacted Hugoniot.
FIG. 5.
(a) The shock pressure–temperature (P–T) plot for different (circles), along with the equilibrium melting curve (solid curve). ^ ^32 The dashed curves are guide to the eye. (b) MSDs obtained from the
absorbing wall simulations. The numbers in (a) and (b) denote in km s^−1.
FIG. 6.
Atomic configurations (projected onto the xy-plane) for (a) and t=100 ps; (b) and t=50 ps; (c) and t=37 ps. Color-coding is based on . Shock direction: left to right.
FIG. 7.
Atomic configurations (projected onto the xy-plane) for and t=14 ps; (b) and t=9 ps, showing forward or transverse flows (arrows). Color-coding is based on u[x] in km s^−1. Shock direction: left to right.
FIG. 8.
The x – t diagram of a 1-nm thick slice cut along the void diameter in the shock direction (AOB in Fig. 1 ) obtained from the 2D binning analysis for , showing internal jetting (circled region), free
surface jetting and atomization. Color-coding is based on u[x]. Shock direction: left to right.
FIG. 9.
2D stress ( , and in GPa) maps on the xy-plane for and t=50 ps [cf. Fig. 6(b) ]. Shock direction: left to right.
FIG. 10.
2D temperature maps on the xy-plane for different shock strengths, showing hotspot formation due to internal jetting. Temperature is in K. Color is saturated above a chosen temperature. Shock
direction: left to right.
FIG. 11.
Atomic configurations (projected onto the xy-plane) showing nanojets at free surfaces for (a) and (b) . Atoms are color-coded with u[x] in km s^−1. Shock direction: left to right.
FIG. 12.
Jet velocity (a) and jet height (b) for different . The numbers denote in km s^−1. The curves for are shifted by −50 ps.
| {"url":"http://scitation.aip.org/content/aip/journal/jap/113/6/10.1063/1.4791758","timestamp":"2014-04-24T05:21:54Z","content_type":null,"content_length":"103013","record_id":"<urn:uuid:29f256e9-b26e-4453-94b8-8823ce2e0ccf>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Is Level set of Regular functions in Alexandrov spaces again an Alex. space?
up vote 3 down vote favorite
Let $X^n$ be an Alexandrov space, and $f: X^n\to \mathbb R^k$ a regular map; is the level set necessarily an Alexandrov space?
In my mind, the intrinsic metric on the level set is 'comparable' to the ambient metric, but is it necessarily an Alexandrov space?
alexandrov-geometry mg.metric-geometry
add comment
1 Answer
active oldest votes
The answer is "no" even for a regular semiconcave function $f:X\to\mathbb R$.
up vote 3 down If $f:X\to \mathbb R$ is convex then it is a long-standing open problem.
vote accepted
Thanks Prof. Petrunin, In fact, I am reading your paper joint work with Kapovitch and TUSCHMANN. The 4.3 blow-up method, it seems the reason that $\frac{1}{\theta_{n,2}}M_{n,2}$
converges to some Alexandrov space $A_2$ requires that the level set $M_{n,2}$ also has curvature bounded below, right? Or did I miss something? (or should it be rescaling the whole
manifold $M_n$ instead of the level set? – John B Mar 2 '10 at 0:42
No, first you pass to the limit space and then you note that it is splitting as $R^k\times A_2$. – Anton Petrunin Mar 2 '10 at 0:53
@Leonid. The simplest case is a distance map (i.e. each coordinate $x^i$ is a distance function) such that $dx^i(\xi)>0$ for some direction $\xi$ at each point. – Anton Petrunin Mar
2 '10 at 0:59
OK, so it's the rescal of the whole manifold $\frac{1}{\theta_{n,2}}M_{n,1}$ converge to $\mathbb R^k\times A_2$, not the 'fiber' converges to $A_2$ as intrinsic metric spaces. –
John B Mar 2 '10 at 1:03
oops, I meant the $\frac{1}{\theta_{n,1}\theta_{n,2}}M_{n,1}$ converge to $\mathbb R^k\times A_2$ – John B Mar 2 '10 at 1:06
show 2 more comments
Not the answer you're looking for? Browse other questions tagged alexandrov-geometry mg.metric-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/16790/is-level-set-of-regular-functions-in-alexandrov-spaces-again-an-alex-space","timestamp":"2014-04-17T04:56:01Z","content_type":null,"content_length":"56317","record_id":"<urn:uuid:164dd969-b2d8-40bf-aa8f-a0c54a33c884>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00158-ip-10-147-4-33.ec2.internal.warc.gz"} |
Layers of the Earth
Learning Goals
Use graphs of travel times of seismic body waves.
Use data provided to compute velocity for one seismic wave and travel time for a second seismic wave from a station at a known distance from the epicenter of an earthquake.
Deduce the path of the second wave and recognize the implications for the internal structure of Earth.
Mathematical Skills
Use basic algebra (distance, rate and time) and geometry.
Apply Snell's law.
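The distance-rate-time and Snell's law computations at the heart of the activity can be sketched in a few lines of Python. All numbers below are invented for illustration; the actual exercise supplies its own data.

```python
import math

# Distance/rate/time step (hypothetical values):
# a station 300 km from the epicenter records the P wave 50 s after the
# earthquake, and the S wave is assumed to travel at 3.5 km/s.
distance_km = 300.0          # assumed epicentral distance
p_travel_time_s = 50.0       # assumed P-wave travel time
v_p = distance_km / p_travel_time_s      # velocity = distance / time -> 6 km/s
v_s = 3.5                    # assumed S-wave speed, km/s
t_s = distance_km / v_s      # S-wave travel time to the same station

# Snell's law at a velocity boundary: sin(i1)/v1 == sin(i2)/v2
i1 = math.radians(30.0)      # incidence angle in the upper layer
v1, v2 = 6.0, 8.0            # assumed layer speeds, km/s
i2 = math.asin(math.sin(i1) * v2 / v1)   # refraction angle in the lower layer
assert i2 > i1               # the ray bends away from the normal in faster rock
```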
Context for Use
This activity consists of a set of quantitative problem-solving exercises that can be used as an in-class activity or an assignment in any introductory course with a unit on Earth structure, plate
tectonics, or seismology, such as:
Physical geology or physical geography
Historical geology or Earth history
Environmental geology, natural hazards, natural disasters
Earth system science
Earth materials
Description and Teaching Materials
Teaching Notes and Tips
An Instructor's Guide to all Kéyah Math activities is available online from the Instructor Resources page on the Kéyah Math website.
Students record their work and answers in a word-processor document or a notebook, which can be submitted to the instructor for assessment. Solutions to these problems are available online from the Instructor Resources page on the Kéyah Math website.
References and Resources | {"url":"http://serc.carleton.edu/keyah/activities/layers_earth.html","timestamp":"2014-04-17T01:03:08Z","content_type":null,"content_length":"24003","record_id":"<urn:uuid:e96ab24f-4096-4466-bd43-944e29a3edaf>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00562-ip-10-147-4-33.ec2.internal.warc.gz"} |
C06LAF Inverse Laplace transform, Crump's method
C06LBF Inverse Laplace transform, modified Weeks' method
C06LCF Evaluate inverse Laplace transform as computed by C06LBF
F01ABF Inverse of real symmetric positive-definite matrix using iterative refinement
F01ADF Inverse of real symmetric positive-definite matrix
F02SDF Eigenvector of generalized real banded eigenproblem by inverse iteration
F07AJF Inverse of real matrix, matrix already factorized by F07ADF
F07AWF Inverse of complex matrix, matrix already factorized by F07ARF
F07FJF Inverse of real symmetric positive-definite matrix, matrix already factorized by F07FDF
F07FWF Inverse of complex Hermitian positive-definite matrix, matrix already factorized by F07FRF
F07GJF Inverse of real symmetric positive-definite matrix, matrix already factorized by F07GDF, packed storage
F07GWF Inverse of complex Hermitian positive-definite matrix, matrix already factorized by F07GRF, packed storage
F07MJF Inverse of real symmetric indefinite matrix, matrix already factorized by F07MDF
F07MWF Inverse of complex Hermitian indefinite matrix, matrix already factorized by F07MRF
F07NWF Inverse of complex symmetric matrix, matrix already factorized by F07NRF
F07PJF Inverse of real symmetric indefinite matrix, matrix already factorized by F07PDF, packed storage
F07PWF Inverse of complex Hermitian indefinite matrix, matrix already factorized by F07PRF, packed storage
F07QWF Inverse of complex symmetric matrix, matrix already factorized by F07QRF, packed storage
F07TJF Inverse of real triangular matrix
F07TWF Inverse of complex triangular matrix
F07UJF Inverse of real triangular matrix, packed storage
F07UWF Inverse of complex triangular matrix, packed storage
F08JKF Selected eigenvectors of real symmetric tridiagonal matrix by inverse iteration, storing eigenvectors in real array
F08JXF Selected eigenvectors of real symmetric tridiagonal matrix by inverse iteration, storing eigenvectors in complex array
F08PKF Selected right and/or left eigenvectors of real upper Hessenberg matrix by inverse iteration
F08PXF Selected right and/or left eigenvectors of complex upper Hessenberg matrix by inverse iteration | {"url":"http://www.nag.com/numeric/fl/manual20/html/indexes/kwic/inverse.html","timestamp":"2014-04-20T23:57:14Z","content_type":null,"content_length":"12107","record_id":"<urn:uuid:90d054e6-2ce9-43bf-9137-27d5bfb431d0>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00419-ip-10-147-4-33.ec2.internal.warc.gz"} |
SciPy is a collection of mathematical algorithms and convenience functions built on the Numpy extension for Python. It adds significant power to the interactive Python session by exposing the user to
high-level commands and classes for the manipulation and visualization of data. With SciPy, an interactive Python session becomes a data-processing and system-prototyping environment rivaling sytems
such as MATLAB, IDL, Octave, R-Lab, and SciLab.
The additional power of using SciPy within Python, however, is that a powerful programming language is also available for use in developing sophisticated programs and specialized applications.
Scientific applications written in SciPy benefit from the development of additional modules in numerous niche’s of the software landscape by developers across the world. Everything from parallel
programming to web and data-base subroutines and classes have been made available to the Python programmer. All of this power is available in addition to the mathematical libraries in SciPy.
This document provides a tutorial for the first-time user of SciPy to help get started with some of the features available in this powerful package. It is assumed that the user has already installed
the package. Some general Python facility is also assumed such as could be acquired by working through the Tutorial in the Python distribution. For further introductory help the user is directed to
the Numpy documentation.
For brevity and convenience, we will often assume that the main packages (numpy, scipy, and matplotlib) have been imported as:
>>> import numpy as np
>>> import scipy as sp
>>> import matplotlib as mpl
>>> import matplotlib.pyplot as plt
These are the import conventions that our community has adopted after discussion on public mailing lists. You will see these conventions used throughout NumPy and SciPy source code and documentation.
While we obviously don’t require you to follow these conventions in your own code, it is highly recommended.
SciPy is organized into subpackages covering different scientific computing domains. These are summarized in the following table:
│Subpackage │ Description │
│cluster │Clustering algorithms │
│constants │Physical and mathematical constants │
│fftpack │Fast Fourier Transform routines │
│integrate │Integration and ordinary differential equation solvers │
│interpolate│Interpolation and smoothing splines │
│io │Input and Output │
│linalg │Linear algebra │
│maxentropy │Maximum entropy methods │
│ndimage │N-dimensional image processing │
│odr │Orthogonal distance regression │
│optimize │Optimization and root-finding routines │
│signal │Signal processing │
│sparse │Sparse matrices and associated routines │
│spatial │Spatial data structures and algorithms │
│special │Special functions │
│stats │Statistical distributions and functions │
│weave │C/C++ integration │
Scipy sub-packages need to be imported separately, for example:
>>> from scipy import linalg, optimize
Because of their ubiquitousness, some of the functions in these subpackages are also made available in the scipy namespace to ease their use in interactive sessions and programs. In addition, many
basic array functions from numpy are also available at the top-level of the scipy package. Before looking at the sub-packages individually, we will first look at some of these common functions.
Scipy and Numpy have HTML and PDF versions of their documentation available at http://docs.scipy.org/, which currently details nearly all available functionality. However, this documentation is still
work-in-progress, and some parts may be incomplete or sparse. As we are a volunteer organization and depend on the community for growth, your participation - everything from providing feedback to
improving the documentation and code - is welcome and actively encouraged.
Python also provides the facility of documentation strings. The functions and classes available in SciPy use this method for on-line documentation. There are two methods for reading these messages
and getting help. Python provides the command help in the pydoc module. Entering this command with no arguments (i.e. >>> help ) launches an interactive help session that allows searching through the
keywords and modules available to all of Python. Running the command help with an object as the argument displays the calling signature, and the documentation string of the object.
The pydoc method of help is sophisticated but uses a pager to display the text. Sometimes this can interfere with the terminal you are running the interactive session within. A scipy-specific help
system is also available under the command sp.info. The signature and documentation string for the object passed to the help command are printed to standard output (or to a writeable object passed as
the third argument). The second keyword argument of sp.info defines the maximum width of the line for printing. If a module is passed as the argument to help then a list of the functions and classes
defined in that module is printed. For example:
>>> sp.info(optimize.fmin)
fmin(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None,
full_output=0, disp=1, retall=0, callback=None)
Minimize a function using the downhill simplex algorithm.
func : callable func(x,*args)
The objective function to be minimized.
x0 : ndarray
Initial guess.
args : tuple
Extra arguments passed to func, i.e. ``f(x,*args)``.
callback : callable
Called after each iteration, as callback(xk), where xk is the
current parameter vector.
:Returns: (xopt, {fopt, iter, funcalls, warnflag})
xopt : ndarray
Parameter that minimizes function.
fopt : float
Value of function at minimum: ``fopt = func(xopt)``.
iter : int
Number of iterations performed.
funcalls : int
Number of function calls made.
warnflag : int
1 : Maximum number of function evaluations made.
2 : Maximum number of iterations reached.
allvecs : list
Solution at each iteration.
*Other Parameters*:
xtol : float
Relative error in xopt acceptable for convergence.
ftol : number
Relative error in func(xopt) acceptable for convergence.
maxiter : int
Maximum number of iterations to perform.
maxfun : number
Maximum number of function evaluations to make.
full_output : bool
Set to True if fval and warnflag outputs are desired.
disp : bool
Set to True to print convergence messages.
retall : bool
Set to True to return list of solutions at each iteration.
Uses a Nelder-Mead simplex algorithm to find the minimum of
function of one or more variables.
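The help machinery described above can also be driven programmatically. A short sketch (note: sp.info was an alias for NumPy's info function in the SciPy 0.9 era; recent SciPy releases have dropped the alias, so np.info is used here):

```python
import io
import numpy as np

# sp.info printed an object's signature and docstring to standard output;
# np.info behaves the same way (sp.info was an alias for it in the
# SciPy 0.9 era -- recent SciPy releases have dropped the alias).
np.info(np.linalg.solve)

# The second argument is the maximum line width; the third is a writeable
# object, matching the sp.info behavior described above.
buf = io.StringIO()
np.info(np.linalg.solve, 76, buf)
print(buf.getvalue().splitlines()[0])

# dir() looks at the namespace of a module or package.
print([name for name in dir(np.linalg) if name.startswith("s")])
```
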
Another useful command is source. When given a function written in Python as an argument, it prints out a listing of the source code for that function. This can be helpful in learning about an
algorithm or understanding exactly what a function is doing with its arguments. Also don’t forget about the Python command dir which can be used to look at the namespace of a module or package. | {"url":"http://docs.scipy.org/doc/scipy-0.9.0/reference/tutorial/general.html","timestamp":"2014-04-21T02:00:06Z","content_type":null,"content_length":"19146","record_id":"<urn:uuid:6300c257-902e-4e57-875a-b9bef4964731>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00113-ip-10-147-4-33.ec2.internal.warc.gz"} |
NAEP - Mathematics 2007: Sample Questions
This sample question measures fourth-graders' understanding of number properties and operations.
The percentages below indicate how students performed on the question. In addition to the overall percentage of students who answered the question correctly, the percentage of students at each
achievement level who answered the question correctly is presented.
As an example of how to interpret these percentages, 36 percent of students overall answered this question correctly. When only the students in the Proficient category are considered, 46 percent
answered correctly.
See more about this question in the NAEP Questions Tool.
View this question, at score 294, on a map of NAEP mathematics items.
SOURCE: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP), 2007 Mathematics Assessment. | {"url":"http://nationsreportcard.gov/math_2007/m0016.asp?subtab_id=Tab_1&tab_id=tab1","timestamp":"2014-04-21T03:01:16Z","content_type":null,"content_length":"21856","record_id":"<urn:uuid:3b3c70d4-9709-4e39-a724-d59b4f1b098c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00477-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need help with a couple trig problems!
July 20th 2013, 09:24 PM #1
Jul 2013
Need help with a couple trig problems!
Hello everyone I was wondering if you could help me out with a couple trigonometry problems. For the first one I attempted it and got stuck. For the second one, are you supposed to set the real
part on left to the real part on right, and the imaginary part on the left to the imaginary part on the right?
e.g. on left side: real part = sin(x) imaginary part = cos(y). On right side: real part =-cos(x) imaginary part = -1
So if I were to set them equal to each other it would look like:
real parts: sin(x) = -cos(x) and imaginary parts: cos(y) = -1
answer for real parts would be 3pi/4 or 7pi/4 . Except with the given range, it would have to be 7pi/4. (x =7pi/4)
answer for imaginary parts would be pi. (y = pi)
Is this correct?
Re: Need help with a couple trig problems!
\displaystyle \begin{align*} \frac{7\pi}{4} \end{align*} is NOT in the region \displaystyle \begin{align*} -\frac{\pi}{2} \leq x \leq \frac{\pi}{2} \end{align*}. What would be the equivalent
angle to \displaystyle \begin{align*} \frac{7\pi}{4} \end{align*} in that region?
For the first question, surely you can at least see \displaystyle \begin{align*} \sin{(2x)} = \frac{2}{5} \end{align*}. How could you go from here?
Re: Need help with a couple trig problems!
Thanks for the help I appreciate it very much. The algebra part of my brain kept telling me to set the first equation to 0 in order to solve but I realized how easy it was once you directed me a
little bit. I'll be sure not to make the same mistake in the future. As for #2, luckily I checked this forum right before class and saw that the correct answer is -pi/4 according to the range.
Thanks again for the help it is greatly appreciated.
Re: Need help with a couple trig problems!
Yes \displaystyle \begin{align*} -\frac{\pi}{4} \end{align*} is correct
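The thread's conclusions are easy to verify numerically; a quick check in plain Python:

```python
import math

# Real parts: sin(x) = -cos(x), with the restriction -pi/2 <= x <= pi/2.
x = -math.pi / 4
assert -math.pi / 2 <= x <= math.pi / 2
assert math.isclose(math.sin(x), -math.cos(x))

# The rejected candidate 7*pi/4 is the same angle modulo 2*pi,
# but it lies outside the allowed interval.
assert math.isclose(math.sin(7 * math.pi / 4), math.sin(-math.pi / 4))

# Imaginary parts: cos(y) = -1 gives y = pi.
y = math.pi
assert math.isclose(math.cos(y), -1.0)
print("checks pass")
```
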
| {"url":"http://mathhelpforum.com/trigonometry/220722-need-help-couple-trig-problems.html","timestamp":"2014-04-21T03:27:14Z","content_type":null,"content_length":"43049","record_id":"<urn:uuid:9fcebed9-0ffd-4b97-b54a-edd...","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
Schlicht domain
What is a schlicht domain over $\mathbb{C}^n$? How is it different from a domain in $\mathbb{C}^n$? Examples?
Schlicht domain over ${\mathbb C}^n$ is the same as a domain in ${\mathbb C}^n$. The point is that one also defines domains over ${\mathbb C}^n$ as connected complex manifolds $M^n$ equipped with a
locally biholomorphic map $f:M^n\to {\mathbb C}^n$. The schlicht property just means that $f$ is 1-1.
See http://www.encyclopediaofmath.org/index.php/Riemannian_domain for general definition and references for Riemann domains. People also consider branched Riemann domains where "locally
biholomorphic" is replaced with "holomorphic with discrete fibers." These are generalizations of Riemann surfaces of multivalued holomorphic functions of one variable.
More on the "schlicht property": mathoverflow.net/questions/62218/…
J. H. S. Nov 22 '12 at 22:33
So, every domain in $\mathbb C^n$ is schlicht.
Saurabh T Nov 22 '12 at 22:50
| {"url":"http://mathoverflow.net/questions/114190/schlicht-domain","timestamp":"2014-04-21T09:55:04Z","content_type":null,"content_length":"52406","record_id":"<urn:uuid:2895e067-ec42-4267-86bc-13166754c882>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Memoirs on Differential Equations and Mathematical Physics
Table of Contents: Volume 45, 2008
T. Buchukuri, O. Chkadua, D. Natroshvili, A.-M. Sändig
Mem. Differential Equations Math. Phys. 45 (2008), pp. 7-74.
download pdf file.
Lamara Bitsadze
Mem. Differential Equations Math. Phys. 45 (2008), pp. 75-83.
download pdf file.
L. Giorgashvili, G. Karseladze, G. Sadunishvili
Solution of a Boundary Value Problem of Statics of Two-Component Elastic Mixtures for a Space with Two Nonintersecting Spherical Cavities
Mem. Differential Equations Math. Phys. 45 (2008), pp. 85-115.
download pdf file.
Mikheil Usanetashvili
On the Problem with a Sloping Derivative for a Mixed Type Equation in the Case of a Two-Dimensional Degeneration Domain
Mem. Differential Equations Math. Phys. 45 (2008), pp. 117-124.
download pdf file.
Malkhaz Ashordia
On the Existence of Bounded Solutions for Systems of Nonlinear Generalized Ordinary Differential Equations
Mem. Differential Equations Math. Phys. 45 (2008), pp. 125-130.
download pdf file.
Malkhaz Ashordia
On The Existence of Bounded Solutions for Systems of Nonlinear Impulsive Equations
Mem. Differential Equations Math. Phys. 45 (2008), pp. 131-134.
download pdf file.
Ivan Kiguradze
Some Boundary Value Problems on Infinite Intervals for Functional Differential Systems
Mem. Differential Equations Math. Phys. 45 (2008), pp. 135-140.
download pdf file.
© Copyright 2008, Razmadze Mathematical Institute. | {"url":"http://www.emis.de/journals/MDEMP/vol45/contents.htm","timestamp":"2014-04-19T14:40:34Z","content_type":null,"content_length":"3483","record_id":"<urn:uuid:1d99f6f7-2de6-4b6d-aba8-4333d547111f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00647-ip-10-147-4-33.ec2.internal.warc.gz"} |
In the classical construction of conic sections, where does the axis of the cone intersect the plane?
Everybody knows that if I take the intersection of a right circular cone with a plane, I get a conic section. My question is, where does the symmetry axis of the cone intersect the plane? Does this
point relative to the conic have a name, or a simple description? For example, for an ellipse I first guessed that it was one focus of the ellipse, but that is false.
geometry mg.metric-geometry conic-sections
I guess you can see "focus" is wrong by considering the hyperbola made when the plane is parallel to the axis of the cone... – Gerald Edgar Mar 21 '12 at 13:03
You don't need a right circular cone to get a conic section, right? I think a skew elliptic cone will work just as well. Then it makes sense to ask if this point you describe is independent of the
expression of the curve as a section of a cone. – Jeff Strom Mar 21 '12 at 14:16
@Jeff: The answer is clearly no. If you get an ellipse from a right circular cone, the point is off-center, but if you get it from an elliptic cone that's dead on, the point is the center. – Will Sawin Mar 21 '12 at 16:42
All circular cones whose section is a given ellipse also produce different points. The narrower the cone, the closer the point is to the center (one gets the center from sections of a cylinder). In general, I'd say the point is between the foci, at certain distances from them, whose ratio is equal to the ratio of the radii of the Dandelin spheres. – Pietro Majer Mar 21 '12 at
1 Answer
Following Keenan's suggestion I delete my comment and make it into an answer:
Projectively speaking, there is no distinguished point inside a conic because the group of projective transformations that preserves the conic acts transitively on its interior: if someone gives you a circle and an unmarked ruler, you will never be able to construct the center.
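The observation in the question — that the axis point is neither a focus nor, in general, the center — can be checked symbolically. A sketch, assuming a specific right circular cone x^2 + y^2 = z^2 and cutting plane z = x/2 + 1 (these concrete choices are mine, not from the thread):

```python
import sympy as sp

m, c = sp.Rational(1, 2), 1        # plane z = m*x + c, cone x^2 + y^2 = z^2

# Substituting z = m*x + c into the cone gives the projected ellipse
#   (1 - m^2) * (x - x0)^2 + y^2 = c^2 / (1 - m^2)
x0 = m * c / (1 - m**2)            # projected center offset along x

# In the cutting plane, x-lengths stretch by sqrt(1 + m^2):
a = c * sp.sqrt(1 + m**2) / (1 - m**2)   # semi-major axis
b = c / sp.sqrt(1 - m**2)                # semi-minor axis
f = sp.sqrt(a**2 - b**2)                 # center-to-focus distance

axis_offset = x0 * sp.sqrt(1 + m**2)     # the cone's axis meets the plane at x = 0

print(axis_offset, f)                    # sqrt(5)/3 vs 2*sqrt(2)/3
assert axis_offset != 0                  # not the center
assert sp.simplify(axis_offset - f) != 0 # not a focus either
```

For this particular cone the axis point lands between the center and a focus, consistent with the comment about Dandelin-sphere radii above.
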
| {"url":"http://mathoverflow.net/questions/91817/in-the-classical-construction-of-conic-sections-where-does-the-axis-of-the-cone","timestamp":"2014-04-19T02:51:24Z","content_type":null,"content_length":"57775","record_id":"<urn:uuid:f29f6612-f518-4990-993e-5ed34edaaa03>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
watt vs kilowatt.hour
I just can't understand that remark at all
I may have used a bad formulation for my remark.
I'm pretty sure the remark was just a comment on the fact that people lack intuition for visualizing energy quantities in real-life situations.
Yes. What I meant is that, say, lifting 1 kg up 1 meter and feeling what it 'means' is easy, even for someone with zero science inclination. This is my real, accurate data: 1 kg and 1 meter.
For heat, it's already a little more difficult to grasp. I think, I might be wrong, that for most people, it is difficult to have a clear image of a 1 degree rise in temperature, more so for the
electrical energy needed to produce it.
The idea is to get super easy understandable examples that will trigger interest where there was none.
my problem now, I realize, is that even with kgs and meters, when you deal with such large numbers, it becomes intangible again, it's impossible to represent those amounts. I will have to change
scales first I guess, start with the 100W light bulb.
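The comparison hinted at here is easy to make concrete. A rough calculation (g ≈ 9.81 m/s² assumed):

```python
g = 9.81                   # m/s^2
lift_energy = 1 * g * 1    # lifting 1 kg up 1 meter: about 9.8 J

bulb_energy = 100 * 3600   # a 100 W bulb running for one hour: 360,000 J = 0.1 kWh

lifts = bulb_energy / lift_energy
print(round(lifts))        # ~36,697 one-metre lifts of 1 kg
```

So even the modest-sounding 0.1 kWh is already a huge number of "tangible" lifts, which is exactly the scale problem described above.
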
Thanks again. | {"url":"http://www.physicsforums.com/showthread.php?p=4175933","timestamp":"2014-04-19T22:52:53Z","content_type":null,"content_length":"65699","record_id":"<urn:uuid:77becb8e-ea99-4065-b4f4-db636cf4b79c>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Programming - Vertex Enumeration
February 1st 2011, 04:05 PM #1
Jan 2011
Hi, I have the following problem (I have scanned and attached a file of what I have got so far...but I really don't know what I am doing)... It says:
Write down the following LP problem in standard form by introducing slack variables $x_4$ and $x_5$. And also, use vertex enumeration to identify all vertices of the feasible region and, hence,
determine an optimal solution to the problem.
maximise $f = -x_1 + 4x_2 + 5x_3$
subject to
$-x_1 + x_2 + 3x_3 <= 3$
$2x_2 + x_3 <= 8$
$x_1, x_2, x_3 >= 0$
So this is easy, I got:
maximise $f= -x_1 + 4x_2 + 5x_3$
subject to
$-x_1 + x_2 + 3x_3 + x_4 = 3$
$2x_2 + x_3 + x_5 = 8$
$x_1, x_2, x_3, x_4, x_5 >= 0$
So what I don't know is when a singular system is inconsistent (I've crossed through 2 rows, but I am not sure...), how you pick the values for the coordinates (there could be much more than 10
combinations and I don't know what I should look for)... and also, do I need to draw the LP problem to see which ones are feasible, or is there any other way I can work it out mathematically?
I would really appreciate it if you could give me a little explanation so that I can get this done before tomorrow
Last edited by mbmstudent; February 1st 2011 at 04:07 PM. Reason: forgot attaching file
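For what it's worth, the vertex enumeration asked about can be brute-forced: choose every pair of basic variables, skip singular (inconsistent) systems, discard basic solutions with a negative component, and evaluate f at the remaining vertices. A numpy-based sketch (not from the original thread):

```python
import itertools
import numpy as np

# maximize f = -x1 + 4*x2 + 5*x3 subject to
#   -x1 +  x2 + 3*x3 + x4      = 3
#         2*x2 +  x3      + x5 = 8,   all variables >= 0
A = np.array([[-1.0, 1.0, 3.0, 1.0, 0.0],
              [ 0.0, 2.0, 1.0, 0.0, 1.0]])
b = np.array([3.0, 8.0])
cost = np.array([-1.0, 4.0, 5.0, 0.0, 0.0])

best = None
for basis in itertools.combinations(range(5), 2):  # C(5,2) = 10 candidate bases
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                       # singular system: no basic solution here
    xB = np.linalg.solve(B, b)
    if (xB < -1e-9).any():
        continue                       # basic but infeasible (negative component)
    x = np.zeros(5)
    x[list(basis)] = xB
    f = cost @ x
    print(basis, x, f)                 # each feasible basic solution is a vertex
    if best is None or f > best[1]:
        best = (x, f)

print("optimal vertex:", best[0], "f =", best[1])
```

Running it lists five feasible vertices and reports the optimum f = 19 at (x1, x2, x3) = (21, 0, 8); the singular and infeasible bases are exactly the combinations one crosses out by hand.
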
| {"url":"http://mathhelpforum.com/advanced-algebra/169956-linear-programming-vertex-enumeration.html","timestamp":"2014-04-23T18:02:42Z","content_type":null,"content_length":"33187","record_id":"<urn:uuid:2fbfd20f-218e-4754-8c7e-d731f92ba4de>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
Braingle: 'The Dealership, Part 2' Brain Teaser
The Dealership, Part 2
Logic Grid puzzles come with a handy interactive grid that will help you solve the puzzle based on the given clues.
Puzzle ID: #29824
Category: Logic-Grid
Submitted By: thompson1
You will need the answers from "The Dealership: Part 1" to complete this.
"Car"-l (Carl) Smith is back, and he has another crisis on his hands. The 5 cars he bought a couple weeks ago were sold last week. He was walking across the street from his favorite fast food
restaurant with the papers from the sales, when a car pulled out right in front of him. He dropped the papers and couldn't get them back because the wind blew them all away. Help Carl figure out
each customer's first and last names, what car they bought, how much money they paid, and on what day they bought the cars.
First Names - Fred, Jill, Calvin, Hannah, Katie
Last Names - Jenkins, Chapman, Kendrick, Felton, Hampton
Car - Mini, BMW, Mercedes, Jaguar, Ford
$$ - $16,000, $24,000, $32,000, $38,000, $40,000
Day of the week - Monday, Tuesday, Thursday, Friday, Saturday
1. The 5 people are: Fred, Mr. Chapman, the one who bought a car on Thursday, the one who bought a car for $38,000, and the one who bought a Ford.
2. The 5 cars are: The BMW, the one bought by Hannah, the one bought by Mrs. Hampton, the one bought on Friday, and the one bought for $32,000.
3. Carl made a profit on each car.
4. The person who bought a car 2 days after Felton did bought her car for $8,000 less than the person who bought the car that was sold to Carl on Thursday.
5. No car was bought and sold on the same day of the week.
6. The Mini, which was not bought on Monday, was bought 1 day after the Ford.
7. Jill paid $8,000 less for her car than the person who bought a car on Saturday.
8. Kendrick bought his car before Felton did.
locomom Great teaser...I also had enjoyed part 1. Thanks for the fun!!
Apr 17, 2006
roscoep Good Job! Had to think a couple of times. Good use of #1's answers.
Apr 17, 2006
elemandia This took me forever... I completely missed the fact that wednesday wasnt listed so the two days later clue kept messing me up.
Apr 18, 2006
Winner4600 THIS IS GREAT!!! The most challenging grid I have seen in a while...It's definitely going to my favs!!!!!
Apr 20, 2006
Is there going to be a part 3?
thompson1 No, there will not be a Part 3, but I am making a series of new ones (that don't require past answers to do.)
Apr 20, 2006 I am glad you guys liked it!
bashbach Good one, and hard. I sat forever trying to do it!
Apr 28, 2006
ffoppia had to be careful about what was sold and what was bought, but the nice thing was no redundancy in clues.
May 11, 2006
Mellie627 This was definitely a challenge!!!!
Jun 17, 2006
Thel That was a great teaser, tough and fun!
Dec 02, 2006
Yankeejimg A great teaser! You had me going between logic and the reality of what you could actually sell the mini for. Logic finally won out.
Dec 30, 2006
dreamlvr1432 I couldn't wait till tomorrow to try this one after doing #1. Another good one. This one seemed a bit easier than part 1 to me. But still challenging!
Jan 19, 2008
mom_rox Having just completed Part 1 must have helped, because I completed this one in my first attempt. Nice wording of the clues to make one think!
Mar 19, 2008
tomsdiy Logic failed me. Only Carl the Crook could mark up and sell the Jag for only 14% more than he paid and yet the Mini he could mark up and sell for 220% more. I guess you can't use too
Apr 20, 2010 much logic in a logic teaser. Maybe that's why it is called a teaser?? Or, is this a trick-logic-teaser??
Obilio For the commenter above me, there's a difference between logical and sensible
Jun 18, 2013 | {"url":"http://www.braingle.com/brainteasers/teaser.php?id=29824&comm=1","timestamp":"2014-04-17T19:05:29Z","content_type":null,"content_length":"38820","record_id":"<urn:uuid:0d0f7134-2cdd-4f9a-bd2a-2a24b1ba8512>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00368-ip-10-147-4-33.ec2.internal.warc.gz"} |
Best of the Web Directory
Automatic Calculus Solutions
Provides mathematical solutions, covering derivatives, integrals, graphs, matrices, determinants, and systems of linear equations.
Dan's Math: Calculus
Lessons covering limits, differential calculus, integral calculus and vector calculus.
The Elementary Calculus Line
Information about specific math functions, domains and ranges, and integrals. With derivatives and graphing functions.
Fractional Calculus
Generalizes the derivative of a function to non-integer order. Includes downloads, links, and contact information.
Harvey Mudd College Tutorials
Pre-calculus includes algebra review, binomial theorem, and complex numbers. Includes differential equation and single variable calculus.
Langara College: Calculus Resources
Topics in calculus include precalculus review, limits and continuity, derivatives, integration, and infinite series. With other internet resources for calculus and analysis.
Qrhetoric Calculus
Contains the site's mission and goals, tutorials, and donations, and different topics in math. With references.
S.O.S. Math - Calculus
Includes integration studies, clinical sample analysis, and general expression analysis. With contract manufacturing and testing services.
Technology Based Problems
Offers a complex, technology-based problems in calculus with applications in science and engineering. Includes information on how to search categories.
That's Calculus
Includes ordering details, curriculum projects, and contact information.
World Web Math: Calculus Summary
Informs about the two main parts of this area of mathematics, differential and integral calculus. With derivatives and integrals information. | {"url":"http://botw.org/top/Science/Math/Calculus/","timestamp":"2014-04-18T05:33:10Z","content_type":null,"content_length":"23693","record_id":"<urn:uuid:bdd60036-d6f1-4666-aaa5-ff5824bc001c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00243-ip-10-147-4-33.ec2.internal.warc.gz"} |
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole.
Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material,
please consider the following text as a useful but insufficient proxy for the authoritative book pages.
Do not use for reproduction, copying, pasting, or reading; exclusively for search engines.
OCR for page 10
10

2.6 EFFECT OF SPEED ON DISTANCE TO CLIMB

The above criteria, both the general formula and biparameter method, were derived based on the flange climb simulation results of a single wheelset running at a speed of 5 mph. Simulation results show the climb distance slightly increases with increasing running speed due to increased longitudinal creep force and reduced lateral creep force (2), as shown in Figure 2.6. The dynamic behavior of the wheelset becomes very complicated at higher running speed (above 80 mph for 5 mrad AOA and above 50 mph for 10 mrad AOA). However, the distance limit derived from the speed of 5 mph should be conservative for higher operating speeds.

2.7 APPLICATION OF FLANGE CLIMB CRITERIA

2.7.1 In Simulations

The application of flange climb criteria in simulations can be found in Chapter 3 of Appendix B.

2.7.2 In Track Tests

In tests, when AOA is unknown or can't be measured, the AOAe described in Section 2.3 has to be estimated using Equation 2.8. The examples in the following section demonstrate the application of flange climb criteria in track tests.

2.8 EXAMPLES OF APPLICATION OF FLANGE CLIMB CRITERIA

As an example of their application, the flange climb criteria were applied to a passenger car with an H-frame truck undergoing dynamic performance tests at the FRA's Transportation Technology Center, Pueblo, Colorado, on July 28, 1997. The car was running at 20 mph through a 5 degree curve with 2 in. vertical dips on the outside rail of the curve. The L/V ratios were calculated from vertical and lateral forces measured from the instrumented wheelsets on the car.

Table 2.4 lists the 4 runs with L/V ratios higher than 1.13, exceeding the AAR Chapter XI flange climb safety criterion. The rails during the tests were dry, with an estimated friction coefficient of 0.5. The wheel flange angle was 75 degrees, resulting in a corresponding Nadal value of 1.13. The climb distance and average L/V in Table 2.4 were calculated for each run from the point where the L/V ratio exceeded 1.13.

2.8.1 Application of General Flange Climb Criterion

The instrumented wheelset has the AAR-1B wheel profile with 75.13 degree maximum flange angle and 0.62 in. flange length; by substituting these two parameters into the general flange climb criterion, the flange climb criterion for the AAR-1B wheel profile is as follows:

D < 26.33 / (AOAe + 1.2)

The axle spacing distance for this rail car is 102 in. The constant c was adopted as 2.04 since the vehicle and truck design is similar to the heavy rail vehicle in Table 2.2. According to Equation 2.5, the AOAe is about 7.6 mrad for this passenger H-frame truck on a 5-degree curve. By substituting the AOAe into the above criteria, the safe climb distance without derailment is 3 ft. According to Table 2.3, the conservative AOAe for a 5-degree curve should be 10 mrad;

[Figure 2.6. Effect of travel speed on distance to wheel climb. (L/V ratio = 1.99, AAR-1B wheel (75-degree flange angle) and AREMA 136 RE rail.) Plot of flange climb distance (feet) versus travel speed (mph) for AOA = 5 mrad and AOA = 10 mrad.]

TABLE 2.4 Passenger car test results: Climb distance and average L/V measured from the point where the L/V ratio exceeded 1.13, for friction coefficient of 0.5

Runs    Speed       Maximum L/V Ratio   Average L/V Ratio   Climb Distance
rn023   20.39 mph   1.79                1.39                5.8 ft
rn025   19.83 mph   2.00                1.45                6.3 ft
rn045   19.27 mph   1.32                1.23                0.7 ft
rn047   21.45 mph   1.85                1.52                5 ft
OCR for page 11
11

the conservative safe climb distance without derailment is 2.4 ft; however, the climb distance according to the 50 ms criterion is 1.4 ft. (The 50-ms criterion is discussed in Appendix B, Section B1.3.)

The wheel, which climbed 0.7 ft distance in run rn045 with a 1.23 average L/V ratio (maximum L/V ratio 1.32), was running safely without threat of derailment according to the criterion. The other three runs were unsafe because their climb distances exceeded the criterion.

2.8.2 Application of Biparameters Criterion

Figure 2.7 shows the application of the biparameters criterion on the same passenger car test. The run (rn045) with the maximum 1.32 L/V ratio is safe, since its climb distance of 0.7 ft is shorter than the 4.3-ft criterion value calculated by the biparameter formula (Equation 2.7). The climb distance is even below the 20 mrad AOAe criterion line, which seldom happened for an H-frame truck running on the 5-degree curve. The other three runs were running unsafely because their climb distances exceeded the 10 mrad conservative AOAe criterion line.

The same conclusion is drawn by applying the general flange climb criterion and the biparameter flange climb criterion to the passenger car test. The climb distances of these two criteria also show that the general flange climb criterion is more conservative than the biparameter criterion. The reason for this is that the average L/V ratio in the test, which is 1.23, is lower than the 1.99 ratio used in the simulation to derive the general flange climb criterion. The difference between these two criteria shows the biparameter flange climb criterion is able to reflect the variation of the L/V ratio. However, the general flange climb criterion is conservative for most cases since the sustained average 1.99 L/V ratio during flange climb is rare in practice.

[Figure 2.7. Application of the biparameter criterion for friction coefficient of 0.5. Plot of climb distance (feet) versus average L/V ratio during climb: measured points and formula curves for 7.6, 10, and 20 mrad AOAe.] | {"url":"http://www.nap.edu/openbook.php?record_id=13841&page=10","timestamp":"2014-04-18T06:02:39Z","content_type":null,"content_length":"46508","record_id":"<urn:uuid:7f44f27a-90be-402c-9644-5bcb159c392a>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00222-ip-10-147-4-33.ec2.internal.warc.gz"} |
Niwot Precalculus Tutor
...I am familiar with all realms of physics from mechanics to electricity and magnetism to quantum mechanics. In addition to tutoring high school and college physics students, I have also aided
many students in preparing for their AP Physics B and C Exams. A very important part of a successful career is being able to communicate ideas and/or research results successfully to a broad
18 Subjects: including precalculus, calculus, physics, geometry
...I use Microsoft Outlook 8 hours per day at my engineering job. I have multiple projects sorted by subfolders and personal data folders. I archive my data regularly to keep my inbox size
18 Subjects: including precalculus, chemistry, calculus, geometry
...I have been working with students of all levels from Algebra 1 to Calculus 3 covering all the various areas covered by the SAT Math test. I will work with you to develop skills that help you
earn the higher scores through fundamental math skill development and test taking tips that are helpful with the SAT. I have a PhD in Physics with a minor in Math.
14 Subjects: including precalculus, calculus, physics, GRE
...Using the STL as building blocks to producing complex software is one of the many skills, but also understanding how to use the language (public versus private variable/functions), multiple
inheritance, factory or decorator patterns, and a variety of other techniques help to produce software whic...
47 Subjects: including precalculus, chemistry, calculus, physics
...I also attended UC Berkeley as an Engineering major.I took this class at American River College in Sacramento, CA. I received an A, one of the highest grades in the class. I've tutored many
students in this subject over the last 12 years, at both the junior college and university level.
11 Subjects: including precalculus, calculus, statistics, geometry | {"url":"http://www.purplemath.com/niwot_precalculus_tutors.php","timestamp":"2014-04-20T20:57:04Z","content_type":null,"content_length":"23929","record_id":"<urn:uuid:dcb6856b-a263-4e7f-a77f-9b58c855c938>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
yarn numbers
Systems for sizing yarn fall into two basic traditional types, and a third modern one.
A: The yarn number is based on the length of yarn needed to make up a specified weight. The larger the number, the finer the yarn. Cotton, wool and linen are numbered with such systems.
B: The yarn number is based on the mass of a specified length of yarn. The larger the number, the heavier the yarn. Silk, synthetic fibers and jute are numbered with such systems.
C: The actual maximum diameter of the yarn is specified.
Many yarns consist of more than one ply. To describe the weight of multi-ply yarn both the number of plies and a yarn number must be given. A fraction is used: two numbers separated by a slash.
The first number is the number of plies. The second number is the cut or run number of the yarn as a whole, not of the plies separately. So, for example, 2/10s cut yarn would have two plies, and 3000
yards would weigh a pound. In other words, the plies themselves would be 20-cut.
A different format is used for silk.
Yarn number systems by type of fiber
• jute, heavy flaxes, coarse hemp
• mohair and camel's hair
Cotton
In English-speaking countries, the yarn number is the number of 840-yard hanks in a pound. The convention for indicating plies resembles that for wool. Two-ply 20s would be written 2⁄20s or 20⁄2, and
would be twice the weight, length for length, of single ply 20s yarn.
On the Continent, the yarn number is the number of 1000-meter lengths in 500 grams.
Jute, heavy flaxes, coarse hemp
In the Dundee Jute Count, the count was the weight in pounds avoirdupois of a spindle of 14,400 yards. "If 14,400 yards weigh 8, 10 or 12 lbs. the grist of the yarn is called 8, 10 or 12 lb."
Linen, fine hemp
In England and the United States, the Irish system, by which the counts or lea number is the number of 300-yard leas in a pound.
Linen has been spun as fine as 400s and even 600s, which are used in making fine lace. To achieve such fineness, Belgian hand spinners worked only in damp basements.
Mohair and camel's hair
Same systems as used for worsted wool.
Modern (for all fibers)
Over the years there have been dozens, perhaps hundreds of yarn numbering systems. Many of these are still in active use in the 21st century.
In 1900 a conference was held in Paris in an attempt to reach agreement on a single international standard (the Congrès International pour l'unification du numérotage des fils). They chose method A:
the international metric count is the length in meters of 1 gram of the yarn. But they made an exception for raw and thrown silks, for which the yarn number is the weight in 0.05 gram chunks of a
450-meter length. So, for example, if a 450-meter length of silk weighed 1.5 grams, its yarn number would be (1.5/0.05) 30.
Many years later another international unit was developed, one based on method B. This unit, the tex, is the weight in grams of 1 kilometer of the yarn.
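Because all of these systems reduce to a mass-per-length or length-per-mass ratio, converting between them is simple arithmetic. A sketch of a few conversions (the constants come from the definitions in this article plus 1 lb = 453.59237 g and 1 yd = 0.9144 m):

```python
def denier_from_tex(tex):
    # denier = grams per 9000 m; tex = grams per 1000 m (both method B)
    return tex * 9.0

def metric_count_from_tex(tex):
    # international metric count (method A) = meters per gram
    return 1000.0 / tex

def tex_from_cotton_count(ne):
    # cotton count = number of 840-yard hanks per pound
    grams_per_meter = 453.59237 / (ne * 840 * 0.9144)
    return grams_per_meter * 1000.0

print(denier_from_tex(25))                   # 225 denier
print(round(metric_count_from_tex(25), 1))   # 40.0 metric count
print(round(tex_from_cotton_count(20), 1))   # ~29.5 tex
```

Note that the method A numbers grow as the yarn gets finer while the method B numbers grow as it gets heavier, exactly as stated in the introduction.
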
In England, the count was the number of 300-yard hanks in an avoirdupois pound.
On the continent, the count was the number of 1000-meter hanks in 1 kilogram.
Silk and synthetic fibers
Synthetic yarns other than glass, and raw and thrown silk yarns are sized by the metric and denier systems.
The metric yarn number, or legal denier count, is the mass in grams of a 450-meter length of the yarn divided by 0.05 or, to say the same thing another way, the mass in grams of a 9000-meter length. It was adopted in 1900 by the Congress in Paris mentioned above.
The denier was a French coin, 1⁄12 of a sou, whose mass was used as a weight in calculating yarn numbers. In Great Britain and the United States, denier was originally applied only to raw silk.
Being a natural product, silk varies in thickness, so the size is usually given as a range, for example, “13⁄16 denier.”
Some old denier systems, with the international for comparison:
“international” denier mass in grams of a 500-meter length ÷ 0.05
Turin denier mass in grams of a 474-meter length ÷ 0.05336
Milan denier mass in grams of a 476-meter length ÷ 0.0511
Old Lyonese denier mass in grams of a 476-meter length ÷ 0.5311
New Lyonese denier mass in grams of a 500-meter length ÷ 0.05311
The Manchester dram system, or English dram system, was formerly used for thrown silk. The yarn number is the weight of a 1000-yard skein in drams. Nowadays the denier is used for everything.
In the Yorkshire ounce system, also formerly used for thrown silk, the size is the number of yards in 1 ounce avoirdupois.
Spun silk yarn, which is made from leftovers after filament silk has been produced, is numbered by a different system in the United States and the United Kingdom, one like that used for cotton. The
yarn number is the number of 840-yard lengths (a hank) in a pound. The smaller the number, the heavier the yarn.
Unlike cotton, the count in a fraction representing multi-ply yarn describes the finished yarn, not the plies.
American cut system
The yarn number is the number of 300-yard hanks needed to make up a pound.^1 Thus 600 yards of 2-cut yarn weigh a pound. Symbol, N[ac]. In practice coarse yarns are typically five-cut to seven-cut,
medium 18-cut to 21-cut, and fine yarns 30-cut to 35-cut.
Used around Philadelphia.
1. ASTM Standard D-123-03. Standard Terminology Relating to Textiles.
Edition approved 10 February 2003.
American run system
The yarn number is the length in yards of one pound of the yarn, divided by 1600. Symbol N[ar]. This is the same as the weight in ounces of 100 yards. So one pound of number 1 run yarn is 1600 yards
long, one pound of number 2 run yarn is 3200 yards long, and so on. Numbers 1 through 3 are coarse, 3½ to 5 are medium, and numbers 6 to 8 runs are fine.
Lederer^1 says a run was 1644 yards and quotes a 1734 Connecticut law that speaks of yarn that is “eight runs to the pound.”
1. Richard M. Lederer, Jr.
Colonial American English. A Glossary.
Essex, Connecticut: A Verbatim Book, 1985.
Page 200.
American grain system
Bradbury says the yarn number is the weight in grains of 50 yards. US Conditioning and Testing says of 20 yards.
The yarn number is the number of 300-yard cuts in 24 ounces avoirdupois. In practice, it is more easily measured as the number of 200-yard units in 1 pound avoirdupois, which is the same thing.
The yarn number is the number of 300-yard cuts in 26 ounces avoirdupois.
West of England
The yarn number is the number of 320-yard hanks in 1 pound. This is equivalent to the number of 20-yard lengths of yarn in 1 ounce.
Originally the yarn number was the number of skeins of 1,536 yards each, in one
warten. (A warten is 6 pounds.) Eliminating the warten, this is the same as the number of 256-yard skeins in a pound. Since there are 256 drams in a pound, the yarn number is also the number of yards
which weigh 1 dram.
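The chain of equivalences just stated checks out numerically; a small sketch:

```python
# N skeins of 256 yards per pound means 256*N yards per pound;
# with 256 drams to the pound, that is exactly N yards per dram.
def yards_per_dram(skeins_per_pound, skein_yards=256, drams_per_pound=256):
    yards_per_pound = skeins_per_pound * skein_yards
    return yards_per_pound / drams_per_pound

print(yards_per_dram(10))  # 10.0 -- the count equals the yards weighing 1 dram
```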
Halifax Rural District
The yarn number was the number of drams that 80 yards weighed.
German woolen count
The yarn number is the number of hanks, each of 2200 Berlin ells, in 500 grams.
Worsted
Worsted is wool that has been carded and combed until the fibers are parallel. Generally, only very high quality wool is worth this level of effort.
In English-speaking countries the count is the number of 560-yard hanks in a pound. A pound of 2-count yarn is thus 1120 yards long.
On the Continent, the count is the number of 1000-meter hanks in 1 kilogram.
The values of the Super scale were chosen to approximate the count numbers from the older "number of 560-yard hanks in a pound" system.
The difference between the Super designation and the plain S, e.g., "180's" and "Super 180's," is that the plain S number may be used on any fabric that is at least 45% wool. The Super designation is
reserved for fabrics made with new wool. There are two exceptions: 1) mohair, cashmere, alpaca and silk may be blended in, and 2) to achieve decorative effects, as much as 5% of non-wool yarn may be added.
In the United States, these specifications are made legal requirements in the Wool Products Labeling Act of 1939 (15 U.S.C. § 68)
Designation (as, e.g., "80's" or "Super 80's"): the average diameter of the wool fiber must not exceed the value shown (in microns).
80's 19.75
90's 19.25
100's 18.75
110's 18.25
120's 17.75
130's 17.25
140's 16.75
150's 16.25
160's 15.75
170's 15.25
180's 14.75
190's 14.25
200's 13.75
210's 13.25
220's 12.75
230's 12.25
240's 11.75
250's 11.25
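The listed caps follow a simple linear rule: each step of 10 in the S number lowers the permitted mean fiber diameter by 0.5 micron. A sketch, assuming that pattern holds throughout:

```python
def max_micron(s_number):
    # Linear rule anchored at 80's = 19.75 microns, minus 0.5 per step of 10.
    return 19.75 - (s_number - 80) / 10 * 0.5

print(max_micron(100))  # 18.75
print(max_micron(250))  # 11.25
```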
International Wool Textile Organization.
Arbitration Agreement and Other International Agreements (Blue Book).
Appendix 2: Fabric Labelling Code of Practice; Quality Definitions Relating to "Super S" and "S" Descriptions.
Brussels: International Wool Textile Organization, annual.
There are five different French systems for the numbering of worsted yarns: The metric system, measuring 496 yards per pound; the new Roubaix, 354 yards per pound; the old Roubaix, 708 yards per
pound; the Reims, 347 yards per pound and the Fourmies, 352 yards per pound.
Louis Harmuth.
Dictionary of Textiles.
New York: Fairchild Publishing Co., 1915.
The first ("metric system") is simply the number of 1000 m lengths in 1 kg, converted to English units. What the original definitions of the other four systems are we know not.
The following table gives some very approximate equivalents, by weight, for the various systems.
denier   worsted   cotton   woolen   linen   tex    metric
                            (run)    (lea)
  10        *
  50       160       106     56       298     5.6    180
  75       106        71     37       198     8.3    120
 100        80        53     28       149    11.1     90
 150        53        35     19        99    16.6     60
 200        40        27     14        74    22.2     45
 300        27        18     9.3       50    33.4     30
 400        20        13     7.0       37    44.4     22.5
 500        16        11     5.6       30    55.5     18
 700        11.4      7.6    4.0       21    77.7     12.9
1000         8.0      5.3    2.8       15   111        9
1500         5.3      3.5    1.9       10   166        6
2000         4.0      2.7    1.4        7   222        4.5
Multiplying by 0
Date: 03/09/2001 at 04:23:12
From: Bruce Griffis
Subject: Multiplying by 0
This may be a stupid question, but why, when multiplying any number by
0, do you get 0?
If I have a dollar in my hand and multiply it by 0, won't I still have
a dollar? If not, then who took my dollar?
Date: 03/09/2001 at 08:41:28
From: Doctor Rick
Subject: Re: Multiplying by 0
Hi, Bruce.
They say the only stupid question is the one that you don't ask. Even
"stupid questions" can lead to interesting insights if they make us
look at the obvious in a new way. I don't know if I'll do that, but it
could happen!
What does your illustration mean? If I have a dollar and multiply it
by 2, I have $2. I could just as well ask where the second dollar came
from, as you can ask where your dollar went.
Let's fix your picture to make it clear where money is coming from.
Let's say you're a barber and every customer tips you a dollar. At the
end of the day, how much have you made in tips? Multiply a dollar by
the number of customers you had. If you had 10 customers, you made
$10. If you had 20 customers, you made $20.
So, what about the day that your shop was closed because of a
blizzard? By your reasoning, you'd still make a buck, even though you
had zero customers.
No, it's clear that you make $0: $1 times 0 customers is $0.
- Doctor Rick, The Math Forum
Date: 03/09/2001 at 08:48:18
From: Doctor Peterson
Subject: Re: Multiplying by 0
Hi, Bruce.
Picture multiplication by a whole number as making that many piles,
each containing the original amount. If I start with a dollar and
multiply by 5, I make 5 piles with a dollar in each. If I multiply by
zero instead, I have zero piles; and zero piles (no matter how much I
claim "each of them" has) will contain zero dollars.
You can also go in the other direction, multiplying zero by anything;
this should give the same result. So start with 0 dollars, and make 57
piles, each containing zero dollars. Again, the total is zero.
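The piles picture is just multiplication as repeated addition; a short sketch (not part of the original exchange) makes it concrete:

```python
def multiply(piles, per_pile):
    """Make `piles` piles, each holding `per_pile`; the product is the total."""
    total = 0
    for _ in range(piles):
        total += per_pile
    return total

print(multiply(5, 1))   # 5 piles of $1 -> 5
print(multiply(0, 1))   # zero piles of $1 -> 0
print(multiply(57, 0))  # 57 piles of $0 -> 0
```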
I'm wondering if you are confusing "multiplying by 0" with "adding
zero times as much." If you have a dollar in your hand and multiply it
by ONE, you will still have the same amount; multiplying by one
doesn't change anything. Multiplying by 2 takes what you had, makes
another copy of it, and gives you back twice what you started with.
But multiplying by zero takes away what you had and gives you none of
it back.
- Doctor Peterson, The Math Forum
The Chicago Guide to Writing about Multivariate Analysis
Study Guide
This Study Guide for The Chicago Guide to Writing about Multivariate Analysis, 2nd Edition, by Jane E. Miller provides problem sets, suggested course extensions, and solutions for each substantive
chapter of the book, suitable for use in the college classroom. Individual parts of the guide may be downloaded below. Or the entirety of the 190-page guide is available in one PDF file.
Title Page, Table of Contents, and Preface
Chapter 1. Introduction
Suggested Course Extensions
Chapter 2. Seven Basic Principles
Problem Set | Suggested Course Extensions | Solutions
Chapter 3. Causality, Statistical Significance, and Substantive Significance
Problem Set | Suggested Course Extensions | Solutions
Chapter 4. Five More Technical Principles
Problem Set | Suggested Course Extensions | Solutions
Chapter 5. Creating Effective Tables
Problem Set | Suggested Course Extensions | Solutions
Chapter 6. Creating Effective Charts
Problem Set | Suggested Course Extensions | Solutions
Chapter 7. Choosing Effective Examples and Analogies
Problem Set | Suggested Course Extensions | Solutions
Chapter 8. Basic Types of Quantitative Comparisons
Problem Set | Suggested Course Extensions | Solutions
Chapter 9. Quantitative Comparisons for Multivariate Models
Problem Set | Suggested Course Extensions | Solutions
Chapter 10. The “Goldilocks Problem” in Multivariate Regression
Problem Set | Suggested Course Extensions | Solutions
Chapter 11. Choosing How to Present Statistical Test Results
Problem Set | Suggested Course Extensions | Solutions
Chapter 12. Writing Introductions, Conclusions, and Abstracts
Problem Set | Suggested Course Extensions | Solutions
Chapter 13. Writing about Data and Methods
Problem Set | Suggested Course Extensions | Solutions
Chapter 14. Writing about Distributions and Associations
Problem Set | Suggested Course Extensions | Solutions
Chapter 15. Writing about Multivariate Models
Problem Set | Suggested Course Extensions | Solutions
Chapter 16. Writing about Interactions
Problem Set | Suggested Course Extensions | Solutions
Chapter 17. Writing about Event History Analysis
Problem Set | Suggested Course Extensions | Solutions
Chapter 18. Writing about Hierarchical Linear Models
Problem Set | Suggested Course Extensions | Solutions
Chapter 19. Speaking about Multivariate Analyses
Problem Set | Suggested Course Extensions | Solutions
Chapter 20. Writing for Applied Audiences
Problem Set | Suggested Course Extensions | Solutions
VSEPR shape of XeF4
Step 1: Use the Lewis structure guidelines to draw the Lewis structure of XeF4.
Step 2: Apply the VSEPR notation AXE, where:
A = number of central atoms
X = number of surrounding atoms
E = number of lone pairs on the central atom
For the above molecule the VSEPR notation will be AX4E2.
Step 3: Use the VSEPR table to find the shape. AX4E2 has a square planar shape, so the shape of XeF4 is square planar.
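Step 3 amounts to a table lookup; a sketch in Python, with a few standard VSEPR geometries filled in (abridged, and not taken from the linked table file):

```python
# Map (X, E) from the AXE notation to the molecular shape.
VSEPR_SHAPES = {
    (2, 0): "linear",
    (3, 0): "trigonal planar",
    (2, 1): "bent",
    (4, 0): "tetrahedral",
    (3, 1): "trigonal pyramidal",
    (2, 2): "bent",
    (5, 0): "trigonal bipyramidal",
    (4, 1): "seesaw",
    (3, 2): "T-shaped",
    (2, 3): "linear",
    (6, 0): "octahedral",
    (5, 1): "square pyramidal",
    (4, 2): "square planar",
}

def shape(x, e):
    """Return the molecular shape for AX{x}E{e}, if tabulated."""
    return VSEPR_SHAPES.get((x, e), "not tabulated")

print(shape(4, 2))  # XeF4 is AX4E2 -> "square planar"
```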
Download a copy of VSEPR shapes table here
Bond angle in XeF4
The bond angle of the F-Xe-F bond in this molecule is 90°. (Diagram: image courtesy of wikipedia.org.)
Annals of Mathematics
Mekriman G 31 Concerning the Summability of Double Series of a Certain 7
Michal A D and T Y Thomas Differential Invariants of Relative 31
Miller N On Related Maxima and Minima 117
The Course
A major priority in the design of this course is the engagement of students as scientists and citizens. This is accomplished through the variety of techniques described below.
Course Syllabus (Acrobat (PDF) 54kB Nov7 08)
Course Design
The course meets twice a week for 80 minutes. I prefer this format, as compared to meeting three times a week for 50 minutes, as I regularly have the students engage in interactive group activities
during the class, and the longer time block facilitates such activities.
The text for the course is Differential Equations by Blanchard, Devaney, and Hall, 3rd edition, published by Brooks/Cole. The authors are all researchers in the field of dynamical systems and they
apply a dynamical systems perspective to their presentation of differential equations. There is a strong emphasis on quantitative analysis of equations using graphical and numerical methods and a
corresponding decrease in emphasis on analytical techniques. The text includes a strong focus on mathematical modeling.
Formats and Pedagogies
A computer disk comes with the text. This disk, which can be used on both PC and Macintosh computers, contains a variety of easy-to-use simulations and demonstrations that illustrate many of the ideas
in the course. Most of the programs are menu driven, with the user selecting from a set of pre-programmed examples, so there is no learning curve required to use them. The output is displayed in a
beautiful visual form. In a few important cases, such as graphing slope fields or vector fields and drawing their associated solution curves, the user can enter her own formulas into the programs.
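The numerical side of that quantitative emphasis can be illustrated with the simplest ODE solver; the exponential-growth example below is mine, not from the course materials:

```python
def euler(f, t0, y0, h, steps):
    """Euler's method for y' = f(t, y): follow the slope-field direction repeatedly."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# y' = y with y(0) = 1: Euler approximates e at t = 1 (true value ~2.71828).
print(euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000))  # ~2.7169
```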
The class format is an integrated mixture of lecture, seminar and lab. Part of the time I lecture, there is also a lot of group work, often using the computer programs, and classroom discussion. In
earlier versions of the course, I would use the computer programs to demonstrate ideas, via a computer projection system, to the class. The class would have a separate computer laboratory component
in which students would do assignments in our computer lab. Several years ago, the math department purchased a set of ten laptop computers. Now the students, in teams of two or three, use these
laptops during class time to explore the concepts themselves and at present we do not have a separate computer lab component. There are still some more extensive computer assignments that students do
on their own time.
For the group work, I have both open-ended discovery work and guided work. For the discovery work, I have the students use the computer programs to investigate a new situation and respond to prompts
such as "what do you observe?", "do you see any patterns?", "what questions do you have?", "can you make some predictions or conjectures? ". In the guided work, the students practice a technique that
I have presented during lecture.
I regularly assign homework problems from the textbook. Students read out of the book Collapse: How Societies Choose to Fail or Succeed by Jared Diamond, and write short response papers in which they
describe the ways that they see the material in our math course applying to the social issues being discussed in the chapter. There is a more focused assignment on over-population and the Rwandan
genocide (See Appendix for Rwanda Assignment). There is a final project in which student teams learn about a topic of interest that involves differential equations, give a short oral presentation on
their project, and write a 10–15 page report on their findings. (See Appendix for description of final project and list of potential project topics.)
We have a special three-hour class meeting one evening in which we learn about the over-harvesting of resources by playing the simulation game Fishing Banks, Ltd., created by Dennis Meadows. (See
Appendix for Fishing Simulation Game.)
Class Schedule
Below is the course "play-by-play" in which I briefly describe the topic for each class and also have links to the handouts for group work and computer work that we used in class that day. Also below
is an example of a group modeling project.
Class Schedule (Acrobat (PDF) 100kB Nov7 08)
East Los Angeles, CA Algebra 2 Tutor
Find an East Los Angeles, CA Algebra 2 Tutor
...I can tailor my lessons around your child's homework or upcoming tests and stay synchronized. Your child's skills will be improved in a few sessions. I am organized, professional and friendly.
14 Subjects: including algebra 2, reading, Spanish, ESL/ESOL
...I am an approved tutor in SAT preparation. I have been working with the teacher training program at UCLA giving future teachers techniques and methods of teaching elementary mathematics. I
work well with K-6th children.
72 Subjects: including algebra 2, reading, English, geometry
...I have tutored students from several Pasadena area high schools, both public and private, and will be happy to provide recommendations. I graduated from Dartmouth College in 1984, and have
interviewed high school seniors for admission to that school. I understand the pressures that students face today, as I'm with teens most of my days.
5 Subjects: including algebra 2, algebra 1, SAT math, linear algebra
I have taught math for over 5 years! Many of my students are from grade 2 to 12, some are from college. I also have a math tutor certificate for college students from Pasadena City College. I
graduated in 2012 from UCLA.
7 Subjects: including algebra 2, geometry, algebra 1, trigonometry
...I continued working this way for about 5 years, giving workshops in geology. During this time I learned how to motivate students individually or in groups at different levels. In order
for me to be effective, I must first be honest, trustworthy, and knowledgeable in the subject.
8 Subjects: including algebra 2, Spanish, geometry, chemistry
Related East Los Angeles, CA Tutors
East Los Angeles, CA Accounting Tutors
East Los Angeles, CA ACT Tutors
East Los Angeles, CA Algebra Tutors
East Los Angeles, CA Algebra 2 Tutors
East Los Angeles, CA Calculus Tutors
East Los Angeles, CA Geometry Tutors
East Los Angeles, CA Math Tutors
East Los Angeles, CA Prealgebra Tutors
East Los Angeles, CA Precalculus Tutors
East Los Angeles, CA SAT Tutors
East Los Angeles, CA SAT Math Tutors
East Los Angeles, CA Science Tutors
East Los Angeles, CA Statistics Tutors
East Los Angeles, CA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
August F. Haw, CA algebra 2 Tutors
Boyle Heights, CA algebra 2 Tutors
City Industry, CA algebra 2 Tutors
City Of Industry algebra 2 Tutors
Commerce, CA algebra 2 Tutors
Firestone Park, CA algebra 2 Tutors
Glassell, CA algebra 2 Tutors
Hazard, CA algebra 2 Tutors
Los Nietos, CA algebra 2 Tutors
Montebello, CA algebra 2 Tutors
Monterey Park algebra 2 Tutors
Rancho Dominguez, CA algebra 2 Tutors
South, CA algebra 2 Tutors
Walnut Park, CA algebra 2 Tutors
Windsor Hills, CA algebra 2 Tutors
FOM: Cantor's theorem of little interest in constructive math
Matthew Frank mfrank at math.uchicago.edu
Tue Feb 13 15:38:51 EST 2001
Cantor's theorem is of little interest in constructive math.
At least, given my norms for constructive math, it ought to be of little
interest. Set theory (as in the study of cardinals) and constructive math
each have their own appeal, but I find the mixture unappealing.
Here's a familiar, positive, typical example: Classically, one might
prove the existence of a trascendental by noting that the reals are
uncountable and the algebraic reals are countable. We don't use the
cardinalities in a constructive proof because it is easy to construct a
transcendental number directly.
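One standard direct construction is Liouville's constant; a sketch (the transcendence proof itself is classical and not reproduced here):

```python
from fractions import Fraction
from math import factorial

def liouville(n_terms=5):
    """Partial sums of Liouville's constant, the sum of 10**(-k!) for k >= 1,
    the textbook example of a directly constructed transcendental number."""
    return sum(Fraction(1, 10 ** factorial(k)) for k in range(1, n_terms + 1))

print(float(liouville(3)))  # 0.110001
```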
More information about the FOM mailing list
Motion of electrons in orbitals and shape of orbitals
Why don't electrons move only along the surface of orbitals?
Moreover, how do electrons move within orbitals, random movement or do they follow a definite path?
In a p-orbital, does one lobe consist of only one electron?
Why is the p-orbital dumbbell shaped and not spherical?
The representation of atomic orbitals in terms of their electron densities just tells you that most of the electron density is localized within that volume - not that the electron is
bound to the surface contour that is plotted.
Strictly speaking, an orbital is a one-electron wavefunction.
To (over)simplify the discussion, an electron in a
orbital has non-zero angular momentum. When one works through the math for this case, you get the dumbbell-looking electron density.
If the electron does not go around the nucleus in a circular path, won't it come crashing to the nucleus?
Welcome to quantum mechanics. Classical reasoning breaks down here. The notion that electrons "orbit" the nucleus is incorrect.
Solutions for the Accessibility Community
At Design Science, we believe math can and should be made accessible. The unfortunate reality, however, is that virtually all math in educational content and assessments is not accessible to many
students with disabilities. We believe this is a problem that needs fixing and that it can be fixed. We are committed to working with the accessibility community to make math fully accessible in the
next few years.
The current state of math skills of the nation's 6.5 million students with disabilities has become a critical issue for America's public schools. Learn about the extent of this problem, and why math
accessibility is important to solving it.
The concept of accessibility to mathematical information for people with disabilities may be new to some. Learn about the basic concepts of math accessibility, and how it strives to make mainstream
educational content universally designed to meet the needs of all students.
Although making mathematics accessible may be more difficult than making plain text accessible, many of the issues underlying accessible math can be solved through using MathML technologies. Learn
how MathML is an important technology in math accessibility.
Design Science is involved in a number of activities in this area: the World Wide Web Consortium, the DAISY Consortium and the NIMAS Development Committee, and our research and development activities
supported by funding from the National Science Foundation.
Design Science has developed a number of products that can help make math accessible. Learn how to create universally designed accessible math content, how to allow people with disabilities to read
and write math, product features that allow better access and usability, and access to product VPATs outlining product compliance with Federal Section 508 Accessibility Standards.
There are a growing number of assistive technology applications that support accessible math. Find out about AT products that can help you access math.
Education policy needs to be developed which provides for full support of accessible math instruction and assessment. Learn about the need for math accessibility requirements in textbook adoption,
assessment administration and software selection policy, and some of the legal mandates applicable to math accessibility policy.
Whether you are a classroom teacher, a school district or state education administrator, a publisher, an assistive technology vendor, a parent, or a person with disability, there are things you can
do to help make math accessible. Learn how you can get involved in making sure that math accessibility solutions are available to everyone.
Here you will find announcements and news items that should be of interest to the math accessibility community.
We have compiled a few of the most comprehensive resources for those who would like to explore additional information. Learn about other sources of information on the web about accessible math
technologies and techniques.
OG12 Mistake?
144144 wrote:
For all integers n, the function f is defined by f(n) = a^n,
where a is a constant. What is the value of f(1)?
1. f(2)=100
2. f(3)=-1000
so a=10/-10 BUT
it can also mean that a=1/10 and n=-2
More importantly, there is a mathematical mistake in what you are doing. If
f(n) = a^n
then when we evaluate f(2), we replace n with 2 on the right side of the equation above:
f(2) = a^2
If instead we replace n with -2 to get a^(-2), as you have done, then we are finding the value of something completely different - we're finding the value of f(-2). The value of f(-2) is not relevant
in this question since we aren't given any information about it.
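A quick numeric check of the distinction (using a = 10, one of the values consistent with statement (1)):

```python
def f(n, a=10):
    # f(n) = a**n with constant a, as defined in the question
    return a ** n

print(f(2))   # 100 -> this is what statement (1) pins down
print(f(-2))  # 0.01 -> a different quantity, f(-2), about which nothing is given
```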
I've gone through the entirety of the Quant material in OG-12 looking for errors or typos. There are a few in the solutions section (some typos, but also some mathematical or logical errors), and
there are a couple of minor errors in the explanatory material at the beginning of the book. As best I can tell, the questions themselves are free of mistakes in OG-12 (though the first printing of
OG-11 had a couple of misprints in the questions section). On the whole it's a very well-edited book, at least relative to most GMAT publications.
Nov 2011: After years of development, I am now making my advanced Quant books and high-level problem sets available for sale. Contact me at ianstewartgmat at gmail.com for details.
Private GMAT Tutor based in Toronto
MathGroup Archive: November 2007 [00407]
[Date Index] [Thread Index] [Author Index]
Re: Solving simple equations
• To: mathgroup at smc.vnet.net
• Subject: [mg83225] Re: Solving simple equations
• From: Bill Rowe <readnewsciv at sbcglobal.net>
• Date: Thu, 15 Nov 2007 05:42:43 -0500 (EST)
On 11/14/07 at 4:51 AM, hredlich at gmx.net (Holger) wrote:
>I'm trying to solve these two simple equations but it doesn't work.
>meqn = { x'[t]==beta(x[t]+(Subscript[P,f]-p[t])),
>eqp = NSolve[ meqn/.{ p'[t]->0,x'[t]->0},{p[t], x[t]
>I guess I'm doing a mistake somewhere. Does anyone have idea where
>the mistake is?
You are trying to get NSolve to do something it simply isn't
intended to do. NSolve is intended to provide a numerical
solution to a set of polynomial equations. Your equations cannot
be reduced to polynomials and you've not given numeric values to
all of the coefficients.
You can solve things with Mathematica as follows:
First, let's simplify things a bit by getting rid of the unneeded
variable t, i.e. (This needs to be done if you are going to get
a numeric solution)
In[5]:= meqn /. {p'[t] -> 0, x'[t] -> 0, p[t] -> p, x[t] -> x}
Out[5]= {0 == beta*(-p + x + Subscript[P, f]),
0 == Tanh[x*Subscript[a, 2] + Subscript[a, 1]*
(-p + x + Subscript[P, f])] - x}
Now the first equation can easily be solved for p
In[6]:= Solve[First[%], p]
Out[6]= {{p -> x + Subscript[P, f]}}
putting this solution into the second equation gives
In[7]:= Last[%%] /. First[%]
Out[7]= 0 == Tanh[x*Subscript[a, 2]] - x
Now once Subscript[a, 2] is replaced with a specific numeric
value, FindRoot can be used to find a numeric solution for x.
That can then be substituted into the solution to the first
equation to get p.
To reply via email subtract one hundred and four | {"url":"http://forums.wolfram.com/mathgroup/archive/2007/Nov/msg00407.html","timestamp":"2014-04-16T22:01:35Z","content_type":null,"content_length":"26583","record_id":"<urn:uuid:9a892ff9-d6c4-468c-934e-806d146b4555>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00171-ip-10-147-4-33.ec2.internal.warc.gz"} |
ET: Legacy
27th Feb 2012, 19:38 #41
Junior Member
Join Date
Jan 2012
Re: ET: Legacy
hi: I'm not sure if this is classed as a bug or glitch, but I like to assign left Ctrl as crouch, and when I am crouching I can't change to any weapon unless I stand, and then it's
possible. It's something I got used to when playing on Linux, but on Windows it doesn't normally happen. Anyhow, I'm still using it and not having any noticeable problems with Jaymod 2.2.0. Nice work :)
Looking forward to further enhancements - thanks.
Re: ET: Legacy
I'm having some issues running the latest two betas. I'm getting the following error:
error while loading shared libraries: libXxf86dga.so.1: wrong ELF class: ELFCLASS64
Using Ubuntu 11.10 64bits, (beta 1 works fine)
I've also tried to build it, but I'm also getting some errors:
In file included from /usr/include/curl/curl.h:35:0,
from src/qcommon/dl_main_curl.c:38:
/usr/include/curl/curlrules.h:142:3: error: size of array ‘__curl_rule_01__’ is negative
/usr/include/curl/curlrules.h:152:3: error: size of array ‘__curl_rule_02__’ is negative
src/qcommon/dl_main_curl.c: In function ‘DL_BeginDownload’:
src/qcommon/dl_main_curl.c:126:5: warning: passing argument 1 of ‘FS_CreatePath’ discards ‘const’ qualifier from pointer target type [enabled by default]
src/qcommon/dl_public.h:54:10: note: expected ‘char *’ but argument is of type ‘const char *’
make[1]: *** [obj/x32/Release/etlegacy/dl_main_curl.o] Error 1
make: *** [etlegacy] Error 2
Wrong version of curl I guess... which version is needed?
Re: ET: Legacy
That error means the game tried to load a 64-bit library instead of a 32-bit one. 32-bit binaries are not compatible with 64-bit libraries and can't be linked and/or dlopen()ed.
About compilation error... it compiled for me without any problem.
Re: ET: Legacy
chernocba: as morsik said, the package you need is called libxxf86dga1:i386 and the curl error is also connected to your architecture - you need curl sources configured for 32 bit. Curl version
is not important. You can use that six year old one which was shipped with ET 2.60b or the newest one, it doesn't matter.
432hz: yea, most issues we have are to do with the input system which is based on SDL. It's now almost identical to the ioquake code, but it will need to be rewritten.
We now have an irc channel on Freenode, so you can also come to #etlegacy to discuss this project.
Re: ET: Legacy
Dragonji: and this defect is not present in the vanilla ET I presume (?) Strange.
ET:L beta 4 is out - please see the first post of this topic.
Edit: Oh yea, and I forgot to mention that we have a new website! www.etlegacy.com
Edit 2: FYI: nearly all the system configuration wizardry was done by Morsik. Again BIG thanks!
One more thing. I did not compile linux binaries, because there were way too many problems due to dynamic linking. I will provide statically linked binaries sometime in the future. In the
meantime please
1. download the sources with git: git clone git://github.com/etlegacy/etlegacy.git
2. and compile: premake4 gmake && make (you need curl sources configured for 32 bit to do this)
Edit 3: WARNING: We have just discovered a nasty bug which crashes the game when you host a server. It will be fixed in the next release.
Edit 5: UPDATE: bug was fixed, see the first post.
Edit 4: nevermind
Last edited by Radegast; 11th Apr 2012 at 10:37. Reason: server crash fixed
Re: ET: Legacy
Don't forget to test it with ETnam mod. Thank you
Re: ET: Legacy
Fix the "Power of 2 Scaled" Bug
remove this code @render/tr_image.c
#ifdef CHECKPOWEROF2
if ( ( ( width - 1 ) & width ) || ( ( height - 1 ) & height ) ) {
Com_Printf( "^1Image not power of 2 scaled: %s\n", name );
return NULL;
}
#endif
#define BSP_VERSION 47
delete this, play any IDTech3 maps
Last edited by Gir; 16th Apr 2012 at 17:30.
Re: ET: Legacy
Ah yes, forgot - can't remember the location in the code. I did accomplish this in my old ETGold mod
Re: ET: Legacy
I read that power-of-2 textures looked better in IDTech3 because in any other format they're stretched or downsized by the engine each time an image loads, which can eat some resources?
Re: ET: Legacy
Wolfenstein ET is the only IDTech3 game that blocks non-power-of-2 textures; otherwise you get black/yellow textures when loading Quake 3 maps, and Quake 3 uses them a lot.
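For what it's worth, the `( width - 1 ) & width` expression in the check being discussed is the standard bitwise power-of-two test; a quick Python sketch of the same idea:

```python
def is_power_of_two(n):
    """A positive power of two has exactly one set bit,
    so n & (n - 1) clears that bit and the result is zero."""
    return n > 0 and (n & (n - 1)) == 0

# Classic Q3-era texture dimensions pass the test; arbitrary ones do not.
sizes = [64, 128, 256, 512, 640, 480]
ok = {n: is_power_of_two(n) for n in sizes}
```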
Re: ET: Legacy
Just did and I haven't encountered any bugs so far. Really nice mod btw.
To tell you the truth I know absolutely nothing about this stuff, so I'll read that article and educate myself.
Since you've done it before in ETGold, you could join us and supply a patch for ET:L?
http://www.etlegacy.com - Updated version of Wolfenstein: Enemy Territory for Windows, linux, Mac OS X, AROS and OpenBSD
Re: ET: Legacy
It's needed for backwards compatibility with Q3/RTCW maps; you could then potentially play RTCW MP maps in ET.
Guess I have to download Visual C++ 2010 again
Last edited by daz2007; 19th Apr 2012 at 13:30.
Re: ET: Legacy
Here is a Typo fix @ tr_shader.c
Line 1034 should be
if ( stage->alphaGen == AGEN_IDENTITY ) {
Line 1998 should be
if ( stages[0].alphaGen == AGEN_WAVEFORM )
Re: ET: Legacy
There is lots of redundant/useless stuff in the renderer that should be removed
e.g. tr_init.c
cvar_t *r_ext_NV_fog_dist;
cvar_t *r_nv_fogdist_mode;
cvar_t *r_ext_ATI_pntriangles;
cvar_t *r_ati_truform_tess; //
cvar_t *r_ati_truform_normalmode; // linear/quadratic
cvar_t *r_ati_truform_pointmode; // linear/cubic
ATi tessellation code that never worked
//----(SA) added
void (APIENTRY *qglPNTrianglesiATI)(GLenum pname, GLint param);
void (APIENTRY *qglPNTrianglesfATI)(GLenum pname, GLfloat param);
The tessellation level and normal generation mode are specified with:
void qglPNTriangles{if}ATI(enum pname, T param)
If <pname> is:
GL_PN_TRIANGLES_NORMAL_MODE_ATI -
<param> must be one of the symbolic constants:
- GL_PN_TRIANGLES_NORMAL_MODE_LINEAR_ATI or
- GL_PN_TRIANGLES_NORMAL_MODE_QUADRATIC_ATI
which will select linear or quadratic normal interpolation respectively.
GL_PN_TRIANGLES_POINT_MODE_ATI -
<param> must be one of the symbolic constants:
- GL_PN_TRIANGLES_POINT_MODE_LINEAR_ATI or
- GL_PN_TRIANGLES_POINT_MODE_CUBIC_ATI
which will select linear or cubic interpolation respectively.
GL_PN_TRIANGLES_TESSELATION_LEVEL_ATI -
<param> should be a value specifying the number of evaluation points on each edge. This value must be
greater than 0 and less than or equal to the value given by GL_MAX_PN_TRIANGLES_TESSELATION_LEVEL_ATI.
An INVALID_VALUE error will be generated if the value for <param> is less than zero or greater than the max value.
Associated 'gets':
Get Value Get Command Type Minimum Value Attribute
--------- ----------- ---- ------------ ---------
PN_TRIANGLES_ATI IsEnabled B False PN Triangles/enable
PN_TRIANGLES_NORMAL_MODE_ATI GetIntegerv Z2 PN_TRIANGLES_NORMAL_MODE_QUADRATIC_ATI PN Triangles
PN_TRIANGLES_POINT_MODE_ATI GetIntegerv Z2 PN_TRIANGLES_POINT_MODE_CUBIC_ATI PN Triangles
PN_TRIANGLES_TESSELATION_LEVEL_ATI GetIntegerv Z+ 1 PN Triangles
MAX_PN_TRIANGLES_TESSELATION_LEVEL_ATI GetIntegerv Z+ 1 -
//----(SA) end
//----(SA) added
r_ext_ATI_pntriangles = ri.Cvar_Get("r_ext_ATI_pntriangles", "0", CVAR_ARCHIVE | CVAR_LATCH | CVAR_UNSAFE); //----(SA) default to '0'
r_ati_truform_tess = ri.Cvar_Get("r_ati_truform_tess", "1", CVAR_ARCHIVE | CVAR_UNSAFE);
r_ati_truform_normalmode = ri.Cvar_Get("r_ati_truform_normalmode", "GL_PN_TRIANGLES_NORMAL_MODE_LINEAR", CVAR_ARCHIVE | CVAR_UNSAFE);
r_ati_truform_pointmode = ri.Cvar_Get("r_ati_truform_pointmode", "GL_PN_TRIANGLES_POINT_MODE_LINEAR", CVAR_ARCHIVE | CVAR_UNSAFE);
r_ati_fsaa_samples = ri.Cvar_Get("r_ati_fsaa_samples", "1", CVAR_ARCHIVE | CVAR_UNSAFE); //DAJ valids are 1, 2, 4
Last edited by Gir; 19th Apr 2012 at 17:57.
Re: ET: Legacy
I have some great news to share with you. First, we have new developer Sol, so you can expect features from his ETeng project to be added into Enemy Territory: Legacy soon!
Next, ET:L Beta 5 was released, so check out the first post of this topic or head to http://etlegacy.com/projects/etlegacy/files and test it.
LINUX: There is now a statically linked binary for Linux! I thought it was really easy to compile ET:L on various distributions, but that's true only for Gentoo and Arch; other (64bit) distributions are (were) a nightmare when it comes (came) to compiling ET:L.
2.70 (beta 5) released (see the first post of this topic)
□ commands can now be executed ingame from the system terminal on unix
□ added minimize command (minimizes window into the taskbar)
□ added Dushan's anti-DDoS security fix
□ cleaned up the code and added LOTS of security fixes
□ premake doesn't force you to compile deps dynamically anymore
Unfortunately, building ET:L with Visual Studio is... problematic right now
http://www.etlegacy.com - Updated version of Wolfenstein: Enemy Territory for Windows, linux, Mac OS X, AROS and OpenBSD
12. How many grams of CO are needed to react with an excess of Fe2O3 to produce 209.7 g Fe?
How many atoms are there in 12 grams of Carbon? 12 grams 12.01 moles 6.02x10^23mol x x 1 gram 1 atom = 867.6024 Is this correct?
Tuesday, June 2, 2009 at 6:52pm by Narasaq
24 grams * (1 ml/12 grams) = 2 ml note I wrote 1 ml/12 g instead of 12 g/1 ml so that the grams cancel If you multiply something by something that is the same top and bottom, for example in this case
12 g and 1 ml, you do not change the amount, only the units. That is an easy ...
Sunday, February 15, 2009 at 6:33pm by Damon
I will assume you mean 24 grams of carbon burned C + O2 --> CO2 C is about 12 grams/mol O2 = 2*16 = 32 grams/mol CO2 = 12+32 = 44 grams/mol therefore 24/2 = 2 mols carbon that means 2 mols O2 and 2
mols CO2 2 mols CO2 = 88 grams --- answer they also gave you redundant ...
Friday, September 17, 2010 at 4:11pm by Damon
There isn't anything to explain. # mols = grams/molar mass. Plug in grams from the problem (12.6) and molar mass from the problem (84) and divide on your calculator. I get something like 0.15 or so #
mols = 12.6/84 = ??. If you prefer to do these by the dimensional method, ...
Tuesday, July 22, 2008 at 11:26pm by DrBob222
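The dimensional arithmetic DrBob222 describes is a single division; a minimal sketch using the numbers from this thread (12.6 g of a substance with molar mass 84 g/mol):

```python
grams = 12.6                # mass of the sample, g
molar_mass = 84.0           # g/mol, as given in the problem
mols = grams / molar_mass   # mol = g / (g/mol); the gram units cancel
# mols comes out to 0.15 mol
```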
13 cents for A 12 cents for B. total protein greater than twelve grams. Total fat less than eight grams. Total grams less than thirty five grams. .5 grams protein and .3 grams fat for A. .6 grams
protein and .2 grams fat for B. what is optimal cost
Tuesday, February 12, 2013 at 8:43pm by Jerome
grams 20 % = a grams 60% = b total gold = .2 a + .6 b total mass = a + b = 40 so b = 40-a so .2 a + .6 b = .3 (40) = 12 .2 a + .6 (40-a) = 12 -.4 a + 24 = 12 .4 a = 12 a = 30 so b = 10
Saturday, June 1, 2013 at 1:36am by Damon
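Damon's substitution (b = 40 - a into 0.2a + 0.6b = 12) can be written out numerically; nothing here goes beyond the algebra in the post:

```python
total_mass = 40.0               # grams of alloy wanted
target_gold = 0.3 * total_mass  # 30% of 40 g = 12 g of pure gold
# 0.2*a + 0.6*(total_mass - a) = target_gold
#   =>  a = (0.6*total_mass - target_gold) / 0.4
a = (0.6 * total_mass - target_gold) / (0.6 - 0.2)  # grams of 20% material
b = total_mass - a                                  # grams of 60% material
```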
what will the law of conservation [of mass] give for:
a) 65 grams 70 grams
b) magnesium + oxygen = magnesium oxide: 45 grams + ? grams = 80 grams
c) sodium hydroxide + hydrochloric acid = sodium chloride + water: 40 grams + 36.5 grams = ? grams + 18 grams
50 what?
grams? kilograms ? Carats ? If grams (pretty big) 6*10^23 carbon atoms weigh 12 grams so 50 grams * 6*10^23 / 12 = 25*10^23 = 2.5*10^24 atoms
Saturday, September 8, 2012 at 8:51pm by Damon
First, you made an error before you get to that part. grams O = 2.00-grams C - grams H. You subtracted grams CO2 and grams H2O. Your grams C (and moles) is ok as is the grams H (and moles H) but the
oxygen must be redone.
Monday, October 17, 2011 at 9:21pm by DrBob222
Is that 31.45% by mass. If so, then the solution is 12.58 m. 31.45 g HCl(31.45 g HCl + 68.55 g H2O) and 31.45/36.46 = 0.8625 moles HCl in 0.06855 kg solvent which makes the solution 12.58%. Then m x
grams soln = m wanted x grams wanted 12.58 m x grams soln = 0.1 m x 500 Solve ...
Saturday, October 16, 2010 at 11:01am by DrBob222
So you follow the equation I gave above and solve for grams stock solution. 12.58 m x grams soln = 0.1 m x 500 grams soln = 0.1 x 500/12.58 = 3.97 g If you know the density you can convert 3.97g to
mL. The density of 31% HCl is about 1.16 in my tables. Again, that won't be ...
Saturday, October 16, 2010 at 11:01am by DrBob222
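The dilution arithmetic in that reply (molality of stock x grams of stock = molality wanted x grams wanted) is a one-liner; the 12.58 m figure comes from the post above:

```python
m_stock = 12.58       # molality of the 31.45% HCl stock, from the post
m_wanted = 0.1        # target molality
grams_wanted = 500.0  # grams of final solution
grams_stock = m_wanted * grams_wanted / m_stock
# about 3.97 g of stock solution, matching the post
```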
a chemist wanted to make 100 grams of SO2. She had only 12 grams of sulfur and 4 grams of oxygen. Unfortunately, the amounts of reactants were not enough to make the desired product. How much S and
O2 would she need in order to obtain the desired grams of product with no ...
Thursday, November 3, 2011 at 10:04pm by jermain
Calculate the solubility of Ag2CrO4 in grams/100mL of water? (Ksp = 1.12x10^-12 ) I know how to calcuate the molar solubility, and I know how to convert it into grams/L...but I'm stuck on how to make
it grams/100mL Any Help would be awesome! =D thx!
Saturday, August 9, 2008 at 11:54am by Sue Ellen
HOW MANY GRAMS OF CA ARE REQUIRED IF I HAVE 12.9 GRAMS OF OXYGEN TO FORM CAO
Sunday, October 14, 2012 at 6:39pm by TESYANA
URGENT Chemistry question
a chemist wanted to make 100 grams of SO2. She had only 12 grams of sulfur and 4 grams of oxygen. Unfortunately, the amounts of reactants were not enough to make the desired product. How much S and
O2 would she need in order to obtain the desired grams of product with no ...
Thursday, November 3, 2011 at 10:17pm by jermain
3 quarters = $.75 and 18 g 5 nickels = $25 and 25 grams (43 too heavy) 3 quarters = $.75 and 18 g 2 dimes = $.20 and 4 grams 1 nickel =$.05 and 5 grams (27 too light) 2 quarters = $.50 and 12 g 4
dimes = $.40 and 8 grams 2 nickels =$.10 and 10 grams (30g ah hah)
Thursday, October 16, 2008 at 7:33pm by Damon
AP Chemistry
When a sample of methyl salicylate weighing 5.287 grams is burned in excess oxygen, 12.24 grams of CO2 and 2.522 grams of H2O are formed. What is the empirical formula for oil of wintergreen? I know
the answer is C8H8O3, I just need help finding out how to solve the problem. ...
Tuesday, September 16, 2008 at 7:57pm by R
A house mouse has a mass of 12 grams. If you put a mouse on a scale, how many paper clips would be needed to balance it if a paper clip has a mass of 12 grams?
Friday, May 31, 2013 at 6:00pm by Anonymous
Organic Chemistry
A student dehydrated 12 grams of 2-methylcyclohexene with 85% H3PO4, and acquired 6.50 grams of alkene. What are the theoretical and percent yields?
Monday, April 22, 2013 at 2:32am by Josh
how many moles of H2O are in 12.4g? moles H2O = 12.4/18 grams so to get CuSO4.5H2O, you need (12.4/18)*1/5 moles of CuSO4 figure how many grams that is.
Tuesday, September 21, 2010 at 8:42pm by bobpursley
AP Chemistry
(a) Convert the grams of CO2 to moles of CO2 by dividing grams of CO2 by the molar mass of CO2. Moles of CO2 = moles of carbon, C. Multiply moles of C by 12.011g/mol to get grams of carbon in the
sample. (b) Find the moles of H2O in a similar way. Moles of H = (2)(moles of H2O...
Tuesday, September 16, 2008 at 7:57pm by GK
1.33 m = 1.33 moles (Et)2O in 1 kg THF. 1.33 moles = 1.33 x 74.12 grams = 98.58 g so the total solution has a mass of 1000 g + 98.58 = 1098.58 grams. You want 471.6 grams (Et)2O which is 471.6/74.12
= 6.363 moles. So you need to take 1098.58 g solution x (6.363 moles/1.33 ...
Monday, March 1, 2010 at 3:13am by DrBob222
1 teragram = 1,000,000,000,000 grams = 10^12 grams 1 kilogram = 1,000 grams. Can you try to work out the answer?
Tuesday, August 24, 2010 at 1:39am by MathMate
I don't know how to answer but I'll give you information and let you choose. 10. A mole of carbon atoms has a mass of 12.011 grams and a mole of He atoms has a mass of 4.002602 grams. In my book,
12.011 is approximately 3 x 4.002602 but it all depends upon how approximate you ...
Thursday, March 19, 2009 at 8:44pm by DrBob222
Mr. Arswan is preparing a mixture for a science experiment. He measures 10.5 grams of wood shavings, 150 grams of iron filings, and 18.50 grams of salt. What is the total mass of Mr. Arswan's
mixture? A 30.5 grams B 179 grams C 150.5 grams D 17.9 grams
Sunday, October 6, 2013 at 6:55pm by na
It's too hard to follow all of the divisions etc in line formulas but your problem MAY be you didn't cube the 2.54 cm. The easy way to work the problem is Convert 20 lbs to grams. 20 x 453.59 = ?
grams. Then Convert 1 ft to 12" and 12 x 2.54 cm = ?cm 4 x 2.54 cm = ? 12 x 2.54 ...
Sunday, October 2, 2011 at 12:45am by DrBob222
The masses of 10 books are found to be in an arithmetic sequence. If their total mass is 13 kg and the lightest book has a mass of 400 grams, what is the mass of the heaviest book? 2,000 grams 2,100
grams 2,200 grams 2,300 grams 2,700 grams
Tuesday, November 27, 2012 at 11:07am by Chelsy
no. First, how many moles are in 12 grams? 12/12.01, round to 1 mole. How many things are in one mole? 6.02*10^23
Tuesday, June 2, 2009 at 6:52pm by bobpursley
Is your son studying density? mass = volume x density. The 12 cm^3 stands for 12 cubic centimeters (a volume), usually written either as 12 cm3 or as 12 cc or 12 c.c. (but the periods USUALLY are not
written). Water has a density of approximately 1.0 gram/cc at temperatures ...
Tuesday, May 12, 2009 at 10:50pm by DrBob222
Organic Chemistry
What's the volume? Is it 12 mL or 112 mL. If 12 mL, then PV = nRT You know P, V, R, and T, solve for n Then n = grams/molar mass and solve for grams. Convert mass to volume using density.
Monday, October 18, 2010 at 11:52pm by DrBob222
If 12.8 grams of Hydrogen reacts with 18.6 grams of oxygen it produces water. How much water was produced
Monday, July 19, 2010 at 2:52pm by john
% w/w = (grams HCl/100 g soln)*100 12M is 12 mols/L soln 12 mols = 12*36.5 g/mol = about 438 g HCl mass 1000 mL = 1180 grams (1.18 x 1000) (438/1180)*100 = ? w/w
Friday, February 15, 2013 at 5:42pm by DrBob222
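The same w/w conversion laid out step by step; the molar mass of HCl is taken as 36.5 g/mol and the density as 1.18 g/mL, both from the post:

```python
molarity = 12.0          # mol HCl per litre of solution
molar_mass_hcl = 36.5    # g/mol
density = 1.18           # g/mL, so one litre of solution weighs 1180 g
grams_hcl = molarity * molar_mass_hcl   # 438 g HCl in each litre
grams_solution = density * 1000.0       # 1180 g of solution per litre
percent_w_w = 100.0 * grams_hcl / grams_solution
# roughly 37.1% HCl by mass
```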
Elements A and B form two new compounds. In compound I, 2 grams of A is combined with 4 grams of B. In compound II, 4 grams of A is combined with 12 grams of B. If the formula of compound I is A2B,
what is the formula of compound II?
Friday, July 2, 2010 at 9:57pm by Argenta
pH = 12 pOH = 14-12 = 2 OH^- = 1 x 10^-2 Therefore, (NaOH) = 1 x 10^-2 M - 0.01 moles/L moles = grams/molar mass. You know moles and molar mass, calculate grams.
Wednesday, June 2, 2010 at 2:57am by DrBob222
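The pH arithmetic above reduces to a few lines; the molar mass of NaOH is taken as 40.0 g/mol (an assumed value, not stated in the post):

```python
pH = 12.0
pOH = 14.0 - pH              # 2
conc_oh = 10.0 ** (-pOH)     # [OH-] = 1 x 10^-2 mol/L, which equals [NaOH]
molar_mass_naoh = 40.0       # g/mol, assumed (Na + O + H, rounded)
grams_per_litre = conc_oh * molar_mass_naoh   # about 0.40 g NaOH per litre
```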
Not a great deal of chemistry here as it is by proportion. He melted 30 grams of Au (gold) with 30 grams of Pb. This melting process resulted to 42 grams of an Au-Pb alloy and some leftover Pb. So
assume all the gold is incorporated then (42-30) g of lead used, which is 12 g. ...
Monday, October 31, 2011 at 8:53am by Dr Russ
phosphorus (12.28g) reacts with excess of oxygen gas to form 21.04 grams of diphosphorus penatoxide. what is the theoretical yile in grams of this reaction? im not sure how exactly do you start this?
Sunday, December 11, 2011 at 9:48pm by Skye
I need to equal $1.00 and 30 grams With quarter .25=6 grams dime .10=2 grams nickel .05=5 grams
Thursday, October 16, 2008 at 7:33pm by Corey
Maggie's surprise gift weighs between 8.48 grams and 11.25 grams. Which of these weights could be the weight of the gift? 8.19 Grams 8.75 grams 11.47 grams 11.75 grams ~~~~~~~~~~~~~~~~~~~ How would I
show my work on paper for this?
Monday, April 30, 2012 at 7:30pm by Syd
You omitted the units of 3.2? Is that 3.2 grams? If so, read on. If not, the following is not correct. So you have 6 + 1 + 1 total = 8 parts of which 6/8 is KNO3, 1/8 is C and 1/8 is S. 3.2 x (6/8) =
2.4 grams KNO3. 3.2 x (1/8) = 0.4 grams S. 3.2 x (1/8) = 0.4 grams C. Now ...
Tuesday, October 27, 2009 at 12:46am by DrBob222
You are in charge of quality control on a line which fills bottles of soda. The desired gross weight for a 2 liter bottle of soda is 2,160 grams. At the end of each shift you select 12 bottles at
random and weigh them. After one month, the overall average weight of all bottles...
Thursday, March 4, 2010 at 8:53am by Jon
Chemistry Conversions
no, not exactly Mass=Moles*molmass =.000523moles* (12*6 +1*12+16*6)grams/mole
Tuesday, January 27, 2009 at 5:12pm by bobpursley
adult education
A Chemist weighed three separate test at 0.17 gram, 0.204 gram, and 2.3 grams. What is total weight of the three samples? A. 2.44 grams B. 2.521 grams C. 2.674 grams D. 3.97 grams
Saturday, February 20, 2010 at 7:36pm by sabrina
2) YOur balanced equation gives the answer: 1 mole of calcium carbonate yields one mole of carbon dioxide, which is 12 grams carbon. So for .55grams, you must have had .55/12 moles of calcium
carbonate. 3. knowing the moles of calcium carbonate, change it to mass (grams).
Wednesday, February 17, 2010 at 1:04am by bobpursley
i need to know if I am in the right tract please let me know...I am calulating the molality of the following exercise so I could be able to apply the formula T=Kf * molality calculate the freezing
point of a solution containin 12.2 grams of benzoic acid, dissolved in 250 grams...
Monday, October 27, 2008 at 4:20pm by Joshua
Could someone please tell me if I did this question right? An incomplete reaction of 12.125 grams of propane proceeds as follows. 2C3H8+11O2 --> 3CO2+4H2O+2CO+C(s) How many grams of solid carbon
residue are produced as a product? My answer: Step 1) Balanced the given ...
Saturday, October 17, 2009 at 8:05pm by Adam
calculate what? assume you have 100 grams. then 80 grams is carbon, or 80/12 moles and 20 grams is H, or 20/1 moles H C=80/12=6.66 moles H= 20 divide each by the lowest C=1 H=3 empirical formula: CH3
Now, 22.4 liters has a mass of 1.35*22.4 = 30.24grams divide that by the ...
Thursday, December 1, 2011 at 7:12am by bobpursley
CH4 = 12+4 = 16 grams / mole He = 4 grams/mole O2 = 16*2 = 32 grams/mole N2 = 14*2 = 28 grams/mole By the way H2O = 2+16 18 grams/mole so water vapor is lighter than air which is about 80% N2 and 20%
O2 (steam rises :) A water vapor molecule is also much smaller than an ...
Wednesday, November 10, 2010 at 2:43pm by Damon
mg/1000 = grams. grams/100 x 10 converts to grams/L. grams/molar mass converts grams to moles. Now you have moles/L.
Tuesday, September 1, 2009 at 9:08pm by DrBob222
You're absolutely right. 12.5 g ethylene glycol in 100 g H2O is 12.5 g solute in (12.5g + 100 g) = 112.5 solution. Then use density to convert grams solution to L soln.
Friday, February 15, 2013 at 4:12pm by DrBob222
Could someone please tell me if I did this question correctly? An incomplete combustion reaction of 12.125 grams of propane proceeds as follows: 2C3H8 + 11O2 = 3CO2 + 4H2O + 2CO + C(s).How many grams
of solid carbon residue are produced as a product? my answer: known: molar ...
Tuesday, October 13, 2009 at 8:55pm by Will
when hydrocarbons are burned in a limited amount of air, CO as well as CO2 forms. When 0.450 grams of a particular hydrocarbon was burned in air, 0.467 grams of CO, 0.733 grams of CO2, and 0.450 grams
of H2O were formed. a. What is the empirical formula? I got C2H3! check? b. ...
Friday, August 17, 2012 at 1:19pm by skye
A student makes a solution by dissolving 55.8 grams of potassium hydroxide in 875.0 grams of water. The resulting solution has a density of 1.07 grams per milliliter. 1. What is the volume of this
solution? 2. Calculate the % concentration of this solution. 3. Calculate the ...
Monday, January 31, 2011 at 1:02pm by Paul
It's true that, although we usually talk about molar mass in grams, we CAN make it mg, or kg, or tons (or any other unit) and adjust our other numbers. For example, try a problem with, "How many
grams of O2 are formed from the decomposition of 12.25 g KClO3 by 2KClO3 ==> ...
Tuesday, January 6, 2009 at 9:25pm by DrBob222
A student makes a solution by dissolving 55.8 grams of potassium hydroxide in 875.0 grams of water. The resulting solution has a density of 1.07 grams per milliliter. 1. What is the volume of this
solution? 2. Calculate the % concentration of this solution. 3. Calculate the ...
Monday, January 31, 2011 at 9:59pm by Paul
A student makes a solution by dissolving 55.8 grams of potassium hydroxide in 875.0 grams of water. The resulting solution has a density of 1.07 grams per milliliter. 1. What is the volume of this
solution? 2. Calculate the % concentration of this solution. 3. Calculate the ...
Tuesday, February 1, 2011 at 4:42am by Paul
A student makes a solution by dissolving 55.8 grams of potassium hydroxide in 875.0 grams of water. The resulting solution has a density of 1.07 grams per milliliter. 1. What is the volume of this
solution? 2. Calculate the % concentration of this solution. 3. Calculate the ...
Tuesday, February 1, 2011 at 10:53am by Paul
I have always tried to impress upon students that reactions are between molecules (particles) an when 6.02 x 10^23 particles are put together we have a mole. Reactions are between moles and not
grams. Grams, of course, get into the equations because moles = g/molar mass but ...
Thursday, March 4, 2010 at 11:50pm by DrBob222
Consider: C6H12O6 + 6O2 -> 6CO2 + 6H2O + Energy a) If I have 90 grams glucose and all is consumed by oxygen, then how many grams of water will be produced? b) How many grams of oxygen will be consumed?
c)How many grams of sugar will be burned off if 85 grams of O2 is consumed?
Saturday, January 22, 2011 at 5:59pm by Lindsey
The masses of 1 mole of various gases are as follows: hydrogen about 2 grams, helium about 4 grams, nitrogen about 28 grams, oxygen about 32 grams and carbon dioxide about 44 grams. On the average
how fast does a molecule of each gas move at 333 Celsius?
Tuesday, April 6, 2010 at 2:10pm by Benny
The masses of 1 mole of various gases are as follows: hydrogen about 2 grams, helium about 4 grams, nitrogen about 28 grams, oxygen about 32 grams and carbon dioxide about 44 grams. On the average
how fast does a molecule of each gas move at 333 Celsius?
Tuesday, April 6, 2010 at 11:22pm by Plaster
9th grade
A student put 12.4 grams of potassium sulfate in 29 ml and stirred for 3 hrs. he saw that not all of the solid dissolved, so he removed the extra solid and found it had a mass of 3.4 grams. what was
the solubility of the chemical?
Sunday, February 21, 2010 at 4:44pm by Anonymous
Ignore the 24 mL and 2.50 M. What you have is what you originally put in. That is 5.00mL x 12.0 M (you didn't put units but I'm guessing M). So mols = M x L and grams = mols x molar mass or about 2.4
grams of NaOH.
Tuesday, April 24, 2012 at 12:36pm by DrBob222
The USDA size standards for eggs are based on weight and are as follows: SMALL: greater than 43 grams, less than 50 grams MEDIUM: greater than 50 grams, less than 57 grams LARGE: greater than 57
grams, less than 64 grams EXTRA LARGE: greater than 64 grams, less than 71 grams ...
Thursday, March 4, 2010 at 9:00am by Ion
You quit writing too soon and the problem isn't clear. The ratio is C/H = 12/1 atom to atom in grams. C/H2 = 12/2 atom to molecule (grams). When one writes ...... mass of 1 mole of carbon to one mole
of hydrogen, I think of hydrogen as being H2. Finally, I could have ...
Tuesday, January 25, 2011 at 8:05pm by DrBob222
Calculate the solubility (in grams per 100 mL of solution) of magnesium hydroxide in a solution buffered at pH = 12. I can get the answer in grams per mole, but i am having difficulty changing it to
Monday, November 1, 2010 at 1:52am by moe
a = mass of one aple b = mass of basket 1 basket : 12 a + b = 3105 Subtract 12 a to both sides 12 a + b - 12 b = 3105 - 12 a b = 3105 - 12 a 2 basket : 7 a + b = 1980 7 a + 3105 - 12 a = 1980 - 5 a +
3105 = 1980 Subtract 3105 to both sides - 5 a + 3105 - 3105 = 1980 - 3105 - 5...
Thursday, October 4, 2012 at 8:43pm by Bosnian
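The system above solves in two lines once b is eliminated; this matches the substitution worked out in the post:

```python
# 12 apples + basket = 3105 g;  7 apples + basket = 1980 g
a = (3105 - 1980) / (12 - 7)   # subtracting the equations eliminates b
b = 3105 - 12 * a              # back-substitute into the first equation
# a = 225 g per apple, b = 405 g for the basket
```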
thermal physics
Let the unknown mass of steam required by X grams. The amount of heat that is loses changing to 12 C liquid water equals the heat gained by the 340g of ice as it melts to form water at the same
temperature (12 C). If we use units of J/g and J/g K, X*[2256 + 4.186(100 - 12)] = ...
Wednesday, March 25, 2009 at 7:15am by drwls
earth science
what would be the chart to use for carbon-14, which has a half life of 5730 years, and a piece of wood which originally has 12 grams of radioactive isotope; if the half lives occurred and it now has 0.75
grams left?
Thursday, November 29, 2012 at 3:46pm by lan
Please help, need advice. Thanks A The amount of pyridoxine (in grams) in a multiple vitamin is normally distributed with μ = 110 grams and σ = 25 grams. What is the probability that a randomly selected
vitamin will contain less than 100 grams of pyridoxine? B The amount of ...
If a sample of Co2(CO)8 is found to contain 6*10^-2 mol of C atoms, how many grams of Co atoms are present? Molar Mass (g/mol) Co2(CO)8 - 341.95 Co atom - 58.933 C atom - 12.011 O atom - 15.999
Avaogadro's No.: 6.022*10^23 Do I need to use the 6*10^-2 mol of C atoms to find ...
Monday, January 16, 2012 at 11:56am by Kyle
Ibuprofen, the active ingredient in Advil, is made up of carbon, hydrogen, and oxygen atoms. When a sample of ibuprofen, weighing 5.000 g, burns in oxygen, 13.86g of CO2 and 3.926 g of water are
obtained. What is the simplest formula of ibuprofen? I need to find out if I'm ...
Saturday, October 4, 2008 at 1:14pm by robert
Let X = mass ZnS Let Y = mass PbS MM = molar mass ===================== You need two equations. X + Y = 6.12 is one. The second is X(MM H2S/MM ZnS) + Y(MMH2S/MMPbS) = grams H2S. The g H2S must be
calculated from PV = nRT and solve for n=number of moles H2S, then convert to ...
Sunday, October 24, 2010 at 8:39pm by DrBob222
[(12 + (3x1) + 12 + (2x16) + 1)]/6.02*10^23 = ? grams Add up the atomic mass numbers of all atoms in the molecule. Then divide by Avogadro's number
Friday, February 1, 2008 at 7:07pm by drwls
Chemistry- Drbob-help
I have placed the answer I got please help. A student makes a solution by dissolving 55.8 grams of potassium hydroxide in 875.0 grams of water. The resulting solution has a density of 1.07 grams per
milliliter. 1. What is the volume of this solution? 870mL 2. Calculate the % ...
Wednesday, February 9, 2011 at 11:31pm by Paul
Stoichiometry Test
Does it say percentage by mass or percentage of atoms or what? I will assume percentage of mass. Now O atoms have a mass of about 16 grams per about 6*10^23 of them (Avagadro's number, sort of like
12 is a dozen, 6*10^23 is a mole) So in a mole of this stuff we have 32 grams ...
Thursday, February 12, 2009 at 4:26pm by Damon
ml x N = number of milliequivalents 80 KOH added initially. The KOH reacted with the NH3 produced from the reaction of NH4Cl with the KOH). After the reaction the excess KOH was titrated with 12.5 mL
of 0.75 N acid. 12.5 x 0.75 N = 9.375 milliequivalents acid needed to ...
Sunday, February 10, 2013 at 6:43pm by DrBob222
I have a problem that has been driving me crazy trying to solve, and I was wondering if someone could help. The problem is this: "Given the reaction: CuSO4 + 4 NH3 ----> Cu(NH3)4SO4, if 10 grams of
CuSO4 reacts with 30 grams of NH3, what is the theoretical yield of Cu(NH3)...
Thursday, August 23, 2007 at 1:10pm by Josh
You need to find the arrow button and use it with equations. --> or ==> or >>>. I think the first thing to do is to convert 12% v/v to grams ethanol. That is 12 mL ethanol/100 mL soln. Scale that up
to 712 (I guess we assume ALL of the grape juice is glucose ...
Tuesday, November 13, 2012 at 7:56pm by DrBob222
I think maybe you better look up the solubility which is normally in grams or mols per liter but 36 grams in 100 grams is much higher than believable by any stretch of the imagination. A liter of
fresh water is about a 1000 grams.
Saturday, March 8, 2014 at 2:03pm by Damon
In the cocaine formula, for every N (14 g/mol) atom there are 17 C (12 g) atoms and 2 H (1 g) atoms. Therefore the ratios of masses of those three elements must be: N -- 14; C -- 17*12 = 204; H -- 2*1 = 2.
Now how many moles of C in 150 g CO2? (I am going to use grams instead of ...
Saturday, October 4, 2008 at 11:00am by Damon
1. What is the percent by mass of a solution that contains 50 grams of solute dissolved in 200 grams of solution? What is the concentration of the same solution in parts per million? 2. What is the
percent mass of a solution that contains 75 grams of salt dissolved in 150 ...
Friday, March 25, 2011 at 11:03am by Leora
Chem.. stoichiometry
Obviously the equation is balanced. Each mole of C produces 1 mole of H2. How many moles of C in 34g? That's how many moles of H2 you get. Convert that back to grams. You'll get (34/12) * 2 grams
Friday, May 3, 2013 at 8:28am by Steve
1.30 grams of chromium reacts with 9.5 grams of iodine. What is the percent of chromium in this compound? How do I work out this answer. I know answer is 12.02, but can figure out how to get it!
Saturday, March 26, 2011 at 12:28pm by Quick chem question
I worked 1 and 2 for you a couple of days ago. g H (atoms, that is) = 12.0 - g O - g C = ? %H = (g H/12.0)*100 = ? 20.2% Al 100-20.2 = %Cl Take a 100 g sample, which will give you 20.2 g Al and 79.8 g Cl. Convert
grams to mols. mol = grams/molar mass. Then find the ratio of Al and Cl to ...
Tuesday, March 13, 2012 at 6:39pm by DrBob222
4C3H5N3O9 -> 6N2 + 12 CO2 + 10H20 + O2 a) how many grams of water would be produced if 75 g of nitroglycerin decompose? b) The creation of 3.89 moles of CO2 would require how many grams of
Saturday, October 9, 2010 at 5:48pm by MAX
I would try setting it up as a proportion. The percentage just means that there are 6.65 grams of NaOCl per 100 grams. Therefore, set up the problem of (6.65/100)=(x/12.14)
Wednesday, February 1, 2012 at 3:46am by Shannon
chem1 WORD PROBLEM!!
6. How many grams are in 4 moles of Be(OH)3? 7. How many atoms are in 4 moles of C6H12O6? 8. How many moles are in 12.04 x 10^23 molecules of hamburgers? 9. How many molecules are in grams
of Helium gas (He2)? hint: convert to moles, then molecules
Thursday, December 6, 2012 at 6:51pm by tyneisha
I have no idea what experiment 4 is nor what role the benzoic acid played in the experiment. However, g benzoic acid in the 20.5 mL is M x L x molar mass = grams. 5.256M x 0.0205 L x 122.12 = ?grams.
Monday, February 4, 2013 at 12:41am by DrBob222
grams CH3CH2CH2OH = volume x density = 12.8 x 0.803 = ? moles CH3CH2CH2OH = grams/molar mass Then M = moles/L soln. L soln = 0.075L
Saturday, October 1, 2011 at 7:32pm by DrBob222
2 Fe (s) + 6 HCl (aq) -> 2 FeCL3 (s) + 3 H2 (g) 12.2 grams of Fe reacts with excess of HCl. how many grams of FeCl3 will be produced?
Thursday, May 24, 2012 at 6:47pm by Lily
Neither. The molar mass = 180. moles in 12.0 grams is 12.0/180 = 0.06667 and I would keep it like that until the end, then round the final answer.
Wednesday, February 10, 2010 at 11:50pm by DrBob222
Assume the following daily closing for the Dow Jones Industrial Average:

Day  DJIA     Day  DJIA
1    12,010   7    12,220
2    12,100   8    12,130
3    12,165   9    12,250
4    12,080   10   12,315
5    12,070   11   12,240
6    12,150   12   12,310

a.
Wednesday, July 11, 2007 at 9:11pm by lia82
12% MgCl2 means 12 g MgCl2 in 100 g solution. Use the density to calculate the mass of 1000 mL. 1.105 g/mL x 1000 mL = 1105 grams. 1105 x 0.12 = 132.6 g MgCl2 and 1105-132.6 = ?? g H2O moles H2O =
grams/molar mass H2O moles MgCl2 = g MgCl2/molar mass MgCl2. XH2O = moles H2O/...
Saturday, June 26, 2010 at 9:54pm by DrBob222
1. The amount of pyridoxine (in grams) per multiple vitamin is normally distributed with = 110 grams and = 25 grams. A sample of 25 vitamins is to be selected. So, 95% of all sample means will be
greater than how many grams?
Wednesday, December 1, 2010 at 7:11pm by Anonymous
The weights of adult long-tongued fruit bats are known to be normally distributed with a mean of 20.22 grams and a standard deviation of 3.23 grams. What is the probability that a randomly selected
bat will: a) weigh at most 15 grams? b) weigh less than 30 grams?
Wednesday, March 21, 2012 at 7:57pm by becky
The weights of adult long-tongued fruit bats are known to be normally distributed with a mean of 20.22 grams and a standard deviation of 3.23 grams. What is the probability that a randomly selected
bat will: a) weigh at most 15 grams? b) weigh less than 30 grams?
Friday, March 23, 2012 at 2:15am by Becky
Here is what I posted previously. Convert 12 ounces to grams. (mass ice x 330 J/g) + [(mass water x specific heat water x (Tfinal-Tinitial)] = 0 Solve for mass ice. Show your work and I'll try to
find the error. Mass ice is the only unknown. mass water is 12 ounces; you need ...
Wednesday, September 15, 2010 at 7:25pm by DrBob222
Convert 4.88 g CO2 to grams C. Convert 1.83 g H2O to grams hydrogen (H, not H2).Your answer looks ok for this part. Then 1.83 - g C - g H = g O (not O2) Then g C/12 = mols C g H/1 = mols H g O/16 =
mols O Then find the ratio.
Sunday, August 19, 2012 at 1:14pm by DrBob222
Chemistry- Empirical Formula
Yes and no. Yes, you can convert 6.63 mg to grams, go through the calculation and convert grams back to mg but I didn't convert in the first place so I had no reconverting to do. Another way is to
convert 6.63 mg to grams, go through the calculation and have the answer for H ...
Monday, April 19, 2010 at 9:36pm by DrBob222
Aha!. I wondered how you did it. But you KNOW 1 atom of oxygen can't possibly weigh that much. You can't even see 1 atom of oxygen so you know the mass of 1 atom must be very small. And your number
isn't very small; it is very large. The mass of 1 mol is 32 grams. I told you ...
Saturday, November 3, 2007 at 4:17pm by DrBob222
Monads in 15 minutes: Backtracking and Maybe
Posted by Eric Kidd Mon, 12 Mar 2007 19:39:00 GMT
This morning, a programmer visited #haskell and asked how to implement backtracking. Not surprisingly, most of the answers involved monads. After all, monads are ubiquitous in Haskell: They’re used
for IO, for probability, for error reporting, and even for quantum mechanics. If you program in Haskell, you’ll probably want to understand monads. So where’s the best place to start?
A friend of mine claims he didn’t truly understand monads until he understood join. But once he figured that out, everything was suddenly obvious. That’s the way it worked for me, too. But relatively
few monad tutorials are based on join, so there’s an open niche in a crowded market.
This monad tutorial uses join. Even better, it attempts to cram everything you need to know about monads into 15 minutes. (Hey, everybody needs a gimmick, right?)
Backtracking: The lazy way to code
We begin with a backtracking constraint solver. The idea: Given possible values for x and y, we want to pick those values which have a product of 8:
solveConstraint = do
x <- choose [1,2,3]
y <- choose [4,5,6]
guard (x*y == 8)
return (x,y)
Every time choose is called, we save the current program state. And every time guard fails, we backtrack to a saved state and try again. Eventually, we’ll hit the right answer:
> take 1 solveConstraint
[(2,4)]
Let’s build this program step-by-step in Haskell. When we’re done, we’ll have a monad.
Implementing choose
How can we implement choose in Haskell? The obvious version hits a dead-end quickly:
-- Pick one element from the list, saving
-- a backtracking point for later on.
choose :: [a] -> a
choose xs = ...
We could be slightly sneakier, and return all the possible choices as a list. We’ll use Choice whenever we talk about these lists, just to keep things clear:
type Choice a = [a]
choose :: [a] -> Choice a
choose xs = xs
Running this program returns all the possible answers:

> choose [1,2,3]
[1,2,3]
Now, since Haskell is a lazy language, we can work with infinite numbers of choices, and only compute those we actually need:
> take 3 (choose [1..])
[1,2,3]
Because Haskell doesn’t compute answers until we ask for them, we get the actual backtracking for free!
Combining several choices
Now we have the list [1,2,3] from our example. But what about the list [4,5,6]? Let’s ignore guard for a minute, and work on getting the final pairs of numbers, unfiltered by any constraint.
For each item in the first list, we need to pair it with every item in the second list. We can do that using map and the following helper function:
pair456 :: Int -> Choice (Int,Int)
pair456 x = choose [(x,4), (x,5), (x,6)]
Sure enough, this gives us all 9 combinations:
> map pair456 (choose [1,2,3])
[[(1,4),(1,5),(1,6)],[(2,4),(2,5),(2,6)],[(3,4),(3,5),(3,6)]]
But now we have two layers of lists. We can fix that using join:
join :: Choice (Choice a) -> Choice a
join choices = concat choices
This collapses the two layers into one:
> join (map pair456 (choose [1,2,3]))
[(1,4),(1,5),(1,6),(2,4),(2,5),(2,6),(3,4),(3,5),(3,6)]
Now that we have join and map, we have two-thirds of a monad! (Math trivia: In category theory, join is usually written μ.)
In Haskell, join and map are usually combined into a single operator:
-- Hide the standard versions so we can
-- reimplement them.
import Prelude hiding ((>>=), return)
(>>=) :: Choice a -> (a -> Choice b) ->
Choice b
choices >>= f = join (map f choices)
This allows us to simplify our example even further:
> choose [1,2,3] >>= pair456
[(1,4),(1,5),(1,6),(2,4),(2,5),(2,6),(3,4),(3,5),(3,6)]
Completing our monad: return
We’re getting close! We only need to define the third monad function (and then figure out what to do about guard).
The missing function is almost too trivial to mention: Given a single value of type a, we need a convenient way to construct a value of type Choice a:
return :: a -> Choice a
return x = choose [x]
(More math trivia: return is also known as unit and η. That’s a lot of names for a very simple idea.)
Let’s start assembling the pieces. In the code below, (\x -> ...) creates a function with a single argument x. Pay careful attention to the parentheses:
makePairs :: Choice (Int,Int)
makePairs =
choose [1,2,3] >>= (\x ->
choose [4,5,6] >>= (\y ->
return (x,y)))
When run, this gives us a list of all possible combinations of x and y:
> makePairs
[(1,4),(1,5),(1,6),(2,4),(2,5),(2,6),(3,4),(3,5),(3,6)]
As it turns out, this is a really common idiom, so Haskell provides some nice syntactic sugar for us:
makePairs' :: Choice (Int,Int)
makePairs' = do
x <- choose [1,2,3]
y <- choose [4,5,6]
return (x,y)
This is equivalent to our previous implementation:
> makePairs'
[(1,4),(1,5),(1,6),(2,4),(2,5),(2,6),(3,4),(3,5),(3,6)]
The final piece: guard
In our backtracking monad, we can represent failure as a choice between zero options. (And indeed, this is known as the "zero" for our monad. Not all useful monads have zeros, but you'll see them fairly often.)
-- Define a "zero" for our monad. This
-- represents failure.
mzero :: Choice a
mzero = choose []
-- Either fail, or return something
-- useless and continue the computation.
guard :: Bool -> Choice ()
guard True = return ()
guard False = mzero
And now we’re in business:
solveConstraint = do
x <- choose [1,2,3]
y <- choose [4,5,6]
guard (x*y == 8)
return (x,y)
Note that since the return value of guard is boring, we don’t actually bind it to any variable. Haskell treats this as if we had written:
-- "_" is an anonymous variable.
_ <- guard (x*y == 8)
That’s it!
> take 1 solveConstraint
[(2,4)]
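To see how guard prunes choices on its own, here is a minimal, self-contained sketch mirroring the definitions above. (The do-block runs in the ordinary list monad, which is exactly what Choice is; evens is an invented example, not from the post.)

```haskell
-- A self-contained sketch of the Choice monad from the post.
type Choice a = [a]

choose :: [a] -> Choice a
choose xs = xs

mzero :: Choice a
mzero = choose []

guard :: Bool -> Choice ()
guard True  = return ()   -- one boring choice: continue
guard False = mzero       -- no choices: this branch dies

-- Keep only the even choices.
evens :: Choice Int
evens = do
  x <- choose [1,2,3,4,5]
  _ <- guard (even x)
  return x

main :: IO ()
main = print evens  -- prints [2,4]
```

Each odd x hits a guard that "chooses" from an empty list, so that branch contributes nothing to the final answer.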
Another monad: Maybe
Every monad has three pieces: return, map and join. This pattern crops up everywhere. For example, we can represent a computation which might fail using the Maybe monad:
returnMaybe :: a -> Maybe a
returnMaybe x = Just x
mapMaybe :: (a -> b) -> Maybe a -> Maybe b
mapMaybe f Nothing = Nothing
mapMaybe f (Just x) = Just (f x)
joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe Nothing = Nothing
joinMaybe (Just x) = x
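Just as with Choice, a bind operator for Maybe falls out mechanically from map and join. A sketch (safeDiv and example are invented illustrations, not from the post):

```haskell
-- Bind for Maybe, built the same way as for Choice:
--   m >>= f  =  join (map f m)
returnMaybe :: a -> Maybe a
returnMaybe x = Just x

mapMaybe :: (a -> b) -> Maybe a -> Maybe b
mapMaybe _ Nothing  = Nothing
mapMaybe f (Just x) = Just (f x)

joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe Nothing  = Nothing
joinMaybe (Just x) = x

bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe m f = joinMaybe (mapMaybe f m)

-- A computation that can fail.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = returnMaybe (x `div` y)

-- A failing step anywhere makes the whole chain Nothing.
example :: Maybe Int
example = safeDiv 10 2 `bindMaybe` \x ->
          safeDiv x 0  `bindMaybe` \y ->
          returnMaybe (x + y)

main :: IO ()
main = print example  -- prints Nothing
```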
Once again, we can use do to string together individual steps which might fail:
tryToComputeX :: Maybe Int
tryToComputeX = ...
maybeExample :: Maybe (Int, Int)
maybeExample = do
x <- tryToComputeX
y <- tryToComputeY x
return (x,y)
Once you can explain how this works, you understand monads. And you’ll start to see this pattern everywhere. There’s something deep about monads and abstract algebra that I don’t understand, but
which keeps cropping up over and over again.
Miscellaneous notes
In Haskell, monads are normally defined using the Monad type class. This requires you to define two functions: return and >>=. The map function for monads is actually named fmap, and you can find it
in the Functor type class.
Also, every monad should obey three fairly reasonable rules if you don’t want bad things to happen:
-- Adding and collapsing an outer layer
-- leaves a value unchanged.
join (return xs) == xs
-- Adding and collapsing an inner layer
-- leaves a value unchanged.
join (fmap return xs) == xs
-- Join order doesn't matter.
join (join xs) == join (fmap join xs)
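For the Choice monad, where return x = [x], fmap = map, and join = concat, these laws are easy to spot-check on concrete values; a quick sketch:

```haskell
-- Spot-checking the three monad laws for Choice (lists).
ys :: [Int]
ys = [1,2,3]

zs :: [[[Int]]]                 -- a doubly-nested choice
zs = [[[1],[2,3]],[[4,5]]]

law1, law2, law3 :: Bool
law1 = concat [ys] == ys                             -- join (return ys) == ys
law2 = concat (map (\x -> [x]) ys) == ys             -- join (fmap return ys) == ys
law3 = concat (concat zs) == concat (map concat zs)  -- join . join == join . fmap join

main :: IO ()
main = print (law1 && law2 && law3)  -- prints True
```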
That’s it! For more information, see Wadler’s Monads for Functional Programming and the excellent All About Monads tutorial. And if you liked the backtracking monad, you’ll also like The Reasoned
(This tutorial is dedicated to “ivanm” on #haskell. I hope this helps! And many thanks to Reg Braithwaite for commenting on an early draft.)
Update: Replaced mult456 with pair456, and explained what’s going on with guard in the final backtracking example.
Dan P said about 1 hour later:
Three cheers for ‘join’!
The function ‘join’ has an important advantage over ‘bind’ – it’s not a higher order function. I think this may be what makes it easier to grasp for some people.
ivanm said about 3 hours later:
Thanks for this! It makes the monad concept a little easier.
But now you’ve defined a new error type of mzero ;)
The only thing that confuses me (as I don’t have a Haskell interpreter handy to play with) is how “guard (x*y == 8)” works… aren’t x and y lists? Or does it try every value of the two lists compared
to each other?
Eric said about 3 hours later:
Oops! I should have explained that better. I may update the main post later, but for now, here are some notes.
First, guard (x*y == 8) in the do-body is short for ignored <- guard (x*y == 8). So you're really looking at this:
solveConstraint :: Choice (Int,Int)
solveConstraint =
choose [1,2,3] >>= (\x ->
choose [4,5,6] >>= (\y ->
guard (x*y==8) >>= (\ignored ->
return (x,y))))
As to why x and y are integers, not lists: Remember that xs >>= f is just join (map f xs). So the individual bindings expand to something that looks like:
join (map (\x -> ...) xs)
...where x is clearly bound to an individual value from xs.
When I’m trying to understand something like this, I have the most luck if I go over things a couple of times (and maybe write a test program or two). Reading this kind of Haskell is more like
reading math than it is like reading regular code. Definitely a change of pace. :-)
Good luck, and please don’t hesitate to ask if you have any more questions!
quicksilver said about 16 hours later:
Actually map for Monads isn’t called ‘fmap’ it’s called liftM, and you can define liftM given >>= and return (liftM f xs = xs >>= return.f).
Monads in haskell aren’t automatically Functors, although by rights they should be, and given a Monad you should get a valid functor instance using fmap = liftM.
The concept of failure, in monads which have it, is very powerful though. Nice tutorial.
Eric said about 16 hours later:
quicksilver: Thanks for the kind words!
And yeah, I’m exceedingly annoyed that Haskell monads aren’t automatically functors. But they should be! :-)
You’re absolutely right that liftM is guaranteed to exist, and that fmap is left up to the whims of library authors. But still, the connection between fmap and monads is pretty important from a
mathematical perspective.
petekaz said 1 day later:
I don’t understand the guard (x*y==8) piece of the puzzle. If the result of this is bound to ignored, I’m missing in the case of guard True, how does it return (x,y). And then conversely, in the case
of guard False, why does it skip return (x,y)?
Eric said 1 day later:
petekaz: Yup, that’s the tricky bit.
guard returns either [()] (if it succeeds), or [] (if it fails).
So if the guard succeeds, you can imagine it as:
-- Only one choice. Pick it and
-- continue.
choose [()]
...and if it fails:
-- No choices, so just give up with
-- this branch of the computation.
choose []
If we go back to the map example, we can see how it works:
> map guard [True,False,False,True]
[[()], [], [], [()]]
When we call join, two branches of the computation disappear entirely:
> join (map guard [True,False,False,True])
[(), ()]
So where are the x and y in this example? Well, they’re the arguments of the anonymous functions in my previous comment, so they’ll be available when we need them in the final step. (It might help to
read about closures if none of this makes any sense at all.)
Alternatively, are you familiar with list comprehensions? If so, this program is equivalent to:
[(x,y) |
x <- [1,2,3],
y <- [4,5,6],
ignored <-
if (x*y == 8)
then [()]
else []]
If x*y doesn’t equal 8, then there are no values to pick from for ignored, and the final (x,y) won’t be computed.
Anyway, I hope one of these perspectives will help you puzzle it out!
Michael said 1 day later:
Nice article, Eric.
Another fun citation of type “If you like the backtracking monad…” is Oleg Kiselyov’s paper:
Eric said 1 day later:
Yeah, I was reading that yesterday evening! I recommended it highly for anybody who’s into backtracking monads.
Miles Gould said 17 days later:
A friend of mine claims he didn’t truly understand monads until he understood join.
There’s a reason why category theorists invariably define monads in terms of join rather than bind!
As for your question about monads and algebra: I don’t really understand monad transformers myself, so can’t help with that bit, but there is indeed a deep connection between monads and algebraic
theories. Any standard category theory book should help you here – try Borceux’s Handbook of Categorical Algebra. Or just google for “category theory lecture notes”.
I found enlightenment came from considering this: an /algebra/ for a monad T on a category C is an object A and a morphism a:TA->A, satisfying some obvious compatibility conditions with eta and mu.
Now, what is an algebra for the List monad, also known as the “free monoid” monad? How about the free group monad?
Sorry if I’m teaching you to suck eggs here…
Eric said 18 days later:
Initial algebras are definitely interesting! And they are related to monads, at least according to Edmund Robinson’s Variations on Algebra: monadicity and generalisations of equational theories
(which I haven’t finished reading yet).
A monad is a triple (M,η,μ), where M is a functor, and η: a → M a and μ: M (M a) → M a are natural transformations.
A monad morphism is a transformation from one monad to another, e.g., (M,η,μ) → (M′,η′,μ′). And you can build various interesting algebras by composing monad morphisms.
And that’s the bit I still haven’t wrapped my brain completely around. :-)
osfameron said 36 days later:
Your 15 minutes are not my 15 minutes ;-)
Some things are still unclear: like why you need to use “choose [1,2,3]” given that that just evaluates to [1,2,3]. I have a feeling that might be important though?
Still, working through the code bit by bit is very helpful, and I almost understand some of it now… though I am still having problems with the guard – I don’t understand how it affects the following
return statement.
I will give another “15 minutes” to this tonight if I get a chance…
Eric said 36 days later:
Y15MMY: Your 15 minutes may vary. ;-)
The choose function does nothing but make the code more readable; it’s not actually needed, not even for the type declaration.
The guard is fairly sneaky. If the condition is true, it chooses a dummy value from a single-element list and throws it away. If the condition is false, though, it "chooses" an element from an empty list.
Imagine, for a moment, that the guard condition is always true. We bind x to one of 3 values, and y to one of 3 values, giving us 3×3=9 different possible answers.
If we imagine instead that the guard condition is always false, it would choose from 0 possibilities, giving us 3×3×0=0 possible answers. Do you see how the empty choice suppresses answers? It doesn't have
to affect the return statement directly, because we never make it there.
Of course, if the guard is only false some of the time, then it only suppresses certain answers.
I hope this helps! Please don’t hesitate to ask more questions.
osfameron said 37 days later:
Thanks for the reply: ok, I think I get that better: because >>= is defined in terms of map, it combines with the output of guard as well as the unfiltered 3×3, that’s the piece that I wasn’t seeing.
I think in the description above, saying that the output of guard is “boring” maybe throws you off the scent a little? It’s only boring to us in that we don’t need a variable bound to it, but it
still participates in the join.
Steven Brandt said 106 days later:
As an old imperative programmer I can’t help but think this is just like:
for (int x = 1; x <= 3; x++) {
  for (int y = 4; y <= 6; y++) {
    if (x*y == 8) {
      list.append(new Pair(x, y));
    }
  }
}
Except the imperative code is a lot easier to understand. I’m trying really hard to understand monads and see their value, but I still can’t quite get there.
Eric said 108 days later:
Steven: Don’t assume that the simplest example of something is also the most compelling. :-)
For more interesting monads, see the the paper on composable memory transactions, the various probability monads, sigfpe’s article on graph-walking monads, and the paper on using monads for safe
hardware abstraction (PDF). If you work with .NET, you may also want to check out Microsoft’s new LINQ technology, which relies heavily on monads for database access and XML processing.
Once you understand monads, you start seeing them everywhere—they’re very general tools, and they can be used to solve a wide variety of problems.
As with any other abstraction, you can do without monads. But if one abstraction solves so many problems so elegantly, it’s worth learning about.
Erik said 138 days later:
So, before going back over this posting using a full fifteen minutes (I just skimmed it), why use an example that could be solved with the following?
fun :: [Int] -> [Int] -> [(Int, Int)]
fun r s = [(x,y) | x <- r, y <- s, x*y == 8]
I find that when toy problems with both simple and obvious solutions are used to demonstrate hard concepts it can obfuscate the idea because while you may be showing the how of the concept, you
aren’t explaining the why.
Could you maybe show an example of join that wouldn’t be simple to express with a simple list comprehension?
Eric said 138 days later:
Erik: Well, a list comprehension is a monad. Or, if you want to look at it the other way around, a monad is a generalized comprehension that can be defined for almost any parameterized data type.
For a more complicated monad, see Bayes’ rule in Haskell. For a rather different monad, see SPJ’s papers on Software Transactional Memory. Basically, by slightly generalizing list comprehensions, you
get a tool which solves a surprisingly large and diverse set of problems.
I’ll write a longer post one of these days cataloging a bunch of useful monads. Until then, would anyone like to plug their favorite monads? :-)
Erik said 139 days later:
Eric: Yes, after making that post I went on to read the “All About Monads” link you placed at the end of the post and came to that understanding. Thanks!
Michael said 153 days later:
Nice tutorial, maybe would be a good idea to abstract to tuples of arbitrary length instead of just pairs, to make the backtracking even more obvious.
Drew Vogel said 220 days later:
I understand the Haskell constructs involved in your example, but I had trouble finding the actual backtracking. I learned of backtracking as an optimization of a search algorithm that worked by
tossing out large chunks of the search space. Because 2*4 is the only combination that equals 8, it isn’t obvious that the “take 1” is throwing out the rest of the problem space. Perhaps you could
use a combination of the even function and infinite lists to increase the search space and illustrate how you are tossing out a portion of the search space.
Also, a single file with the final code would be wonderful. It’s hard looking at it outside of vim :)
Sampo said 221 days later:
Now that was helpful. Time well spent, even though it was a bit more than 15 minutes :-) I think you managed to bring me to the verge of understanding monads.
I think the Option-monad explanation did it for me… I'm reiterating just to see if I really understand: I see the functions returnMaybe, mapMaybe and joinMaybe as 'rules'. For example mapMaybe
tells how to handle function calls where arguments belong to Maybe -monad. 1) If function gets called with Nothing, the return value will always be Nothing. 2) If function gets called with a valid
value Just(x), the value x is extracted from Just(x) and handed to the function.
This ensures the Maybe works as expected but not every function needs to know how to handle them.
joinMaybe defines how to handle the returned (nested) values. If the nested values all are Just(something) we can get rid of extra Justs and keep values. If there is a single Nothing, everything will
end up being Nothing.
Doug Auclair said 246 days later:
Finally! (an aside: I did read the LogicT paper) I’ve been looking for concise backtracking in Haskell. Of course, it’s been there all the time, and there’s been several mentions that monads permit
backtracking, but your tutorial, Eric, gave me a simple and clear example. Thank you. I’ll start working through some other (similar) examples.
I’ve also read Reasoned Schemer (Dr. William Byrd autographed it for me at the PADL 2006) several times, so I’m looking to combine that and LogicT and your example into a full-blown (concise) rule-
or relational-based programming system. Backtracking is key, but do I need full unification? Not too sure.
Thanks, again.
Doug Auclair said 247 days later:
Of course, it’s been pointed out in other papers about monads that the do notation:
do x <- choose [1,2,3] ...
is very similar to the list comprehension notation: [x | x <- [1,2,3]]
(in fact, in Gofer, Haskell's predecessor, they were so similar that they were the same thing)
And, seeing that the type Choice is an alias for the list type, the monadic expression can simply be rewritten: [(x,y) | x <- [1,2,3], y <- [4,5,6], x*y == 8]
Haskell programmers claim their list comprehensions are even more powerful than Prolog's, and the above example is good supporting evidence …
Dan Dingleberry said 261 days later:
Dude, WTH is a monad? at least define the dang thing first for casual passers by.
Eric said 261 days later:
Monads are a somewhat mysterious programming language feature, most commonly encountered in Haskell. See the examples I link to in the first paragraph.
Basically, monads are a clever bit of category theory that can be used to structure control flow in a program.
If you weren’t already thinking, “Hmm, I really ought to learn more about monads,” then you might be happier avoiding the entire topic. :-)
Michael said 261 days later:
To add to what Eric wrote, I’d go so far as to say that monads aren’t even all that mysterious, but in fact you will have an easier time understanding them (at least in terms of their utility to a
programmer) if you just forget about category theory entirely.
From a programmer’s perspective, a monad is more or less a way of factoring out some common code from various types of operations that are composed sequentially. The fact that this is possible seems
strange at first, and I think that's what leads people to feel monadic types are confusing. Well, that, and the terminology, which can be misleadingly abstruse.
David said 320 days later:
Thanks for this! I’ve read a lot about Haskell but I’ve been confounded by monads. I’ve read every tutorial I’ve found and each helps me understand a little bit more. This one was very helpful. I
have an unanswered question though…
The map part of a monad doesn’t seem to have to be the actual “map” function. The Maybe monad’s “map” seems to just propagate the Nothing or Just along as appropriate. Is this correct?
How does this work in the case of the IO monad? I’ve been staring at the GHC prelude code for the past 15 minutes but I’m not fully comprehending what it is doing. I think its >>= function is just
doing a function application instead of a “map”, but I’m not sure (that case part is confusing me). I do see the “join” where it strips out nested IOs. Can anyone help out?
Eric said 321 days later:
David: Excellent questions! The “map” function used in monads is a generalization of the regular “map”.
This will make more sense if you think of "Maybe" as a very specialized container: a container which can only hold a single element. From that perspective, "mapMaybe" looks very natural:
mapMaybe :: (a -> b) -> Maybe a -> Maybe b
mapMaybe f Nothing = Nothing
mapMaybe f (Just x) = Just (f x)
In English, this says: “If the container is empty, leave it that way. If it contains an element, apply ‘f’ to the element.” Basically, it’s no different than applying “map” to a zero- or one-element
As for the IO monad, don’t worry too much about it. :-) It’s actually a pretty atypical monad.
But if it helps, try reading the type "IO a" as "an IO action returning type a". Again, if you squint at it right, you can think of an IO action as a single-element collection. You just have to do some
IO to find out what element is stored in it!
The “mapIO” function can be defined as:
1. Perform the IO action and extract the value of type “a”.
2. Calculate “f a”, and jam the result back into a dummy IO action.
Does this help any?
Joel Hough said 394 days later:
Great article! A few things clicked for me while reading. I found that writing out the choice monad example step by step helped me to understand. Like so:
choose [1,2,3] >>= (\x ->
choose [4,5,6] >>= (\y ->
guard (x*y==8) >>= (\ignored ->
return (x,y))))
is equivalent to…
step1 = concat $ map step2 [1,2,3]
step2 x = concat $ map (step3 x) [4,5,6]
step3 x y = concat $ map (step4 x y) (if x*y == 8 then [()] else [])
step4 x y _ = [(x, y)]
Thanks for the epiphanies!
Doug Auclair said 428 days later:
Linking this article on an introductory post about Maybe, List and Either monads and their uses for nondeterministic programming.
Coincidentally, I wrote a same-game-like puzzle solver (cf. comp.lang.haskell), where I needed to flatten a list of lists. I didn’t see `flatten` in the Prelude or List, so I wrote it out. I then
realized that `msum` is flatten for lists of depth 1 … then I reread this article where join :: m (m a) → m a does the same thing. Applied that in my code and noticed I had this pattern: join $ map f
x … and replaced that with x >>= f.
Sigh! Months later and I'm still not recognizing when to use a simple bind!
A Bayesian is someone who interprets probability as degree of belief. This interpretation contrasts with the frequentist interpretation.
Bayesian Interpretation of Probability
Almost anyone with the slightest numeracy has an understanding of probability. A statement such as "tomorrow it will rain with 60% probability" makes sense to anyone who has ever watched a weather
forecast. If we ask about the meaning of this statement, we might get a response such as "well, it's not certain that tomorrow it will rain, but it's rather likely that it will." If we inquire
some more, we might see that some people will carry an umbrella with them tomorrow, and some might not. Some people believe that a 60% chance of rain is a figure high enough to warrant a preventive
umbrella, and some do not.
This is a very common-sense interpretation of probability. A probability is a number between zero and one that we assign to statements of which we are uncertain: zero means absolute certainty that the statement is false, one means absolute certainty that it is true, and there are varying degrees of belief in between. This is how a Bayesian interprets a probability statement. Most people, unless they have been exposed to some mathematical sophistication, hold a Bayesian interpretation of probability without explicitly knowing so.
Under the Bayesian interpretation, a probability of 60% means different things to different people; some are willing to risk their money at a game of poker and some are not. Probabilities are thus subjective measurements, and this gives Bayesians their nickname: subjectivists. This is a strong objection against the Bayesian interpretation of probability. Some people believe that probabilities are numbers that can be assigned objectively to statements about the world, that objectively there either is or isn't a good reason for playing poker, and that anyone who doesn't adhere to these reasons is simply being irrational.
But this is mere philosophy. We can discuss all day the meaning of probability until the rain soaks us wet. The mathematical treatment of probability leaves no room for interpretation. There are
certain rules to follow while we perform probabilistic calculations, and they are based on three simple axioms. Mathematical abstract nonsense allows us to circumvent unpleasant discussions.
Let us stick to pure mathematics for a moment. Let P(A|B) denote the conditional probability that event A happens given event B has happened; by definition, P(A|B) = P(A & B)/P(B), where P(A & B)
denotes the probability that both events A and B happen simultaneously. We wish to find a formula for P(B|A), e.g. if we know the probability that it rains on Tuesdays, we would like to calculate the
probability that today is Tuesday given that it has rained. Straight from definitions,
             P(A & B)     P(A|B) P(B)
    P(B|A) = ---------  = ------------.
               P(A)           P(A)

This is known as Bayes' Theorem.
In most situations of interest, we do not know P(A) a priori and must calculate it from something else. Often, what we have is a partition of the sample space into mutually exclusive, exhaustive events B[1], B[2], ..., B[n] such that Σ[i=1]^n P(B[i]) = 1. Under such conditions, by the law of total probability Bayes' Theorem becomes
                    P(A|B[k]) P(B[k])
    P(B[k]|A) = --------------------------.
                 Σ[i=1]^n P(A|B[i]) P(B[i])
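To make the partition form concrete, here is a small Python sketch using the rain/Tuesday example from above; the numbers are invented for illustration:

```python
def bayes_posterior(priors, likelihoods, k):
    """P(B_k | A) from priors P(B_i) and likelihoods P(A | B_i),
    using the law of total probability for the denominator."""
    total = sum(p * l for p, l in zip(priors, likelihoods))
    return likelihoods[k] * priors[k] / total

# Partition: today is Tuesday vs. not Tuesday.
priors = [1/7, 6/7]          # P(Tuesday), P(not Tuesday)
likelihoods = [0.5, 0.2]     # P(rain | Tuesday), P(rain | other day)
p = bayes_posterior(priors, likelihoods, 0)
print(p)  # P(Tuesday | rain)
```

Since the posterior is computed over the whole partition, the values for k = 0, ..., n-1 always sum to one.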
Another case that is often interesting involves continuous random variables and their density functions. Suppose that θ is some random variable with density f(θ), and x is a vector of random
variables (such as a sequence of observations!) with joint density function g(x). Then Bayes' Theorem takes the form
                 g(x|θ) f(θ)
    f(θ|x) = ------------------,
              ∫ g(x|θ) f(θ) dθ
where the discrete sum has been replaced by a continuous integration taken over all possible values of θ.
Why is this any of our business? Because Bayesians take their name from the application of this theorem, first formulated in a paper by the late Reverend Thomas Bayes (1702-1761) and published posthumously in 1763. The above development is purely mathematical and follows from the axioms and definitions of probability. Bayesians, however, have an interesting way of applying this to the world.
Bayesians interpret Bayes' Theorem as a mathematical statement of how experience modifies our beliefs.
Bayesian Statistics
The Bayesian interpretation only becomes important once we start to indulge in statistical inference. The situation is the following: the world is one big complicated mess, and there are many things
of which we aren't sure. Nevertheless, there are some things that we can approximate, and our beliefs about the world are among them. We come to the world with certain pre-suppositions and beliefs,
and we refine and modify them as we make observations. If we once believed that on any given day it is equally likely to rain or not, we might modify our beliefs after spending a few months in the Amazon.
We therefore postulate a model of the world: there are certain parameters that describe the probability distributions from which we take samples. Such parameters could be the proportion of people who respond positively to a certain medication, the mean annual temperature of the Gobi desert, or the mass of a proton. These parameters are fixed, but we allow ourselves to express our uncertainty about their true values by giving them probability distributions, assigned on the basis of hunches or intuitive reasoning. The distribution we assign before making any observations is called a prior distribution. Then we conduct some experiments and apply Bayes' Theorem to modify it into a posterior distribution that reflects the new information we have. This procedure can be repeated, with our posterior as a new prior distribution, and further experimentation may yield an even better second posterior distribution.
I will present the general idea in more detail with an example. Suppose that we would like to estimate the proportion of Bayesian statisticians who are female. We will begin with a clean slate and
make no assumption as to what this proportion is, and we shall quantify this un-assumption by stating that, as far as we know, the proportion of female Bayesians is a random variable that is equally
likely to lie anywhere between zero and one (this is known as a uniform random variable or a uniform distribution). Let θ denote this uniform random variable. Its density function is
            / 1   if 0 < θ < 1,
    f(θ) = {
            \ 0   otherwise.
This shall be our prior distribution. In this case, it is called an uninformative prior distribution, because it does not tell us to expect any particular value of θ. Its graph is a boring straight horizontal line at height 1 over the interval (0, 1).
This is a very bare-bones model about the world so far, but it's about to get better. We go out amongst all our Bayesian friends (ahem, a "random sample") and count the number of X chromosomes, to
the best of our abilities. Suppose there were twelve X chromosomes and eight Y chromosomes (eight boys, two girls). Let x denote the random variable "number of female Bayesians in a random sample of
10"; this is a binomial random variable with probability of success equal to θ. In our example, we observed on this particular instance that x = 2. The conditional probability density function of x
given θ is
              / C(10, x) θ^x (1-θ)^(10-x)   if x = 0, 1, ..., 10,
    g(x|θ) = {
              \ 0                           otherwise,

where C(10, x) denotes the binomial coefficient.
In light of this new information, we shall now modify the distribution of θ. To this effect, we invoke Bayes' Theorem that in this instance reads as
                   g(x=2|θ) f(θ)
    f(θ|x=2) = ----------------------.
                ∫ g(x=2|θ) f(θ) dθ
A computation now ensues:
                 C(10,2) θ^2 (1-θ)^8
    f(θ|x=2) = ------------------------
                ∫ C(10,2) θ^2 (1-θ)^8 dθ

                    θ^2 (1-θ)^8
             = -------------------
                ∫ θ^2 (1-θ)^8 dθ

             = 495 θ^2 (1-θ)^8,

where the integrals are taken over [0, 1]. (The integration can be performed by observing that the integrand is the kernel of a Beta distribution with parameters α = 3 and β = 9; the integral equals 1/495.)
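The constant 495 can be verified directly: the integral in the denominator is the Beta function B(3, 9) = Γ(3)Γ(9)/Γ(12) = 2!·8!/11!, and a quick Python check confirms its reciprocal:

```python
from math import factorial

# B(3, 9) = 2! * 8! / 11!; the posterior's normalizing constant is 1/B(3, 9).
normalizer = factorial(11) // (factorial(2) * factorial(8))
print(normalizer)  # 495
```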
This is the posterior distribution. We recognise it to be a Beta distribution with parameters α = 3 and β = 9.

[graph: the Beta(3, 9) density on (0, 1), a single hump peaking near θ = 0.2]
Thus, even though we still are not sure of the true proportion of female Bayesians in the world, experience has taught us that we may expect about 20% of all Bayesians in the world to be female, and we can even quantify with probability statements the strength of our beliefs. The cool thing is that we can keep on modifying our distribution as we see fit. We could perform more experiments and surveys with this Beta distribution as our new prior distribution. If we work out another example, we will get another Beta distribution with different parameters. I was a bit sneaky and chose the uniform distribution because I knew it was a Beta distribution with parameters α=1 and β=1, and I knew that I would get another Beta distribution for the posterior. The mathematics doesn't always work out so nicely, unfortunately. When it does, and the prior and posterior distributions belong to the same family, we call the prior a conjugate prior for the likelihood.
In the Bayesian framework, we can construct probability intervals, in analogy to the more common confidence intervals of frequentist statistics, except that now we can make true probability
statements as to where the parameter will lie, because we have assigned a probability distribution to said parameter. For example, with our posterior distribution, we can correctly make a statement
such as "as far as we know, the proportion of female Bayesians is between 0.1 and 0.3 with probability 59.8%".
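The 59.8% figure can be reproduced by numerically integrating the posterior density 495 θ^2 (1-θ)^8 from 0.1 to 0.3; a midpoint-rule sketch in Python:

```python
def posterior(theta):
    # Beta(3, 9) density: the posterior derived above.
    return 495 * theta**2 * (1 - theta)**8

def integrate(f, a, b, n=100_000):
    """Midpoint-rule numerical integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

p = integrate(posterior, 0.1, 0.3)
print(round(p, 3))  # 0.598
```

The same routine applied over the full interval (0, 1) returns 1, confirming the density is properly normalized.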
Bayesian statistics begins here, with the assumption that it makes sense to quantify our beliefs by probabilities. More sophisticated techniques rely on this basic postulate, and prior and posterior probability distributions will almost always be present in one form or another during our investigations. Some people find Bayesian statistics more intuitive and straightforward than the complicated interpretation that frequentist statistics requires. It is perhaps for these reasons that Bayesian statistics has gained popularity in recent years, although it is probably safe to say (with probability 80%) that the majority of statistical work conducted nowadays is of a frequentist fashion.
Some Objections to Bayesian Interpretations
Not everyone is convinced by Bayesian statistics. For some, the base assumptions are very fishy. Probabilities are subjective measurements? Nonsense! And how are you going to choose your prior
distribution? Different priors will yield different posteriors; your prejudices will forever affect the way you see the world! Not to mention that calculations are often more involved in Bayesian
statistics, and complicated integrals will abound. It also seems to require more assumptions than frequentist statistics, and it is a good rule to take the simplest model of the world possible. These
are all valid points, and I shall briefly address them in turn.
That probabilities are subjective measurements should not bother us, since the actual mathematical theory itself does not make any subjective judgements based upon the numbers. Bayesian statistics
offers probabilities and numbers, beginning with an assumption that it makes sense to quantify belief with probability, but does not actually impose any further subjective judgements. Instead, the
theory allows every individual to make the appropriate decision. As for the impact of the prior distribution, there are few situations where we are so completely ignorant as to have to assign a completely arbitrary prior distribution. Even where our knowledge is very limited, we can reflect this with an uninformative uniform prior over some interval. Hopefully, the impact of our prior distribution will fade as we make more and more observations; in fact, it can be shown that over repeated experimentation, almost any two reasonable prior distributions lead to posterior distributions that converge to each other. The complexity of calculations should not bother us so much in this day where computers have facilitated numerical methods. We can always resort to
them if needed. As for the extra assumptions required to do Bayesian statistics, I will say that yes, the Bayesian model is slightly more complicated than the frequentist, but it is thanks to this
that the Bayesian model also has the ability to predict more. It is also true that sometimes nature just isn't as simple as we might hope, and a more complicated model is necessary.
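The claim that the prior's influence fades under repeated experimentation is easy to illustrate: update two very different Beta priors with the same binomial data (here a 20% success rate, a number chosen by me purely for illustration) and watch their posterior means converge:

```python
def posterior_mean(alpha, beta, successes, trials):
    """Posterior mean of a Beta(alpha, beta) prior after binomial data.
    By Beta-Binomial conjugacy the posterior is Beta(alpha + s, beta + t - s),
    whose mean is (alpha + s) / (alpha + beta + t)."""
    return (alpha + successes) / (alpha + beta + trials)

# Same data, two very different priors:
for trials in (10, 100, 10_000):
    s = trials // 5                       # observed 20% successes
    m1 = posterior_mean(1, 1, s, trials)  # uniform prior
    m2 = posterior_mean(9, 1, s, trials)  # prior heavily favouring large θ
    print(trials, round(m1, 3), round(m2, 3))
```

At 10 observations the two posterior means are far apart; at 10,000 they agree to three decimal places.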
Bayesian statistics are favoured in many areas of modern scientific research, particularly in biostatistics. The Bayesian model also has been used to great advantage in computer algorithms for
blocking unwanted spam email, for example. I can understand why, regardless, many people would prefer to stick to the frequentist interpretation of probability and remain as objective as possible. It
is important to keep extraneous assumptions to a minimum.
Math Forum Discussions
Topic: Factoring
Replies: 1 Last Post: Apr 21, 1999 9:28 AM
Posted: Jul 30, 1998 1:11 PM
Here is a very useful method of factoring. Sorry, this is not
formatted very well here. If you would like a better format, send me
an e-mail request and I will send an attachment. If you find this
method useful, I would love to hear from you. mjbell@gte.net
By Mary Jane Bell
The object is to present students with an alternative to
trial-and-error factoring techniques for quadratic trinomials. The
method is also valuable for factoring expressions with 4 terms that
can be factored into two binomials. The table becomes a fun "puzzle"
that provides students with a clear-cut starting place, sequential
steps, and an immediate check on their work.
VERTICAL TEAMING (Pre-algebra)
The object in a pre-algebra class is to practice multiplying and
factoring integers in a fun "puzzle" and with a format that can work
easily into algebraic factoring.
Start with a 3 x 3 table with one extra box on top.
The basic operational rules are:
MULTIPLY ACROSS • MULTIPLY DOWN • ADD UP (to the top box)
| Factor Table | 18 |
| 3 | 2 | 6 |
| 4 | 3 | 12 |
| 12 | 6 | 72 |
By strategically leaving blanks, the table becomes a puzzle. You may
add signs and variables as the students are ready for them. (I
usually start with pre-algebra examples to get the students familiar
with the operation of the table.)
PRE-ALGEBRA example #1:
| Factor Table | - 5 |
| | | | < Key
| | | | < Key
| - 4 | 6 | | < Checking Box
1. Start by filling in the "Checking Box". Remember: MULTIPLY ACROSS.
2. Find the two "Key Boxes" in any order. Remember that the product
of the 2 Keys is on the bottom and the sum is on the top. So you are
looking for two numbers whose product is -24 and whose sum is -5.
3. The rest of the table is easy. You may choose which of the four
remaining "Factor Boxes" to do next. Whichever is chosen should be
the GCF of the product of that row and column. (If you don't use the
GCF, you may get into fractions, which can also be an interesting and
worthwhile exercise for pre-algebra students.) You have some options
with signs. If you use some large numbers, you may want to let the
students use calculators to find the key numbers.
Solution to example #1
| Factor Table | - 5 |
| 4 | - 2 | - 8 | < Key
| - 1 | - 3 | 3 | < Key
| - 4 | 6 | - 24 | < Checking Box
ALGEBRA TABLE: Example #2:
After the students understand the basic operation of the table they
are ready for the following algebraic example.
| Factor Table | - 7x |
| | | |
| | | |
| 6x^2 | - 3 | |
Solution to example #2:
The boxes are marked (in parentheses) in the order completed, but the
students certainly have some choices. The table is quick and easy to
check at a glance in any direction.
| Factor Table | - 7x |
|(4) 3 x | (5) - 3 | (2) - 9 x |
|(6) 2 x | (7) 1 | (3) 2 x |
| 6x^2 | - 3 | (1) -18x^2 |
What the students don't know is that they just factored their first trinomial:
6x^2 - 7x - 3 = (3x + 1)(2x - 3)
Note that the factors are found by taking the diagonals of the four
factor boxes in the upper left of the table. If the students have any
doubts, they can multiply it out and observe where each term in the
table fits into the product. This is not a problem you would usually
attempt as the first try at factoring a trinomial, but it is possible
with the factor table. Most classes of beginning algebra should be
able to do this in one long block period or maybe 2 shorter periods.
The main thing the students need to remember is where to put the
terms. If a trinomial is in standard order, ax^2 + bx + c, then the
first and last terms must go in the first 2 boxes on the bottom row
and the middle term always goes at the top. The factors are always
the diagonals of the four "Factor Boxes" in the upper left corner.
| Factor Table | middle |
| factor | factor | key |
| factor | factor | key |
| first | last | checking |
One nice thing about the Table is that it works equally well on perfect
square trinomials and the difference of two squares (let the middle
term = 0), and the students quickly see the pattern for these special
cases that allows them to do them without the table. If the leading
coefficient is 1, most students see that the checking number is the
same as the last term; all they need are the key numbers for the
last term and the middle coefficient, and many students begin to do
these in their heads fairly rapidly. It also works exceptionally well
with trinomials that are prime over the integers: they quickly see
that they cannot find the 2 key numbers. If the checking number is
large, they may need to use a calculator to try different key factors.
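The whole key-number search is mechanical enough to automate. Here is a hedged Python sketch of the method for integer trinomials ax^2 + bx + c with a > 0 and c ≠ 0 (the function name and structure are my own, not from the post):

```python
from math import gcd

def factor_table(a, b, c):
    """Factor a*x^2 + b*x + c (integers, a > 0, c != 0) as (m*x + n)(r*x + s),
    or return None if the trinomial is prime over the integers."""
    check = a * c                      # the "checking box": product of first and last
    for p in range(-abs(check), abs(check) + 1):
        if p == 0 or check % p != 0:
            continue
        q = check // p                 # candidate "key" numbers: p * q = a * c
        if p + q != b:                 # the keys must add up to the middle term
            continue
        g = gcd(a, abs(p))             # GCF of the first column of factor boxes
        m, r = g, a // g
        if q % r != 0:
            continue
        n, s = q // r, p // g          # the remaining factor boxes
        if m * r == a and n * s == c and m * s + n * r == b:
            return (m, n, r, s)        # read off the diagonals
    return None

# The post's trinomial: 6x^2 - 7x - 3 = (3x + 1)(2x - 3)
print(factor_table(6, -7, -3))  # (3, 1, 2, -3)
```

A prime trinomial such as x^2 + x + 1 returns `None`, mirroring the observation above that no key numbers can be found.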
This is one of my favorite uses of the factor table. These are the
expressions that normally are factored by grouping. That has always
been a difficult procedure for my students to understand and then to
execute correctly. It is now a breeze. Note this technique only
works for expressions that will factor into two binomials.
Example #3
1. Factor: 3x^3 + 6x^2y - 2xy^2 - 4y^3
2. Make sure the terms are in a standard order. See example 4 for
terms that do not appear to have a standard order.
3. The first and last terms go in their traditional boxes.
4. The two middle terms become the "Key Numbers".
5. The top term is not necessary, but you can put the two middle terms
here so that the students see that this is really no different than a
trinomial with 2 middle terms.
6. If the checking box does not work in both directions, then it
cannot be factored into 2 binomials.
| Factor Table | 6x^2y - 2xy^2 |
| 3x^2 | 2y | 6x^2y |
| x | - 2y^2 | - 2xy^2 |
| 3x^3 | - 4y^3 | - 12x^3y^3 |
Thus: 3x^3 + 6x^2y - 2xy^2 - 4y^3 = (3x^2 - 2y^2)(x + 2y)
Example #4
Sometimes it is difficult to decide on a standard order and the
students just work with the 4 terms until the "Checking Box" works.
They begin to see the pattern quickly that the product of the two
outer terms must equal the product of the two inner terms.
Factor: 2ab - 3cd - bc + 6ad
Change the order so that the "Checking Box" will work.
New order: 2ab - bc + 6ad - 3cd
| Factor Table | - bc + 6ad |
| b | - c | - bc |
| 2a | 3d | + 6ad |
| 2ab | - 3cd | - 6abcd |
Factors: 2ab - bc + 6ad - 3cd = (b + 3d)(2a - c)
The thing that I like most about the factor table is that it gives the
student something to write immediately. Many students will just sit
and stare at the problem and hope the factors will miraculously pop
out. If they don't see the factors immediately, all I have to do is
say "Make a factor table" and they get it quickly. I have not solved
the problem of removing the GCF first, but the factor table is so easy
that most students can still factor it with the GCF in place. (Maybe
they will notice it later.) This is one of the first things that I
teach all my students (from algebra 2 to calculus). Many of them tell
me that they could never factor until they used the factor table. It
makes factoring fun and easy. I know that factoring is not considered
an "in" topic with people who want to do it all with their graphing
calculator, but I still feel that factoring is a proper and necessary
skill for people who want to be well rounded in mathematics.
US Patent 5,023,912 - Pattern recognition system using posterior probabilities
1. Field of the Invention
The present invention relates to a pattern recognition system capable of improving recognition accuracy by combining posterior probabilities obtained from similarity values (or differences between
reference patterns and input patterns) of input acoustic units or input characters in pattern recognition such as speech recognition or character string recognition and, more particularly, to a
pattern recognition system in which an a priori probability based on contents of a lexicon is reflected in a posterior probability.
2. Description of the Related Art
Known conventional pattern recognition systems recognize continuously input utterances or characters in units of word or character sequences. As one of such pattern recognition systems, a connected
digit speech recognition algorithm using a method called a multiple similarity (MS) method will be described below.
Continuously uttered input utterances in a system are divided into frames of predetermined times. For example, an input utterance interval [1, m] having 1st to m-th frames as shown in FIG. 1 will be
described. In preprocessing of speech recognition, a spectral change is extracted each time one frame of an utterance is input, and word boundary candidates are obtained in accordance with the
magnitude of the spectral changes. That is, a large spectral change can be considered evidence of a word boundary. In this case, the term "word" means a unit of an utterance to be recognized. The speech recognition system referred to here is composed of a hierarchy of lower to higher recognition levels, e.g., a phoneme level, a syllable level, a word level and a sentence level. The "words" as units of utterances to be recognized correspond to a phoneme, a syllable, a word, and a sentence at the corresponding levels. Word recognition processing is executed whenever a word boundary candidate is detected.
In the word sequence recognition processing, the interval [1, m] is divided into two partial intervals, i.e., intervals [1, ki] and [ki, m]. ki indicates the frame number of the i-th word boundary
candidate. The interval [1, ki] is an utterance interval corresponding to a partial word sequence Wi', and the interval [ki, m] is a word utterance interval corresponding to a single word wi. A word sequence Wi is represented by:

Wi = Wi' + wi (1)

and corresponds to a recognition word sequence candidate of the utterance interval [1, m] divided at the i-th word boundary candidate. The recognition word sequence candidates Wi are obtained for all the word boundary
candidates ki (i=1, 2, . . . , l). Of these candidates thus obtained, a word sequence W having a maximum similarity value (value representing a similarity of this pattern with respect to a reference
pattern) is adopted as a recognition word sequence of the utterance interval [1, m]. Note that l represents the number of recognition word sequence candidates corresponding to partial intervals to be
stored upon word sequence recognition and is a parameter set in the system. By sequentially increasing m by this algorithm, recognition word sequences corresponding to all the utterance intervals can
be obtained.
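The search just described is a one-pass dynamic program over boundary candidates: the best word sequence ending at frame m is the best prefix ending at some candidate k, extended by one word over [k, m]. A simplified Python sketch, where the scoring function is a toy stand-in for the multiple similarity matcher:

```python
def best_sequence(num_frames, boundary_candidates, word_score):
    """For each frame m, keep the best-scoring word sequence covering [1, m].
    best[m] = (score, words); word_score(k, m) -> (score, word) rates the
    single-word interval [k, m]."""
    best = {0: (0.0, [])}                  # empty sequence before frame 1
    for m in range(1, num_frames + 1):
        candidates = []
        for k in boundary_candidates(m):   # k < m; k = 0 means the start
            if k in best:
                s_prefix, words = best[k]
                s_word, word = word_score(k, m)
                candidates.append((s_prefix + s_word, words + [word]))
        if candidates:
            best[m] = max(candidates)      # keep only the best split at m
    return best.get(num_frames)

# Toy example: 4 frames, every frame is a boundary candidate, and the
# matcher strongly prefers two-frame "words".
score = lambda k, m: (1.0 if m - k == 2 else 0.1, f"w[{k},{m}]")
print(best_sequence(4, lambda m: range(m), score))
```

With these toy scores the best segmentation splits the four frames into two two-frame words.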
In the above continuous speech recognition method, the number of input words is unknown. Therefore, in order to correctly recognize an input utterance pattern L as a word sequence W, whether each
detected interval correctly corresponds to an uttered word must be considered. Even if this is considered, it is difficult to obtain a high recognition rate in the word sequence recognition as long
as the similarity values are merely combined. This is because the similarity is not a probabilistic measure.
Therefore, some conventional systems transform an obtained similarity value into a posterior probability and use this posterior probability as a similarity measure for achieving higher accuracy than
that of the similarity.
Assume that speech recognition is to be performed for an input word sequence W = w1 w2 . . . wn including n words belonging to word set C = {c1, c2, . . . , cN} so as to satisfy the following two conditions:
(1) A word boundary is correctly recognized.
(2) The word category of each utterance interval is correctly recognized.
In this case, as shown in FIG. 2, assume that each word wi corresponds to a pattern li in each partial utterance interval to satisfy the following relation:
L=l1 l2 . . . ln
In this case, if the word sequence W has no grammatical structure, wi and wj can be considered independent events (i≠j). Hence the probability that each utterance interval is correctly recognized to be a corresponding word is represented by the following equation:

P(W|L) = P(w1|l1) P(w2|l2) . . . P(wn|ln) (2)

In this equation, P(W|L) is called the likelihood. Upon calculation of P(W|L), in order to avoid repeated multiplication, logarithms of both sides of equation (2) are often taken to obtain the logarithmic likelihood:

log P(W|L) = Σ[i=1 to n] log P(wi|li) (3)

In this equation, P(wi|li) is a conditional probability that an interval li corresponds to wi and is the posterior probability to be obtained.
Therefore, by transforming an obtained similarity value into a posterior probability by a table, a high recognition rate can be obtained.
Since it is practically difficult to obtain the posterior probability P(wi|li), however, a similarity value is normally used instead of a probability value, while properly biasing the similarity value to make it approximate a probability value. For example, Ukita et al. performed approximation by an exponential function as shown in FIG. 3 ("A Speaker Independent Recognition Algorithm for Connected Word Boundary Hypothesizer," Proc. ICASSP, Tokyo, April, 1986):

P(wi|li) ≈ A·B^si (4)

Taking the logarithm of equation (4) and using the relation A·B^Smax = 1.0 gives:

log P(wi|li) ≈ (si - Smax) log B (5)

That is, by subtracting a fixed bias Smax from the similarity si, a similarity value is transformed into (the logarithm of) a probability value. When this measure is used in connected digit speech recognition, the bias Smax is set to be 0.96.
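Under this approximation, scoring a candidate word sequence reduces to summing bias-corrected similarities, since the common factor log B does not affect the ranking of candidates. A small Python sketch with invented similarity values:

```python
def log_likelihood(similarities, s_max=0.96):
    """Approximate log-likelihood of a word sequence: each word's similarity
    is shifted by the fixed bias Smax, then summed (the constant factor
    log B is dropped, as it does not change the ranking)."""
    return sum(s - s_max for s in similarities)

# Two candidate segmentations of the same utterance:
print(log_likelihood([0.94, 0.91]))        # two well-matched words
print(log_likelihood([0.95, 0.60, 0.93]))  # one poorly matched word
```

The first candidate scores higher, so the decoder would prefer the two-word segmentation.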
A posterior probability curve, however, is not generally fixed but varies with the size and contents of the lexicon (e.g., whether the number of similar words is large). Therefore, the conventional method of transforming a similarity value into a posterior probability on the basis of only one fixed curve, as described above, cannot perform recognition with high accuracy across many applications.
As described above, in the conventional pattern recognition system for estimating similarity by transforming the similarity into a posterior probability, a transformation curve for obtaining the
posterior probability is approximated to a fixed curve because it is difficult to obtain a curve corresponding to the contents of a lexicon or the number of words. Therefore, recognition cannot be
performed with high accuracy.
It is an object of the present invention to provide a pattern recognition system capable of performing recognition with high accuracy by performing similarity-posterior transformation on the basis of
a parameter easily obtained by learning the training data belonging to the lexicon.
The pattern recognition system according to the present invention performs posterior probability transformation processing for transforming a similarity value calculated from the feature vectors of
an input pattern and a reference pattern for each category into a posterior probability calculated on the basis of the recognized category, the calculated similarity and a transformation parameter
acquired from learning in advance.
The transformation parameter is a parameter set including parameters for defining a distribution of similarities of correctly recognized input patterns in recognition processing acquired from the
similarity value training data of each category, parameters for defining a distribution of similarities of erroneously recognized input patterns in the recognition processing, and a weighting
coefficient ω required for calculating the posterior probability from the distributions of the two parameters. In transformation calculation, the posterior probability is calculated on the basis of
the similarity value calculated and the above transformation parameter set.
That is, in the pattern recognition process, predetermined calculation is performed by using transformation parameters corresponding to the recognition result, thereby transforming a similarity value
into a desired posterior probability. In addition, the transformation requires complicated calculations. Therefore, by setting the calculation results into a table in advance, a processing speed can
be increased.
Therefore, according to the pattern recognition system of the present invention, a correct posterior probability transformation parameter can be obtained by a small number of samples, and the
accuracy of recognition processing can be greatly improved by using the parameters.
FIG. 1 is a view for explaining an input utterance interval and a word boundary candidate;
FIG. 2 is a view showing a correspondence between an utterance pattern and a word sequence;
FIG. 3 is a graph showing an approximate transformation function used in a conventional system;
FIG. 4 is a block diagram of a continuous speech digit recognition system according an embodiment of the present invention;
FIG. 5 is a block diagram showing an arrangement of a similarity-posterior probability transformation section of the system shown in FIG. 4;
FIG. 6 is a flow chart showing parameter learning steps of the similarity-posterior probability transformation section of the system shown in FIG. 4; and
FIG. 7 is a graph showing posterior probability curves obtained in the parameter learning steps of the similarity-posterior probability transformation section of the system shown in FIG. 4.
A principle of a pattern recognition system according to an embodiment of the present invention will be described below.
In the system of the present invention, posterior probability transformation processing for transforming a similarity value calculated from a feature vector of an input pattern and a reference
pattern for each category into a posterior probability is typically performed by a transformation parameter memory section and a transformation calculation section to be described below or by a table
having functions of the two sections.
The transformation parameter memory section stores, in units of categories, a parameter set including parameters (α, β) for defining a distribution of similarity values of patterns correctly recognized in recognition processing, derived from the similarity value training data; parameters (ᾱ, β̄) for defining a distribution of similarity values of patterns erroneously recognized in the recognition processing; and a weighting coefficient ω required for calculating a posterior probability from the distributions of the two parameter sets.
The transformation calculation section calculates a posterior probability on the basis of a similarity value and the parameter set stored in the above transformation parameter memory section.
Assuming that classification of a partial utterance pattern li yields a word recognition result as its category together with a similarity value (specifically, a multiple similarity), the posterior probability P(wi|li) is rewritten as follows:

P(wi|li) → P(wi|Ti∧si) (6)

(where Ti is the event that the recognized category of li in the multiple similarity method is wi, and si is the multiple similarity value of li concerning the word wi)
Relation (6) can be transformed as follows by using the Bayes' theorem: ##EQU6## where w̄i is the event in which a pattern li does not belong to the category wi.
Statistics in the equation (7) will be described below.
P(si|TiΛwi) will be described first.
P(si|TiΛwi) is a probability that an event in which a recognized category obtained in the multiple similarity method is wi and the category of input data is wi occurs. This curve can be approximated
by the following equation: ##EQU7## where α and β are parameters obtained from training data: α represents the number of components not involved in the reference pattern in the multiple similarity
method; and β, its mean variance. In this parameter estimation method, as described in "Distribution of Similarity Values in Multiple Similarity Method" by Hideo Segawa et al. (Shingaku Giho
PRU87-18, June 1987), an effective amount of training data for parameter estimation is only several tens of samples.
P(si|TiΛw̄i) will be described next.
P(si|TiΛw̄i) is a probability that an event in which a recognized category in the multiple similarity method is wi while the category of input data is not wi occurs. In continuous speech recognition,
this erroneous-recognition case is especially problematic. Therefore, not only combinations of categories whose patterns are easily erroneously recognized as wi, but also word contexts which are patterns not corresponding to
any particular category in the lexicon and which easily cause erroneous recognition, such as:
(1) Part of a certain word (Ex.) "6 [roku]" → "6-9 [roku-kyuu]"
(2) Transient part between words (Ex.) "3-1 [san-ichi]" → "3-2-1 [san-ni-ichi]"
(3) Combination of two word patterns (Ex.) "2-2 [ni-ni]" → "2 [ni]"
must be examined, and their similarity distributions must be estimated. (Within the brackets are phonetic symbols indicating how the numerals are pronounced in the Japanese language.) The similarity
distribution can be approximated by the equation (8). Parameters in this similarity distribution are denoted (ᾱi, β̄i) so as to be distinguished from the parameters (αi, βi) in the equation (8). The
parameters (ᾱi, β̄i) can be easily calculated similarly to the parameters (αi, βi).
P(TiΛwi)/P(TiΛw̄i) will be taken into consideration. This statistic corresponds to an a priori probability in the Bayes' probability and to an occurrence frequency ratio of a category. P(TiΛwi)
represents a probability that an event in which a recognition result obtained by a subspace method is wi and an input pattern is wi occurs. This statistic is calculated in a learning procedure as
follows: ##EQU8## The obtained ω is a weighting coefficient.
As described above, each parameter set is a statistic which can be easily calculated by learning.
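As a sketch of the transformation calculation just described — not the patent's literal equations (7) and (8), whose exact forms sit in the ##EQU## placeholders — a Bayes combination of a correct-pattern density, an incorrect-pattern density, and the weighting coefficient ω could look like the following. The Gaussian density and all parameter values here are illustrative stand-ins:

```python
import math

def density(s, mean, var):
    # Stand-in for the patent's similarity distribution of equation (8);
    # a Gaussian is used here purely for illustration.
    return math.exp(-(s - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def posterior(s, correct_params, incorrect_params, omega):
    """Bayes combination in the shape of equation (7):
    P = omega * p_cor(s) / (omega * p_cor(s) + p_inc(s)),
    where omega = P(Ti and wi) / P(Ti and not-wi)."""
    p_cor = density(s, *correct_params)
    p_inc = density(s, *incorrect_params)
    return omega * p_cor / (omega * p_cor + p_inc)
```

A similarity value lying near the correct-pattern distribution then maps to a posterior close to 1, while one lying near the incorrect-pattern distribution maps close to 0.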
In pattern recognition, a set of necessary parameters α, β, ᾱ, β̄ and ω is read out from the transformation parameter memory section in accordance with the obtained similarity si to perform a
calculation based on the equation (7) in the transformation calculation section, thereby transforming the similarity value into a desired posterior probability. The transformation calculation section
must perform complicated calculations. Therefore, by setting the results of transformation calculation into a table, a processing speed can be further increased.
As a result, a posterior probability transforming means can be constituted by a small data amount with high accuracy, thereby improving recognition accuracy.
A word sequence recognition system as the pattern recognition system according to the embodiment of the present invention based on the above principle will be described below.
FIG. 4 shows an arrangement of the word sequence recognition system for connected digit speech recognition.
Referring to FIG. 4, an utterance input section 1 transforms a continuous utterance into a predetermined electrical signal and supplies the signal to a preprocessor 2. The preprocessor 2 comprises an
acoustic process section 3, a spectral change extraction section 4, an utterance start/end point determination section 5, and a word boundary candidate generation section 6. The acoustic process
section 3 performs spectral analysis for the input utterance data in units of frames by using a filter bank of, e.g., 8 to 30 channels, thereby extracting a feature pattern. The spectral change
extraction section 4 extracts a difference ΔU between the spectrum data Um of successive frames. The utterance start/end point determination section 5 detects the start and end points of the utterance on the
basis of the magnitude of the extracted spectral change. When the spectral change ΔU is larger than a predetermined threshold value θ, the word boundary candidate generation section 6 outputs the
corresponding frame as a word boundary candidate ki.
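The thresholding step performed by the word boundary candidate generation section 6 can be summarized in a few lines (an illustrative sketch; the frame indexing and threshold value are hypothetical):

```python
def word_boundary_candidates(spectral_changes, theta):
    """Return the frame indices k_i whose spectral change exceeds theta."""
    return [frame for frame, delta_u in enumerate(spectral_changes) if delta_u > theta]
```

For example, `word_boundary_candidates([0.1, 0.9, 0.2, 0.8], 0.5)` yields `[1, 3]`.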
The feature patterns corresponding to n word interval candidates [ki, m] obtained by the boundary candidates ki (i=1 to n) are supplied to a word recognition section 7 and subjected to word
recognition using a word dictionary 8 therein. A word recognition candidate of each word interval candidate is transformed into a posterior probability by a similarity-posterior probability
transformation section 9 and supplied to a word sequence recognition section 10. The word sequence recognition section 10 combines a word sequence candidate for each word sequence interval [1, ki] (i
=1 to n) registered in a recognition word sequence candidate registration section 11 with the similarity transformed into the posterior probability to perform word sequence recognition. Word sequence
recognition candidates thus obtained are stored in the recognition word sequence candidate registration section 11. When the utterance start/end point determination section 5 detects the end point of
the utterance, one of the registered word sequence candidates having a highest similarity is output as a recognized word.
FIG. 5 shows an arrangement of the similarity-posterior probability transformation section 9. The section 9 comprises a transformation calculation section 21 and a transformation parameter memory
section 22. The transformation parameter memory section 22 is a table which stores parameters such as:
α, β — similarity distribution of correct patterns
ᾱ, β̄ — similarity distribution of incorrect patterns
ω — a priori probability ratio of correct pattern to incorrect pattern
These parameter sets can be calculated by learning. FIG. 6 shows an algorithm of this learning.
That is, this learning processing includes first and second learning steps 31 and 32. In the first learning step 31, uttered word sequence data is divided or classified in accordance with word
boundary data and a word category given as instructive data to form a reference pattern (template) of a word utterance based on the multiple similarity method. In the second learning step 32, a word
sequence is uttered again in accordance with the word boundary data and the word category given as the instructive data to generate a word utterance interval candidate, and a word similarity
calculation with respect to the reference pattern (template) formed in the above first learning step is performed on the basis of the generated word interval candidate data, thereby obtaining word
similarity data and a word recognition result. On the basis of the result and the given instructive data, correct and incorrect data similarity distributions and a category appearance frequency are
calculated to obtain a posterior probability curve concerning a similarity value.
The posterior probability curve obtained as a result of the above learning is shown in FIG. 7.
When the learning is performed for all categories, parameters (αi, βi, ᾱi, β̄i, ω) for all the categories can be obtained. These parameters are stored in the transformation parameter memory section 22.
The transformation calculation section 21 transforms similarity values according to the following equations: ##EQU9## and then calculates the posterior probability by the following transformation equation: ##EQU10##
As described above, according to the system of the present invention, the similarity-posterior probability transformation section can be easily formed by the simple learning processing, and the
recognition processing can be performed with high accuracy by using the obtained transformation section.
Upon transformation into a posterior probability, different transformation curves are preferably used for the respective recognition categories. When a common transformation curve is used regardless
of the recognition category, however, the following equation may be used: ##EQU11##
In addition, since the transformation calculation section must perform complicated calculations, the transformation calculation section and the transformation parameter memory section may be combined
into a table. As a result, a transformation speed can be increased.
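The table arrangement mentioned above amounts to precomputing the transformation over a quantized similarity grid; a sketch follows (the grid size and the stand-in curve are illustrative, not the patent's equation):

```python
def posterior_curve(s):
    # Stand-in for the full equation (7) calculation; any monotone
    # similarity-to-posterior curve would be tabulated the same way.
    return s * s / (s * s + (1.0 - s) ** 2)

N = 256  # illustrative table resolution
TABLE = [posterior_curve(i / (N - 1)) for i in range(N)]

def lookup(s):
    # The run-time transformation becomes a single indexed read.
    i = min(N - 1, max(0, round(s * (N - 1))))
    return TABLE[i]
```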
The present invention can be applied not only to speech recognition but also to other pattern recognition tasks such as character recognition.
Probability Exercises
Model answers are in red.
Comments are in blue.
1) Problems 1 and 2 are classical probability problems going back centuries.
The Chevalier de Mere bets he can get a "6" in four rolls of a fair die. If he gets a "6" in four throws, you give him a dollar. If he doesn't, he gives you a dollar.
Do you want to play? Yes No
Explain your answer:
The chance of beating the Chevalier is equal to 625/1296. The Chevalier has a better chance of winning.
A complete answer should explain where the number 625/1296 comes from. It can be found several ways. Any one (or more) of the following explanations will do:
1. If four dice are rolled (or a single die is rolled 4 times) there are 6*6*6*6 = 1296 ways for the four dice to come up. (The asterisk means multiply.) That's because on each die there are 6
faces. On the other hand, there are only 5*5*5*5 = 625 ways for all four dice to come up with numbers other than "6". All 1296 rolls are equally likely. There are 625 outcomes that favor you, so
the chance of you winning is 625/1296.
2. The chance of not rolling a 6 in one throw is 5/6. Now picture a bazillion people rolling dice. After the first roll, 5/6 of these people have not lost to the Chevalier. After the second roll, 5/
6 of the survivors of the first roll have still not lost. That means (5/6) of (5/6) of the original bazillion remain. After the third throw (5/6) of these remain, etc. After four rolls, the
proportion remaining is (5/6)*(5/6)*(5/6)*(5/6) = 625/1296.
3. The probability of not getting a "6" in one roll is 5/6. So, the probability of not getting a "6" in four rolls is (5/6)*(5/6)*(5/6)*(5/6) = 625/1296.
The first way keeps the "experiment-outcome-event" thought-model out in the open. The second way emphasizes thinking about probability in terms of relative frequency. It's a different kind of
thought-model (but hopefully you see the close relation to the first model). The third approach uses a rule for probability calculations, namely: If two events are independent, then the probability of
both occurring together is equal to the product of their probabilities.
Who would come out better off if we played this bet over and over and over? the Chevalier Why?
In the long run, he will win more often. In fact, by the Law of Large Numbers, he will win close to 671/1296 ≈ 51.77% of the time.
If we played 1,000 times, how much would I probably win (or lose)?
The Chevalier would be expected to win about 518 times and lose about 482 times, so he would come out about 36 dollars ahead.
The Chevalier de Mere bets he can get a "double 6" in 24 throws of a pair of fair dice. If he gets a "double 6" in 24 throws, you give him a dollar. If he doesn't, he gives you a dollar.
For your information:
35^24 =11419131242070580387175083160400390625;
36^24 = 22452257707354557240087211123792674816
(35/36)^24 = 0.508596 (approximately)
(The symbol "^" means exponentiation; a^3 means a*a*a.)
Do you want to play? Yes No
Explain your answer:
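No model answer is asserted here, but the arithmetic suggested by the hints above can be sketched directly — the chance of no "double 6" in 24 independent throws is (35/36)^24:

```python
from fractions import Fraction

# Chance of NO "double 6" on a single throw of a pair of dice: 35/36.
# Over 24 independent throws: (35/36)^24.
p_no_double_six = Fraction(35, 36) ** 24
p_chevalier_wins = 1 - p_no_double_six
# float(p_no_double_six) is about 0.5086, so the Chevalier wins this
# bet slightly less than half the time.
```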
3) This problem tests your understanding of the terms "experiment", "outcome" and "event".
a) In problem 1), what is the experiment? Rolling four dice.
b) What set would be most reasonable to take for the collection of outcomes?
The set of all 4-term sequences of the digits 1, 2, 3, 4, 5, 6. In other words the outcomes are: 1111, 1112, 1113, 1114, 1115, 1116, 1121, 1122, ..., 5656, 5661, 5662, 5663, 5664, 5665, 5666, 6666.
(I skipped writing a whole bunch.)
c) Are there other sets that would be reasonable candidates for the “set of outcomes”? For example?
Actually, in this case, any other choice would be more clumsy or difficult to use. You could make up a symbol---say "N"---to stand for "any number but 6". Then the outcomes would look like: NNNN,
NNN6, NN6N, NN66, N6NN, N6N6, N66N, N666, 6NNN, 6NN6, 6N6N, 6N66, 66NN, 66N6, 666N, 6666. But now these outcomes ARE NOT EQUALLY LIKELY. So it's much harder to think about them.
d) Consider the set of all 4-term-sequences built out of the numbers 1, 2, 3, 4, 5, 6. How many elements are there in this set? 1296
e) How can the set in c) be viewed as the collection of outcomes for the experiment in 1)? Each sequence would tell the way each of the four dice came up. For example, NNN6 would mean that the first
three dice got numbers other than 6 and the last got a 6. Are the outcomes equally likely? No. The chance of NNNN is (5/6)^4. The chance of 6666 is (1/6)^4.
f) Using the set in c) to represent outcomes, what is the event that problem 1) asks about? How many outcomes are in this event?
4) This problem concerns another situation where the language of outcome and event is useful, but not obvious. Without explicitly saying so, this problem also introduces the ideas of "conditional
probability" and "dependent events" and is related to tests of statistical significance for a relationship between two boolean variables.
In a certain city ( very similar to East Baton Rouge), 33/100 of all registered voters are black and 67/100 are white. Also, 3/10 of all registered voters are Republican and 7/10 are Democrat.
Moreover, 29/100 of all registered voters are BOTH white and Republican. As you can see, almost all Republicans are white.
a) How would you answer the following questions instinctively? In this city, is a Republican more likely white or black? In this city, is a Democrat more likely white or black? Is a white voter more
likely Republican or Democrat?
b) Draw a two-by-two table with rows labeled “Black” and “White” and columns labeled “Republican” and “Democrat”. Fill in the squares with the percentages in each of the 4 categories.
              Black    White    row total
Republican     ____       29          30
Democrat       ____     ____          70
column total     33       67         100
c) Use this to answer the three questions above. Were your instincts correct? If someone’s instincts on this were wrong, do you think it is a sign of prejudice?
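The missing cells in the table from b) follow by subtraction from the row and column totals (a quick sketch):

```python
black_total, white_total = 33, 67   # percentages of registered voters
rep_total, dem_total = 30, 70
white_rep = 29                      # given: white AND Republican

black_rep = rep_total - white_rep       # 30 - 29 = 1
black_dem = black_total - black_rep     # 33 - 1  = 32
white_dem = white_total - white_rep     # 67 - 29 = 38
assert black_dem + white_dem == dem_total   # consistency check: 32 + 38 = 70
```

So a Democrat in this city is slightly more likely white than black (38 vs. 32), which is exactly the instinct the questions in a) are probing.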
d) Viewing “choosing a voter at random” as an experiment, explain why the four boxes in the table from b) may be viewed as outcomes. Taking this point of view, is “choosing a Democrat” an event or an outcome?
5) Though this problem involves using tree diagrams to calculate probabilities, like problem 3, it is mostly about understanding and using the language of outcomes and events.
a) You have a jar containing 3 red, 2 green, and one white marble. You choose three marbles without replacing any into the jar. Explain why it would be reasonable to think of (RRR, RRG, RGR, GRR,
RRW, RWR, WRR, RGG, RGW, RWG, GRG, GRW, WRG, GGR, GWR, WGR, GGW, GWG, WGG) as the set of outcomes.
b) Draw a tree diagram that shows how these outcomes can be achieved by three successive draws. Label the branches with the appropriate probabilities, and use this to determine the probability of
each of the outcomes.
c) Are the outcomes equally likely? Explain.
d) What is the probability of each of the events: 1) getting no red marbles, 2) choosing a green marble on the last draw, 3) leaving a marble of each color in the jar.
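Part d) can be checked by brute-force enumeration instead of a tree diagram (a sketch; the marble labels are arbitrary):

```python
from fractions import Fraction
from itertools import permutations

marbles = ["R1", "R2", "R3", "G1", "G2", "W1"]
draws = list(permutations(marbles, 3))   # 6*5*4 = 120 equally likely ordered draws

def prob(event):
    return Fraction(sum(1 for d in draws if event(d)), len(draws))

p_no_red = prob(lambda d: not any(m.startswith("R") for m in d))       # 1/20
p_green_last = prob(lambda d: d[-1].startswith("G"))                   # 1/3
# One marble of each color left in the jar <=> exactly 2 reds and 1 green drawn.
p_one_each_left = prob(lambda d: sorted(m[0] for m in d) == ["G", "R", "R"])  # 3/10
```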
e) Discuss variants of this problem. For example, suppose I replace marbles? Or, suppose I replace all marbles drawn with red ones. What if I replace red marbles with green ones, greens with whites
and whites with reds?
Ware County 5th grade
In October 2009
by apcoker
5th grade Science Chapter 9 6 terms
by apcoker
5th grade Science Chapter 8 19 terms
by apcoker
5th grade Science Chapter 7 20 terms
by apcoker
5th grade Social Studies Vocabulary 10 terms
In September 2009
by apcoker
5th grade Magnets Study Guide 6 terms
by apcoker
5th grade Electrical Current Study Guide 7 terms
In August 2009
by apcoker
5th grade Weight, Mass, Volume, Metric System, and Density 9 terms
by apcoker
5th grade Civil War Study Guide 27 terms
In April 2009
by apcoker
5th grade math area of parallelograms 1 10 terms
by apcoker
5th grade math area of triangles 1 10 terms
by apcoker
5th grade math area of circles given radius 1 10 terms
by apcoker
5th grade math area of circles/given diameter 1 10 terms
by apcoker
5th grade math change decimals to fractions 1 10 terms
by apcoker
5th grade math change fractions to decimals 2 10 terms
by apcoker
5th grade math change fractions to decimals 1 10 terms
by apcoker
5th grade math capacity 1 10 terms
by apcoker
5th grade math capacity 2 10 terms
by apcoker
5th grade math capacity 3 10 terms
by apcoker
5th grade math reducing fractions 3 10 terms
by apcoker
5th grade math comparing fractions 1 10 terms
by apcoker
5th grade math divide fractions 2 10 terms
by apcoker
5th grade math divide fractions 1 10 terms
by apcoker
5th grade multiply fractions 2 10 terms
by apcoker
5th grade math read and write whole #'s 1 10 terms
by apcoker
5th grade reading/writing whole #'s 10 terms
Issue Date Title Author(s)
Dec-2004 Digit and command interpretation for electronic book using neural network and genetic algorithm Lam, H. K.; Leung, Frank H. F.
2001 A fast path planning-and-tracking control for wheeled mobile robots Lee, Tat-hoi; Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
1999 Fast simulation of PWM inverters using MATLAB Wong, L. K.; Leung, Frank H. F.; Tam, Peter K. S.
Apr-2005 Fuzzy combination of fuzzy and switching state-feedback controllers for nonlinear systems subject to parameter Lam, H. K.; Leung, Frank H. F.
2001 Fuzzy control of DC-DC switching converters : stability and robustness analysis Lam, H. K.; Lee, Tat-hoi; Leung, Frank H. F.; Tam, Peter K. S.
2000 Fuzzy control of multivariable nonlinear systems subject to parameter uncertainties : model reference approach Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
2001 Fuzzy model reference control of wheeled mobile robots Lam, H. K.; Lee, Tat-hoi; Leung, Frank H. F.; Tam, Peter K. S.
Feb-2001 A fuzzy sliding controller for nonlinear systems Wong, L. K.; Leung, Frank H. F.; Tam, Peter K. S.
2000 Fuzzy state feedback controller for nonlinear systems : stability analysis and design Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
2003 Gain estimation for an AC power line data network transmitter using a neural-fuzzy network and an improved genetic algorithm Lam, H. K.; Ling, S. H.; Leung, Frank H. F.; Tam, Peter K. S.; Lee, Yim-shu
2002 Gain estimation for an AC power line data network transmitter using a self-structured neural network and genetic Lam, H. K.; Ling, S. H.; Leung, Frank H. F.; Tam, Peter K. S.; Lee, Yim-shu
2003 A genetic algorithm based fuzzy-tuned neural network Ling, S. H.; Lam, H. K.; Leung, Frank H. F.; Lee, Yim-shu
2003 A genetic algorithm based neural-tuned neural network Ling, S. H.; Lam, H. K.; Leung, Frank H. F.; Lee, Y. S.
2003 A genetic algorithm based variable structure neural network Ling, S. H.; Lam, H. K.; Leung, Frank H. F.; Lee, Y. S.
2004 Genetic algorithm based variable-structure neural network and its industrial application Ling, S. H.; Leung, Frank H. F.; Lam, H. K.
2005 Genetic algorithm-based variable translation wavelet neural network and its application Ling, S. H.; Leung, Frank H. F.
2002 Graffiti commands interpretation for eBooks using a self-structured neural network and genetic algorithm Leung, Koon-fai; Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
Jun-2008 Hybrid particle swarm optimization with wavelet mutation and its industrial applications Ling, S. H.; Iu, Herbert Ho-ching; Chan, K. Y.; Lam, H. K.; Yeung, Benny C. W.; Leung, Frank H. F.
1-Jun-2007 An improved GA based modified dynamic neural network for Cantonese-digit speech recognition Ling, S. H.; Leung, Frank H. F.; Leung, K. F.; Lam, H. K.; Iu, H. H. C.
2003 Improved genetic algorithm for economic load dispatch with valve-point loadings Ling, S. H.; Lam, H. K.; Leung, Frank H. F.; Lee, Y. S.
Sep-2008 Improved hybrid particle swarm optimized wavelet neural network for modeling the development of fluid dispensing for electronic packaging Ling, S. H.; Iu, Herbert Ho-ching; Leung, Frank H. F.; Chan, K. Y.
Oct-1993 An improved LQR-based controller for switching dc-dc converters Leung, Frank H. F.; Tam, Peter K. S.; Li, Chi-kwong
2000 An improved Lyapunov function based stability analysis method for fuzzy logic control systems Wong, L. K.; Leung, Frank H. F.; Tam, Peter K. S.
2001 Linear controllers for fuzzy systems subject to unknown parameters : stability analysis and design based on linear matrix inequality (LMI) approach Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
Oct-2007 LMI-based stability and performance conditions for continuous-time nonlinear systems in Takagi-Sugeno's form Lam, H. K.; Leung, Frank H. F.
2006 LMI-based stability and performance conditions for continuous-time nonlinear systems in Takagi-Sugeno's form Lam, H. K.; Leung, Frank H. F.
2006 LMI-based stability and performance design of fuzzy control systems : fuzzy models and controllers with different premises Lam, H. K.; Leung, Frank H. F.
2006 LMI relaxed stability conditions for fuzzy-model-based control systems Lam, H. K.; Leung, Frank H. F.
Jun-1998 Lyapunov-function-based design of fuzzy logic controllers and its application on combining controllers Wong, L. K.; Leung, Frank H. F.; Tam, Peter K. S.
1997 Lyapunov function based design of heuristic fuzzy logic controllers Wong, L. K.; Leung, Frank H. F.; Tam, Peter K. S.
1998 Lyapunov function based design of robust fuzzy controllers for uncertain nonlinear systems : distinct Lyapunov Leung, Frank H. F.; Lam, H. K.; Tam, Peter K. S.
2008 Modelling the development of fluid dispensing for electronic packaging : hybrid particle swarm optimization based-wavelet neural network approach Ling, S. H.; Iu, Herbert Ho-ching; Leung, Frank H. F.; Chan, K. Y.
2003 Neural fuzzy network and genetic algorithm approach for Cantonese speech command recognition Leung, Koon-fai; Leung, Frank H. F.; Lam, H. K.; Tam, Peter K. S.
2001 A neural fuzzy network with optimal number of rules for short-term load forecasting in an intelligent home Ling, S. H.; Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
1999 Neural-network-controlled single-phase UPS inverters with improved transient response and adaptability to various loads Sun, Xiao; Xu, Dehong; Leung, Frank H. F.; Wang, Yousheng; Lee, Yim-shu
1995 A neuro-fuzzy controller applying to a Cuk converter Wong, L. K.; Leung, Frank H. F.; Tam, Peter K. S.
2007 A new hybrid particle swarm optimization with wavelet theory based mutation operation Ling, S. H.; Yeung, C. W.; Chan, K. Y.; Iu, Herbert Ho-ching; Leung, Frank H. F.
Aug-2001 Nonlinear state feedback controller for nonlinear systems : stability analysis and design based on fuzzy plant Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
2002 A novel GA-based neural network for short-term load forecasting Ling, S. H.; Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
Aug-2003 A novel genetic-algorithm-based neural network for short-term load forecasting Ling, S. H.; Leung, Frank H. F.; Lam, H. K.; Lee, Yim-shu; Tam, Peter K. S.
2001 On design of a switching controller for nonlinear systems with unknown parameters based on a model reference Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
1997 On fuzzy model reference adaptive control systems : full-state feedback and output feedback Leung, Frank H. F.; Lam, H. K.; Tam, Peter K. S.
2001 On interpretation of graffiti commands for eBooks using a neural network and an improved genetic algorithm Lam, H. K.; Ling, S. H.; Leung, Koon-fai; Leung, Frank H. F.
Apr-2004 On interpretation of graffiti digits and characters for eBooks : neural-fuzzy network and genetic algorithm Leung, Koon-fai; Leung, Frank H. F.; Lam, H. K.; Ling, S. H.
2002 On interpretation of graffiti digits and commands for eBooks : neural fuzzy network and genetic algorithm approach Lam, H. K.; Leung, Koon-fai; Ling, S. H.; Leung, Frank H. F.; Tam, Peter K. S.
Feb-2004 Optimal and stable fuzzy controllers for nonlinear systems based on an improved genetic algorithm Leung, Frank H. F.; Lam, H. K.; Ling, S. H.; Tam, Peter K. S.
2001 Optimal and stable fuzzy controllers for nonlinear systems subject to parameter uncertainties using genetic Lam, H. K.; Ling, S. H.; Leung, Frank H. F.; Tam, Peter K. S.
2001 A path planning method for micro robot soccer game Lee, Tat-hoi; Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
2002 Playing tic-tac-toe using a modified neural network and an improved genetic algorithm Lam, H. K.; Ling, S. H.; Leung, Frank H. F.; Tam, Peter K. S.; Lee, Yim-shu
1999 Position control for wheeled mobile robots using a fuzzy logic controller Lee, Tat-hoi; Leung, Frank H. F.; Tam, Peter K. S.
2001 A practical fuzzy logic controller for the path tracking of wheeled mobile robots Lee, Tat-hoi; Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
Apr-2003 A practical fuzzy logic controller for the path tracking of wheeled mobile robots Lee, Tat-hoi; Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
2005 Real-coded genetic algorithm with average-bound crossover and wavelet mutation for network parameters learning Ling, S. H.; Leung, Frank H. F.
2003 Recognition of speech commands using a modified neural fuzzy network and an improved GA Leung, Koon-fai; Leung, Frank H. F.; Lam, H. K.; Tam, Peter K. S.
May-2007 Sampled-data fuzzy controller for time-delay nonlinear systems : fuzzy-model-based LMI approach Lam, H. K.; Leung, Frank H. F.
2002 Short-term daily load forecasting in an intelligent home with GA-based neural network Ling, S. H.; Leung, Frank H. F.; Lam, H. K.; Tam, Peter K. S.
Dec-2003 Short-term electric load forecasting based on a neural fuzzy network Ling, S. H.; Leung, Frank H. F.; Lam, H. K.; Tam, Peter K. S.
1996 A simple adaptive control strategy for regulated switching dc-dc converter based on grid-point concept Ng, T. C. T.; Leung, Frank H. F.; Tam, Peter K. S.
1996 A simple large-signal non-linear model for fast simulation of zero-current-switch quasi-resonant converters Wong, L. K.; Leung, Frank H. F.; Tam, Peter K. S.
May-1997 A simple large-signal nonlinear modeling approach for fast simulation of zero-current-switch quasi-resonant Wong, L. K.; Leung, Frank H. F.; Tam, Peter K. S.
2001 Stability analysis and design of fuzzy observer-controller for fuzzy systems Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
2007 Stability analysis and performance design for fuzzy-model-based control system under imperfect premise matching Lam, H. K.; Yeung, C. W.; Leung, Frank H. F.
Dec-2005 Stability analysis of fuzzy control systems subject to uncertain grades of membership Lam, H. K.; Leung, Frank H. F.
2002 Stability analysis of systems with non-symmetric dead zone under fuzzy logic control Wong, L. K.; Leung, Frank H. F.; Tam, Peter K. S.
1998 Stability analysis of systems with parameter uncertainties under fuzzy logic control Wong, L. K.; Leung, Frank H. F.; Tam, Peter K. S.
2008 Stability analysis of T-S fuzzy-model-based control systems using fuzzy Lyapunov function Lam, H. K.; Narimani, M.; Lai, J. C. Y.; Leung, Frank H. F.
2004 Stability analysis, synthesis and optimization of radial-basis-function neural-network based controller for nonlinear systems Lam, H. K.; Leung, Frank H. F.
2000 Stability and robustness analysis and gain design for fuzzy control systems subject to parameter uncertainties Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
1998 Stability and robustness analysis of uncertain multivariable continuous-time nonlinear systems with digital fuzzy controller Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
1998 Stability and robustness analysis of uncertain multivariable fuzzy digital control systems Leung, Frank H. F.; Lam, H. K.; Tam, Peter K. S.
2007 Stability conditions for fuzzy control systems with fuzzy feedback gains Lam, H. K.; Yeung, C. W.; Leung, Frank H. F.
1997 Stability design of TS model based fuzzy systems Wong, L. K.; Leung, Frank H. F.; Tam, Peter K. S.
Nov-2000 Stable and robust fuzzy control for uncertain nonlinear systems Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
1997 Stable and robust fuzzy control for uncertain nonlinear systems based on a grid-point approach Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
2003 Stable fuzzy controller design for uncertain nonlinear systems : genetic algorithm approach Leung, Frank H. F.; Lam, H. K.; Tam, Peter K. S.; Lee, Yim-shu
2000 Switching controller for fuzzy systems subject to unknown parameters : analysis and design based on a linear matrix inequality (LMI) approach Lam, H. K.; Leung, Frank H. F.; Tam, Peter K. S.
Jan-2003 Tuning of the structure and parameters of a neural network using an improved genetic algorithm Leung, Frank H. F.; Lam, H. K.; Ling, S. H.; Tam, Peter K. S.
2001 Tuning of the structure and parameters of neural network using an improved genetic algorithm Lam, H. K.; Ling, S. H.; Leung, Frank H. F.; Tam, Peter K. S.
2006 A variable node-to-node-link neural network and its application to hand-written recognition Ling, S. H.; Leung, Frank H. F.; Lam, H. K.
2005 A variable-parameter neural network trained by improved genetic algorithm and its application Ling, S. H.; Lam, H. K.; Leung, Frank H. F.
GMAT Diagnostic Test Question 25
Author Message
GMAT Diagnostic Test Question 25 [#permalink] 06 Jun 2009, 22:06
This post received
Expert's post
65% (medium)
Question Stats:
Status: In
Mexico.... NO 43%
PM's :-)
(03:16) correct
UA-1K, SPG-G, 56% (02:42)
Joined: 04
Dec 2002 based on 76 sessions
GMAT Diagnostic Test Question 25
Posts: 11829
Field: word problems (rate)
United States Difficulty: 700
A train is traveling at a constant speed from city A to city B. Along this trip the train makes three one-hour stops and reaches city B. At city B the train is stopped again for 1 hour. After that the train makes the return trip from city B to city A at a constant speed which is twice the speed of the first trip. Along this return trip the train makes ten thirty-minute stops and reaches city A. If both trips took the same amount of time, how many hours was the round trip?

A. 14
B. 15
C. 16
D. 17
E. 18
Solution: gmat-diagnostic-test-question-79355-20.html#p1071311
Last edited on 06 Oct 2013, 23:26, edited 3 times in total.
Re: GMAT Diagnostic Test Question 26 [#permalink] 06 Jul 2009, 05:27
dzyubam (CIO, Joined: 02 Oct 2007, Posts: 1218)

Official Answer: B

First, we have to calculate the amount of time the train spent at the stops: 3*1=3 hours for the first trip and 10*0.5=5 hours for the return trip. Now, we can write an equation with S for the one-way distance and V for the train's speed:

\frac{S}{V} + 3 = \frac{S}{2V} + 5

\frac{S}{V} - \frac{S}{2V} = 2

\frac{2S - S}{2V} = 2

\frac{S}{2V} = 2

\frac{S}{V} = 4

So the one-way trip took 4 + 3 = 7 hours, and the roundtrip lasted for 7 + 7 + 1 = 15 hours (we should count the 1 hour stop at the destination point as well).
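For anyone who wants to sanity-check the algebra numerically, here is a tiny Python sketch (variable names are mine; any speed works, since it cancels out of the answer):

```python
# Numeric check of the train round-trip answer.
V = 1.0                        # pick any speed; the result is independent of it
S = 4 * V                      # from S/V + 3 = S/(2V) + 5  =>  S/V = 4

out_trip = S / V + 3           # driving time + three 1-hour stops
back_trip = S / (2 * V) + 5    # double speed + ten 0.5-hour stops
round_trip = out_trip + 1 + back_trip   # plus the 1-hour stop at city B

print(out_trip, back_trip, round_trip)  # 7.0 7.0 15.0
```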
Welcome to GMAT Club!
Want to solve GMAT questions on the go? GMAT Club iPhone app will help.
Please read this before posting in GMAT Club Tests forum
Result correlation between real GMAT and GMAT Club Tests
Are GMAT Club Test sets ordered in any way?
Take 15 free tests with questions from GMAT Club, Knewton, Manhattan GMAT, and Veritas.
Re: GMAT Diagnostic Test Question 26 [#permalink] 23 Aug 2009, 08:36
flyingbunny (Manager, Joined: 14 Aug 2009, Posts: 124)

dzyubam wrote:
Official Answer: B
First, we have to calculate the amount of time the train spent for the stops [...] So, the roundtrip lasted for 7+7+1=15 hours (we should count the 1 hour stop in the destination point as well).

Damn, this one is tricky!
Re: GMAT Diagnostic Test Question 26 [#permalink] 05 Oct 2009, 01:40
freakedgod (Intern, Joined: 11 Jun 2009, Posts: 2)

We can take another approach too, and kill the problem in a shorter time, just because the answer choices permit it. We have:
1. Stoppage time for the journey = 3 hrs.
2. Stoppage time for the return journey = 30 min * 10 = 300 min = 5 hrs.
3. Additional stoppage time = 1 hr.
Total stoppage = 3 + 1 + 5 = 9 hrs.
Now, as the speeds are in the ratio 1:2 and the distance is constant, the running times should be in the ratio 2:1, so the total running time should be a multiple of 2 + 1 = 3.
So the probable answer choice = 9 + 3x hrs.
The options are 14, 15, 16, 17, 18.
15 is the right one, as 15 - 9 = 6 is divisible by 3 to give an integer.
This method might be incorrect for generic problems, but it works here because, as I said, the answer choices permit it.
Re: GMAT Diagnostic Test Question 26 [#permalink] 19 Oct 2009, 17:19
sirdookie (Intern, Joined: 16 Oct 2009, Posts: 3)

I didn't know where to write this comment, so here I go: the diagnostic problems in general are great. The difficulty level, and especially the explanations, are absolutely fantastic. I'm learning a lot. Sometimes I feel like I'm sitting in a lecture hall listening to a brilliant professor giving one of his best early morning lectures at an ivory tower.

Cheers. All good babe!
Re: GMAT Diagnostic Test Question 26 [#permalink] 20 Oct 2009, 01:30
dzyubam (CIO)

Thank you for the kind words and welcome to GMAT Club!
We're gradually improving the wording where necessary. You're welcome to suggest any changes to the questions you might find ambiguous. The questions were not worded poorly on purpose. Feel free to post in the threads of Diagnostic Test questions to comment on the wording. Thanks.

sirdookie wrote:
I didn't know where to write this comment, so here i go: the diagnostic problems in general are great. [...]
Re: GMAT Diagnostic Test Question 26 [#permalink] 20 Oct 2009, 16:15
sirdookie (Intern)

dzyubam,
Thanks for welcoming me here. I think I've found this board just in time. I was looking for some challenging quant problems and this site provides them. Hey, I wouldn't have paid $79 to be a "premier" member if I wasn't impressed by the tests. All good!!!
Re: GMAT Diagnostic Test Question 26 [#permalink] 21 Oct 2009, 04:53
dzyubam (CIO)

Thank you for the kind words. We are glad you like the Tests. Hope your Quant score goes up after practicing with the GMAT Club Tests.

sirdookie wrote:
dzyubam, thanks for welcoming me here. i think i've found this board just in time. [...]
Re: GMAT Diagnostic Test Question 26 [#permalink] 22 Oct 2009, 18:10
cmintz (Intern, Joined: 21 Sep 2009, Posts: 4; Schools: Rice, Tulane)

This is very helpful - although troubling how much work I have yet to do. Only 6 more weeks before the big day. I took one of the GMAC practice tests last night (saving the second one until test day nears) and scored a 630 (Q 38). I've pretty much given up all social life to prep for this thing and am really hoping to flirt with the magical 700. I hope I am not being unrealistic. That being said, this site is one of the best resources I have found. Thanks to all that have contributed, and I will try to be a gracious consumer of the knowledge and share anything I have that is helpful.
Re: GMAT Diagnostic Test Question 26 [#permalink] 22 Oct 2009, 23:18
dzyubam (CIO)

Welcome to GMAT Club! You will find a lot of useful info on our forums. Have a good prep time and do nail that 700.

cmintz wrote:
this is very helpful - although troubling how much work i have yet to do. [...]
Re: GMAT Diagnostic Test Question 26 [#permalink] 07 Nov 2009, 16:06
Ikowill (Intern, Joined: 16 Oct 2009, Posts: 7)

Another way of solving this problem:
We know that each trip took the same amount of time and covers the same distance, so we can equate either distance or time. I chose distance; dzyubam chose time.

t = total time for each way of the round trip, including stops.
Total stop time, first part of the trip = 1 hour * 3 = 3 hours
Total stop time, second part of the trip = 0.5 hour * 10 = 5 hours

d = s(t - 3)   (first part of the trip)
d = 2s(t - 5)  (second part of the trip)

Both distances are the same:
s(t - 3) = 2s(t - 5)
(t - 3) = 2(t - 5)
t - 3 = 2t - 10
t = 7

(7 * 2) both ways + 1 hour stop when the destination is reached:
14 + 1 = 15 hours total trip.
Re: GMAT Diagnostic Test Question 26 [#permalink] 13 Nov 2009, 09:23
ajonas (Intern, Joined: 09 Nov 2009, Posts: 21)

freakedgod wrote:
[...] So probable answer choice = 9 + 3x hrs. Options are 14, 15, 16, 17, 18. 15 is the right one as 15 - 9 = 6 is divisible by 3 to give an integer. [...]

What about 18? Also a multiple of 3. Doesn't help.
Re: GMAT Diagnostic Test Question 26 [#permalink] 21 Nov 2009, 03:51
Joined: 22
Sep 2009 Crap, worked everything out right but forgot the last 1 hr stop at destination
Posts: 222
Tokyo, Japan
Followers: 2
Kudos [?]: 16
[0], given: 8
Re: GMAT Diagnostic Test Question 26 [#permalink] 23 Nov 2009, 05:23
dzyubam (CIO)

That's the reason why we all have to practice and eliminate as many mistakes of this kind as possible.

lonewolf wrote:
Crap, worked everything out right but forgot the last 1 hr stop at destination
Re: GMAT Diagnostic Test Question 26 [#permalink] 26 Nov 2009, 22:23
(Joined: 25 Nov 2009, Posts: 16; Location: San Francisco; Schools: Wharton West eMBA, Haas EW, Haas eMBA)

Aren't we missing a really simple explanation? Both trips took the same time, so when the train went twice as fast it got away with 5 hrs stopped instead of 3, i.e. it saved 2 hours of driving. t + 5 = 2t + 3, so t = 2 (t being the driving time on the return leg).
I guess I'm saying a similar solution, but this is a simpler way for me to consider it.
Re: GMAT Diagnostic Test Question 26 [#permalink] 08 Aug 2010, 11:51
coryking (Intern, Joined: 01 Aug 2010, Posts: 1)

This question could have been a bit better worded. It should make it more obvious that the distance in both trips is the same. I was taught to never assume things in problems like this, and unless it said something like "the train made a return trip the same way back" this question cannot be solved.

Just saying "return trip" and assuming it means "return trip the same way" is a bit culturally biased (I hate going back the same way).

Of course, once you state the distance is constant, this becomes an easy problem.
Re: GMAT Diagnostic Test Question 26 [#permalink] 03 Sep 2010, 21:17
(Joined: 24 Apr 2010, Posts: 62)

bb wrote:
A train is traveling at a constant speed and after making three one-hour stops reaches its destination.

I think that, though it is not totally wrong, for the GMAT, which seeks precise wording, "three one-hour stops" could easily be understood as a stop after every one hour, just as a "5 dollar ride" may mean a ride which costs 5 dollars. So, as this is a quantitative question, not SC, I think making the wording clear will help test-takers understand and solve the question within 2 minutes. Just a suggestion.
Re: GMAT Diagnostic Test Question 26 [#permalink] 12 Sep 2010, 16:38
(Intern, Joined: 19 Jul 2010, Posts: 16)

Wow, this one left me scratching my nerves. The way I did it: suppose the initial speed is D/T miles/hr, where D is the total one-way distance; it would then take the train T hours to travel one way, and the 3 hours of rest make it T + 3 hours. On the return trip the speed doubles, so the train travels D miles in T/2 hrs, and the 5 hours of rest make it T/2 + 5 = (T + 10)/2. Both times are equal, hence T + 3 = (T + 10)/2, so T = 4. Each way then becomes 7 hrs, and the 1 hr rest between the two trips gives 15 total.
Re: GMAT Diagnostic Test Question 26 [#permalink] 04 Nov 2010, 06:56
maddy2u (Manager, Joined: 04 Dec 2009, Posts: 148)

dzyubam wrote:
Official Answer: B [...] So, the roundtrip lasted for 7+7+1=15 hours (we should count the 1 hour stop in the destination point as well).

How did you arrive at the value 7 from the ratio S/V = 4? I did it till the last step, but I got stuck at that step and forfeited this method.
Re: GMAT Diagnostic Test Question 26 [#permalink] 08 Nov 2010, 10:36
Fijisurf (Joined: 10 Sep 2010, Posts: 133)

I set up a slightly different equation:
v = speed, T = time in either direction (including stops)
v(T - 3) - distance for the way there
2v(T - 5) - distance on the way back
v(T - 3) = 2v(T - 5)
vT - 3v = 2vT - 10v
7v - vT = 0
T = 7
The total time (do not forget to add the extra hour of waiting before the return trip) is 7 + 7 + 1 = 15.
gmatclubot Re: GMAT Diagnostic Test Question 26 [#permalink] 08 Nov 2010, 10:36 | {"url":"http://gmatclub.com/forum/gmat-diagnostic-test-question-79355.html?fl=similar","timestamp":"2014-04-24T11:16:38Z","content_type":null,"content_length":"228347","record_id":"<urn:uuid:350619ae-5e60-4b47-aa52-33d090ccace5>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00571-ip-10-147-4-33.ec2.internal.warc.gz"} |
MemberShip Tables
July 17th 2010, 07:09 AM
How did you do that? I really don't see where the -10, 13 and 15 came from, unless I'm being really stupid :|
July 17th 2010, 09:45 AM
That was a hypothetical example; I meant for you to use it as a model to solve the actual problem. It was so that I could show you how to do it without actually solving the problem for you.
July 17th 2010, 11:14 PM
Oh right okay, well I solved it to have:
$-4 \le x \le 7$
Seeing as -4 is impossible, the smallest value must be 0 and the maximum, 7
July 17th 2010, 11:32 PM
You seem to have kept the loosest bounds and discarded the strictest.
$\displaystyle x \ge 0$
$7-x \ge 0 \implies x \le 7$
$6-x \ge 0 \implies x \le 6$
$4+x \ge 0 \implies x \ge -4$
Discarding the second and fourth, I get $0 \le x \le 6$ .
IMPORTANT: x is not the value you seek to find the min and max of. You want to find min and max of "the number of male patients under 50 without a back problem" which is 7-x. So... | {"url":"http://mathhelpforum.com/discrete-math/150703-membership-tables-2-print.html","timestamp":"2014-04-18T07:34:43Z","content_type":null,"content_length":"10245","record_id":"<urn:uuid:e8914e57-334f-47e6-b197-d0a1520ad44b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
Güralp Systems Limited
ART 3.0, Güralp Systems' Strong-Motion Analysis and Research Tool, is a Windows program which allows users of seismometers (accelerometers or velocimeters) produced by Güralp Systems Ltd to process and analyze their recorded data for engineering seismology and earthquake engineering purposes. The time-histories can be exported in a number of different strong-motion record formats that are currently in use today.
ART3.0 is a major update of the second version of ART (ART2.0), which was released in 2006. A number of improvements were made following requests received from users, which are listed in section 5.1,
page 60.
ART 3.0 is supplied in the standard distribution of Scream! versions 4.5 and later. It is also compatible with older versions of Scream!.
ART works closely with Scream! to make analysing seismic data easy. Scream!'s visualization and filtering capabilities allow you to view time series and quickly identify events. Strong-motion records
can then be directly imported into ART from Scream! by selecting the appropriate portion of the record in Scream! - this will automatically start ART. Previously recorded data in Güralp Compressed
Format (GCF) can be read in from pre-recorded files and analyzed. In addition, data can be imported into ART via a modem.
Currently, the following functions, which are important for engineering seismologists and earthquake engineers, are supported (in addition, most of these functions allow selection of multiple time-histories, so a comparison between records is possible):
• plotting uncorrected acceleration, velocity and displacement against relative or absolute time;
• automatic correcting of recorded time-history for instrument response to obtain ground acceleration;
• filtering of acceleration time-history using user-defined filters;
• plotting corrected acceleration, velocity and displacement against relative or absolute time;
• calculation and plotting of Fourier amplitude spectra of time-histories and of pre-event portions of records including the signal-to-noise ratios;
• calculation and plotting of Arias intensities against time;
• calculation and plotting of energy densities against time;
• calculation and plotting, both on standard and tripartite graphs, of linear elastic response spectra;
• calculation and plotting of linear elastic absolute and relative input energy spectra;
• calculation and plotting of drift spectra for a cantilever shear-beam for different material types;
• calculation of peak ground acceleration (PGA), peak ground velocity (PGV) and peak ground displacement (PGD);
• calculation of PGV/PGA;
• calculation of A95 parameter;
• calculation of sustained maximum acceleration and velocity;
• calculation of JMA instrumental intensities;
• calculation of response spectrum intensities using user-defined limits;
• calculation of acceleration spectrum intensities using user-defined limits;
• calculation of RMS acceleration, velocity and displacement;
• calculation of cumulative absolute velocities using user-defined minimum acceleration thresholds;
• calculation of absolute and relative bracketed, significant and uniform strong-motion durations using user-defined limits;
• calculation of number of absolute and effective cycles of acceleration using peak counting - including or excluding non-zero crossings and rainflow counting techniques;
• calculation of mean, predominant spectral, smoothed spectral predominant and average spectral periods;
• plotting particle motions both in two and three dimensions;
• basic database functionality to allow earthquake and station metadata to be added, used and exported;
• comparison of observed elastic response spectra to predicted spectra from various ground-motion prediction equations and seismic design codes;
• plotting of acceleration, velocity and displacement time-histories on map;
• exporting the uncorrected and corrected spectra in these commonly used strong-motion record formats:
• Columns;
• CSMIP as used by the California Strong-Motion Instrumentation Program;
• ISESD as used by the Internet Site for European Strong-Motion Data;
• K-Net as used by Kyoshin Net;
• PEER as used by Pacific Earthquake Engineering Research Center;
• SMC as used by the US Geological Survey;
• SAC as used by Seismic Analysis Code;
• Microsoft Excel .xls;
• Matlab .mat. | {"url":"http://www.guralp.com/documents/html/MAN-SWA-0003/","timestamp":"2014-04-18T23:40:52Z","content_type":null,"content_length":"16960","record_id":"<urn:uuid:5e16621a-8cfe-42a9-8030-61d80b6cfc1a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00103-ip-10-147-4-33.ec2.internal.warc.gz"} |
Buck Converter
SMPS Basics
The Buck Converter
BUCK_BASIC.CIR Download the SPICE file
Switch-Mode Power Supplies (SMPS) deliver lots of power while wasting very little. Why? SW1 delivers pulses of current to the output by being in one of two states - fully ON or fully OFF. Both of these states dissipate little power in SW1! And conserving power is what battery/portable design is all about. If you need an output voltage that's smaller than the input voltage, then the Buck Converter is your choice. There's just a handful of components. Their job is to transform the current pulses via SW1 into a constant voltage at the load. How? By looking at some voltage / current waveforms and playing with component values, you'll get a feel for each one's role and how to optimize performance.
The Buck Converter is easy to understand if you look at the two main states of operation: SW1 ON and SW1 OFF.
◄ SW1 is ON ► L1 delivers current to the load
With a voltage (Vin - Vo) across L1, current rises linearly. How fast? The rise (in amps per second) is determined by
ΔI / ΔT = ( Vin - Vo) / L1
C1 smooths out L1's current changes into a stable voltage at Vo. Also, C1 is big enough such that Vo doesn't change significantly during one switching cycle. Where's D1? It's reverse biased and out of the picture for now.
◄ SW1 is OFF ► L1 maintains current to the load
As L1's magnetic field collapses, current falls linearly through L1. How fast? The fall (amps per second) is again determined by the voltage across L1 and its inductance.
ΔI / ΔT = ( Vo + VD) / L1
Although L1's current direction is the same, what's happened to L1's voltage? It's reversed! That's L1 maintaining current flow by reversing its voltage when the applied voltage is removed. Also,
check out what happens to D1 when the left end of L1 swings negative. Yes, it turns ON providing a path for L1's current to flow.
Given the components above, how do you control the exact output voltage? Typically, by using a Pulse-Width-Modulation (PWM) signal to drive SW1. This implies you need a pulse train that looks like
♦ A switching period of TS.
♦ An adjustable Pulse Width of TON (the time SW1 is ON)
Simply adjust the Duty Cycle (D = TON / TS) to get the output voltage you need!
At which frequency do you run this pulse train? Typically in the range of 10s to 100s of kHz. Why so high? There are two big benefits here:
1. As frequency goes up, parts usually get smaller, lighter and cheaper - very cool in portable design! You get a lot of power from a small volume of stuff. Or in other words - a high power
density (W/in^3).
2. The delay from input to output created by the switching time (Ts) becomes smaller.
So what's the big deal about delay? Later, when we place the Buck inside of a control loop, this delay can cause dreadful things to happen to the closed-loop response like overshoot, ringing or
oscillation! Shorter cycle times (smaller delay times) - compared to the LC or controller response time - means less potential trouble when closing the loop.
Simulating switch-mode supplies can be fun, but challenging. Why? There are two time frames we interested in:
1) The short cycle-by-cycle period of the pulse train turning SW1 ON and OFF (microseconds).
2) The longer response of the LC components as they respond to input or load changes (milliseconds).
As you can imagine, simulating the switching action may only require a few cycles. On the other hand, investigating the overall response may require simulating thousands of switching cycles.
Let's simulate the file BUCK_BASIC.CIR. VCTRL generates a pulse train of period TS = 20 μs and pulse-width TON = 5 μs. When VCTRL is at 5V, SW1 drops to 0.01 Ω connecting 20V (VIN) to L1. When VCTRL is at 0V, SW1 pops open to 1 MΩ, effectively disconnecting VIN from L1. RL represents the load (analog/digital circuitry, motors, lights, etc.) powered by the Buck Converter.
CIRCUIT INSIGHT First we'll take a look at the longer overall response. Run a simulation and take a look at Vo by plotting V(3). How much overshoot happens due to the LC components? What voltage does the output settle to? We might expect Vo to be related to VIN and D.
Vo = VIN ∙ D
= VIN ∙ ( TON / TS )
= 20 V ∙ ( 5 μs / 20 μs)
= 5 V
Add VCTRL to the plot by including trace V(10). Change its duty cycle by increasing or decreasing TON from 5 μs to values like 2.5, 10 or 15 μs. To do this, just change the 5US parameter in the PULSE
definition of VCTRL. Does the above equation do a decent job of predicting Vo?
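That duty-cycle experiment is easy to tabulate. A short Python check of the ideal relationship Vo = VIN ∙ (TON / TS) for the TON values suggested above (this is the ideal continuous-mode prediction only; the simulated Vo will sit slightly lower because of the diode drop):

```python
# Ideal duty-cycle sweep for the example converter.
VIN = 20.0   # input voltage, V
TS = 20e-6   # switching period, s

for TON in (2.5e-6, 5e-6, 10e-6, 15e-6):
    D = TON / TS
    print(f"TON = {TON*1e6:4.1f} us -> D = {D:.3f} -> Vo = {VIN*D:5.2f} V")
```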
Finally, check out the current through L1 by opening a new plot window and adding trace I(L1). Awesome! See it rise and fall as SW1 turns ON and OFF (controlled by VCTRL). Let's take a closer look in
the next section.
We'd like to see a few cycles of the Buck Converter's operation, but here's the challenge: we want to see the simulation results after a few hundreds cycles, when the supply has settled to a steady
state. How? Luckily, the Transient Analysis command comes with a handy feature that lets you throw away simulation results up to a specific delay time. For example, the statement
.TRAN 0.1US 840US 800US 0.1US
simulates the circuit to until 840 μs, but discards the data before 800 μs. Cool! The 40 μs saved represents two switching cycles for our viewing pleasure. Place an "*" in front of the original TRAN
statement and remove the "*" from the new statement with the delay.
CIRCUIT INSIGHT Set TON to 5 μs and run a simulation of BUCK_BASIC.CIR. Plot Vo at V(3), VCTRL at V(10) and in a separate plot window, view the inductor current I(L1). Here we have wonderful view of
I(L1) rising and falling as SW1 turns ON (VCTRL = 5 V) and SW1 turns OFF (VCTRL = 0 V).
Let's see the two different paths the inductor current takes as it rises and falls. Open a new plot window and add trace I(SW1). Wow, SW1's current is the same as L1's current, but only when SW1 is ON. Then it drops to 0 A. Sure, that makes sense as SW1 turns ON and OFF. Now add trace I(D1). Here D1's current is initially 0, then equals L1's current when D1 turns ON.
Finally take a look at the SW1's voltage at V(2). Basically we see VSW1 slammed to VIN = 20 V and then it drops to -0.3 V as D1 (Schottky diode) turns ON providing a pathway for L1's falling current.
Does L1's current rise and fall as expected? Let's check by first calculating the rise rate of
ΔI / ΔT = ( Vin - Vo) / L1
= ( 20 V - 5 V ) / 50 μH
= 300,000 A/s
= 0.3 A/μs
Then calculate the total rise while SW1 is ON for 5 μs
ΔI = (Vin - Vo)/L1 ∙ ΔT
= 0.3 A/μs ∙ 5 μs
= 1.5 A
Now, check your SPICE plot. Does the current rise by approximately 1.5 A? (Actually, the rise might be a bit higher because Vo is slightly less than 5V.) You can also predict the fall rate by the
equations above. What do you notice about the rise and fall of L1's current? Yes, they are equal! And this current change is appropriately named the inductor ripple current, ΔI.
What about the average inductor current, Iave? Find out by plotting AVG( I(L1) ). (If not using PSPICE, your simulator should have a similar function to plot the average of a variable.) Iave is
important because this is the current that gets delivered to the load RL. What is the load current?
Io = Vo / RL ≈ 5 V / 5 Ω = 1 A
Does Iave match Io? Now, suppose the demand for Io increases. What happens to the ripple and average inductor current? Double Io by cutting RL from 5 Ω to 2.5 Ω. Rerun the simulation and check out ΔI
and Iave? Iave doubles as expected! But notice - the ripple current ΔI remains the same! Why? Because Vin or Vo hasn't really changed. Next, we'll discover how the inductor ripple current plays a
factor in the output ripple voltage.
Here's the big question on everyone's mind - especially the folks using the your supply: how much voltage ripple ΔVo rides on the output? Why ask? This ripple gets thrown on anything driven by Vo:
IC's, transistors, voltage references, speakers, motors, etc. A large ΔVo could cause unexpected or poor behaviors for some components.
CIRCUIT INSIGHT Set TON = 5 μs and RL = 5 Ω. Run a simulation from 800 to 840 μs. Plot the output V(3) and inductor current I(L1) in separate windows. How big is ΔVo? There should be about 160 mVp-p
riding on the output!
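A standard back-of-the-envelope formula (not given in the original text) ties the ripple voltage to the ripple current: each half-cycle C1 absorbs a triangular chunk of ΔI, giving ΔVo ≈ ΔI∙Ts/(8∙C1) for an ideal, zero-ESR capacitor:

```python
# Charge-balance estimate of buck output ripple (ideal capacitor, no ESR).
dI = 1.5      # inductor ripple current, A
Ts = 20e-6    # switching period, s
C1 = 25e-6    # output capacitance, F

dVo = dI * Ts / (8 * C1)
print(dVo * 1e3)  # ~150 mV p-p, close to the simulated ~160 mVp-p
```

The small difference from the simulation is expected, since the formula is an idealization.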
HANDS-ON DESIGN What if the design goal is less than 50 mVp-p of ripple? There are several options available for lowering the ripple. Return the components to their original values: L1 = 50 μH, C1 = 25 μF and RL = 5 Ω.
CAPACITOR - C1
Given an inductor ripple current, C1 has the lone responsibility for absorbing ΔI to minimize ΔVo. Try increasing C1 from 25 μF to a value like 50 or 100 μF. Has ΔVo reduced? Excellent! Side note
- you might have to extend the simulation delay from 800 to 1000 μs. Why? A bigger C means a longer settling time for the LC combo.
INDUCTOR - L1
Let's lighten C1's burden of minimizing ΔVo by decreasing ΔI. The equation ΔI = (Vin - Vo)/L1 ∙ ΔT tells us that ΔI gets smaller as L1 gets bigger. With C1 = 25 μF, try increasing L1 from 50 μH to a value like 75 or 100 μH. Rerun the simulation. Did ΔVo shrink as well?

Cool, let's just put the mother of all inductors (and capacitors) in the circuit - thereby reducing ΔVo to nothing! Not so fast. Remember, larger values mean bigger, bulkier and more expensive components. Also keep in mind that big Ls and Cs slow down the supply's response time to input or load changes.
SWITCHING TIME - TS
The equation ΔI = ( Vin - Vo)/L1∙ΔT shows us another ripple reducing parameter - ΔT. If possible, reduce ΔT by choosing a shorter switching time Ts! Suppose you decreased Ts from 20 μs to 10 μs.
To maintain Vo = TON/Ts∙Vin = 5V, change TON from 5 to 2.5 μs. To do this in SPICE, change the WIDTH and PERIOD parameters of the PULSE statement from
5US 20US to 2.5US 10US. Rerun the simulation with L1=50 μH and C1=25 μF. Has a smaller ΔT shrunk ΔI and consequently ΔVo?
Okay faster is better - up to a point! Faster can mean more expensive components. Also, it takes a finite time to turn SW1 ON and OFF. During these times (ton, toff), SW1 dissipates a fair amount
of power. Unfortunately, you can make Ts so small that ton and toff take up a significant portion of it. You end up wasting lots of power and your efficiency ends up in the basement.
Up to now, we've seen current flowing continuously through L1. But there's a mode where the current goes to zero during the last portion of the switching cycle.
Continuous Mode (2 states of L1)
1) Current rises with SW1 ON.
2) Current falls with SW1 OFF.
Discontinuous mode (3 states of L1)
1) Current rises with SW1 ON.
2) Current falls with SW1 OFF.
3) Current goes to 0 A with SW1 OFF.
How does L1's current go to zero? Let's find out.
CIRCUIT INSIGHT Set TON = 5 μs, TS = 20 μs and RL = 5 Ω. Run a SPICE simulation and plot the output V(3) and inductor current I(L1) in separate windows. We see ΔI riding on top of Iave = 1A. Now
reduce the load by raising RL to 10 Ω. Rerun the circuit. A couple of interesting things here. First, L1's falling current drops to 0 A! Why? Iave is not big enough to keep L1's current above 0 A. And second,
Vo has risen significantly. Question: does Vo = VIN∙( TON/TS ) still hold in discontinuous mode? Raise RL to 20 Ω and vary TON to find out.
CIRCUIT INSIGHT Let's look at SW1's voltage VSW1. Set TON = 5 μs, TS = 20 μs, RL = 5 Ω, and run a SPICE simulation. Plot SW1's voltage V(2) and inductor current I(L1). As before, we see VSW1 swing to
20 V with SW1 ON and then swing back to -0.3 V as D1 turns ON, providing a pathway for L1's falling current.
But now raise RL to 10 Ω and rerun the circuit. Wow! What's happening to VSW1 when L1's current goes to zero and D1 turns OFF? You've got some major ringing here. Why? With D1 and SW1 OFF you'd
expect a high impedance at one end of L1. However, D1 presents some parasitic capacitance to the circuit. And when this capacitance sees L1, they hit the dance floor and ring until SW1 turns ON
again. (SW1 also presents some parasitic capacitance not modeled here.)
DESIGN TIME Typically, application notes recommend running your supply in continuous mode for your expected loads. Why? You get big benefits when optimizing a Buck converter inside of a control loop:
□ First, the gain is stable. In continuous mode, Vo is approximately set by VIN and D only, regardless of load or other component values. In discontinuous mode, Vo depends on VIN, D, L1, RL and TS.
□ Second, for continuous and discontinuous modes, the frequency responses are different. You can spend time tuning the control loop for a good transient response in continuous mode only to see it change in discontinuous mode.
For a given load, how do you place your supply in continuous mode? Increase L1 until ΔI is small enough compared to Iave to keep the current above zero during the entire cycle. However, some
applications can have a wide range of load conditions where entering discontinuous mode may be unavoidable.
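The continuous/discontinuous boundary can also be estimated on paper: the inductor current stays above zero as long as Iave = Vo/RL exceeds half the ripple, ΔI/2. A sketch using this tutorial's values (ideal components assumed):

```python
# Continuous conduction requires Iave = VO/RL > dI/2 = (VIN - VO)*TON / (2*L1).
VIN, VO, TON = 20.0, 5.0, 5e-6

def mode(L1, RL):
    i_ave = VO / RL
    half_ripple = (VIN - VO) * TON / (2.0 * L1)
    return "continuous" if i_ave > half_ripple else "discontinuous"

print(mode(50e-6, 5))   # continuous    (Iave = 1.0 A  > dI/2 = 0.75 A)
print(mode(50e-6, 10))  # discontinuous (Iave = 0.5 A  < dI/2 = 0.75 A)

def L_crit(RL):
    """Smallest L1 that keeps the supply in continuous mode at load RL."""
    return (VIN - VO) * TON * RL / (2.0 * VO)

print(round(L_crit(10) * 1e6), "uH")  # 75 uH keeps RL = 10 ohms continuous
```

This matches the simulation: with L1 = 50 μH the supply is continuous at RL = 5 Ω but goes discontinuous at RL = 10 Ω.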
Up to now, C1 has successfully reduced ΔVo. But real capacitors behave as if there's a small resistor in series with the capacitance - the Equivalent Series Resistance (ESR). Change the C1 statement
to two statements.
C1 3 4 25UF
RC1 4 0 0.5
RC1 = 0.5 Ω models the ESR of C1. With L1 = 50 μH, C1 = 25 μF and RL = 5 Ω, we saw ΔVo = 160 mVp-p. Will adding ESR have an effect on ΔVo?
CIRCUIT INSIGHT Run a simulation and plot V(3) and I(L1) in separate windows. How big is ΔVo? Wow, the output ripple is horrible. Why? The inductor ripple ΔI, normally absorbed by C1, flows right
through the ESR adding to the voltage ripple! Can we predict the ripple from ESR?
ΔVo = ΔI ∙ ESR = 1.5 A ∙ 0.5 Ω = 0.75 V
HANDS-ON DESIGN Okay, let's crank up C1 to 50 or 100 μF and rerun the simulation. Unfortunately, no progress here - C1 just looks like more of a short circuit to the ESR. What other options do you
have? Basically, you need to reduce ΔI, ESR or both. Try increasing L1 to 100 μH to knock down ΔI. Any improvement? Suppose you buy a better capacitor with a lower ESR. Reduce RC1 to 0.2 or 0.1 Ω. How
much of ΔVo remains? Okay, try reducing ΔI by picking a higher switching frequency (smaller Ts) to squash ΔVo.
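The ESR ripple formula makes these trade-offs easy to explore numerically - a sketch, again with this tutorial's values:

```python
# Output ripple contributed by the capacitor's ESR: dVo ≈ dI * ESR, where
# dI = (VIN - VO)/L1 * TON is the inductor ripple current.
VIN, VO, TON = 20.0, 5.0, 5e-6

def esr_ripple(L1, esr):
    dI = (VIN - VO) / L1 * TON
    return dI * esr

for esr in (0.5, 0.2, 0.1):
    mv = esr_ripple(50e-6, esr) * 1000
    print(f"L1 = 50 uH, ESR = {esr} ohm -> dVo = {mv:.0f} mV peak-to-peak")

# Doubling L1 halves dI, and therefore halves the ESR ripple too:
print(esr_ripple(100e-6, 0.5))  # about half of the 0.75 V we saw at 50 uH
```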
If you made it this far - high fives for you! Hopefully, by experimenting with component values, you've got a good feel for switch-mode power supplies. The only way to learn the river's rapids is to
launch the kayak and start paddling. We've got more thrills ahead - closing the control loop around the Buck converter! And we'll measure and optimize efficiency in upcoming topics.
Check out Voltage Control Mode to see how the feedback loop is closed.
The Buck Converter AC Model helps you create an AC SPICE model.
Tuning a Buck Converter lets you compensate a converter for minimum overshoot and ringing.
Find out where power gets lost in the topic Buck Converter Power Loss.
Download the file or copy this netlist into a text file with the *.cir extension.
VCTRL 10 0 PULSE(0V 5V 0 0.01US 0.01US 5US 20US)
R10 10 0 1MEG
VIN 1 0 DC 20
SW1 1 2 10 0 SW
D1 0 2 DSCH
L1 2 3 50UH
C1 3 0 25UF
* LOAD
RL 3 0 5
.MODEL SW VSWITCH(VON=5V VOFF=0V RON=0.01 ROFF=1MEG)
.MODEL DSCH D( IS=0.0002 RS=0.05 CJO=5e-10 )
.TRAN 1US 800US
*.TRAN 0.1US 840US 800US 0.1US
* VIEW RESULTS
.PLOT TRAN V(2) V(3)
© 2005 eCircuit Center | {"url":"http://www.ecircuitcenter.com/Circuits/smps_buck/smps_buck.htm","timestamp":"2014-04-21T09:46:33Z","content_type":null,"content_length":"31728","record_id":"<urn:uuid:89f192db-d50a-4e0e-9ca3-6268509a9391>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00580-ip-10-147-4-33.ec2.internal.warc.gz"} |
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50f44943e4b0694eaccfc760","timestamp":"2014-04-18T10:36:15Z","content_type":null,"content_length":"68060","record_id":"<urn:uuid:6f83d6ee-487b-4184-8ce0-a46d2a9920bf>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00444-ip-10-147-4-33.ec2.internal.warc.gz"} |
6. Multiplication and Division of Fractions
Recall the following fraction facts:
When multiplying by a fraction, multiply numerators and multiply denominators:
`2/3xx 5/7=(2xx5)/(3xx7)=10/21`
If you can, simplify first :
`13/24xx 12/39=(1xx1)/(2xx3)=1/6`
(I canceled the `13` & `39` to give `1/3` and the `12` with the `24` to give `1/2`.)
When dividing by a fraction, invert and multiply:
`3/5-:2/7=3/5xx7/2=(3xx7)/(5xx2)=21/10=2 1/10`
(I multiplied by the inverse of `2/7`, which is `7/2`.)
When we do the same things with algebraic expressions, remember to SIMPLIFY FIRST, so that the problem is easy to perform.
Example 1
Simplifying first, we cancel the 11 in the first fraction with the 33 on the bottom of the second fraction:
Example 2
Example 3
(1) `5/16-:25/13`
(2) `(9x^2-16)/(x+1)-:(4-3x)`
(3) `(2x^2-18)/(x^3-25x)xx(3x-15)/(2x^2+6x)`
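Answers to exercises like these can be spot-checked with a short script. Here's a sketch using Python's `fractions` module; the simplified form in `simplified` is my own working, not taken from the page:

```python
from fractions import Fraction

# Exercise (1): 5/16 ÷ 25/13 — invert and multiply.
ans1 = Fraction(5, 16) / Fraction(25, 13)
print(ans1)  # 13/80

# Exercise (3): verify a hand simplification at sample points.
# Claimed simplification (my working): the product reduces to 3(x-3)/(x^2(x+5)).
def product(x):
    return Fraction(2*x*x - 18, x**3 - 25*x) * Fraction(3*x - 15, 2*x*x + 6*x)

def simplified(x):
    return Fraction(3*(x - 3), x*x*(x + 5))

for x in (2, 7, 11):   # avoid x = 0, 5, -5, -3, where the original is undefined
    print(x, product(x), simplified(x))
```

If the two columns agree at several sample points, the cancellation was almost certainly done correctly.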
More info: Algebra videos | {"url":"http://www.intmath.com/factoring-fractions/6-multiplication-division-fractions.php","timestamp":"2014-04-21T12:08:23Z","content_type":null,"content_length":"22906","record_id":"<urn:uuid:44732c90-e0c9-4919-82db-4f213c23ec0c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
Typed MSR: Syntax and examples
Results 1 - 10 of 28
, 2002
"... We present the first type and effect system for proving authenticity properties of security protocols based on asymmetric cryptography. The most significant new features of our type system are:
(1) a separation of public types (for data possibly sent to the opponent) from tainted types (for data pos ..."
Cited by 69 (9 self)
Add to MetaCart
We present the first type and effect system for proving authenticity properties of security protocols based on asymmetric cryptography. The most significant new features of our type system are: (1) a
separation of public types (for data possibly sent to the opponent) from tainted types (for data possibly received from the opponent) via a subtype relation; (2) trust effects, to guarantee that
tainted data does not, in fact, originate from the opponent; and (3) challenge/response types to support a variety of idioms used to guarantee message freshness. We illustrate the applicability of
our system via protocol examples.
- Journal of Computer Security , 2002
"... We formalize the Dolev-Yao model of security protocols, using a notation based on multi-set rewriting with existentials. The goals are to provide a simple formal notation for describing security
protocols, to formalize the assumptions of the Dolev-Yao model using this notation, and to analyze the ..."
Cited by 56 (5 self)
Add to MetaCart
We formalize the Dolev-Yao model of security protocols, using a notation based on multi-set rewriting with existentials. The goals are to provide a simple formal notation for describing security
protocols, to formalize the assumptions of the Dolev-Yao model using this notation, and to analyze the complexity of the secrecy problem under various restrictions. We prove that, even for the case
where we restrict the size of messages and the depth of message encryption, the secrecy problem is undecidable for the case of an unrestricted number of protocol roles and an unbounded number of new
nonces. We also identify several decidable classes, including a dexp-complete class when the number of nonces is restricted, and an np-complete class when both the number of nonces and the number of
roles is restricted. We point out a remaining open complexity problem, and discuss the implications these results have on the general topic of protocol analysis.
, 2002
"... CLF is a new logical framework with an intrinsic notion of concurrency. It is designed as a conservative extension of the linear logical framework LLF with the synchronous connectives # of
intuitionistic linear logic, encapsulated in a monad. LLF is itself a conservative extension of LF with the ..."
Cited by 46 (30 self)
Add to MetaCart
CLF is a new logical framework with an intrinsic notion of concurrency. It is designed as a conservative extension of the linear logical framework LLF with the synchronous connectives # of
intuitionistic linear logic, encapsulated in a monad. LLF is itself a conservative extension of LF with the asynchronous connectives #.
, 2003
"... We present the propositional fragment CLF0 of the Concurrent Logical Framework (CLF). CLF extends the Linear Logical Framework to allow the natural representation of concurrent computations in
an object language. The underlying type theory uses monadic types to segregate values from computations ..."
Cited by 31 (3 self)
Add to MetaCart
We present the propositional fragment CLF0 of the Concurrent Logical Framework (CLF). CLF extends the Linear Logical Framework to allow the natural representation of concurrent computations in an
object language. The underlying type theory uses monadic types to segregate values from computations. This separation leads to a tractable notion of definitional equality that identifies computations
differing only in the order of execution of independent steps. From a logical point of view our type theory can be seen as a novel combination of lax logic and dual intuitionistic linear logic. An
encoding of a small Petri net exemplifies the representation methodology, which can be summarized as "concurrent computations as monadic expressions ".
- IN PROC. WITS’06 , 2006
"... We report on a man-in-the-middle attack on PKINIT, the public key extension of the widely deployed Kerberos 5 authentication protocol. This flaw allows an attacker to impersonate Kerberos
administrative principals (KDC) and end-servers to a client, hence breaching the authentication guarantees o ..."
Cited by 30 (5 self)
Add to MetaCart
We report on a man-in-the-middle attack on PKINIT, the public key extension of the widely deployed Kerberos 5 authentication protocol. This flaw allows an attacker to impersonate Kerberos
administrative principals (KDC) and end-servers to a client, hence breaching the authentication guarantees of Kerberos. It also gives the attacker the keys that the KDC would normally generate to
encrypt the service requests of this client, hence defeating confidentiality as well. The discovery of this attack caused the IETF to change the specification of PKINIT and Microsoft to release a
security update for some Windows operating systems. We
, 2004
"... We give three formalizations of the Kerberos 5 authentication protocol in the Multi-Set Rewriting (MSR) formalism. One is a high-level formalization containing just enough detail to prove
authentication and confidentiality properties of the protocol. A second formalization refines this by adding a v ..."
Cited by 22 (10 self)
Add to MetaCart
We give three formalizations of the Kerberos 5 authentication protocol in the Multi-Set Rewriting (MSR) formalism. One is a high-level formalization containing just enough detail to prove
authentication and confidentiality properties of the protocol. A second formalization refines this by adding a variety of protocol options; we similarly refine proofs of properties in the first
formalization to prove properties of the second formalization. Our third formalization adds timestamps to the first formalization but has not been analyzed extensively. The various proofs make use of
rank and corank functions, inspired by work of Schneider in CSP, and provide examples of reasoning about real-world protocols in MSR. We also note some potentially curious protocol behavior; given
our positive results, this
- Proceedings of the Sixteenth Annual Symposium on Logic in Computer Science | LICS'01 , 2001
"... Most systems designed for the verification of security protocols operate under the unproved assumption that an attack can only result from the combination of a fixed number of message
transformations, which altogether constitute the capabilities of the so-called Dolev-Yao intruder. In this paper, we ..."
Cited by 22 (1 self)
Add to MetaCart
Most systems designed for the verification of security protocols operate under the unproved assumption that an attack can only result from the combination of a fixed number of message
transformations, which altogether constitute the capabilities of the so-called Dolev-Yao intruder. In this paper, we prove that the Dolev-Yao intruder can indeed emulate the actions of an arbitrary
adversary. In order to do so, we extend MSR, a flexible specification framework for security protocols based on typed multiset rewriting, with a static check called access control, aimed at catching
specification errors such as a principal trying to use a key that she is not entitled to access. Cryptographic protocols are increasingly used to secure transactions over the Internet and protect
access to computer systems. Their design and analysis are notoriously complex and error-prone. Sources of difficulty include subtleties in the cryptographic primitives they rely on, and their
deployment in distributed envi...
, 2001
"... MSR is an unambiguous, flexible, powerful and relatively simple specification framework for crypto-protocols. It uses multiset rewriting rules over first-order atomic formulas to express
protocol actions and relies on a form of existential quantification to symbolically model the generation of no ..."
Cited by 19 (10 self)
Add to MetaCart
MSR is an unambiguous, flexible, powerful and relatively simple specification framework for crypto-protocols. It uses multiset rewriting rules over first-order atomic formulas to express protocol
actions and relies on a form of existential quantification to symbolically model the generation of nonces and other fresh data. It supports an array of useful static checks that include type-checking
and data access verification. In this paper, we give a detailed presentation of the typing infrastructure of MSR, which is based on the theory of dependent types with subsorting. We prove that
type-checking protocol specifications is decidable and show that execution preserves well-typing. We illustrate these features by formalizing a well-known protocol in MSR.
- THEOR. COMP. SCI., SPECIAL , 2006
"... We report on the detailed verification of a substantial portion of the Kerberos 5 protocol specification. Because it targeted a deployed protocol rather than an academic abstraction, this
multi-year effort led to the development of new analysis methods in order to manage the inherent complexity. Thi ..."
Cited by 15 (3 self)
Add to MetaCart
We report on the detailed verification of a substantial portion of the Kerberos 5 protocol specification. Because it targeted a deployed protocol rather than an academic abstraction, this multi-year
effort led to the development of new analysis methods in order to manage the inherent complexity. This enabled proving that Kerberos supports the expected authentication and confidentiality
properties, and that it is structurally sound; these results rely on a pair of intertwined inductions. Our work also detected a number of innocuous but nonetheless unexpected behaviors, and it
clearly described how vulnerable the cross-realm authentication support of Kerberos is to the compromise of remote administrative domains.
- In Formal Aspects in Security and Trust , 2004
"... Abstract. Cryptographic protocols often make use of nested cryptographic primitives, for example signed message digests, or encrypted signed messages. Gordon and Jeffrey’s prior work on types
for authenticity did not allow for such nested cryptography. In this work, we present the pattern-matching s ..."
Cited by 14 (0 self)
Add to MetaCart
Abstract. Cryptographic protocols often make use of nested cryptographic primitives, for example signed message digests, or encrypted signed messages. Gordon and Jeffrey’s prior work on types for
authenticity did not allow for such nested cryptography. In this work, we present the pattern-matching spi-calculus, which is an obvious extension of the spi-calculus to include pattern-matching as
primitive. The novelty of the language is in the accompanying type system, which uses the same language of patterns to describe complex data dependencies which cannot be described using prior type
systems. We show that any appropriately typed process is guaranteed to satisfy a strong robust safety property. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=481235","timestamp":"2014-04-16T09:02:30Z","content_type":null,"content_length":"38293","record_id":"<urn:uuid:dd545603-40df-466d-b67f-fc2cb0fbe9a0>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
RMS Speed
for the PV = nRT part, see if all your units cancel out. Keep in mind that the R value you have used will only cancel out pascals.
also how do you find out mass from the number of moles
number of moles = mass / molar mass
what is the molar mass of N2??
Ok I took all your suggestions and I found the new Volume to be 7000 L, which would make the new T about 223859 Kelvin.
Then the molar mass I found by dividing (4.648e-26 kg)/1600 mol and got 2.905e-29
I subsitituted all those in and got 564826 m/s and that still seems far off.
I think I messed up the molar mass, any ideas on what I am doing wrong?
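The numbers in this thread can be sanity-checked with a short script. The molar mass of N2 is 28.0 g/mol = 0.0280 kg/mol (not 10^-29 kg - that value came from dividing a single molecule's mass by the mole count). A sketch, with the temperature left as a parameter since the thread's full problem statement isn't shown:

```python
import math

R = 8.314          # J / (mol K)
M_N2 = 0.0280      # kg/mol: molar mass of N2 (2 x 14.0 g/mol)

def v_rms(T):
    """Root-mean-square speed v = sqrt(3 R T / M) for N2 at temperature T (K)."""
    return math.sqrt(3.0 * R * T / M_N2)

print(round(v_rms(300)))   # roughly 517 m/s at room temperature
```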
Thanks so much for all your help!! I really appreciate it! | {"url":"http://www.physicsforums.com/showthread.php?t=88339","timestamp":"2014-04-19T07:29:55Z","content_type":null,"content_length":"35402","record_id":"<urn:uuid:79a1fba6-5b1c-4bae-9e75-37be4736cdc7>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00269-ip-10-147-4-33.ec2.internal.warc.gz"} |
[11.06] Instabilities of Stellar Disks
34th Meeting of the AAS Division on Dynamical Astronomy, May 2003
11 Disks
Oral, Wednesday, May 7, 2003, 8:30-10:30am,
M. A. Jalali, C. Hunter (Dept. of Mathematics, Florida State U.)
We investigate the instabilities of the flat isochrone and Kuzmin-Toomre disks using Kalnajs's matrix method. For the unperturbed disks in equilibrium, we introduce a new class of anisotropic
distribution functions (DF) in the form f(E, L_z) = f_0(E) + f_1(E, L_z). At first, we prescribe f_1(E, L_z), determine its corresponding surface density and subtract it from the model density. We
then reproduce the remainder density by the isotropic part of the DF, f_0(E). The DFs that we generate enable us to control the population of circular, radial and rosette orbits. We investigate how
the populations of these orbits influence the instability of our axisymmetric disks. To compute the matrix elements of Kalnajs's method, we choose orbital frequencies as the integration variables and
regularize resonance singularities using the Legendre functions of the second kind. The response of the disk, and its unstable modes, are then computed through an iterative scheme.
The author(s) of this abstract have provided an email address for comments about the abstract: mjalali@math.fsu.edu
Bulletin of the American Astronomical Society, 35 #4
© 2003. The American Astronomical Society. | {"url":"http://aas.org/archives/BAAS/v35n4/dda2003/56.htm","timestamp":"2014-04-20T18:58:05Z","content_type":null,"content_length":"2776","record_id":"<urn:uuid:64f5bf8e-c7f5-4eae-bb9b-8ab540f6f84f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
Collegeville, PA Calculus Tutor
Find a Collegeville, PA Calculus Tutor
...I have learned a great deal from my students in this process! My tutoring focuses on a solid understanding of the material and a consistent and methodical approach to problem-solving, with
special attention paid to a good foundation in mathematical methods. I am a native German-speaker, and have been working for several years as a German-to-English translator.
21 Subjects: including calculus, reading, writing, algebra 1
...Someone who wants to make a difference in your child's life. Every minute that I spend tutoring is time away from my kids, so I'm determined to make sure that those minutes are worth it. Hope
to hear from you!I have both a B.S. and a Ph.D. in Chemical Engineering.
16 Subjects: including calculus, chemistry, physics, geometry
...I started my career in my college's learning lab working with college students of all levels. At the same time I became employed by Sylvan Learning Center. I would instruct three students at a
time with levels ranging from elementary math through all levels of algebra, geometry, trigonometry, and calculus.
10 Subjects: including calculus, geometry, algebra 1, ASVAB
...The ACT English section will test your ability to write, to read, and to detect subtle errors in various passages. You will need to master both common writing conventions (such as proper use
of the comma and semicolon, etc.) and more advanced grammatical concepts (who vs whom, irregular past par...
34 Subjects: including calculus, English, writing, physics
...I was fortunate enough to earn an MS in Applied Physics from the University of Maryland Baltimore County. After receiving my MS I was an instructor in the Physics department at the US Naval
Academy for 2 years. One of the great parts about teaching in a smaller school like the US Naval Academy is the experience of working with students one-on-one.
3 Subjects: including calculus, chemistry, physics
Collegeville, PA Trigonometry Tutors | {"url":"http://www.purplemath.com/Collegeville_PA_Calculus_tutors.php","timestamp":"2014-04-16T13:14:55Z","content_type":null,"content_length":"24328","record_id":"<urn:uuid:4a31533d-e434-448e-942f-136e6f68b2a8>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00290-ip-10-147-4-33.ec2.internal.warc.gz"} |
■[Science][E] AllRGB, Optimization, Polymer, and Origami (draft)
http://allrgb.com challenges hackers to create a 4096x4096 image, using each of the 2^24 colors once and only once.
There are many possible solutions, and I thought applying some optimization would produce interesting images. One choice is to make the image as smooth as possible.
Almost all models in physics contain a so-called gradient term in their energy, which drives the system to a smoother state. It is written as $(\nabla f)^2$ and it evaluates how rapidly some quantity f,
be it a wave function, air density, etc., changes in space. That is, the more smoothly f changes, the lower the energy becomes. In general, a lower energy state has a smoother appearance. In particular,
if f is constant everywhere, the gradient term is zero. In the case of an allrgb image, however, the pixel value can't be the same everywhere because every value must appear somewhere, so the
optimization will produce some smooth, non-trivial image. This is an interesting optimization problem.
will produce some smooth, non-trivial image. This is an interesting optimization problem.
Lets say we have an image R(x,y), G(x,y), B(x,y) where R, G, B=(0 .. 255) is the pixel value of RGB at position (x,y). The gradient term, hereafter we call it "Energy" and denote by E, is calculated
as $(R(x,y)-R(x+1,y))^2 + (R(x,y)-R(x,y+1))^2$ + (same for G and B), summed over all positions (x,y). In other word, it is squared color difference summed over all neighbor pixel pairs as shown
figure here
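In code, E is just a loop over neighbor pairs. A sketch (my own illustration; the image is stored as rows of (R, G, B) tuples and boundaries are not wrapped):

```python
def energy(img):
    """Sum of squared RGB differences over all horizontally and vertically
    adjacent pixel pairs. img[y][x] is an (R, G, B) tuple."""
    H, W = len(img), len(img[0])
    E = 0
    for y in range(H):
        for x in range(W):
            for nx, ny in ((x + 1, y), (x, y + 1)):   # right and down neighbors
                if nx < W and ny < H:
                    E += sum((a - b) ** 2 for a, b in zip(img[y][x], img[ny][nx]))
    return E

# 2x2 toy image: each of the four neighbor pairs differs by 1 in one channel,
# so every pair contributes the per-pair minimum of 1.
img = [[(0, 0, 0), (1, 0, 0)],
       [(0, 1, 0), (1, 1, 0)]]
print(energy(img))  # 4
```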
Since each pixel differs in color, the difference can't be zero; the minimum value is 1, attained when only one of R, G, B differs by 1 and the rest are the same. Or equivalently, it is minimum when the colors
of neighbor pixels are also neighbors in the 256^3 RGB color space. Thus, when every neighbor pixel pair is also a neighbor pair in RGB space, E attains its theoretical minimum. Is it possible?
First, let us consider a simpler case: we have a 65536x1 image and place 256^2 = 65536 colors, R = (0..255), G = (0..255), and B = 0, onto the image. We require each neighbor pixel pair on the image
to also be a neighbor pair in the 256^2 Red-Green color space. In this case, there are many possible solutions. The following figure shows a typical example.
figure here
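One explicit solution of this kind is the boustrophedon ("snake") scan of the 256x256 RG grid: sweep each row of the color grid, alternating direction, so consecutive colors are always grid neighbors. A sketch:

```python
def snake(n):
    """Boustrophedon scan of an n x n grid: a Hamiltonian path whose
    consecutive points are always grid neighbors."""
    path = []
    for r in range(n):
        cols = range(n) if r % 2 == 0 else range(n - 1, -1, -1)
        for g in cols:
            path.append((r, g))
    return path

p = snake(256)                       # an ordering of all 256^2 (R, G) colors
assert len(set(p)) == 256 * 256      # every color used once and only once
assert all(abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1 for a, b in zip(p, p[1:]))
print("every consecutive color pair is a neighbor pair in RG space")
```

Laying `p` out along the 65536x1 image gives squared neighbor color distance exactly 1 at every pair - the theoretical minimum of E for this two-channel case.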
The solution can be visualized as a winding curve in the RG color space which consists of only 1-length segments and visits each grid once and only once. There are several names for this kind of
curve. One is "self-avoiding path", because the curve never intersects with itself, otherwise a certain color appears twice or more. This kind of model is used to study the behavior of flexible
polymer chains, since it consists of constant length line segments and can't cut across itself.
"Hamiltonian path" is more specific classification: it visits every point once and only once. The Hilbert curve, nicely explained here http://corte.si/posts/code/hilbert/portrait/index.html in
detail, is a typical example. These are 1D lines which fills 2D plane.
Now back to the RGB case: We place 256^3 colors on a 4096^2 image. Consider four pixels shown below:
figure here
Suppose that pixels P1 and P2 are already neighbors in RGB space, say (R,G,B) and (R+1,G,B). P3 should be a neighbor of P1 in RGB space. Since one neighbor (R+1,G,B) is already occupied, P3 should be
either (R-1,G,B), (R,G+-1,B) or (R,G,B+-1). But if we set P3=(R-1,G,B), there is no way to place P4 so that it is a neighbor of both P2 and P3: the only valid choice is (R,G,B), but it is already occupied by
P1. Thus P3 should be either (R,G+-1,B) or (R,G,B+-1). If we set P3=(R,G+1,B), the only valid choice for P4 is (R+1,G+1,B), and the 4 pixels form a 1x1 square in the RGB space too. This is true
whichever value you choose for P3. Thus the theoretical minimum of E is realized when all squares on the image also form 1x1 squares in the RGB space.
It corresponds to folding a sheet of paper consisting of 4096x4096 small squares and putting it into a 256^3 box. Is it possible? Unfortunately the answer is no. Unlike a 1D curve, which can bend in any
direction at any point, a sheet of paper can't: imagine folding a sheet of paper into a square pillar and then trying to bend that pillar 90 degrees. You can't, because a paper can bend in only one direction at a time.
Suppose that you have a sheet of rubber instead of paper. Then you can bend it as shown below:
Here the stretched part corresponds to a link between neighbor pixels that are "second neighbors" in the RGB space, that is, two of the R, G, B values differ by 1. Thus it is expected that in an optimized
image a large fraction of neighbor pixel pairs have color distance 1, while some neighbors have color distances of 2 or more.
Using the simulated annealing technique, in which one starts from a high-temperature, high-energy state and gradually cools down the system to reach a lower energy state, I have obtained images with an
average squared neighbor color distance of about 1.38, which indicates that many neighbors have color distances of 2 or more.
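For readers who want to try this themselves, here is a minimal Metropolis-style annealer for a toy version of the problem - a 16x16 image holding all 256 colors of a 16x16 RG grid. This is my own illustration (the cooling schedule and move set are arbitrary), not the code linked below:

```python
import math
import random

N = 16                                   # 16*16 pixels = 256 distinct (R, G) colors
random.seed(1)
colors = [(r, g) for r in range(N) for g in range(N)]
random.shuffle(colors)
grid = [colors[i*N:(i+1)*N] for i in range(N)]

def d2(a, b):
    return (a[0] - b[0])**2 + (a[1] - b[1])**2

def local_E(x, y):
    """Energy of the bonds touching pixel (x, y)."""
    return sum(d2(grid[y][x], grid[y+dy][x+dx])
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
               if 0 <= x + dx < N and 0 <= y + dy < N)

def total_E():
    return (sum(d2(grid[y][x], grid[y][x+1]) for y in range(N) for x in range(N-1))
          + sum(d2(grid[y][x], grid[y+1][x]) for y in range(N-1) for x in range(N)))

E0, T = total_E(), 30.0
for step in range(40000):                # propose pixel swaps while cooling
    x1, y1, x2, y2 = (random.randrange(N) for _ in range(4))
    before = local_E(x1, y1) + local_E(x2, y2)
    grid[y1][x1], grid[y2][x2] = grid[y2][x2], grid[y1][x1]
    dE = local_E(x1, y1) + local_E(x2, y2) - before
    if dE > 0 and random.random() >= math.exp(-dE / T):
        grid[y1][x1], grid[y2][x2] = grid[y2][x2], grid[y1][x1]   # reject: swap back
    T *= 0.9998
print(E0, "->", total_E())               # the energy drops as the image smooths out
```

One subtlety: when the two chosen pixels are adjacent, their mutual bond appears twice in both local sums, but a swap leaves that bond's length unchanged, so dE is still computed correctly.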
Here are my Simulated Annealing codes:
This one is easier to read
This one is faster but unkind for source diver | {"url":"http://d.hatena.ne.jp/ita/20100222","timestamp":"2014-04-18T10:46:07Z","content_type":null,"content_length":"41466","record_id":"<urn:uuid:a6faf3dd-0a34-499b-b9f3-89de8e57872e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elgin, IL Algebra 2 Tutor
Find an Elgin, IL Algebra 2 Tutor
...SYSTEMS OF LINEAR EQUATIONS AND INEQUALITIES. Systems of Linear Equations in Two Variables. Systems of Linear Equations in Three Variables.
17 Subjects: including algebra 2, reading, calculus, geometry
...I have a degree in Mathematics from Augustana College. I am currently pursuing my Teaching Certification from North Central College. I have assisted in Pre-Algebra, Algebra, and Pre-Calculus
7 Subjects: including algebra 2, geometry, algebra 1, trigonometry
I have mastered skills in many diverse areas. Determination, confidence, and knowing how to study have been the keys to my success. I focus on helping my students build these fundamental
attributes in addition to learning the subject.
67 Subjects: including algebra 2, chemistry, Spanish, English
...I am currently teaching 6th grade math and science for my third year. For 2 years I taught 7th grade geometry. I taught in Carpentersville for District 300 for 2 years and am now in my third
year in Round Lake.
18 Subjects: including algebra 2, reading, writing, geometry
...Over the course of my education (Ph.D. Electrical Engineering) I have done various courses that required knowledge of Linear Algebra. My Ph.D. thesis also involved the use of linear algebra.
30 Subjects: including algebra 2, calculus, physics, statistics
Elgin, IL Trigonometry Tutors | {"url":"http://www.purplemath.com/Elgin_IL_Algebra_2_tutors.php","timestamp":"2014-04-21T14:53:00Z","content_type":null,"content_length":"23573","record_id":"<urn:uuid:8702ea6d-a996-46bd-92ab-ca9f999dd428>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00205-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using substitution to solve
June 10th 2009, 08:40 PM #1
Junior Member
Oct 2007
Using substitution to solve
I have two problems...
$({{x+5}\over{x}})^{1\over2}+4({{x}\over{x+5}})^{1\over2}=4$
My teacher said substitution is preferred, but I only seem to make it more complicated. If substitution is not possible, solve regularly.
Can you get me started? Thanks!
I have two problems...
$({{x+5}\over{x}})^{1\over2}+4({{x}\over{x+5}})^{1\over2}=4$
My teacher said substitution is preferred, but I only seem to make it more complicated. If substitution is not possible, solve regularly.
Can you get me started? Thanks!
This should get you started:
1) Substitute $t = \left(\frac{x+5}{x}\right)^{\frac12}$, and you will get a quadratic...
2) Just try squaring both sides, you will see the magic
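For the record, carrying substitution (1) through is routine: since $\left(\frac{x}{x+5}\right)^{1/2} = 1/t$, the equation becomes $t + 4/t = 4$, and the algebra runs

```latex
t + \frac{4}{t} = 4
\;\Longrightarrow\; t^2 - 4t + 4 = 0
\;\Longrightarrow\; (t-2)^2 = 0
\;\Longrightarrow\; t = 2,
\qquad\text{so}\qquad
\frac{x+5}{x} = t^2 = 4
\;\Longrightarrow\; x + 5 = 4x
\;\Longrightarrow\; x = \frac{5}{3}.
```

(Checking in the original equation: $t = 2$ and $1/t = 1/2$ give $2 + 4\cdot\frac12 = 4$.)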
June 10th 2009, 08:46 PM #2 | {"url":"http://mathhelpforum.com/algebra/92521-using-substitution-solve.html","timestamp":"2014-04-16T11:04:22Z","content_type":null,"content_length":"33406","record_id":"<urn:uuid:40ba4ef4-ddf6-4f8c-8beb-269059b2fba7>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00635-ip-10-147-4-33.ec2.internal.warc.gz"} |
Millbourne, PA Trigonometry Tutor
Find a Millbourne, PA Trigonometry Tutor
...While purchasing an SAT book is a good first step in preparation, having someone to guide you through a topic you are unfamiliar with can make all the difference in understanding how to solve
a problem. Whether it is just clarification for some problems you are having for a one-time meeting, or ...
9 Subjects: including trigonometry, chemistry, algebra 2, geometry
...I have been trained to teach Geometry according to the Common Core Standards. I have planned and executed numerous lessons for classes of high school students, as well as tutored many
independently. I have been trained to teach Trigonometry according to the Common Core Standards.
11 Subjects: including trigonometry, calculus, geometry, algebra 1
...Perhaps you're ready to make a change in your own life and need a little help to get started. Tutoring can help you hone your skills or to catch up in areas of weakness, and I'm eager to help!
While I have successfully tutored students in many subjects, I consider myself first and foremost a teacher of writing.
47 Subjects: including trigonometry, chemistry, English, reading
...I love working with students and helping them to reach their full potential. Although the majority of my years working with students has been at the high school level, I am very willing,
capable and interested in working with younger students, also. I enjoy working with students and take pride in showing students their full potential can be obtained through hard work and
9 Subjects: including trigonometry, geometry, algebra 2, algebra 1
...I have ten years of tutoring experience, a bachelors degree in Math, and a Master's degree in Education. I teach test prep for Temple University, and have been very successful in tutoring
students in math and statistics. I have a bachelor's degree in math from the University of London, UK.
22 Subjects: including trigonometry, calculus, writing, statistics
Related Millbourne, PA Tutors
Millbourne, PA Accounting Tutors
Millbourne, PA ACT Tutors
Millbourne, PA Algebra Tutors
Millbourne, PA Algebra 2 Tutors
Millbourne, PA Calculus Tutors
Millbourne, PA Geometry Tutors
Millbourne, PA Math Tutors
Millbourne, PA Prealgebra Tutors
Millbourne, PA Precalculus Tutors
Millbourne, PA SAT Tutors
Millbourne, PA SAT Math Tutors
Millbourne, PA Science Tutors
Millbourne, PA Statistics Tutors
Millbourne, PA Trigonometry Tutors | {"url":"http://www.purplemath.com/Millbourne_PA_trigonometry_tutors.php","timestamp":"2014-04-20T02:20:11Z","content_type":null,"content_length":"24541","record_id":"<urn:uuid:73d4286f-a552-4a2d-ac9c-ff2c922f17c1>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00387-ip-10-147-4-33.ec2.internal.warc.gz"} |
The 3D Coordinate System
Two-dimensional shapes have an x-y plane to go home to at the end of the day, and what do the solids have? Nothing. Three-dimensional shapes have been homeless for such a long time, they've begun to
sell their surface areas for shelter.
Well, it's high time for solids to have a place of their own. We've set up a home so that 3D shapes don't have to sleep on park benches with newspapers for blankets anymore. Here, solids can find
their place and finally feel welcome in the mathematical world. It's called the 3D coordinate system.
An x-y coordinate system won't be enough to contain three-dimensional figures. If we try to squish a 3D shape into a 2D coordinate plane, it won't be comfortable for the shape and we might rip the
plane (and we can't afford a new one). Instead, we can set up an x-y-z coordinate system to accommodate any and all 3D shapes.
How can we envision the 3D coordinate system? Easy. First, we draw an x-y plane down on a sheet of paper and look down at it.
That's where all the 2D shapes like triangles and circles and quadrilaterals live. If we look up, we can imagine another axis coming up and out of the page through the origin and perpendicular to the
other axes.
That's the z-axis. That's 3D space. That's what solids live in. And that's what the real world is: a 3D coordinate system.
Doesn't it look pretty? It's newly renovated with hardwood floors and everything.
Just like in a 2D graph, we mark the points of shapes with coordinates. This time, since there are three axes, we need three (preferably real) numbers to identify points in space. These numbers form a coordinate called an ordered triple, written in the order (x, y, z).
Point P, for instance, has the ordered triple (3, 1, 3). We'll need to calculate distances and stuff in this coordinate system too, so a formula would be useful. It's just like the 2D distance formula, but with a z coordinate added to it like an extra limb:
d = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²)
The Malcolm in the Midpoint formula can also be extended to the third dimension, so that a point equidistant between two points in 3D space has the ordered triple
((x₁ + x₂)/2, (y₁ + y₂)/2, (z₁ + z₂)/2)
Sample Problem
What's the distance and midpoint between points T (6, 2, 3) and U (1, 7, –4)?
d = √((1 − 6)² + (7 − 2)² + (−4 − 3)²) = √(25 + 25 + 49) = √99, so d ≈ 9.95
We've found our distance. Now for the midpoint: ((6 + 1)/2, (2 + 7)/2, (3 + (−4))/2) = (3.5, 4.5, −0.5).
See? Piece of cake.
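Arithmetic like this is easy to double-check with a few lines of code; the sketch below (the helper names are ours, not Shmoop's) recomputes the distance and midpoint for T and U:

```python
import math

def distance_3d(p, q):
    # square root of the sum of squared coordinate differences
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(p, q)))

def midpoint_3d(p, q):
    # average each coordinate
    return tuple((a + b) / 2 for a, b in zip(p, q))

T, U = (6, 2, 3), (1, 7, -4)
print(round(distance_3d(T, U), 2))  # → 9.95
print(midpoint_3d(T, U))            # → (3.5, 4.5, -0.5)
```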
We can do more with these coordinates than just calculate this, that, and the other thing (all of which you'll need to know). We can draw stuff, too.
For example, let's say we want to draw a triangular prism with a base that has vertices of (0, 0, 0), (1, 2, 0), and (4, 0, 0) and a height of 5 units.
We can start off drawing the base of the prism and then decide where to go from there (Hawaii, anyone?).
That's the 2D shape. To make it 3D, we have to add the 5 units of height in. Since it's not specified, we can choose where we want to take the height (Hawaii, anyone?).
Nice. That's our triangular prism in 3D coordinate space. It's found a home, so Hawaii is probably out of the question…or is it?
If we want to move a shape in 3D space, all we have to do is change every point of that shape by the same amount. This is called translation (no, not into Latin). For instance, to move a rectangular
prism up 13 units in the y-axis, we just have to add 13 to every y-coordinate in every ordered triple.
The same goes with increasing or decreasing a shape's size. To find the coordinates of a solid that's similar to a given one, all we need to know are the coordinates and the scale factor. Multiply
each value of each point by the scale factor, and we're set.
The box above has the following coordinates: A (0, 0, 0), B (2, 0, 0), C (2, 1, 0), D (0, 1, 0), E (0, 1, 3), F (0, 0, 3), G (2, 1, 3), and H (2, 0, 3).
If we wanted to triple the size of the solid and move it over from the x-axis by 5 points, all we'd have to do is multiply each number in every point by 3 (to triple it) and add 5 to all the x-coordinates:
A' (5, 0, 0), B' (11, 0, 0), C' (11, 3, 0), D' (5, 3, 0), E' (5, 3, 9), F' (5, 0, 9), G' (11, 3, 9), and H' (11, 0, 9).
The figure would look like this.
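The scale-then-shift recipe above is easy to mechanize; here's a minimal sketch (the function and variable names are our own):

```python
def transform(point, scale=3, dx=5):
    # multiply every coordinate by the scale factor, then shift x by dx
    x, y, z = (c * scale for c in point)
    return (x + dx, y, z)

box = {"A": (0, 0, 0), "B": (2, 0, 0), "C": (2, 1, 0), "D": (0, 1, 0),
       "E": (0, 1, 3), "F": (0, 0, 3), "G": (2, 1, 3), "H": (2, 0, 3)}
image = {name: transform(p) for name, p in box.items()}
print(image["A"], image["H"])  # → (5, 0, 0) (11, 0, 9)
```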
Three times as big, and moved over five units to the right. Mission accomplished. Our dear 3D solids finally have a home where they can move and grow in peace rather than in pieces. | {"url":"http://www.shmoop.com/surface-area-volume/3d-coordinate-system.html","timestamp":"2014-04-16T13:24:21Z","content_type":null,"content_length":"40212","record_id":"<urn:uuid:a9390db2-4015-413d-879a-24d218ed5748>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pocopson Algebra 1 Tutor
Find a Pocopson Algebra 1 Tutor
...During my time in college, I took one 3-credit course in Differential Equations. While I was studying, I worked in the Math Center at my college. This gave me the opportunity to tutor students
in a variety of math subjects, including Differential Equations.
11 Subjects: including algebra 1, calculus, algebra 2, geometry
...Currently my 4th grader is doing 5th grade math in school and is completely comfortable with it. My approach is one of measured urgency, which allows me to be a very patient, yet result
oriented and encouraging tutor. This approach allows me to adapt my tutoring to suit the students' needs and level of understanding.
14 Subjects: including algebra 1, calculus, algebra 2, precalculus
...As an executive assistant for over 20 years, I am at the advanced level of MS Word. I can do resumes, letters, mail merges, tables/charts, brochures, flyers, invites, multipage reports and a
lot more. Anita G. is a dynamic, natural-born teacher, performer and educator.
51 Subjects: including algebra 1, English, reading, chemistry
...Additionally, I was two classes away from getting a B.S. in general science. Standardized test prep is a huge industry. Many of my students have tried several centers and packaged systems
before beginning with me, and I often hear them say that they've never learned like this before and that my advice is perfect for their unique needs.
47 Subjects: including algebra 1, chemistry, reading, English
...I was a chemical engineer with DuPont for 33 years. My focus is always to try to make sure my students understand the basic fundamentals and do sufficient practice problems with the
fundamentals to instill confidence and train their brains to think differently about the new subject matter they are trying to learn. Math skills are cumulative and must be practiced to develop
7 Subjects: including algebra 1, chemistry, geometry, algebra 2 | {"url":"http://www.purplemath.com/Pocopson_algebra_1_tutors.php","timestamp":"2014-04-19T15:10:08Z","content_type":null,"content_length":"24073","record_id":"<urn:uuid:85b08af1-310e-4d18-bad8-3159644d3709>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00550-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equations used in the calculation of
where N is the stand density (ha^-1); hdom is the dominant height (m); t is the stand age (years); dg is the quadratic mean dbh (cm); cop is a dummy variable for coppice, assuming the value 0 for
planted stands and 1 for coppice stands; h is the total tree height of the average tree (m); h[st] is the stump height, taken to be 0.15 m; h[c] is the tree height up to the base of the crown (m); h[i]
is the top height (m); d[i] is a top diameter (cm); w[i di] is the biomass of the component below diameter d[i] (Mg ha^-1) and where i represents: w for wood, b for bark, br for branches and l for
leaves; wa[di] is the aboveground biomass below a top diameter d[i]. | {"url":"http://www.ecologyandsociety.org/vol17/iss2/art14/table1.html","timestamp":"2014-04-21T02:19:55Z","content_type":null,"content_length":"3316","record_id":"<urn:uuid:39cb4836-f84e-4fa9-a11a-7bd5845e186b>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00023-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Brainteaser n
Replies: 15 Last Post: Aug 6, 2003 7:10 AM
Messages: [ Previous | Next ]
Re: Brainteaser n
Posted: Jan 13, 2003 5:27 PM
On Mon 13 Jan 2003 05:43:16 GMT Virgil <vmhjr2@attbi.com> wrote
>(Bourbaki) wrote:
>> 3^54
snipping low powers
>> 9091^11
>> 9901^9
>> 909091^7
>> 99990001^4
>> 999999000001^3
>> 9999999900000001^2
>> 909090909090909091^2 * 1111111111111111111^5
>> 11111111111111111111111^4
>> 900900900900990990990991^2
>> 909090909090909090909090909091
>> 9090909090909090909090909090909090909090909090909091
>> 900900900900900900900900900900990990990990990990990990990991
>> the only operators being add and multiply; no factorials
>> The expression is elegant in its economy.
>I reckon some kind of factorial, say of the largest prime
>factor of the given number, with all the 2's and maybe some
>other stuff divided out might do it.
No, no, no!
Given what is above (and I have snipped away all the subtler clues, leaving
only the blatant ones), your answer is, well, quite ludicrous. :-)
Correct is:
PI J(n):J(1)=1,J(k+1)=10*J(k)+1
or, if you prefer
PI REP(n)
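The recurrence is easy to play with; a small sketch of the repunit sequence and its running product (the helper names are ours):

```python
from math import prod

def repunits(n):
    # J(1) = 1, J(k+1) = 10*J(k) + 1 -- the repunits 1, 11, 111, ...
    j, out = 1, []
    for _ in range(n):
        out.append(j)
        j = 10 * j + 1
    return out

def repunit_product(n):
    # PI REP(n): the product of the first n repunits
    return prod(repunits(n))

print(repunits(4))         # → [1, 11, 111, 1111]
print(repunit_product(3))  # → 1221
```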
Please see my reply to Bill Hart, who got half way there.
Intuition and induction are important in mathematical creativity. Virgil, I
suggest you spend time pondering the (base-independent) shapes of naturals,
which will enhance your enjoyment and comfort in your search for beauty and | {"url":"http://mathforum.org/kb/message.jspa?messageID=449939","timestamp":"2014-04-23T23:31:43Z","content_type":null,"content_length":"35632","record_id":"<urn:uuid:d88a6b5a-836d-420f-98b8-6f33f3ce1ddb>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00264-ip-10-147-4-33.ec2.internal.warc.gz"} |
zero slope
5 parallel horizontal lines with arrows on both ends to show that they extend indefinitely.
6 parallel horizontal lines with arrows on both ends to show that they extend indefinitely.
A horizontal line with arrows at both ends to show that it extends indefinitely.
A horizontal line with arrows at both ends to show that it extends indefinitely. | {"url":"http://etc.usf.edu/clipart/keyword/zero-slope","timestamp":"2014-04-20T13:51:20Z","content_type":null,"content_length":"13378","record_id":"<urn:uuid:37e828d0-d471-432d-977d-822b6a341c5d>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Graph Linear Equations
Edited by Ijustwannaknow!!, Caidoz, Katy Linsao, Harri and 14 others
Are you stuck, not knowing how to graph a linear equation without using a calculator? Luckily, drawing the graph of a linear equation is pretty simple once you know how. All you need to know is a couple of things about your equation, and you're good to go. Let's get started.
1. 1
Make sure the linear equation is in the form y = mx + b. This is called slope-intercept form, and it's probably the easiest form to use to graph linear equations. The values in the equation do not need to be whole numbers. Often you'll see an equation that looks like this: y = 1/4x + 5, where 1/4 is m and 5 is b.
□ m is called the "slope," or sometimes "gradient." Slope is defined as rise over run, or the change in y over the change in x.
□ b is defined as the "y-intercept." The y-intercept is the point at which the line crosses the Y-axis.
□ x and y are both variables. You can solve for a specific value of x, for example, if you have a y point and know the m and b values. x, however, is never merely one value: its value changes
as you go up or down the line.
2. 2
Plot the b number on the Y-axis. Your b is always going to be a constant number. Whatever number b is, find its equivalent on the Y-axis, and put a point on that spot on the vertical axis.
□ For example, let's take the equation y = 1/4x + 5. Since the last number is b, we know that b equals 5. Go 5 points up on the Y-axis and mark the point. This is where your straight line will
pass through the Y-axis.
3. 3
Convert m into a fraction. Often, the number in front of x is already a fraction, so you won't have to convert it. But if it isn't, convert it by simply placing the value of m over 1.
□ The first number (numerator) is the rise in rise over run. It's how far the line travels up, or vertically.
□ The second number (denominator) is the run in rise over run. It's how far the line travels to the side, or horizontally.
□ For example:
☆ A 4/1 slope travels 4 points up for every 1 point over.
☆ A -2/1 slope travels 2 points down for every 1 point over.
☆ A 1/5 slope travels 1 point up for every 5 points over.
4. 4
Start extending the line from b using the slope, or rise over run. Start at your b value: we know that the line passes through this point. Extend the line by taking your slope and using its values to get more points on the line.
□ For example, using the illustration above, you can see that for every 1 point the line rises up, it travels 4 to the right. That's because the slope of the line is 1/4. You extend the line
indefinitely along both sides, continuing to use rise over run to graph the line.
□ Whereas positive-value slopes travel upward, negative-value slopes travel downward. A slope of -1/4, for example, would travel down 1 point for every 4 points it travels side to side.
5. 5
Continue extending the line, using a ruler and being sure to use the slope, m, as a guide. Extend the line indefinitely and you're done graphing your linear equation. Pretty easy, isn't it?
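The steps above can be sanity-checked in a few lines of code; this sketch (the function name is ours) evaluates y = (1/4)x + 5 at the y-intercept and at one "run" of 4 to confirm the rise of 1:

```python
def line_points(m, b, xs):
    # evaluate y = m*x + b at each x value
    return [(x, m * x + b) for x in xs]

# y = (1/4)x + 5: start at the y-intercept, then step by the run (4)
pts = line_points(0.25, 5, [0, 4, 8])
print(pts)  # → [(0, 5.0), (4, 6.0), (8, 7.0)]
```

Each step of 4 in x raises y by exactly 1, which is the rise-over-run slope of 1/4.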
Was this article accurate? | {"url":"http://www.wikihow.com/Graph-Linear-Equations","timestamp":"2014-04-18T05:40:26Z","content_type":null,"content_length":"68096","record_id":"<urn:uuid:6009300c-5819-4994-a09a-5bb855fcbef9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00322-ip-10-147-4-33.ec2.internal.warc.gz"} |
If P is true, and Q is false, the truth-value of "P v Q" is (Points : 1)
false.
true.
Cannot be determined
All of the above
If P is false, and Q is false, the truth-value of "P ? Q" is False.
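Reading "v" as inclusive disjunction (logical "or"), the check is mechanical; a quick sketch in Python:

```python
def disjunction(p, q):
    # "P v Q" is true whenever at least one disjunct is true
    return p or q

print(disjunction(True, False))   # → True
print(disjunction(False, False))  # → False
```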
There are no new answers. | {"url":"http://www.weegy.com/?ConversationId=73912C44","timestamp":"2014-04-18T05:38:30Z","content_type":null,"content_length":"38125","record_id":"<urn:uuid:55d5670b-812f-4dc1-837f-7dae064375f3>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
The FPS Camera problem [Archive] - OpenGL Discussion and Help Forums
05-10-2010, 06:12 AM
Hi, I am a novice in 3D math. I have a problem with an FPS camera when I try to implement it using a quaternion. I hope someone can help me.
(I use a quaternion to record the orientation of the camera in world space.
When the application needs to update the camera, I actually orient the camera in world space.
After the update, I use the orientation to build the matrix that transforms the camera to the desired orientation and position in world space; then I invert this matrix to get the view matrix.)
// pitch axis(1.0, 0.0, 0.0) yaw axis(0.0, 1.0, 0.0);
There are two rotation orders for orienting the FPS camera:
(1) yaw first, pitch second;
(2) pitch first, then yaw.
According to my understanding of the math, if I orient the camera in world space using order (1), there is no roll in the scene, because after the first rotation around the unit y axis, I rotate around the new local x axis deriving from the previous yaw.
If I use order (2) to transform the camera, it leads to roll.
Unfortunately, the actual effect is not what I expect when I use order (1); if I use order (2), it works well. I don't know the reason, and I hope someone can give me an explanation about it.
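One way to see the difference is to compose the two quaternions both ways and test the camera's right vector. The sketch below uses our own minimal helpers and assumes the Hamilton convention v' = q v q*, in which the right-hand factor of a product is applied first; under that convention, yaw * pitch is the roll-free FPS composition (the right vector stays level), while pitch * yaw introduces roll:

```python
import math

def quat(axis, angle):
    # unit quaternion for a rotation of `angle` radians about a unit `axis`
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0] * s, axis[1] * s, axis[2] * s)

def qmul(a, b):
    # Hamilton product; with v' = q v q*, the right factor acts first
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    # rotate vector v by quaternion q: q * (0, v) * conjugate(q)
    qc = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = qmul(qmul(q, (0.0, *v)), qc)
    return (x, y, z)

yaw   = quat((0, 1, 0), math.radians(30))
pitch = quat((1, 0, 0), math.radians(20))

# Roll-free composition: pitch in the un-yawed frame, then yaw about
# world Y -- equivalently, yaw first and pitch about the new local X.
right_ok   = rotate(qmul(yaw, pitch), (1, 0, 0))
right_roll = rotate(qmul(pitch, yaw), (1, 0, 0))
print(abs(right_ok[1]) < 1e-9)   # → True (right vector stays level)
print(abs(right_roll[1]) > 0.1)  # → True (camera has rolled)
```

If your results are reversed, your math library most likely multiplies quaternions in the opposite convention, which would explain why order (2) appears to work.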
Thanks in advance. | {"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-170946.html","timestamp":"2014-04-19T10:10:42Z","content_type":null,"content_length":"4647","record_id":"<urn:uuid:c26377ae-2415-4c08-b7ea-2a1b9c4393a9>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
Concerning Asymptotic Behavior for Extremal Polynomials Associated to Nondiagonal Sobolev Norms
Journal of Function Spaces and Applications
Volume 2013 (2013), Article ID 628031, 11 pages
Research Article
^1Faculty Mathematics and Computer Science, St. Louis University (Madrid Campus), Avenida del Valle 34, 28003 Madrid, Spain
^2Departamento de Matemáticas Puras y Aplicadas, Edificio Matemáticas y Sistemas (MYS), Universidad Simón Bolívar, Apartado Postal 89000, Caracas 1080 A, Venezuela
^3Departamento de Matemáticas, Universidad Carlos III de Madrid, Avenida de la Universidad 30, Leganés, 28911 Madrid, Spain
^4Departamento de Matemáticas, Facultad de Ciencias, Universidad Autónoma de Madrid, Campus de Cantoblanco, 28049 Madrid, Spain
Received 28 January 2013; Accepted 8 March 2013
Academic Editor: Józef Banaś
Copyright © 2013 Ana Portilla et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Let ℙ be the space of polynomials with complex coefficients endowed with a nondiagonal Sobolev norm , where the matrix and the measure constitute a -admissible pair for . In this paper we establish the zero location and asymptotic behavior of extremal polynomials associated to , stating hypotheses on the matrix rather than on the diagonal matrix appearing in its unitary factorization.
1. Introduction
In the last decades the asymptotic behavior of Sobolev orthogonal polynomials has been one of the main topics of interest to investigators in the field. In [1] the authors obtain the th root
asymptotic of Sobolev orthogonal polynomials when the zeros of these polynomials are contained in a compact set of the complex plane; however, the boundedness of the zeros of Sobolev orthogonal
polynomials is an open problem, but as was stated in [2], it could be obtained as a consequence of the boundedness of the multiplication operator . Thus, finding conditions to ensure the boundedness
of would provide important information about the crucial issue of determining the asymptotic behavior of Sobolev orthogonal polynomials (see, e.g., [3–13]). The more general result on this topic is [
3, Theorem 8.1] which characterizes in terms of equivalent norms in Sobolev spaces the boundedness of for the classical diagonal norm (see Theorem 3 below, which is [3, Theorem 8.1] in the case ).
The rest of the above mentioned papers provides conditions that ensure the equivalence of norms in Sobolev spaces, and consequently, the boundedness of .
Results related to nondiagonal Sobolev norms may be found in [5, 6, 14–19]. Particularly, in [5, 6, 15, 18, 19] the authors establish the asymptotic behavior of orthogonal polynomials with respect to
nondiagonal Sobolev inner products and the authors in [5] deal with the asymptotic behavior of extremal polynomials with respect to the following nondiagonal Sobolev norms.
Let be the space of polynomials with complex coefficients and let be a finite Borel positive measure with compact support consisting of infinitely many points in the complex plane; let us consider
the diagonal matrix , with being positive -almost everywhere measurable functions, and , a matrix of measurable functions such that the matrix, is unitary -almost everywhere. If, where denotes
the transpose conjugate of (note that then is a positive definite matrix -almost everywhere), and we define the Sobolev norm on the space of polynomials
In [20, Chapter XIII] certain general conditions imposed on the matrix are requested in order to guarantee the existence of an unitary representation with measurable entries.
If is not the identity matrix -almost everywhere, then (2) defines a nondiagonal Sobolev norm in which the product of derivatives of different order appears. We say that is an th monic extremal
polynomial with respect to the norm (2) if
It is clear that there exists at least an th monic extremal polynomial. Furthermore, it is unique if . If , then the th monic extremal polynomial is precisely the th monic Sobolev orthogonal
polynomial with respect to the inner product corresponding to (2).
In [5, Theorem 1] the authors showed that the zeros of the polynomials in are uniformly bounded in the complex plane, whenever there exists a constant such that , -almost everywhere for . This
property made possible to obtain the th root asymptotic behavior of extremal polynomials (see [5, Theorems 2 and 6]). Although it is required compact support for , this is, certainly, a natural
hypothesis: if is not bounded, then we cannot expect to have zeros uniformly bounded, not even in the classical case (orthogonal polynomials in ); see [21].
Taking , and setting up hypotheses on the matrix (see (4)) rather than on the diagonal matrix , the authors of [22] obtained the following result, equivalent to [5, Theorem 1].
Theorem 1 (see [22, Theorem 4.3]). Let be a finite union of rectifiable compact curves in the complex plane, a finite Borel measure with compact support , a positive definite matrix -almost
everywhere and Assume that , , and the norms in and are equivalent on . Let be a sequence of extremal polynomials with respect to (2). Then the multiplication operator is bounded with the norm and
the zeros of lie in the bounded disk .
In this paper we improve Theorem 1 in two directions: on the one hand, we enlarge the class of measures considered and, on the other hand, we prove our result for (see Theorem 19). In order to
describe the measures we will deal with, we introduce the definition of -admissible pairs as follows: given , we say that the pair is -admissible if is a finite Borel measure which can be written as
, its support is a compact subset of the complex plane which contains infinitely many points, and is a positive definite matrix-almost everywhere with ,-almost everywhere for some fixed ; the
support is contained in a finite union of rectifiable compact curves with if , and is the Radon-Nykodim derivative of with respect to the Euclidean length in .
We want to make three remarks about this definition. First of all, since is a positive definite matrix -almost everywhere, also has this property and hence , -almost everywhere.
In order to obtain the best choice for is the restriction of to .
Note that the support of is an arbitrary compact set: we just require that (the part of in which is about to be a degenerated quadratic form, when is very close to ) is a union of curves.
Therefore, with the results on -admissible pairs we complement and improve the study started in [22], where the case with was considered.
Another interesting property which could be studied is the asymptotic estimate for the behavior of extremal polynomials because, in this setting, there does not exist the usual three-term recurrence
relation for orthogonal polynomials in and this makes it really difficult to find an explicit expression for the extremal polynomial of degree . In this regard, Theorems 22 and 23 deduce the
asymptotic behavior of extremal polynomials as an application of Theorems 18 and 19. More precisely, we obtain the th root and the zero counting measure asymptotic both of those polynomials and their
derivatives to any order. The study of the th root asymptotic is a classical problem in the theory of orthogonal polynomials; see for instance, [1, 2, 5, 23, 24].
Furthermore, in Theorem 23 we find the following asymptotic relation: for any .
The main idea of [5, 6, 22] and this paper is to compare nondiagonal and diagonal norms.
When it comes to comparing nondiagonal and diagonal norms, [25] is remarkable, since the authors show that symmetric Sobolev bilinear forms, like symmetric matrices, can be rewritten with a diagonal representation; unfortunately, the entries of these diagonal matrices are real measures, and we cannot use this representation, since the Sobolev norms require positive measures.
Finally, we would like to note that the central obstacle in order to generalize the results given in this paper and [22] to the case of more derivatives is that there are too many entries in the
matrix and just a few relations to control them (see Lemma 8 and notice that some limits appearing in that Lemma do not provide any new information). In that case we have just three entries , but in
the simple case of two derivatives we have and we would need to control six functions ; in the general case with derivatives, we would need to control functions.
The outline of the paper is as follows. In Section 2 we provide some background and previous results on the multiplication operator and the location of zeros of extremal polynomials. We have devoted
Section 3 to some technical lemmas in order to simplify the proof of Theorem 17 about the equivalence of norms; in fact, in these lemmas the hardest part of this proof is collected. In Section 4 we
give the proof of that Theorem and in Section 5 we deduce some results on asymptotic of extremal polynomials.
2. Background and Previous Results
In what follows, given we define for every polynomial .
It is obviously much easier to deal with the norms and than with the one . Therefore, one of our main goals is to provide weak hypotheses to guarantee the equivalence of these norms on the linear
space of polynomials (see Section 4).
In order to bound the zeros of polynomials, one of the most successful strategies has certainly been to bound the multiplication operator by the independent variable , where Regarding this issue, the
following result is known.
Theorem 2 (see [5, Theorem 3]). Let be a finite Borel measure in with compact support let and . Let be a sequence of extremal polynomials with respect to (2). Then the zeros of lie in the disk .
It is also known the following simple characterization of the boundedness of .
Theorem 3 (see [3, Theorem 8.1]). Let be a finite Borel measure in with compact support; nonnegative measurable functions; and . Then the multiplication operator is bounded in if and only if the
following condition holds:
It is clear that if there exists a constant such that -almost everywhere, then (9) holds. In [8, 13] some other very simple conditions implying (9) are shown.
In what follows, we will fix a -admissible pair with ; then is contained in a finite union of rectifiable compact curves in the complex plane; each of these connected components of is not required to
be either simple or closed.
3. Technical Lemmas
For the sake of clarity and readability, we have opted for proving all the technical lemmas in this section. This makes the proof of Theorem 17 much more understandable.
The following result is well known.
Lemma 4. Let us consider . Then
Lemma 5 (see [22, Lemma 3.1]). Let us consider . Then (1) for every ; (2) for every .
Lemma 6 (see [22, Lemma 3.2]). Let and be two sequences of positive numbers. Then
In what follows , and refer to the coefficients of the fixed matrix .
Definition 7. We say that is an extremal sequence for if, for every , and
Lemma 8. If and is an extremal sequence for , then
Proof. The case is a consequence of [22, Lemmas 3.5 and 3.6]. We deal now with the case . First note that we can rewrite limit (12) in Definition 7 as the limit of the following product:
Since the limit of the product is 1, if we prove that the first, third, and fourth factors tend to 1 as tends to infinity, then the limit of the second factor must also be 1.
So, our problem is reduced to show
Again, we can rewrite the limit in the definition of extremal sequence as the limit of the following product:
The two factors above are nonnegative and less than or equal to 1 using, respectively, that -almost everywhere and . Thus, and (15) holds.
Given , for each let us define the following sets:
Let us consider the strictly decreasing function on . If , then . Consequently, if , then , and if , then and . Therefore,
Using this fact and (20), we have
If we assume that , then from the previous inequality we have , and this is a contradiction. Hence, and consequently,
Since for each , we have ; then (24) implies that
On the other hand, using (20) it is easy to deduce that
Consequently, (24), (26), and (27) give
Furthermore, since we obtain
Therefore, (24), (28), and (30) give
Similar arguments allow us to show
From (31) and (32) we obtain
As a consequence of (33) we have
In a similar way we obtain . Since these inequalities hold for every , we conclude that (18) holds. Applying now Lemma 6 we obtain (16).
Using Lemma 4, (18), and (34) we obtain that for every there exists such that for every the following holds:
Then (17) follows from the previous inequalities, since are arbitrary.
This completes the proof.
Definition 9. For each , we define the sets and as
Lemma 10. If and is an extremal sequence for and is small enough, then
Remark 11. The statement of the lemma might seem strange, because we could have a priori ; however, the existence of the fundamental sequence implies .
Proof. If , then the result follows from [22, Lemma 3.8]. For the case it suffices to follow the proof of [22, Lemma 3.8] applying Lemma 8 to conclude the result.
Lemma 12. If , is an extremal sequence for and is small enough, then
Proof. If , then the result follows from [22, Lemma 3.10]. For the case it suffices to follow the proof of [22, Lemma 3.10] applying Lemmas 8 and 10 to conclude the result.
Lemma 13. If , is an extremal sequence for and is small enough, then
Proof. If , then the result follows from [22, Lemma 3.11]. For the case it suffices to follow the proof of [22, Lemma 3.11] applying Lemmas 8, 10, and 12 to conclude the result.
Lemma 14. If and is an extremal sequence for , then for every small enough with and for every there exists such that for every .
Proof. If , then the result follows from [22, Lemma 3.12]. For the case it is sufficient to follow the proof of [22, Lemma 3.12] applying Lemma 13 to conclude the result.
Definition 15. If is a continuous function on , we define the oscillation of on , and we denote it by , as
Lemma 16 (see [22, Lemma 3.14]). For , let us assume that is connected and , where is the Radon–Nikodym derivative of with respect to the Euclidean length in . (According to one’s notation, if then
.) Then for every polynomial .
4. Equivalent Norms
Now we prove the announced result about the equivalence of norms for .
Theorem 17. Let one consider and a -admissible pair. Then the norms , , and defined as in (3) are equivalent on the space of polynomials .
Proof. The equivalence of the two first norms is straightforward, by Lemmas 4 and 5. We prove now the equivalence of the two last norms.
Let us prove that there exists a positive constant such that
Let us prove first the second inequality .
Note that ; therefore, for every polynomial it holds that
In order to prove the first inequality, , note that
If (i.e., ), then we have finished the proof. Assume that ; then we prove , seeking a contradiction. It is clear that it suffices to prove it when is connected, that is, when is a rectifiable compact curve. Let us assume that there exists a sequence such that
If , then [22, Lemma 3.1] (with ) gives
This right-hand side of the inequality is positive, because -almost everywhere. This implies and hence
If , then since -almost everywhere. Therefore, or, equivalently, and (50) also holds for .
If is constant for some , then ; therefore, taking a subsequence if necessary, we can assume without loss of generality that is nonconstant and for every . Then is an extremal sequence for .
Applying Lemma 8,
By Lemma 14, there exists such that for every . Now, taking into account that and that is connected, we can apply Lemma 16, and then for every , with .
Let us fix small enough. On the one hand, by Lemma 13 it holds that for every .
On the other hand, we have . This implies . Given any , there exists with . Hence, for every . Therefore, , which contradicts (54) and (55).
The following result is a direct consequence of Theorems 3 and 17.
Theorem 18. Let one consider and a -admissible pair. Then the multiplication operator is bounded in if and only if the following condition holds:
This latter theorem and Theorem 2 give the following result.
Theorem 19. Let one consider and a -admissible pair such that (59) takes place. Let be a sequence of extremal polynomials with respect to (2). Then the multiplication operator is bounded and the
zeros of lie in the bounded disk .
In general, it is not difficult to check whether or not (59) holds. It is clear that if there exists a constant such that -almost everywhere, then (59) holds. In [8, 13] some other very simple
conditions implying (59) are shown.
The following is a direct consequence of Theorem 19.
Corollary 20. Let one consider and a -admissible pair. Assume that ,-almost everywhere for some constant . Let be a sequence of extremal polynomials with respect to (2). Then the zeros of are
uniformly bounded in the complex plane.
Finally, we have the following particular consequence for Sobolev orthogonal polynomials.
Corollary 21. Let be a -admissible pair. Assume that there exists a constant such that ,-almost everywhere. Let be the sequence of Sobolev orthogonal polynomials with respect to . Then the zeros of
the polynomials in are uniformly bounded in the complex plane.
5. Asymptotic of Extremal Polynomials
We start this section by setting some notation. Let , , , and denote, respectively, the th monic orthogonal polynomial with respect to , the usual norm in the space , the logarithmic capacity of ,
and the equilibrium measure of . Furthermore, in order to analyze the asymptotic behavior for extremal polynomials we will use a special class of measures, “regular measures,” denoted by and defined
in [24]. In that work, the authors proved (see Theorem 3.1.1) that, for measures supported on a compact set of the complex plane, if and only if
Finally, if denote the zeros, repeated according to their multiplicity, of a polynomial whose degree is exactly , and is the Dirac measure with mass one at the point , the expression defines the
normalized zero counting measure of .
We can already state the first result in this section.
Theorem 22. Let one consider , a -admissible pair and the sequence of extremal polynomials with respect to . Assume that the following conditions hold: (i);(ii) is regular with respect to the
Dirichlet problem; (iii)condition (59) takes place. Then, Furthermore, if the complement of is connected, then in the weak star topology of measures.
Proof. Note that, in our context, the hypothesis removed with respect to [5, Theorem 2] is equivalent to the following two facts: on the one hand, the multiplication operator is bounded (see Theorem
3), and on the other hand, the norms of and defined as in (3) are equivalent (see Theorem 18). With this in mind, we just need to follow the proof of [5, Theorem 2] to conclude the result.
In the following theorem, we use to denote Green's function for with logarithmic singularity at , where is the unbounded component of the complement of . Notice that, if is regular with respect
to the Dirichlet problem, then is continuous up to the boundary and it can be extended continuously to all , with value zero on .
Theorem 23. Let one consider , a -admissible pair and the sequence of extremal polynomials with respect to . Assume that the following conditions hold: (i);(ii) is regular with respect to the
Dirichlet problem;(iii)condition (59) takes place. Then, for each , uniformly on compact subsets of . Furthermore, for each , uniformly on each compact subset of . Finally, if the complement of is
connected, one has equality in (64) for all , except for a set of capacity zero, and uniformly on each compact subset of .
Proof. Note that, in our context, the multiplication operator is bounded (see Theorem 3) and the norms of and defined as in (3) are equivalent (see Theorem 18). This is the crucial fact in the proof
of this theorem; once we know this, we just need to follow the proof given in [5, Theorem 6] point by point to conclude the result.
Ana Portilla and Eva Tourís are supported in part by a grant from Ministerio de Ciencia e Innovación (MTM 2009-12740-C03-01), Spain. Yamilet Quintana is supported in part by the Research Sabbatical Fellowship Program (2011-2012) from Universidad Simón Bolívar, Venezuela. Ana Portilla, José M. Rodríguez, and Eva Tourís are supported in part by two grants from Ministerio de Ciencia e Innovación (MTM 2009-07800 and MTM 2008-02829-E), Spain. José M. Rodríguez is supported in part by a grant from CONACYT (CONACYT-UAG I0110/62/10 FON.INST.8/10), México. This work is dedicated to Francisco Marcellán Español on his 60th birthday.
1. G. López Lagomasino and H. Pijeira Cabrera, “Zero location and nth root asymptotics of Sobolev orthogonal polynomials,” Journal of Approximation Theory, vol. 99, no. 1, pp. 30–43, 1999.
2. G. López Lagomasino, H. Pijeira Cabrera, and I. Pérez Izquierdo, “Sobolev orthogonal polynomials in the complex plane,” Journal of Computational and Applied Mathematics, vol. 127, no. 1-2, pp. 219–230, 2001.
3. V. Alvarez, D. Pestana, J. M. Rodríguez, and E. Romera, “Weighted Sobolev spaces on curves,” Journal of Approximation Theory, vol. 119, no. 1, pp. 41–85, 2002.
4. E. Colorado, D. Pestana, J. M. Rodríguez, and E. Romera, “Muckenhoupt inequality with three measures and Sobolev orthogonal polynomials”.
5. G. López Lagomasino, I. Pérez Izquierdo, and H. Pijeira Cabrera, “Asymptotic of extremal polynomials in the complex plane,” Journal of Approximation Theory, vol. 137, no. 2, pp. 226–237, 2005.
6. A. Portilla, J. M. Rodríguez, and E. Tourís, “The multiplication operator, zero location and asymptotic for non-diagonal Sobolev norms,” Acta Applicandae Mathematicae, vol. 111, no. 2, pp. 205–218, 2010.
7. J. M. Rodríguez, “The multiplication operator in Sobolev spaces with respect to measures,” Journal of Approximation Theory, vol. 109, no. 2, pp. 157–197, 2001.
8. J. M. Rodríguez, “A simple characterization of weighted Sobolev spaces with bounded multiplication operator,” Journal of Approximation Theory, vol. 153, no. 1, pp. 53–72, 2008.
9. J. M. Rodríguez, “Zeros of Sobolev orthogonal polynomials via Muckenhoupt inequality with three measures”.
10. J. M. Rodríguez, E. Romera, D. Pestana, and V. Alvarez, “Generalized weighted Sobolev spaces and applications to Sobolev orthogonal polynomials. II,” Approximation Theory and its Applications, vol. 18, no. 2, pp. 1–32, 2002.
11. J. M. Rodríguez, V. Álvarez, E. Romera, and D. Pestana, “Generalized weighted Sobolev spaces and applications to Sobolev orthogonal polynomials. I,” Acta Applicandae Mathematicae, vol. 80, no. 3, pp. 273–308, 2004.
12. J. M. Rodríguez, V. Álvarez, E. Romera, and D. Pestana, “Generalized weighted Sobolev spaces and applications to Sobolev orthogonal polynomials: a survey,” Electronic Transactions on Numerical Analysis, vol. 24, pp. 88–93, 2006.
13. J. M. Rodríguez and J. M. Sigarreta, “Sobolev spaces with respect to measures in curves and zeros of Sobolev orthogonal polynomials,” Acta Applicandae Mathematicae, vol. 104, no. 3, pp. 325–353, 2008.
14. M. Alfaro, F. Marcellán, M. L. Rezola, and A. Ronveaux, “Sobolev-type orthogonal polynomials: the nondiagonal case,” Journal of Approximation Theory, vol. 83, no. 2, pp. 266–287, 1995.
15. A. Branquinho, A. Foulquié Moreno, and F. Marcellán, “Asymptotic behavior of Sobolev-type orthogonal polynomials on a rectifiable Jordan curve or arc,” Constructive Approximation, vol. 18, no. 2, pp. 161–182, 2002.
16. H. Dueñas and F. Marcellán, “Asymptotic behaviour of Laguerre-Sobolev-type orthogonal polynomials. A nondiagonal case,” Journal of Computational and Applied Mathematics, vol. 235, no. 4, pp. 998–1007, 2010.
17. H. Dueñas and F. Marcellán, “The holonomic equation of the Laguerre-Sobolev-type orthogonal polynomials: a non-diagonal case,” Journal of Difference Equations and Applications, vol. 17, no. 6, pp. 877–887, 2011.
18. A. Foulquié, F. Marcellán, and K. Pan, “Asymptotic behavior of Sobolev-type orthogonal polynomials on the unit circle,” Journal of Approximation Theory, vol. 100, no. 2, pp. 345–363, 1999.
19. F. Marcellán and J. J. Moreno-Balcázar, “Strong and Plancherel-Rotach asymptotics of non-diagonal Laguerre-Sobolev orthogonal polynomials,” Journal of Approximation Theory, vol. 110, no. 1, pp. 54–73, 2001.
20. N. Dunford and J. T. Schwartz, Linear Operators. Part II: Spectral Theory. Self Adjoint Operators in Hilbert Space, John Wiley & Sons, New York, NY, USA, 1988.
21. M. Castro and A. J. Durán, “Boundedness properties for Sobolev inner products,” Journal of Approximation Theory, vol. 122, no. 1, pp. 97–111, 2003.
22. A. Portilla, Y. Quintana, J. M. Rodríguez, and E. Tourís, “Zero location and asymptotic behavior for extremal polynomials with non-diagonal Sobolev norms,” Journal of Approximation Theory, vol. 162, no. 12, pp. 2225–2242, 2010.
23. E. B. Saff and V. Totik, Logarithmic Potentials with External Fields, Grundlehren der Mathematischen Wissenschaften, vol. 316, Springer, New York, NY, USA, 1998.
24. H. Stahl and V. Totik, General Orthogonal Polynomials, Cambridge University Press, Cambridge, UK, 1992.
25. K. H. Kwon, L. L. Littlejohn, and G. J. Yoon, “Ghost matrices and a characterization of symmetric Sobolev bilinear forms,” Linear Algebra and its Applications, vol. 431, no. 1-2, pp. 104–119, 2009.
This was first posted in May of 2009. Go to Why Isn't Steve Garvey In The Hall Of Fame?. It has generated comments every few months, so if people are interested I thought I would post it again. What I tried to show was that he seems to be the kind of player the writers like to vote in
and that you could make a good case for him, that is, write an impressive plaque. But he has not made it. I don't think he was good enough, but the puzzle is why he has not made it.
Here is one slightly new tidbit. Last year I mentioned that Garvey had 6 200+ hit seasons. Through 2009, here are all the players who had 4 or more. A lot of them are in the Hall of Fame or will
probably make it, or would have made it without doing something scandalous:
Pete Rose 10
Ty Cobb 9
Ichiro Suzuki 9
Lou Gehrig 8
Willie Keeler 8
Paul Waner 8
Rogers Hornsby 7
Derek Jeter 7
Wade Boggs 7
Charlie Gehringer 7
Steve Garvey 6
Bill Terry 6
Stan Musial 6
Jesse Burkett 6
George Sisler 6
Sam Rice 6
Al Simmons 6
Kirby Puckett 5
Chuck Klein 5
Tony Gwynn 5
Michael Young 5
Harry Heilmann 4
Jack Tobin 4
Roberto Clemente 4
Joe Jackson 4
Tris Speaker 4
Paul Molitor 4
Juan Pierre 4
Jim Rice 4
Joe Medwick 4
Heinie Manush 4
Vada Pinson 4
Lou Brock 4
Vladimir Guerrero 4
Lloyd Waner 4
Nap Lajoie 4
Rod Carew 4
The first link tells you the batting average (AVG) and isolated power (ISO) each year in the AL for both close and late hitting vs. non-close and late hitting since 1950. All data from Retrosheet.
AL Close and late hitting vs. non-close and late hitting
Now the same thing for the NL
NL Close and late hitting vs. non-close and late hitting
This next link simply shows the differences in each stat between the situations in both leagues. Numbers in red are positive or zero. Those pretty much stopped in the 1980s. That is, since 1990, AVG
and ISO have pretty much been lower in the close and late situations than otherwise.
Yearly differences of each league
The graph below shows the annual difference in AVG in the AL.
Now for the annual differences in ISO in the AL
The next two graphs do the same thing for the NL.
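For anyone curious how these splits are computed: ISO (isolated power) is just slugging minus batting average. Here is a minimal sketch of the comparison, with made-up season totals standing in for the actual Retrosheet data:

```python
def avg_iso(ab, h, d, t, hr):
    """Batting average and isolated power (ISO = SLG - AVG) from raw counts."""
    singles = h - d - t - hr
    avg = h / ab
    slg = (singles + 2 * d + 3 * t + 4 * hr) / ab
    return avg, slg - avg

# Hypothetical close-and-late vs. other splits for one league-season
cl_avg, cl_iso = avg_iso(ab=15000, h=3800, d=700, t=80, hr=400)
other_avg, other_iso = avg_iso(ab=70000, h=18500, d=3500, t=400, hr=2100)
print(round(cl_avg - other_avg, 3), round(cl_iso - other_iso, 3))
```

A negative difference means hitters did worse in close and late situations, which is the pattern in the graphs since about 1990.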
I first reported on this issue in a post last year Did The Increased Use Of Relief Pitching Cause A Decline In Clutch Hitting? Back in the 1950s and 1960s, as I show at this earlier post, hitting in
non-close and late situations was not much better than in close and late situations. But, as I also showed, as the use of relief pitching grew, batting averages and isolated power started to decline,
relatively, in close and late situations. So it looks like it might have been easier to hit well in the clutch in the "old" days.
Recently Tom Tango (aka tangotiger) had a post titled Best and Worst Clutch Hitters of the Retrosheet era .
Tom has a clutch stat based on WPA or "win probability added." The idea there is that every plate appearance by a hitter either increases or decreases his team's probability of winning. A HR with the
score tied in the bottom of the 9th has more impact than one in the first inning with the score 10-0.
But Tom adjusts this by how often a hitter gets to hit in "high leverage" situations. Then that is compared to what his WPA would be if he always hit in average leverage situations. I hope I got
that right. But, of course, Tom explains it much better. That stat ends up telling us how many more games a player's team wins (or loses) because he hits better or worse in high leverage situations
than he does overall. It is just called "Clutch."
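As best I understand it, the published formula is Clutch = WPA/pLI − WPA/LI, where pLI is the average leverage index a hitter faced and WPA/LI is his context-neutral wins. This is my sketch of that formula, not Tom's actual code, and the sample plays are invented:

```python
def clutch(plays):
    """plays: list of (wpa, li) pairs, one per plate appearance, where li is
    the leverage index of the situation. Returns wins added by performing
    better (or worse) in high-leverage spots than the hitter does overall."""
    total_wpa = sum(wpa for wpa, li in plays)
    pli = sum(li for _, li in plays) / len(plays)  # average leverage faced
    context_neutral = sum(wpa / li for wpa, li in plays if li > 0)
    return total_wpa / pli - context_neutral

# A hitter who produces only in low-leverage spots grades out negative
print(clutch([(0.05, 0.5), (0.05, 0.5), (-0.03, 2.0)]))
```

The real stat, of course, is computed from play-by-play WPA and leverage data over a full career.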
To see if this stat changed over time, I took all the players with 4000+ PAs from 1950-2009 and found their Clutch/PA (758 players). Then I found the year which was the mid-point of each player's
career (the data all comes from Baseball Reference which showed the first and last year of each guy's career). Of course, that is not a perfect way to do it since that may not be finding the exact
middle of a player's career in terms of PAs. But it is a reasonable approximation. Call this mid-point "Year."
Anyway, the correlation between Year and Clutch/PA is -.12. That is, as time goes on, batters are doing worse in the clutch. That makes sense given the increasing use and specialization of relief
pitching. The -.12 is small, though. But, as time went on, there were more players reaching the 4000 PA minimum because there were more teams and in the early 1960s, the season grew to 162 games. So
any correlation will have a lot more guys from the later years, when everyone was doing worse in the clutch. This waters down any correlation we might find (by the way, if you are interested, Yogi
Berra, famous for being a clutch hitter, ranks 37th out of 758 players).
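The mid-point and correlation calculation is simple. Here is a sketch of the mechanics; the four player rows below are made up, standing in for the Baseball Reference data:

```python
import numpy as np

# (first_year, last_year, career Clutch, career PA) -- illustrative rows only
players = [(1950, 1965, 13.4, 10035),
           (1970, 1985, 2.1, 8000),
           (1989, 2007, -16.8, 9986),
           (1995, 2010, -4.0, 7500)]

mid_year = np.array([(first + last) / 2 for first, last, _, _ in players])
clutch_per_pa = np.array([cl / pa for _, _, cl, pa in players])
r = np.corrcoef(mid_year, clutch_per_pa)[0, 1]
print(round(r, 2))  # negative: later careers, worse clutch
```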
But if we look at the top and bottom 25 in Clutch/PA, we can see some interesting trends. The table below shows the top 25 along with their mid-point year.
If you look carefully at the mid-point years, you can see that there are more players from earlier years. But this will be summarized below. The next table shows the bottom 25.
It actually turns out that the top 25 has a disproportionate number of guys from earlier years and the bottom 25 has a disproportionate number of guys from later years. The next two tables show this. The first table shows what percentage or share of the 758 players comes from each decade.
The next table shows how many players were in the top 25 and the bottom 25 from each decade and the expected number based on the percentage from the table above. 2.18 is about 8.7% of 25, for
example. Notice that the 1950s had 4 guys in the top 25 while its expected number is 2.18. The 1960s and 1970s also had more guys in the top 25 than expected. We can also see that the 2000s had none
in the top 25 even though 4.49 were expected.
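The expected numbers are just each decade's share of the 758 players times the size of the list. For example (the 66 below is my back-out of the 1950s count from the 8.7% share, not a figure from the table):

```python
def expected_in_group(decade_count, total_players=758, group_size=25):
    """Expected players from a decade in a list of group_size, if selection
    were unrelated to decade."""
    return group_size * decade_count / total_players

# If 66 of the 758 players have their career mid-point in the 1950s
# (66/758 is about 8.7%), the expected count in a top-25 list is:
print(round(expected_in_group(66), 2))  # 2.18
```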
The 1950s, 60s and 70s did not have as many guys as expected in the bottom 25 while the 1990s and 2000s had more than expected. So it could be that it is harder to hit well in the clutch now than 4-5
decades ago. My guess is that this is due to relief pitching.
Update July 26:
I divided all the players into 6 groups since there are about 6 decades. And since 758/6 = 126.33, I looked at the top 126 and the bottom 126. The table below summarizes how many guys from each
decade were in each group along with the expected number.
The 1950s don't have as many as expected in the top 126 and more than expected in the bottom. But the 1960s do have more in the top and fewer in the bottom. Same for the 1970s and 1980s. But the last
two decades are very much under-represented in the top and very over-represented in the bottom. So this suggests it is harder to be a good clutch hitter in current times.
This is prompted by a post by Tom Tango (aka tangotiger) titled Best and Worst Clutch Hitters of the Retrosheet era .
Tom has a clutch stat based on WPA or "win probability added." The idea there is that every plate appearance by a hitter either increases or decreases his team's probability of winning. A HR with the
score tied in the bottom of the 9th has more impact than one in the first inning with the score 10-0.
But Tom adjusts this by how often a hitter gets to hit in "high leverage" situations. Then that is compared to what his WPA would be if he always hit in average leverage situations. I hope I got
that right. But, of course, Tom explains it much better. That stat ends up telling us how many more games a player's team wins (or loses) because he hits better or worse in high leverage situations
than he does overall.
Nellie Fox is #1 with +13.4 wins since 1950. That is, by hitting better than he normally did in high leverage situations, he added 13.4 wins to his teams over his whole career. Sammy Sosa was last
with -16.8 wins. That is, he hit worse in high leverage situations than he normally did and this cost his teams 16.8 wins over the course of his career. These two hitters maybe could not be more
different and they may be good illustrations of what is going on with this clutch stat.
So let's call Tom's stat Clutch. That's what it is called at Baseball Reference. I took all the right-handed batters and left-handed batters since 1950 who had 4000+ PAs (653 players). Then I divided
their Clutch stat by their PAs. I did the same thing for HRs and strikeouts. Then I ran a regression with Clutch/PA being the dependent variable and HR/PA and SO/PA being the independent variables. I
also added a dummy variable for being a righty (1 for righties and 0 for lefties).
Here is the regression equation
Clutch/PA = 0.0007 - .00025*Righty - .0169*HR/PA - .00157*SO/PA
All three variables seem to be significant. Here are the t-values:
Righty -6.31
HR/PA -10.49
SO/PA -3.06
R-squared is .314 (meaning that 31.4% of the variation in Clutch/PA across players is explained by the equation) and the standard error per 700 PAs is .33.
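For anyone who wants to replicate this kind of regression, here is a sketch of the setup. The data below is fabricated (noiseless, so ordinary least squares recovers the equation exactly); the real run used the 653-player sample:

```python
import numpy as np

def ols(y, X):
    """Ordinary least squares with an intercept; returns the coefficient
    vector (intercept first) and R-squared."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - resid.var() / y.var()
    return beta, r2

# Fabricated sample: righty dummy, HR/PA, SO/PA for 50 "players"
rng = np.random.default_rng(0)
righty = rng.integers(0, 2, 50).astype(float)
hr_pa = rng.random(50) * 0.06
so_pa = rng.random(50) * 0.30
X = np.column_stack([righty, hr_pa, so_pa])
# Build y from the equation in the text (no noise, so OLS recovers it)
y = 0.0007 - 0.00025 * righty - 0.0169 * hr_pa - 0.00157 * so_pa
beta, r2 = ols(y, X)
print(np.round(beta, 5), round(r2, 3))
```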
Multiplying -.00025*700 gives us -.172 (assuming 700 PAs is a full season). So simply being a righty means you will have a negative Clutch rating of -.172, meaning you will cost your team .172 wins.
This could be because righties can't use the hole at first base with a runner on as well as lefties. When a runner is on first, it makes for a slightly higher leverage situation. Also, righties might
have to face right-handed pitchers more often in high leverage situations than lefties face left-handed pitchers.
To see the impact of HRs and SOs, I found the standard deviation of HR/PA and SO/PA and then checked to see how much Clutch/PA would change with a one standard deviation increase in both stats. Here
they are
HR/PA: .014
SO/PA: .0449
The coefficient on HR/PA was -.0169. That times .014 = -0.00024. But that times 700 PAs is about -.166. So being one standard deviation above average in HR/PA costs your team .166 wins per season.
Maybe HR hitters cannot adapt well in high leverage situations since they generally just swing for the fences. But that is just a guess.
Something similar could be going on for guys who strike out a lot. The coefficient on SO/PA was -.00157. That times .0449 = -0.00007. That times 700 = -.049. So increasing your strikeout rate by one standard deviation costs your team .049 wins per season. Maybe guys who don't strike out a lot have better bat control, so they can hit the ball through the hole at first base better than average or adapt to the situation better.
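The arithmetic for these one-standard-deviation effects is just coefficient times SD times 700 PAs:

```python
def season_effect(coef, sd, pa=700):
    """Change in Clutch wins over a full season from a one-standard-deviation
    change in a stat, given its regression coefficient."""
    return coef * sd * pa

print(round(season_effect(-0.0169, 0.014), 3))    # HR/PA: -0.166
print(round(season_effect(-0.00157, 0.0449), 3))  # SO/PA: -0.049
```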
Let's look at how all this affects Nellie Fox. He was a lefty, so he does not get the righty penalty. His career HR/PA = .003488. The average for all the players in the sample was .0268. So he was
.0233 below that. To see the effect for the whole season, we multiply that first by -.0169, the coefficient on HR/PA from the regression equation and then times 700. This gives us -.0233*-.0169*700 =
.276. So his lack of power added .276 wins to his teams each year.
What about his entire career? He had 10,035 career PAs or 14.33 seasons. With 14.33*.276 = 3.96, Fox gets 3.96 clutch wins for his whole career just due to his lack of power.
For SO, Fox had a career rate of .0206. The average was .133. So he was .112 below that. Multiplying, -.112*-.00157*700 = .122 (-.00157 was the coefficient on SO/PA). So his ability to not strike out gave his teams .122 clutch wins per season. For his career that would be 1.76 clutch wins. Then 3.96 + 1.76 = 5.72. Just being a low-HR, low-SO guy added 5.72 clutch wins. That is nearly half his total.
For Sosa, we have a HR/PA rate of .06154 and a SO/PA rate of .233. Doing the same exercise as I did above for Fox has him with the following "clutch losses" per season due to his high HR rate and
high SO rate:
HR/PA = .41
SO/PA = .11
Sosa had 9,986 career PAs or 14.14 seasons. His HR hitting cost him 5.81 clutch wins and his striking out cost him 1.54. And being a righty cost him 2.43 wins (14.14*.172 = 2.43). The .172 was how
many wins a righty lost per year, as explained above. Then 5.81 + 1.54 + 2.43 = 9.78. That is more than half of his clutch losses.
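Here is the Fox and Sosa arithmetic in one place. The rates and coefficients are the ones given above; my Sosa total comes out near -9.9 rather than -9.78 because I use PA/700 for seasons instead of the 14.14 in the text:

```python
LEAGUE_HR, LEAGUE_SO = 0.0268, 0.133                 # sample averages from the text
B_RIGHTY, B_HR, B_SO = -0.00025, -0.0169, -0.00157   # regression coefficients

def clutch_from_profile(pa, hr_rate, so_rate, righty):
    """Approximate career Clutch wins implied by the regression alone."""
    per_pa = (B_RIGHTY * righty
              + B_HR * (hr_rate - LEAGUE_HR)
              + B_SO * (so_rate - LEAGUE_SO))
    return per_pa * pa

print(round(clutch_from_profile(10035, 0.003488, 0.0206, righty=0), 2))  # Fox: 5.72
print(round(clutch_from_profile(9986, 0.06154, 0.233, righty=1), 2))     # Sosa: about -9.9
```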
All of this is, of course, an approximation. The regression is not perfect, since the r-squared was only .314. But the variables all were significant, and the F-stat was 98 (that is significant, and it means that the 3 variables together probably explain some part of the dependent variable).
So Tom Tango's clutch stat is great in terms of what clutch stats should do but it may have some biases. But those biases might be ones teams should care about since HR hitting ability and SO
avoidance ability are identifiable traits.
I did a very different kind of study several years ago called Do Power Hitters Choke in the Clutch?. I have a link to a similar study by Andrew Dolphin. In this other study it did not look like they
did choke. Also, here are some other comments I made at the tangotiger link:
I happened to have a list of players with 6000+ PAs from 1987-2001 with their OPS in close and late situations (CL) and their OPS in non-CL situations. I took the ratio of CL/nonCL. Tino Martinez did
the best, with 1.095, meaning that his OPS in CL situations was 9.5% higher than nonCL. The correlation between CL OPS/nonCL OPS and SO/PA is -.364. So it looks like guys who strike out a lot have a little harder time doing well in the clutch.
Also, if you go to the rankings, you can see that 10 of the 12 best players in maintaining their OPS in the CL were lefties or switch hitters. And it looks like 8 of the bottom twelve are righties.
Molina hit only his 6th career triple in the game, and that was after over 4,000 ABs. That sounds slow.
I have a theory that you can get a general idea of a guy's speed or base running ability by looking at his triple-to-double ratio. Some fast guys don't hit the ball hard enough or often enough to get many triples, so just using triples is not enough to gauge speed. And some guys who may not be that fast might get a lot of triples simply because they are good hitters.
But if you look at this ratio, it tells you how often a guy made it to third relative to how many times they had to stop at second. And if you get thrown out at third, you get a double. Fast guys
will turn long hits into triples more often than slow guys who must stop at 2nd.
But Voros McCracken has a better way to do it. Take the following ratio: 3B/(2B + 3B). This makes it an average or a rate. It tells us what percentage of the time a batter was successful when he had
a chance to make it to third with a triple instead of a double.
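Voros's rate is a one-liner. The counts below are hypothetical, just to show a slow guy versus a fast guy:

```python
def triple_rate(doubles, triples):
    """Voros McCracken's rate: the fraction of double-or-triple chances a
    batter converts into triples, 3B / (2B + 3B)."""
    return triples / (doubles + triples)

# Hypothetical counts: a slow catcher vs. a burner
print(round(triple_rate(250, 6), 3))   # 0.023 -- Molina territory
print(round(triple_rate(200, 60), 3))  # 0.231
```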
Let's see where Molina ranks in this stat. To do that, I found all the right-handed batters from 1960-2009 who had 4,000+ ABs. The table below shows the top ten and the bottom ten.
The average rate for righties was .110. That means that the average righty was 4.6 times more likely to get a triple instead of a double than Molina (.110/.024 = 4.62). The next table shows the top
ten and bottom ten for lefties. Their average rate was .131.
The next table shows the top ten and bottom ten for switch hitters. Their average rate was .146 (I have no idea why it is higher than the lefties' rate).
Here is what was said at the Dallas Morning News blog by Guy Reynolds, Photo editor
"I found this photo in the AP archives yesterday and wrote about the differences from 1949 and today on the Photography blog here . It's hard to believe that this shot was made as Tommy Henrich
approached home plate after hitting a walk-off home run to win the first game of the '49 series 1-0. A World Series game! For some reason all the excessive exuberance shown today by players after
every little thing bothers me. Seeing this old photo just made me smile."
The whole team did not go out to mob Henrich, as you can clearly see in the photo. See No jube at the plate? Archival photo shows walk-off home run in 1949 World Series. (Hat Tip: David Pinto's
Baseball Musings)
I guess it depends on what you call old school (David Pinto's blog entry on this was titled "Old School Walk-Off"). This link shows Mazeroski's series-winning HR in 1960. I know it is different because it ended the series, whereas this one from 1949 was just the first game. But it looks like a lot of the team came out to home plate to congratulate him as fans poured onto the field.
Then there is Bobby Thomson’s series winning HR in the 1951 NL pennant playoff
Bobby Thomson
Again, it is a little different since it won a pennant.
But here is Dusty Rhodes's HR to win game 1 in the 1954 World Series. The whole team comes out to congratulate him at home plate and you also see Willie Mays jumping up and down as he rounds the
bases. It is about 10 minutes long and the HR comes at the end, of course.
Dusty Rhodes
Here is a video that shows Eddie Mathews hitting a walkoff HR in game 4 in the 1957 World Series. It looks like a big celebration at home plate
Eddie Mathews
So sometimes in the old days they had big celebrations after walk-off HRs.
My first post on the Astros was Astros Offense On Record Setting Low Pace. Right now their OPS is .643 and the league average is .729. So .643/.729 = .882. That would be the 11th worst since 1969, as
you can see from the table below.
They have been doing better lately. In June, the Astros had an OPS of .691 while the league average was .720. That is a ratio of .96. So far in July, it is .689/.733 for a ratio of .94.
The Astros have an OPS+ of 73 according to Baseball Reference. It takes park effects into account as well as the league average (it is calculated a little differently than above). The lowest team OPS+
I found going all the way back to 1920 was 69, for the 1920 Philadelphia A's. So the Astros are close to that.
I am not sure what to make of the Astros' park ratings in the Bill James Handbook. For the years 2007-9, they have a run rating of 96, meaning that runs scored in their park are 96% of the
league average. But the rating for AVG is 101 and for HRs 108. So that indicates a slightly above average hitter's park. The walk rate is 98. That does not seem like enough to offset the HR and AVG
ratings to say their park is a little hard on the hitters. The error rate is only 87. That might hold down the runs. My best guess is that when it comes to OPS, Minute Maid should be a little helpful
to the Astros' hitters.
The Blue Jays have an isolated power (ISO) of .205, since their SLG is .445 and their AVG is .240. That is higher than the all-time record of .205 by the 1997 Mariners. Relative to the league average,
it would be the third highest since 1900, at 138 (.205/.148 = 1.38). The league ISO in the AL this year is .148. The 1927 Yankees are the highest in relative ISO, at 153. My first post on this was
Blue Jays On Record Power Pace.
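The ISO arithmetic above is simple to script; a quick sketch using the team numbers as quoted:

```python
def iso(slg: float, avg: float) -> float:
    # Isolated power: extra bases per at-bat, i.e. SLG minus AVG.
    return slg - avg

jays_iso = iso(0.445, 0.240)        # .205
relative_iso = jays_iso / 0.148     # vs. the quoted AL average -> ~1.385
```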
It began with a 15-3 win over the Tigers on June 9. The Sox had 16 hits, including 3 HRs. They beat Detroit again the next day to take two out of three. In the last 30 games, the Sox have outscored
their opponents 156-77. That gives them a Pythagorean winning pct of .804. In 30 games that would be 24.1 wins. All data is from Baseball Reference.
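The Pythagorean arithmetic is easy to reproduce; a quick sketch using the classic Bill James form with exponent 2:

```python
def pythag_pct(runs_for: float, runs_against: float) -> float:
    # Bill James's Pythagorean expectation with the classic exponent of 2.
    return runs_for ** 2 / (runs_for ** 2 + runs_against ** 2)

pct = pythag_pct(156, 77)      # ~.804 over the 30-game stretch
expected_wins = pct * 30       # ~24.1 wins
```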
The Sox hitters have a .793 OPS in these games. The following formula shows the relationship between runs per game and OPS from 2001-04 (may not be the most accurate formula for this case, but I had
it handy).
R/G = 13.27*OPS - 5.29
It predicts the Sox would score 5.22 runs per game. That is very close to what they have actually done (5.2). The Sox pitchers have allowed an OPS of .626. That would work out to about 3 runs per
game or 90 total runs. They have actually only allowed 77.
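Plugging both OPS figures into the quoted fit reproduces the numbers in this paragraph (within rounding); a small sketch:

```python
def runs_per_game(ops: float) -> float:
    # The linear OPS-to-runs fit quoted above (estimated on 2001-04 data).
    return 13.27 * ops - 5.29

offense = runs_per_game(0.793)        # ~5.23 R/G for the Sox hitters
defense = runs_per_game(0.626)        # ~3.0 R/G allowed
expected_runs_allowed = defense * 30  # ~90 runs over 30 games (actual: 77)
```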
The Sox OPS differential is .167. The next formula shows the relationship between OPS differential and winning pct.
Pct = 1.26*OPSDIFF + .5
This gives the Sox a pct of about .710. That would be only about 21 wins (which would still be very good). The big thing is that the Sox pitchers are allowing fewer runs than expected based on the
OPS they have allowed. They must be doing well with runners on base in the last 30 games (though for the year they have allowed a .704 OPS overall and .741 with runners on). The Sox have outhomered their opponents 34-17.
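And the winning-percentage fit, applied the same way:

```python
def win_pct(ops_diff: float) -> float:
    # The OPS-differential-to-winning-percentage fit quoted above.
    return 1.26 * ops_diff + 0.5

pct = win_pct(0.167)       # ~.710
wins_in_30 = pct * 30      # ~21.3 wins -- versus 24.1 from the run data
```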
They have won or swept 9 of their last 10 series. The only series they lost was 2 out of 3 to the Royals a couple of weeks ago in KC. They avenged that with a 3 game sweep in Chicago, outscoring them
28-8. They beat the Tigers 2 out of 3 to start this run when the Tigers were in 2nd place. But the Tigers were recently in first place until yesterday. The Sox also swept the first place Braves and
took 2 of 3 from the Rangers. And last week they swept the 2nd place Angels 4 straight in Chicago, outscoring them 19-5. In the five losses, the Sox have not lost by more than 2 runs, losing all 5 by
a total of 9 runs.
Now if we can only get Oswalt from the Astros to take Peavy's place.
Derrick Goold and Joe Strauss recently came up with a stat they call the "splinter score." It is HRs divided by strikeouts. But neither HRs nor K's are adjusted for era or the league average. See Bird
Land 10@10: Pujols, Musial & the Splendid Splinter Scale. (hat tip: Baseball Think Factory)
Just about a year ago I posted an entry called Which Players Had The Best HR-To-Strikeout Ratios? Here is that post.
I looked at every player with 5000+ PAs since 1920. I found their relative HRs and their relative strikeouts. Then found the ratio of the two. Ken Williams, for example, hit 3.70 times as many HRs as
the average player of his time and league while striking out only 75% as often as the average player. Since his ratio of ratios (3.7/.75 = 4.93) is the highest of anyone in the study, he is ranked
first. The data comes from the Lee Sinins Complete Baseball Encyclopedia. The table below shows the top 25:
DiMaggio hit only 41% of his HRs at home in his career while Williams hit 72%. So it is likely the case that DiMaggio would rank first, and probably by a wide margin, if HRs were park adjusted. Ted
Williams hit less than 50% of his HRs at home.
The next table shows which players had the lowest relative strikeout rates among guys who hit 40+ HRs. Again, no pikers here. In 2004, Bonds had only 41 strikeouts while the average player would have
had 100. I am so proud to see the demonstration of Polish power with 3 for Ted Kluszewski and 1 for Carl Yastrzemski (whose 1970 season ranks 27th). Don't forget Stan Musial is 13th on the above list.
Below is a post from last year called Starting Pitchers As Relievers Over Time. A lot of people have been talking about starter Matt Garza coming in as a reliever the other day. But it was once fairly
common for starters to pitch in relief. I don't claim to know all the reasons why the usage of pitchers has changed over time. But here is that post.
Many fans know that starters were often also used as relievers in the past. Lefty Grove, for example, only started 30 games the year he won 31 games (in 1931). He came in 11 times as a reliever. In
1930, he won 28 games while starting 32 and coming in to relieve 18 times.
On May 23, 1911, Christy Mathewson pitched a complete game victory giving up only 1 earned run. Then on May 26, he pitched the last 1 and 2/3 innings to get a win. When he came in in the 8th, the
Phillies had two men on and had just scored 2 runs to tie the game. Then he got a double play. The Giants scored 2 in the bottom of the 8th and Mathewson pitched the 9th for the win, giving up no
hits. The next day he pitched a complete game shutout.
But how often did starters pitch in relief in the past, and how has this changed over time? I looked at the percentage of games pitched in relief by starters in each decade starting with 1900-09. In each
decade I found this % for the season leaders in games started. The leaders group included 3 pitchers for each team in each year; I figured that each team would have at least 3 guys who started
fairly often. But I also looked at the % for all pitchers who started at least 31 games (and at least 33 beginning in 1960). So the table below shows these percentages:
The first column shows the % of games pitched in relief by the leaders in starts. That would be the top 480 in games started in a season for the 1920s, for example. So in that group, 19.5% of their
games were in relief. The next column shows the % of games pitched in relief by pitchers who started at least 31 games (up to the 1950s) or 33 games since the 1960s. The trends are pretty clear.
The graph below shows the percentages over time.
The first post on this is right before this one. It was generated by an announcer saying something like "Sosa hit a lot of HRs when the score was one-sided."
I thought of another way to look at this. In his career, Sosa had the following HR%'s in various situations. Data from Baseball Reference
Tie Game 0.0642
Within 1 R 0.0669
Within 2 R 0.0676
Within 3 R 0.0669
Within 4 R 0.0668
Margin > 4 R 0.0842
Ahead 0.0710
Behind 0.0710
So, yes, his % is much higher in games when his team was ahead by more than 4 runs or behind by more than 4 runs. He had 1139 ABs in those situations. What if he had had his "Within 4 R" HR% in the
"Margin > 4 R" ABs? He would have had 76 HRs in those cases instead of the 96 he actually had. So he would lose 20 career HRs. That would still give him 589. Notice that his HR%'s in the other cases are
all pretty close together.
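The counterfactual is just a baseline rate applied to an at-bat count; a sketch with the paragraph's numbers (Sosa's 609 career HRs as quoted):

```python
def counterfactual(actual_hr: int, at_bats: int, baseline_rate: float):
    # HRs expected in the split at the baseline rate, and HRs "lost"
    # if we swap the expected total in for the actual one.
    expected = baseline_rate * at_bats
    return expected, actual_hr - expected

expected, lost = counterfactual(96, 1139, 0.0668)
career_after = 609 - round(lost)   # ~20 fewer HRs -> 589 career
```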
What about from 1998-2001? Using Baseball Reference again, here are his HR totals for each season followed by how many he hit in "Margin > 4 R" cases preceded by the totals for the 4 years
So 18.1% of his HRs were hit in "Margin > 4 R" cases. What about the entire NL for these years? Here is the same thing for the whole league
The league hit 15.75% of its HRs in "Margin > 4 R" cases. What if Sosa had the same %? Well, 15.75% of 243 is about 38. He actually hit 44 in "Margin > 4 R" cases. So we should take 6 HRs away from
him. That would leave him 237 for the whole 1998-2001 period, still an amazing total.
Now 38/237 = .16 or 16% of HRs in "Margin > 4 R" cases. If I dropped him down to 37/236, it would be .1568. So a loss of about 6 HRs is fairly accurate.
So, bottom line, if Sosa matched the league average in when he hit HRs according to run margin, he would not lose very many HRs. How could anyone fault him for this?
While watching the Rangers' broadcast of their game vs. the White Sox last night, one of the announcers said something about how Josh Hamilton's HRs are usually very important, like the one that put
them ahead 2-1. Then he said something about Sammy Sosa like "in those years when he was hitting 60 HRs, he hit a lot of them when the game was one-sided."
I don't know if that is true. Here is one way to look at it. Sosa had a HR% in non-close and late situations of 10.12% during these years. In close and late (CL) situations, it was 8.85% (so he
dropped off, but that is still much higher than most players in CL situations). But hitters generally have a lower HR% in CL situations. From 1991-2000, it was 2.99% in non-CL cases and 2.63% in CL
situations, for a decline of about .0036.
So let's suppose that Sosa's differential should have been only .0036; then he should have had a 9.76 HR% in CL situations. He had 384 CL ABs. A 9.76 HR% would give him 37.47 HRs in CL situations. He
actually had 34. So maybe he should have had 3.47 more HRs in those years, if he had not "choked" in the clutch. This is not a big deal.
Conversely, if we add .0036 to his .0885 CL HR% to get his expected non-CL HR%, we would have a HR% of 9.21%. In his 2,165 non-CL ABs, he would hit 199 HRs. He actually hit 219 in non-CL situations
during those years. If we take away 20 HRs over these four years, he ends up with 233 instead of 253. That is still an average of 58.25 per season. Pretty incredible.
But we can easily imagine that Sosa had to face some very tough relievers in those CL situations, who might have been told to not give him much to hit. The following table summarizes his stats in
various situations in each of the four seasons. Sept and Oct data are shown for 1998 & 2001 because in those years the Cubs were fighting for a playoff spot. In general, I think the numbers show that
he hit very well with runners on, in CL situations, and late in the season when the Cubs were trying to make the post season (they finished last in both 1999 & 2000). Sosa hit a lot of meaningful HRs
in these years. There is nothing misleading or deceiving about his performance or his stats.
To see what the normal clutch/non-clutch differentials are, go to General Clutch Data.
Dave Studeman brought this topic up earlier this week at The Hardball Times with Koufax's peak. I have done some research on a related note. It is not as sophisticated as what Dave has done since
it is not clutch-based. But I did not find that Koufax had the best peak. This is after taking park effects and league averages into account. Also, I tried only using fielding independent stats. Here
are the links:
Bert Blyleven: As Dominating as Sandy Koufax
How Good Was Sandy Koufax Outside of Dodger Stadium? (I compared him to Gibson, Marichal and Bunning)
The Best Five-Year Pitching Performances Since 1920 Based on Fielding Independent ERA
The Best Five-Year Pitching Performances
Area Formulas
Now that students know the formula for the area of a triangle, it's time to build on this knowledge to develop a formula for the area of a parallelogram.
Materials: Overhead transparency, an activity sheet with six parallelograms on it with base and height measures given for four of the parallelograms, rulers for students to use
Preparation: Draw a parallelogram on an overhead transparency.
• Ask: Does anyone know what this figure is called?
Students should say that it is a parallelogram. If they don't, tell them what it is.
• Say: Let's list some of the properties of this figure on the board. Who can tell me something that is true about all parallelograms? Students will probably come up with several ideas, such as: the
opposite sides are parallel and congruent, the opposite angles are congruent, all rectangles are parallelograms, a rhombus is a parallelogram, the sum of the angles of a parallelogram is 360
degrees, etc.
Draw one of the diagonals of the parallelogram as shown below.
• Ask: What can you tell me about the two triangles, triangle MNP and triangle OPN?
Students should say that the two triangles are congruent. If they don't, suggest that they are. If you need to, cut out a parallelogram then cut the parallelogram in half and show that the two
triangles are congruent.
• Say: That's right. They are congruent. Now we know how to find the area of a triangle and since this parallelogram is made up of two congruent triangles, we could find the area of one of the
triangles and double it.
• Ask: How would I find the area of triangle MNP?
Students should say you need to draw a line segment to show the height of the triangle. So, draw the line segment PQ in triangle MNP as shown below.
• Ask: What do I do now?
Students will say you need to find the measure of line segments MN and PQ and substitute them into the formula A = 1/2 (b x h). So measure the sides and find the area.
• Say: Now that I know the area of the triangle, how does that help me find the area of the parallelogram?
Students will say you need to double that value to find the area of the parallelogram.
• Say: That's right, so the formula for the area of a parallelogram is twice the area for a triangle, A = 1/2 (b x h) + A = 1/2 (b x h), or A = b x h.
Do another problem like this in which the height is 8 feet and the base is 12 feet. Draw a picture of that parallelogram on the board. Have students find the area of the figure at
their desks. Then have someone come to the board to do it for the class to see.
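The doubling argument translates directly into code; a small sketch (the 12-by-8 case is the follow-up problem above):

```python
def triangle_area(base: float, height: float) -> float:
    return 0.5 * base * height

def parallelogram_area(base: float, height: float) -> float:
    # Two congruent triangles, exactly as derived in the lesson:
    # A = 1/2(b x h) + 1/2(b x h) = b x h.
    return 2 * triangle_area(base, height)

area = parallelogram_area(12, 8)   # 96 square feet
```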
Pass out an activity sheet for students to work on individually or in pairs.
Anomalous dispersion and resonant absorption
Next: Wave propagation through a Up: Electromagnetic wave propagation in Previous: The form of the
Thus, the phase velocity of the wave is determined by the real part of the refractive index, via v_p = c / Re(n).
Note that a positive imaginary component of the refractive index leads to the attenuation of the wave as it propagates.
Figure 5 shows a sketch of the variation of the real and imaginary parts of the refractive index with frequency. It is clear from the figure that normal dispersion occurs everywhere except in the
immediate neighbourhood of a resonant frequency, where anomalous dispersion and resonant absorption take place.
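A single-resonance Lorentz oscillator model reproduces this behaviour numerically: n² = 1 + ω_p²/(ω_0² − ω² − iγω). The parameters below are illustrative choices, not values from the lecture:

```python
import cmath

def refractive_index(w, w0=1.0, wp=0.3, gamma=0.05):
    # Single-resonance Lorentz model (illustrative parameters only).
    return cmath.sqrt(1 + wp**2 / (w0**2 - w**2 - 1j * gamma * w))

below = refractive_index(0.9)   # just below resonance
on    = refractive_index(1.0)   # at resonance
above = refractive_index(1.1)   # just above resonance

# Im(n) (absorption) peaks at resonance; Re(n) drops through it
# (anomalous dispersion) instead of rising with frequency as usual.
```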
The dispersion relation (4.18) only takes electron resonances into account. Of course, there are also resonances associated with displacements of the ions (or atomic nuclei). The off-resonance
contributions to the right-hand side of Eq. (4.18) from the ions are smaller than those from the electrons by a factor of the order of the ratio of the electron mass to the ion mass.
Richard Fitzpatrick 2002-05-18
PHI 112 - LOGIC II (3 CR.)
Evaluates deductive arguments utilizing methods of symbolic logic. Lecture 3 hours per week.
To introduce the student to contemporary symbolic logic. Several methods of testing arguments for validity using symbolic logic will be stressed.
At the completion of this course, the student will have an appreciation for the use of symbols in expediting the process of determining the validity or invalidity of arguments. Specifically, the
student should be able to:
A. Define such terms as "truth-function," "simple statement," "compound statement," "conjunction," "negation," "disjunction," "material implication," "material equivalence," etc.
B. Be able to use the symbols for basic truth functions in preparing truth tables to: (a) determine the truth or falsity of compound statements; (b) determine the validity or invalidity of
arguments in symbolic logic; (c) determine statement-forms.
C. Be able to use the basic Rules of Inference to prove the validity of arguments using the Method of Deduction.
D. Be familiar with the basic techniques of Quantification Theory in testing arguments which cannot be dealt with using syllogistic or ordinary symbolic logic.
E. Be aware of the difference between inductive and deductive logical techniques.
A. The nature of symbolic logic; the distinction between simple and compound truth-functional statements; the truth-tables which define conjunction, negation, disjunction, material implication,
and material equivalence.
B. Use of truth tables in determining the truth or falsity of compound truth-functional statements.
C. Use of truth tables in determining the validity or invalidity of arguments.
D. The distinctions among tautologies, self-contradictions, and contingent statements.
E. The distinctions between statements and statement-forms, and between arguments and argument-forms.
F. The Method of Deduction; the Rules of Inference including the Replacement Rules.
G. Use of the Method of Deduction to determine the validity of arguments by means of a formal proof.
H. Short Proofs of Invalidity.
I. Quantification Theory: quantifiers, quantification rules, and their application.
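A tiny truth-table checker captures the method described in objective B and content items B-C. This is only an editorial sketch of the technique, not material from the course itself:

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion, n_vars=2):
    # Valid iff no row of the truth table makes every premise true
    # and the conclusion false.
    return all(
        conclusion(*row)
        for row in product([True, False], repeat=n_vars)
        if all(prem(*row) for prem in premises)
    )

# Modus ponens (p -> q, p, therefore q): valid.
modus_ponens = is_valid([lambda p, q: implies(p, q), lambda p, q: p],
                        lambda p, q: q)
# Affirming the consequent (p -> q, q, therefore p): invalid.
affirming = is_valid([lambda p, q: implies(p, q), lambda p, q: q],
                     lambda p, q: p)
```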
Revised 10-95
Homework Help
Posted by Abhi on Thursday, June 14, 2012 at 1:43pm.
If tan⁻¹((x² − y²)/(x² + y²)) = a, then prove that dy/dx = (x/y) · (1 − tan a)/(1 + tan a).
• Maths - MathMate, Thursday, June 14, 2012 at 5:44pm
write the equation as:
(x² − y²)/(x² + y²) = tan(a)
Cross multiply to eliminate denominators:
x² − y² = (x² + y²) tan(a)
Implicitly differentiate both sides:
(2x+2yy')tan(a) = 2x-2yy'
Group terms and express y'(=dy/dx) in terms of x, y and tan(a) to get the required expression.
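A quick numeric check of the result (an editorial sketch, not part of the original thread). Note that on the constraint surface tan a = (x² − y²)/(x² + y²), so the claimed form also equals y/x:

```python
from math import atan, tan, isclose

x, y = 3.0, 2.0
a = atan((x**2 - y**2) / (x**2 + y**2))  # a determined by the point (x, y)

claimed = (x / y) * (1 - tan(a)) / (1 + tan(a))
via_implicit = y / x   # what 2x - 2y*y' = (2x + 2y*y')*tan(a) solves to here
```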
• Maths - abhi, Tuesday, September 18, 2012 at 9:30am
Thanku soo much
Is the direction of the electric field in Coulomb's formalism an experimental matter? Or can we determine the direction theoretically?
I am absolutely sure we can. At the moment a friend of mine has a course on that. But I don't know exactly how to do it. It somehow works with the electrostatic potential: \[ \vec{E} = -\vec{\
nabla}\phi\] Then the electric field gives the direction a free positive charge would be pushed, and the electric field is orthogonal to the equipotential surfaces. I hope that'll give you an idea on how to solve the problem.
may be! I have to think and then continue this discussion! thanks Kathi!
it is just a definition. You just have to define it once and then stick to that definition, otherwise your theories become inconsistent. After all, there is no "plus" or "minus" in nature, only two
opposite charges that we needed a name for...
@r0oland so you said that the basis for getting the direction is only theoretical work?
It's a theory that describes what happens in the experiment. Maybe I am not getting your question right. You define the direction of your field as going from "plus" to "minus" (ie. from the
positive to the negative charge), while "plus" and "minus" are also only a definition for two opposite charges. The direction of the E-field is also the direction in which a positive test charge
will feel a force. So the direction can be predicted theoretically and has also a meaning in the "real world"... I hope that helps, otherwise I would ask you to restate your question so we might
find a solution... regs, Joe
hey Joe! your answer educates me, but not exactly in this case! the answer of @kathi26 is exact! but I'm thankful, Joe!
wikipedia quote: "The strength or magnitude of the field at a given point is defined as the force that would be exerted on a positive test charge of 1 coulomb placed at that point; the direction
of the field is given by the direction of that force." Kathi26's formula just states the connection between the E-field and the electric potential. But the electric potential is only defined using
the electric field, hence we will run in circles using that. In what way is my answer not exact?
I only want a theoretical description of the direction of the field!
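One way to make the "theoretical direction" concrete (an editorial sketch, not from the thread): take φ for a point charge, compute E = −∇φ numerically, and the direction comes out radially away from a positive charge with no experiment involved. Units are simplified with k = 1:

```python
def phi(q, x, y, z, k=1.0):
    # Electrostatic potential of a point charge q at the origin (k = 1).
    return k * q / (x**2 + y**2 + z**2) ** 0.5

def e_field(q, x, y, z, h=1e-6):
    # E = -grad(phi), via central finite differences.
    return (
        -(phi(q, x + h, y, z) - phi(q, x - h, y, z)) / (2 * h),
        -(phi(q, x, y + h, z) - phi(q, x, y - h, z)) / (2 * h),
        -(phi(q, x, y, z + h) - phi(q, x, y, z - h)) / (2 * h),
    )

E = e_field(+1.0, 2.0, 0.0, 0.0)   # at (2, 0, 0): E ~ (+0.25, 0, 0), outward
```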
Re: polredabs(,16) for quadhilbertreal
Karim BELABAS on Tue, 22 Oct 2002 19:48:04 +0200 (MEST)
On Mon, 21 Oct 2002, Igor Schein wrote:
> On Sat, Oct 19, 2002 at 06:42:20PM +0200, Bill Allombert wrote:
>> quadhilbertreal call polredabs0 via makescind via rnfpolredabs.
>> Should it be possible to use polredabs(,16) to avoid hanging on the factorization ?
Yes, I have made the necessary changes [ I had not implemented the relevant
portions of rnfpolredabs ].
> Based on my experience, as of current implementation, I wouldn't want
> to see it as a default, as long as it's not a default for polredabs()
> itself. I've seen polynomials where polredabs(,16) is a noop until
> you, say, increase primelimit. So maybe, make lazy factorization a
> default for all related functions, but leave an option to do a
> complete one with a certain flag. Or, even make lazy factorization a
> global default(). As long as there's consistency, I have no
> problem.
I think this is irrelevant for quadhilbert. The problem with partial (lazy)
factorisation is when a "large" prime actually divides the field
discriminant. In general (assume K = Q[X]/(P)), you write disc(P) = D * f^2,
and you know that
v_p(disc(K)) <= v_p(D) for all p, with equality if p <= factorization bound
polredabs then LLL-reduces an order of discriminant D. If D >> disc(K), you
get little or no reduction.
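A toy version of "partial factorization" makes the mechanism clear (this is only an illustration of the idea, not PARI's implementation): trial-divide the discriminant by primes up to a bound and leave the rest as an unfactored cofactor. Any square hidden in that rough cofactor cannot be removed, so the order being LLL-reduced has discriminant D >> disc(K):

```python
def partial_factor(n: int, bound: int):
    # Factor out all primes <= bound; return (factored part, rough cofactor).
    factors = {}
    for p in range(2, bound + 1):
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
    return factors, n

# disc(P) = 2^4 * 3^2 * 101^2: a bound of 50 misses the square 101^2.
smooth, rough = partial_factor(2**4 * 3**2 * 101**2, 50)
```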
For quadhilbert the ramification is under control, by definition, so the
fields which arise will have smooth discriminants and the above should never
occur [ until bnrstark() is able to treat huge conductors... ].
I've added the relevant nf_PARTIALFACT flags. Any regression ?
Karim Belabas Tel: (+33) (0)1 69 15 57 48
Dép. de Mathematiques, Bat. 425 Fax: (+33) (0)1 69 15 60 19
Université Paris-Sud Email: Karim.Belabas@math.u-psud.fr
F-91405 Orsay (France) http://www.math.u-psud.fr/~belabas/
PARI/GP Home Page: http://www.parigp-home.de/
Corte Madera Algebra 1 Tutor
Find a Corte Madera Algebra 1 Tutor
I have been tutoring friends and family for the past 5 years. I am passionate about education; especially in regards to youth. Some of my experience with youth includes: being a YMCA Youth Court
bailiff, church youth group leader, being Adept drug and alcohol prevention Youth Leader, and being Ambassador for Meaningful Life International to Ghana.
30 Subjects: including algebra 1, Spanish, English, reading
I graduated from Cornell University and Yale Law. I have several years of experience tutoring in a wide variety of subjects and all ages, from small kids to junior high to high school, and kids
with learning disabilities. I am also available to tutor adults who are preparing for the GRE, LSAT, or wish to learn a second language.
48 Subjects: including algebra 1, English, Spanish, reading
...I have taught 6th grade earth science and 8th grade physical science. I have tutored algebra 2, geometry, and Spanish as well as various sciences. I also have experience in the "Lindamood-Bell"
literacy, comprehension, and math techniques.
24 Subjects: including algebra 1, reading, chemistry, physics
...I've tutored family, students, and friends alike. Being a student, I haven't had the experience in teaching that others can offer, but I have a love of math that will hopefully make up for any
shortcomings. In my (relatively short) experience,I've learned that each student is unique and requires a different approach.
19 Subjects: including algebra 1, calculus, physics, writing
Dr. S. taught pre-medical Chemistry, nursing Chemistry and Algebra at Dominican University in San Rafael California before starting her own business. She has been teaching and tutoring since 1979.
29 Subjects: including algebra 1, chemistry, calculus, reading
Rotate a vector about another vector? [Archive] - OpenGL Discussion and Help Forums
01-08-2003, 03:32 AM
Hello there,
I need some help with rotating vectors. I'm building a flight simulator and when the plane is rolling left or right I need to rotate the lift vector(up) around the thrust vector(straight out).
I think I have to do it using cos, sin and tan, but not sure how.
Any help would be muchly appreciated.
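For a roll, Rodrigues' rotation formula does exactly this: rotate the lift vector v about the (unit) thrust axis k by the roll angle θ, via v′ = v cos θ + (k × v) sin θ + k (k·v)(1 − cos θ). A sketch in Python of the idea (a flight sim would translate this to C/C++, or use quaternions):

```python
import math

def rotate_about_axis(v, k, theta):
    """Rodrigues' formula: rotate v by theta (radians) about axis k."""
    n = math.sqrt(k[0]**2 + k[1]**2 + k[2]**2)
    kx, ky, kz = k[0] / n, k[1] / n, k[2] / n      # normalize the axis
    c, s = math.cos(theta), math.sin(theta)
    dot = kx * v[0] + ky * v[1] + kz * v[2]
    cross = (ky * v[2] - kz * v[1],
             kz * v[0] - kx * v[2],
             kx * v[1] - ky * v[0])
    axis = (kx, ky, kz)
    return tuple(v[i] * c + cross[i] * s + axis[i] * dot * (1 - c)
                 for i in range(3))

# Roll the lift (up) vector 90 degrees about the thrust vector:
lift = rotate_about_axis((0.0, 1.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```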
Ecological Archives E085-101-A2
Merel B. Soons, Gerrit W. Heil, Ran Nathan, and Gabriel G. Katul. 2004. Determinants of long-distance seed dispersal by wind in grasslands. Ecology 85:3056 3068.
Appendix B. The Markov chain Synthetic Turbulence Generation (STG) model. A pdf file of this appendix is also available.
The Markov chain STG model is a version of the forest seed dispersal model of Nathan et al. (2002b), adapted for grassland ecosystems. The general structure is presented in the Methods section and by
Katul and Albertson (1998) and Katul and Chang (1999). In this appendix the estimation of the Eulerian flow statistics needed to drive the Lagrangian dispersion is described, followed by descriptions
of the computational grid, numerical scheme, boundary conditions and closure constants.
Upon temporal and spatial averaging the conservation of momentum equations, and following the closure approximations of canopy flows, the Wilson and Shaw (1977) closure model simplifies to the
following set of ordinary differential equations (ODEs):
Mean Momentum: []
Tangential Stress Budget: []
Longitudinal Velocity Variance: []
Lateral Velocity Variance: []
Vertical Velocity Variance: []
where [] is a characteristic velocity scale, C_d is a drag coefficient, a(z) is the leaf area density, and [j] = a_j × L_ws, with L_ws a characteristic length scale specified using the formulation
given by Katul and Albertson (1998) and not permitted to increase at a rate larger than k. The constants a_1, a_2, a_3, and C are determined so that the flow conditions well above the canopy reproduce
well-established surface-layer similarity relations. With estimates of these constants, the set of ODEs can be solved for the mean velocity, the tangential stress, and the three velocity variances, which are used to drive the Lagrangian model.
The computational grid
The computational flow domain was set from zero to 20 × h. The grid node spacing is Δz = 0.005 m. This grid density was necessary due to rapid variability in leaf area density close to the canopy top.
Parameter values at the exact location of the seed are calculated by interpolation between the grid nodes, or extrapolation in the case of x[3] > 20 × h. To ensure seeds do not exit the atmospheric
boundary layer during the computation of dispersal trajectories, the vertical position of a seed is not allowed to increase above 800 m.
The numerical scheme
The five ODEs for the Wilson and Shaw (1977) model were first discretized by central differencing all derivatives. An implicit numerical scheme was constructed for each ODE with boundary conditions
to be discussed in the following section. The tridiagonal system resulting from the implicit forms of each discretized equation was solved using the Tridag routine from Press et al. (1992; pp. 42–43)
to produce the turbulent statistic profile. Profiles for all variables were initially assumed, and a variant of the relaxation scheme described by Wilson (1988) was used for all computed variables.
Relaxation factors as small as 5% were necessary in the iterative scheme because of the irregularity in the leaf area density profile. The measured leaf area density was interpolated at the
computational grid nodes by a cubic spline, as discussed in Press et al. (1992; pp. 107–111), to ensure finite second derivatives of a(z). Convergence was achieved when the maximum difference between two
successive iterations in [] did not exceed 0.0001%. We checked that all solutions were independent of Δz (as described in Katul and Albertson 1998). Calculation of dt is described in Appendix A.
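The tridiagonal solve referred to above (the Tridag routine of Press et al. 1992) is the classic Thomas algorithm: one forward-elimination sweep followed by back-substitution. A minimal sketch (the a/b/c/d array layout is an assumption, matching the usual Tridag convention):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal, b = diagonal,
    c = super-diagonal, d = right-hand side (a[0] and c[-1] unused)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back-substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# [2 1 0; 1 2 1; 0 1 2] x = [1, 2, 3]  ->  x = [0.5, 0.0, 1.5]
x = thomas_solve([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [1.0, 2.0, 3.0])
```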
Boundary conditions and closure constants
Typically, the well-established flow statistics in the atmospheric surface layer provide convenient upper-boundary conditions for closure models. The boundary conditions used are:
Where [] is the standard deviation of any flow variable [] (= []), A[u] = 2.2, A[v] = 2.0, and A[w] = 1.4 (Panofsky and Dutton 1984).
The closure constants are dependent on the choice of the boundary conditions and are determined by assuming that in the atmospheric surface layer (z > 2h), the flux-transport term is negligible and
that [], [], and [] become independent of z for near-neutral conditions. These simplifications result, after some algebraic manipulations (e.g., see Katul and Albertson 1998; Katul and Chang 1999),
in the following relationships between A[u], A[v], and A[w] and a[2], a[3], and C:
Where A[q] = (A[u]^2 + A[v]^2 + A[w]^2)^(1/2). The closure constant a[1] is determined by noting that the eddy-diffusivity is k × (z − d) × u[*] in the surface layer. Hence, q[1] becomes identical to k × (z − d) × u[*], leading to a[1] = 1/A[q]. The above equations are the first analytic expressions relating closure constants to ASL boundary conditions for the Wilson and Shaw (1977) model as
described by Katul and Albertson (1998) and Katul and Chang (1999). Table B1 summarizes the closure constants used resulting from our choice of A[u], A[v], and A[w].
Table B1. Closure constants used in the Markov chain STG model for A[u] = 2.2, A[v] = 2.0, and A[w] = 1.4.
│Closure constant │Value│
│(Wilson and Shaw 1977) │ │
│a[1] │0.30 │
│a[2] │1.58 │
│a[3] │20.8 │
│ │0.07 │
│C │0.12 │
│C[d] │0.20 │
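The entry a[1] = 0.30 in Table B1 can be reproduced directly from the relation a[1] = 1/A[q], with A[q] = (A[u]^2 + A[v]^2 + A[w]^2)^(1/2) and the stated boundary values; a quick check:

```python
import math

# Boundary-condition values quoted in the text (Panofsky and Dutton 1984)
Au, Av, Aw = 2.2, 2.0, 1.4
Aq = math.sqrt(Au**2 + Av**2 + Aw**2)   # Aq = (Au^2 + Av^2 + Aw^2)^(1/2)
a1 = 1.0 / Aq                           # a1 = 1/Aq
# a1 comes out to about 0.304, matching the value 0.30 listed in Table B1
```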
Katul, G. G., and J. D. Albertson. 1998. An investigation of higher order closure models for a forested canopy. Boundary-Layer Meteorology 89:47–74.
Katul, G. G., and W. H. Chang. 1999. Principal length scales in second-order closure models for canopy turbulence. Journal of Applied Meteorology 38:1631–1643.
Nathan, R., G. G. Katul, H. S. Horn, S. M. Thomas, R. Oren, R. Avissar, S. W. Pacala, and S. A. Levin. 2002b. Mechanisms of long-distance dispersal of seeds by wind. Nature 418:409–413.
Panofsky, H., and J. Dutton. 1984. Atmospheric Turbulence: Models and Methods for Engineering Applications. John Wiley, New York.
Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. 1992. Numerical Recipes in Fortran. Cambridge University Press.
Wilson, J. D. 1988. A second order closure model for flow through vegetation. Boundary-Layer Meteorology 42:371–392.
Wilson, N. R., and R. H. Shaw. 1977. A higher order closure model for canopy flow. Journal of Applied Meteorology 16:1198–1205.
[Back to E085-101] | {"url":"http://esapubs.org/archive/ecol/E085/101/appendix-B.htm","timestamp":"2014-04-21T08:22:02Z","content_type":null,"content_length":"16031","record_id":"<urn:uuid:ab5fd9a2-f2f1-45d5-a6bf-64b8921aa540>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00607-ip-10-147-4-33.ec2.internal.warc.gz"} |
My favorite annoyance.
Re: My favorite annoyance.
But I have to admit I thought of that. It is well known that Borg nanoprobes can be anywhere (see the book, "Nanoprobes, where are they and how do we stop them").
So to test the assertion that I am not a computer I drew some blood and subjected it to an electrophoretic analysis.
It is a homebuilt unit but adequate. Nanoprobes react strongly to the microcurrents in the apparatus and will drift towards the anode.
I did not detect a single one. As the test is only 96.3% accurate, I sent it to 4 labs for confirmation.
So the chance that all five tests failed to detect that I am a computer is (1 − 0.963)^5 = 0.037^5 ≈ 6.9 × 10^−8.
So there is about 1 chance in 14 400 000 that I am a computer.
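Assuming the five tests are independent and each 96.3% accurate, the quoted odds check out:

```python
p_miss = 1 - 0.963        # chance a single test fails to detect nanoprobes
p_all_miss = p_miss ** 5  # all five independent tests failing
odds = 1 / p_all_miss     # roughly 14.4 million to 1
```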
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=152747","timestamp":"2014-04-17T01:05:12Z","content_type":null,"content_length":"12706","record_id":"<urn:uuid:9cea4fcd-b1c6-4f86-91a5-a9bc7d70829d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00289-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: A 2nTH ORDER LINEAR
Doug Anderson
Department of Mathematics and Computer Science, Concordia College
Moorhead, MN 56562, USA
ABSTRACT: We give a formulation of generalized zeros and (n, n)-disconjugacy
for even order formally self-adjoint scalar linear difference equations. Sign conditions
on the coefficient functions ensure (n, n)-disconjugacy. This and additional results
are obtained with the use of an associated nonlinear operator.
AMS Subject Classification. 39A10.
In this paper we will be concerned with the 2nth-order linear difference equation
Σ_{i=0}^{n} Δ^i ( ri(t - i) Δ^i y(t - i) ) = 0 (1)
for t in the discrete interval [a, ∞) ≡ {a, a + 1, a + 2, . . .}, where ri(t) is real-valued
on [a + n - i, ∞) for 0 ≤ i ≤ n, and
rn(t) > 0 (2) | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/471/1077780.html","timestamp":"2014-04-21T07:16:19Z","content_type":null,"content_length":"7834","record_id":"<urn:uuid:1611037f-4fe1-4b92-81e4-64dced03cd5f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00025-ip-10-147-4-33.ec2.internal.warc.gz"} |
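Equation (1) is built from the forward difference operator, (Δy)(t) = y(t+1) − y(t), iterated i times. A small sketch of Δ^k on a finite sequence (the polynomial test case is illustrative, not from the paper):

```python
def forward_difference(y, order=1):
    """Apply the forward difference (Delta y)(t) = y(t+1) - y(t)
    'order' times to a finite sequence y."""
    for _ in range(order):
        y = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    return y

# Delta^2 applied to t^2 gives the constant sequence 2,
# mirroring d^2/dt^2 (t^2) = 2 in the continuous case.
second = forward_difference([t * t for t in range(6)], order=2)
```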
Issaquah Algebra 2 Tutor
Find an Issaquah Algebra 2 Tutor
...I feel that this is a very important asset, as I will be able to relate to the student on a more personal level. I have a strong understanding of chemistry and neuroscience, as my undergraduate
degree included work in these fields. I am also proficient with all medical school-related subjects, ...
23 Subjects: including algebra 2, English, reading, chemistry
...I had been teaching Chinese in a heritage Chinese school for years. Recently, I received a Washington State teacher license in world language. I like to spend my leisure time traveling to my home country -- Taiwan and China -- and volunteering in the local community in Chinese culture-related work.
11 Subjects: including algebra 2, physics, geometry, Chinese
...Organic Chemistry was covered in my high school. When I attended Washington State University, I was placed directly into O-Chem; where I earned A's in lecture and lab. Immediately following the
class, I was selected by the WSU Chem department to be an O-chem peer-tutor (a tutorial instructor). I hold a BS in Chem E (GPA 3.7) with minors in Mathematics and Material Science Engineering.
62 Subjects: including algebra 2, English, chemistry, reading
...It is my belief that students are capable of understanding their weaker subjects as long as it is explained in a manner most fitted for the student: whether that be in drawing, words, or
otherwise. Schedule: My current schedule (as of March 2014) is fairly competitive, and any scheduled sessions should be made a week in advance. Cancellations should be made 6 hours in advance.
17 Subjects: including algebra 2, chemistry, calculus, physics
...I mean, who moves from Hawaii to eastern Washington? Well, I did, because despite how beautiful and easygoing life in the islands is, there just wasn't the right opportunities for me to pursue
the higher education I wanted. So I moved to Pullman where I studied at Washington State University and received my B.S in biotechnology and continued on for a master's in science.
14 Subjects: including algebra 2, writing, geometry, biology | {"url":"http://www.purplemath.com/Issaquah_Algebra_2_tutors.php","timestamp":"2014-04-18T13:59:36Z","content_type":null,"content_length":"24122","record_id":"<urn:uuid:db447889-d523-43e1-a456-60f3e7778cd2>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00423-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need help solving for x!
sin(x) = sin(x + pi/3) I know this is simple but so am i... thanks in advance for any help
but there are solutions. When entering both of these in my graphing calculator there are multiple points at which the two functions intersect. I just don't know how to solve for them. I'm looking for
the solutions between 0 and 2pi.
Use the identity $\sin(x+y)=\sin(x)\cos(y)+\cos(x)\sin(y)$. This gives the equation $\sin(x)=\frac{1}{2}\sin(x)+\frac{\sqrt{3}}{2}\cos(x)$, or $\tan(x)=\sqrt{3}$. This will give you the solutions
you want.
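A quick numerical check of the two solutions in [0, 2π) that tan(x) = √3 produces (x = π/3 + kπ):

```python
import math

solutions = [math.pi / 3, math.pi / 3 + math.pi]   # pi/3 and 4*pi/3
for x in solutions:
    # both sides of the original equation agree at each root
    assert abs(math.sin(x) - math.sin(x + math.pi / 3)) < 1e-12
    assert abs(math.tan(x) - math.sqrt(3)) < 1e-12
```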
If $x$ is a solution then $\sin(x)$ and $\sin(x+\pi/3)$ must lie at the intersections of some horizontal line through the unit circle. Furthermore, the angle between these points (measured from the
origin) must be $\pi/3$. But the other angles between the rightmost point and the positive x axis and the leftmost point and the negative x axis must be equal by symmetry, so call them $\theta$. This
leads to the equation $2\theta+\pi/3=\pi$, since together the three angles make a straight line. This gives you one solution and then the other is easily found.
Visually, you could use the Unit Circle. $\sin(x)$ gives the y-coordinate of a point on the Unit Circle, hence the y-axis from $-1$ to $1$ acts as an axis of symmetry. $0\le x\le \pi\Rightarrow \sin(x)=\sin\left(x+60^o\right)$ requires that $x$ and $x+60^o$ are $30^o$ either side of the y-axis: $x=90^o-30^o=60^o$. The exact same logic may be used underneath the x-axis for $\pi\le\ x\ \le\ 2\
pi$ to obtain a 2nd solution. Complete the analysis by adding multiples of 360 degrees to the two solutions. | {"url":"http://mathhelpforum.com/trigonometry/131444-need-help-solving-x-print.html","timestamp":"2014-04-23T20:09:58Z","content_type":null,"content_length":"11364","record_id":"<urn:uuid:e39ea899-b0fa-4c3a-baf3-540a191ec81d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00476-ip-10-147-4-33.ec2.internal.warc.gz"} |
Euler transformation
From Encyclopedia of Mathematics
The Euler transformation of series. Given a series
the series
is said to be obtained from (1) by means of the Euler transformation. Here
If the series (1) converges, then so does (2), and to the same sum as (1). If the series (2) converges (in this case (1) may diverge), then the series (1) is called Euler summable.
If (1) converges, if
are monotone, and if
then the series (2) converges more rapidly than (1) (see Convergence, types of).
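The acceleration is easy to observe numerically. Assuming the standard series form of the transformation — for an alternating series Σ (−1)^n a_n, the transformed series is Σ (−1)^k (Δ^k a)(0) / 2^(k+1), with Δ the forward difference applied to the unsigned terms — a sketch applying it to the alternating harmonic series (which sums to ln 2):

```python
from math import comb, log

def euler_transformed_sum(a, terms):
    """Partial sum of the Euler-transformed series
    sum_k (-1)^k (Delta^k a)(0) / 2^(k+1), where
    (Delta^k a)(0) = sum_j (-1)^(k-j) * C(k, j) * a(j)."""
    total = 0.0
    for k in range(terms):
        dk = sum((-1) ** (k - j) * comb(k, j) * a(j) for j in range(k + 1))
        total += (-1) ** k * dk / 2 ** (k + 1)
    return total

a = lambda n: 1.0 / (n + 1)   # 1 - 1/2 + 1/3 - ... = ln 2
direct = sum((-1) ** n * a(n) for n in range(10))      # plain partial sum
accelerated = euler_transformed_sum(a, 10)             # transformed partial sum
```

With 10 terms, the transformed partial sum is far closer to ln 2 than the plain one, illustrating the "converges more rapidly" statement above.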
L.D. Kudryavtsev
Euler's transformation is the integral transformation
The Euler transformation is applied to linear ordinary differential equations of the form
where the
is called the Euler transform of (2). If
provided that the integrated term arising from integration by parts vanishes. From this it follows that if
The Euler transformation makes it possible to reduce the order of (2) (see Pochhammer equation).
[1] E.L. Ince, "Ordinary differential equations" , Dover, reprint (1956)
[2] E. Kamke, "Differentialgleichungen: Lösungen und Lösungsmethoden" , 1. Gewöhnliche Differentialgleichungen , Teubner (1943)
M.V. Fedoryuk
The Euler transform of the first kind is the integral transform
The Euler transform of the first kind is also called the fractional Riemann–Liouville integral of order
The Euler transform of the second kind is the integral transform
The Euler transform of the second kind is sometimes called the fractional Weyl integral of order
The above transforms have also been introduced for generalized functions.
[1] Y.A. Brychkov, A.P. Prudnikov, "Integral transformations of generalized functions" , Gordon & Breach (1988) (Translated from Russian)
Yu.A. Brychkov, A.P. Prudnikov
See also Fractional integration and differentiation.
[a1] A. Erdélyi, W. Magnus, F. Oberhettinger, F.G. Tricomi, "Tables of integral transforms" , II , McGraw-Hill (1954) pp. Chapt. 13
[a2] A.C. McBride, "Fractional calculus and integral transforms of generalized functions" , Pitman (1979)
How to Cite This Entry:
Euler transformation. L.D. Kudryavtsev, M.V. Fedoryuk, Yu.A. Brychkov, A.P. Prudnikov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098 | {"url":"http://www.encyclopediaofmath.org/index.php/Euler_transformation","timestamp":"2014-04-19T04:21:38Z","content_type":null,"content_length":"27686","record_id":"<urn:uuid:e4caf4f8-18fe-4ef3-92e8-8b373fb5fc2f>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Decimal fractions and the number 1
I have wondered about this question. I know it may seem simplistic but maybe someone can explain to me how it occurs. Also, maybe there is a relationship between Mathematics and the concept of TIME.
At the very beginning of TIME, before you get to the first second of existence, you had the start of the first second; for example, let's start at 0.00000001 seconds. Now, since this number could go
on and on indefinitely, how would you ever get to the very first second of TIME? Or, is there some law of mathematics that would jump to the 1-second count because TIME requires that the number stop
going to infinitum and proceed to the first (1) second? I have always wondered about this. | {"url":"http://www.physicsforums.com/showpost.php?p=121548&postcount=1","timestamp":"2014-04-17T15:28:52Z","content_type":null,"content_length":"9062","record_id":"<urn:uuid:571d7ca8-2971-4b90-a303-838aa62affda>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00232-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chapter 7. Energy-Efficient Power Control
7.5 The Hierarchical Power Control Game

The quasi-concavity property is not preserved by addition of a linear function, which is what happens here for u_i, i ∈ K. Therefore, the Debreu-Fan-Glicksberg existence theorem cannot be used anymore. But, as noticed in Saraydar et al. (2002), by making an additional change, the resulting static game becomes super-modular, which allows one to apply an existence theorem for super-modular games (Theorem 59 in Chapter 2). Indeed, the trick proposed in Saraydar et al. (2002) is to assume that each transmitter i operates in the interval P_i = [P_i^c, P_i^max], where P_i^c is the power level which transmitter i requires to operate at an SINR equal to the unique inflection point of f. It is simple to check that if i ∈ K and p_{-i} ≥ p'_{-i}, then the quantity u_i(p) - u_i(p_i, p'_{-i}) is non-decreasing in p_i on P_i. The interpretation is that if the other transmitters, -i, increase their power (generating more interference), then transmitter i has to increase his. The game:

G = (K, {P_i}_{i∈K}, {u_i}_{i∈K})    (7.10)

is therefore super-modular, which ensures the existence of at least one pure Nash equilibrium, following Theorem
59 in Chapter 2. The uniqueness problem is not trivial, and so far only simulations (Saraydar et al., 2002) have been used to analyze this issue. Another disadvantage of pricing- | {"url":"http://my.safaribooksonline.com/book/-/9780123846983/chapter-7dot-energy-efficient-power-control-games/75_the_hierarchical_power_cont","timestamp":"2014-04-19T12:18:13Z","content_type":null,"content_length":"71240","record_id":"<urn:uuid:e1f51bcc-1a7f-40f7-ab8f-00ac618b393a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |