How can I learn linear algebra?
Check out MIT's open course website, or more specifically:
Those are great video lectures on linear algebra by Prof. Gilbert Strang. I'm learning from them myself at the moment, and I bought the book as well.
The content of the lectures is not that abstract, and it is really good as an introduction; and that's all it really is, an introduction. So if you would like a more rigorous approach, I can't give you any
|
{"url":"http://www.physicsforums.com/showthread.php?t=78427","timestamp":"2014-04-18T03:00:06Z","content_type":null,"content_length":"43400","record_id":"<urn:uuid:5fb70a50-8e06-4f9c-9e95-22f7d47dde1c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Norridge, IL Algebra 2 Tutor
Find a Norridge, IL Algebra 2 Tutor
...In addition, Matlab was the primary computational tool used for my master's thesis. I have an undergraduate degree from Purdue University in Mechanical Engineering, with a GPA of 3.83, and a master's degree in Mechanical Engineering from the University of Texas at Austin, also with a GPA of 3.83.
17 Subjects: including algebra 2, calculus, physics, geometry
...I discovered the ability to connect all types of inquiring minds with abstract concepts by using concrete examples. Consider the formula d = vt. It is difficult to grasp the practical use of these symbols until you see that, for example, multiplying 60 mph by 2 hrs gives 120 miles.
7 Subjects: including algebra 2, calculus, physics, geometry
...I received my PhD in Molecular Genetics in 2006 from the University of Illinois at Chicago, and I currently work at Loyola University as a research scientist. I use math and science in my
everyday life. My goal as a tutor is not only to help students learn math and science, but also to share my enthusiasm and passion for these subjects.
21 Subjects: including algebra 2, reading, study skills, algebra 1
...I now teach AP Calculus instead, but I love the topic of statistics and feel it is one of the most useful of applied mathematics skills to master. I have successfully tutored numerous students
in the topic over the years and would love to find out how I might be able to help you. Where the ACT ...
11 Subjects: including algebra 2, calculus, geometry, statistics
I am an incoming graduate student in math at the University of Chicago. I received my undergraduate degree in math and cello performance last May from Indiana University. During my undergraduate
years, I received national awards (the Goldwater scholarship, a top 25 finish in the Putnam exam), and high recognition from the IU math department (the Ciprian Foias Prize, the Marie S.
13 Subjects: including algebra 2, calculus, geometry, statistics
|
{"url":"http://www.purplemath.com/norridge_il_algebra_2_tutors.php","timestamp":"2014-04-21T07:50:52Z","content_type":null,"content_length":"24249","record_id":"<urn:uuid:63108c3d-ed48-4408-8c6c-ce458d44536d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Converting Euler Angles to a Matrix [Archive] - OpenGL Discussion and Help Forums
05-10-2010, 07:45 PM
I don't understand what happens here.
Why is it [ cos h 0 sin h ] in the first row? Shouldn't it be [ cos h 0 -sin h ]?
It is explained here, but I still don't understand what it says. Can anyone explain it further? A proof would really help.
Thank you
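For what it's worth, the sign usually comes down to convention: the matrix that rotates a point by +h about the Y axis is the transpose (and inverse) of the one that rotates the coordinate frame by +h, and transposing is exactly where sin h and -sin h trade places. A minimal sketch (my own illustration, not from the thread; `ry` is a hypothetical name):

```ocaml
(* ry h: rotation about the Y axis by heading h, in the convention with
   +sin h in the first row; its transpose puts -sin h there instead,
   and the two matrices are inverses of each other. *)
let ry h =
  [| [| cos h;    0.; sin h |];
     [| 0.;       1.; 0.    |];
     [| -. sin h; 0.; cos h |] |]

let transpose m = Array.init 3 (fun i -> Array.init 3 (fun j -> m.(j).(i)))

let apply m v =
  Array.init 3 (fun i ->
    m.(i).(0) *. v.(0) +. m.(i).(1) *. v.(1) +. m.(i).(2) *. v.(2))

(* Rotating with ry h and then with its transpose recovers the vector,
   showing the -sin h variant is simply the inverse rotation. *)
let () =
  let v = [| 1.; 2.; 3. |] in
  let v' = apply (transpose (ry 0.5)) (apply (ry 0.5) v) in
  Array.iteri (fun i x -> assert (abs_float (x -. v.(i)) < 1e-9)) v'
```

So both forms are "correct"; which one a reference shows depends on whether it rotates points or frames, and on row-vector versus column-vector conventions.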
|
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-170957.html","timestamp":"2014-04-17T04:03:10Z","content_type":null,"content_length":"5078","record_id":"<urn:uuid:e2314bc1-523b-43b1-a1b2-db21f7ee6f6b>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Montrose, CA Algebra 2 Tutor
Find a Montrose, CA Algebra 2 Tutor
...Even very difficult problems can be broken down into basic components, and applying the relevant formula becomes, well, formulaic. Being able to recognize the right patterns in math and
physics can turn a mind-bending problem into a step-by-step procedure.I have earned a Bachelor's degree in Eng...
11 Subjects: including algebra 2, calculus, SAT math, physics
...I also took an advanced music theory course called Romantic Symphony and wrote two papers on Schubert's "Pathetique" and Mahler's "Resurrection" symphony (I can email my papers if interested).
I have been playing saxophone since fourth grade (11 years and counting!). I love the saxophone for its...
18 Subjects: including algebra 2, chemistry, physics, calculus
...Professorship in Chemistry in the United States. I have more than 10 years of experience teaching math and calculus to middle school, high school, and college students. I earned a Ph.D. in Physical Chemistry.
10 Subjects: including algebra 2, chemistry, calculus, statistics
...I have over 5 years of tutoring experience in all math subjects, including Algebra, Geometry, Trigonometry, Pre-Calculus, Calculus, Probability and Statistics. I have also helped students out
with their coursework in Physics and Finance. When I am tutoring, I enjoy getting to know the student and understanding the way the student learns.
14 Subjects: including algebra 2, physics, calculus, geometry
...I know what a student needs to learn to be successful not only in precalculus but also in calculus. I will help you do well in Precalculus so you can have a good mathematical foundation when
you take Calculus. I tutored 2 college students in precalculus last semester and they both did a great job.
9 Subjects: including algebra 2, calculus, geometry, algebra 1
|
{"url":"http://www.purplemath.com/Montrose_CA_Algebra_2_tutors.php","timestamp":"2014-04-20T19:26:47Z","content_type":null,"content_length":"24126","record_id":"<urn:uuid:61e27623-bc5e-44b2-b862-100a5fbe4a36>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Tools Discussion: All Tools on Computer, algebra 1
Discussion: All Tools on Computer
Topic: algebra 1
Subject: RE: algebra 1
Author: Hightower
Date: Apr 18 2007
On Apr 18 2007, zenus wrote:
I would like to find any DVDs explaining how to do Algebra 1 and 2, to help a desperate mom needing to understand what my son is learning.
There are several excellent DVD's on Algebra that will assist you and your son.
You should obtain a copy of the Standard Deviants Algebra I, Parts One and Two.
This "challenging tutorial" covers everything from Algebraic Properties, Linear Equations, and Functions in Part One to Quadratic Equations, Quadratic Roots and Factors, and Higher Order Polynomials in Part Two.
You can probably purchase a new Algebra I DVD online at www.cerebellum.com or a
cheap used copy at Amazon.com Marketplace. The Teaching Company,
1-800-Teach-12, also has an excellent Algebra I DVD taught by Dr. Monica
Neagoy, a Georgetown University Math professor.
The big differences between the Teaching Company and Standard Deviants DVDs are the price (the Teaching Company DVD is much more expensive) and the fact that the Teaching Company DVD comes with an excellent workbook, while the Standard Deviants DVD does not.
I hope that this helps.
Larry Hightower, CPA
Math Teacher and Tutor
|
{"url":"http://mathforum.org/mathtools/discuss.html?pl=h&context=cell&do=r&msg=30905","timestamp":"2014-04-17T05:59:16Z","content_type":null,"content_length":"17124","record_id":"<urn:uuid:3d22b53a-2bd6-45c6-ad12-c9d65d0cc096>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chaitin's construction/Parser
From HaskellWiki
(Difference between revisions)
(Moving text and modifying section hierarchy. Mentioning term generators) (Moving combinatory logic term modules to a separate page)
← Older edit
Line 33:
  === Combinatory logic term modules ===
− ==== CL ====
− <haskell>
− module CL (CL, k, s, apply) where
− import Tree (Tree (Leaf, Branch))
− import BaseSymbol (BaseSymbol, kay, ess)
− type CL = Tree BaseSymbol
− k, s :: CL
− k = Leaf kay
− s = Leaf ess
− apply :: CL -> CL -> CL
− apply = Branch
− </haskell>
− ==== CL extension ====
− <haskell>
− module CLExt ((>>@)) where
− import CL (CL, apply)
− import Control.Monad (Monad, liftM2)
− (>>@) :: Monad m => m CL -> m CL -> m CL
− (>>@) = liftM2 apply
− </haskell>
− ==== Base symbol ====
− <haskell>
− module BaseSymbol (BaseSymbol, kay, ess) where
− data BaseSymbol = K | S
− kay, ess :: BaseSymbol
− kay = K
− ess = S
− </haskell>
+ See [[../Combinatory logic | combinatory logic term modules here]].
  === Utility modules ===
Latest revision as of 14:31, 4 August 2006
Let us describe the language seen above with an LL(1) grammar, and let us make use of the lack of backtracking and the lack of look-ahead when deciding which parser approach to use.
Some notes about the parser library used: I shall use the didactical approach described in the paper Monadic Parser Combinators (written by Graham Hutton and Erik Meijer). The optimisations described in the
paper are avoided here. Of course, we could apply those optimisations, or choose a sophisticated parser library (Parsec, arrow parsers). One argument for this simpler parser is that it may be easier to augment
with other monad transformers; but I think the task does not require such ability, so the real argument for it is that it looks more didactical to me. Of course, it may be inefficient at many other tasks,
but I hope the LL(1) grammar will not raise huge problems.
1 Decoding function illustrated as a parser
1.1 Decoding module
module Decode (clP) where
import Parser (Parser, item)
import CL (CL, k, s, apply)
import CLExt ((>>@))
import PreludeExt (bool)
clP :: Parser Bool CL
clP = item >>= bool applicationP baseP
applicationP :: Parser Bool CL
applicationP = clP >>@ clP
baseP :: Parser Bool CL
baseP = item >>= bool kP sP
kP, sP :: Parser Bool CL
kP = return k
sP = return s
1.2 Combinatory logic term modules
See combinatory logic term modules here.
1.3 Utility modules
1.3.1 Binary tree
module Tree (Tree (Leaf, Branch)) where
data Tree a = Leaf a | Branch (Tree a) (Tree a)
1.3.2 Parser
module Parser (Parser, runParser, item) where
import Control.Monad.State (StateT, runStateT, get, put)
type Parser token a = StateT [token] [] a
runParser :: Parser token a -> [token] -> [(a, [token])]
runParser = runStateT
item :: Parser token token
item = do
token : tokens <- get
put tokens
return token
1.3.3 Prelude extension
module PreludeExt (bool) where
bool :: a -> a -> Bool -> a
bool thenC elseC t = if t then thenC else elseC
2 Using this parser for decoding
2.1 Approach based on decoding with partial function
As seen above, dc was a partial function (from finite bit sequences to combinatory logic terms). We can implement it, e.g., as
dc :: [Bit] -> CL
dc = fst . head . runParser clP
where the use of head reveals that it is a partial function (of course, because not every bit sequence is a correct coding of a CL-term).
2.2 Approach based on decoding with total function
If this is confusing or annoying, then we can choose another approach, making dc a total function:
dc :: [Bit] -> Maybe CL
dc = fst . head . runParser (neverfailing clP)
neverfailing :: MonadPlus m => m a -> m (Maybe a)
neverfailing p = liftM Just p `mplus` return Nothing
then, Chaitin's construction will be
$\sum_{p\in 2^*,\;\mathrm{maybe}\;\downarrow\;\mathrm{hnf}\;\left(\mathrm{dc}\;p\right)} 2^{-\left|p\right|}$
where $\downarrow$ denotes the false truth value.
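For readers who prefer to see the decoder concretely, here is a minimal sketch in OCaml rather than the article's Haskell (the type cl and the function dc are my names, following the article's conventions: a True bit opens an application node, and after a False bit a second bit chooses the base symbol):

```ocaml
type cl = K | S | App of cl * cl

(* dc: total decoding of a bit sequence into a CL term plus leftover
   bits; None plays the role of the article's Nothing on ill-formed
   input, so no partial head is needed. *)
let rec dc = function
  | true :: bits ->
      (match dc bits with
       | Some (f, rest) ->
           (match dc rest with
            | Some (x, rest') -> Some (App (f, x), rest')
            | None -> None)
       | None -> None)
  | false :: b :: bits -> Some ((if b then K else S), bits)
  | _ -> None

let () =
  assert (dc [false; true] = Some (K, []));
  assert (dc [true; false; true; false; false] = Some (App (K, S), []));
  assert (dc [] = None)
```

This corresponds to the total, Maybe-returning variant above: an undecodable prefix yields None instead of a runtime failure.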
3 Term generators instead of parsers
All these are illustrations -- they will not be present in the final application. The real software will use no parsers at all: it will use term generators instead. It will directly generate “all”
combinatory logic terms in “ascending length” order, attribute a “length” to each, and approximate Chaitin's construction this way. It will not use strings or bit sequences at all: it will handle
combinatory logic terms directly.
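Such an "ascending length" generator can be sketched as follows (my code, not the final application's; I count each base symbol as contributing 1 to the length, and split the length between the two sides of an application in all possible ways):

```ocaml
type cl = K | S | App of cl * cl

(* terms n: every combinatory logic term containing exactly n base
   symbols. Enumerating terms 1, terms 2, terms 3, ... visits "all"
   CL terms in ascending length order. *)
let rec terms n =
  if n <= 0 then []
  else if n = 1 then [K; S]
  else
    List.concat_map
      (fun i ->
        List.concat_map
          (fun f -> List.map (fun x -> App (f, x)) (terms (n - i)))
          (terms i))
      (List.init (n - 1) (fun i -> i + 1))

let () =
  assert (List.length (terms 1) = 2);
  assert (List.length (terms 2) = 4);
  assert (List.length (terms 3) = 16)
```

The counts follow the Catalan-style recurrence one would expect: 2 shapes of length 1, 2·2 of length 2, and so on.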
|
{"url":"http://www.haskell.org/haskellwiki/index.php?title=Chaitin's_construction/Parser&diff=5180&oldid=5167","timestamp":"2014-04-19T10:02:45Z","content_type":null,"content_length":"39374","record_id":"<urn:uuid:5f16b29f-82d9-4f44-ba80-5a5bc26f5412>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Denton, TX Statistics Tutor
Find a Denton, TX Statistics Tutor
...I have worked for 19 years with middle and high schoolers on basic math operations (elementary level) in special education class settings (resource, content mastery,
inclusion) in Lewisville and Lake Dallas ISD. I am currently tutoring a 1st grade student in Frisco, Texas in math. ...
24 Subjects: including statistics, English, ESL/ESOL, biology
...I use gcc and fortran compilers to compile the code. I have an MS degree in materials engineering. My thesis work focused on mechanical properties measurement.
60 Subjects: including statistics, chemistry, reading, calculus
I graduated from Brigham Young University in 2010 with a degree in Statistical Science and I am looking into beginning a master's program soon. I have always loved math and took a variety of math
classes throughout high school and college. I taught statistics classes at BYU for over 2 years as a TA and also tutored on the side.
7 Subjects: including statistics, geometry, algebra 1, SAT math
...Sometimes, this is all a student needs in order to achieve success in the classroom.I have a degree in elementary education from Texas Wesleyan University, as well as a grade 1 - 8 Texas
teaching certificate. I have more than 10 years of public school teaching experience, a master's degree in gi...
39 Subjects: including statistics, chemistry, English, writing
...I currently work full-time and tutor a small roster of students as time permits in the areas of economics, chemistry, and statistics. Help me help you! Having put myself through school, I
understand the need for comprehension in class, homework help, and maintaining good grades.
4 Subjects: including statistics, chemistry, economics, vocabulary
|
{"url":"http://www.purplemath.com/Denton_TX_statistics_tutors.php","timestamp":"2014-04-16T13:47:32Z","content_type":null,"content_length":"24004","record_id":"<urn:uuid:0caf1c8a-5567-4593-8f66-06c97c8b4055>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00553-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Semantic Domain
I am going to put off syntactic extensions for a bit, and talk about an entirely different bit of proof theory instead. I am going to talk about a new lambda-calculus for the logic of bunched
implications that I have recently been working on. $\newcommand{\lolli}{\multimap} \newcommand{\tensor}{\otimes}$
First, the logic of bunched implications (henceforth BI) is the substructural logic associated with things like separation logic. Now, linear logic basically works by replacing the single context of intuitionistic logic with two contexts: one for unrestricted hypotheses (which behaves the same as the intuitionistic one, and access to which is controlled by a modality $!A$), and one which is linear --- the rules of contraction and weakening are no longer allowed for it. The intuitionistic connectives are then encoded, so that $A \to B \equiv !A \lolli B$. BI also puts contraction and weakening under control, but does not do so by simply creating two zones. Instead, contexts now become bunches (trees built from two context-forming operations), which lets BI simply add the substructural connectives $A \tensor B$ and $A \lolli B$ to intuitionistic logic. Here's what things look like in the implicational fragment:
\begin{array}{lcl}
A & ::= & P \mid A \lolli B \mid A \to B \\
\Gamma & ::= & A \mid \cdot_a \mid \Gamma; \Gamma \mid \cdot_m \mid \Gamma, \Gamma
\end{array}
Note that contexts are trees, since there are two context concatenation operators $\Gamma;\Gamma'$ and $\Gamma,\Gamma'$ (with units $\cdot_a$ and $\cdot_m$ respectively) which can be freely nested. They don't distribute over each other in any way, but they do each satisfy the commutative monoid properties. The natural deduction rules look like this (implicitly assuming the commutative monoid properties):
\begin{array}{ccc}
\frac{}{A \vdash A} &
\frac{\Gamma; A \vdash B}{\Gamma \vdash A \to B} &
\frac{\Gamma \vdash A \to B \qquad \Gamma' \vdash A}{\Gamma;\Gamma' \vdash B} \\[2ex]
\frac{\Gamma, A \vdash B}{\Gamma \vdash A \lolli B} &
\frac{\Gamma \vdash A \lolli B \qquad \Gamma' \vdash A}{\Gamma,\Gamma' \vdash B} &
\frac{\Gamma(\Delta) \vdash A}{\Gamma(\Delta;\Delta') \vdash A} \\[2ex]
\frac{\Gamma(\Delta;\Delta) \vdash A}{\Gamma(\Delta) \vdash A} & &
\end{array}
Note that the substructural implication adds hypotheses with a comma, and the intuitionistic implication uses a semicolon. I've given the weakening and contraction explicitly in the final two rules,
so we can reuse the hypothesis rule.
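To see concretely how the two context formers interact with weakening (a worked example of mine, not from the post): since weakening only ever inserts hypotheses across a semicolon, the term $\hat{\lambda}x.\lambda y.x : A \lolli (B \to A)$ is derivable:

\begin{array}{ll}
A \vdash A & \mbox{hypothesis} \\
A; B \vdash A & \mbox{weakening, with } \Delta = A,\ \Delta' = B \\
A \vdash B \to A & \to\mbox{-introduction} \\
\cdot_m \vdash A \lolli (B \to A) & \lolli\mbox{-introduction, since } \cdot_m, A \equiv A
\end{array}

By contrast, $\lambda x.\hat{\lambda}y.x : A \to (B \lolli A)$ is not derivable: its derivation would need $A, B \vdash A$, and weakening can only produce bunches of the form $\Gamma(\Delta;\Delta')$, never a ","-extension of a usable hypothesis.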
Adding lambda-terms to this calculus is pretty straightforward, too:
\begin{array}{ccc}
\frac{}{x:A \vdash x:A} &
\frac{\Gamma; x:A \vdash e:B}{\Gamma \vdash \lambda x.\,e : A \to B} &
\frac{\Gamma \vdash e : A \to B \qquad \Gamma' \vdash e' : A}{\Gamma;\Gamma' \vdash e\;e' : B} \\[2ex]
\frac{\Gamma, x:A \vdash e : B}{\Gamma \vdash \hat{\lambda}x.e : A \lolli B} &
\frac{\Gamma \vdash e : A \lolli B \qquad \Gamma' \vdash e' : A}{\Gamma,\Gamma' \vdash e\;e' : B} &
\frac{\Gamma(\Delta) \vdash e : A}{\Gamma(\Delta;\Delta') \vdash e : A} \\[2ex]
\frac{\Gamma(\Delta;\Delta') \vdash \rho(e) : A \qquad \Delta \equiv \rho \circ \Delta'}{\Gamma(\Delta) \vdash e : A} & &
\end{array}
Typechecking these terms is a knotty little problem. The reason is basically that we want lambda terms to tell us what the derivation should be, without requiring us to do much search. But contraction can rename
a host of variables at once, which means that there is a lot of search involved in typechecking these terms. (In fact, I personally don't know how to do it, though I expect that it's decidable, so
somebody probably does.) What would be really nice is a calculus which is saturated with respect to weakening, so that the computational content of the weakening lemma is just the identity on
derivation trees, and for which the computational content of contraction is an easy renaming of a single obvious variable.
Applying the usual PL methodology of redefining the problem to one that can be solved, we can give an alternate type theory for BI in the following way. First, we'll define nested contexts so that
they make the alternation of spatial and intuitionistic parts explicit:
\begin{array}{lcl}
\Gamma & ::= & \cdot \mid \Gamma; x:A \mid r[\Delta] \\
\Delta & ::= & \cdot \mid \Delta, x:A \mid r[\Gamma] \\
& & \\
e & ::= & x \mid \lambda x.\,e \mid \hat{\lambda}x.\;e \mid e\;e' \mid r[e] \mid \rho r.\;e
\end{array}
The key idea is to make the alternation of the spatial parts syntactic, and then to name each level shift with a variable $r$. So $x$ are ordinary variables, and $r$ are variables naming nested contexts. Then we'll add syntax $r[e]$ and $\rho r.e$ to explicitly annotate the level shifts:
\begin{array}{cc}
\frac{}{\Gamma; x:A \vdash x:A} &
\frac{}{\cdot, x:A \vdash x:A} \\[2ex]
\frac{\Gamma; x:A \vdash e:B}{\Gamma \vdash \lambda x.\,e : A \to B} &
\frac{\Gamma \vdash e : A \to B \qquad \Gamma \vdash e' : A}{\Gamma \vdash e\;e' : B} \\[2ex]
\frac{\Delta, x:A \vdash e : B}{\Delta \vdash \hat{\lambda}x.e : A \lolli B} &
\frac{\Delta \vdash e : A \lolli B \qquad \Delta' \vdash e' : A}{\Delta,\Delta' \vdash e\;e' : B} \\[2ex]
\frac{r[\Gamma] \vdash e : A}{\Gamma \vdash \rho r.\;e : A} &
\frac{r[\Delta] \vdash e : A}{\Delta \vdash \rho r.\;e : A} \\[2ex]
\frac{\Gamma \vdash e : A}{\cdot, r[\Gamma] \vdash r[e] : A} &
\frac{\Delta \vdash e : A}{\Gamma; r[\Delta] \vdash r[e] : A}
\end{array}
It's fairly straightforward to prove that contraction and weakening are admissible, and doing a contraction is now pretty easy, since you just have to rename some occurrences of $r$ to two different
variables, but may otherwise leave the branching contexts the same.
It's obvious that you can take any derivation in this calculus and erase the $r$-variables to get a well-typed term of the $\alpha\lambda$-calculus. It's only a bit more work to go the other way:
given an $\alpha\lambda$-derivation, you prove that given any context in the new calculus which erases to the $\alpha\lambda$-context, you can find a derivation in the new calculus proving the same thing. Then there's an obvious
proof that you can indeed find a context in the new calculus which erases properly.
One interesting feature of this calculus is that the $r$-variables resemble regions in region calculi. The connection is not entirely clear to me, since my variables don't show up in the types. This reminds me a little of the contextual modal type theory of Nanevski, Pfenning and Pientka. It reminds me even more of Bob Atkey's PhD thesis on generalizations of BI to allow arbitrary graph-structured sharing. But all of these connections are still speculative.
There is one remaining feature of these rules which is still non-algorithmic: as in linear logic, the context splitting in the spatial rules with multiple premises (for example, the spatial
application rule) just assumes that you know how to divide the context into two pieces. I think the standard state-passing trick should still work, and it may even be nicer than the additives of
linear logic. But I haven't worked out the details yet.
Bob Harper sometimes gets grumbly when people say that ML is an impure language, even though he knows exactly what they mean (and, indeed, agrees with it), because this way of phrasing things does
not pay data abstraction its full due.
Data abstraction lets us write verifiably pure programs in ML. By "verifiably pure", I mean that we can use the type system to guarantee that our functions are pure and total, even though ML's native
function space contains all sorts of crazy effects involving higher-order state, control, IO, and concurrency. (Indeed, ML functions can even spend your money to rent servers: now that's a side
effect!) Given that the ML function space contains these astonishing prodigies and freaks of nature, how can we control them? The answer is data abstraction: we can define a new type of functions
which contains only well-behaved functions, and ensure through type abstraction that the only ways to form elements of this new type preserve well-behavedness.
Indeed, we will not just define a type of pure functions, but give an interface containing all the type constructors of the guarded recursion calculus I have described in the last few posts. The
basic idea is to give an ML module signature containing:
• One ML type constructor for each type constructor of the guarded recursion calculus.
• A type constructor for the hom-sets of the categorical interpretation of this calculus
• One function in the interface for each categorical combinator, such as identity, composition, and each of the natural transformations corresponding to the universal properties of the functors
interpreting the type constructors.
That's a mouthful, but it is much easier to understand by looking at the following (slightly pretty-printed) Ocaml module signature:
module type GUARDED =
type one
type α × β
type α ⇒ β
type α stream
type num
type •α
type (α, β) hom
val id : (α, α) hom
val compose : (α, β) hom -> (β, γ) hom -> (α, γ) hom
val one : (α, one) hom
val fst : (α × β, α) hom
val snd : (α × β, β) hom
val pair : (α, β) hom -> (α, γ) hom -> (α, β × γ) hom
val curry : (α × β, γ) hom -> (α, β ⇒ γ) hom
val eval : ((α ⇒ β) × α, β) hom
val head : (α stream, α) hom
val tail : (α stream, •(α stream)) hom
val cons : (α × •(α stream), α stream) hom
val zero : (one,num) hom
val succ : (num, num) hom
val plus : (num × num, num) hom
val prod : (num × num, num) hom
val delay : (α, •α) hom
val next : (α,β) hom -> (•α, •β) hom
val zip : (•α × •β, •(α × β)) hom
val unzip : (•(α × β), •α × •β) hom
val fix : (•α ⇒ α, α) hom
val run : (one, num stream) hom -> (unit -> int)
As you can see, we introduce abstract types corresponding to each of our calculus's type constructors. So α × β and α ⇒ β are not ML pairs and functions, but rather the products and
functions of our calculus. This is really the key idea -- since ML functions have too much stuff in them, we'll define a new type of pure functions. I replaced the "natural" numbers of the previous
posts with a stream type, corresponding to our LICS 2011 paper, since they are really a kind of lazy conatural, and not the true inductive type of natural numbers. The calculus guarantees definitions
are productive, but it's kind of weird in ML to see something called nat which isn't. So I replaced it with streams, which are supposed to yield an unbounded number of elements. (For true naturals,
you'll have to wait for my post on Mendler iteration, which is a delightful application of parametricity.)
The run function takes a (one, num stream) hom and gives you back an imperative function that successively enumerates the elements of the stream. This is the basic trick for making streams fit nicely into an
event loop a la FRP.
However, we can implement these functions and types using traditional ML types:
module Guarded : GUARDED =
type one = unit
type α × β = α * β
type α ⇒ β = α -> β
type •α = unit -> α
type α stream = Stream of α * •(α stream)
type num = int
type (α, β) hom = α -> β
let id x = x
let compose f g a = g (f a)
let one a = ()
let fst (a,b) = a
let snd (a,b) = b
let pair f g a = (f a, g a)
let curry f a = fun b -> f(a,b)
let eval (f,a) = f a
let head (Stream(a, as')) = a
let tail (Stream(a, as')) = as'
let cons (a,as') = Stream(a,as')
let zero () = 0
let succ n = n + 1
let plus (n,m) = n + m
let prod (n,m) = n * m
let delay v = fun () -> v
let next f a' = fun () -> f(a'())
let zip (a',b') = fun () -> (a'(), b'())
let unzip ab' = (fun () -> fst(ab'())), (fun () -> snd(ab'()))
let rec fix f = f (fun () -> fix f)
let run h =
let r = ref (h()) in
fun () ->
let Stream(x, xs') = !r in
let () = r := xs'() in
x
Here, we're basically just implementing the operations of the guarded recursion calculus using the facilities offered by ML. So our guarded functions are just plain old ML functions, which happen to
live at a type in which they cannot be used to misbehave!
This is the sense in which data abstraction lets us have our cake (effects!) and eat it too (purity!).
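As a concrete exercise of this interface, here is a self-contained, cut-down version of the module together with a client (my own code, with ASCII names standing in for the pretty-printed symbols: delay for •, exp for ⇒, fst'/snd' for the projections). The client builds the constant stream of ones out of nothing but the combinators, then observes it with run:

```ocaml
(* A cut-down Guarded module: the signature keeps hom, exp, delay,
   stream, and num abstract, so clients can only build productive
   streams out of the combinators. *)
module Guarded : sig
  type num
  type 'a delay
  type 'a stream
  type ('a, 'b) exp
  type ('a, 'b) hom
  val compose : ('a, 'b) hom -> ('b, 'c) hom -> ('a, 'c) hom
  val fst' : ('a * 'b, 'a) hom
  val snd' : ('a * 'b, 'b) hom
  val pair : ('a, 'b) hom -> ('a, 'c) hom -> ('a, 'b * 'c) hom
  val curry : ('a * 'b, 'c) hom -> ('a, ('b, 'c) exp) hom
  val cons : (num * num stream delay, num stream) hom
  val zero : (unit, num) hom
  val succ : (num, num) hom
  val fix : (('a delay, 'a) exp, 'a) hom
  val run : (unit, num stream) hom -> (unit -> int)
end = struct
  type num = int
  type 'a delay = unit -> 'a
  type 'a stream = Stream of 'a * (unit -> 'a stream)
  type ('a, 'b) exp = 'a -> 'b
  type ('a, 'b) hom = 'a -> 'b
  let compose f g a = g (f a)
  let fst' (a, _) = a
  let snd' (_, b) = b
  let pair f g a = (f a, g a)
  let curry f a = fun b -> f (a, b)
  let cons (a, as') = Stream (a, as')
  let zero () = 0
  let succ n = n + 1
  let rec fix f = f (fun () -> fix f)
  let run h =
    let r = ref (h ()) in
    fun () ->
      let (Stream (x, xs')) = !r in
      r := xs' ();
      x
end

(* ones = fix (fun tail -> cons (1, tail)), written point-free:
   pair the constant 1 with the delayed tail, then cons. *)
let ones : (unit, Guarded.num Guarded.stream) Guarded.hom =
  let open Guarded in
  compose
    (curry (compose (pair (compose fst' (compose zero succ)) snd') cons))
    fix

let () =
  let next = Guarded.run ones in
  assert (next () = 1);
  assert (next () = 1);
  assert (next () = 1)
```

Even this tiny definition shows how quickly point-free combinator terms become unwieldy, which is exactly the motivation for the macro mentioned below.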
Note that when we want to turn a guarded lambda-term into an ML term, we can basically follow the categorical semantics to tell us what to write. Even though typechecking will catch all misuses of
this DSL, actually using it is honestly not that much fun (unless you're a Forth programmer), since even small terms turn into horrendous combinator expressions -- but in another post I'll show
how we can write a CamlP4 macro/mini-compiler to embed this language into OCaml. This macro will turn out to involve some nice proof theory, just as this ML implementation shows off how to use
the denotational semantics in our ML programming.
I will now give a termination proof for the guarded recursion calculus I sketched in the last two posts. This post got delayed because I tried to oversimplify the proof, and that didn't work -- I
actually had to go back and look at Nakano's original proof to figure out where I was going wrong. It turns out the proof is still quite simple, but there's one really devious subtlety in it.
First, we recall the types, syntax and values.
A ::= N | A → B | •A
e ::= z | s(e) | case(e, z → e₀, s(x) → e₁) | λx.e | e e
| •e | let •x = e in e | μx.e | x
v ::= z | s(e) | λx.e | •e
The typing rules are in an earlier post, and I give some big-step evaluation rules at the end of this post. Now, the question is, given · ⊢ e : A[n], can we show that e ↝ v?
To do this, we'll turn to our old friend, Mrs. step-indexed logical relation. This is a Kripke logical relation in which the Kripke worlds are given by the natural numbers and the accessibility
relation is given by ≤. So, we define a family of predicates on closed values indexed by type, and by a Kripke world (i.e., a natural number n).
V(•A)ⁿ = {•e | ∀j<n. e ∈ E(A)ʲ}
V(A → B)ⁿ = {λx.e | ∀j≤n, v ∈ V(A)ʲ. [v/x]e ∈ E(B)ʲ}
V(N)ⁿ = {z} ∪ { s(e) | ∀j<n. e ∈ E(N)ʲ}
E(A)ʲ = {e | ∃v. e ↝ v and v ∈ V(A)ʲ}
This follows the usual pattern of logical relations, where we give a relation defining values mutually recursively with a relation defining well-behaved expressions (i.e., expressions are ok if they
terminate and reduce to a value in the relation at that type).
Note that as we expect, j ≤ n implies V(A)ⁿ ⊆ V(A)ʲ. (The apparent antitonicity comes from the fact that if v is in the n-relation, it's also in the j relation.) One critical feature of this
definition is that at n = 0, the condition on V(•A)⁰ always holds, because of the strict less-than in the definition.
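As a quick sanity check of these definitions (my worked example, not the post's), consider the numeral s(s(z)):

z ∈ V(N)ⁿ          for every n
s(z) ∈ V(N)ⁿ       since for every j < n, z ↝ z and z ∈ V(N)ʲ
s(s(z)) ∈ V(N)ⁿ    by the same reasoning, one level up
•e ∈ V(•A)⁰        for any e, vacuously, since there is no j < 0

The last line is the hinge of the termination proof: at world 0, a delayed expression is in the relation unconditionally, which is what makes the base case of the fixed-point induction go through.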
The fun happens in the interpretation of contexts:
Ctxⁿ(· :: j) = ()
Ctxⁿ(Γ,x:A[i] :: j) = {(γ,[v/x]) | γ ∈ Ctxⁿ(Γ) and v ∈ Vⁿ(A)}
when i ≤ j
Ctxⁿ(Γ,x:A[j+l] :: j) = {(γ,[e/x]) | γ ∈ Ctxⁿ(Γ) and •ˡe ∈ Vⁿ(•ˡA)}
when l > 0
The context interpretation has a strange dual nature. At times less than or equal to j, it is a familiar context of values. But at future times, it is a context of expressions. This is because the
evaluation rules substitute values for variables at the current time, and expressions for variables at future times. We abuse the bullet value relation in the third clause, to more closely follow
Nakano's proof.
On the one hand, the fixed point operator is μx.e at any type A, and unfolding this fixed point has to substitute an expression (the mu-term itself) for the variable x. So the fixed point rule tells
us that there is something necessarily lazy going on.
On the other hand, the focusing behavior of this connective is quite bizarre. It is not apparently positive or negative, since it neither distributes through all positives (eg, •(A + B) ≄ •A + •B)
nor distributes through all negatives (eg, •(A → B) ≄ •A → •B). (See Noam Zeilberger, The Logical Basis of Evaluation Order and Pattern Matching.)
I take this to mean that •A should probably be decomposed further. I have no present idea of how to do it, though.
Anyway, this is enough to let you prove the fundamental property of logical relations:
Theorem (Fundamental Property). If Γ ⊢ e : A[j], then for all n and γ ∈ Ctxⁿ(Γ :: j), we have that γ(e) ∈ Eⁿ(A).
The proof of this is a straightforward induction on typing derivations, with one nested induction at the fixed point rule. I'll sketch that case of the proof here, assuming an empty context Γ just to
reduce the notation:
Case: · ⊢ μx.e : A[j]
By inversion: x:A[j+1] ⊢ e : A[j]
By induction, for all n,e'. if •e' ∈ Vⁿ(•A) then [e'/x]e ∈ Eⁿ(A)
By nested induction on n, we'll show that [μx.e/x]e ∈ Eⁿ(A)
Subcase n = 0:
We know if •μx.e ∈ V⁰(•A) then [μx.e/x]e ∈ E⁰(A)
We know •μx.e ∈ V⁰(•A) holds vacuously, since the ∀j<0 condition in the definition of V(•A)⁰ is empty
Hence [μx.e/x]e ∈ E⁰(A)
Hence μx.e ∈ E⁰(A)
Subcase n = x+1:
We know if •μx.e ∈ Vˣ⁺¹(•A) then [μx.e/x]e ∈ Eˣ⁺¹(A)
By induction, we know [μx.e/x]e ∈ Eˣ(A)
Hence μx.e ∈ Eˣ(A)
Hence •μx.e ∈ Vˣ⁺¹(•A)
So [μx.e/x]e ∈ Eˣ⁺¹(A)
Hence μx.e ∈ Eˣ⁺¹(A)
Once we have the fundamental property of logical relations, the normalization theorem follows immediately.
Corollary (Termination). If · ⊢ e : A[n], then ∃v. e ↝ v.
Evaluation rules:
——————
v ↝ v

e₁ ↝ λx.e    e₂ ↝ v    [v/x]e ↝ v'
——————————————————————————————————
e₁ e₂ ↝ v'

e ↝ z    e₀ ↝ v
——————————————————————————————
case(e, z → e₀, s(x) → e₁) ↝ v

e ↝ s(e')    [e'/x]e₁ ↝ v
——————————————————————————————
case(e, z → e₀, s(x) → e₁) ↝ v

e₁ ↝ •e    [e/x]e₂ ↝ v
—————————————————————
let •x = e₁ in e₂ ↝ v

[μx.e/x]e ↝ v
——————————————
μx.e ↝ v
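These rules are small enough to animate directly. Below is a sketch (my own, not from the post) of the big-step rules as a substitution-based interpreter in Python. Terms are tagged tuples, and I assume all bound variable names are distinct so substitution need not rename. Note how s(-) and •(-) keep their arguments unevaluated, matching the value relations above.

```python
# Terms: ('var',x) ('lam',x,e) ('app',e1,e2) ('z',) ('s',e)
#        ('case',e,e0,x,e1) ('bullet',e) ('letb',x,e1,e2) ('mu',x,e)

def subst(x, u, t):
    """Capture-naive substitution [u/x]t (bound names assumed distinct)."""
    tag = t[0]
    if tag == 'var':
        return u if t[1] == x else t
    if tag in ('lam', 'mu'):
        return t if t[1] == x else (tag, t[1], subst(x, u, t[2]))
    if tag == 'app':
        return ('app', subst(x, u, t[1]), subst(x, u, t[2]))
    if tag == 'z':
        return t
    if tag in ('s', 'bullet'):
        return (tag, subst(x, u, t[1]))
    if tag == 'case':
        _, e, e0, y, e1 = t
        return ('case', subst(x, u, e), subst(x, u, e0), y,
                e1 if y == x else subst(x, u, e1))
    if tag == 'letb':
        _, y, e1, e2 = t
        return ('letb', y, subst(x, u, e1),
                e2 if y == x else subst(x, u, e2))
    raise ValueError(tag)

def ev(t):
    """Big-step evaluation e ↝ v."""
    tag = t[0]
    if tag == 'app':                        # e1 ↝ λx.e   e2 ↝ v   [v/x]e ↝ v'
        f, v = ev(t[1]), ev(t[2])
        return ev(subst(f[1], v, f[2]))
    if tag == 'case':
        v = ev(t[1])
        if v[0] == 'z':                     # e ↝ z   e0 ↝ v
            return ev(t[2])
        return ev(subst(t[3], v[1], t[4]))  # predecessor stays unevaluated
    if tag == 'letb':                       # e1 ↝ •e   [e/x]e2 ↝ v
        v = ev(t[2])
        return ev(subst(t[1], v[1], t[3]))
    if tag == 'mu':                         # [μx.e/x]e ↝ v
        return ev(subst(t[1], t, t[2]))
    return t                                # λ, z, s(e), •e are values

def num(n):
    t = ('z',)
    for _ in range(n):
        t = ('s', t)
    return t

def to_int(t):
    """Force a lazily-built numeral all the way down."""
    v, n = ev(t), 0
    while v[0] == 's':
        n, v = n + 1, ev(v[1])
    return n

# plus = μp. λm. λn. case(m, z → n, s(x) → s(p x n))
plus = ('mu', 'p', ('lam', 'm', ('lam', 'n',
        ('case', ('var', 'm'), ('var', 'n'), 'x',
         ('s', ('app', ('app', ('var', 'p'), ('var', 'x')), ('var', 'n')))))))
```

Running the standard addition fixed point through it, to_int forces the lazily-built numeral: 2 + 3 evaluates to 5, even though the intermediate results are successors wrapped around unevaluated applications.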
In my previous post, I sketched some typing rules for a guarded recursion calculus. Now I'll give its categorical semantics. So, suppose we have a Cartesian closed category with a delay functor and
the functorial action and natural transformations:
•(f : A → B) : •A → •B
δ : A → •A
ι : •A × •B → •(A × B)
ι⁻¹ : •(A × B) → •A × •B
fix : (•A ⇒ A) → A
I intend that the next modality is a Cartesian functor (ie, distributes through products) and furthermore we have a delay operator δ. We also have a fixed point operator for the language. However, I
don't assume that the delay distributes through the exponential. Now, we can follow the usual pattern of categorical logic, and interpret contexts and types as objects, and terms as morphisms. So
types are interpreted as follows:
〚A → B〛 = 〚A〛 ⇒ 〚B〛
〚•A〛 = •〚A〛
Note that we haven't done anything with time indices yet. They will start to appear with the interpretation of contexts, which is relativized by time:
〚·〛ⁿ = 1
〚Γ, x:A[j]〛ⁿ = 〚Γ〛ⁿ × •⁽ʲ⁻ⁿ⁾〚A〛 if j > n
〚Γ, x:A[j]〛ⁿ = 〚Γ〛ⁿ × 〚A〛 if j ≤ n
The idea is that we interpret a context at time n, and so all the indices are interpreted relative to that. If the index j is bigger than n, then we delay the hypothesis, and otherwise we don't. Then
we can interpret morphisms at time n as 〚Γ ⊢ e : A[n]〛 ∈ 〚Γ〛ⁿ → 〚A〛, which we give below:
〚Γ ⊢ e : A[n]〛 ∈ 〚Γ〛ⁿ → 〚A〛
〚Γ ⊢ μx.e : A[n]〛 =
fix ○ λ(〚Γ, x:A[n+1] ⊢ e : A[n]〛)
〚Γ ⊢ x : A[n]〛 =
π(x) (where x:A[j] ∈ Γ)
〚Γ ⊢ λx.e : A → B[n]〛 =
λ(〚Γ, x:A[n] ⊢ e : B[n]〛)
〚Γ ⊢ e e' : B[n]〛 =
eval ○ ⟨〚Γ ⊢ e : A → B[n]〛, 〚Γ ⊢ e' : A[n]〛⟩
〚Γ ⊢ •e : •A[n]〛 =
•〚Γ ⊢ e : A[n+1]〛 ○ Nextⁿ(Γ)
〚Γ ⊢ let •x = e in e' : B[n]〛 =
〚Γ, x:A[n+1] ⊢ e' : B[n]〛 ○ ⟨id(Γ), 〚Γ ⊢ e : •A[n]〛⟩
Most of these rules are standard, with the exception of the introduction rule for delays. We interpret the body •e at time n+1, and then use the functorial action to get an element of type •A. This
means we need to take a context at time n and produce a delayed one interpreted at time n+1.
Nextⁿ(Γ) ∈ 〚Γ〛ⁿ → •〚Γ〛ⁿ⁺¹
Nextⁿ(·) = δ₁
Nextⁿ(Γ, x:A[j]) = ι ○ (Nextⁿ(Γ) × id) if j > n
Nextⁿ(Γ, x:A[j]) = ι ○ (Nextⁿ(Γ) × δ) if j ≤ n
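With this in hand, the delay-intro clause type-checks; here's the bookkeeping (my own, using only the definitions above):

```latex
\begin{array}{rcl}
\mathsf{Next}^n(\Gamma) & : & [\![\Gamma]\!]^n \to \bullet[\![\Gamma]\!]^{n+1} \\
\bullet[\![\Gamma \vdash e : A[n+1]]\!] & : & \bullet[\![\Gamma]\!]^{n+1} \to \bullet[\![A]\!] \\[2pt]
\bullet[\![e]\!] \circ \mathsf{Next}^n(\Gamma) & : & [\![\Gamma]\!]^n \to \bullet[\![A]\!] \;=\; [\![\bullet A]\!]
\end{array}
```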
I think this is a pretty slick way of interpreting hybrid annotations, and a trick worth remembering for other type constructors that don't necessarily play nicely with implications.
Next up, if I find a proof small enough to blog, is a cut-elimination/normalization proof for this calculus.
We have a new draft paper up, on controlling the memory usage of FRP. I have to say that I really enjoy this line of work: there's a very strong interplay between theory and practice. For example,
this paper --- which is chock full of type theory and denotational semantics --- is strongly motivated by questions that arose from thinking about how to make our implementation efficient.
In this post, I'm going to start spinning out some consequences of one minor point of our current draft, which we do not draw much attention to. (It's not really the point of the paper, and isn't
really enough to be a paper on its own -- which makes it perfect for research blogging.) Namely, we have a new delay modality, which substantially differs from the original Nakano proposed in his
LICS 2000 paper.
Recall that the delay modality $\bullet A$ is a type constructor for guarded recursion. I'll start by giving a small type theory for guarded recursion below.
A & ::= & A \to B \;\;|\;\; \bullet A \;\;|\;\; \mathbb{N} \\
\Gamma & ::= & \cdot \;\;|\;\; \Gamma, x:A[i]
As can be seen above, the types are delay types, functions, and natural numbers, and contexts come indexed with time indices $i$. So is the typing judgement $\Gamma \vdash e : A[i]$.
\frac{x:A[i] \in \Gamma \qquad i \leq j}
     {\Gamma \vdash x : A[j]}
\qquad
\frac{\Gamma \vdash e : A[i] \qquad \Gamma, x:A[i] \vdash e' : B[i]}
     {\Gamma \vdash \mathsf{let}\; x = e \;\mathsf{in}\; e' : B[i]}

\frac{\Gamma, x:A[i] \vdash e : B[i]}
     {\Gamma \vdash \lambda x.\; e : A \to B\,[i]}
\qquad
\frac{\Gamma \vdash e : A \to B\,[i] \qquad \Gamma \vdash e' : A[i]}
     {\Gamma \vdash e \; e' : B[i]}

\frac{\Gamma \vdash e : A[i+1]}
     {\Gamma \vdash \bullet e : \bullet A\,[i]}
\qquad
\frac{\Gamma \vdash e : \bullet A\,[i] \qquad \Gamma, x:A[i+1] \vdash e' : B[i]}
     {\Gamma \vdash \mathsf{let}\; \bullet x = e \;\mathsf{in}\; e' : B[i]}

\frac{\;}
     {\Gamma \vdash \mathsf{z} : \mathbb{N}[i]}
\qquad
\frac{\Gamma \vdash e : \mathbb{N}[i+1]}
     {\Gamma \vdash \mathsf{s}(e) : \mathbb{N}[i]}

\frac{\Gamma, x:A[i+1] \vdash e : A[i]}
     {\Gamma \vdash \mu x.\; e : A[i]}
\qquad
\frac{\Gamma \vdash e : \mathbb{N}[i] \qquad \Gamma \vdash e_1 : A[i] \qquad \Gamma, x:\mathbb{N}[i+1] \vdash e_2 : A[i]}
     {\Gamma \vdash \mathsf{case}(e, \mathsf{z} \to e_1, \mathsf{s}(x) \to e_2) : A[i]}
The $i+1$ in the successor rule for natural numbers pairs with the rule for case statements, and this is what allows the fixed point rule to do its magic. Fixed points are only well-typed if the
recursion variable occurs at a later time, and the case statement for numbers gives the variable one step later. So by typing we guarantee well-founded recursion!
The intro and elim rules for delays internalize increasing the time index, so that an intro for $\bullet A$ at time $i$ takes an expression of type $A$ at time $i+1$. We have a let-binding
elimination form for delays, which differs from our earlier LICS paper, where we had a direct-style elimination for delays. The new elimination is weaker than the old one, in that it cannot prove the
isomorphism $\bullet(A \to B) \simeq \bullet A \to \bullet B$.
This is really quite great, since that isomorphism was really hard to implement! (It was maybe half the complexity of the logical relation.) The only question is whether or not we can still give a
categorical semantics to this language. In fact, we can, and I'll
describe it in my next post.
Student Support Forum: 'Cubic splines as explicit functions' topic
Author Comment/Response
Fibonacci Prower
Given a sequence of abscissas t0 < t1 < ... < tn, I want a (actually, the unique) cubic spline S such that S(t0)=y0, ..., S(tn)=yn, and in addition, S'(t0)=k0, S'(tn)=k1.
First question: How can I obtain such a function in Mathematica?
Second question: Having obtained such a function, how can I see the explicit cubic polynomials of which it is composed?
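(Not a Mathematica answer, but perhaps a useful illustration of both questions.) For a single interval (n = 1) the four conditions S(t0)=y0, S(t1)=y1, S'(t0)=k0, S'(t1)=k1 already pin down a unique cubic in closed form — the cubic Hermite interpolant. A sketch in Python:

```python
def hermite(t0, t1, y0, y1, k0, k1):
    """The unique cubic with S(t0)=y0, S(t1)=y1, S'(t0)=k0, S'(t1)=k1."""
    h = t1 - t0
    def p(t):
        s = (t - t0) / h           # rescale to [0, 1]
        h00 = 2*s**3 - 3*s**2 + 1  # Hermite basis polynomials
        h10 = s**3 - 2*s**2 + s
        h01 = -2*s**3 + 3*s**2
        h11 = s**3 - s**2
        return h00*y0 + h10*h*k0 + h01*y1 + h11*h*k1
    def dp(t):                     # derivative, for checking the end conditions
        s = (t - t0) / h
        return ((6*s**2 - 6*s)*y0 + (3*s**2 - 4*s + 1)*h*k0
                + (-6*s**2 + 6*s)*y1 + (3*s**2 - 2*s)*h*k1) / h
    return p, dp
```

For n > 1 intervals, the clamped spline is obtained by choosing the interior slopes so that second derivatives match at the knots (a small tridiagonal linear system), then applying this formula on each [t_i, t_{i+1}]. In Mathematica, expanding such a piece symbolically (e.g., with Expand) exposes the explicit cubic coefficients.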
First-order Fragments with Successor over Infinite Words
When quoting this document, please refer to the following URL:
http://drops.dagstuhl.de/opus/volltexte/2011/3026/

Authors: Jakub Kallas, Manfred Kufleitner, Alexander Lauser

First-order Fragments with Successor over Infinite Words
We consider fragments of first-order logic and as models we allow finite and infinite words simultaneously. The only binary relations apart from equality are order comparison < and the successor
predicate +1. We give characterizations of the fragments Sigma_2 = Sigma_2[<,+1] and FO^2 = FO^2[<,+1] in terms of algebraic and topological properties. To this end we introduce the factor topology
over infinite words. It turns out that a language $L$ is in FO^2 cap Sigma_2 if and only if $L$ is the interior of an FO^2 language. Symmetrically, a language is in FO^2 cap Pi_2 if and only if it is
the topological closure of an FO^2 language. The fragment Delta_2 = Sigma_2 cap Pi_2 contains exactly the clopen languages in FO^2. In particular, over infinite words Delta_2 is a strict subclass of
FO^2. Our characterizations yield decidability of the membership problem for all these fragments over finite and infinite words; and as a corollary we also obtain decidability for infinite words.
Moreover, we give a new decidable algebraic characterization of dot-depth 3/2 over finite words. Decidability of dot-depth 3/2 over finite words was first shown by Glasser and Schmitz in STACS 2000,
and decidability of the membership problem for FO^2 over infinite words was shown 1998 by Wilke in his habilitation thesis whereas decidability of Sigma_2 over infinite words is new.
BibTeX - Entry
author = {Jakub Kallas and Manfred Kufleitner and Alexander Lauser},
title = {{First-order Fragments with Successor over Infinite Words}},
booktitle = {28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011) },
pages = {356--367},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-25-5},
ISSN = {1868-8969},
year = {2011},
volume = {9},
editor = {Thomas Schwentick and Christoph D{\"u}rr},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2011/3026},
URN = {urn:nbn:de:0030-drops-30267},
doi = {http://dx.doi.org/10.4230/LIPIcs.STACS.2011.356},
annote = {Keywords: infinite words, regular languages, first-order logic, automata theory, semi-groups, topology}
Keywords: infinite words, regular languages, first-order logic, automata theory, semi-groups, topology
Seminar: 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011)
Issue date: 2011
Date of publication: 2011
Notes by Dr. Optoglass: Sensitivity and ISO of the Human Eye
Topics Covered:
The ISO Range of the Human Eye
Photography to the amateur is recreation, to the professional it is work, and hard work too, no matter how pleasurable it may be – Edward Weston
A camera sensor (or film) is like a sponge:
It has the ability to soak in light. How does this ability affect the exposure?
Here’s the formula for exposure once again:
EV is the exposure value
N is the f-number
t is the shutter speed in seconds
S is the ISO arithmetic speed
E is the illuminance in lux
C is the incident-light meter calibration constant
For a given exposure, the third formula clearly shows S is inversely proportional to the illuminance.
In other words, when the light level reduces, the sensor must soak up more of whatever light is available to give the same exposure. When the light rises, the sensor must soak up less light to give
the same exposure. Is the ability of the sensor to act like a sponge unlimited?
Definitely not. Every sensor or film has a limited sensitivity. It has a minimum range and a maximum range. This range is called the sensitivity of the sensor.
Since aperture and shutter speed are changed in ‘stops’ (fixed values) – with each stop doubling or halving the light – the sensitivity of the sensor or film is also measured in stops. The standard
that is currently in vogue is the ISO standard (short for International Organization for Standardization).
The ISO standard is defined in two ways – linear and log (Since the formula for exposure is related to aperture and the shutter by a log function). Most of the time, though, the log value is ignored,
and the ISO value displayed or talked about is the arithmetic value. The ISO arithmetic scale directly corresponds to the ASA standard of rating film. The ISO log scale directly corresponds to the
DIN standard of rating film. The ISO range is also called the Exposure Index (EI).
The formula to convert the arithmetic ISO value to the logarithmic ISO value is

S° = 10·log₁₀(S) + 1

where:
S is the arithmetic ISO (the same used in the formula for EV)
S° is the log ISO value
The arithmetic value in absolute stops are 0.8, 1.6, 3, 6, 12, 25, 50, 100, 200, 400, 800, 1600, 3200, 6400, and so on.
Modern camera manufacturers rate their cameras beyond 10,000 ISO as follows: 12800, 25600, 51200, 102400, 204800. Values in between that represent half a stop, a third of a stop, etc are also
available on some cameras.
The f-number is based on a well-defined strict standard (mm), and the shutter speed is based on a well-defined strict standard (s), but the ISO – even though specified in detail – is still at the
mercy of how camera manufacturers choose to measure their devices. No two sensors are alike.
ISO Range of the Human Eye
A camera sensor has a limited and fixed ISO range – the human eye does not behave similarly. When in low light, the rods are excited, and as we have seen, a single photon can excite a single rod in
the human eye. However, the eye needs some time to adjust to low light, and doesn't have the luxury of long exposures like a still camera.
Point being, there is no direct comparison. It's like asking what the gas mileage of a human being is. Still, for fun's sake, we can make some calculations. Here's a formula:

H = (q · L · t) / N²

where:
H is in lux seconds
L is the luminance of the scene (in nits)
t is the exposure time (in seconds)
N is the aperture f-number
q is π/4, a factor taking into account transmission, vignetting and the angle relative to the axis.
This formula is used for digital cameras. What if we could measure the eye like a digital camera, and just showed the results to people?
Assuming the lowest lux level is 0.0001 lux, at an f-number of f/2, with an exposure time of 30 minutes (the time it takes the rods to become most sensitive in low light), the value of H-low is 0.11.
To the higher extreme, the maximum lux level that the eye can withstand might be about 35,000 lux (direct sunlight at noon is about 130,000 lux – but we can't look at the sun, can we?). At this
level, our eye stops down to f/8, and for fun's sake let's assume the shutter is 1/48s. H-high is then about 55.
The formula for ISO saturated is:

S[sat] = 78 / H[sat]

where:
S[sat] is the ISO based on saturation
H[sat] is the value of H in lux seconds we got above
For these values, the lower ISO is about 1.4 and the higher ISO is about 709. I’d say from this we can assume the ISO range of the eye is from 1 to 800 ISO.
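For what it's worth, these last two steps are easy to reproduce. The snippet below assumes the standard saturation-based speed formula S_sat = 78/H_sat and the DIN conversion S° = 10·log₁₀(S) + 1, and simply applies them to the H values quoted above:

```python
import math

def s_sat(h_lux_seconds):
    """Saturation-based ISO speed: S_sat = 78 / H_sat."""
    return 78.0 / h_lux_seconds

def iso_to_din(s):
    """Arithmetic ISO -> logarithmic (DIN-style) degrees."""
    return 10 * math.log10(s) + 1

low_iso  = s_sat(55)    # bright end: about 1.4
high_iso = s_sat(0.11)  # dark-adapted end: about 709
```

As a cross-check, iso_to_din(100) comes out at 21° and iso_to_din(800) at about 30°, matching the familiar ASA/DIN pairings.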
In addition to aperture and shutter speed, the exposure of a system also depends on the sensitivity of its sensor.
ISO is a standard that defines sensitivity in stops of light.
The ISO range of the human eye is from 1 to 800 ISO.
Links for further study:
Next: Dynamic Range of the Human Eye
Previous: The F-numbers of the Human Eye
Please share this primer with your friends:
August 10, 2012
Inequalities help with word problem
January 6th 2009, 08:05 PM
Inequalities help with word problem
Suppose you want to cover the back yard with decorative rock and plants and some trees. I need 30 tons of rock to cover the area. If each ton costs $60 and each tree is $84, what is the maximum
number of trees you can buy with a budget for rocks and trees of $2,500? Write an inequality that illustrates the problem and solve. Express your answer as an inequality and explain how you
arrived at your answer.
Now, I can do most of this by doing basic math. I know that 30 tons of rock at 60 dollars a ton is $1800 total. I subtracted the $1800 from the $2500 and I have $700 left for trees. Trees being
$84 dollars each, I know that I can only buy 8 trees for a total cost of $672. But I am so lost as to writing this as an inequality.
I thought inequality symbols were greater than or less than or greater than or equal to or less than or equal to.
I'm a bit confused as to how to write this problem as an inequality.
I'm asked if 5 would be the solution to the inequality and I know that it won't, because 5 does not get me close to the 700 mark for trees.
I'm just a bit lost here.
January 7th 2009, 09:17 PM
Suppose the maximum number of trees you can buy is x; then x must be a nonnegative integer (you can't buy half a tree), so the inequality is
x*84 + 30*60 <= 2500 (x a nonnegative integer)
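A quick brute-force check of that inequality in Python, just to confirm the arithmetic:

```python
budget, rock_cost, tree_price = 2500, 30 * 60, 84  # dollars

# largest nonnegative integer x with 84*x + 1800 <= 2500
max_trees = max(x for x in range(100) if tree_price * x + rock_cost <= budget)
total_spent = tree_price * max_trees + rock_cost
```

This gives max_trees = 8 with total_spent = 2472, matching the reasoning above (a 9th tree would push the total past $2,500).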
January 8th 2009, 05:26 AM
Ok..so...I just want to make sure I have this correct..so I hope I say it correctly..
The inequality is < because once the problem is solved it does not equal 2500 correct? The total amount of money used in this problem is $2472.00 making it less than $2500.00 right?
Now to solve for the problem I need to find out what x is right?
Then in the last part of the problem, when I am asked if 5 is a solution to the problem, the answer is going to be no because it actually would be 8 trees that the person could buy for the
money left over from purchasing the rocks.
Gahh...math is so hard..
Thank you for showing me how to write it as an inequality. I feel a bit better..I just am having trouble trying to figure out when to use x as the substitution in the equation..or any equation
for that matter.
January 11th 2009, 05:27 PM
Ok..so...I just want to make sure I have this correct..so I hope I say it correctly..
The inequality is < because once the problem is solved it does not equal 2500 correct? The total amount of money used in this problem is $2472.00 making it less than $2500.00 right?
You are all right.
Now to solve for the problem I need to find out what x is right?
Then in the last part of the problem, when I am asked if 5 is a solution to the problem, the answer is going to be no because it actually would be 8 trees that the person could buy for the
money left over from purchasing the rocks.
Gahh...math is so hard..
Thank you for showing me how to write it as an inequality. I feel a bit better..I just am having trouble trying to figure out when to use x as the substitution in the equation..or any equation
for that matter.
x is just a name for the unknown.
Function restriction.
November 26th 2011, 03:13 PM #1
Junior Member
Oct 2011
Function restriction.
$f : \mathbb{R} \to \mathbb{R}$
$f(x)= x^{2}+2x+3$
a) Define a restriction of $f$ that admits an inverse.
Can you help me? I don't have any idea how to do that.
Last edited by mr fantastic; November 26th 2011 at 03:57 PM. Reason: Title.
Re: Function restriction help plz
$f(x) = x^2 + 2x + 3 = x^2 + 2x + 1 + 2 = (x+1)^2 + 2$
note that the graph of f(x) is a parabola with vertex at (-1,2)
for f(x) to have an inverse, the domain must be restricted so that f(x) is 1-1 ... that would be values of x such that x ≥ -1
Re: Function restriction help plz
Re: Function restriction help plz
I already know the inverse function.
$x = \pm\sqrt{y-2}-1$
The inverse graph is a parabola with vertex (2,-1).
I didn't understand the part about f(x) being 1-1.
btw: Sorry for posting in wrong section(this problem is from a university calculus test)
Re: Function restriction help plz
A function is 1-1 (or injective) if it maps different arguments to different values. For a function to have an inverse, it must be injective: if $y = f(x_1) = f(x_2)$ where $x_1 \neq x_2$, then $f^
{-1}(y)$ is not defined because it cannot be both $x_1$ and $x_2$. Therefore, you need to find a restriction where the function is 1-1, as skeeter said.
Re: Function restriction help plz
Hum....so to be injective:
we know that vertex of f is (-1,2) so,
the restriction is [-1, +oo[ ??? as skeeter said...
Re: Function restriction help plz
Re: Function restriction help plz
Ok, so for f to admit an inverse we must restrict the domain in order to make f injective. (Sorry for my English)
Re: Function restriction help plz
Yes, that's right.
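To double-check numerically, here is the restricted function and its inverse in Python (taking the + branch of the square root, which is the one that lands back in [-1, ∞)):

```python
import math

def f(x):
    assert x >= -1, "restricted domain [-1, oo)"
    return x * x + 2 * x + 3

def f_inv(y):
    assert y >= 2, "range of the restricted f"
    return math.sqrt(y - 2) - 1
```

On the restriction, f_inv(f(x)) == x and f(f_inv(y)) == y, which is exactly what having an inverse means.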
Sorting an array of unique random numbers at insertion
September 30th, 2011, 02:44 PM #1
Junior Member
Join Date
Sep 2011
I have a piece of code that works well to fill an array with unique random numbers. My problem now is that I want to sort these numbers, but not after the array is full but as new numbers are
being inserted. So as each new number is inserted into the array, it finds the position it is meant to be in. The code I have for creating unique random numbers is below, thank you in advance:
#include <iostream>
#include <cstdlib>
#include <ctime>

using namespace std;

#define MAX 2000 // Values will be in the range (1 .. MAX)

static int seen[MAX];        // These are automatically initialised to zero
                             // by the compiler because they are static.
static int randomNum[1000];

int main(void) {
    int i;
    srand(time(NULL)); // Seed the random number generator.

    for (i = 0; i < 1000; i++) {
        int r;
        do {
            r = rand() / (RAND_MAX / MAX + 1);
        } while (seen[r]);
        seen[r] = 1;
        randomNum[i] = r + 1;
    }

    for (i = 0; i < 1000; i++)
        cout << randomNum[i] << endl;

    return 0;
}
Re: Sorting an array of unique random numbers at insertion
If you're looking for efficiency, a std::set may be more suitable than an array.
Re: Sorting an array of unique random numbers at insertion
...I want to sort these numbers, but not after the array is full but as new numbers are being inserted.
What is the difference?
Anyway, instead of assigning your new random number to the randomNum[i], you could loop from randomNum[i] to randomNum[0] looking for the insertion point. If the new number is greater – assign;
if it is less – move last element up and keep going.
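In Python-ish pseudocode (not C++, but the shape is the same), the approach described above looks like:

```python
def insert_sorted(a, x):
    """Append x, then shift larger elements up until x is in place."""
    i = len(a)
    a.append(x)
    while i > 0 and a[i - 1] > x:
        a[i] = a[i - 1]   # move the larger element one slot up
        i -= 1
    a[i] = x
```

Each such insertion is O(n) in the worst case, which is what the complexity discussion below is about.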
Re: Sorting an array of unique random numbers at insertion
If you're looking for efficiency, a std::set may be more suitable than an array.
Not just for efficiency reasons. Std::set has the two properties the OP is asking for - elements are unique and they're kept sorted at all times.
So it's just to keep inserting random ints until the set reaches the wanted size. Whenever the set is iterated the ints will appear in sorted order (and they will be unique).
Last edited by nuzzle; September 30th, 2011 at 03:17 PM.
Re: Sorting an array of unique random numbers at insertion
What is the difference?
Well, running time of course. Assuming a naive insertion sort is used, keeping the data sorted after every insertion results in a running time of O(n*(n+log n)) at best. Sorting once after all
insertions are done is O(n + n*log n), much faster.
If a std::set is used, keeping the data sorted after every insertion is O(n*log n) but in some cases cache locality may not be as good.
If an array is required, "library sort" (basically insertion sort where some empty slots are left between valid elements) can be done in O(n*log n) but in practice it's still slower.
Last edited by Lindley; September 30th, 2011 at 03:36 PM.
Re: Sorting an array of unique random numbers at insertion
So as each new number is inserted into the array, if finds the position it is meant to be in. The code I have for creating unique random numbers is below, thank you in advance:
Note that strictly the randomNum[] array isn't necessary since seen[] already holds all information about which ints have been randomly selected. Also seen[] is inherently sorted. Each newly
selected int will be at the "position it is meant to be in" because the position is the int.
Last edited by nuzzle; September 30th, 2011 at 04:35 PM.
Re: Sorting an array of unique random numbers at insertion
Well, running time of course. Assuming a naive insertion sort is used, keeping the data sorted after every insertion results in a running time of O(n*(n+log n)) at best. Sorting once after all
insertions are done is O(n + n*log n), much faster.
If a std::set is used, keeping the data sorted after every insertion is O(n*log n) but in some cases cache locality may not be as good.
If an array is required, "library sort" (basically insertion sort where some empty slots are left between valid elements) can be done in O(n*log n) but in practice it's still slower.
That's not correct.
O(n*(n+log n)) and O(n + n*log n) are not even proper complexity measures. It's O(n*n) and O(n*log n) respectively.
In the first case, even if you use an O(log n) binary search to locate the insertion point you need to insert it which is an O(n) operation. Repeating n insertions gives a complexity of O(n * n).
In the second case you place the items in the array in arbitrary order which is an O(n) operation. Afterwards you sort at O(n* log n). Taken together this gives O(n * log n) complexity.
Inserting all items in an ordered set has O(n * log n) complexity as you stated.
Re: Sorting an array of unique random numbers at insertion
Some would argue that when it comes to reads, a sorted vector is faster than a set, because of locality (I believe Meyers mentions it too in one of his "more effective"). This is an adaptor that
transforms the vector's interface into that of a set.
That said, it has methods specifically dedicated to NOT sort after every insertion, given its higher price.
maximal ideals
April 6th 2013, 01:52 AM
maximal ideals
Can someone please explain the meaning of "maximal ideals" to me? I am really having a hard time trying to understand the definition. Also, why is zero a maximal ideal for any field? Please help.
April 6th 2013, 11:08 AM
Re: maximal ideals
A maximal ideal of a ring R is any ideal M (other than the whole ring R) that is not properly contained in any ideal except R itself. Thus to verify that an ideal M is maximal, it must be shown that if an
ideal I contains M, then I must be R or M.
In any field, {0} is an ideal by itself, since it is closed under addition and absorbs products. {0} is clearly not the whole field, so it is a candidate for a maximal ideal. To verify this we must check
that there is no ideal strictly between {0} and the whole field. Suppose an ideal I contains M = {0} and is not {0} itself. Then there exists a nonzero element r in I. But then r*r^-1 = 1 is in the ideal I,
since I absorbs products. If 1 is in an ideal, then the ideal is the whole ring. Thus M is maximal.
An example of something that is not a maximal ideal of the ring of integers (which is not a field) is M=(4). Since I=(2) contains M but is not the integers or M, M is not maximal. Here (4) means
the principal ideal generated by 4.
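As a finite sanity check (an illustration on a window of integers, not a proof), one can see the proper containments (4) ⊊ (2) ⊊ Z that make (4) non-maximal:

```python
window = range(-100, 101)
I4 = {n for n in window if n % 4 == 0}  # the principal ideal (4), clipped to the window
I2 = {n for n in window if n % 2 == 0}  # the principal ideal (2), clipped to the window
Z  = set(window)                        # the whole ring, clipped to the window
```

Here I4 is a proper subset of I2, which is a proper subset of Z, so (2) witnesses that (4) is not maximal.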
Math Tools Discussion: All Topics, geom construction software with compass -> arcs, not circles
Subject: RE: geom construction software with compass -> arcs, not circles
Author: SGulati
Date: Oct 3 2011
I think you are looking for software which will help your son construct arcs rather than circles. Math Open Reference ( http://www.mathopenref.com/tocs/constructionstoc.html ) illustrates step-by-step constructions using a compass.
Hope it helps.
|
{"url":"http://mathforum.org/mathtools/discuss.html?context=all&do=r&msg=140979","timestamp":"2014-04-17T08:06:27Z","content_type":null,"content_length":"16211","record_id":"<urn:uuid:fd11be81-e593-4229-9702-5ebe48d492e7>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Penn Wynne, PA Math Tutor
Find a Penn Wynne, PA Math Tutor
...Maybe I'll be your chemistry teacher one day! I have tutored many friends throughout high school and college in chemistry. I have also tutored privately for several years.
14 Subjects: including trigonometry, linear algebra, algebra 1, algebra 2
I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching
because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including calculus, trigonometry, SAT math, ACT Math
...There is a big difference between a student striving for 700s on the SAT and one hoping to reach the 500s. Likewise, a student struggling with Algebra I is a far cry from one going for an A in
honors pre-calculus. I am comfortable and experienced with all levels of students.
23 Subjects: including algebra 1, algebra 2, calculus, vocabulary
...I have experience in tutoring high school and college algebra I and II. I focus at identifying common errors and mistakes made by the student, point them out and thoroughly explain the right
concept and how it comes about. I have a great passion for the combination of numbers and letters and I ...
28 Subjects: including linear algebra, differential equations, trigonometry, precalculus
...I love learning and teaching. I have a lot of patience and want to see everyone succeed. Everyone has the ability to learn, it's just a matter of finding the right way to reach them.
8 Subjects: including calculus, trigonometry, SAT math, precalculus
Related Penn Wynne, PA Tutors
Penn Wynne, PA Accounting Tutors
Penn Wynne, PA ACT Tutors
Penn Wynne, PA Algebra Tutors
Penn Wynne, PA Algebra 2 Tutors
Penn Wynne, PA Calculus Tutors
Penn Wynne, PA Geometry Tutors
Penn Wynne, PA Math Tutors
Penn Wynne, PA Prealgebra Tutors
Penn Wynne, PA Precalculus Tutors
Penn Wynne, PA SAT Tutors
Penn Wynne, PA SAT Math Tutors
Penn Wynne, PA Science Tutors
Penn Wynne, PA Statistics Tutors
Penn Wynne, PA Trigonometry Tutors
Nearby Cities With Math Tutor
Bala, PA Math Tutors
Belmont Hills, PA Math Tutors
Bywood, PA Math Tutors
Carroll Park, PA Math Tutors
Cynwyd, PA Math Tutors
Drexelbrook, PA Math Tutors
Kirklyn, PA Math Tutors
Llanerch, PA Math Tutors
Lower Merion, PA Math Tutors
Merion Park, PA Math Tutors
Overbrook Hills, PA Math Tutors
Penn Valley, PA Math Tutors
Pilgrim Gardens, PA Math Tutors
Pilgrim Gdns, PA Math Tutors
Rosemont, PA Math Tutors
|
{"url":"http://www.purplemath.com/penn_wynne_pa_math_tutors.php","timestamp":"2014-04-18T23:44:42Z","content_type":null,"content_length":"23871","record_id":"<urn:uuid:4c304aeb-40ea-496f-bd3d-f681be35fbdc>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fermat's Last Theorem
In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers a, b, and c can satisfy the equation a^n + b^n = c^n
for any integer value of n greater than two.
This theorem was first conjectured by Pierre de Fermat in 1637, famously in the margin of a copy of Arithmetica where he claimed he had a proof that was too large to fit in the margin. No successful
proof was published until 1995 despite the efforts of countless mathematicians during the 358 intervening years. The unsolved problem stimulated the development of algebraic number theory in the 19th
century and the proof of the modularity theorem in the 20th century. It is among the most famous theorems in the history of mathematics and, prior to its 1995 proof by Andrew Wiles, it was in the
Guinness Book of World Records for "most difficult mathematical problems".
The problem
Fermat's Last Theorem (known by this title historically although technically a conjecture, or unproven speculation, until proven in 1995) stood as an unsolved riddle in mathematics for over three
centuries. The theorem itself is a deceptively simple statement that Fermat famously stated he had proved around 1637. His claim was discovered some 30 years later, after his death, written in the
margin of a book, but with no proof provided.
The claim eventually became one of the most famous unsolved problems of mathematics. Attempts to prove it prompted substantial development in number theory, and over time Fermat's Last Theorem itself
gained legendary prominence as an unsolved problem in popular mathematics. It is based on the Pythagorean theorem, which states that a^2 + b^2 = c^2, where a and b are the lengths of the legs of a
right triangle and c is the length of the hypotenuse.
The Pythagorean equation has an infinite number of positive integer solutions for a, b, and c; these solutions are known as Pythagorean triples. Fermat stated that the more general equation a^n + b^n
= c^n had no solutions in positive integers if n is an integer greater than 2. Although he claimed to have a general proof of his conjecture, Fermat left no details of his proof apart from the
special case n = 4.
Subsequent developments and solution
With the special case n = 4 proven, the problem was to prove the theorem for exponents n that are prime numbers (this limitation is considered trivial to prove^[note 1]). Over the next two centuries
(1637–1839), the conjecture was proven for only the primes 3, 5, and 7, although Sophie Germain innovated and proved an approach which was relevant to an entire class of primes. In the mid-19th
century, Ernst Kummer extended this and proved the theorem for all regular primes, leaving irregular primes to be analyzed individually. Building on Kummer's work and using sophisticated computer
studies, other mathematicians were able to extend the proof to cover all prime exponents up to four million, but a proof for all exponents was inaccessible (meaning that mathematicians generally
considered a proof to be either impossible, or at best exceedingly difficult, or not achievable with current knowledge).
The proof of Fermat's Last Theorem in full, for all n, was finally accomplished, however, after 358 years, by Andrew Wiles in 1995, an achievement for which he was honoured and received numerous
awards. The solution came in a roundabout manner, from a completely different area of mathematics.
Around 1955 Japanese mathematicians Goro Shimura and Yutaka Taniyama suspected a link might exist between elliptic curves and modular forms, two completely different areas of mathematics. Known at
the time as the Taniyama–Shimura-Weil conjecture, and (eventually) as the modularity theorem, it stood on its own, with no apparent connection to Fermat's Last Theorem. It was widely seen as
significant and important in its own right, but was (like Fermat's equation) widely considered to be completely inaccessible to proof.
In 1984, Gerhard Frey noticed an apparent link between the modularity theorem and Fermat's Last Theorem. This potential link was confirmed two years later by Ken Ribet (see: Ribet's Theorem and Frey
curve). On hearing this, English mathematician Andrew Wiles, who had a childhood fascination with Fermat's Last Theorem, decided to try to prove the modularity theorem as a way to prove Fermat's Last
Theorem. In 1993, after six years working secretly on the problem, Wiles succeeded in proving enough of the modularity theorem to prove Fermat's Last Theorem. Wiles's paper was massive in size and
scope. A flaw was discovered in one part of his original paper during peer review and required a further year and collaboration with a former student, Richard Taylor, to resolve. As a result, the final
proof in 1995 was accompanied by a second, smaller, joint paper to that effect. Wiles's achievement was reported widely in the popular press, and was popularized in books and television programs. The
remaining parts of the modularity theorem were subsequently proven by other mathematicians, building on Wiles' work, between 1996 and 2001.
Mathematical history
Pythagoras and Diophantus
Pythagorean triples
A Pythagorean triple – named for the ancient Greek Pythagoras – is a set of three integers (a, b, c) that satisfy a special case of Fermat's equation (n = 2)^[1]
$a^2 + b^2 = c^2.$
Examples of Pythagorean triples include (3, 4, 5) and (5, 12, 13). There are infinitely many such triples,^[2] and methods for generating such triples have been studied in many cultures, beginning
with the Babylonians^[3] and later ancient Greek, Chinese, and Indian mathematicians.^[4] The traditional interest in Pythagorean triples connects with the Pythagorean theorem;^[5] in its converse
form, it states that a triangle with sides of lengths a, b, and c has a right angle between the a and b legs when the numbers are a Pythagorean triple. Right angles have various practical
applications, such as surveying, carpentry, masonry, and construction. Fermat's Last Theorem is an extension of this problem to higher powers, stating that no solution exists when the exponent 2 is
replaced by any larger integer.
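The generating methods mentioned above are easy to state concretely. A minimal Python sketch of Euclid's classical formula, the best-known generator of primitive Pythagorean triples (the pre-conditions on m and n are the standard ones, asserted here for safety):

```python
from math import gcd

def euclid_triple(m, n):
    """Euclid's formula: for m > n > 0, coprime and of opposite parity,
    (m*m - n*n, 2*m*n, m*m + n*n) is a primitive Pythagorean triple."""
    assert m > n > 0 and gcd(m, n) == 1 and (m - n) % 2 == 1
    return m*m - n*n, 2*m*n, m*m + n*n

# Every generated triple satisfies a^2 + b^2 == c^2:
for m, n in [(2, 1), (3, 2), (4, 1)]:
    a, b, c = euclid_triple(m, n)
    assert a*a + b*b == c*c

print(euclid_triple(2, 1))  # (3, 4, 5)
```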
Diophantine equations
Fermat's equation, x^n + y^n = z^n with positive integer solutions, is an example of a Diophantine equation,^[6] named for the 3rd-century Alexandrian mathematician, Diophantus, who studied them and
developed methods for the solution of some kinds of Diophantine equations. A typical Diophantine problem is to find two integers x and y such that their sum, and the sum of their squares, equal two
given numbers A and B, respectively:
$A = x + y$
$B = x^2 + y^2.$
Diophantus's major work is the Arithmetica, of which only a portion has survived.^[7] Fermat's conjecture of his Last Theorem was inspired while reading a new edition of the Arithmetica,^[8] which
was translated into Latin and published in 1621 by Claude Bachet.^[9]
Diophantine equations have been studied for thousands of years. For example, the solutions to the quadratic Diophantine equation x^2 + y^2 = z^2 are given by the Pythagorean triples, originally
solved by the Babylonians (c. 1800 BC).^[10] Solutions to linear Diophantine equations, such as 26x + 65y = 13, may be found using the Euclidean algorithm (c. 5th century BC).^[11] Many Diophantine
equations have a form similar to the equation of Fermat's Last Theorem from the point of view of algebra, in that they have no cross terms mixing two letters, without sharing its particular
properties. For example, it is known that there are infinitely many positive integers x, y, and z such that x^n + y^n = z^m where n and m are relatively prime natural numbers.^[note 2]
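The "typical Diophantine problem" quoted above reduces to a quadratic: from x + y = A and x² + y² = B one gets xy = (A² − B)/2, so x and y are the roots of t² − At + (A² − B)/2 = 0. A small Python solver sketching this (an illustration of the reduction, not a historical method):

```python
import math

def diophantus_pair(A, B):
    """Integers x >= y with x + y == A and x*x + y*y == B, or None.
    x and y are the roots of t**2 - A*t + (A*A - B)/2, whose
    discriminant is 2*B - A*A."""
    disc = 2*B - A*A
    if disc < 0:
        return None
    r = math.isqrt(disc)
    if r*r != disc or (A + r) % 2:
        return None                  # roots are irrational or non-integer
    return (A + r)//2, (A - r)//2

assert diophantus_pair(7, 25) == (4, 3)   # 4 + 3 = 7, 16 + 9 = 25
assert diophantus_pair(3, 100) is None    # no integer solution
```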
Fermat's conjecture
Problem II.8 of the Arithmetica asks how a given square number is split into two other squares; in other words, for a given rational number k, find rational numbers u and v such that k^2 = u^2 + v^2.
Diophantus shows how to solve this sum-of-squares problem for k = 4 (the solutions being u = 16/5 and v = 12/5).^[12]
Around 1637, Fermat wrote his Last Theorem in the margin of his copy of the Arithmetica next to Diophantus' sum-of-squares problem:^[13]
Cubum autem in duos cubos, aut quadratoquadratum in duos quadratoquadratos, et generaliter nullam in infinitum ultra quadratum potestatem in duos eiusdem nominis fas est dividere cuius rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet.
it is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second, into two like powers. I have discovered a truly marvellous proof of this, which this margin is too narrow to contain.^[14]
It is not known whether Fermat had actually found a valid proof. His proof of one case (n = 4) by infinite descent has survived.^[15] Fermat posed the cases of n = 4 and of n = 3 as challenges to his
mathematical correspondents, such as Marin Mersenne, Blaise Pascal, and John Wallis.^[16] However, in the last thirty years of his life, Fermat never again wrote of his "truly marvellous proof" of
the general case.
After Fermat's death in 1665, his son Clément-Samuel Fermat produced a new edition of the book (1670) augmented with his father's comments.^[17] The margin note became known as Fermat's Last Theorem,
^[18] as it was the last of Fermat's asserted theorems to remain unproven.^[19]
Proofs for specific exponents
Only one mathematical proof by Fermat has survived, in which Fermat uses the technique of infinite descent to show that the area of a right triangle with integer sides can never equal the square of
an integer.^[20] His proof is equivalent to demonstrating that the equation
$x^4 - y^4 = z^2$
has no primitive solutions in integers (no pairwise coprime solutions). In turn, this proves Fermat's Last Theorem for the case n=4, since the equation a^4 + b^4 = c^4 can be written as c^4 − b^4 = (a^2)^2.
Alternative proofs of the case n = 4 were developed later^[21] by Frénicle de Bessy (1676),^[22] Leonhard Euler (1738),^[23] Kausler (1802),^[24] Peter Barlow (1811),^[25] Adrien-Marie Legendre
(1830),^[26] Schopis (1825),^[27] Terquem (1846),^[28] Joseph Bertrand (1851),^[29] Victor Lebesgue (1853, 1859, 1862),^[30] Theophile Pepin (1883),^[31] Tafelmacher (1893),^[32] David Hilbert
(1897),^[33] Bendz (1901),^[34] Gambioli (1901),^[35] Leopold Kronecker (1901),^[36] Bang (1905),^[37] Sommer (1907),^[38] Bottari (1908),^[39] Karel Rychlík (1910),^[40] Nutzhorn (1912),^[41] Robert
Carmichael (1913),^[42] Hancock (1931),^[43] and Vrǎnceanu (1966).^[44]
For another proof for n=4 by infinite descent, see Infinite descent: Non-solvability of r^2 + s^4 = t^4. For various proofs for n=4 by infinite descent, see Grant and Perella (1999),^[45] Barbara
(2007),^[46] and Dolan (2011).^[47]
After Fermat proved the special case n = 4, the general proof for all n required only that the theorem be established for all odd prime exponents.^[48] In other words, it was necessary to prove only
that the equation a^n + b^n = c^n has no integer solutions (a, b, c) when n is an odd prime number. This follows because a solution (a, b, c) for a given n is equivalent to a solution for all the
factors of n. For illustration, let n be factored into d and e, n = de. The general equation
a^n + b^n = c^n
implies that (a^d, b^d, c^d) is a solution for the exponent e
(a^d)^e + (b^d)^e = (c^d)^e.
Thus, to prove that Fermat's equation has no solutions for n > 2, it suffices to prove that it has no solutions for at least one prime factor of every n. All integers n > 2 contain a factor of 4, or
an odd prime number, or both. Therefore, Fermat's Last Theorem can be proven for all n if it can be proven for n = 4 and for all odd primes p (the only even prime number is the number 2).
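The reduction argument can be phrased as a one-line transformation; a Python sketch (necessarily exercised on hypothetical inputs and on n = 2, the one exponent that actually has solutions):

```python
def reduce_solution(a, b, c, d):
    """If (a, b, c) solved x**n + y**n == z**n with n = d*e, then
    (a**d, b**d, c**d) solves the exponent-e equation, because
    (a**d)**e == a**(d*e)."""
    return a**d, b**d, c**d

# With n = 2 = 1 * 2, the Pythagorean triple (3, 4, 5) maps to itself:
x, y, z = reduce_solution(3, 4, 5, 1)
assert x**2 + y**2 == z**2

# The underlying exponent identity holds for any values:
assert (7**3)**5 == 7**15
```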
In the two centuries following its conjecture (1637–1839), Fermat's Last Theorem was proven for three odd prime exponents p = 3, 5 and 7. The case p = 3 was first stated by Abu-Mahmud Khojandi (10th
century), but his attempted proof of the theorem was incorrect.^[49] In 1770, Leonhard Euler gave a proof of p = 3,^[50] but his proof by infinite descent^[51] contained a major gap.^[52] However,
since Euler himself had proven the lemma necessary to complete the proof in other work, he is generally credited with the first proof.^[53] Independent proofs were published^[54] by Kausler (1802),^[
24] Legendre (1823, 1830),^[26]^[55] Calzolari (1855),^[56] Gabriel Lamé (1865),^[57] Peter Guthrie Tait (1872),^[58] Günther (1878),^[59] Gambioli (1901),^[35] Krey (1909),^[60] Rychlík (1910),^[40]
Stockhaus (1910),^[61] Carmichael (1915),^[62] Johannes van der Corput (1915),^[63] Axel Thue (1917),^[64] and Duarte (1944).^[65] The case p = 5 was proven^[66] independently by Legendre and Peter
Gustav Lejeune Dirichlet around 1825.^[67] Alternative proofs were developed^[68] by Carl Friedrich Gauss (1875, posthumous),^[69] Lebesgue (1843),^[70] Lamé (1847),^[71] Gambioli (1901),^[35]^[72]
Werebrusow (1905),^[73] Rychlík (1910),^[74] van der Corput (1915),^[63] and Guy Terjanian (1987).^[75] The case p = 7 was proven^[76] by Lamé in 1839.^[77] His rather complicated proof was
simplified in 1840 by Lebesgue,^[78] and still simpler proofs^[79] were published by Angelo Genocchi in 1864, 1874 and 1876.^[80] Alternative proofs were developed by Théophile Pépin (1876)^[81] and
Edmond Maillet (1897).^[82]
Fermat's Last Theorem has also been proven for the exponents n = 6, 10, and 14. Proofs for n = 6 have been published by Kausler,^[24] Thue,^[83] Tafelmacher,^[84] Lind,^[85] Kapferer,^[86] Swift,^[87
] and Breusch.^[88] Similarly, Dirichlet^[89] and Terjanian^[90] each proved the case n = 14, while Kapferer^[86] and Breusch^[88] each proved the case n = 10. Strictly speaking, these proofs are
unnecessary, since these cases follow from the proofs for n = 3, 5, and 7, respectively. Nevertheless, the reasoning of these even-exponent proofs differs from their odd-exponent counterparts.
Dirichlet's proof for n = 14 was published in 1832, before Lamé's 1839 proof for n = 7.^[91]
Many proofs for specific exponents use Fermat's technique of infinite descent, which Fermat used to prove the case n = 4, but many do not. However, the details and auxiliary arguments are often ad
hoc and tied to the individual exponent under consideration.^[92] Since they became ever more complicated as p increased, it seemed unlikely that the general case of Fermat's Last Theorem could be
proven by building upon the proofs for individual exponents.^[92] Although some general results on Fermat's Last Theorem were published in the early 19th century by Niels Henrik Abel and Peter Barlow
,^[93]^[94] the first significant work on the general theorem was done by Sophie Germain.^[95]
Sophie Germain
Main article: Sophie Germain
In the early 19th century, Sophie Germain developed several novel approaches to prove Fermat's Last Theorem for all exponents.^[96] First, she defined a set of auxiliary primes θ constructed from the
prime exponent p by the equation θ = 2hp+1, where h is any integer not divisible by three. She showed that if no integers raised to the p^th power were adjacent modulo θ (the non-consecutivity
condition), then θ must divide the product xyz. Her goal was to use mathematical induction to prove that, for any given p, infinitely many auxiliary primes θ satisfied the non-consecutivity condition
and thus divided xyz; since the product xyz can have at most a finite number of prime factors, such a proof would have established Fermat's Last Theorem. Although she developed many techniques for
establishing the non-consecutivity condition, she did not succeed in her strategic goal. She also worked to set lower limits on the size of solutions to Fermat's equation for a given exponent p, a
modified version of which was published by Adrien-Marie Legendre. As a byproduct of this latter work, she proved Sophie Germain's theorem, which verified the first case of Fermat's Last Theorem
(namely, the case in which p does not divide xyz) for every odd prime exponent less than 100.^[96]^[97] Germain tried unsuccessfully to prove the first case of Fermat's Last Theorem for all even
exponents, specifically for n = 2p, which was proven by Guy Terjanian in 1977.^[98] In 1985, Leonard Adleman, Roger Heath-Brown and Étienne Fouvry proved that the first case of Fermat's Last Theorem
holds for infinitely many odd primes p.^[99]
Ernst Kummer and the theory of ideals
In 1847, Gabriel Lamé outlined a proof of Fermat's Last Theorem based on factoring the equation x^p + y^p = z^p in complex numbers, specifically the cyclotomic field based on the roots of the number
1. His proof failed, however, because it assumed incorrectly that such complex numbers can be factored uniquely into primes, similar to integers. This gap was pointed out immediately by Joseph
Liouville, who later read a paper that demonstrated this failure of unique factorisation, written by Ernst Kummer.
Kummer set himself the task of determining whether the cyclotomic field could be generalized to include new prime numbers such that unique factorisation was restored. He succeeded in that task by
developing the ideal numbers. Using the general approach outlined by Lamé, Kummer proved both cases of Fermat's Last Theorem for all regular prime numbers. However, he could not prove the theorem for
the exceptional primes (irregular primes) which conjecturally occur approximately 39% of the time; the only irregular primes below 100 are 37, 59 and 67.
Mordell conjecture
In the 1920s, Louis Mordell posed a conjecture that implied that Fermat's equation has at most a finite number of nontrivial primitive integer solutions if the exponent n is greater than two.^[100]
This conjecture was proven in 1983 by Gerd Faltings,^[101] and is now known as Faltings' theorem.
Computational studies
In the latter half of the 20th century, computational methods were used to extend Kummer's approach to the irregular primes. In 1954, Harry Vandiver used a SWAC computer to prove Fermat's Last
Theorem for all primes up to 2521.^[102] By 1978, Samuel Wagstaff had extended this to all primes less than 125,000.^[103] By 1993, Fermat's Last Theorem had been proven for all primes less than four million.^[104]
However, despite these efforts and their results, no proof existed of Fermat's Last Theorem. Proofs of individual exponents by their nature could never prove the general case: even if all exponents
were verified up to an extremely large number X, a higher exponent beyond X might still exist for which the claim was not true. (This had been the case with some other past conjectures, and it could
not be ruled out in this conjecture.)
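The flavour of these computational verifications is easy to reproduce at toy scale. A brute-force Python search for counterexamples (for illustration only — the real verifications used Kummer-style criteria, not exhaustive search):

```python
def fermat_counterexamples(n, bound):
    """All (a, b, c) with a <= b and a, b, c <= bound and a**n + b**n == c**n."""
    powers = {c**n: c for c in range(1, bound + 1)}
    hits = []
    for a in range(1, bound + 1):
        for b in range(a, bound + 1):
            if a**n + b**n in powers:
                hits.append((a, b, powers[a**n + b**n]))
    return hits

# n = 2 has infinitely many solutions (Pythagorean triples)...
print(fermat_counterexamples(2, 20))
# [(3, 4, 5), (5, 12, 13), (6, 8, 10), (8, 15, 17), (9, 12, 15), (12, 16, 20)]

# ...but for n >= 3 the search, of course, finds nothing:
for n in range(3, 8):
    assert fermat_counterexamples(n, 60) == []
```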
Connection with elliptic curves
The strategy that ultimately led to a successful proof of Fermat's Last Theorem arose from the "astounding"^[105]^:211 Taniyama–Shimura-Weil conjecture, proposed around 1955, which many
mathematicians believed would be near to impossible to prove,^[105]^:223 and which was linked in the 1980s by Gerhard Frey, Jean-Pierre Serre and Ken Ribet to Fermat's equation. By accomplishing a
partial proof of this conjecture in 1995, Andrew Wiles ultimately succeeded in proving Fermat's Last Theorem, as well as leading the way to a full proof by others of what is now the modularity theorem.
The Taniyama–Shimura–Weil conjecture
Around 1955, Japanese mathematicians Goro Shimura and Yutaka Taniyama observed a possible link between two apparently completely distinct branches of mathematics, elliptic curves and modular forms.
The resulting modularity theorem (at the time known as the Taniyama–Shimura conjecture) states that every elliptic curve is modular, meaning that it can be associated with a unique modular form. It
was initially dismissed as unlikely or highly speculative, and was taken more seriously when number theorist André Weil found evidence supporting it, but no proof; as a result the "astounding"^[105]
^:211 conjecture was often known as the Taniyama–Shimura-Weil conjecture. It became a part of the Langlands programme, a list of important conjectures needing proof or disproof.^[105]^:211–215
Even after gaining serious attention, the conjecture was seen by contemporary mathematicians as extraordinarily difficult or perhaps inaccessible to proof.^[105]^:203–205, 223, 226 For example,
Wiles' ex-supervisor John Coates states that it seemed "impossible to actually prove",^[105]^:226 and Ken Ribet considered himself "one of the vast majority of people who believed [it] was completely
inaccessible", adding that "Andrew Wiles was probably one of the few people on earth who had the audacity to dream that you can actually go and prove [it]."^[105]^:223
Frey's equation / Ribet's theorem
In 1984, Gerhard Frey noted a link between Fermat's equation and the modularity theorem, then still a conjecture. If Fermat's equation had any solution (a, b, c) for exponent p > 2, then it could be
shown that the elliptic curve (now known as a Frey curve ^[note 3])
y^2 = x (x − a^p)(x + b^p)
would have such unusual properties that it was unlikely to be modular.^[106] This would conflict with the modularity theorem, which asserted that all elliptic curves are modular. As such, Frey
observed that a proof of the Taniyama–Shimura–Weil conjecture would simultaneously prove Fermat's Last Theorem^[107] and equally, a disproof or refutation of Fermat's Last Theorem would disprove the Taniyama–Shimura–Weil conjecture.
Following this strategy, a proof of Fermat's Last Theorem required two steps. First, it was necessary to show that Frey's intuition was correct: that if an elliptic curve were constructed in this
way, using a set of numbers that were a solution of Fermat's equation, the resulting elliptic curve could not be modular. Frey did not quite succeed in proving this rigorously; the missing piece (the
so-called "epsilon conjecture", now known as Ribet's theorem) was identified by Jean-Pierre Serre^[citation needed] and proven in 1986 by Ken Ribet. Second, it was necessary to prove the modularity
theorem – or at least to prove it for the sub-class of cases (known as semistable elliptic curves) which included Frey's equation – and this was widely believed inaccessible to proof by contemporary
mathematicians.^[105]^:203–205, 223, 226
• The modularity theorem – if proven – would mean all elliptic curves (or at least all semistable elliptic curves) are of necessity modular.
• Ribet's theorem – proven in 1986 – showed that if a solution to Fermat's equation existed, it could be used to create a semistable elliptic curve that was not modular;
• The contradiction would imply (if the modularity theorem were correct) that no solutions can exist to Fermat's equation – therefore proving Fermat's Last Theorem.
Wiles's general proof
Ribet's proof of the epsilon conjecture in 1986 accomplished the first of the two goals proposed by Frey. Upon hearing of Ribet's success, Andrew Wiles, an English mathematician with a childhood
fascination with Fermat's Last Theorem, and a prior study area of elliptical equations, decided to commit himself to accomplishing the second half: proving a special case of the modularity theorem
(then known as the Taniyama–Shimura conjecture) for semistable elliptic curves.^[108]
Wiles worked on that task for six years in near-total secrecy, covering up his efforts by releasing prior work in small segments as separate papers and confiding only in his wife.^[105]^:229–230 His
initial study suggested proof by induction,^[105]^:230–232, 249–252 and he based his initial work and first significant breakthrough on Galois theory^[105]^:251–253, 259 before switching to an
attempt to extend Horizontal Iwasawa theory for the inductive argument around 1990–91 when it seemed that there was no existing approach adequate to the problem.^[105]^:258–259 However, by the summer
of 1991, Iwasawa theory also seemed to not be reaching the central issues in the problem.^[105]^:259–260^[109] In response, he approached colleagues to seek out any hints of cutting edge research and
new techniques, and discovered an Euler system recently developed by Victor Kolyvagin and Matthias Flach which seemed "tailor made" for the inductive part of his proof.^[105]^:260–261 Wiles studied
and extended this approach, which worked. Since his work relied extensively on this approach, which was new to mathematics and to Wiles, in January 1993 he asked his Princeton colleague, Nick Katz,
to check his reasoning for subtle errors. Their conclusion at the time was that the techniques used by Wiles seemed to be working correctly.^[105]^:261–265^[110]
By mid-May 1993 Wiles felt able to tell his wife he thought he had solved the proof of Fermat's Last Theorem,^[105]^:265 and by June he felt sufficiently confident to present his results in three
lectures delivered on 21–23 June 1993 at the Isaac Newton Institute for Mathematical Sciences.^[111] Specifically, Wiles presented his proof of the Taniyama–Shimura conjecture for semistable elliptic
curves; together with Ribet's proof of the epsilon conjecture, this implied Fermat's Last Theorem. However, it became apparent during peer review that a critical point in the proof was incorrect. It
contained an error in a bound on the order of a particular group. The error was caught by several mathematicians refereeing Wiles's manuscript including Katz (in his role as reviewer),^[112] who
alerted Wiles on 23 August 1993.^[113]
The error would not have rendered his work worthless – each part of Wiles' work was highly significant and innovative by itself, as were the many developments and techniques he had created in the
course of his work, and only one part was affected.^[105]^:289, 296–297 However without this part proven, there was no actual proof of Fermat's Last Theorem. Wiles spent almost a year trying to
repair his proof, initially by himself and then in collaboration with Richard Taylor, without success.^[114]
On 19 September 1994, on the verge of giving up, Wiles had a flash of insight that the proof could be saved by returning to his original Horizontal Iwasawa theory approach, which he had abandoned in
favour of the Kolyvagin–Flach approach, this time strengthening it with expertise gained in Kolyvagin–Flach's approach.^[115] On 24 October 1994, Wiles submitted two manuscripts, "Modular elliptic
curves and Fermat's Last Theorem"^[116] and "Ring theoretic properties of certain Hecke algebras",^[117] the second of which was co-authored with Taylor and proved that certain conditions were met
which were needed to justify the corrected step in the main paper. The two papers were vetted and published as the entirety of the May 1995 issue of the Annals of Mathematics. These papers
established the modularity theorem for semistable elliptic curves, the last step in proving Fermat's Last Theorem, 358 years after it was conjectured.
Subsequent developments
The full Taniyama–Shimura–Weil conjecture was finally proved by Diamond (1996), Conrad, Diamond & Taylor (1999), and Breuil et al. (2001) who, building on Wiles' work, incrementally chipped away at
the remaining cases until the full result was proved. The now fully proved conjecture became known as the modularity theorem.
Several other theorems in number theory similar to Fermat's Last Theorem also follow from the same reasoning, using the modularity theorem. For example: no cube can be written as a sum of two coprime
n-th powers, n ≥ 3. (The case n = 3 was already known by Euler.)
Exponents other than positive integers
Rational exponents
All solutions of the Diophantine equation $a^{n/m} + b^{n/m} = c^{n/m}$ when n=1 were computed by Lenstra in 1992.^[118] In the case in which the m^th roots are required to be real and positive, all solutions are given by^[119]
$a = rs^m, \quad b = rt^m, \quad c = r(s+t)^m$
for positive integers r, s, t with s and t coprime.
In 2004, for n>2, Bennett, Glass, and Szekely proved that if gcd(n,m)=1, then there are integer solutions if and only if 6 divides m, and $a^{1/m}$, $b^{1/m},$ and $c^{1/m}$ are different complex 6th
roots of the same real number.^[120]
Negative exponents[edit]
n = –1[edit]
All primitive (pairwise coprime) integer solutions to $a^{-1}+b^{-1}=c^{-1}$ can be written as^[121]

$a = mn + m^2, \quad b = mn + n^2, \quad c = mn$

for positive, coprime integers m, n.
n = –2[edit]
The case n = –2 also has an infinitude of solutions, and these have a geometric interpretation in terms of right triangles with integer sides and an integer altitude to the hypotenuse.^[122]^[123]
All primitive solutions to $a^{-2}+b^{-2}=d^{-2}$ are given by
$a=(v^2-u^2)(v^2+u^2), \,$
$b=2uv(v^2+u^2), \,$
$d=2uv(v^2-u^2), \,$
for coprime integers u, v with v > u. The geometric interpretation is that a and b are the integer legs of a right triangle and d is the integer altitude to the hypotenuse. Then the hypotenuse itself
is the integer
$c=(v^2+u^2)^2, \,$
so (a, b, c) is a Pythagorean triple.
Integer n < –2[edit]
There are no solutions in integers for $a^n+b^n=c^n$ for integer n < –2. If there were, the equation could be multiplied through by $a^{|n|}b^{|n|}c^{|n|}$ to obtain $(bc)^{|n|}+(ac)^{|n|}=(ab)^{|n|}$, which is impossible by Fermat's Last Theorem.
Values other than positive integers[edit]
Fermat's Last Theorem can easily be extended to positive rationals: the equation

$\left(\frac{a}{b}\right)^n + \left(\frac{c}{d}\right)^n = \left(\frac{e}{f}\right)^n$

can have no solutions for n > 2, because any solution could be rearranged as

$(adf)^n + (cbf)^n = (ebd)^n,$

to which Fermat's Last Theorem applies.
Did Fermat possess a general proof?[edit]
The mathematical techniques used in Fermat's "marvellous proof" are unknown. Only one detailed proof by Fermat has survived, the above proof that no three coprime integers (x, y, z) satisfy the
equation x^4 − y^4 = z^2. Note that this proves Fermat's Last Theorem for the case n=4, since the equation a^4 + b^4 = c^4 can be written as c^4 − b^4 = (a^2)^2.
Taylor and Wiles's proof relies on mathematical techniques developed in the 20th century,^[124] which would be unknown to mathematicians who had worked on Fermat's Last Theorem even a century
earlier. Fermat's "marvellous proof", by comparison, would have had to be elementary, given mathematical knowledge of the time, and so could not have been the same as Wiles' proof. Most
mathematicians and science historians doubt that Fermat had a valid proof of his theorem for all exponents n.^[citation needed]
Harvey Friedman's grand conjecture implies that any provable theorem (including Fermat's last theorem) can be proved using only elementary arithmetic, a rather weak form of arithmetic with addition,
multiplication, exponentiation, and a limited form of induction for formulas with bounded quantifiers.^[125] Any such proof would indeed be 'elementary' but could involve implausibly long proofs of
millions—or millions of millions—of steps, and might be far too long to be Fermat's proof.
Monetary prizes[edit]
In 1816 and again in 1850, the French Academy of Sciences offered a prize for a general proof of Fermat's Last Theorem.^[126] In 1857, the Academy awarded 3000 francs and a gold medal to Kummer for
his research on ideal numbers, although he had not submitted an entry for the prize.^[127] Another prize was offered in 1883 by the Academy of Brussels.^[128]
In 1908, the German industrialist and amateur mathematician Paul Wolfskehl bequeathed 100,000 marks to the Göttingen Academy of Sciences to be offered as a prize for a complete proof of Fermat's Last
Theorem.^[129] On 27 June 1908, the Academy published nine rules for awarding the prize. Among other things, these rules required that the proof be published in a peer-reviewed journal, that the
prize not be awarded until two years after publication, and that no prize be given after 13 September 2007, roughly a century after the competition had begun.^[130] Wiles collected the
Wolfskehl prize money, then worth $50,000, on 27 June 1997.^[131]
Prior to Wiles' proof, thousands of incorrect proofs were submitted to the Wolfskehl committee, amounting to roughly 10 feet (3 meters) of correspondence.^[132] In the first year alone (1907–1908),
621 attempted proofs were submitted, although by the 1970s, the rate of submission had decreased to roughly 3–4 attempted proofs per month. According to F. Schlichting, a Wolfskehl reviewer, most of
the proofs were based on elementary methods taught in schools, and often submitted by "people with a technical education but a failed career".^[133] In the words of mathematical historian Howard Eves, "Fermat's Last Theorem has the peculiar distinction of being the mathematical problem for which the greatest number of incorrect proofs have been published."^[128]
See also[edit]
1. ^ If the exponent n were not prime, it could be written as a product of two smaller integers (n = PQ) in which P is a prime number, and then a^n = a^{PQ} = (a^Q)^P for each of
a, b, and c; i.e., an equivalent solution would also have to exist for the prime exponent P, which is smaller than n.
2. ^ For example, $\left((j^r+1)^s\right)^r + \left(j(j^r+1)^s\right)^r = (j^r+1)^{rs+1}.$
3. ^ This elliptic curve was first suggested in the 1960s by Yves Hellegouarch (de), but he did not call attention to its non-modularity. For more details, see Hellegouarch, Yves (2001). Invitation
to the Mathematics of Fermat-Wiles. Academic Press. ISBN 978-0-12-339251-0.
1. ^ Stark, pp. 151–155.
2. ^ Stillwell J (2003). Elements of Number Theory. New York: Springer-Verlag. pp. 110–112. ISBN 0-387-95587-9.
3. ^ Aczel, pp. 13–15
4. ^ Singh, pp. 18–20.
5. ^ Singh, p. 6.
6. ^ Stark, pp. 145–146.
7. ^ Singh, pp. 50–51.
8. ^ Stark, p. 145.
9. ^ Aczel, pp. 44–45; Singh, pp. 56–58.
10. ^ Aczel, pp. 14–15.
11. ^ Stark, pp. 44–47.
12. ^ Friberg, pp. 333– 334.
13. ^ Dickson, p. 731; Singh, pp. 60–62; Aczel, p. 9.
14. ^ Panchishkin, p. 341
15. ^ Dickson, pp. 615–616; Aczel, p. 44.
16. ^ Ribenboim, pp. 13, 24.
17. ^ Singh, pp. 62–66.
18. ^ Dickson, p. 731.
19. ^ Singh, p. 67; Aczel, p. 10.
20. ^ Freeman L. "Fermat's One Proof". Retrieved 23 May 2009.
21. ^ Ribenboim, pp. 15–24.
22. ^ Frénicle de Bessy, Traité des Triangles Rectangles en Nombres, vol. I, 1676, Paris. Reprinted in Mém. Acad. Roy. Sci., 5, 1666–1699 (1729).
23. ^ Euler L (1738). "Theorematum quorundam arithmeticorum demonstrationes". Comm. Acad. Sci. Petrop. 10: 125–146.. Reprinted Opera omnia, ser. I, "Commentationes Arithmeticae", vol. I, pp. 38–58,
Leipzig:Teubner (1915).
24. ^ ^a ^b ^c Kausler CF (1802). "Nova demonstratio theorematis nec summam, nec differentiam duorum cuborum cubum esse posse". Novi Acta Acad. Petrop. 13: 245–253.
25. ^ Barlow P (1811). An Elementary Investigation of Theory of Numbers. St. Paul's Church-Yard, London: J. Johnson. pp. 144–145.
26. ^ ^a ^b Legendre AM (1830). Théorie des Nombres (Volume II) (3rd ed.). Paris: Firmin Didot Frères. Reprinted in 1955 by A. Blanchard (Paris).
27. ^ Schopis (1825). Einige Sätze aus der unbestimmten Analytik. Gummbinnen: Programm.
28. ^ Terquem O (1846). "Théorèmes sur les puissances des nombres". Nouv. Ann. Math. 5: 70–87.
29. ^ Bertrand J (1851). Traité Élémentaire d'Algèbre. Paris: Hachette. pp. 217–230, 395.
30. ^ Lebesgue VA (1853). "Résolution des équations biquadratiques z^2 = x^4 ± 2^my^4, z^2 = 2^mx^4 − y^4, 2^mz^2 = x^4 ± y^4". J. Math. Pures Appl. 18: 73–86.
Lebesgue VA (1859). Exercices d'Analyse Numérique. Paris: Leiber et Faraguet. pp. 83–84, 89.
Lebesgue VA (1862). Introduction à la Théorie des Nombres. Paris: Mallet-Bachelier. pp. 71–73.
31. ^ Pepin T (1883). "Étude sur l'équation indéterminée ax^4 + by^4 = cz^2". Atti Accad. Naz. Lincei 36: 34–70.
32. ^ Tafelmacher WLA (1893). "Sobre la ecuación x^4 + y^4 = z^4". Ann. Univ. Chile 84: 307–320.
33. ^ Hilbert D (1897). "Die Theorie der algebraischen Zahlkörper". Jahresbericht der Deutschen Mathematiker-Vereinigung 4: 175–546. Reprinted in 1965 in Gesammelte Abhandlungen, vol. I by New York:Chelsea.
34. ^ Bendz TR (1901). Öfver diophantiska ekvationen x^n + y^n = z^n. Uppsala: Almqvist & Wiksells Boktrycken.
35. ^ ^a ^b ^c Gambioli D (1901). "Memoria bibliographica sull'ultimo teorema di Fermat". Period. Mat. 16: 145–192.
36. ^ Kronecker L (1901). Vorlesungen über Zahlentheorie, vol. I. Leipzig: Teubner. pp. 35–38. Reprinted by New York:Springer-Verlag in 1978.
37. ^ Bang A (1905). "Nyt Bevis for at Ligningen x^4 − y^4 = z^4, ikke kan have rationale Løsinger". Nyt Tidsskrift Mat. 16B: 35–36.
38. ^ Sommer J (1907). Vorlesungen über Zahlentheorie. Leipzig: Teubner.
39. ^ Bottari A (1908). "Soluzione intere dell'equazione pitagorica e applicazione alla dimostrazione di alcune teoremi dellla teoria dei numeri". Period. Mat. 23: 104–110.
40. ^ ^a ^b Rychlik K (1910). "On Fermat's last theorem for n = 4 and n = 3 (in Bohemian)". Časopis Pěst. Mat. 39: 65–86.
41. ^ Nutzhorn F (1912). "Den ubestemte Ligning x^4 + y^4 = z^4". Nyt Tidsskrift Mat. 23B: 33–38.
42. ^ Carmichael RD (1913). "On the impossibility of certain Diophantine equations and systems of equations". Amer. Math. Monthly (Mathematical Association of America) 20 (7): 213–221. doi:10.2307/
2974106. JSTOR 2974106.
43. ^ Hancock H (1931). Foundations of the Theory of Algebraic Numbers, vol. I. New York: Macmillan.
44. ^ Vrǎnceanu G (1966). "Asupra teorema lui Fermat pentru n=4". Gaz. Mat. Ser. A 71: 334–335. Reprinted in 1977 in Opera matematica, vol. 4, pp. 202–205, Bucureşti:Edit. Acad. Rep. Soc. Romana.
45. ^ Grant, Mike, and Perella, Malcolm, "Descending to the irrational", Mathematical Gazette 83, July 1999, pp.263–267.
46. ^ Barbara, Roy, "Fermat's last theorem in the case n=4", Mathematical Gazette 91, July 2007, 260–262.
47. ^ Dolan, Stan, "Fermat's method of descente infinie", Mathematical Gazette 95, July 2011, 269–271.
48. ^ Ribenboim, pp. 1–2.
49. ^ Dickson, p. 545.
O'Connor, John J.; Robertson, Edmund F., "Abu Mahmud Hamid ibn al-Khidr Al-Khujandi", MacTutor History of Mathematics archive, University of St Andrews.
50. ^ Euler L (1770) Vollständige Anleitung zur Algebra, Roy. Acad. Sci., St. Petersburg.
51. ^ Freeman L. "Fermat's Last Theorem: Proof for n = 3". Retrieved 23 May 2009.
52. ^ Ribenboim, pp. 24–25; Mordell, pp. 6–8; Edwards, pp. 39–40.
53. ^ Aczel, p. 44; Edwards, pp. 40, 52–54.
J. J. Mačys (2007). "On Euler's hypothetical proof". Mathematical Notes 82 (3–4): 352–356. doi:10.1134/S0001434607090088. MR 2364600.
54. ^ Ribenboim, pp. 33, 37–41.
55. ^ Legendre AM (1823). "Recherches sur quelques objets d'analyse indéterminée, et particulièrement sur le théorème de Fermat". Mém. Acad. Roy. Sci. Institut France 6: 1–60. Reprinted in 1825 as
the "Second Supplément" for a printing of the 2nd edition of Essai sur la Théorie des Nombres, Courcier (Paris). Also reprinted in 1909 in Sphinx-Oedipe, 4, 97–128.
56. ^ Calzolari L (1855). Tentativo per dimostrare il teorema di Fermat sull'equazione indeterminata x^n + y^n = z^n. Ferrara.
57. ^ Lamé G (1865). "Étude des binômes cubiques x^3 ± y^3". C. R. Acad. Sci. Paris 61: 921–924, 961–965.
58. ^ Tait PG (1872). "Mathematical Notes". Proc. Roy. Soc. Edinburgh 7: 144.
59. ^ Günther S (1878). "Über die unbestimmte Gleichung x^3 + y^3 = z^3". Sitzungsberichte Böhm. Ges. Wiss.: 112–120.
60. ^ Krey H (1909). "Neuer Beweis eines arithmetischen Satzes". Math. Naturwiss. Blätter 6: 179–180.
61. ^ Stockhaus H (1910). Beitrag zum Beweis des Fermatschen Satzes. Leipzig: Brandstetter.
62. ^ Carmichael RD (1915). Diophantine Analysis. New York: Wiley.
63. ^ ^a ^b van der Corput JG (1915). "Quelques formes quadratiques et quelques équations indéterminées". Nieuw Archief Wisk. 11: 45–75.
64. ^ Thue A (1917). "Et bevis for at ligningen A^3 + B^3 = C^3 er unmulig i hele tal fra nul forskjellige tal A, B og C". Arch. Mat. Naturv. 34 (15). Reprinted in Selected Mathematical Papers
(1977), Oslo:Universitetsforlaget, pp. 555–559.
65. ^ Duarte FJ (1944). "Sobre la ecuación x^3 + y^3 + z^3 = 0". Ciencias Fis. Mat. Naturales (Caracas) 8: 971–979.
66. ^ Freeman L. "Fermat's Last Theorem: Proof for n = 5". Retrieved 23 May 2009.
67. ^ Ribenboim, p. 49; Mordell, p. 8–9; Aczel, p. 44; Singh, p. 106.
68. ^ Ribenboim, pp. 55–57.
69. ^ Gauss CF (1875, posthumous). "Neue Theorie der Zerlegung der Cuben". Zur Theorie der complexen Zahlen, Werke, vol. II (2nd ed.). Königl. Ges. Wiss. Göttingen. pp. 387–391.
70. ^ Lebesgue VA (1843). "Théorèmes nouveaux sur l'équation indéterminée x^5 + y^5 = az^5". J. Math. Pures Appl. 8: 49–70.
71. ^ Lamé G (1847). "Mémoire sur la résolution en nombres complexes de l'équation A^5 + B^5 + C^5 = 0". J. Math. Pures Appl. 12: 137–171.
72. ^ Gambioli D (1903/4). "Intorno all'ultimo teorema di Fermat". Il Pitagora 10: 11–13, 41–42.
73. ^ Werebrusow AS (1905). "On the equation x^5 + y^5 = Az^5 (in Russian)". Moskov. Math. Samml. 25: 466–473.
74. ^ Rychlik K (1910). "On Fermat's last theorem for n = 5 (in Bohemian)". Časopis Pěst. Mat. 39: 185–195, 305–317.
75. ^ Terjanian G (1987). "Sur une question de V. A. Lebesgue". Ann. Inst. Fourier 37: 19–37. doi:10.5802/aif.1096.
76. ^ Ribenboim, pp. 57–63; Mordell, p. 8; Aczel, p. 44; Singh, p. 106.
77. ^ Lamé G (1839). "Mémoire sur le dernier théorème de Fermat". C. R. Acad. Sci. Paris 9: 45–46.
Lamé G (1840). "Mémoire d'analyse indéterminée démontrant que l'équation x^7 + y^7 = z^7 est impossible en nombres entiers". J. Math. Pures Appl. 5: 195–211.
78. ^ Lebesgue VA (1840). "Démonstration de l'impossibilité de résoudre l'équation x^7 + y^7 + z^7 = 0 en nombres entiers". J. Math. Pures Appl. 5: 276–279, 348–349.
79. ^ Freeman L. "Fermat's Last Theorem: Proof for n = 7". Retrieved 23 May 2009.
80. ^ Genocchi A (1864). "Intorno all'equazioni x^7 + y^7 + z^7 = 0". Annali Mat. 6: 287–288.
Genocchi A (1874). "Sur l'impossibilité de quelques égalités doubles". C. R. Acad. Sci. Paris 78: 433–436.
Genocchi A (1876). "Généralisation du théorème de Lamé sur l'impossibilité de l'équation x^7 + y^7 + z^7 = 0". C. R. Acad. Sci. Paris 82: 910–913.
81. ^ Pepin T (1876). "Impossibilité de l'équation x^7 + y^7 + z^7 = 0". C. R. Acad. Sci. Paris 82: 676–679, 743–747.
82. ^ Maillet E (1897). "Sur l'équation indéterminée ax^λ^t + by^λ^t = cz^λ^t". Assoc. Française Avanc. Sci., St. Etienne (sér. II) 26: 156–168.
83. ^ Thue A (1896). "Über die Auflösbarkeit einiger unbestimmter Gleichungen". Det Kongel. Norske Videnskabers Selskabs Skrifter 7. Reprinted in Selected Mathematical Papers, pp. 19–30,
Oslo:Universitetsforlaget (1977).
84. ^ Tafelmacher WLA (1897). "La ecuación x^3 + y^3 = z^2: Una demonstración nueva del teorema de fermat para el caso de las sestas potencias". Ann. Univ. Chile, Santiago 97: 63–80.
85. ^ Lind B (1909). "Einige zahlentheoretische Sätze". Arch. Math. Phys. 15: 368–369.
86. ^ ^a ^b Kapferer H (1913). "Beweis des Fermatschen Satzes für die Exponenten 6 und 10". Archiv Math. Phys. 21: 143–146.
87. ^ Swift E (1914). "Solution to Problem 206". Amer. Math. Monthly 21: 238–239.
88. ^ ^a ^b Breusch R (1960). "A simple proof of Fermat's last theorem for n = 6, n = 10". Math. Mag. 33 (5): 279–281. doi:10.2307/3029800. JSTOR 3029800.
89. ^ Dirichlet PGL (1832). "Démonstration du théorème de Fermat pour le cas des 14^e puissances". J. Reine Angew. Math. 9: 390–393. Reprinted in Werke, vol. I, pp. 189–194, Berlin:G. Reimer (1889);
reprinted New York:Chelsea (1969).
90. ^ Terjanian G (1974). "L'équation x^14 + y^14 = z^14 en nombres entiers". Bull. Sci. Math. (sér. 2) 98: 91–95.
91. ^ Edwards, pp. 73–74.
92. ^ ^a ^b Edwards, p. 74.
93. ^ Dickson, p. 733.
94. ^ Ribenboim P (1979). 13 Lectures on Fermat's Last Theorem. New York: Springer Verlag. pp. 51–54. ISBN 978-0-387-90432-0.
95. ^ Singh, pp. 97–109.
96. ^ ^a ^b Laubenbacher R, Pengelley D (2007). "Voici ce que j'ai trouvé: Sophie Germain's grand plan to prove Fermat's Last Theorem". Retrieved 19 May 2009.
97. ^ Aczel, p. 57.
98. ^ Terjanian, G. (1977). "Sur l'équation x^2p + y^2p = z^2p". Comptes rendus hebdomadaires des séances de l'Académie des sciences. Série a et B 285: 973–975.
99. ^ Adleman LM, Heath-Brown DR (June 1985). "The first case of Fermat's last theorem". Inventiones Mathematicae (Berlin: Springer) 79 (2): 409–416. doi:10.1007/BF01388981.
100. ^ Aczel, pp. 84–88; Singh, pp. 232–234.
101. ^ Faltings G (1983). "Endlichkeitssätze für abelsche Varietäten über Zahlkörpern". Inventiones Mathematicae 73 (3): 349–366. doi:10.1007/BF01388432.
102. ^ Ribenboim P (1979). 13 Lectures on Fermat's Last Theorem. New York: Springer Verlag. p. 202. ISBN 978-0-387-90432-0.
103. ^ Wagstaff SS, Jr. (1978). "The irregular primes to 125000". Math. Comp. (American Mathematical Society) 32 (142): 583–591. doi:10.2307/2006167. JSTOR 2006167. (PDF)
104. ^ Buhler J, Crandell R, Ernvall R, Metsänkylä T (1993). "Irregular primes and cyclotomic invariants to four million". Math. Comp. (American Mathematical Society) 61 (203): 151–153. doi:10.2307/
2152942. JSTOR 2152942.
105. ^ ^a ^b ^c ^d ^e ^f ^g ^h ^i ^j ^k ^l ^m ^n ^o ^p ^q Singh, Simon (1997). Fermat's Last Theorem. ISBN 1-85702-521-0.
106. ^ Frey G (1986). "Links between stable elliptic curves and certain diophantine equations". Ann. Univ. Sarav. Ser. Math. 1: 1–40.
107. ^ Singh, pp. 194–198; Aczel, pp. 109–114.
108. ^ Singh, p. 205; Aczel, pp. 117–118.
109. ^ Singh, pp. 237–238; Aczel, pp. 121–122.
110. ^ Singh, pp. 239–243; Aczel, pp. 122–125.
111. ^ Singh, pp. 244–253; Aczel, pp. 1–4, 126–128.
112. ^ Aczel, pp. 128–130.
113. ^ Singh, p. 257.
114. ^ Singh, pp. 269–274.
115. ^ Singh, pp. 275–277; Aczel, pp. 132–134.
116. ^ Wiles, Andrew (1995). "Modular elliptic curves and Fermat's Last Theorem" (PDF). Annals of Mathematics (Annals of Mathematics) 141 (3): 443–551. doi:10.2307/2118559. JSTOR 2118559. OCLC
117. ^ Taylor R, Wiles A (1995). "Ring theoretic properties of certain Hecke algebras". Annals of Mathematics (Annals of Mathematics) 141 (3): 553–572. doi:10.2307/2118560. JSTOR 2118560. OCLC
118. ^ Lenstra, Jr. H.W. (1992). On the inverse Fermat equation. Discrete Mathematics, 106–107, pp. 329–331.
119. ^ Newton, M., "A radical diophantine equation", Journal of Number Theory 13 (1981), 495–498.
120. ^ Bennett, Curt D., Glass, Andrew M.W., and Székely, Gábor J. (2004). Fermat’s last theorem for rational exponents. The American Mathematical Monthly, 111, no. 4, pp. 322–329.
121. ^ Dickson, pp. 688–691
122. ^ Voles, Roger, "Integer solutions of $a^{-2}+b^{-2}=d^{-2}$," Mathematical Gazette 83, July 1999, 269–271.
123. ^ Richinick, Jennifer, "The upside-down Pythagorean Theorem," Mathematical Gazette 92, July 2008, 313–317.
124. ^ Avigad, Jeremy (2003). "Number theory and elementary arithmetic". Philosophia Mathematica. Philosophy of Mathematics, its Learning, and its Application. Series III 11 (3): 257–284. doi:10.1093
/philmat/11.3.257. ISSN 0031-8019. MR 2006194. CiteSeerX: 10.1.1.105.6509.
125. ^ Aczel, p. 69; Singh, p. 105.
126. ^ Aczel, p. 69.
127. ^ ^a ^b Koshy T (2001). Elementary number theory with applications. New York: Academic Press. p. 544. ISBN 978-0-12-421171-1.
128. ^ Singh, pp. 120–125, 131–133, 295–296; Aczel, p. 70.
129. ^ Singh, pp. 120–125.
130. ^ Singh, p. 284
131. ^ Singh, p. 295.
132. ^ Singh, pp. 295–296.
Further reading[edit]
External links[edit]
R.N. Swamy
Spon Press
Published: 1990-12-01
List price: $560.00
This book presents the latest research development on fibre reinforced cementitious materials, especially those related to ageing and durability. The book forms the Proceedings of the International
Symposium held at Sheffield in July 1992, the latest in a series of RILEM symposia on this subject, organised by RILEM Technical Committee 102-AFC Ageing and Durability of Fibre Cement Composites.
John G. Lewis
Soc for Industrial & Applied Math
Number of Pages: 578
Published: 1994-05
List price: $107.00
This proceedings volume reflects the structure of the meeting on which it is based. Papers are arranged in two major sections: minisymposium presentations and technical papers for the common interest
sessions. The emphasis is on the latter.
E. Vazques
Spon Press
Number of Pages: 608
Published: 1990-12-31
List price: $300.00
Chemical admixtures are used to modify the properties and behaviour of fresh and hardened concrete. They enable more economic construction and the achievement of special properties such as high
strength or durability. This book presents research information from an International RILEM Symposium on six main topics: workability, setting, strength, durability, other properties and technology.
L. Taerwe
Spon Press
Number of Pages: 714
Published: 1995-08-01
List price: $441.00
Dealing with a wide range of non-metallic materials, this book opens up possibilities of lighter, more durable structures. With contributions from leading international researchers and design
engineers, it provides a complete overview of current knowledge on the subject.
Society for Industrial Mathematics
Number of Pages: 937
Published: 2001-01-04
List price: $157.00
Symposium held in Washington, DC, January 2001. The Symposium was jointly sponsored by the SIAM Activity Group on Discrete Mathematics and by SIGACT, the ACM Special Interest Group on Algorithms and
Computation Theory. Contains 130 papers, which were selected based on originality, technical contribution, and relevance. Although the papers were not formally refereed, every attempt was made to
verify the main claims. It is expected that most will appear in more complete form in scientific journals. The proceedings also includes the paper presented by invited plenary speaker Ronald Graham,
S. P. Novikov
Amer Mathematical Society
Number of Pages: 250
Published: 1993-11
List price: $208.00
This book contains the proceedings of an international topology conference held in the town of Zagulba, near Baku in the former Soviet Union, in October 1987. Sponsored by the Institute of
Mathematics and Mechanics of Azerbaijan and the Steklov Mathematical Institute, the conference was organized by F. G. Maksudov and S. P. Novikov. About 400 mathematicians, including about 100
foreigners, attended the conference. The book covers aspects of general, algebraic, and low-dimensional topology.
What is a topos?
According to Higher Topos Theory (math/0608040), a topos is
a category C which behaves like the category of sets, or (more generally) the category of sheaves of sets on a topological space.
Could one elaborate on that?
lo.logic topos-theory
There are two concepts which both get called a topos, so it depends on who you ask. The more basic notion is that of an elementary topos, which can be characterized in several ways. The simple definition:
An elementary topos is a category C which has finite limits and power objects.
(A power object for A is an object P(A) such that morphisms B --> P(A) are in natural bijection with subobjects of A x B, so we could rephrase the condition "C has power objects" as "the functor Sub
(A x -) is representable for every object A in C").
The issue with the simple definition is that it doesn't show you why these things are actually interesting. It turns out that a great deal follows from these axioms. For example, C also has finite colimits, exponential objects, has a representable limit-preserving functor P: C^op --> Doct where Doct is the category of Heyting algebras such that if f: AxB --> A is the projection map for some objects A and B in C, then P(A) --> P(AxB) has both left and right adjoints considered as a morphism of Heyting algebras, etc. What the long-winded definition boils down to is "an elementary topos is the category of types in some world of intuitionistic logic." There's an incredible amount of material here; the best place to start is probably MacLane and Moerdijk's Sheaves in Geometry and Logic. The main reference work is Johnstone's as-yet-unfinished Sketches of an Elephant, but I certainly wouldn't start there.
The other major notion of topos is that of a Grothendieck topos, which is the category of sheaves of sets on some site (a site is a (decently nice) category with a structure called a Grothendieck
topology which generalizes the notion of "open cover" in the category of open sets in a topological space). Grothendieck topoi are elementary topoi, but the converse is not true; Giraud's Theorem
classifies precisely the conditions needed for an elementary topos to be a Grothendieck topos. Depending on your point of view, you might also look at Sheaves in Geometry and Logic for more info, or
you might check out Grothendieck's SGA4 for the algebraic geometry take on things.
I would tend to think that in Charley's reply above the sentence
There are two concepts which both get called a topos, so it depends on who you ask.
is misleading. As Charley mentions himself afterwards, a Grothendieck topos (a category of sheaves) is a special case of the general notion of topos. It's not like there is a conflict in terms.
One may even consider further special cases that go further in the direction from general toposes to things that look more like topological spaces: the next step is the localic toposes, which are those that are categories of sheaves on categories of open subsets of a topological space.
This, in turn, is seen in higher topos theory (search the keywords on the nLab, I can't post more than one link here, unfortunately) as the beginning of an infinite hierarchy of n-localic oo-toposes: 0-localic toposes are like ordinary topological spaces, while as the n in n-localic increases they become more and more objects of "derived geometry". Again, search the nLab for these keywords for more on that.
The entry nLab: topos provides some more details and in particular provides further links, also to material on higher toposes.
Urs Schreiber
Let us concentrate on Grothendieck topoi. As mentioned in earlier posts, these are the topoi which arise as the category of sheaves on a category equipped with a Grothendieck topology. These are the topoi which "behave the most like sheaves of sets on a topological space". Let me try to explain to what extent Grothendieck topoi are topological in nature.
First, a continuous map f:X->Y induces a geometric morphism Sh(X)->Sh(Y). If X and Y are sober, then there is a bijection between Hom(X,Y) and Hom(Sh(X),Sh(Y)) (where the latter again denotes geometric morphisms). This means that, if we restrict to sober spaces, we get a fully faithful functor Sh:SobTop->Topoi. (Recall via Stone duality that the category of sober spaces is equivalent to the category of locales with enough points.)
More generally, if G is a topological groupoid (a groupoid object in Top), we can construct its classifying topos. This can be defined as follows: take the enriched nerve of G to obtain a simplicial space, apply the functor "Sh" levelwise (viewing the nerve as a diagram of spaces) to obtain a simplicial topos, and then take the (weak) colimit of this diagram to obtain a topos BG. This topos can be described concretely as equivariant sheaves over G_0.
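Schematically (my summary of the recipe above, with face and degeneracy maps suppressed), writing N(G) for the nerve of the groupoid G = (G_1 ⇉ G_0):

```latex
% Apply Sh(-) levelwise to the nerve of the topological groupoid G and
% take the (weak) colimit of the resulting simplicial topos.
\[
  BG \;\simeq\; \operatorname*{colim}_{[n]\in\Delta^{\mathrm{op}}} \mathrm{Sh}\big(N(G)_n\big),
  \qquad
  N(G)_n \;=\; \underbrace{G_1 \times_{G_0} \cdots \times_{G_0} G_1}_{n\ \text{factors}}
\]
```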
Geometrically, BG is a model for the topos of "small sheaves" over the topological stack associated to G. In fact, on etale topological stacks* (this includes all orbifolds), we also have an equivalence between maps of stacks and geometric morphisms between their categories of sheaves, so there is a subcategory (sub-2-category) of Grothendieck topoi which is equivalent to etale topological stacks. These Grothendieck topoi are called topological etendue.
*(over sober spaces)
It turns out that a large class of topoi can be obtained as BG for some topological groupoid. In fact, every Grothendieck topos "with enough points" is equivalent to BG for some topological groupoid. The more general statement is that EVERY Grothendieck topos is equivalent to BG for some localic groupoid (a groupoid object in locales). Since locales are a model for "pointless topology", we see that, in some sense, every Grothendieck topos is "topological". You can make sense of the statement that every Grothendieck topos is equivalent to the category of small sheaves on a localic stack.
|
{"url":"http://mathoverflow.net/questions/101/what-is-a-topos/584","timestamp":"2014-04-20T01:22:33Z","content_type":null,"content_length":"76195","record_id":"<urn:uuid:2464fd14-a6c9-4352-98bd-a25546ebfc09>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Plato Center ACT Tutor
Find a Plato Center ACT Tutor
...When I took the test, I received a perfect score on the math section and 780 out of 800 on the analytic reasoning section. When helping students prepare for standardized tests, I ask them to first take a pre-assessment to analyze their strengths and weaknesses. Then I tailor their lessons to use their strengths to address their weaknesses.
24 Subjects: including ACT Math, calculus, geometry, GRE
...I've helped both high school students and fellow college classmates master complex material with patience and encouragement. I plan to become an optometrist and will be beginning a Doctor of
Optometry program in the Fall. Until then, I am looking forward to helping students with my favorite subjects: algebra, geometry, trigonometry, chemistry, biology, and physics.
25 Subjects: including ACT Math, chemistry, calculus, physics
I am a teacher certified in middle school and high school English. Grammar and proper writing are passions of mine. I am pursuing a career in tutoring rather than teaching, and am in the process
of completing my own "Writing Bootcamp" curriculum.
17 Subjects: including ACT Math, English, reading, writing
...Now that we're comfortable thinking using the algebraic language, we start to think about new things. We flesh out the relations we had only touched on lightly before (e.g. ellipses,
hyperbolas, inequalities, absolute values, logs), and expand a few methods (long division) so we walk away from o...
14 Subjects: including ACT Math, geometry, GRE, ASVAB
...My research projects concerned estrogen receptors and tuberculosis, respectively. While excelling as a student, I have had my fair share of tutoring experiences. I can work with a wide range of ages, as I have tutored children as young as 6 years old in basic skills such as English, Reading, and Math.
16 Subjects: including ACT Math, chemistry, calculus, geometry
|
{"url":"http://www.purplemath.com/plato_center_act_tutors.php","timestamp":"2014-04-16T04:28:18Z","content_type":null,"content_length":"23863","record_id":"<urn:uuid:ad5e4394-7dc4-40e1-9952-062dd48cfa31>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Contact Information
A teaching statement
Recent publications
Householder Symposium XVI: Daniel Szyld, Eric de Sturler, Plamen Koev, Rich Lehoucq, and Howard Elman.
Eric de Sturler
Editor SIAM Journal on Numerical Analysis,
Editor Applied Numerical Mathematics,
Program Director of the SIAM Activity Group on Supercomputing (2003-2006),
Co-Chair of the Third SIAM Conference on Computational Science & Engineering (2005),
Editor International Journal on Computational Science and Engineering,
Editor Open Applied Mathematics Journal,
Affiliate faculty of the Institute for Genomic Biology and the Materials Computation Center at the University of Illinois at Urbana-Champaign,
Adjunct Associate Professor, Department of Computer Science, University of Illinois at Urbana-Champaign.
Virginia Tech Mathematics Department and Faculty Pool. Blacksburg's Beaches
Department of Mathematics
544 McBryde Hall
Virginia Tech
Blacksburg, VA 24061-0123, USA

phone: +1 (540) 231-5279
fax: +1 (540) 231-5960
email: sturler_at_vt_edu
Secretary at UIUC:
Trisha Benson
4322 Thomas M. Siebel Center for
Computer Science, MC-258
phone: +1 (217) 244-0095
email: tdbenson_at_cs_uiuc_edu
The Rime of the Ancient Professor
O teach me, teach me, learned man!
The student did implore.
Tell me, quoth she, of PDE
And ODE much more.

Forthwith this frame of mine was wrenched
With a woeful agony,
Which forced me to begin my class;
And then it left me free.

Since then, each week the selfsame hour,
That agony returns:
And till I have my lecture told,
This heart within me burns.

Three times a week I go to class;
I have strange power of speech;
That moment that her face I see,
I know the student must hear me:
To her my class I teach.
(EdS, after Coleridge's The Rime of the Ancient Mariner)
Recent publications (send email if not yet available online as technical report)
UIUC (to be updated):
● CS 450 Numerical Analysis: A Comprehensive Overview (including online lecture notes)
● CS 455 Numerical Solution of Partial Differential Equations (including online lecture notes)
● CS 458 Numerical Linear Algebra (including online lecture notes)
● CS 550 Numerical Differential Equations (including online lecture notes)
● CS 556 Iterative and Multigrid Methods (including online lecture notes)
Short Courses
● Summer School on Computational Materials Science, Introduction to Computational Nanotechnology, Materials Computation Center, University of Illinois at Urbana-Champaign, June 7 - 18, 2004, Lecture
● Summer School on Computational Materials Science, Tools for Multiple Length and Time Scales, Materials Computation Center, University of Illinois at Urbana-Champaign, May 29 - June 7, 2001.
Lecture notes on MCC website (4 lectures on Krylov subspace methods for linear systems and eigenvalue solvers, and multigrid methods; and lab exercises with additional explanations)
● Implementation Aspects and Parallelism in Krylov Methods; Katholieke Universiteit Leuven (KUL), Leuven, Belgium, 22-24 April, 1998; KUL-UCL Graduate Courses in Numerical Analysis / Seminar Series: Iterative Methods for Large Scale Systems and Eigenvalue Problems, organized by Paul Van Dooren and Stefan Vandewalle. Lecture notes (efficient parallel implementations, and message-passing and data-parallel implementations)
● Iterative Linear Solvers and Preconditioners (with Martin Gutknecht and Michele Benzi); ETH Zurich, October 16-17, 1997; Lecture notes (optimal methods, optimal truncation, and software)
● University of Basel, Institut fuer Informatik, May 12-14, 1997, High Performance Fortran Course, Basic issues in (data) parallel programming; High Performance Fortran: an overview; Fortran 90;
Data mapping; Data Parallelism; System and data mapping inquiry functions; Compiler and tools; Programming examples and performance studies.
o Lecture notes (at some point)
Selected Journal and Book Publications (send email if not available online)
1. Xin-Guang Zhu, Eric de Sturler, Rafael Alba, and Stephen P. Long, A Simple Model of the Calvin Cycle Has Only One Physiologically Feasible Steady State under the Same External Conditions,
Nonlinear Analysis Series B: Real World Applications, accepted,
2. Xin-Guang Zhu, Eric de Sturler, Stephen Long, Optimizing the distribution of resources between enzymes of carbon metabolism can dramatically increase photosynthetic rate. A numerical simulation
using an evolutionary algorithm, Plant Physiology, October 2007, Vol. 145, pp. 513—526, 2007; DOI:10.1104/pp.107.103713, www.plantphysiol.org (open access article)
3. Shun Wang, Eric de Sturler, and Glaucio H. Paulino, Large-Scale Topology Optimization using Preconditioned Krylov Subspace Methods with Recycling, International Journal for Numerical Methods in
Engineering, Vol. 69, Issue 12, pp. 2441—2468, 2007, DOI: 10.1002/nme.1798,
4. Mike Parks, Eric de Sturler, Greg Mackey, Duane D. Johnson, and Spandan Maiti, Recycling Krylov Subspaces for Sequences of Linear Systems ©2006 SIAM, SIAM Journal on Scientific Computing, Vol.
28, No. 5, pp. 1651-1674, 2006,
5. Chris Siefert and Eric de Sturler, Probing Methods for Saddle-Point Problems, Electronic Transactions in Numerical Analysis (ETNA), Volume 22, pp. 163-183, 2006, in Special Volume on "Saddle
Point Problems: Numerical Solution and Applications",
○ original version available as Tech. Report UIUCDCS-R-2005-2540 and UILU-ENG-2005-1744, March 2005,
6. Chris Siefert and Eric de Sturler, Preconditioners for Generalized Saddle-Point Problems ©2006 SIAM, SIAM Journal on Numerical Analysis, 44(3), pp. 1275-1296, 2006, original version available as
Technical Report UIUCDCS-R-2004-2448 and UILU-ENG-2004-1750, June 2004.
7. Saddle Point Problems: Numerical Solution and Applications, Special Volume of Electronic Transactions on Numerical Analysis (ETNA), Volume 22, 2006, Michele Benzi, Richard B. Lehoucq, and Eric de
Sturler (editors),
8. Misha Kilmer and Eric de Sturler, Recycling Subspace Information for Diffuse Optical Tomography ©2006 SIAM, SIAM Journal on Scientific Computing 27(6), pp. 2140-2166, 2006, original version
available as Technical Report UIUCDCS-R-2004-2447, June 2004,
9. Chris Siefert and Eric de Sturler, Preconditioners for Generalized Saddle-Point Problems, accepted for publication in SIAM Journal on Numerical Analysis, December 2005,
10. Xin-Guang Zhu, Govindjee, Neil Baker, Eric de Sturler, Donald R. Ort, Stephen P. Long, Chlorophyll a fluorescence induction kinetics in leaves predicted from a model describing each discrete step
of excitation energy and electron transfer associated with Photosystem II, Planta 223(1), pages 114-133, 2005; the original publication is available at www.springerlink.com as pdf or html,
11. Eric de Sturler and Jörg Liesen, Block-diagonal and Constraint Preconditioners for Nonsymmetric Indefinite Linear Systems. Part I: Theory ©2005 SIAM, SIAM Journal on Scientific Computing 26(5),
pp. 1598-1619, 2005,
12. Eric de Sturler, Improving the Convergence of the Jacobi-Davidson Algorithm,
13. A. Sheffer and E. de Sturler, Smoothing an Overlay Grid to Minimize Linear Distortion in Texture Mapping ©2002 ACM, ACM Transaction on Graphics 21(4), pp. 874-890, 2002 (final draft with color
14. A. Sheffer and E. de Sturler, Parameterization of Faceted Surfaces for Meshing Using Angle Based Flattening (final draft), Engineering with Computers 17(3) (2001), pp. 326-337. (available from
Springer) Special issue containing 10 selected (and updated) papers from the 9th International Meshing Round Table Conference 2000.
15. J. Liesen, E. de Sturler, A. Sheffer, Y. Aydin, C. Siefert, Preconditioners for Indefinite Linear Systems Arising in Surface Parameterization, Proceedings of the 10th International Meshing Round
Table, Newport Beach, CA, October 7-10, 2001, Sandia National Laboratories, (also available from http://www.andrew.cmu.edu/user/sowen/imr10.html),
16. E. de Sturler, J. Hoeflinger, L. Kale, and M. Bhandarkar, A New Approach to Software Integration Frameworks for Multi-Physics Simulation codes, in The Architecture of Scientific Software, R.F.
Boisvert, P.T.P. Tang, eds., Kluwer Academic Publishers, Boston, 2001, ISBN 0-7923-7339-1.
17. M. Bhandarkar, L.V. Kale, E. de Sturler, J. Hoeflinger, Adaptive Load Balancing for MPI Programs, Lecture Notes in Computer Science 2074, V.N. Alexandrov e.a. (Eds.): Proceedings of the
International Conference in Computational Science, May 28-30, 2001, San Francisco, CA, pp. 108-117, Springer-Verlag, 2001.
18. A. Sheffer and E. de Sturler, Surface Parameterization for Meshing by Triangulation Flattening, Proceedings of the 9th International Meshing Round Table Conference 2000, New Orleans, October 2-5,
2000, pp. 161-172, Sandia National Laboratories, (also available from http://www.andrew.cmu.edu/user/sowen/imr9.html).
19. E. de Sturler, Truncation Strategies for Optimal Krylov Subspace Methods, ©1999 SIAM, SIAM Journal on Numerical Analysis Vol. 36 (1999), pp. 864-889. Awarded the second prize at the Leslie Fox
Prize Meeting 1997.
20. E. de Sturler. Variations on the Jacobi-Davidson Theme. In David Kincaid and Anne Elster, editors: Iterative Methods in Scientific Computation IV: Proceedings of the Fourth IMACS International
Symposium on Iterative Methods in Scientific Computation, honoring the 75th birthday of David M. Young, Austin, Texas, October 18-20, 1998, IMACS series in Computational and Applied Mathematics,
Volume 5, IMACS 1999, pp. 313-323.
21. E. de Sturler. BiCG Explained. Householder Symposium XIV: Proceedings of the Householder International Symposium in Numerical Algebra, Chateau Whistler, Whistler, BC, Canada, June 13-19, 1999.
22. E. de Sturler and D. Loher, Parallel Iterative Solvers for Irregular Sparse Matrices in High Performance Fortran, Journal of Future Generation Computer Systems Vol. 13 (1998), pp. 315-325.
Special issue containing updated versions of the 15 best papers selected out of 130 papers accepted for HPCN’97.
23. E. de Sturler and V. Strumpen, Scientific Programming with High Performance Fortran: A Case Study Using the xHPF Compiler, Scientific Programming Vol. 6 (1997), pp. 127-152. Special issue: High
Performance Fortran Comes of Age.
24. E. de Sturler and D. Loher. Implementing Iterative Solvers for Irregular Sparse Matrix Problems in High Performance Fortran. In C.D. Polychronopoulos, K. Joe, K. Araki, M. Amamiya, editors: High
Performance Computing, International Symposium, ISHPC'97, Fukuoka, Japan, November 4 - 6, 1997, Lecture Notes in Computer Science 1336, Springer-Verlag, Berlin, Heidelberg, Germany, 1997.
25. E. de Sturler, Nested Krylov Methods based on GCR, Journal of Computational and Applied Mathematics Vol. 67 (1996), pp. 15-41.
26. E. de Sturler, A Performance Model for Krylov Subspace Methods on Mesh-based Parallel Computers, Parallel Computing Vol. 22 (1996), pp. 57-74.
27. E. de Sturler. Inner-outer Methods with Deflation for Linear Systems with Multiple Right-Hand Sides. In Householder Symposium XIII, Proceedings of the Householder Symposium on Numerical Algebra,
pp. 193-196. June 17 - 26, 1996, Pontresina, Switzerland.
28. E. de Sturler and H. A. van der Vorst, Reducing the Effect of Global Communication in GMRES(m) and CG on Parallel Distributed Memory Computers, Applied Numerical Mathematics (IMACS), Vol. 18
(1995), pp. 441-459.
29. F. Nataf, F. Rogier, and E. de Sturler. Domain Decomposition Methods for Fluid Dynamics. In A. Sequeira, editor: Navier-Stokes Equations and Related Nonlinear Problems, pp. 367-376, New York,
1995. Plenum Press.
30. E. de Sturler, Incomplete Block LU Preconditioners on Slightly Overlapping Subdomains for a Massively Parallel Computer, Applied Numerical Mathematics (IMACS), Vol. 19 (1995), pp. 129-146.
31. Eric de Sturler. IBLU Preconditioners for Massively Parallel Computers. In D. E. Keyes & J. Xu, editors: Domain Decomposition Methods in Science and Engineering, Proceedings of the Seventh
International Conference on Domain Decomposition, October 27-30, 1993, The Pennsylvania State University, pp. 395-400, American Mathematical Society, Providence, USA, 1994.
32. C. Clémençon, K. M. Decker, A. Endo, J. Fritscher, G. Jost, T. Maruyama, N. Masuda, A. Müller, R. Rühl, W. Sawyer, E. de Sturler, B. J. N. Wylie, and F. Zimmermann, Architecture and
Programmability of the NEC Cenju-3, SPEEDUP, Vol. 8(2) (1994), pp. 15-22.
33. E. de Sturler. Iterative Methods on Distributed Memory Computers, Ph.D. thesis, Delft University of Technology, Delft, The Netherlands, October 1994 (ISBN 90-9007579-8).
34. E. de Sturler and H. A. van der Vorst. Communication Cost Reduction for Krylov Methods on Parallel Computers. In W. Gentzsch and U. Harms, editors: High-Performance Computing and Networking,
Lecture Notes in Computer Science 797, pages 190-195, Springer-Verlag, Berlin, Heidelberg, Germany, 1994.
35. E. de Sturler and D. R. Fokkema. Nested Krylov Methods and Preserving the Orthogonality. In N. Duane Melson, T. A. Manteuffel, and S. F. McCormick, editors, Sixth Copper Mountain Conference on
Multigrid Methods, NASA Conference Publication 3224, Part 1, pages 111-125, NASA Langley Research Center, Hampton, VA, USA, 1993.
36. E. de Sturler. A Parallel Variant of GMRES(m). In J.J.H. Miller and R.Vichnevetsky, editors, Proceedings of the 13th IMACS World Congress on Computation and Applied Mathematics, pp. 682-683,
Dublin, Ireland, 1991. Criterion Press.
37. G. H. Schmidt and E. de Sturler. Multigrid for porous medium flow on locally refined three dimensional grids. In W. Hackbusch and U. Trottenberg, editors, Multigrid Methods: Special Topics and
Applications II, proceedings of the 3rd European Conference on Multigrid Methods, Bonn, Germany, October 1-4, 1990, GMD-Studien Nr. 189, pages 309-318, Gesellschaft für Mathematik und
Datenverarbeitung, Sankt Augustin, Germany, 1991.
Full List of Invitations and Invited/Plenary Presentations (old – needs updating)
1. Krylov Recycling for Quasi-Newton methods, 5th European Conference on Computational Methods in Applied Sciences and Engineering/8th World Congress on Computational Mechanics (ECCOMAS 2008/WCCM8),
Venice, Italy, June 30 – July 4, 2008.
2. An Overview of Efficient Solution Methods for Sequences of Linear Systems, in Minisymposium, 6th International Congress on Industrial and Applied Mathematics (ICIAM 2007), Zurich, Switzerland,
July 16-20, 2007,
3. Solving Long Sequences of Linear Systems, Department of Mathematics, TU Berlin (Berlin, Germany), May 22, 2007,
4. Schlumberger-Doll Research & Tufts University, February 18 – 21, Schlumberger-Tufts Computational and Applied Math Seminar, April 19, 2007, A Trust Region Method with Regularized Gauss-Newton
Step for Ill-conditioned Nonlinear Problems,
5. Recycling Krylov Basis Methods, in Minisymposium, SIAM Conference on Mathematical & Computational Issues in the Geosciences, Santa Fe, New Mexico, March 19-22, 2007,
6. Research Directions and Enabling Technologies for the Future of CS&E, Panel member, SIAM Conference on Computational Science and Engineering, Costa Mesa, California, February 19-23, 2007,
7. Large Scale Topology Optimization Using Preconditioned Krylov Subspace Recycling and Continuous Approximation of Material Distribution, Keynote Lecture, Multiscale and Functionally Graded
Materials Conference 2006 (FGM 2006), Ko Olina, O’ahu, Hawaii, October 15-18, 2006,
8. Fast Solution of Long Sequences of Linear Systems in Computational Mechanics, in Minisymposium, 7th World Congress on Computational Mechanics, Los Angeles, California, July 16-21, 2006,
9. Wright-Patterson Air Force Base, August 14-15, 2006, Topology Optimization Applied to Advanced Structural Systems,
10. Schlumberger Doll Research, Cambridge, August 8, 2006, Fast Solvers for Long Sequences of Linear Systems,
11. Parallel Multilevel Preconditioners for Simulations on Adaptively Refined Meshes, SIAM Conference on Parallel Processing for Scientific Computing, San Francisco, California, February 22-24, 2006,
12. Department of Computer Science, College of William and Mary, October 28, 2005, Fast Solvers for Long Sequences of Linear Systems and their Analysis,
13. Computer Science Research Institute, summer faculty, Sandia National Laboratories, Albuquerque, June - August, 2005,
14. Recycling Krylov Subspaces for Sequences of Linear Systems: Convergence Analysis, 45 minute plenary lecture at the Householder XVI Symposium on Numerical Linear Algebra, Seven Springs Mountain
Resort, Champion, PA, May 23-27, 2005,
15. Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah, May 9-11, 2005, Fast Solvers for Long Sequences of Linear Systems,
16. Parallel Multilevel Preconditioners for Simulations on Adaptively Refined Meshes, 5th Symposium of the Los Alamos Computer Science Institute (LACSI), AMR Workshop, Santa Fe, New Mexico, October
12-14, 2004,
17. Optimization Technology Center and Northwestern University (broadcast through Access Grid), Evanston, Illinois, October 6, 2004, Probing for Schur Complements and Preconditioning Generalized
Saddle-Point Problems,
18. Department of Mathematics and Computer Science, Emory University, Atlanta, Georgia, March 17-19, 2004, Iterative Solution of a Sequence of Linear Systems,
19. SCCM Program, Stanford University, February 27, 2004, Iterative Solution of a Sequence of Linear Systems,
20. Department of Computer Science, University of Maryland at College Park, December 9-11, 2004, Preconditioners for Generalized Saddle-Point Problems,
21. Preconditioners for Generalized Saddle-Point Problems, Solution Methods for Saddle-Point Systems in Computational Mechanics, workshop organized by Sandia National Labs and the SCCM Program
Stanford, Santa Fe, New Mexico, December 3-6, 2003,
22. Updating, Truncating, and Reusing Krylov Spaces, Dagstuhl Seminar on Theoretical and Computational Aspects of Matrix Algorithms, Schloss Dagstuhl, Wadern, Germany, October 11-18, 2003,
23. Department of Mathematics, Tufts University, September 17-21, 2003, Iterative Solution of a Sequence of Linear Systems,
24. Parallel Multi-Physics Integration Frameworks for Virtual Rocket Science, in Minisymposium ‘Supporting Infrastructures for Computational Mechanics Applications’ (Carter Edwards), 7th US National
Congress on Computational Mechanics 2003, July 27-30, 2003, Albuquerque, NM,
25. Preconditioning Saddle-point Problems using Cheap Approximations to the Schur Complement, in Minisymposium Analysis and Algorithms for Ill-conditioned Saddle-point Systems (Kim-Chuan Toh), ICIAM
2003, July 7-11, 2003, Sydney, Australia,
26. Krylov Methods for Lattice QCD, Third International Workshop on Numerical Analysis and Lattice QCD, University of Edinburgh & UK National e-Science Centre, e-Science, Institute, Edinburgh,
Scotland, June 30 – July 4, 2003,
27. Inner-outer Iteration for a Sequence of Linear Systems, Sparse Matrices and Grid Computing, St Girons Deux, June 10-13, 2003, St. Girons, France,
28. Preconditioning Saddle Point Problems using Cheap Approximations to the Schur Complement, ETNA: Following the Flows of Numerical Analysis, Kent, OH, May 29-31, 2003,
29. Ecole Polytechnique de Montreal, Dept. of Computer Science, Montreal, Canada, November 19-23, 2002, Algorithmic and Accuracy Issues in Surface Parameterization,
30. Centre de Recherche en Calcul Applique (CERCA), Montreal November 19-23, 2002, Fast Solution of Symmetric and Nonsymmetric Indefinite Problems,
31. Fast Solution Methods and Preconditioners for Symmetric and Nonsymmetric Indefinite Systems, Plenary presentation at the 15th Householder Symposium in Numerical Linear Algebra (June 2002),
32. Norm-wise and Iteration-wise Bounds on the Convergence of Krylov Subspace Methods relative to GMRES, in Minisymposium on Fundamentals of Krylov Subspace Methods (Anne Greenbaum and Nick
Trefethen), Copper Mountain Conference on Iterative Methods, Copper Mountain, Colorado, March 24-29, 2002
33. Accuracy and Algorithmic Issues in Surface Parameterization, Fifth AFA Conference on “Curves and Surfaces”, June 27-July 3, 2002, St. Malo, Bretagne, France, in minisymposium “Parameterization
and Surface Reconstruction” (Michael Floater),
34. University of Leiden (Netherlands), Department of Mathematics, January 7, 2002, Fast Solution Methods and Preconditioners for Symmetric and Nonsymmetric Indefinite Systems,
35. University of Utrecht (Netherlands), Department of Mathematics, December 14, 2001 - January 12, 2002, Fast Solution Methods and Preconditioners for Symmetric and Nonsymmetric Indefinite Systems,
36. Extending and truncating the search space in the Jacobi-Davidson Algorithm, in Minisymposium on Invariant Subspaces (H.A. van der Vorst), SIAM Conference on Numerical Linear Algebra, Raleigh,
North-Carolina, USA, October 23-26, 2000,
37. Analysis of Newton and Newton-Krylov Methods for Nonlinear Problems with Ill-conditioned Jacobians, Workshop on Solution Methods for Large-Scale Nonlinear Problems, Lawrence Livermore National
Laboratory, CASC/Institute for Terascale Simulation, Livermore, CA, July 26-28, 2000,
38. Lawrence Livermore National Laboratory/CASC, Livermore, CA, March 12-19, 2000, Programming Environments for Multi-Application Simulation,
39. Analyzing Krylov Subspace Methods by their Projections, in Minisymposium on the Analysis of Krylov Subspace Methods (Anne Greenbaum), 2000 Joint Mathematics Meetings (AMS/MAA/SIAM), Washington
DC, January 19-22, 2000,
40. Ecole Polytechniqe / Centre de Mathematiques Appliquées, Paris (Palaiseau), France, 3-5 January, 2000, Three-dimensional, Integrated, Simulation of Solid Propellant Rockets (overview),
41. Universite Paris VI Pierre et Marie Curie / Laboratoire d’Analyse Numerique, Paris, France, January 3, 2000, Three-dimensional, Integrated, Simulation of Solid Propellant Rockets - coupling
applications on separate grids,
42. Argonne National Laboratory / Mathematics and Computer Science Division, Chicago, Illinois, USA, December 9, 1999, A Programming Environment for Multi-Application Simulations,
43. Swiss Federal Institute of Technology (ETH Zurich) / Department of Civil and Environmental Engineering (VAW), Zurich, Switzerland, July 15-August 1, 1999, Fellow of the European Research
Consortium on Flow, Turbulence, and Combustion (ERCOFTAC), Integrated, Three-dimensional Simulation of Solid Propellant Rockets,
44. BiCG Explained, Householder Symposium XIV, Chateau Whistler, Whistler, BC, Canada, June 13-19, 1999,
45. The Convergence of BiCG, A Mathematical Journey through Analysis, Matrix Theory, and Scientific Computing: A Conference in Honor Richard Varga's 70th Birthday, Kent State University, Kent, OH,
USA, March 25-27, 1999,
46. Los Alamos National Laboratory, Los Alamos, NM, USA, March 17-18, 1999, The Convergence of BiCG,
47. Sandia National Laboratories, Albuquerque, NM, USA, March 15-16,19, 1999,
a. Truncation of Optimal Krylov Subspace Methods,
b. The Convergence of BiCG,
48. Technische Universität Bergakademie Freiberg / Institut für Angewandte Mathematik II, Freiberg, Germany, 11-15 November, 1998, The Convergence of BiCG,
49. College of William and Mary / Department of Computer Science, Williamsburg, Virginia, USA, 21-27 October, 1998, Truncation Strategies for Optimal Krylov Subspace Methods,
50. Variations on the Jacobi-Davidson Theme, plenary lecture, Fourth IMACS International Symposium on Iterative Methods in Scientific Computation, honoring the 75th birthday of David M. Young,
Austin, Texas, USA, October 18-20, 1998,
51. Panel member in the Forum Discussion at the High Performance Fortran User Group Conference 1998 (HUG’98), Porto, Portugal, 25-26 June, 1998, The Future of High Performance Fortran...,
52. Katholieke Universiteit Leuven (KUL), Leuven, Belgium, 22-24 April, 1998, Implementation Aspects and Parallelism in Krylov Methods. (in KUL-UCL Graduate Courses in Numerical Analysis / Seminar
Series: Iterative Methods for Large Scale Systems and Eigenvalue Problems, organized by Paul Van Dooren and Stefan Vandewalle),
a. Efficient Parallel Implementations of Krylov Subspace Methods,
b. Message Passing and Data Parallel Implementation Strategies,
c. Truncation Strategies for Optimal Krylov Subspace Methods,
53. Stanford University / SCCM / Dept. Computer Science, USA, 18-21 February, 1998, Parallel Iterative Solvers for Irregular Sparse Matrices in High Performance Fortran,
54. Ecole Polytechnique / Centre de Mathématiques Appliquées, Paris (Palaiseau), France, 2-6 February, 1998,
a. The Convergence of BiCG,
b. Parallel Iterative Solvers for Irregular Sparse Matrices in High Performance Fortran,
55. Australian National University/Computer Sciences Laboratory, Canberra, Australia, 10-16 November, 1997,
a. Truncation Strategies for Optimal Krylov Subspace Methods,
b. Parallel Iterative Solvers for Irregular Sparse Matrices in High Performance Fortran,
56. Stanford University / SCCM, USA, June 30 - August 12, 1997, Truncation Strategies for Optimal Krylov Subspace Methods,
57. A Robust and Efficient Truncated GMRES, Third IMACS International Symposium on Iterative Methods in Scientific Computation, Jackson Hole, Wyoming, USA, July 9-12, 1997 (in minisymposium),
58. Truncation Strategies for Optimal Krylov Subspace Methods, Leslie Fox Prize Meeting 1997, University of Dundee, Scotland, June 23, 1997. Awarded 2nd Prize,
59. MIT Lab. for Computer Science, Cambridge MA, USA, 14-18 October, 1996, Parallel Solution of Irregular Sparse Problems using High Performance Fortran, (in Supercomputing Technologies Seminar),
60. Inner-Outer Methods with Deflation for Linear Systems with Multiple Right Hand Sides, Householder Symposium XIII, 17-21 June, 1996, Pontresina, Switzerland,
61. Stanford University/SCCM, USA, 15-19 April, 1996, Inner-Outer Methods with Deflation for Linear Systems with Multiple Right Hand Sides,
62. Inner-Outer Methods with Deflation for Multiple Right Hand Sides, plenary lecture, Workshop on Large Sparse Matrix Computations and Lattice Quantum Chromodynamics, University of Kentucky,
Lexington, USA, August 31-September 2, 1995.
(to do)
STL: how to get the sequence number of a newly added item into a set
<This a fix to my prev. posting>
"bilgekhan" wrote:
> "Daniel Pitts" wrote:
> > Frank Birbacher wrote:
> > >
> > > Don't be offended. It is just that std::set will never have a
> > > "index_last_added" method. It's by design. If you need a
> > > function like that you need to implement a new set class of your own.
> > > Having done that come back and tell us how easy it was to determine the
> > > exact size of a tree branch which is never visited or how to keep the
> > > counts up-to-date which tell you the exact size, how you then managed to
> > > keep the insert in O(log n). I think this is too complicated compared to
> > > a sorted vector solution.
> > >
> > I agree with your point, but feel like pointing out that it is possible
> > (almost trivial) to have every node on a tree keep track of how many
> > children it has. An insert operation would increment the value on the
> > return-trip if the insert was successful.
> >
> > My question to the OP is, do you need to update the insert-position of
> > the elements *after* the last inserted element? say I have an existing set:
> >
> > <1, 5, 6> so, 5 and 6 are at position 1 and 2 respectively...
> > I insert 3:
> > <1, 3, 5, 6> and determine that 3 is at position 1, do I need to update
> > the associated data with 5 and 6 to say they are now at position 2 and 3
> > respectively?
> From point of application code it is sufficient to know the position of
> just only the *last* successful insert() operation and of the
> *last* successful find() operation.
> Ie. in the application code there is no need to keep track of the positions
> of the other items; just that of the latest successful insert() or find().
> In the example above, after inserting 3 into this set
> it just shall be possible to get the position
> of this latest new item (ie. here the position of 3, ie. 1).
> There is no need for the library to keep track or to update
> anything defined in the application code that references items of the set.
> Ie. there are no such references (at least not in my code).
Besides the above functionality, a further, more powerful solution
would be if the traversing of the container would be possible
for all these 3 cases:
1) traverse in default (sorted) order
2) traverse in insert order
3) traverse in raw (physical) order
The 1st one is of course already possible.
The 2nd case could also be realized efficiently in application code
if only the above-mentioned functionality were available. I.e. one then
would need to store and update the returned pos of the inserted item
in a std::vector<uint> at position pos. But before doing this one would need
to move (in reverse) all items from pos to the end by 1 position,
and while doing this increment by 1 those items that have a
value >= the pos to insert...
But one would also need direct (array-style) access to the items of the set via a uint handle...
The 3rd could be done for example like this:
cursor c;                           // stores last walked pos etc.
while ((it = s.next(c)) != s.end())
    do_something(*it);
Ie. one would need to implement a "next(cursor& c)" method in the library...
These are just some ideas of mine...
At the moment I unfortunately don't have time to implement these
IMO useful functionalities myself, besides this I don't know the STL
that well, much less its internals. It seems one would need to
extend the underlying code of the RB or AVL tree and add
a uint data member to each block and update it with each insert, delete etc.
A long time ago I worked on B-tree implementations and I can imagine
that the most complicated case would be when a block needs
to be split or when two blocks need to be joined...
People interested in implementing the above features
can also take a look at this interesting project:
"STL containers map, set, and list with fast array access" by Ray Virzi
"Source code for STL compliant container classes that add
fast indexing capability to existing container types":
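The "sorted vector solution" mentioned earlier in the thread can be sketched in a few lines. The following is an illustrative Python version (not C++; the class name IndexedSet and its method names are my own): a sorted list plus binary search reports the position of the last successful insert() or find() with an O(log n) search, at the cost of an O(n) element shift on insertion.

```python
import bisect

class IndexedSet:
    """Sorted, duplicate-free container that reports the position
    of the last successful insert and of the last successful find."""

    def __init__(self):
        self._items = []                 # kept sorted at all times

    def insert(self, x):
        """Insert x; return its index, or None if it is already present."""
        pos = bisect.bisect_left(self._items, x)
        if pos < len(self._items) and self._items[pos] == x:
            return None                  # duplicate: insert fails
        self._items.insert(pos, x)       # O(log n) search, O(n) shift
        return pos

    def find(self, x):
        """Return the index of x, or None if it is absent."""
        pos = bisect.bisect_left(self._items, x)
        if pos < len(self._items) and self._items[pos] == x:
            return pos
        return None
```

With the thread's example set <1, 5, 6>, inserting 3 reports position 1, exactly as requested.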
Marbles Probability, Should Be Easy
June 7th 2011, 10:48 AM
Marbles Probability, Should Be Easy
I'm looking at this question, and I know the answer. You have 10 blue and 12 red marbles, select two at random, what's the probability that you get at least one blue? I know I can find the
probability of obtaining all red marbles, $\frac{12\cdot11}{22\cdot 21}$ and then find one minus this to get 5/7.
But I initially tried doing this by finding total number of ways of obtaining one blue marble (10) plus total number of ways of obtaining two blue marbles (10C2) divided by total number of ways
of selecting any two marbles (22C2), and my answer is off by a factor of 3. Can anyone explain what's going wrong in this calculation?
June 7th 2011, 11:15 AM
I'm looking at this question, and I know the answer. You have 10 blue and 12 red marbles, select two at random, what's the probability that you get at least one blue? I know I can find the
probability of obtaining all red marbles, $\frac{12\cdot11}{22\cdot 21}$ and then find one minus this to get 5/7.
Try $\frac{\binom{10}{1}\binom{12}{1}}{\binom{22}{2}}+\frac{\binom{10}{2}}{\binom{22}{2}}$.
June 7th 2011, 02:30 PM
Hello, ragnar!
You have 10 blue and 12 red marbles, select two at random.
What's the probability that you get at least one blue?
I know I can find the probability of obtaining all red marbles, $\frac{12\cdot11}{22\cdot 21}$
and then find one minus this to get 5/7.
But I initially tried doing this by finding
total number of ways of obtaining one blue marble (10) . ← Here!
plus total number of ways of obtaining two blue marbles (10C2)
divided by total number of ways of selecting any two marbles (22C2),
and my answer is off by a factor of 3.
Can anyone explain what's going wrong in this calculation?
Number of ways to get one blue marble and one red marble:
. . $(_{10}C_1)(_{12}C_1) \:=\:(10)(12) \:=\:120$
Number of ways to get two blue marbles:
. . $_{10}C_2 \:=\:45$
Number of ways to get at least one blue marble: . $120 + 45 \:=\:165$
. . $\text{Therefore: }\;P(\text{at least one blue}) \:=\:\frac{165}{231} \;=\;\frac{5}{7}$
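The counts above are easy to check numerically, and they also pinpoint the original mistake: counting "one blue marble" as 10 outcomes ignores the 12 choices of red partner, so the correct count is 10 · 12 = 120, and 165/55 = 3 is exactly the missing factor of 3. A short Python check:

```python
from fractions import Fraction
from math import comb

total = comb(22, 2)                    # 231 ways to draw any 2 of 22 marbles
one_blue = comb(10, 1) * comb(12, 1)   # one blue AND one red: 10 * 12 = 120
two_blue = comb(10, 2)                 # both marbles blue: 45
favourable = one_blue + two_blue       # 165

p = Fraction(favourable, total)        # 165/231 reduces to 5/7

# Cross-check via the complement used in the original post: 1 - P(both red).
p_complement = 1 - Fraction(comb(12, 2), total)
```

Both routes give the same 5/7.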
Catastrophe theory
Catastrophe theory was very fashionable in 70-s and 80-s. Rene Thom was one of its spiritual leaders. This theory originated from qualitative solution of differential equations and it has nothing in
common with Apocalypse or UFO.
Catastrophe means the loss of stability in a dynamic system. The major method of this theory is sorting dynamic variables into slow and fast. Then stability features of fast variables may change
slowly due to dynamics of slow variables.
The theory of catastrophes was applied to the spruce budworm (Choristoneura fumiferana) (Casti 1982, Ecol. Modell., 14: 293-300). We will use the model which was considered in the previous section
and modify it by adding a slow variable: the average age of trees in the stand.
The performance of spruce budworm populations is better in mature spruce stands than in young stands. Thus, we will assume that the intrinsic rate of increase (r) and carrying capacity (K) both
increase with the age of host trees:
where A is the average age of trees in a stand. Now the model is represented by the equation:
The first term is the logistic model, and the second term describes mortality caused by generalist predators which have a type III functional response. Equilibrium points can be found by solving this
equation with the left part set to zero (dN/dt = 0):
The left graph shows phase plots for various forest ages from A = 35 to A = 85, and the right graph shows equilibrium points (where the derivative is equal to 0). Only one non-zero equilibrium exists
if N<38 or N>74. If 40<A<74, then there are two stable equilibria separated by one unstable equilibrium. The equilibrium line folds.
The age of trees continue increasing with time. Age can be considered as a "slow" variable as compared to population density which is a "fast" variable. Dynamics of the system can be explained using
the graph:
Fast processes are vertical arrows; slow processes are thick arrows. Slow processes go along stable lines until it ends, then there is a fast "jump" to another stable line.
Direction of slow processes. When the density of spruce budworm is low, then there is little mortality of trees and the average age of trees increases. Thus, the slow process at the lower branch of
stable budworm density is directed to the right (increasing of stand age). The upper branch of stable budworm density corresponds to outbreak populations. Old trees are more susceptible to
defoliation and they die first. Thus, the mean age of defoliated stand decreases, and the slow process at the upper branch goes backwards.
Population dynamics can be described as a limit cycle that includes 2 periods of slow change and 2 periods of fast change. Transition to a fast process is a catastrophe. This model is built in the accompanying spreadsheet (open Sheet 2).
We can add stochastic fluctuations due to weather or other factors. As the age of trees increases, the domain of attraction of the endemic (=low-density) equilibrium becomes smaller. As a result, the
probability of outbreak increases with increasing forest age. If an outbreak occurs in a young forest stand, then it is possible to suppress the population and return it back to the area of
stability. But if the stand is old, then the endemic equilibrium has a very narrow domain of attraction, and thus, the probability of an outbreak is very high. Finally, the lower equilibrium
disappears and it is no longer possible to avoid an outbreak by suppressing budworm population.
The model suggests to reduce forest age by cutting oldest trees. This will move the system back into the stable area.
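The folded equilibrium curve can be reproduced numerically. The model's actual equations did not survive in this copy, so the sketch below uses the classic budworm form dN/dt = rN(1 − N/K) − N²/(1 + N²) with a made-up age dependence r = 0.01·A, K = A; these numbers are illustrative assumptions, not the parameterization from Casti (1982). Still, the qualitative picture matches the text: one non-zero equilibrium for young stands, three (two stable, one unstable) for older ones.

```python
def dN_dt(N, A):
    """Budworm growth rate: logistic growth minus type III predation.
    The age dependence r = 0.01*A, K = A is an illustrative assumption."""
    r, K = 0.01 * A, float(A)
    return r * N * (1.0 - N / K) - N**2 / (1.0 + N**2)

def equilibria(A, n_max=100.0, steps=20000):
    """Locate non-zero equilibria (dN/dt = 0) by scanning (0, n_max]
    for sign changes and refining each bracket with bisection."""
    roots, h = [], n_max / steps
    prev_N, prev_f = h, dN_dt(h, A)
    for i in range(2, steps + 1):
        N = i * h
        f = dN_dt(N, A)
        if prev_f * f < 0.0:               # sign change brackets a root
            lo, hi = prev_N, N
            for _ in range(50):            # bisection refinement
                mid = 0.5 * (lo + hi)
                if dN_dt(lo, A) * dN_dt(mid, A) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
        prev_N, prev_f = N, f
    return roots
```

With these toy parameters, a young stand (A = 10) has a single low-density equilibrium, while an older stand (A = 40) has three: the folded region of the catastrophe diagram.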
Can Anyone suggest me a simpler algorithm for converting any base number to base 10??
Quote: suggest me a simpler algorithm
Simpler than what? db
At this point it doesn't matter that it is simpler. I just need an Iterative way to implement any base conversion to a base 10.
Take a pen and paper. Do some base conversion math, placing each step on a separate line. When you're comfortable that you know what steps are needed, write Java code that carries out the same steps.
If and only if you can't get the right results, post your best attempt and we'll take a look. db
If you are unsure how to convert numbers by hand, see PlanetMath: base conversion
ok thanks! This is what I got and I actually think that it works
Code Java:
    public static String convert(String num, int b) {
        int k = num.length();
        int num10 = 0;
        for (int i = 0; i < k; i++) {
            String r = num.charAt(k - (i + 1)) + "";
            int NUM1 = Integer.parseInt(r);
            num10 += NUM1 * (int) Math.pow(b, i);  // was power(b, i), which is defined nowhere
        }
        return num10 + "";
    }
Elementary arithmetic
Elementary arithmetic is the most basic kind of mathematics: it concerns the operations of addition, subtraction, multiplication, and division. Most people learn elementary arithmetic in elementary school.
Elementary arithmetic starts with the natural numbers and the Arabic numerals used to represent them. It requires the memorization of addition tables and multiplication tables for adding and
multiplying pairs of digits. Knowing these tables, a person can perform certain well-known procedures for adding and multiplying natural numbers. Other algorithms are used for subtraction and
division. Mental arithmetic is elementary arithmetic performed in the head, for example to know that 100 − 37 = 63 without the use of a calculation aid, such as a sheet of paper, a slide rule, or a
calculator. It is an everyday skill. Extended forms of mental calculation may involve calculating extremely large numbers, but this is a skill not usually taught at the elementary level.
Elementary arithmetic then moves on to fractions, decimals, and negative numbers, which can be represented on a number line.
Nowadays people routinely use electronic calculators, cash registers, and computers to perform their elementary arithmetic for them. Earlier calculating tools included slide rules (for
multiplication, division, logs and trig), tables of logarithms, and nomographs.
In the United States and Canada, the question of whether or not calculators should be used, and whether traditional mathematics' manual computation methods should still be taught in elementary school, has provoked heated controversy, as many standards-based mathematics texts deliberately omit some or most standard computation methods. The 1989 NCTM standards led to curricula which de-emphasized or
omitted much of what was considered to be elementary arithmetic in elementary school, and replaced it with emphasis on topics traditionally studied in college such as algebra, statistics and problem
solving, and non-standard computation methods unfamiliar to most adults.
In ancient times, the abacus was used to perform elementary arithmetic, and still is in many parts of Asia. A skilled user can be as fast with an abacus as with a calculator.
In the 14th century Arabic numerals were introduced to Europe by Leonardo Pisano. These numerals were more efficient for performing calculations than Roman numerals, because of the positional system.
The digits
0 , zero, represents absence of objects to be counted.
1 , one. This is one stick: I
2 , two. This is two sticks: I I
3 , three. This is three sticks: I I I
4 , four. This is four sticks: I I I I
5 , five. This is five sticks: I I I I I
6 , six. This is six sticks: I I I I I I
7 , seven. This is seven sticks: I I I I I I I
8 , eight. This is eight sticks: I I I I I I I I
9 , nine. This is nine sticks: I I I I I I I I I
In decimal-counting literate cultures that use place-value written numbers, there are as many digits as fingers on the hands: the word "digit" can also mean finger (note, however, that there have
been human cultures using different radices and correspondingly differently-sized digit sets, such as sexagesimal by the Babylonians and vigesimal by the pre-Columbian Mesoamericans). But if counting
the digits on both hands, the first digit would be one and the last digit would not be counted as "zero" but as "ten": 10 , made up of the digits one and zero. The number 10 is the first two-digit
number. This is ten sticks: I I I I I I I I I I
If a number has more than one digit, then the rightmost digit, said to be the last digit, is called the "ones-digit". The digit immediately to its left is the "tens-digit". The digit immediately to
the left of the tens-digit is the "hundreds-digit". The digit immediately to the left of the hundreds-digit is the "thousands-digit".
+  |  0  1  2  3  4  5  6  7  8  9
---+------------------------------
0  |  0  1  2  3  4  5  6  7  8  9
1  |  1  2  3  4  5  6  7  8  9 10
2  |  2  3  4  5  6  7  8  9 10 11
3  |  3  4  5  6  7  8  9 10 11 12
4  |  4  5  6  7  8  9 10 11 12 13
5  |  5  6  7  8  9 10 11 12 13 14
6  |  6  7  8  9 10 11 12 13 14 15
7  |  7  8  9 10 11 12 13 14 15 16
8  |  8  9 10 11 12 13 14 15 16 17
9  |  9 10 11 12 13 14 15 16 17 18
What does it mean to add two natural numbers? Suppose you have two bags, one bag holding five apples and a second bag holding three apples. Grabbing a third, empty bag, move all the apples from the
first and second bags into the third bag. The third bag now holds eight apples. This illustrates the combination of three apples and five apples is eight apples; or more generally: "three plus five
is eight" or "three plus five equals eight" or "eight is the sum of three and five". Numbers are abstract, and the addition of a group of three things to a group of five things will yield a group of
eight things. Addition is a regrouping: two sets of objects which were counted separately are put into a single group and counted together: the count of the new group is the "sum" of the separate
counts of the two original groups.
Symbolically, addition is represented by the "plus sign": +. So the statement "three plus five equals eight" can be written symbolically as 3 + 5 = 8. The order in which two numbers are added does
not matter, so 3 + 5 = 5 + 3 = 8. This is the commutative property of addition.
To add a pair of digits using the table, find the intersection of the row of the first digit with the column of the second digit: the row and the column intersect at a square containing the sum of
the two digits. Some pairs of digits add up to two-digit numbers, with the tens-digit always being a 1. In the addition algorithm the tens-digit of the sum of a pair of digits is called the "carry digit".
Addition algorithm
For simplicity, consider only numbers with three digits or less. To add a pair of numbers (written in Arabic numerals), write the second number under the first one, so that digits line up in columns:
the rightmost column will contain the ones-digit of the second number under the ones-digit of the first number. This rightmost column is the ones-column. The column immediately to its left is the
tens-column. The tens-column will have the tens-digit of the second number (if it has one) under the tens-digit of the first number (if it has one). The column immediately to the left of the
tens-column is the hundreds-column. The hundreds-column will line up the hundreds-digit of the second number (if there is one) under the hundreds-digit of the first number (if there is one).
After the second number has been written down under the first one so that digits line up in their correct columns, draw a line under the second (bottom) number. Start with the ones-column: the
ones-column should contain a pair of digits: the ones-digit of the first number and, under it, the ones-digit of the second number. Find the sum of these two digits: write this sum under the line and
in the ones-column. If the sum has two digits, then write down only the ones-digit of the sum. Write the "carry digit" above the top digit of the next column: in this case the next column is the
tens-column, so write a 1 above the tens-digit of the first number.
If both first and second number each have only one digit then their sum is given in the addition table, and the addition algorithm is unnecessary.
Then comes the tens-column. The tens-column might contain two digits: the tens-digit of the first number and the tens-digit of the second number. If one of the numbers has a missing tens-digit then
the tens-digit for this number can be considered to be a zero. Add the tens-digits of the two numbers. Then, if there is a carry digit, add it to this sum. If the sum was 18 then adding the carry
digit to it will yield 19. If the sum of the tens-digits (plus carry digit, if there is one) is less than ten then write it in the tens-column under the line. If the sum has two digits then write its
last digit in the tens-column under the line, and carry its first digit (which should be a one) over to the next column: in this case the hundreds column.
If none of the two numbers has a hundreds-digit then if there is no carry digit then the addition algorithm has finished. If there is a carry digit (carried over from the tens-column) then write it
in the hundreds-column under the line, and the algorithm is finished. When the algorithm finishes, the number under the line is the sum of the two numbers.
If at least one of the numbers has a hundreds-digit then if one of the numbers has a missing hundreds-digit then write a zero digit in its place. Add the two hundreds-digits, and to their sum add the
carry digit if there is one. Then write the sum of the hundreds-column under the line, also in the hundreds column. If the sum has two digits then write down the last digit of the sum in the
hundreds-column and write the carry digit to its left: on the thousands-column.
Say one wants to find the sum of the numbers 653 and 274. Write the second number under the first one, with digits aligned in columns, like so:

    653
    274

Then draw a line under the second number and start with the ones-column. The ones-digit of the first number is 3 and of the second number is 4. The sum of three and four is seven, so write a seven in the ones-column under the line:

    653
    274
    ---
      7

Next, the tens-column. The tens-digit of the first number is 5, and the tens-digit of the second number is 7, and five plus seven is twelve: 12, which has two digits, so write its last digit, 2, in the tens-column under the line, and write the carry digit on the hundreds-column above the first number:

    1
    653
    274
    ---
     27

Next, the hundreds-column. The hundreds-digit of the first number is 6, while the hundreds-digit of the second number is 2. The sum of six and two is eight, but there is a carry digit, which added to eight is equal to nine. Write the nine under the line in the hundreds-column:

    1
    653
    274
    ---
    927

No digits (and no columns) have been left unadded, so the algorithm finishes, and
653 + 274 = 927.
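The column procedure described above can be written out directly. A short Python sketch that adds two digit strings exactly as the algorithm prescribes (per-column digit sums plus a carry):

```python
def column_add(a: str, b: str) -> str:
    """Add two non-negative numbers given as digit strings, column by
    column from the ones-column leftward, carrying a 1 whenever a
    column's sum has two digits -- the paper-and-pencil method."""
    width = max(len(a), len(b))
    a, b = a.rjust(width, "0"), b.rjust(width, "0")   # line up the columns
    digits, carry = [], 0
    for i in range(width - 1, -1, -1):                # rightmost column first
        s = int(a[i]) + int(b[i]) + carry
        digits.append(str(s % 10))                    # write the ones-digit
        carry = s // 10                               # carry digit (0 or 1)
    if carry:
        digits.append("1")
    return "".join(reversed(digits))
```

Running it on the worked example gives column_add("653", "274") == "927".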
Successorship and size
The result of the addition of one to a number is the successor of that number. Examples:
the successor of zero is one,
the successor of one is two,
the successor of two is three,
the successor of ten is eleven.
Every natural number has a successor.
The predecessor of the successor of a number is the number itself. For example, five is the successor of four, therefore four is the predecessor of five. Every natural number except zero has a predecessor.
If a number is the successor of another number, then the first number is said to be larger than the other number. If a number is larger than another number, and if the other number is larger than a
third number, then the first number is also larger than the third number. Example: five is larger than four, and four is larger than three, therefore five is larger than three. But six is larger than
five, therefore six is also larger than three. But seven is larger than six, therefore seven is also larger than three... therefore eight is larger than three... therefore nine is larger than three,
If two non-zero natural numbers are added together, then their sum is larger than either one of them. Example: three plus five equals eight, therefore eight is larger than three (8>3) and eight is
larger than five (8>5). The symbol for "larger than" is >.
If a number is larger than another one, then the other is smaller than the first one. Examples: three is smaller than eight (3<8) and five is smaller than eight (5<8). The symbol for smaller than is
<. A number cannot be at the same time larger and smaller than another number. Neither can a number be at the same time larger than and equal to another number. Given a pair of natural numbers, one
and only one of the following cases must be true:
• the first number is larger than the second one,
• the first number is equal to the second one,
• the first number is smaller than the second one.
To count a group of objects means to assign a natural number to each one of the objects, as if it were a label for that object, such that a natural number is never assigned to an object unless its
predecessor was already assigned to another object, with the exception that zero is not assigned to any object: the smallest natural number to be assigned is one, and the largest natural number
assigned depends on the size of the group. It is called "the count", and it is equal to the number of objects in that group.
The process of counting a group is the following:
Step 1: Let "the count" be equal to zero. "The count" is a variable quantity, which though beginning with a value of zero, will soon have its value changed several times.
Step 2: Find at least one object in the group which has not been labeled with a natural number. If no such object can be found (if they have all been labeled) then the counting is finished. Otherwise
choose one of the unlabeled objects.
Step 3: Increase the count by one. That is, replace the value of the count by its successor.
Step 4: Assign the new value of the count, as a label, to the unlabeled object chosen in Step 2.
Step 5: Go back to Step 2.
When the counting is finished, the last value of the count will be the final count. This count is equal to the number of objects in the group.
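In programming terms, the five-step procedure is simply an enumeration starting at one. A minimal Python rendering (the function name is mine):

```python
def count_objects(objects):
    """Label each object with successive natural numbers starting at 1,
    following the five steps above; return the labels and the final count."""
    count = 0                      # Step 1: the count starts at zero
    labels = {}
    for obj in objects:            # Step 2: pick an unlabeled object
        count += 1                 # Step 3: replace the count by its successor
        labels[obj] = count        # Step 4: assign the new count as a label
    return labels, count           # when none remain, the count is the size
```

The final count equals the number of objects, as the text states.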
Often, when counting objects, one does not keep track of what numerical label corresponds to which object: one only keeps track of the subgroup of objects which have already been labeled, so as to be
able to identify unlabeled objects necessary for Step 2. However, if one is counting persons, then one can ask the persons who are being counted to each keep track of the number which the person's
self has been assigned. After the count has finished it is possible to ask the group of persons to file up in a line, in order of increasing numerical label. What the persons would do during the
process of lining up would be something like this: each pair of persons who are unsure of their positions in the line ask each other what their numbers are: the person whose number is smaller should
stand on the left side and the one with the larger number on the right side of the other person. Thus, pairs of persons compare their numbers and their positions, and commute their positions as
necessary, and through repetition of such conditional commutations they become ordered.
Algorithms for subtraction
There are several methods to accomplish subtraction. Traditional mathematics taught elementary school children to subtract using methods suitable for hand calculation. The particular method used
varies from country to country, and within a country, different methods are in fashion at different times. Standards-based mathematics is distinguished generally by the lack of preference for any
standard method, replaced by guiding 2nd grade children to invent their own methods of computation, such as using properties of negative numbers in the case of TERC.
American schools currently teach a method of subtraction using borrowing and a system of markings called crutches. Although a method of borrowing had been known and published in textbooks prior, apparently the crutches are the invention of William A. Brownell, who used them in a study in November 1937. This system caught on rapidly, displacing the other methods of subtraction in use in America
at that time.
European children are taught, and some older Americans employ, a method of subtraction called the Austrian method, also known as the additions method. There is no borrowing in this method. There are
also crutches (markings to aid the memory) which [probably] vary according to country.
In the method of borrowing, a subtraction such as 86 - 39 will accomplish the one's place subtraction of 9 from 6 by borrowing a 10 from 80 and adding it to the 6. The problem is thus transformed
into (70+16)-39, effectively. This is indicated by striking through the 8, writing a small 7 above it, and writing a small 1 above the 6. These markings are called crutches. The 9 is then subtracted
from 16, leaving 7, and the 30 from the 70, leaving 40, or 47 as the result.
In the additions method, a 10 is borrowed to make the 6 into 16, in preparation for the subtraction of 9, just as in the borrowing method. However, the 10 is not taken by reducing minuend, rather one
augments the subtrahend. Effectively, the problem is transformed into (80+16)-(39+10). Typically a crutch of a small one is marked just below the subtrahend digit as a reminder. Then the operations
proceed: 9 from 16 is 7; and 40 (that is, 30+10) from 80 is 40, or 47 as the result.
The additions method seems to be taught in two variations, which differ only in psychology. Continuing the example of 86-39, the first variation attempts to subtract 9 from 6, and then 9 from 16, borrowing a 10 by marking near the digit of the subtrahend in the next column. The second variation attempts to find a digit which, when added to 9, gives 6; recognizing that this is not possible, it instead finds the digit which gives 16, carrying the 10 of the 16 as a one marked near the same digit as in the first method. The markings are the same; it is just a matter of preference as to how one explains its operation.
As a final caution, the borrowing method gets a bit complicated in cases such as 100-87, where a borrow cannot be made immediately, and must be obtained by reaching across several columns. In this
case, the minuend is effectively rewritten as 90+10, by taking the one hundred from the hundreds, making ten tens from it, and immediately borrowing that down to 9 tens in the tens column and finally placing a ten in the ones column.
There are several other methods, some of which are particularly advantageous to machine calculation. For example, digital computers employ the method of two's complement. Of great importance is the
counting up method by which change is made. Suppose an amount P is given to pay the required amount Q, with P greater than Q. Rather than performing the subtraction P-Q and counting out that amount
in change, money is counted out starting at Q and continuing until reaching P. Curiously, although the amount counted out must equal the result of the subtraction P-Q, the subtraction was never
really done and the value of P-Q might still be unknown to the change-maker.
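The counting-up method can be illustrated with a short sketch. This is hypothetical code, not from the text; amounts are in cents and the US coin denominations are an assumption. Note that P-Q is never computed; the change-maker simply keeps handing out money until the running total reaches P.

```python
def count_up_change(q, p, denominations=(100, 25, 10, 5, 1)):
    """Make change by counting up from the price q toward the payment p.
    Greedily hands out the largest denomination that does not overshoot p."""
    handed_out, total = [], q
    while total < p:
        coin = max(d for d in denominations if total + d <= p)
        handed_out.append(coin)
        total += coin   # counting up: the subtraction p - q is never performed
    return handed_out

change = count_up_change(q=637, p=1000)
print(change)        # [100, 100, 100, 25, 25, 10, 1, 1, 1]
print(sum(change))   # 363 -- necessarily equal to p - q, though never computed as such
```

A real cashier often counts up with small coins first to reach round amounts; the greedy large-first order here is just one simple way to realize the same idea.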
1. Ross, S., and Pratt-Cotter, M., "Subtraction in the United States: An Historical Perspective," The Mathematics Educator, Vol. 8, No. 1.
2. Brownell, W. A. (1939). Learning as reorganization: An experimental study in third-grade arithmetic. Duke University Press.
×   0  1  2  3  4  5  6  7  8  9
0   0  0  0  0  0  0  0  0  0  0
1   0  1  2  3  4  5  6  7  8  9
2   0  2  4  6  8 10 12 14 16 18
3   0  3  6  9 12 15 18 21 24 27
4   0  4  8 12 16 20 24 28 32 36
5   0  5 10 15 20 25 30 35 40 45
6   0  6 12 18 24 30 36 42 48 54
7   0  7 14 21 28 35 42 49 56 63
8   0  8 16 24 32 40 48 56 64 72
9   0  9 18 27 36 45 54 63 72 81
When two numbers are multiplied together, the result is called a product. The two numbers being multiplied together are called factors.
What does it mean to multiply two natural numbers? Suppose there are five red bags, each one containing three apples. Now grabbing an empty green bag, move all the apples from all five red bags into
the green bag. Now the green bag will have fifteen apples. Thus the product of five and three is fifteen. This can also be stated as "five times three is fifteen" or "five times three equals fifteen"
or "fifteen is the product of five and three". Multiplication can be seen to be a form of repeated addition: the first factor indicates how many times the second factor should be added onto itself;
the final sum being the product.
Symbolically, multiplication is represented by the multiplication sign: $\times$. So the statement "five times three equals fifteen" can be written symbolically as
$5 \times 3 = 15.$
In some countries, and in more advanced arithmetic, other multiplication signs are used, e.g. the raised dot $\cdot$. In some situations, especially in algebra, where numbers can be symbolized with
letters, the multiplication symbol may be omitted; e.g. $xy$ means $x \times y$. The order in which two numbers are multiplied does not matter, so that, for example, three times four equals four times
three. This is the commutative property of multiplication.
To multiply a pair of digits using the table, find the intersection of the row of the first digit with the column of the second digit: the row and the column intersect at a square containing the
product of the two digits. Most pairs of digits produce two-digit numbers. In the multiplication algorithm the tens-digit of the product of a pair of digits is called the "carry digit".
Multiplication algorithm for a single-digit factor
Consider a multiplication where one of the factors has only one digit, whereas the other factor has an arbitrary quantity of digits. Write down the multi-digit factor, then write the single-digit
factor under the last digit of the multi-digit factor. Draw a horizontal line under the single-digit factor. Henceforth, the single-digit factor will be called the "multiplier" and the multi-digit
factor will be called the "multiplicand".
Suppose for simplicity that the multiplicand has three digits. The first digit is the hundreds-digit, the middle digit is the tens-digit, and the last, rightmost, digit is the ones-digit. The
multiplier only has a ones-digit. The ones-digits of the multiplicand and multiplier form a column: the ones-column.
Start with the ones-column: the ones-column should contain a pair of digits: the ones-digit of the multiplicand and, under it, the ones-digit of the multiplier. Find the product of these two digits:
write this product under the line and in the ones-column. If the product has two digits, then write down only the ones-digit of the product. Write the "carry digit" as a superscript of the
yet-unwritten digit in the next column and under the line: in this case the next column is the tens-column, so write the carry digit as the superscript of the yet-unwritten tens-digit of the product
(under the line).
If both the first and second numbers have only one digit, then their product is given in the multiplication table, and the multiplication algorithm is unnecessary.
Then comes the tens-column. The tens-column so far contains only one digit: the tens-digit of the multiplicand (though it might contain a carry digit under the line). Find the product of the
multiplier and the tens-digits of the multiplicand. Then, if there is a carry digit (superscripted, under the line and in the tens-column), add it to this product. If the resulting sum is less than
ten then write it in the tens-column under the line. If the sum has two digits then write its last digit in the tens-column under the line, and carry its first digit over to the next column: in this
case the hundreds column.
If the multiplicand does not have a hundreds-digit and there is no carry digit, then the multiplication algorithm has finished. If there is a carry digit (carried over from the tens-column), then
write it in the hundreds-column under the line, and the algorithm is finished. When the algorithm finishes, the number under the line is the product of the two numbers.
If the multiplicand has a hundreds-digit... find the product of the multiplier and the hundreds-digit of the multiplicand, and to this product add the carry digit if there is one. Then write the
resulting sum under the line in the hundreds-column. If the sum has two digits, then write down the last digit of the sum in the hundreds-column and write the carry digit
to its left, in the thousands-column.
Say one wants to find the product of the numbers 3 and 729. Write the single-digit multiplier under the multi-digit multiplicand, with the multiplier under the ones-digit of the multiplicand, like this:
Then draw a line under the multiplier and start with the ones-column. The ones-digit of the multiplicand is 9 and the multiplier is 3. The product of three and nine is 27, so write a seven in the
ones-column under the line, and write the carry-digit 2 as a superscript of the yet-unwritten tens-digit of the product under the line:
Next, the tens-column. The tens-digit of the multiplicand is 2, the multiplier is 3, and three times two is six. Add the carry-digit, 2, to the product 6 to obtain 8. Eight has only one digit, so
there is no carry-digit; write 8 in the tens-column under the line:
Next, the hundreds-column. The hundreds-digit of the multiplicand is 7, while the multiplier is 3. The product of three and seven is 21, and there is no previous carry-digit (carried over from the
tens-column). The product 21 has two digits: write its last digit in the hundreds-column under the line, then carry its first digit over to the thousands-column. Since the multiplicand has no
thousands-digit, then write this carry-digit in the thousands-column under the line (not superscripted):
No digits of the multiplicand have been left unmultiplied, so the algorithm finishes, and
$3 \times 729 = 2187$.
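The algorithm just traced for 3 times 729 can be written out as a short sketch. This is a hypothetical helper, not part of the original text; it handles the carry digit exactly as described above.

```python
def multiply_by_digit(multiplicand, multiplier):
    """Multiply a multi-digit number by a one-digit multiplier, right to left."""
    assert 0 <= multiplier <= 9
    result, carry = [], 0
    for d in reversed([int(c) for c in str(multiplicand)]):
        product = d * multiplier + carry   # multiplication-table lookup plus any carry digit
        result.append(product % 10)        # ones-digit of the product goes under the line
        carry = product // 10              # tens-digit becomes the carry for the next column
    if carry:
        result.append(carry)               # a leftover carry is written in full size
    return int("".join(str(d) for d in reversed(result)))

print(multiply_by_digit(729, 3))   # 2187
```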
Multiplication algorithm for multi-digit factors
Given a pair of factors, each one having two or more digits, write both factors down, one under the other one, so that digits line up in columns.
For simplicity consider a pair of three-digit numbers. Write the last digit of the second number under the last digit of the first number, forming the ones-column. Immediately to the left of the
ones-column will be the tens-column: the top of this column will have the second digit of the first number, and below it will be the second digit of the second number. Immediately to the left of the
tens-column will be the hundreds-column: the top of this column will have the first digit of the first number and below it will be the first digit of the second number. After having written down both
factors, draw a line under the second factor.
The multiplication will consist of two parts. The first part will consist of several multiplications involving one-digit multipliers. The operation of each one of such multiplications was already
described in the previous multiplication algorithm, so this algorithm will not describe each one individually, but will only describe how the several multiplications with one-digit multipliers shall
be coördinated. The second part will add up all the subproducts of the first part, and the resulting sum will be the product.
First part. Let the first factor be called the multiplicand. Let each digit of the second factor be called a multiplier. Let the ones-digit of the second factor be called the "ones-multiplier". Let
the tens-digit of the second factor be called the "tens-multiplier". Let the hundreds-digit of the second factor be called the "hundreds-multiplier".
Start with the ones-column. Find the product of the ones-multiplier and the multiplicand and write it down in a row under the line, aligning the digits of the product in the previously-defined
columns. If the product has four digits, then the first digit will be the beginning of the thousands-column. Let this product be called the "ones-row".
Then the tens-column. Find the product of the tens-multiplier and the multiplicand and write it down in a row — call it the "tens-row" — under the ones-row, but shifted one column to the left. That
is, the ones-digit of the tens-row will be in the tens-column of the ones-row; the tens-digit of the tens-row will be under the hundreds-digit of the ones-row; the hundreds-digit of the tens-row will
be under the thousands-digit of the ones-row. If the tens-row has four digits, then the first digit will be the beginning of the ten-thousands-column.
Next, the hundreds-column. Find the product of the hundreds-multiplier and the multiplicand and write it down in a row — call it the "hundreds-row" — under the tens-row, but shifted one more column
to the left. That is, the ones-digit of the hundreds-row will be in the hundreds-column; the tens-digit of the hundreds-row will be in the thousands-column; the hundreds-digit of the hundreds-row
will be in the ten-thousands-column. If the hundreds-row has four digits, then the first digit will be the beginning of the hundred-thousands-column.
After having written down the ones-row, tens-row, and hundreds-row, draw a horizontal line under the hundreds-row. The multiplications are over.
Second part. Now the multiplication has a pair of lines. The first one under the pair of factors, and the second one under the three rows of subproducts. Under the second line there will be six
columns, which from right to left are the following: ones-column, tens-column, hundreds-column, thousands-column, ten-thousands-column, and hundred-thousands-column.
Between the first and second lines, the ones-column will contain only one digit, located in the ones-row: it is the ones-digit of the ones-row. Copy this digit by rewriting it in the ones-column
under the second line.
Between the first and second lines, the tens-column will contain a pair of digits located in the ones-row and the tens-row: the tens-digit of the ones-row and the ones-digit of the tens-row. Add
these digits up and if the sum has just one digit then write this digit in the tens-column under the second line. If the sum has two digits then the first digit is a carry-digit: write the last digit
down in the tens-column under the second line and carry the first digit over to the hundreds-column, writing it as a superscript to the yet-unwritten hundreds-digit under the second line.
Between the first and second lines, the hundreds-column will contain three digits: the hundreds-digit of the ones-row, the tens-digit of the tens-row, and the ones-digit of the hundreds-row. Find the
sum of these three digits, then if there is a carry-digit from the tens-column (written in superscript under the second line in the hundreds-column) then add this carry-digit as well. If the
resulting sum has one digit then write it down under the second line in the hundreds-column; if it has two digits then write the last digit down under the line in the hundreds-column, and carry over
the first digit to the thousands-column, writing it as a superscript to the yet-unwritten thousands-digit under the line.
Between the first and second lines, the thousands-column will contain either two or three digits: the hundreds-digit of the tens-row, the tens-digit of the hundreds-row, and (possibly) the
thousands-digit of the ones-row. Find the sum of these digits, then if there is a carry-digit from the hundreds-column (written in superscript under the second line in the thousands-column) then add
this carry-digit as well. If the resulting sum has one digit then write it down under the second line in the thousands-column; if it has two digits then write the last digit down under the line in
the thousands-column, and carry the first digit over to the ten-thousands-column, writing it as a superscript to the yet-unwritten ten-thousands-digit under the line.
Between the first and second lines, the ten-thousands-column will contain either one or two digits: the hundreds-digit of the hundreds-row and (possibly) the thousands-digit of the tens-row.
Find the sum of these digits (if the one in the tens-row is missing think of it as a zero), and if there is a carry-digit from the thousands-column (written in superscript under the second line in
the ten-thousands-column) then add this carry-digit as well. If the resulting sum has one digit then write it down under the second line in the ten-thousands-column; if it has two digits then write
the last digit down under the line in the ten-thousands-column, and carry the first digit over to the hundred-thousands-column, writing it as a superscript to the yet-unwritten hundred-thousands-digit
under the line. However, if the hundreds-row has no thousands-digit then do not write this carry-digit as a superscript, but in normal size, in the position of the hundred-thousands-digit under the
second line, and the multiplication algorithm is over.
If the hundreds-row does have a thousands-digit, then add to it the carry-digit from the previous column (if there is no carry-digit then think of it as a zero) and write the single-digit sum in the
hundred-thousands-column under the second line.
The number under the second line is the sought-after product of the pair of factors above the first line.
Let our objective be to find the product of 789 and 345. Write the 345 under the 789 in three columns, and draw a horizontal line under them:
789
345
-----
First part. Start with the ones-column. The multiplicand is 789 and the ones-multiplier is 5. Perform the multiplication in a row under the line:
Then the tens-column. The multiplicand is 789 and the tens-multiplier is 4. Perform the multiplication in the tens-row, under the previous subproduct in the ones-row, but shifted one column to the left:
    3 9^4 4^4 5
  3 1^3 5^3 6
Next, the hundreds-column. The multiplicand is once again 789, and the hundreds-multiplier is 3. Perform the multiplication in the hundreds-row, under the previous subproduct in the tens-row, but
shifted one (more) column to the left. Then draw a horizontal line under the hundreds-row:
    3 9^4 4^4 5
  3 1^3 5^3 6
2 3^2 6^2 7
Second part. Now add the subproducts between the first and second lines, but ignoring any superscripted carry-digits located between the first and second lines.
    3 9^4 4^4 5
  3 1^3 5^3 6
2 3^2 6^2 7
-----------------
2 7^1 2^2 2^1 0 5
The answer is
$789 \times 345 = 272205.$
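The two-part procedure, one shifted subproduct row per multiplier digit followed by a column-wise sum, can be sketched as follows. This is hypothetical code, not from the text; for brevity the single-digit rows are delegated to Python's own multiplication rather than spelled out digit by digit.

```python
def long_multiply(multiplicand, multiplier):
    """First part: build the ones-row, tens-row, hundreds-row, ...,
    each shifted one more column to the left. Second part: add the rows."""
    rows = []
    for shift, digit in enumerate(reversed(str(multiplier))):
        row = multiplicand * int(digit)   # subproduct for this multiplier digit
        rows.append(row * 10 ** shift)    # shifting one column left = one trailing zero
    return rows, sum(rows)

rows, product = long_multiply(789, 345)
print(rows)      # [3945, 31560, 236700] -- the three rows of the example above
print(product)   # 272205
```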
In mathematics, especially in elementary arithmetic, division is an arithmetic operation which is the inverse of multiplication.
Specifically, if c times b equals a, written:
$c \times b = a,$
and b is not zero, then a divided by b equals c, written:
$\frac{a}{b} = c.$
For instance,
$\frac{6}{3} = 2$
since
$2 \times 3 = 6.$
In the above expression, a is called the dividend, b the divisor and c the quotient.
Division by zero (i.e. where the divisor is zero) is not defined.
Division notation
Division is most often shown by placing the dividend over the divisor with a horizontal line, also called a vinculum, between them. For example, a divided by b is written
$\frac{a}{b}.$
This can be read out loud as "a divided by b" or "a over b". A way to express division all on one line is to write the dividend, then a slash, then the divisor, like this:
a / b.
This is the usual way to specify division in most computer programming languages, since it can easily be typed as a simple sequence of characters.
A typographical variation, which is halfway between these two forms, uses a solidus (fraction slash) but elevates the dividend, and lowers the divisor:
Any of these forms can be used to display a fraction. A fraction is a division expression where both dividend and divisor are integers (although typically called the numerator and denominator), and
there is no implication that the division needs to be evaluated further.
A less common way to show division is to use the obelus (or division sign) in this manner:
$a \div b.$
This form is infrequent except in elementary arithmetic. The obelus is also used alone to represent the division operation itself, for instance as a label on a key of a calculator.
In some non-English-speaking cultures, "a divided by b" is written a : b. However, in English usage the colon is restricted to expressing the related concept of ratios (then "a is to b").
With a knowledge of multiplication tables, two integers can be divided on paper using the method of long division. If the dividend has a fractional part (expressed as a decimal fraction), one can
continue the algorithm past the ones place as far as desired. If the divisor has a decimal fractional part, one can restate the problem by moving the decimal to the right in both numbers until the
divisor has no fraction.
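Restating a problem so the divisor has no fractional part can be sketched like this (hypothetical code, not from the text; `decimal.Decimal` is used so the digit shifting is exact, which plain floats would not guarantee):

```python
from decimal import Decimal

def clear_decimal_divisor(dividend, divisor):
    """Shift both decimal points right until the divisor is a whole number.
    Both numbers are shifted equally, so the quotient is unchanged."""
    dividend, divisor = Decimal(dividend), Decimal(divisor)
    while divisor != divisor.to_integral_value():
        dividend *= 10
        divisor *= 10
    return dividend, divisor

d, v = clear_decimal_divisor("12.5", "0.25")
print(d, v)   # numerically 1250 and 25: the problem becomes 1250 divided by 25
```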
To divide by a fraction, multiply by the reciprocal (reversing the position of the top and bottom parts) of that fraction.
$5 \div \frac{1}{2} = 5 \times \frac{2}{1} = 5 \times 2 = 10$
$\frac{2}{3} \div \frac{2}{5} = \frac{2}{3} \times \frac{5}{2} = \frac{10}{6} = \frac{5}{3}$
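The reciprocal rule can be checked with Python's standard-library `fractions` module (a hypothetical illustration; `Fraction` is an exact rational type that reduces results automatically):

```python
from fractions import Fraction

def divide_by_fraction(a, b):
    """a divided by b, computed by multiplying a by the reciprocal of b."""
    reciprocal = Fraction(b.denominator, b.numerator)   # swap top and bottom
    return a * reciprocal

print(divide_by_fraction(Fraction(5), Fraction(1, 2)))      # 10
print(divide_by_fraction(Fraction(2, 3), Fraction(2, 5)))   # 5/3
```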
|
Astoria, NY Algebra 2 Tutor
Find an Astoria, NY Algebra 2 Tutor
...I strive for excellence every step of the way, whether in classroom or in a tutoring session. I am willing to travel to a student's home for a tutoring session. While I do have a 24-hour
cancellation policy, I offer make-up classes on Saturday and Sunday.
9 Subjects: including algebra 2, physics, geometry, calculus
...SUBJECTS At this time I am available to tutor Prealgebra, Algebra 1, Geometry and Algebra 2. HOURS I am available to tutor Monday-Friday. TUTORING PHILOSOPHY I believe that anyone can do math,
or at least understand it, if given enough practice and if it is explained in a way that they can understand.
4 Subjects: including algebra 2, geometry, algebra 1, prealgebra
...As a tutor, I would be especially interested in helping beginners grasp the basics of the language: the mechanics of the Devanagari alphabet, basic grammar and vocabulary, etc. I'm proficient
in Python for automated data processing and media programming. Part of my current job has involved writing Python scripts to automate the production of music notation (MusicXML) files.
30 Subjects: including algebra 2, reading, Spanish, writing
Hello students and parents, I hold a PhD in Physics and MS in Applied Math. I am a devoted and experienced tutor for high school and college-level Physics and Math; I also prepare students for
taking college-entrance tests. Working with each student, I use individual approach based on student's personality and background.
24 Subjects: including algebra 2, physics, GRE, Russian
...I have previously tutored differential equations for STEM majors. I am a recent graduate in Physics. I had one formal course in linear algebra as an undergrad and have encountered the
topic throughout my degree in advanced math topics like differential equations and quantum mechanics, and thus am fairly proficient in the topic.
17 Subjects: including algebra 2, chemistry, Spanish, calculus
Related Astoria, NY Tutors
Astoria, NY Accounting Tutors
Astoria, NY ACT Tutors
Astoria, NY Algebra Tutors
Astoria, NY Algebra 2 Tutors
Astoria, NY Calculus Tutors
Astoria, NY Geometry Tutors
Astoria, NY Math Tutors
Astoria, NY Prealgebra Tutors
Astoria, NY Precalculus Tutors
Astoria, NY SAT Tutors
Astoria, NY SAT Math Tutors
Astoria, NY Science Tutors
Astoria, NY Statistics Tutors
Astoria, NY Trigonometry Tutors
"Editorials," IEEE Transactions on Visualization and Computer Graphics, vol. 1, no. 1, pp. 1-4, March, 1995.
About our cover. A volumetric ray tracing of a silicon crystal from a supercomputer simulation. Data courtesy of Oak Ridge National Lab, Oak Ridge, Tennessee. The image, generated by the VolVis
volume visualization system, is courtesy of Lisa Sobierajski and Arie Kaufman, SUNY, Stony Brook.
Maths solutions online
// July 12th, 2010 // CBSE, X class
Time to wake up, 10th class students. Buckle up and gear up: the board exams are over, but this is no time to relax. Your fellow students have already started preparing vigorously for 10th class, and for the competitions you will all face after 12th class.
Even though the boards are over, do not slacken your preparation. You need deep, extra knowledge of every mathematics chapter. Remember that the CAT entrance test for the prestigious IIMs includes a mathematics section whose syllabus covers 9th and 10th class mathematics.
IIT aspirants should also know that every year the IIT exam includes a question from the 10th class syllabus, and only students with a firm grip on the subject are able to solve it.
Pioneer Mathematics will help you develop a very strong grip on mathematics through our IIT Foundation Program and Olympiad Foundation Program.
These courses will prepare you not only for IIT and engineering competitions but also for commerce and medical entrance tests, while the Olympiad Foundation course raises the overall quality of your mathematics.
Pioneer Mathematics prepares you not only for future competitions but also for your 10th CBSE class. For this we provide the latest CBSE CCE pattern papers, CBSE CCE model papers, sample papers, and tests.
You also get 10th class RD Sharma solutions, 10th class NCERT solutions, RS Aggarwal solutions, ML Aggarwal solutions, objective tests, mock tests, smart solutions, and the Vedic mathematics tricks and shortcuts that can turn you into a human calculator.
Click on the link below to get all the details of the 10th CBSE quality study material:
Good Luck,
Team Pioneer.
Guiding you to become a Mathematics Champion.
The Activation-Relaxation Technique: ART Nouveau and Kinetic ART
Journal of Atomic, Molecular, and Optical Physics
Volume 2012 (2012), Article ID 925278, 14 pages
Review Article
^1Département de physique and RQMP, Université de Montréal, P.O. Box 6128, Succursale Centre-Ville, Montréal, QC, Canada H3C 3J7
^2Science Program, Texas A&M at Qatar, Texas A&M Engineering Building, Education City, Doha, Qatar
^3Nanosciences Foundation, 23 Rue des Martyrs, 38000 Grenoble, France
^4Laboratoire de Simulation Atomistique (L_Sim), SP2M, UMR-E CEA/UJF-Grenoble 1, INAC, 38054 Grenoble, France
^5CEA, DEN, Service de Recherches de Métallurgie Physique, 91191 Gif-sur-Yvette, France
Received 6 December 2011; Accepted 17 January 2012
Academic Editor: Jan Petter Hansen
Copyright © 2012 Normand Mousseau et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original work is properly cited.
The evolution of many systems is dominated by rare activated events that occur on timescales ranging from nanoseconds to hours or more. For such systems, simulations must leave aside the full thermal description to focus specifically on the mechanisms that generate a configurational change. We present here the activation-relaxation technique (ART), an open-ended saddle-point search algorithm, together with a series of recent improvements to ART nouveau, and kinetic ART, an ART-based on-the-fly off-lattice self-learning kinetic Monte Carlo method.
1. Introduction
There has been considerable interest, in the last two decades, in the development of accelerated numerical methods for sampling the energy landscape of complex materials. The goal is to understand
the long-time kinetics of chemical reactions, self-assembly, defect diffusion, and so forth associated with high-dimensional systems counting many tens to many thousands of atoms.
Some of these methods are extensions of molecular dynamics (MD), such as Voter's hyperdynamics [1, 2], which provides an accelerated scheme that incorporates thermal effects directly, Laio and Parrinello's metadynamics [3], an ill-named but successful algorithm that focuses on computing free-energy barriers for specific mechanisms, and basin hopping by Wales and Doye [4] and Goedecker [5].
Most approaches, however, select to follow transition state theory and treat the thermal contributions in a quasi-harmonic fashion, focusing therefore on the search for transition states and the
computation of energy barriers. A number of these methods require the knowledge of both the initial and final states. This is the case, for example, for the nudged-elastic band method [6] and the
growing-string method [7]. In complex systems, such approaches are of very limited application to explore the energy landscapes, as by definition few states are known. In these situations, open-ended
methods, which more or less follow the prescription first proposed by Cerjan and Miller [8] and Simons et al. [9, 10] for low-dimensional systems, are preferred. Examples are the
activation-relaxation technique (ART nouveau) [11–13], which is presented here, but also the eigenvector-following method [14], a hybrid version [15], and the similar dimer method [16].
Once a method is available for finding saddle points and adjacent minima, we must decide what to do with this information. A simple approach is to sample these minima and saddle points and classify
the type of events that can take place, providing basic information on the possible evolution of these systems. This approach has been applied, using ART nouveau or other open-ended methods, on a
number of materials ranging from Lennard-Jones clusters to amorphous silicon and proteins [11, 13, 17–30]. A more structured analysis of the same information, the disconnectivity tree [31],
popularized by Wales and collaborators, allows one to also extract some large-scale structure for the energy landscape and classify the overall kinetics of particular systems based on the generated
events [32–36]. These open-ended methods can also be coupled with various accept/reject schemes, such as the Metropolis algorithm, for finding energy minima of clusters and molecules such as proteins
and bulk systems [11, 20, 37, 38].
In the long run however, most researchers are interested in understanding the underlying kinetics controlling these complex systems. To this end, it is no longer sufficient to collect these saddle
points: they must be ordered and connected in some fashion to reconstruct at least a reduced representation of the energy landscape. Wales and collaborators used a different approach: starting from a
large catalog of events, they construct a connectivity matrix that links together minima through saddle points, and they apply a master equation to solve the kinetics [39]. While the discrete path
sampling method has the advantage of providing a complete solution to the system’s kinetics, the matrix increases rapidly with the system’s complexity, making it difficult to address the kinetics of
large and complex problems. It has nevertheless been applied with success to describe protein folding of a number of sequences [40–42].
A more straightforward approach to generate kinetics with these methods is to apply an on-the-fly kinetic Monte Carlo procedure: in a given local minimum, a finite number of open-ended event searches
are launched and the resulting barriers are used to estimate a probable timescale over which an event will take place. This approach was applied to a number of systems using various open-ended
methods such as the dimer [43–45], the hybrid eigenvector-following [46], and the autonomous basin climbing methods [47]. These approaches are formally correct, if an extensive sampling of activated
events is performed before each kinetic Monte Carlo (KMC) step. They are wasteful, however, as unaffected events are not recycled, increasing rapidly in cost as the number of possible moves
increases. This is why there have been considerable efforts in the last few years to implement cataloging for these events, which most of the time involve off-lattice positions. The kinetic
activation-relaxation technique (kinetic ART), introduced a few years ago, proposes to handle these off-lattice positions through the use of a topological catalog, which allows a discretization of
events even for disordered systems while ensuring that all relevant barriers are relaxed in order to incorporate fully long-range elastic deformations at each step [48, 49]. The need for such a
method is very strong, as is confirmed by the multiplications of other algorithms that also address various limitations of standard KMC published recently, including the self-learning KMC [50], the
self-evolving atomistic KMC [51], and the local-environment KMC [52].
While we are fully aware of other similar algorithms, we focus in this paper on the methods developed by our group: ART nouveau and kinetic ART. First, we present the most recent implementation of
ART nouveau, which has been extensively optimized and is now faster than most comparable methods [30], and two applications: to iron and to protein folding. In the second section, we discuss kinetic
ART, which, at least formally, is the most flexible atomistic KMC algorithm currently available, and show how it can be applied to complex materials including amorphous systems.
2. ART Nouveau
The ART nouveau method is an open-ended algorithm for first-order saddle point search. This algorithm has been developed over the last 15 years. Based on the activation-relaxation technique proposed
in 1996 [11], it incorporates an exact search for a first-order saddle point in line with the eigenvector-following and the dimer method, through the use of the Lanczos algorithm [13]. This method
has been tested and characterized in a recent paper by Marinica and collaborators [29] and updated by Machado-Charry and collaborators [30], decreasing significantly its numerical cost.
ART nouveau proceeds in three steps.

(1) Leaving the Harmonic Well. Starting from a local-energy minimum, the configuration is deformed in order to identify a direction of negative curvature on the energy landscape, indicative of the presence of a nearby first-order saddle point.

(2) Convergence to a First-Order Saddle Point. Once a direction of negative curvature is identified, the system is pushed along this direction and away from the initial minimum, while minimizing the energy in the perpendicular hyperplane. When successful, this step ensures that the point reached is indeed a first-order saddle point.

(3) Relaxation into a New Minimum. By definition, a first-order saddle point connects two local minima by a minimum-energy path. To find the new minimum, we push the configuration over the transition state and relax the total energy.
These three steps form an event, a set of three related configurations: the initial minimum, the transition state, and the final minimum. As described hereinafter, the event can be used to
characterize the local energy landscape, within a Metropolis scheme or to generate a kinetic trajectory using a kinetic Monte Carlo approach, as all points generated are fully connected.
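The three steps can be sketched on a toy one-dimensional double well, E(x) = (x^2 - 1)^2, which has minima at x = +/-1 and a saddle (in one dimension, simply a maximum) at x = 0. The helper below is purely illustrative and is not the ART nouveau implementation; in one dimension the "perpendicular hyperplane" is empty, so the activation reduces to climbing along the single unstable direction.

```python
def energy(x):          # toy double well: minima at x = +/-1, saddle at x = 0
    return (x * x - 1.0) ** 2

def force(x):           # F = -dE/dx
    return -4.0 * x * (x * x - 1.0)

def curvature(x):       # d2E/dx2
    return 12.0 * x * x - 4.0

def art_event(x_min, direction=+1.0, step=0.05, tol=1e-8):
    """Generate one ART-style event: (initial minimum, saddle, final minimum)."""
    # 1. Leave the harmonic well: push along `direction` until the
    #    curvature falls below a negative threshold.
    x = x_min
    while curvature(x) > -0.1:
        x += step * direction
    # 2. Converge to the saddle: small uphill steps along the unstable
    #    direction (against the force) until the force vanishes.
    for _ in range(100000):
        if abs(force(x)) < tol:
            break
        x -= step * force(x)
    x_saddle = x
    # 3. Push the configuration slightly over the saddle, away from the
    #    initial minimum, and relax by steepest descent into the new minimum.
    x = x_saddle + 0.15 * (x_saddle - x_min)
    for _ in range(100000):
        if abs(force(x)) < tol:
            break
        x += step * force(x)
    return x_min, x_saddle, x
```

In a real, high-dimensional code the climbing and relaxation stages are of course coupled to the Lanczos eigenvector and to a perpendicular minimisation, as described in the subsections below.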
After providing this short overview, it is useful to describe in some detail the three basic steps defining ART nouveau.
2.1. Leaving the Harmonic Well
The selection of a direction for leaving the harmonic well is crucial for finding rapidly a characteristic distribution of saddle points surrounding the initial minimum. This harmonic well is
described as the region of the energy landscape surrounding a local minimum with only positive curvature.
It is, therefore, tempting to use the eigenvectors of the Hessian matrix, corresponding to the second derivative of the energy, H_ij = ∂²E/(∂x_i ∂x_j), computed at the local minimum, as a reasonable orthogonal set of sampling directions. However, this matrix does not contain any information regarding the position of nearby saddle points, as the steepest-descent pathway towards this basin from each of these transition states (except for symmetric constraints) becomes rapidly parallel to one of the two directions of lowest, but positive, curvature as it moves into the harmonic basin (Figure 1).
In the absence of a discriminating basis in the selection of a given direction for launching the saddle-point search, it is best to simply use randomly chosen directions of deformation. While global
random deformations are appropriate, numerous tests have shown that these lead to an oversampling of the most favorable events, generally associated with low-energy barriers, making the sampling of
the energy landscape relatively costly [12, 29]. The approach chosen in ART nouveau is therefore to deform randomly selected regions of the configuration, while nevertheless allowing the full system to react
to this deformation, in order to limit the building of strain. With this scheme, the activation can also be focused on particular subsets of a system, when needed. For example, if we are interested
in the diffusion of a few defects in a crystal, it is appropriate to select, with a heavy preference, atoms near these defects for the initial deformation. Events found with both approaches are the
same, but sampling is significantly more efficient with the localized procedure [29].
More precisely, the procedure implemented in ART nouveau for leaving the harmonic well is the following:

(1) An atom is first selected at random from a predefined set of regions, which can include the whole system but can also be limited to, for example, the surface of a slab or a region containing defects.

(2) This atom and its neighbors, identified using a cut-off distance that can be limited to the first-shell neighbors or extended through many layers, are then moved iteratively along a random direction.

(3) At every step, a slight minimisation in the perpendicular hyperplane is applied to avoid collisions and allow the full system to react to this deformation. This minimisation is not complete, as that would bring the system back onto the direction of lowest curvature of the harmonic well; it is selected to be just sufficient to avoid unphysical conformations.

(4) At every step, a Lanczos procedure [54] is used to evaluate the lowest eigenvalue of the Hessian matrix; when this eigenvalue falls below a certain negative threshold, the system is considered to have left the harmonic well and we move to the activation regime. The negative threshold is selected to ensure that the negative eigenvalue does not vanish after a few minimisation steps in the activation regime, and it depends on the details of the forcefield.
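The initial local deformation can be illustrated with a minimal sketch: one atom and its neighbours within a cut-off are displaced along a random direction, while the rest of the system responds only through the subsequent relaxation. The function below is a hypothetical helper, not the ART nouveau code, and it omits the perpendicular minimisation and the Lanczos check.

```python
import math
import random

def local_deformation(positions, centre, cutoff, amplitude,
                      rng=random.Random(0)):
    """Displace atom `centre` and its neighbours within `cutoff` along a
    random unit direction; positions are (x, y, z) tuples."""
    cx, cy, cz = positions[centre]
    # draw a random unit vector by rejection sampling in the unit ball
    while True:
        d = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in d))
        if 1e-6 < n <= 1.0:
            break
    d = [c / n for c in d]
    moved = []
    for x, y, z in positions:
        if math.dist((x, y, z), (cx, cy, cz)) <= cutoff:
            # the selected atom and its shell are pushed rigidly
            moved.append((x + amplitude * d[0],
                          y + amplitude * d[1],
                          z + amplitude * d[2]))
        else:
            # atoms outside react only through the later relaxation step
            moved.append((x, y, z))
    return moved
```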
2.2. Converging to a First-Order Saddle Point
Once a direction of negative curvature has been identified, it is relatively straightforward to bring the system to a first-order saddle point: the system has to be pushed along this direction while
the energy is minimised in the perpendicular hyperplane. This ensures that the system will reach a first-order transition state as the force vanishes.
While straightforward, the activation can be very costly, as it requires partial knowledge of the Hessian matrix. If, for a system counting only a few atoms, it is appropriate to simply compute and diagonalize the Hessian, this approach is not feasible for larger problems. To avoid this expensive calculation, a number of algorithms have been proposed for this search, including approximate
projections [11], eigenvector-following [14], and hybrid eigenvector-following methods [15] such as the dimer scheme [16]. Since only the lowest direction of curvature is required, the Lanczos
algorithm [54] has the advantage of building on the previous results to identify the next lowest eigendirection, decreasing considerably the computational cost of following a direction of negative
curvature to a transition state.
Irrespective of system size, we find that a small, fixed-size Lanczos matrix is sufficient to ensure a stable activation. Since building this matrix requires force differences, this means that 16 force evaluations, in a second-order approximation, are necessary when counting the reference position. It is possible to decrease the matrix size even further [30]. In this case, the stability of Lanczos' solution is ascertained at each step by comparing the newly found direction of lowest curvature with that of the previous step. If the cosine between the two vectors is less than a set criterion, the Lanczos procedure is repeated until full convergence of the eigenvector. Depending on the system, this approach can halve the average number of force evaluations needed for this procedure [30].
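The key ingredient is that Lanczos needs only Hessian-vector products, which are available from force differences, H v ≈ [F(x) − F(x + εv)]/ε, so the Hessian is never built. The sketch below runs a fixed two-step Lanczos recursion on a toy quadratic energy with curvatures −1 and 3, for which the exact lowest eigenvalue is −1; the toy force and the two-step truncation are illustrative choices, not the ART nouveau implementation.

```python
import math
import random

def hessian_vec(force, x, v, eps=1e-4):
    """Hessian-vector product from two force evaluations:
    H v ~ [F(x) - F(x + eps*v)] / eps, since H = -dF/dx."""
    fx = force(x)
    fv = force([xi + eps * vi for xi, vi in zip(x, v)])
    return [(a - b) / eps for a, b in zip(fx, fv)]

def lowest_curvature(force, x, rng=random.Random(1)):
    """Two-step Lanczos estimate of the lowest Hessian eigenvalue at x."""
    n = len(x)
    v0 = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    s = math.sqrt(sum(c * c for c in v0))
    v0 = [c / s for c in v0]
    w = hessian_vec(force, x, v0)
    a1 = sum(wi * vi for wi, vi in zip(w, v0))        # alpha_1
    w = [wi - a1 * vi for wi, vi in zip(w, v0)]
    b = math.sqrt(sum(c * c for c in w))              # beta_1
    v1 = [c / b for c in w]
    w = hessian_vec(force, x, v1)
    a2 = sum(wi * vi for wi, vi in zip(w, v1))        # alpha_2
    # lowest eigenvalue of the 2x2 tridiagonal matrix [[a1, b], [b, a2]]
    return 0.5 * (a1 + a2) - math.sqrt(0.25 * (a1 - a2) ** 2 + b * b)
```

A production code iterates this recursion to the chosen matrix size and restarts from the previous eigenvector, which is what makes the per-step cost so low.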
Near the saddle point, the dual approach of activation with Lanczos and minimization in the perpendicular hyperplane with a different algorithm is not optimal. At this point, we find it often preferable to use an integrated algorithm that can converge efficiently on inflection points. As described in [30], we have recently implemented the direct inversion in the iterative subspace (DIIS) method [55, 56] into ART nouveau to accelerate and improve convergence. Interestingly, we have found in recent work that DIIS cannot be applied when the landscape is too rough, such as at a Si(100) surface described with an ab initio method, and other approaches are then preferable [57]. For the appropriate systems, however, DIIS can significantly decrease the computational cost of the activation phase.
In summary, the activation and convergence to a first-order saddle point can be described with the following steps:

(1) Using the Lanczos algorithm, the direction of negative curvature is obtained.

(2) The system is pushed slightly along this direction, with a displacement decreasing as the square root of the number of iterations, to facilitate convergence onto the saddle point.

(3) The energy is relaxed in the perpendicular hyperplane; in the first iterations after leaving the harmonic well, only a few minimization steps are taken, to avoid going back into the harmonic well and losing the direction of negative curvature.

(4) If DIIS is not used, the first three steps are repeated until the total force falls below a predefined threshold, indicating that a first-order saddle point has been reached, or until the lowest eigenvalue becomes positive, indicating that the system has found its way back into the harmonic well.

(5) If DIIS is applied near the transition state, the Lanczos algorithm is applied until the negative eigenvalue has reached a minimum and has gone up for 4 sequential iterations (this criterion and the DIIS implementation are discussed in [30]); DIIS is then launched and applied until the convergence criterion is reached. A final Lanczos calculation is then performed to ensure that the point is indeed a first-order saddle point.
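On a separable two-dimensional toy surface, E(x, y) = x^4/4 − x^2/2 + y^2, whose unstable direction at the saddle (0, 0) is exactly the x axis, the alternation of a shrinking uphill push with a perpendicular relaxation can be sketched as follows. Passing the eigenvector in by hand replaces the Lanczos step here; the names and step sizes are illustrative choices, not those of ART nouveau.

```python
import math

def grad(p):                         # gradient of E = x**4/4 - x**2/2 + y**2
    x, y = p
    return [x ** 3 - x, 2.0 * y]

def climb_to_saddle(p0, d=(1.0, 0.0), eta=0.1, tol=1e-6, max_iter=20000):
    """Alternate a shrinking uphill push along the unstable direction d
    with energy minimisation in the perpendicular hyperplane."""
    p = list(p0)
    for it in range(1, max_iter + 1):
        g = grad(p)
        g_par = g[0] * d[0] + g[1] * d[1]
        # push along d (uphill), with a step decreasing as 1/sqrt(iteration)
        step = eta / math.sqrt(it)
        p = [pi + step * g_par * di for pi, di in zip(p, d)]
        # minimise in the hyperplane perpendicular to d
        g = grad(p)
        g_par = g[0] * d[0] + g[1] * d[1]
        p = [pi - eta * (gi - g_par * di) for pi, gi, di in zip(p, g, d)]
        g = grad(p)
        if math.hypot(g[0], g[1]) < tol:
            return p, it
    return p, max_iter
```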
2.3. Relaxation into a New Minimum
Once the configuration has reached a saddle point, it is necessary to nudge it slightly over to allow it to converge into a new minimum. There is clearly some flexibility here. In ART nouveau, the
configuration is generally pushed over a distance of 0.15 times the distance between the saddle and the initial minimum along the eigenvector and away from the initial minimum. The system is then
brought into an adjacent local energy minimum using FIRE [58], although any controlled minimisation algorithm that ensures convergence to the nearest minimum can be applied.
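A compact sketch of a FIRE-style minimiser, tested here on an anisotropic quadratic bowl, is given below. The parameter values follow commonly quoted defaults, and the exact ordering of the velocity mixing differs between published FIRE variants, so this is an illustration rather than the reference implementation of [58].

```python
import math

def fire_minimise(force, x, dt=0.02, max_steps=50000, f_tol=1e-6):
    """FIRE-style minimisation: damped MD whose velocity is steered toward
    the force, with an adaptive time step (unit masses)."""
    n = len(x)
    v = [0.0] * n
    dt_max, n_min, f_inc, f_dec = 10 * dt, 5, 1.1, 0.5
    alpha_start, f_alpha = 0.1, 0.99
    alpha, since_neg = alpha_start, 0
    for _ in range(max_steps):
        f = force(x)
        fnorm = math.sqrt(sum(c * c for c in f))
        if fnorm < f_tol:
            return x
        p = sum(fi * vi for fi, vi in zip(f, v))
        if p > 0.0:
            since_neg += 1
            vnorm = math.sqrt(sum(c * c for c in v))
            # steer the velocity toward the force direction
            v = [(1 - alpha) * vi + alpha * vnorm * fi / fnorm
                 for vi, fi in zip(v, f)]
            if since_neg > n_min:
                dt = min(dt * f_inc, dt_max)
                alpha *= f_alpha
        else:
            v = [0.0] * n          # moving uphill: stop and restart
            dt *= f_dec
            alpha, since_neg = alpha_start, 0
        # semi-implicit Euler step
        v = [vi + dt * fi for vi, fi in zip(v, f)]
        x = [xi + dt * vi for xi, vi in zip(x, v)]
    return x
```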
2.4. Characterization and Comparison
ART nouveau has been applied with success to a range of problems from Lennard-Jones clusters to proteins and amorphous silicon [11, 13, 17, 20, 21, 23, 26–30]. Its efficiency has been compared
previously with a number of other open-ended and two-ended saddle-point searching methods. Olsen et al. [59] compared the efficiency of various methods as a function of the dimensionality of the
studied system and found that if, for low-dimensional problems, exact methods using the knowledge of the full Hessian were preferable, those that focus solely on the direction of lowest-curvature,
such as ART nouveau and the dimer method, provide a more complete set of saddle points for large-dimensional system at a much lower computational cost.
Olsen et al. also noted that for the same method, the number of required force evaluations to reach a saddle point grows with the effective dimensionality of the event taking place. Looking at Pt
diffusion on a Pt(111) surface, they found that, as more atoms were allowed to move, the number of force evaluations for their ART nouveau implementation increased from 145, when a single atom was
free, to 2163 with 175 free atoms [59].
In a recent paper, Machado-Charry et al. [30] compared the accelerated ART nouveau algorithm with various saddle point searching algorithms proposed in the last few years, finding a similar relation
between the effective dimensionality of diffusion events and the average computational effort associated with finding a first-order saddle point. We reproduce in Table 1 the summary of the analysis
in [30]. We see that the latest implementation of ART nouveau, which is many times faster than the original version [13], is extremely competitive with other saddle-searching methods, even
double-ended algorithms such as the nudged-elastic band (NEB) [6] and the growing string methods [7].
Table 1 also shows a few interesting details. First, saddle points are found slightly faster using empirical rather than ab initio forces. This is likely due to the finite precision of the forces in
quantum calculations, which make the calculation of the lowest eigenvalue and eigenvectors less stable. As already observed by Olsen et al. [59], the effective dimensionality of a system also affects the number of force evaluations needed to find a transition state. This effective dimensionality is related to the number of atoms that can move during an event. This is why the effective
dimensionality of defect diffusion in the bulk can be lower than that at the surface (Table 1) and the computational effort for finding a saddle point does not diverge with system size, at least away
from a phase transition.
2.5. Application to Iron and Protein A
ART nouveau can be applied to sample the energy landscape of complex systems and search for minimum-energy states as can be seen in the two applications presented here, to interstitial self-diffusion
in iron and protein folding, both described with semiempirical potentials.
2.5.1. Interstitial Self-Diffusion in Iron
The ART nouveau method, using an empirical potential, is applied to a systematic search of the energy landscape of small self-interstitial clusters in iron, from the monointerstitial, I1, to the quadri-interstitial, I4 [29]. The explored phase space was the basin around the standard parallel configuration. The goal is to explore binding states corresponding to clustered interstitials, and not states made of separated defects. These states are not very far in energy from the lowest-energy parallel configuration, since the binding energy is of the order of 1 eV. For this reason, the phase space exploration was performed using a Metropolis algorithm and a fictitious temperature. The value of the fictitious temperature is a key parameter of the method. If it is too high, most of the time is spent exploring dissociated configurations. If it is too low, the simulation may miss clustered configurations belonging to an energy basin separated from the initial one by a high-energy saddle point. As a compromise, the fictitious temperature was set to 1200 K. In addition, when the current configuration settles at an energy 2 eV higher than the lowest-energy configuration, it is replaced with the initial configuration; in this way the system does not escape from the basin of parallel configurations.
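The accept/reject logic just described amounts to a standard Metropolis criterion at the fictitious temperature, plus an energy cap that resets the run to the reference basin. A minimal sketch (with hypothetical argument names, energies in eV):

```python
import math
import random

K_B = 8.617e-5  # Boltzmann constant in eV/K

def metropolis_step(e_current, e_new, e_ground, temperature,
                    cap=2.0, rng=random.Random(42)):
    """Decide the fate of an ART event at a fictitious temperature.
    Returns 'accept', 'reject', or 'reset' when the configuration drifts
    more than `cap` eV above the lowest-energy configuration."""
    if e_new - e_ground > cap:
        return "reset"              # fall back to the initial configuration
    de = e_new - e_current
    if de <= 0.0 or rng.random() < math.exp(-de / (K_B * temperature)):
        return "accept"
    return "reject"
```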
ART nouveau revealed a large number of distinct configurations, which increases rapidly with cluster size, exceeding 400, 1100, and 1500 for I2, I3, and I4, respectively. The formation energy spectra of the three types of clusters have a few isolated low-energy configurations lying below a quasi-continuum of states. This leads to the appearance of a quasicontinuous band of states at relatively low energy above the ground state, at 0.42 eV, 0.23 eV, and 0.12 eV for I2, I3, and I4, respectively (see Figure 2(a)). This study, effectively at zero K, provides an essential complementary approach to the molecular dynamics simulations, which must be run at high temperature to allow the crossing of sufficiently high barriers.
ART nouveau was also used for finding transition pathways from high-energy configurations to lower-energy configurations. We have ensured a broader exploration of the different ways to escape from the starting minima by performing ART nouveau simulations with different values of the fictitious Metropolis temperature, namely, 100 K, 400 K, and 800 K. Hence, we are able to provide the complete path for the unfaulting mechanisms of the self-trapped ring configurations of I3 and I4. These configurations were recently proposed for the interstitial clusters [53]: in the I3 case, the ring configuration is made of three nonparallel dumbbells in a plane and, in the I4 case, it is obtained from I3 by adding a dumbbell at the center of the ring. These configurations, being composed of nonparallel dumbbells, can be considered as faulted loops which cannot migrate by the simple step mechanism. However, ART nouveau was able to find a transition path for the unfaulting mechanism of I3 (not shown) and I4 into the standard parallel configurations (Figure 2(b)). ART nouveau generates successful unfaulting pathways with great efficiency and determines the fastest one at low temperature, in qualitative agreement with brute-force molecular dynamics calculations performed at higher temperature [53, 63].
2.5.2. Protein A
ART nouveau can also be applied to molecules such as proteins, to characterize folding [20, 38] and aggregation [37, 64, 65]. Of course, because the algorithm does not activate the full solvent, explicit solvent molecules would effectively sit at zero temperature, and proteins must therefore be treated in vacuum or in implicit solvent to avoid freezing of the surroundings. This restriction is not as severe as it may seem, since a large fraction of molecular dynamics simulations are also performed with implicit solvent and reduced potentials to accelerate sampling. Moreover, by leaving aside thermal fluctuations, ordered conformations are easier to identify, providing a clearer picture of the various folding and aggregation pathways.
Among others, ART nouveau was applied to study the folding of protein A, a fast-folding 60-residue sequence that adopts a three-helix bundle conformation in its native state. Because of its relatively low complexity, protein A has been extensively studied to understand protein folding (e.g., [66–68]). To identify folding mechanisms for the full 60-residue Y15W mutant (PDB: 1SS1), 52 trajectories were launched from three different initial configurations: fully extended, and two random-coil conformations with right and left handedness. Simulations were run for 30 000 to 50 000 events with a Metropolis acceptance rate around 45% at a temperature of 900 K. To obtain well-relaxed conformations, each trajectory was run for a further 8000 events at 600 K. We note that, because ART nouveau does not include entropic contributions, the Metropolis temperature does not correspond directly to a physical temperature. The evolution of four folding trajectories, as characterized by the fraction of folded H1, H2, and H3 helices and by the total energy, is shown in Figure 3. They show a diversity in folding pathways that does not support a predetermined sequential folding. Helix formation, however, clearly occurs as the tertiary structures lock in, suggesting instead that the folding of protein A is highly cooperative, in agreement with the funnel picture [69].
Of the 52 trajectories, 36 adopt a structured conformation with an energy below a set threshold (in kcal/mol) and belonging to one of the groups presented in Figure 4. The first two correspond to the native left-handed structure and its slightly less stable mirror image, which is also observed in other simulations [70, 71]. The configurations in Figures 4(c) and 4(d), called -shaped structures, represent intermediate states on the folding pathway, as can be seen in Figure 3. The four structures on the bottom line show low-energy off-pathway conformations with high β-content. These are many kcal/mol higher in energy than the native state and, clearly, have only a very low probability of forming. They can nevertheless provide information useful for understanding structural differences between sequences with close homology. For example, the structure in Figure 4(h) is close to 1GB1 (with an important difference in the packing of the β-hairpins). The existence of this unstable structure can explain why it is possible for the three-helix bundle to convert to the protein G topology with a 59% homology between mutated sequences [72].
As a general rule, although ART nouveau does not provide kinetic information regarding trajectories, it can provide information regarding the crucial steps in relaxation, folding, and aggregation
patterns for relatively large systems that are not dominated by entropic considerations. In protein A, ART nouveau managed to identify metastable ordered structures of higher energy that can shed
light on close homological sequences. Finally, ART nouveau is also a very interesting tool for exploring conformation of large biological systems, if it is coupled with internal coordinates and
multiscale representations [73, 74].
3. Kinetic ART
While ART nouveau is very efficient for finding saddle points, it cannot be applied directly to evaluate the dynamics of a system as the inherent bias for selecting a specific barrier over the others
is not known. In the absence of such characterization, ART nouveau is ideal to sample possible events that can then serve either in a master equation or in a kinetic Monte Carlo scheme.
As mentioned in the introduction, the first approach has been pursued by Wales [39], who applied this scheme to protein folding [40–42]. Here, the main challenge is the handling of the rate
matrix, which grows rapidly with the dimensionality of the system and which is very sensitive to a complete sampling of low-energy barriers. Other groups, starting with Henkelman and Jónsson,
implemented a simple on-the-fly KMC approach with sampling of events after every step [43–45, 47]. This solution is time consuming, as new searches must be launched at every step, and makes little
use of previously found events.
Recently, a number of kinetic Monte Carlo algorithms have been proposed to address some of these concerns by constructing a catalog: the kinetic ART [48], the self-learning KMC [50], the
self-evolving atomistic KMC [51], and the local-environment KMC [75]. The challenge, for all these methods, is threefold: ensuring that the most relevant events are correctly included in the catalog,
devising a classification scheme that is efficient and can correctly discriminate events even in complex environments such as alloys and disordered materials, and updating, on the fly, the energy
barrier in order to fully take into account short- and long-range elastic deformations. While the various recently proposed KMC methods address some or all of these challenges, they will be rapidly
evolving over the coming years. Because of this, we focus here on kinetic ART, the method we have developed, which provides one possible answer to the difficult problem of simulating long-time activated dynamics in complex environments.
Our approach attempts to minimise the computational efforts while preserving the most correct long-time kinetics, including long-range elastic effects. To do so, it is necessary to generate an event
catalog that can be expanded and reused as simulations progress or new simulations are launched and, contrary to the vast majority of KMC schemes available, can handle off-lattice atomic positions.
This catalog must be as compact as possible, that is, offer an efficient classification scheme, yet be sufficiently flexible to handle alloys, surfaces, and disordered environments. In an off-lattice
situation, no catalog can provide precise energy barriers, as short- and long-range elastic deformations will affect the local environment. Rates, if derived from the catalog, must therefore be
reevaluated at each step to include elastic effects.
This is what kinetic ART achieves by using ART nouveau for generating new events and updating energy barriers, coupled with a topological classification for local environments, which allows the
construction of a discrete event catalog based on a continuous geometry even in the most complex environment.
Before going into details, it is useful to present the main steps of the algorithm.
(1) Starting from a local energy minimum, the local topology associated with each atom is analyzed. For each topology, ART nouveau searches are launched and new events are added to the catalog of events and attached to this topology. In order to ensure that the catalog is complete, event searches are not limited to new topologies: the number of searches is proportional to the logarithm of the frequency with which a topology appears in a given simulation. All events associated with the configuration at the local energy minimum are added to the tree.
(2) All low-energy events are reconstructed and the transition state is refined to take into account the impact of long-range elastic deformations on the rate.
(3) With all barriers known, the average Poisson rate is determined according to the standard KMC algorithm; the clock is pushed forward and an event is selected at random, with the proper weight [76].
(4) We go back to step (1).
Each of these steps involves, of course, a number of elements, which we describe in the following sections.
3.1. Topological Analysis: Constructing an Off-Lattice Event Catalog
In order to construct a catalog of events, it is necessary that these be discretized to ensure that events are recognized uniquely. For lattice-constrained motion, this requirement is straightforward
to implement by focusing simply on the occupancy of the various crystalline sites. Geometry, however, is no longer a satisfactory criterion for reconstructed sites, defective and disordered systems,
where atoms cannot easily be mapped back to regular lattice sites. Our solution is to use a topological description of local environments for mapping events uniquely, allowing us to describe
environments of any chemical and geometrical complexity. This is done by first selecting a central atom and all its neighbors within a given cut-off radius. Atoms are then connected, following a
predefined procedure that can be based loosely on first-neighbor distance or a Voronoi construction, forming a graph that is then analyzed using the software NAUTY, a powerful package developed by
McKay [75]. For our purposes, this package returns a unique tag associated with the topology and an ordered list of cluster atoms. The positions of the topology cluster atoms in the initial, saddle,
and final states are stored.
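For illustration, the cluster construction and a topology tag can be sketched as follows. This toy uses an iterative label-refinement hash as a stand-in for NAUTY's canonical labeling (which, unlike this sketch, is collision-free), and a simple cutoff-based connection rule; in an alloy, the initial labels would also encode the chemical species.

```python
import math
from itertools import combinations

def local_graph(positions, center, cutoff):
    """Cluster = the central atom plus all atoms within `cutoff`; every pair
    of cluster atoms closer than `cutoff` is connected by an edge."""
    def dist(a, b):
        return math.dist(positions[a], positions[b])
    cluster = [center] + [i for i in range(len(positions))
                          if i != center and dist(i, center) < cutoff]
    edges = {(a, b) for a, b in combinations(cluster, 2) if dist(a, b) < cutoff}
    return cluster, edges

def topology_key(cluster, edges, rounds=3):
    """Graph-invariant tag via iterative neighborhood label refinement, a toy
    stand-in for NAUTY's canonical labeling (hash collisions are possible
    here, whereas NAUTY's tag is exact)."""
    adj = {i: set() for i in cluster}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    label = {i: 1 for i in cluster}          # uniform start; species would go here
    for _ in range(rounds):
        label = {i: hash((label[i], tuple(sorted(label[j] for j in adj[i]))))
                 for i in cluster}
    return hash(tuple(sorted(label.values())))
```

Two clusters with the same connectivity but different positions receive the same tag, while a different connectivity yields a different tag, which is the discretization property the catalog relies on.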
For this discretization to work, it is necessary that a one-to-one correspondence exist between the topological description and the real-space geometry of this local region. This correspondence is
enforced by the fact that the graph has to be embedded into a well-defined surrounding geometry and must correspond to either a local minimum or a first-order saddle with the given forcefield. These
two elements ensure that the one-to-one correspondence is valid in the vast majority of conformations.
This correspondence may fail in two ways. First, for very flat minima or saddle points, there might be more than one topology associated with the same geometry. In this case, the corresponding events might be
overcounted. For this reason, we compare the barrier height as well as the absolute total displacement and the direction of motion of the main atom (the one that moves most during an event) both
between initial and final state and between initial state and the saddle point. We only add events to the catalog that are sufficiently different from previously found ones.
It is also possible that a single topology corresponds to more than one geometry. Most frequently, this problem appears in highly symmetric systems: two topologically identical events might differ
only in the direction of motion. To resolve this issue, we reconstruct the geometry and ensure that motion takes place in different directions. In this case, events are flagged as different and
additional ART nouveau searches are performed to identify all topologically equivalent but geometrically different moves.
In some cases, a topology is associated with fundamentally different geometries. Here, this failure is identified automatically, since it leads to incorrect saddle-point reconstruction that cannot be
converged. When this occurs, the algorithm automatically modifies the first-neighbor cut-off used for constructing the topology until the multiple geometries are separated into unique topologies. In
most systems, this is extremely rare, and it is generally a symptom of an inappropriately short topology cutoff radius.
Generic and Specific Events
Once a new topology is identified, ART nouveau saddle point searches are launched a number of times from the central atom associated with this topology (to increase the efficiency of finding
topologies with multiple geometries, events are launched from a randomly selected set of atoms characterized by the same topology). Depending on the system, we use between 25 and 50 searches each
time the number of occurrences of a topology increases by a factor of ten (e.g., 25–50 searches for a topology appearing once, 75–150 searches for a topology seen 100 times, etc.). To ensure detailed balance, these events are stored from both
the initial and final minima.
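The search schedule above amounts to a budget proportional to 1 + log10 of the occurrence count; a minimal sketch consistent with the quoted numbers (the exact schedule used in the code may differ):

```python
import math

def n_searches(base, times_seen):
    """Number of ART nouveau searches to devote to a topology seen
    `times_seen` times, growing with the logarithm of that count.
    `base` is the per-decade budget (25-50 in the text)."""
    return int(base * (1 + math.log10(times_seen)))
```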
These events, which are stored in the catalog, are called generic events and serve as the basis for reconstructing specific events associated with a given configuration. They can be accumulated and
serve for further studies, decreasing over time the computational effort associated with studying particular systems. For example, a catalog constructed for ion-implanted Si can be used as a
starting point for amorphous silicon or an SiGe alloy (as mentioned previously, the topological analysis allows us to handle alloys correctly, by discriminating between the atomic species involved).
3.2. Reconstructing the Geometry and Refining Low-Energy Events
While representative, the transition states and energy barriers in the catalog cannot be applied exactly as they are to new configurations, as short- and long-range elastic deformations affect every
barrier differently, creating favored directions even for formally isotropic defects, for example.
To include these effects, every generic event should be reconstructed for each atom and fully relaxed. This is what kinetic ART does, with one approximation: to limit the effort of refining generic
into specific events, only the kinetically relevant events are reconstructed and relaxed.
After each KMC step, the event list is ordered according to the barrier energy, and only the lowest-energy-barrier events, representing up to a given threshold (we typically use 99.9 or 99.99%) of the
total rate, are fully reconstructed and refined. Depending on the system and the temperature, this means that only one to ten percent of all barriers in the catalog are refined into specific events. The
remaining events, which contribute very little to the rate, are cloned from the generic events without adjusting the barrier.
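Selecting which generic events to refine can be done by sorting on barrier height and accumulating rates until the chosen fraction of the total rate is covered. A sketch, assuming harmonic transition-state-theory rates with a typical fixed prefactor of 10^13 s^-1:

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
NU0 = 1e13       # assumed attempt frequency, 1/s

def events_to_refine(barriers, temperature, fraction=0.9999):
    """Indices of the lowest-barrier events that together carry at least
    `fraction` of the total rate; only these are reconstructed and relaxed,
    the rest are cloned from the generic events."""
    rates = [NU0 * math.exp(-b / (K_B * temperature)) for b in barriers]
    total = sum(rates)
    order = sorted(range(len(barriers)), key=lambda i: barriers[i])
    chosen, acc = [], 0.0
    for i in order:
        chosen.append(i)
        acc += rates[i]
        if acc >= fraction * total:
            break
    return chosen
```

At 300 K a single low barrier dominates the rate so strongly that only a small subset of the catalog ever needs refining, consistent with the one-to-ten-percent figure quoted above.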
This local reconstruction takes place in two steps. First, using the reference atomic positions of the generic event, the geometric transformation necessary to map the initial state onto the current
configuration is determined. This operation is then applied to the atomic displacements between initial state and saddle point. From this first reconstruction of the saddle point geometry, the saddle
point (and with it the energy barrier) is refined. In the second step, the system is relaxed into the final state by pushing it over the saddle point. If it is impossible to map the initial state
onto the current configuration even though the central atoms have the same topology, we know that the one-to-one correspondence between topology and geometry has failed and apply the corrections
mentioned in the previous section.
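The first step of the reconstruction can be sketched as follows. This is a deliberately simplified, translation-only version: the actual mapping must also determine the rotational part of the transformation before transferring the stored displacements.

```python
def reconstruct_saddle(current, ref_displacements):
    """First guess of the specific event's saddle geometry: the current
    cluster positions plus the stored initial-to-saddle displacement vectors
    of the generic event. This sketch ignores the rotational part of the
    mapping that the real reconstruction must determine; the guess is then
    refined to obtain the specific barrier."""
    return [tuple(p + d for p, d in zip(pos, disp))
            for pos, disp in zip(current, ref_displacements)]
```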
3.3. Applying the Standard KMC Procedure
The reconstruction of specific events ensures that all elastic and geometrical deformations are taken into account when constructing the list of events that is used in the kinetic Monte Carlo
algorithm. This list now contains both refined low-energy specific events and clones of generic events with higher barrier energies.
Rates are determined according to transition state theory:
\[ r = \nu_0 \, e^{-E_b / k_B T}, \]
where $\nu_0$, the attempt frequency, is a system-dependent fixed constant of the order of $10^{13}\,\mathrm{s}^{-1}$; $E_b$ is the barrier height; and $k_B$ and $T$ are the Boltzmann constant
and the temperature, respectively. While it would be possible to extract a more precise attempt frequency from quasi-harmonic theory, we know that $\nu_0$ varies only weakly with the chosen pathway for a
large number of systems [77, 78].
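In code, the transition-state-theory rate $r = \nu_0 e^{-E_b/k_B T}$ reads (the default prefactor is a typical assumed value, not a fitted one):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def tst_rate(barrier_ev, temperature_k, nu0=1e13):
    """Harmonic transition-state-theory rate r = nu0 * exp(-E_b / (k_B * T)).
    nu0 defaults to an assumed typical attempt frequency of 1e13 1/s."""
    return nu0 * math.exp(-barrier_ev / (K_B * temperature_k))
```

At 300 K, $k_B T \approx 0.026$ eV, so lowering a barrier by 0.06 eV raises the rate roughly tenfold; this sensitivity is why refining barriers for elastic effects matters.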
These rates are then combined, following Bortz et al. [76], to extract the Poisson-distributed time step over which an event will take place:
\[ \Delta t = -\frac{\ln \mu}{\sum_i r_i}, \]
where $\mu$ is a random number in the $(0,1]$ interval introduced to ensure a Poisson distribution and $r_i$ is the rate associated with event $i$. The simulation clock is pushed forward by this
$\Delta t$, an event is selected with a weight proportional to its rate, and the atoms are moved accordingly. Finally, the whole system is relaxed into a minimum-energy configuration, and the next step can proceed.
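The residence-time draw and the weighted event selection of the Bortz-Kalos-Lebowitz step can be sketched as:

```python
import math
import random

def bkl_step(rates, rng):
    """One Bortz-Kalos-Lebowitz step: draw the Poisson-distributed waiting
    time from the total rate, then select an event with probability
    proportional to its individual rate."""
    total = sum(rates)
    # 1 - random() lies in (0, 1], so the logarithm is always defined
    dt = -math.log(1.0 - rng.random()) / total
    threshold, acc = rng.random() * total, 0.0
    for i, r in enumerate(rates):
        acc += r
        if acc >= threshold:
            return dt, i
    return dt, len(rates) - 1   # guard against floating-point round-off
```

Over many steps, an event with rate $r_i$ is chosen with frequency $r_i / \sum_j r_j$ and the mean time step equals the inverse total rate.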
3.4. Other Implementation Details
The local properties of kinetic ART allow a very efficient implementation of the algorithm. First, it is straightforward to dispatch the event searching and refining to separate processors. For a
typical system, between 20 and 40 processors can be used for these tasks (up to 128 when building the initial catalog), with an efficiency of 90% or more. Moreover, as events are inherently local,
there is no need to compute global forces during an ART nouveau event: only nonzero forces need to be computed most of the time, with a final global minimization needed only at the end of each KMC
step. This renders the cost of computing an event almost independent of system size for a fixed number of defects, and of order $N$ overall in simulated time for a system with a constant defect
density, at least with a short-range potential, making it scale as efficiently as molecular dynamics in this worst-case scenario.
In KMC, the residence time and the selection of the next state are dominated by the lowest-energy barrier present in the system. Groups of states separated by such low-energy barriers are called
basins. These pose a major obstacle to KMC simulations: basin transitions are often nondiffusive and trap the simulation in a small part of the configuration space. Additionally, the high rates
severely limit the time step, thus increasing the CPU cost without yielding meaningful physics. We developed the basin-autoconstructing mean rate method (bac-MRM) [49], based on the MRM described by
Puchala et al. [79], to overcome this limitation. If an event is selected that matches the description of a basin event (small energy difference between saddle point and both initial and final
state), the event is executed and then added to the current basin of events. Kinetic ART keeps all other available events in memory and adds to this list those originating from the state in which the
system is now. This means that before the next KMC step, events originating from all basin states can be selected. The rates of the events are then adjusted to average over all possible numbers of
intrabasin transitions. This ensures that the next event and the time step are picked with the correct distribution, providing the correct kinetics even though, of course, the intrabasin trajectory
information is lost. If the next event is another basin event, the basin is expanded again. This procedure is repeated until a nonbasin event is selected (usually after all basin events have been
added to the basin). The bac-MRM ensures that the system will not visit any basin state twice and can thus escape being trapped in configuration space.
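The basin criterion quoted above (a small energy difference between the saddle point and both end minima) reduces to a simple predicate, with the bac-MRM energy cutoff as threshold:

```python
def is_basin_event(e_initial, e_saddle, e_final, cutoff):
    """True when both the forward barrier (saddle minus initial minimum) and
    the backward barrier (saddle minus final minimum) are below `cutoff`,
    i.e. the event is a fast flicker to be absorbed into the current basin."""
    return (e_saddle - e_initial) < cutoff and (e_saddle - e_final) < cutoff
```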
3.5. Example: Relaxation of an Ion-Bombarded Semiconductor
We first demonstrate the capability of kinetic ART by applying it to the relaxation of a large Si box after ion implantation. In a previous work by Pothier et al. [79], a Si atom was implanted at
3 keV in a box containing 100 000 crystalline silicon (c-Si) atoms at 300 K. This system was then simulated for 900 ps with molecular dynamics and the Stillinger-Weber potential. From the final
configuration of this simulation, we extracted a smaller box of 26 998 atoms which contains the vast majority of induced defects and imposed periodic boundary conditions.
Kinetic ART is applied at 300 K to this complex system. Figure 5(a) reports the evolution of three simulations with the same initial configuration over 3000 events, each reaching a simulated time of
over 1 ms, at a cost of 8000 CPU-hours. During this time, the system relaxes by nearly 60 eV as numerous defects crystallize. Furthermore, we see that the potential energy drops are not all identical,
which indicates that a wide variety of relaxation mechanisms take place on these timescales.
Meanwhile, Figure 5(b) shows the energy barriers of the events which were executed in one of the simulations. The gaps in time are a consequence of the bac-MRM mentioned in Section 3.4. Indeed, if a
basin contains some moderately high barriers, we will spend a good deal of time in it without sampling new states. This figure also shows the richness of the energy landscape: events are executed
with a great variety of barrier energies, which underlines the importance of treating elastic effects accurately. The large number of new topologies encountered (even though the system lowers its
energy) is another indication of the richness of this system.
While standard KMC simulations easily run for millions of steps, regularly reaching timescales of minutes or more, fundamental restrictions have limited them to simple lattice-based problems such as
metal-on-metal growth. The possibility, with kinetic ART, of applying these accelerated approaches to complex materials such as ion-bombarded Si, over simulated timescales many orders of magnitude
longer than what can be reached by MD, opens the door to the study, in numerous systems, of long-time atomistic dynamics that was long considered out of reach. This is also the case for fully disordered materials, as the next example shows.
3.6. Example: Relaxation in Amorphous Silicon
Here, we show that even complex materials such as amorphous silicon (a-Si), a system that has been used as a test case for ART over the years [11, 12], can be handled with kinetic ART.
Disordered systems are characterized by an extremely large number of possible configurations, which excludes studies with traditional lattice-based Monte Carlo methods. Event catalogs will therefore
be large, as almost every atom in a box of a few thousand atoms has its own topology. However, since the topological classification is based on the local environment, it still provides a real gain
given enough sampling.
Figure 6 shows a 28.5 s simulation, started with a well-relaxed 1000-atom a-Si model and a preformed event catalog of over 87 000 events, at 300 K. Since the original configuration is already well
explored, the simulation needs to generate only a few events every time a new topology is found, underlining the powerful capabilities of kinetic ART to generate and classify activated events in
complex environments. Over the 2542 KMC steps of the simulation, only 1000 new topologies were found.
In a system such as a-Si, the continuous distribution of activated barriers means that there is not a clear separation between frequent and rare events. The energy cutoff for bac-MRM is then chosen
so that the inverse of the associated rate is small compared to the desired simulated time. Since in an amorphous system flicker-like events can occur at any energy scale, the value must be adjusted
as a function of the degree of relaxation and temperature. In the present case, the cutoff value was 0.35 eV, meaning that while the thermodynamics is accurate at all scales, internal dynamics on
timescales shorter than 120 ns is ignored. This simulation required 617 CPU-hours on six processors for a total run-time of 103 hours. As shown in Figure 6, with a well-filled catalog, only a few
hundred unknown topologies are visited during the simulation.
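The timescale quoted for the 0.35 eV cutoff is consistent with the inverse transition-state-theory rate at that barrier. With an assumed attempt frequency of 10^13 s^-1 (the exact prefactor is not restated here), the inverse rate at 300 K comes out at the ~100 ns scale:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def inverse_rate(barrier_ev, temperature_k, nu0=1e13):
    """Mean waiting time 1/r of a single event in harmonic TST; nu0 is an
    assumed typical attempt frequency, not the value used in the paper."""
    return math.exp(barrier_ev / (K_B * temperature_k)) / nu0
```

A slightly smaller prefactor reproduces the 120 ns figure exactly; either way, a barrier 0.1 eV below the cutoff flickers roughly fifty times faster at this temperature, which is why such events must be absorbed into basins.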
Starting from a well-relaxed configuration, the simulation quickly reaches the μs timescales, where higher-energy flicker-like events dominate the dynamics. Still, the system finds a way to relax on
three occasions. For instance, the large drop observed at 8.7 s is the result of the system choosing an event with a large 0.51 eV barrier. Interestingly, the event involves only perfectly
coordinated atoms and can be characterized as a bond-switching move where two neighboring atoms exchange neighbors, allowing an atom with high angular strain on three of its bonds to relax near its
equilibrium state.
Even though disordered systems require more computational effort than crystalline-based defective systems, applying kinetic ART to such systems allows the study of kinetics on timescales well beyond
what can be reached with traditional molecular dynamics, as this short test simulation shows.
4. Conclusion
We have presented the activation-relaxation technique, ART nouveau, a very efficient open-ended search method for transition states, and have shown how it can be applied to search extensively for
diffusion mechanisms and pathways as well as low-energy configurations in systems as diverse as interstitials in iron and a 60-residue protein. With recent improvements, finding a transition state can
take as little as 210 force evaluations and, when lost events are taken into account, between 300 and 700 force evaluations, allowing for the extensive searches, with either empirical or ab initio
forces, needed to fully describe the energy landscape of complex systems.
As the event-selection bias is unknown in ART nouveau, it cannot be used directly to study the kinetics of atomistic systems. This limitation is lifted by coupling it to a kinetic Monte Carlo scheme.
Kinetic ART goes beyond this simple coupling, however, by introducing a topological classification for catalog building that fully preserves the impact of elastic deformations on the kinetics while
being fully off-lattice. The efficiency of this approach was demonstrated by characterizing the relaxation of a 30 000-atom ion-bombarded box of silicon and a 1000-atom box of amorphous silicon.
Although not presented here, the algorithm also readily handles other systems such as metals and alloys. At the moment, this is done using a fixed prefactor for the attempt frequency in transition
state theory. As discussed previously, such an approximation is fairly good for a number of systems. However, we are currently implementing a version of the algorithm that also evaluates prefactors.
While this is more costly, it could be essential to describe correctly the kinetics of complex materials.
Both ART nouveau and kinetic ART open the door to questions that were out of reach only a few years ago, whether identifying diffusion mechanisms and catalytic reactions or recreating full diffusion
and relaxation pathways in complex materials such as alloys and glasses, on previously inaccessible timescales. As discussed previously, a number of other accelerated techniques have been proposed
recently, and it is too early to determine whether one method will really stand out. Irrespective of this, ART nouveau and kinetic ART have already demonstrated their usefulness.
Acknowledgments
E. Machado-Charry, P. Pochet, and N. Mousseau acknowledge support by Nanosciences Fondation in the frame of MUSCADE Project. Partial funding was also provided by the Natural Science and Engineering
Research Council of Canada, the Fonds Québécois de Recherche Nature et Technologie, and the Canada Research Chair Foundation. This work was performed using HPC resources from GENCI-CINES (Grant
2010-c2010096323) and from Calcul Québec (CQ). The ART nouveau code is freely distributed in a format appropriate for both empirical and quantum forces. Kinetic ART is still under development but a
version should be available for distribution in the near future. To receive the code or for more information, please contact the authors.
References
1. A. F. Voter, “Hyperdynamics: accelerated molecular dynamics of infrequent events,” Physical Review Letters, vol. 78, no. 20, pp. 3908–3911, 1997.
2. P. Tiwary and A. van de Walle, “Hybrid deterministic and stochastic approach for efficient atomistic simulations at long time scales,” Physical Review B, vol. 84, no. 10, Article ID 100301, 2011.
3. A. Laio and M. Parrinello, “Escaping free-energy minima,” Proceedings of the National Academy of Sciences of the United States of America, vol. 99, no. 20, pp. 12562–12566, 2002.
4. D. J. Wales and J. P. K. Doye, “Global optimization by basin-hopping and the lowest energy structures of Lennard-Jones clusters containing up to 110 atoms,” Journal of Physical Chemistry A, vol. 101, no. 28, pp. 5111–5116, 1997.
5. S. Goedecker, “Minima hopping: an efficient search method for the global minimum of the potential energy surface of complex molecular systems,” Journal of Chemical Physics, vol. 120, no. 21, pp. 9911–9917, 2004.
6. G. Henkelman, B. P. Uberuaga, and H. Jónsson, “Climbing image nudged elastic band method for finding saddle points and minimum energy paths,” Journal of Chemical Physics, vol. 113, no. 22, pp. 9901–9904, 2000.
7. A. Goodrow, A. T. Bell, and M. Head-Gordon, “Transition state-finding strategies for use with the growing string method,” Journal of Chemical Physics, vol. 130, no. 24, Article ID 244108, 2009.
8. C. J. Cerjan and W. H. Miller, “On finding transition states,” The Journal of Chemical Physics, vol. 75, no. 6, pp. 2800–2801, 1981.
9. J. Simons, P. Jørgensen, H. Taylor, and J. Ozment, “Walking on potential energy surfaces,” Journal of Physical Chemistry, vol. 87, no. 15, pp. 2745–2753, 1983.
10. H. Taylor and J. Simons, “Imposition of geometrical constraints on potential energy surface walking procedures,” Journal of Physical Chemistry, vol. 89, no. 4, pp. 684–688, 1985.
11. G. T. Barkema and N. Mousseau, “Event-based relaxation of continuous disordered systems,” Physical Review Letters, vol. 77, no. 21, pp. 4358–4361, 1996.
12. N. Mousseau and G. T. Barkema, “Traveling through potential energy landscapes of disordered materials: the activation-relaxation technique,” Physical Review E, vol. 57, pp. 2419–2424, 1998.
13. R. Malek and N. Mousseau, “Dynamics of Lennard-Jones clusters: a characterization of the activation-relaxation technique,” Physical Review E, vol. 62, no. 6, pp. 7723–7728, 2000.
14. J. P. K. Doye and D. J. Wales, “Surveying a potential energy surface by eigenvector-following: applications to global optimisation and the structural transformations of clusters,” Zeitschrift für Physik D: Atoms, Molecules and Clusters, vol. 40, no. 1–4, pp. 194–197, 1997.
15. L. J. Munro and D. J. Wales, “Defect migration in crystalline silicon,” Physical Review B, vol. 59, no. 6, pp. 3969–3980, 1999.
16. G. Henkelman and H. Jónsson, “A dimer method for finding saddle points on high dimensional potential surfaces using only first derivatives,” Journal of Chemical Physics, vol. 111, no. 15, pp. 7010–7022, 1999.
17. N. Mousseau and L. J. Lewis, “Topology of amorphous tetrahedral semiconductors on intermediate length scales,” Physical Review Letters, vol. 78, no. 8, pp. 1484–1487, 1997.
18. Y. Kumeda, D. J. Wales, and L. J. Munro, “Transition states and rearrangement mechanisms from hybrid eigenvector-following and density functional theory: application to C10H10 and defect migration in crystalline silicon,” Chemical Physics Letters, vol. 341, no. 1-2, pp. 185–194, 2001.
19. G. Henkelman and H. Jónsson, “Long time scale kinetic Monte Carlo simulations without lattice approximation and predefined event table,” Journal of Chemical Physics, vol. 115, no. 21, pp. 9657–9666, 2001.
20. G. Wei, P. Derreumaux, and N. Mousseau, “Sampling the complex energy landscape of a simple β-hairpin,” Journal of Chemical Physics, vol. 119, no. 13, pp. 6403–6406, 2003.
21. F. El-Mellouhi, N. Mousseau, and P. Ordejón, “Sampling the diffusion paths of a neutral vacancy in silicon with quantum mechanical calculations,” Physical Review B, vol. 70, no. 20, Article ID 205202, 2004.
22. L. Xu, G. Henkelman, C. T. Campbell, and H. Jónsson, “Small Pd clusters, up to the tetramer at least, are highly mobile on the MgO(100) surface,” Physical Review Letters, vol. 95, no. 14, Article ID 146103, 2005.
23. D. Ceresoli and D. Vanderbilt, “Structural and dielectric properties of amorphous ZrO2 and HfO2,” Physical Review B, vol. 74, no. 12, Article ID 125108, 2006.
24. F. Calvo, T. V. Bogdan, V. K. De Souza, and D. J. Wales, “Equilibrium density of states and thermodynamic properties of a model glass former,” Journal of Chemical Physics, vol. 127, no. 4, Article ID 044508, 2007.
25. D. Mei, L. Xu, and G. Henkelman, “Dimer saddle point searches to determine the reactivity of formate on Cu(111),” Journal of Catalysis, vol. 258, no. 1, pp. 44–51, 2008.
26. D. Rodney and C. Schuh, “Distribution of thermally activated plastic events in a flowing glass,” Physical Review Letters, vol. 102, no. 23, Article ID 235503, 2009.
27. M. Baiesi, L. Bongini, L. Casetti, and L. Tattini, “Graph theoretical analysis of the energy landscape of model polymers,” Physical Review E, vol. 80, no. 1, Article ID 011905, 2009.
28. H. Kallel, N. Mousseau, and F. Schiettekatte, “Evolution of the potential-energy surface of amorphous silicon,” Physical Review Letters, vol. 105, no. 4, Article ID 045503, 2010.
29. M.-C. Marinica, F. Willaime, and N. Mousseau, “Energy landscape of small clusters of self-interstitial dumbbells in iron,” Physical Review B, vol. 83, no. 9, Article ID 094119, 2011.
30. E. Machado-Charry, L. K. Béland, D. Caliste et al., “Optimized energy landscape exploration using the ab initio based activation-relaxation technique,” Journal of Chemical Physics, vol. 135, no. 3, Article ID 034102, 2011.
31. O. M. Becker and M. Karplus, “The topology of multidimensional potential energy surfaces: theory and application to peptide structure and kinetics,” Journal of Chemical Physics, vol. 106, no. 4, pp. 1495–1517, 1997.
32. M. A. Miller and D. J. Wales, “Energy landscape of a model protein,” Journal of Chemical Physics, vol. 111, no. 14, pp. 6610–6616, 1999.
33. D. J. Wales and T. V. Bogdan, “Potential energy and free energy landscapes,” Journal of Physical Chemistry B, vol. 110, no. 42, pp. 20765–20776, 2006.
34. A. T. Anghel, D. J. Wales, S. J. Jenkins, and D. A. King, “Theory of C2Hx species on Pt{110} (1×2): reaction pathways for dehydrogenation,” Journal of Chemical Physics, vol. 126, no. 4, Article ID 044710, 2007.
35. T. James, D. J. Wales, and J. Hernández Rojas, “Energy landscapes for water clusters in a uniform electric field,” Journal of Chemical Physics, vol. 126, no. 5, Article ID 054506, 2007.
36. V. K. de Souza and D. J. Wales, “Connectivity in the potential energy landscape for binary Lennard-Jones systems,” Journal of Chemical Physics, vol. 130, no. 19, Article ID 194508, 2009.
37. S. Santini, G. Wei, N. Mousseau, and P. Derreumaux, “Pathway complexity of Alzheimer's β-amyloid Aβ16-22 peptide assembly,” Structure, vol. 12, no. 7, pp. 1245–1255, 2004.
38. J.-F. St-Pierre, N. Mousseau, and P. Derreumaux, “The complex folding pathways of protein A suggest a multiple-funnelled energy landscape,” Journal of Chemical Physics, vol. 128, no. 4, Article ID 045101, 2008.
39. D. J. Wales, “Discrete path sampling,” Molecular Physics, vol. 100, no. 20, pp. 3285–3305, 2002.
40. D. A. Evans and D. J. Wales, “The free energy landscape and dynamics of met-enkephalin,” Journal of Chemical Physics, vol. 119, no. 18, pp. 9947–9955, 2003.
41. S. A. Trygubenko and D. J. Wales, “Graph transformation method for calculating waiting times in Markov chains,” Journal of Chemical Physics, vol. 124, no. 23, Article ID 234110, 2006.
42. J. M. Carr and D. J. Wales, “Refined kinetic transition networks for the GB1 hairpin peptide,” Physical Chemistry Chemical Physics, vol. 11, no. 18, pp. 3341–3354, 2009.
43. G. Henkelman and H. Jónsson, “Theoretical calculations of dissociative adsorption of CH4 on an Ir(111) surface,” Physical Review Letters, vol. 86, no. 4, pp. 664–667, 2001.
44. F. Hontinfinde, A. Rapallo, and R. Ferrando, “Numerical study of growth and relaxation of small C60 nanoclusters,” Surface Science, vol. 600, no. 5, pp. 995–1003, 2006.
45. L. Xu, D. Mei, and G. Henkelman, “Adaptive kinetic Monte Carlo simulation of methanol decomposition on Cu(100),” Journal of Chemical Physics, vol. 131, no. 24, Article ID 244520, 2009.
46. T. F. Middleton and D. J. Wales, “Comparison of kinetic Monte Carlo and molecular dynamics simulations of diffusion in a model glass former,” Journal of Chemical Physics, vol. 120, no. 17, pp. 8134–8143, 2004.
47. Y. Fan, A. Kushima, S. Yip, and B. Yildiz, “Mechanism of void nucleation and growth in bcc Fe: atomistic simulations at experimental time scales,” Physical Review Letters, vol. 106, no. 12, Article ID 125501, 2011.
48. F. El-Mellouhi, N. Mousseau, and L. J. Lewis, “Kinetic activation-relaxation technique: an off-lattice self-learning kinetic Monte Carlo algorithm,” Physical Review B, vol. 78, no. 15, Article ID 153202, 2008.
49. L. K. Béland, P. Brommer, F. El-Mellouhi, J.-F. Joly, and N. Mousseau, “Kinetic activation-relaxation technique,” Physical Review E, vol. 84, no. 4, Article ID 046704, 2011.
50. A. Kara, O. Trushin, H. Yildirim, and T. S. Rahman, “Off-lattice self-learning kinetic Monte Carlo: application to 2D cluster diffusion on the fcc(111) surface,” Journal of Physics: Condensed Matter, vol. 21, no. 8, Article ID 084213, 2009.
51. H. Xu, Y. N. Osetsky, and R. E. Stoller, “Simulating complex atomistic processes: on-the-fly kinetic Monte Carlo scheme with selective active volumes,” Physical Review B, vol. 84, no. 13, Article ID 132103, 2011.
52. D. Konwar, V. J. Bhute, and A. Chatterjee, “An off-lattice, self-learning kinetic Monte Carlo method using local environments,” Journal of Chemical Physics, vol. 135, no. 17, Article ID 174103, 2011.
53. D. A. Terentyev, T. P. C. Klaver, P. Olsson et al., “Self-trapped interstitial-type defects in iron,” Physical Review Letters, vol. 100, no. 14, Article ID 145503, 2008.
54. C. Lanczos, Applied Analysis, Dover, New York, NY, USA, 1988.
55. P. Pulay, “Convergence acceleration of iterative sequences: the case of SCF iteration,” Chemical Physics Letters, vol. 73, no. 2, pp. 393–398, 1980.
56. R. Shepard and M. Minkoff, “Some comments on the DIIS method,” Molecular Physics, vol. 105, no. 19–22, pp. 2839–2848, 2007.
57. E. Cancès, F. Legoll, M. C. Marinica, K. Minoukadeh, and F. Willaime, “Some improvements of the activation-relaxation technique method for finding transition pathways on potential energy
surfaces,” Journal of Chemical Physics, vol. 130, no. 11, Article ID 114711, 2009. View at Publisher · View at Google Scholar · View at Scopus
58. E. Bitzek, P. Koskinen, F. Gähler, M. Moseler, and P. Gumbsch, “Structural relaxation made simple,” Physical Review Letters, vol. 97, no. 17, Article ID 170201, 2006. View at Publisher · View at
Google Scholar · View at Scopus
59. R. A. Olsen, G. J. Kroes, G. Henkelman, A. Arnaldsson, and H. Jónsson, “Comparison of methods for finding saddle points without knowledge of the final states,” Journal of Chemical Physics, vol.
121, no. 20, pp. 9776–9792, 2004. View at Publisher · View at Google Scholar · View at Scopus
60. A. Heyden, A. T. Bell, and F. J. Keil, “Efficient methods for finding transition states in chemical reactions: comparison of improved dimer method and partitioned rational function optimization
method,” Journal of Chemical Physics, vol. 123, Article ID 224101, 14 pages, 2005. View at Publisher · View at Google Scholar
61. J. Kästner and P. Sherwood, “Superlinearly converging dimer method for transition state search,” Journal of Chemical Physics, vol. 128, no. 1, Article ID 014106, 2008. View at Publisher · View at
Google Scholar
62. A. Goodrow and A. T. Bell, “A theoretical investigation of the selective oxidation of methanol to formaldehyde on isolated vanadate species supported on titania,” Journal of Physical Chemistry C,
vol. 112, no. 34, pp. 13204–13214, 2008. View at Publisher · View at Google Scholar · View at Scopus
63. Y. Fan, A. Kushima, and B. Yildiz, “Unfaulting mechanism of trapped self-interstitial atom clusters in bcc Fe: a kinetic study based on the potential energy landscape,” Physical Review B, vol.
81, no. 10, Article ID 104102, 2010. View at Publisher · View at Google Scholar · View at Scopus
64. A. Melquiond, G. Boucher, N. Mousseau, and P. Derreumaux, “Following the aggregation of amyloid-forming peptides by computer simulations,” The Journal of chemical physics, vol. 122, no. 17, p.
174904, 2005. View at Scopus
65. N. Mousseau and P. Derreumaux, “Exploring energy landscapes of protein folding and aggregation,” Frontiers in Bioscience, vol. 13, no. 12, pp. 4495–4516, 2008. View at Publisher · View at Google
Scholar · View at Scopus
66. H. Gouda, H. Torigoe, A. Saito, M. Sato, Y. Arata, and I. Shimada, “Three-dimensional solution structure of the B domain of staphylococcal protein A: comparisons of the solution and crystal
structures,” Biochemistry, vol. 31, no. 40, pp. 9665–9672, 1992. View at Scopus
67. J. K. Myers and T. G. Oas, “Preorganized secondary structure as an important determinant of fast protein folding,” Nature Structural Biology, vol. 8, no. 6, pp. 552–558, 2001. View at Publisher ·
View at Google Scholar
68. S. Sato, T. L. Religa, V. Dagget, and A. R. Fersht, “Testing protein-folding simulations by experiment: B domain of protein A,” Proceedings of the National Academy of Sciences of the United
States of America, vol. 101, no. 18, pp. 6952–6956, 2004. View at Publisher · View at Google Scholar · View at Scopus
69. J. N. Onuchic and P. G. Wolynes, “Theory of protein folding,” Current Opinion in Structural Biology, vol. 14, no. 1, pp. 70–75, 2004.
70. J. Lee, A. Liwo, and H. A. Scheraga, “Energy-based de novo protein folding by conformational space annealing and an off-lattice united-residue force field: application to the 10–55 fragment of
staphylococcal protein A and to apo calbindin D9K,” Proceedings of the National Academy of Sciences of the United States of America, vol. 96, no. 5, pp. 2025–2030, 1999. View at Scopus
71. G. Favrin, A. Irbäck, and S. Wallin, “Folding of a small helical protein using hydrogen bonds and hydrophobicity forces,” Proteins: Structure, Function and Genetics, vol. 47, no. 2, pp. 99–105,
2002. View at Publisher · View at Google Scholar · View at Scopus
72. P. A. Alexander, D. A. Rozak, J. Orban, and P. N. Bryan, “Directed evolution of highly homologous proteins with different folds by phage display: implications for the protein folding code,”
Biochemistry, vol. 44, no. 43, pp. 14045–14054, 2005. View at Publisher · View at Google Scholar · View at Scopus
73. M.-R. Yun, R. Lavery, N. Mousseau, K. Zakrzewska, and P. Derreumaux, “ARTIST: an activated method in internal coordinate space for sampling protein energy landscapes,” Proteins: Structure,
Function and Genetics, vol. 63, no. 4, pp. 967–975, 2006. View at Publisher · View at Google Scholar · View at Scopus
74. L. Dupuis and N. Mousseau, “Holographic multiscale method used with non-biased atomistic force fields for simulation of large transformations in protein,” Journal of Physics, vol. 341, no. 1,
Article ID 012015, 2012. View at Publisher · View at Google Scholar
75. B. D. McKay, “Practical graph isomorphism,” Congressus Numerantium, vol. 30, pp. 45–87, 1981.
76. A. B. Bortz, M. H. Kalos, and J. L. Lebowitz, “A new algorithm for Monte Carlo simulation of Ising spin systems,” Journal of Computational Physics, vol. 17, no. 1, pp. 10–18, 1975.
77. Y. Song, R. Malek, and N. Mousseau, “Optimal activation and diffusion paths of perfect events in amorphous silicon,” Physical Review B, vol. 62, no. 23, pp. 15680–15685, 2000. View at Publisher ·
View at Google Scholar
78. H. Yildirim, A. Kara, and T. S. Rahman, “Origin of quasi-constant pre-exponential factors for adatom diffusion on Cu and Ag surfaces,” Physical Review B, vol. 76, no. 16, Article ID 165421, 2007.
View at Publisher · View at Google Scholar
79. B. Puchala, M. L. Falk, and K. Garikipati, “An energy basin finding algorithm for kinetic Monte Carlo acceleration,” Journal of Chemical Physics, vol. 132, no. 13, Article ID 134104, 2010. View
at Publisher · View at Google Scholar
|
{"url":"http://www.hindawi.com/journals/jamp/2012/925278/","timestamp":"2014-04-20T01:27:04Z","content_type":null,"content_length":"164027","record_id":"<urn:uuid:f3758dc4-a218-4d4d-b258-daa0f89f0d39>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Approximating Pi
July 29, 2011
Here is our version of the logarithmic integral:
(define (logint x)
  (let ((gamma 0.57721566490153286061) (log-x (log x)))
    (let loop ((k 1) (fact 1) (num log-x)
               (sum (+ gamma (log log-x) log-x)))
      (if (< 100 k) sum
          (let* ((k (+ k 1))
                 (fact (* fact k))
                 (num (* num log-x))
                 (sum (+ sum (/ num fact k))))
            (loop k fact num sum))))))
We need to be able to factor integers to compute the Möbius function. For general usage you may want a more advanced factoring function, but since we only need to factor the integers up to one hundred, simple factorization by trial division suffices:
(define (factors n)
  (let loop ((n n) (fs (list)))
    (if (even? n) (loop (/ n 2) (cons 2 fs))
        (if (= n 1) (if (null? fs) (list 1) fs)
            (let loop ((n n) (f 3) (fs fs))
              (cond ((< n (* f f)) (reverse (cons n fs)))
                    ((zero? (modulo n f))
                     (loop (/ n f) f (cons f fs)))
                    (else (loop n (+ f 2) fs))))))))
The Möbius function runs down the list of factors, returning 0 if any factor is the same as its predecessor or ±1 depending on the count:
(define (mobius-mu n)
  (if (= n 1) 1
      (let loop ((fs (factors n)) (prev 0) (m 1))
        (cond ((null? fs) m)
              ((= (car fs) prev) 0)
              (else (loop (cdr fs) (car fs) (- m)))))))
The Riemann function computes a list of Möbius numbers once, when the function is instantiated, then runs through the list accumulating the sum:
(define riemann-r
  (let ((ms (let loop ((n 1) (k 100) (ms (list)))
              (if (zero? k) (reverse ms)
                  (let ((m (mobius-mu n)))
                    (if (zero? m) (loop (+ n 1) k ms)
                        (loop (+ n 1) (- k 1) (cons (* m n) ms))))))))
    (lambda (x)
      (let loop ((ms ms) (sum 0))
        (if (null? ms) sum
            (let* ((m (car ms)) (m-abs (abs m)) (m-recip (/ m)))
              (loop (cdr ms)
                    (+ sum (* m-recip (logint (expt x (/ m-abs))))))))))))
Now for the payoff. The table below shows the differences between π(x) and the values of the logarithmic integral and Riemann’s function at various values of x:
                 pi    li-pi    r-pi
        -----------    -----   -----
10^3            168       10       0
10^6          78498      130      29
10^9       50847534     1701     -79
10^12   37607912018    38263   -1476
The logarithmic integral approximation is good, but the Riemann R approximation is stunning; the leading seven digits are right for 10^12. And the Riemann R approximation improves, percentage-wise,
as x increases.
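For readers who prefer Python, the same computation can be sketched in Python 3. This is a loose port of the Scheme above, not the original program: it sums over n = 1..100 directly rather than over the first 100 squarefree n, and uses its own trial-division Möbius function, so tiny numerical differences from the Scheme version are possible (the omitted tail terms are negligible at these magnitudes).

```python
from math import log, factorial

GAMMA = 0.5772156649015329

def li(x):
    """Gram series for the logarithmic integral, truncated at 100 terms."""
    lx = log(x)
    return GAMMA + log(lx) + sum(lx ** k / (factorial(k) * k)
                                 for k in range(1, 101))

def mobius(n):
    """Mobius function by trial division: 0 for a squared prime factor,
    otherwise (-1) raised to the number of prime factors."""
    if n == 1:
        return 1
    m, f = 1, 2
    while f * f <= n:
        if n % f == 0:
            n //= f
            if n % f == 0:
                return 0        # squared prime factor
            m = -m
        f += 1
    return -m if n > 1 else m

def riemann_r(x):
    """Riemann's R(x): sum of mu(n)/n * li(x**(1/n))."""
    return sum(mobius(n) / n * li(x ** (1.0 / n)) for n in range(1, 101))

print(round(li(1000)))         # 178, giving li - pi = 10 at 10^3
print(round(riemann_r(1000)))  # close to pi(10^3) = 168
```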
You can run the program at http://programmingpraxis.codepad.org/Fvwfmjc5.
2 Responses to “Approximating Pi”
1. August 1, 2011 at 2:51 PM
Here’s my Python solution; it reuses primes, a variation of the Python Cookbook’s version of the Sieve of Eratosthenes.
from math import log
from operator import mul

GAMMA = 0.57721566490153286061

def li(x):
    return GAMMA + log(log(x)) + sum(pow(log(x), n) /
        (reduce(mul, xrange(1, n + 1)) * n) for n in xrange(1, 101))

def factors(n):
    # primes is the sieve mentioned above
    return (p for p in primes(n + 1) if n % p == 0)

def mu(n):
    m = lambda p: -1 if (n / p) % p else 0
    return reduce(mul, (m(p) for p in factors(n)), 1)

def r(x):
    # float division: mu(n)/n must not floor under Python 2
    return sum(mu(n) / float(n) * li(pow(x, 1.0 / n)) for n in xrange(1, 101))
2. August 25, 2012 at 3:36 AM
Clojure version; uses the O'Neill sieve to get a list of primes. Results for the Riemann function match the table above.
(load "lazy-sieve")

(defn factors
  "list of factors of n"
  [n]
  (loop [n n, prime lazy-primes, l (vector-of :int)]
    (let [p (first prime)]
      (cond
        (= n 1) l
        (= (mod n p) 0) (recur (/ n p) prime (conj l p))
        :else (recur n (rest prime) l)))))

(defn calc-mu
  "Calculate mu(n) for use in building a static array, n > 1"
  [n]
  (let [fac (factors n)]
    (if (not= (count fac) (count (distinct fac)))
      0
      (reduce * (map (fn [x] -1) fac)))))

(def mu (into (vector-of :int 0 1)
              (map calc-mu (range 2 101))))

(defn powers [x]
  (iterate (partial * x) x))

(defn float-factorials []
  (map first (iterate (fn [[n, i]] [(* n i) (inc i)]) [1.0 2])))

(defn logint
  "Compute Gauss' logarithmic integral to N terms"
  [n, x]
  (let [gamma 0.5772156649015329,
        ln-x (Math/log x),
        sum (reduce +
                    (map #(/ %1 (* %2 %3))
                         (take n (powers ln-x))
                         (take n (float-factorials))
                         (range 1 (inc n))))]
    (+ gamma (Math/log ln-x) sum)))

(defn riemann
  "Compute riemann function, max 100 terms"
  [x]
  (reduce +
          (map #(* (/ (mu %1) %1) (logint 100 (Math/pow x (/ 1 %1))))
               (range 1 101))))
I need the Venn diagram for this question; can anyone help me? I am so lost.
A mini license plate for a toy car must consist of a vowel followed by two numbers. Each number must be a 1 or 6. Repetition of digits is permitted.
Use the counting principle to determine the number of points in the sample space.
Construct a tree diagram to represent this situation.
List the sample space.
Determine the exact probability of creating a mini license plate with an A. Give the solution exactly in reduced fraction form.
there are 5 vowels and each of the two numbers can be a 1 or a 6, so a total of \[5\times 2\times 2=20\] possibilities
one fifth of them will contain an A
this is sooo wrong but i tried
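For anyone wanting to check the counting-principle answer, the sample space is small enough to enumerate directly from the problem statement (a vowel, then two digits each restricted to 1 or 6); a short Python sketch:

```python
from itertools import product
from fractions import Fraction

vowels = "AEIOU"
digits = (1, 6)                      # each digit must be a 1 or a 6

# counting principle: 5 * 2 * 2 points in the sample space
sample_space = list(product(vowels, digits, digits))
print(len(sample_space))             # 20

# exact probability of a plate starting with A, in reduced form
with_a = [plate for plate in sample_space if plate[0] == "A"]
print(Fraction(len(with_a), len(sample_space)))   # 1/5
```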
Planet Infinity
1. Types of numbers
Discuss the different number types and explain the relationships between the types.
· Explore the following: natural numbers, whole numbers, integers, rational numbers, irrational numbers, etc.
· Write the information about the origin of various types of numbers.
· Explore the information about the history of numbers.
Make a project on it.
2. Matchstick Games
Matchstick games link
You can prepare a project on matchstick games.
3. Number Spirals
Know about number spirals and explore the beauty of the hidden mathematics in them.
4. Number Systems of the world
This is an interesting link which contains information about the number systems of the world. Explore this site and make an interesting project.
5. Fibonacci Numbers
The Fibonacci numbers are 0, 1, 1, 2, 3, 5, 8, 13, ... Each number in the sequence is obtained by summing the previous two numbers. Observe the sequence and write the next 20 terms. Explore its presence in nature. For example, when counting the number of petals of a flower, it is most probable that the count will correspond to one of the Fibonacci numbers. It is seen that:
o Lilies have 3 petals
o Buttercups commonly have 5 petals
o Delphiniums have 8 petals
o Ragworts have 13 petals
o Asters have 21 petals
Find more information of this kind on the internet. Explore the information about Leonardo Pisano Fibonacci (1170 - 1250). You can take help from the following link.
You can make a project on the Fibonacci sequence and its presence in nature. The series is named after Leonardo of Pisa (Filius Bonacci), alias Leonardo Fibonacci, born in 1175, whose great book the Liber Abaci (1202), on arithmetic, was a standard work for 200 years and is still considered the best book written on arithmetic. The following link will help you to gather useful information.
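The summing rule above is easy to try out on a computer; a short Python sketch that generates the sequence:

```python
def fibonacci(count):
    """First `count` Fibonacci numbers: each term is the sum of the
    previous two, starting from 0 and 1."""
    seq = [0, 1]
    while len(seq) < count:
        seq.append(seq[-1] + seq[-2])
    return seq[:count]

print(fibonacci(10))   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```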
6. Symbols in mathematics
In the previous session, one of the students made a project on mathematical symbols, their origin and utility. You can also try this one. Some of the information is given below; search for more information.
The factorial symbol n!
The symbol n!, called factorial n, now universally used in algebra, was introduced in 1808 by Christian Kramp (1760-1826) of Strasbourg, who chose it so as to circumvent the printing difficulties incurred by the previously used symbol.
The symbols for similarity and congruency
Our familiar signs in geometry for similar (∼) and for congruent (≅) are due to Leibniz (1646-1716). Leibniz made important contributions to the notation of mathematics.
The symbol for angle and right angle
In 1923, the National Committee on Mathematical Requirements, sponsored by the Mathematical Association of America, recommended this symbol as standard usage for angle in the United States.
Historically, Pierre Herigone, in a French work in 1634, was apparently the first person to use a symbol for angle.
The symbol for pi
This symbol for pi was used by the early English mathematicians William Oughtred (1574-1660), Isaac Barrow (1630-1677), and David Gregory (1661-1701) to designate the circumference, or periphery, of a circle. The first to use the symbol for the ratio of the circumference to the diameter was the English writer William Jones, in a publication in 1706. The symbol was not generally used in this sense, however, until Euler (1707-1783) adopted it in 1737.
The symbol for infinity
John Wallis (1616-1703) was one of the most original English mathematicians of his day. He was educated for the Church at Cambridge and entered Holy Orders, but his genius was employed chiefly in the
study of mathematics. The Arithmetica infinitorum, published in 1655, is his greatest work. This symbol for infinity is first found in print in his 1655 publication Arithmetica Infinitorum.
The symbols for ratio and proportion
The symbol : to indicate ratio seems to have originated in England early in the 17th century. It appears in a text entitled Johnson's Arithmetick; In two Bookes (London, 1633), but to indicate a fraction, three quarters being written 3:4. Have fun exploring mathematics.
7. Magic Squares
Click on the given link to know the details.
Explore the following:
What is a magic square?
Explore on its historical background.
What is an odd ordered magic square?
Explain methods of making them.
What is a dated magic square?
How to make a dated magic square?
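As a starting point for exploring these questions, one classical construction for odd-ordered magic squares, usually called the Siamese method, can be sketched in Python (verify the method's details against your own references):

```python
def siamese_magic_square(n):
    """Build an odd-order magic square by the Siamese method: start in the
    middle of the top row, keep moving up-and-right (wrapping around), and
    drop down one row whenever the target cell is already filled."""
    assert n % 2 == 1, "the method works for odd orders only"
    square = [[0] * n for _ in range(n)]
    row, col = 0, n // 2
    for value in range(1, n * n + 1):
        square[row][col] = value
        up, right = (row - 1) % n, (col + 1) % n
        if square[up][right]:          # occupied: move down instead
            row = (row + 1) % n
        else:
            row, col = up, right
    return square

for line in siamese_magic_square(3):
    print(line)
# [8, 1, 6]
# [3, 5, 7]
# [4, 9, 2]
```

Every row, column, and diagonal of the result sums to the magic constant n(n^2 + 1)/2.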
8. Making Koch tetrahedron
The following model of Koch's tetrahedron was made by a student at K.H.M.S. Planet Infinity.
9 comments:
Can you have the procedure for making a working model on any of the following:-
Pythagoras theorem
Bar Graph
Congruence of triangles
Identity a(squared)- b(squared)
3D Figures
Properties of triangles
You are from The Mother's International School?
What's your name?
what is the use of koch tetrahedron
Useful topics are given. i liked it !!!!!!
i want working models on
*algebra etc
tetrahedron is really wonderful
project on statistics and construction of circles or construction of quadrilaterals required urgently.
At that time each subject involved projects, so now students have a new way to express what they learn. In my opinion students choose a good topic from their area of interest so they can do better on
The Purplemath Forums
Hello, everyone. I was hoping you guys could help me out. I'm about to take this huge college entrance exam in two months and I bought this reviewer at a bookstore - the problem, however, is that the
reviewer I bought only posts the ANSWERS to the questions without explanations.
How do you solve combination problems when they have specifications?
For example -
In how many ways can a committee of five be chosen out of 15 people, 7 of which are males and 8 of which are females, if one male and one female are to be included in each selection?
(according to the answer key, the answer is 16,106)
A shipment of 14 personal computers contains 4 defective units. In how many ways can a company purchase 5 of these units and receive at least 2 of the defective units?
(according to the answer key, the answer is 910)
any help would be great.
Re: combination with specifications
misstokyo wrote: In how many ways can a committee of five be chosen out of 15 people, 7 of which are males and 8 of which are females, if one male and one female are to be included in each selection?
In how many ways can you choose the one required male? In how many ways can you choose the one required female? In how many ways can you choose the remaining three members? Multiplying, what value do
you get?
misstokyo wrote: A shipment of 14 personal computers contains 4 defective units. In how many ways can a company purchase 5 of these units and receive at least 2 of the defective units?
To receive "at least two", you will receive two or three or four. The opposite possibility is receiving zero or one. So one method would be:
In how many ways can you chose none of the defective units from the four available? In how many ways can you choose the remaining five units from the remaining ten?
In how many ways can you choose one of the defective units from the four available? In how many ways can you choose the remaining four units from the remaining ten?
Subtract these values from the total number of ways to choose five of fourteen.
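The complement method described above is easy to check numerically; a quick sketch using Python's `math.comb` for the binomial coefficients:

```python
from math import comb

# all ways to choose 5 of the 14 units
total = comb(14, 5)                                        # 2002

# ways with fewer than 2 defective units (0 or 1 of the 4)
bad = comb(4, 0) * comb(10, 5) + comb(4, 1) * comb(10, 4)

print(total - bad)                                         # 910

# cross-check by direct count: exactly 2, 3, or 4 defective units
direct = sum(comb(4, k) * comb(10, 5 - k) for k in range(2, 5))
print(direct)                                              # 910
```

Both routes agree with the answer key's 910.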
Re: combination with specifications
maggiemagnet wrote:
misstokyo wrote: In how many ways can a committee of five be chosen out of 15 people, 7 of which are males and 8 of which are females, if one male and one female are to be included in each selection?
In how many ways can you choose the one required male? In how many ways can you choose the one required female? In how many ways can you choose the remaining three members? Multiplying, what
value do you get?
there are 7 choices for the one required male, 8 choices for the one required female, which leaves 13 people to choose from for the third spot, 12 people for the fourth, and 11 for the last spot. Multiplying 7 x 8 x 13 x 12 x 11, I got 96,096 - but the answer key in my book says the answer should be 16,106.
Re: combination with specifications
On what basis are you finding the number of ways that they could be picked in order? Isn't this a combination, so the order doesn't matter?
Re: combination with specifications
maggiemagnet wrote:On what basis are you finding the number of ways that they could be picked in order? Isn't this a combination, so the order doesn't matter?
oh. right. I forgot about that. *embarrassed* well anyway, would it be okay to ask for a detailed explanation of how to solve these kinds of combination problems? I've been looking around the net and all I've seen are the "normal" or "regular" combinations taken r at a time - I haven't seen any website mentioning anything about specifications.
[music-dsp] Envelope Detection and Hilbert Transform
Bob M BobM.DSP at gmail.com
Fri Sep 24 20:33:43 EDT 2004
First off, thanks to everyone for a phenomenal discussion. There's a
lot for me to sort through!
To share with other some things I have learned here..
First off, I was confused because of the name "Hilbert Transform", or
more specifically that it contains the word "transform." I thought
transforms (such as Fourier, LaPlace, z...) are meant to go from one
domain to another. However, it seems the Hilbert Transform goes from
time domain to time domain. The end result seems to be that it
produces the imaginary component of the real signal.
I can understand the instantaneous magnitude concept from this. The
way I gather it, it's analogous to the magnitude of a frequency bin in
the frequency domain. That is:
(Re^2 + Im^2)^0.5
Conceptually, I think doing the above operation in the time domain
could be thought of as the sum of the magnitudes of all frequencies at
that instant: instantaneous amplitude.
But "instantaneous frequency" is new to me. I can understand it
mathematically. Now that we have the Re,Im components in the time
domain, we can find the phase just as we would in the frequency
domain. Then the derivative of the phase is the instantaneous
frequency. But conceptually, I'm lost. I'd be interested to know what
this figure looks like when there are sine waves of multiple
frequencies present.
I also asked if low-pass filtering the result of the Hilbert Transform
is necessary, and now I'm convinced that it is not. For example, how
could you resolve a triangular envelope accurately if you low-pass
filter that envelope? I think the triangular shape would not be preserved.
I'm still new to performing FIR filters in the frequency domain
(overlap and add), which was recommended. However, from what I could
see, an extremely flat all-pass frequency response (except near the
DC/Nyquist bins) can be had with a fairly small number of Hilbert
Transform coefficients and a window applied. For example, I checked
the gain plot of a HT using 29 coefficients and a Hanning window, and
it looked extremely flat to me. Is it really necessary to do FFT
filtering, or is that only if you're looking for a flat response all
the way out to DC and Nyquist?
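For reference, the kind of windowed FIR Hilbert transformer described above can be sketched in plain Python. This uses the standard ideal impulse response (2/(pi*k) for odd k from the centre, zero elsewhere) with a Hann window; the tap count and test frequency are just illustrative, not a claim about any particular implementation.

```python
import math

def hilbert_fir(taps=29):
    """Hann-windowed ideal Hilbert-transformer coefficients (odd tap count).
    The ideal impulse response is 2/(pi*k) for odd k, 0 for even k,
    with k measured from the filter centre."""
    m = taps // 2
    coeffs = []
    for i in range(taps):
        k = i - m
        ideal = 0.0 if k % 2 == 0 else 2.0 / (math.pi * k)
        w = 0.5 - 0.5 * math.cos(2 * math.pi * i / (taps - 1))  # Hann window
        coeffs.append(ideal * w)
    return coeffs

def envelope(x, taps=29):
    """Instantaneous magnitude sqrt(Re^2 + Im^2): the real part is the
    delay-compensated input, the imaginary part the FIR Hilbert output."""
    h = hilbert_fir(taps)
    m = taps // 2
    out = []
    for i in range(len(x)):
        im = sum(h[j] * x[i - j] for j in range(taps) if 0 <= i - j < len(x))
        re = x[i - m] if i >= m else 0.0
        out.append(math.hypot(re, im))
    return out

# a steady sine at 0.2 of the sample rate should give an envelope near 1.0
tone = [math.sin(2 * math.pi * 0.2 * i) for i in range(400)]
env = envelope(tone)
print(max(abs(e - 1.0) for e in env[100:300]))  # small: the response is flat here
```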
Thanks again,
On Fri, 24 Sep 2004 19:22:17 -0400, Citizen Chunk
<citizenchunk at gmail.com> wrote:
> hi James. i just did a little reading on the good ole "interweb" and
> found this page on the Hilbert Transform:
> http://www.numerix-dsp.com/envelope.html
> from what I can gather, because of the 90 degree phase shift, the HT
> (not hyper threading ;) ) helps to "fill in the gaps" that would be
> present in a rectified oscillating signal. this seems a lot like the
> RMS value--but RMS is an average level, not the peak level. so the HT
> is kind of like an average that just follows the peak value. hope i'm
> making sense and properly understanding this thingy.
> if you have a sidechain signal that is tracing the rectified peak
> signal, it will attack/release as the signal rises/falls, which can
> cause envelope ripple. you could get a smoother envelope by using RMS,
> but that will slow down reaction time. from what i gather here, if you
> use the HT, you get a very smooth peak signal, which can then be
> averaged (for something like an RMS) or passed to an attack/release
> detector, for typical compressor behavior. so perhaps using the HT can
> actually reduce envelope ripple and IM distortion.
> or do i have it all wrong? ...
> == chunk
More information about the music-dsp mailing list
Weymouth Prealgebra Tutor
Find a Weymouth Prealgebra Tutor
...I work with students on understanding that this section is basically reading comprehension using scientific experiments. The method I teach is similar to what I teach for the reading: quickly
reading passages and understanding the main idea, and learning how to answer questions based solely on the information provided. The ISEE is a test for admission to private schools.
26 Subjects: including prealgebra, English, linear algebra, algebra 1
...I have experience mentoring and tutoring students in my community in academics as well as for college applications, and more recently, my peers in college who have struggled in certain subject
areas. While I am confident in my own academic skills, my strongest asset lies in my intuition for prob...
38 Subjects: including prealgebra, English, chemistry, reading
...I like my students to keep a journal so they can see the power that writing has to communicate and the relevance it can have in their lives. I also stress creativity and just plain "fun" so
writing is a lifetime skill to be enjoyed. I also try to work with my students on a variety of projects that incorporate writing into the project as the key skill.
31 Subjects: including prealgebra, English, reading, writing
...I have a true passion for learning and subsequently enjoy being able to pass on the enthusiasm and the knowledge that I can to my students. I pride myself on my experience with younger
children and adolescents through my work as a counselor in leadership camps, summer camps, after-school program...
30 Subjects: including prealgebra, English, reading, calculus
...During high school (German Gymnasium) I took the most advanced Math courses (college prep/AP) and continued to be a resource to fellow students who had difficulties grasping terminology or
concepts. My son took a similar course and is now studying Math and Computer Science at Carnegie Mellon University, PA. Math does not have to be difficult.
21 Subjects: including prealgebra, English, writing, ESL/ESOL
Nearby Cities With prealgebra Tutor
Braintree prealgebra Tutors
Brockton, MA prealgebra Tutors
Brookline, MA prealgebra Tutors
Dorchester, MA prealgebra Tutors
East Weymouth prealgebra Tutors
Hingham, MA prealgebra Tutors
Hull, MA prealgebra Tutors
Hyde Park, MA prealgebra Tutors
North Weymouth prealgebra Tutors
Quincy, MA prealgebra Tutors
Randolph, MA prealgebra Tutors
Revere, MA prealgebra Tutors
Roxbury, MA prealgebra Tutors
South Weymouth prealgebra Tutors
Weymouth Lndg, MA prealgebra Tutors
West Collingswood Heights, NJ Calculus Tutor
Find a West Collingswood Heights, NJ Calculus Tutor
...This academic background requires considerably more advanced mathematical and analytical skills than are required for the GMAT. On the management front, I moved from a staff position at Oxford
University to industry in 1974 as a section manager for the aerospace division of a privately owned ins...
10 Subjects: including calculus, GRE, algebra 1, GED
...If you don't get that grade, I will refund your money, minus any commission I paid to this website. Please note that I only tutor college students, advanced high school students, returning
adult students, and those studying for standardized tests such as SAT, GRE, and professional licensure exam...
11 Subjects: including calculus, statistics, ACT Math, precalculus
...I was very effective in critiques and frequently assisted the other students in my classes. I've been painting with acrylic paint for over 10 years and it is one of my primary media. As a
trained scientific illustrator, acrylic paint is a major medium for me.
19 Subjects: including calculus, geometry, algebra 2, trigonometry
I am graduate student working in engineering and I want to tutor students in SAT Math and Algebra and Calculus. I think I could do a good job. I studied Chemical Engineering for undergrad, and I
received a good score on the SAT Math, SAT II Math IIC, GRE Math, and general math classes in school.
8 Subjects: including calculus, geometry, algebra 1, algebra 2
...I received an A. I used these topics in many chemical engineering courses after that. I received a Bachelor's in chemical engineering at Rensselaer Polytechnic Institute in 2010.
25 Subjects: including calculus, chemistry, physics, writing
|
{"url":"http://www.purplemath.com/West_Collingswood_Heights_NJ_calculus_tutors.php","timestamp":"2014-04-17T11:01:37Z","content_type":null,"content_length":"24877","record_id":"<urn:uuid:e157eb4d-694f-4ec8-aeed-8747a57605b0>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions - Re: Matheology § 222 Back to the roots
Date: Feb 15, 2013 5:58 PM
Author: William Hughes
Subject: Re: Matheology § 222 Back to the roots
On Feb 15, 10:30 pm, WM <mueck...@rz.fh-augsburg.de> wrote:
> On 15 Feb., 00:44, William Hughes <wpihug...@gmail.com> wrote:
> > > > Two potentially infinite sequences x and y are
> > > > equal iff for every natural number n, the
> > > > nth FIS of x is equal to the nth FIS of y
> > So we note that it makes perfect sense to ask
> > if potentially infinite sequences x and y are equal,
> and to answer that they can be equal if they are actually infinite.
> But this answer does not make sense.
> You cannot prove equality without having an end, a q.e.d..
A very strange statement. Anyway there is no reason to
claim equality. Let us define the term coFIS
Two potentially infinite sequences x and y are said to be
coFIS iff for every natural number n, the
nth FIS of x is equal to the nth FIS of y.
We note that it makes perfect sense to ask
if potentially infinite sequences x and y are coFIS,
we have cases where they are not coFIS and cases
where they are coFIS. We also note that no
concept of completed is needed, so coFIS can
be demonstrated by induction. In particular, you
do not need a last element to prove that x and y
are coFIS.
So WM's statements are
there is a line l such that d and l
are coFIS
there is no line l such that d and l
are coFIS
|
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8341669","timestamp":"2014-04-20T06:33:29Z","content_type":null,"content_length":"2587","record_id":"<urn:uuid:9ca6b435-45a7-43c9-ad29-9a55bcbf0691>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Aarush sat down to study at 8 am, when the angle of elevation of the sun was 15°. When he had finished studying, the angle of elevation of the sun was 55°. For how long did he study?
If Aarush was located somewhere on the equator, and the day he studied was the 21st of March or September, then the sun was exactly overhead at noon. In such a case, we can conclude that the sun rises at 0 deg. measured from horizon east, and sets at 180 deg. measured from horizon east 12 hours later. Assuming the sun was still due east when he finished studying, then the sun had risen (55-15)=40 degrees => 40/180*12 hours = 40/15 hours = 8/3 hours. If the sun was due west with an angle of elevation of 55 degrees, then the sun had moved (180-55)-15=110 degrees, and the elapsed time was then 110/180*12 hours = 22/3 hours.
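Under the same equator/equinox assumptions, the arithmetic can be sketched in a few lines of Python. The function name and the 15°-per-hour constant are just restatements of the reasoning above, not part of the original answer:

```python
DEG_PER_HOUR = 180 / 12  # the sun sweeps 180 deg of arc in 12 hours

def study_hours(start_deg, end_deg, sun_due_west=False):
    """Hours elapsed while the sun's elevation rose from start_deg
    (measured from the eastern horizon) to end_deg. When sun_due_west
    is True the final elevation is on the western side, so the sun
    has swept (180 - end_deg) - start_deg degrees in total."""
    if sun_due_west:
        swept = (180 - end_deg) - start_deg
    else:
        swept = end_deg - start_deg
    return swept / DEG_PER_HOUR

print(study_hours(15, 55))        # sun still east: 8/3 h, about 2 h 40 min
print(study_hours(15, 55, True))  # sun due west: 22/3 h, about 7 h 20 min
```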
{"url":"http://openstudy.com/updates/50f17793e4b0abb3d86f98be","timestamp":"2014-04-21T07:39:55Z","content_type":null,"content_length":"44163","record_id":"<urn:uuid:758bed07-7529-4c6b-a7b4-ee9f6d7a66f1>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Active Contours without Edges
In this talk, I will present a new model for active contours to detect objects in a given image. The model is based on techniques of curve evolution, Mumford-Shah functional for segmentation, and the
level set method of S. Osher and J. Sethian. The model can detect objects whose boundaries are not necessarily defined by gradient. We minimize an energy which can be seen as a particular case of the
so-called minimal partition problem. In the level set formulation, the problem becomes a ``mean-curvature flow''-like evolving the active contour, which will stop on the desired boundary. However,
the stopping term does not depend on the gradient of the image, as in the classical active contour models, but it is instead related to a particular segmentation of the image. Finally, I will present
various experimental results and in particular some examples for which the classical snakes methods based on the gradient are not applicable. We will also see that interior contours are automatically detected.
|
{"url":"http://iris.usc.edu/Projects/seminars/vese-1.html","timestamp":"2014-04-20T05:59:41Z","content_type":null,"content_length":"1746","record_id":"<urn:uuid:bea02009-3f49-42c8-8fc4-2a2694a20345>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: formalism
Judy Roitman roitman at math.ukans.edu
Wed Oct 29 10:39:53 EST 1997
This came off the post-secondary mathematics education list out of Warwick,
and seems relevant to FOM. The author is David Henderson, who has
instantiated his notions of proof by writing at least one highly
non-traditional college text ("Experiencing Geometry on Plane and Sphere";
the title makes his point of view fairly explicit). Any replies should be
copied to him.
"Now, I think that the formalisms in mathematics that have been developed in
this century are clearly very powerful and useful in many situations. I,
and every mathematician that I know, do not think that the formalism ARE
mathematics. For a recent paper, I tried to find quotes positing the
formality of mathematics and could only finds ones from computer scientists
talking about mathematics. But the issues remains: What is the role of
formalisms in mathematics and when and where should it be taught and what
should be the emphasis?"
Judy Roitman | "Whoppers Whoppers Whoppers!
Math, University of Kansas | memory fails
Lawrence, KS 66045 | these are the days."
785-864-4630 | Larry Eigner, 1927-1996
Note new area code
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/1997-October/000112.html","timestamp":"2014-04-25T06:38:33Z","content_type":null,"content_length":"3836","record_id":"<urn:uuid:f94d903c-de40-4cb6-a9fc-63d340609169>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
|
: N-Puzzle
This 8 Puzzle visualization shows an implementation of a DFS algorithm, a greedy algorithm and an A* algorithm.
In the N-puzzle you have an M by M board (where M = sqrt(N+1)) filled with tiles numbered 1 through N. Therefore there is always one empty tile. For instance - in the 8 puzzle, there is a 3 by 3 board
with tiles numbered 1 through 8. The tiles start out in some initial placement, and the goal is to bring them to a placement in which the tiles are ordered according to the numbers that appear on
them, and the empty tile is the bottom right-hand corner. In each turn only one tile can be moved. This tile must be one of the tiles adjacent to the empty spot, and moving a tile simply moves it
into the empty spot.
Note: While the algorithms are searching for a valid sequence of moves, they do not necessarily perform only valid moves. For instance - several algorithms treat the positioning of the tiles on the
board as a node in a search tree. Going between nodes in the search tree is allowed even though there is no single valid move which will bring the board between the two nodes. This is fine as long as
when the algorithm finds a path from the initial state to the goal state and the path itself consists solely of valid moves.
For the sake of visualizing the search process, when an algorithm is processing a certain node, it may appear that an invalid move has been performed (moving more than one tile or moving tiles
diagonally). This can happen only in the search phase and is done solely for the sake of visualizing the search process.
The search algorithms implemented here are:
• DFS: The algorithm searches through all the possible movements, making sure the same state is never repeated. When it reaches a "dead end" (a state from which all movements lead to states that
have already been explored), it backtracks to the previous state. This algorithm can run very slowly, because there are (N+1)! different states, and the DFS iterates over them randomly. For
instance - for the 8-puzzle there are 9! = 362,880 possible states.
• Best-First Search: The algorithm doesn't appear in the list, but it serves as a basis for other algorithms and therefore deserves some explanation. The Best-First Search algorithm treats each
positioning of tiles on the board as a state. Each state is given a score which may differ between the different implementations of this algorithm. The algorithm keeps two lists of states: a list
of states for which all children have been searched (the closed list) and a list of states which should still be checked (the open list). The open list is a prioritized queue according to the
state score. A state's children are any other states reachable by a valid move from the state. The algorithm begins with the initial board state in the open list. During each search iteration it
takes the first state out of the open list and processes its children. Each child's score is calculated. If the child exists with a better score in either list, it is skipped. Otherwise the child
is added to the open list (in case the child already exists in the open list, its existing copy is first removed from the open list). The algorithm continues processing until either the open list
is empty (in which case all of the states have been processed and no solution has been found) or until a goal state is found.
• Greedy: The algorithm is an implementation of the Best-First Search algorithm. The score is a sum over the Manhattan Block Distance between each tile and its target position on the board.
Manhattan Block Distance is the distance between two locations on the board, where only horizontal and vertical movements are allowed.
• A*: The algorithm is an implementation of the Best-First Search algorithm. The score is a sum over the Manhattan Block Distance between each tile and its target position on the board plus the
length of the path from the initial state to the current state.
• Greedy Solution Only: The algorithm is the same Greedy algorithm, except that the best path is searched without being displayed. Once a path is found - it is played out step after step. Note that
the search process can be long and can take up a lot of memory and CPU. Press the stop button if you wish to stop the search at any stage.
• A* Solution Only: The algorithm is the same A* algorithm, except that the best path is searched without being displayed. Once a path is found - it is played out step after step. Note that the
search process can be long and can take up a lot of memory and CPU. Press the stop button if you wish to stop the search at any stage.
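The greedy and A* variants above hinge on two pieces: the Manhattan Block Distance heuristic and the open-list bookkeeping. Here is a minimal Python sketch of A* for the 8-puzzle. This is not the applet's actual code; the state encoding (a 9-tuple with 0 as the empty tile) and function names are illustrative choices:

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 marks the empty tile

def manhattan(state):
    """Sum of Manhattan block distances from each tile to its goal cell."""
    dist = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = tile - 1
        dist += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return dist

def neighbors(state):
    """States reachable by sliding one adjacent tile into the empty spot."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            lst = list(state)
            lst[i], lst[j] = lst[j], lst[i]
            yield tuple(lst)

def astar(start):
    """Return a shortest path (list of states) from start to GOAL."""
    open_heap = [(manhattan(start), 0, start, [start])]
    best_g = {start: 0}          # cheapest known path length to each state
    while open_heap:
        f, g, state, path = heapq.heappop(open_heap)
        if state == GOAL:
            return path
        for nxt in neighbors(state):
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):   # skip worse duplicates
                best_g[nxt] = ng
                heapq.heappush(open_heap,
                               (ng + manhattan(nxt), ng, nxt, path + [nxt]))
    return None                  # open list exhausted: no solution
```

Dropping the `ng + manhattan(nxt)` term down to just `manhattan(nxt)` turns this into the greedy variant described above; dropping the path-length bookkeeping entirely gives plain best-first search.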
|
{"url":"http://yuval.bar-or.org/index.php?item=10","timestamp":"2014-04-18T20:54:36Z","content_type":null,"content_length":"18664","record_id":"<urn:uuid:573f4b07-2f95-4515-b534-9e2bc0d03f74>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Floral Formulas and Diagrams
Convenient shorthand methods of
recording floral symmetry, number of
parts, connation and adnation,
insertion, and ovary position.
Floral Formulas
• A floral formula consists of five symbols
indicating from left to right:
• Floral Symmetry
• Number of Sepals
• Number of Petals
• Number of Stamens
• Number of Carpels
Floral Formulas
• Floral formulas are useful tools for
remembering characteristics of the various
angiosperm families. Their construction
requires careful observation of individual
flowers and of variation among the flowers
of the same or different individuals.
Floral Formula Symbol 1
• The first symbol in a floral formula describes the
symmetry of a flower.
• (*) Radial symmetry – Divisible into equal
halves by two or more planes of symmetry.
• (x) Bilateral symmetry – Divisible into equal
halves by only one plane of symmetry.
• ($) Asymmetrical – Flower lacking a plane of
symmetry, neither radial or bilateral.
Floral Formula Symbol 2
• The second major symbol in the floral
formula is the number of sepals, with “K”
representing “calyx”. Thus, K5 would
mean a calyx of five sepals.
Floral Formula Symbol 3
• The third symbol is the number of petals,
with “C” representing “corolla”. Thus, C5
means a corolla of 5 petals.
Floral Formula Symbol 4
• The fourth symbol in the floral formula is
the number of stamens (androecial items),
with “A” representing “androecium”. A∞
(the symbol for infinity) indicates
numerous stamens and is used when
stamens number more than twelve in a
flower. A10 would indicate 10 stamens.
Floral Formula Symbol 5
• The fifth symbol in a floral formula
indicates the number of carpels, with “G”
representing “gynoecium”. Thus, G10
would describe a gynoecium of ten carpels.
Basic Floral Formula
• *, K5, C5, A∞, G10
• Radial symmetry (*),
• 5 sepals in the calyx (K5)
• 5 petals in the corolla (C5)
• Numerous (12 or more) stamens (A∞)
• 10 carpels (G10)
Floral Formulas
• At the end of the floral formula, the fruit
type is often listed.
• Example:
• *, K5, C5, A∞, G10, capsule
More on Floral Formulas
• Connation (like parts fused) is indicated by
a circle around the number representing
the parts involved. For example, in a
flower with 5 stamens all fused (connate)
by their filaments, the floral formula
representation would be:
More on Floral Formulas
• The plus symbol (+) is used to indicate
differentiation among the members of any
floral part. For example, a flower with five
large stamens alternating with five small
ones would be recorded as:
• A5 + 5.
More on Floral Formulas
• Adnation (fusion of unlike parts) is
indicated by a line connecting the numbers
representing different floral parts. Thus, a
flower that has 4 fused petals (connate
corolla) with 2 stamens fused (or adnate)
to this corolla, is described as:
• C4,A2
More on Floral Formulas
• The presence of a hypanthium (flat,
cuplike, or tubular structure on which the
sepals, petals, and stamens are borne
usually formed from the fused bases of the
perianth parts and stamens) is indicated in
the same fashion as adnation:
• X, K 5, C 5, A 10, G 5
More on Floral Formulas
• Sterile stamens or sterile carpels can be
indicated by placing a dot next to the
number of these sterile structures. Thus,
a flower with a fused (syncarpous)
gynoecium composed of five fertile carpels
and five sterile carpels would be
represented as:
• G5+5
More on Floral Formulas
• Variation in the number of floral parts
within a taxon is indicated by using a dash
(-) to separate the minimum and maximum
numbers. For example a taxon that has
flowers with either 4 or 5 sepals would be
indicated as:
• K 4-5
More on Floral Formulas
• Variation with a taxon in either connation
or adnation is indicated by using a dashed
(instead of continuous) line:
• C 3, A 6
More on Floral Formulas
• The lack of a particular floral part is
indicated by placing a zero (0) in the
appropriate position in the floral formula.
For example, a carpellate flower (flower
with a gynoecium but no functional
androecium) would be described as:
• *, K3, C3, A0, G2
More on Floral Formulas
• Flowers with a perianth of tepals (no
differentiation between calyx and corolla)
have the second and third symbols
combined into one. A hyphen(-) is placed
before and after the number in this
symbol. Example:
• *, T-5-, A 10, G 3
More on Floral Formulas
• A line below the carpel number indicates
the superior position of the ovary with
respect to other floral parts. G3
• A line above the carpel number indicates
the inferior position of the ovary with
respect to other floral parts. G3
Floral Diagrams
• Floral diagrams are stylized cross sections
of flowers that represent the floral whorls
as viewed from above. Rather like floral
formulas, floral diagrams are used to show
symmetry, numbers of parts, the
relationships of the parts to one another,
and degree of connation and/or adnation.
Such diagrams cannot easily show ovary position.
Floral Diagram Symbols I
Floral Diagram Symbols II
Sample floral diagrams
Sample Floral Diagrams Described
|
{"url":"http://www.docstoc.com/docs/5616178/Floral-Formulas-and-Diagrams","timestamp":"2014-04-17T23:45:03Z","content_type":null,"content_length":"60619","record_id":"<urn:uuid:c6c1e714-c7d6-4dcd-a728-0f259fb7a5de>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00549-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wilshire Park, LA Calculus Tutor
Find a Wilshire Park, LA Calculus Tutor
...I can sympathize with that. I have been academically intimidated, and occasionally over-matched, having attended very competitive undergraduate and graduate schools myself. I also recognize
which advisors were able to help me and which ones were not.
58 Subjects: including calculus, English, reading, writing
I have worked as an electrical engineer, chemist, materials scientist, physicist, nanotechnologist, consultant, software programmer, and handyman. I enjoy learning and spreading knowledge and
strive to help students not only learn the subjects they are studying but actually become passionate and mo...
42 Subjects: including calculus, reading, Spanish, chemistry
...In 2013 I completed my PhD research in Computational Bio-physics. During all nine years I have tutored physics students of all levels and helped them quantitatively and qualitatively
understand the physical world. I have been tutoring ESL since I was age 14 and my family hosted a foreign exchange student from Spain.
44 Subjects: including calculus, reading, chemistry, Spanish
...I also am comfortable with Bayesian statistics. I scored an 800 on the math section when I took the SAT. Math has always been my best subject and I know a lot of tricks to solve problems
quickly and easily.
18 Subjects: including calculus, chemistry, geometry, Spanish
...This manner of teaching increases their understanding and teaches them what steps are most likely to help them solve problems. I understand the difficulties of learning complex material, and I
do my best to help improve both a student's performance as well as their understanding. Performed very well in Algebra 2 in high school.
15 Subjects: including calculus, reading, algebra 1, geometry
Related Wilshire Park, LA Tutors
Wilshire Park, LA Accounting Tutors
Wilshire Park, LA ACT Tutors
Wilshire Park, LA Algebra Tutors
Wilshire Park, LA Algebra 2 Tutors
Wilshire Park, LA Calculus Tutors
Wilshire Park, LA Geometry Tutors
Wilshire Park, LA Math Tutors
Wilshire Park, LA Prealgebra Tutors
Wilshire Park, LA Precalculus Tutors
Wilshire Park, LA SAT Tutors
Wilshire Park, LA SAT Math Tutors
Wilshire Park, LA Science Tutors
Wilshire Park, LA Statistics Tutors
Wilshire Park, LA Trigonometry Tutors
Nearby Cities With calculus Tutor
Bicentennial, CA calculus Tutors
Century City, CA calculus Tutors
Farmer Market, CA calculus Tutors
Glendale Galleria, CA calculus Tutors
La Tijera, CA calculus Tutors
Lafayette Square, LA calculus Tutors
Miracle Mile, CA calculus Tutors
Oakwood, CA calculus Tutors
Pico Heights, CA calculus Tutors
Playa, CA calculus Tutors
Rimpau, CA calculus Tutors
Sanford, CA calculus Tutors
Santa Western, CA calculus Tutors
Toluca Terrace, CA calculus Tutors
Westwood, LA calculus Tutors
|
{"url":"http://www.purplemath.com/Wilshire_Park_LA_calculus_tutors.php","timestamp":"2014-04-20T09:06:15Z","content_type":null,"content_length":"24421","record_id":"<urn:uuid:96fdf890-feba-4282-8e25-a28e85a18b9a>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On the mathematical model for linear elastic systems with analytic damping.
(English) Zbl 0644.93048
Linear elastic systems with damping
(1)   ÿ + Bẏ + Ay = 0
in Hilbert spaces are analysed. In (1), A is a positive definite unbounded linear operator and B is a closed linear operator. First-order linear systems equivalent to (1) are used to study the systems (1). In the present paper the author investigates the widely used linear elastic systems (1) with damping B related in various ways to $A^{\alpha}$ $(1/2 \le \alpha \le 1)$, where $A^{\alpha}$ is the fractional power of the positive definite operator A for a real number $1/2 \le \alpha \le 1$. Some preliminary results are described in section 2 of the paper. The spectral property of the systems (1) associated with $\alpha \in [1/2,1]$ is discussed, and some new results are proved for the analyticity and exponential stability of the semigroup associated with the systems (1). The obtained results have many advantages in engineering applications.
93D20 Asymptotic stability of control systems
74B05 Classical linear elasticity
93C25 Control systems in abstract spaces
34G10 Linear ODE in abstract spaces
46C99 Inner product spaces, Hilbert spaces
47D03 (Semi)groups of linear operators
93C05 Linear control systems
|
{"url":"http://zbmath.org/?q=an:0644.93048","timestamp":"2014-04-20T20:57:05Z","content_type":null,"content_length":"22745","record_id":"<urn:uuid:f5f2c124-2f33-46ed-81ab-e07ddc0a7f13>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Write a program to display the Fibonacci series up to n terms
Fibonacci series: the first two Fibonacci numbers are 0 and 1, and each remaining number is the sum of the previous two,
e.g.: 0 1 1 2 3 5 8 13 21 ....
A number n is taken from the user and the series is displayed up to the nth term.
To use this code as-is, select and copy-paste it into a code.cpp file :)
#include <iostream>
using namespace std;

int main()
{
    long long a = 0, b = 1, c;
    int n;
    cout << "Enter the number of terms you wanna see: ";
    cin >> n;
    if (n >= 1) cout << a << " ";   // print the first two terms only
    if (n >= 2) cout << b << " ";   // when n is large enough
    for (int i = 1; i <= n - 2; i++)
    {
        c = a + b;   // each remaining term is the sum of the previous two
        a = b;
        b = c;
        // Coding by: Snehil Khanor
        // http://WapCPP.blogspot.com
        cout << c << " ";
    }
    return 0;
}
14 comments:
in FOR statement "i<=n" will come :)
if i<=n is taken then the no of terms displayed is 17...!
using namespace std;
class fib
int i,n,f1,f2,f3;
void get();
void put();
void fib::get()
cout<<"enter the no.";
void fib::put()
int main()
fib a1;
#define fibonacci
void main()
int a,b,c;
AM I CORRECT? BUT ITS NOT FOR N TERMS.. O/P S BASED ON VALUE OF C... PLS REPLY FOR ANY CORRECTION
can u specify #fibonacci jus like dat?? sorry if this is silly
instead of for(int i=1;i<=n-2;i++)
use for(int i=0;i<=n-2;i++)
its show perfect series
it shows correct answers.. thanks
the programs on this site are so useful for computer science students..........C program to convert given number of days into years,weeks and days
int main()
int a,b,c,n;
a=0; b=1;
cout<<”enter number of terms (n)=”;
cout<<Fibonacci series is”<<end1;
int count=3;
while (count<=n)
return 0;
can u plz post a program of fibonacci series using a another function
void fib(int n)
void main()
int n,a=0,b=1,c,i;
cout<<"enter the value of n";
cout<<"fibonacci series is=";
u r already printing the first two numbers
See this one.. little more advanced..
actually whoever the programmer is doesn't know anything related to programming... i am atleast a 100 times better programmer....
|
{"url":"http://wapcpp.blogspot.com/2009/07/write-program-to-display-fibonacci.html","timestamp":"2014-04-18T06:34:38Z","content_type":null,"content_length":"103728","record_id":"<urn:uuid:22cb0656-7543-49f3-86be-c1f20423f936>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Contents
1 Fundamental Suggestions
2 Multiple Methods of Solution
3 Drawing Inequalities
4 Long Multiplication
1 Fundamental Suggestions
Here are some hints on how to do basic math calculations.
These suggestions are intended to make your life easier. Some of them may seem like extra work, but really they cause you less work in the long run.
This is a preliminary document, i.e. a work in progress.
2 Multiple Methods of Solution
Here’s a classic example: The task is to add 198 plus 215. The easiest way to solve this problem in your head is to rearrange it as (215 + (200 − 2)) which is 415 − 2 which is 413. The small point is
that by rearranging it, a lot of carrying can be avoided.
One of the larger points is that it is important to have multiple methods of solution. This and about ten other important points are discussed in reference 4.
3 Drawing Inequalities
The classic "textbook" diagram of an inequality uses shading to distinguish one half-plane from the other. This is nice and attractive, and is particularly powerful when diagramming the relationship
between two or more inequalities, as shown in figure 2.
However, when you are working with pencil and paper, shading a region is somewhere between horribly laborious and impossible.
It is much more practical to use hatching instead, as shown in figure 3.
Obviously the hatched depiction is not as beautiful as the shaded depiction, but it is good enough. It is vastly preferable on cost/benefit grounds, for most purposes.
Some refinements:
• I recommend hatching the excluded half-plane, so that the solution-set remains unhatched rather than hatched. This is particularly helpful when constructing the conjunction (logical AND) of
multiple inequalities.
• I recommend using solid lines versus dashed lines to distinguish “≥” inequalities from “>” inequalities. If you’re going to hatch the excluded half-plane, use solid lines for the “>”
inequalities, to show that the boundary itself is excluded.
• As suggested in figure 4, for linear inequalities you can do the hatching faster and more accurately with the help of a ruler or straightedge, which makes it easy to ensure that none of the
hatches stray into the wrong region.
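Even a character grid is enough to check which half-plane you are hatching. A small Python sketch, following the refinements above; the symbols and the example inequality y ≥ x (or y > x) are my own choices, not part of the text:

```python
def hatch_diagram(size=6, strict=False):
    """Text sketch of a half-plane diagram for y >= x (y > x if strict).
    The EXCLUDED half-plane (y < x) is hatched with '/', the solution
    region stays blank, and the boundary y = x is drawn solid ('#')
    when it is included, dashed ('-') when the inequality is strict."""
    boundary = '-' if strict else '#'
    lines = []
    for y in range(size - 1, -1, -1):   # top row first (largest y)
        line = ''.join(
            boundary if y == x else ('/' if y < x else ' ')
            for x in range(size)
        )
        lines.append(line)
    return '\n'.join(lines)

print(hatch_diagram(5))
```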
4 Long Multiplication
4.1 Background
Short multiplication refers to any multiplication problem that consists of multiplying a one-digit number by another one-digit number. That means everything from 0×0 through 9×9.
Long multiplication refers to multiplying multi-digit numbers. We are about to discuss a highly efficient algorithm for doing this.
As a prerequisite for doing long multiplication, you need to be able to do short multiplication reliably.
• Your short multiplication doesn’t have to be fast. Reliability is more important than speed, so take your time.
• Your don’t need to have your short multiplication facts fully committed to memory at this stage. If necessary, write out a multiplication table and use that to help you do the required short
Speed will improve gradually over time. The short multiplication facts will become consolidated in your memory over time. Indeed, long multiplication gives you plenty of practice doing short
multiplication, so we can kill two birds with one stone.
4.2 Some Simple Examples
Let’s start by doing a super-simple example. Let’s multiply 432 by 1. You can do that in your head. The answer is written out in figure 5, in the standard form used for long multiplication problems.
Multiplication by 2 is almost as easy, as shown in figure 6:
Multiplication by 20 is also easy; all we need to do is shift the previous result over one place, as shown in figure 7:
Here’s a more interesting example. Let’s multiply 432 by 21. We can reduce the workload if we rewrite 21 as 20+1. We can express that as an equation:
21 = 20 + 1 (1)
So now we are multiplying 432×(20+1). The equation is:
432×21 = 432×(20+1) (2)
There are two ways of understanding where equation 2 comes from. One way is to start with equation 1 and multiply both sides by 432. Another way is to start with the left-hand side of equation 2 and
replace the 21 by something that is equal to it; in other words, to do a substitution.
We can apply the distributive law to the right-hand side of equation 2. That gives us:
432×21 = 432×(20+1) (3a)
= 432×20 + 432×1 (3b)
Both of the multiplications in equation 3b are easy, and indeed we have already done them. Plugging in the results of these multiplications gives us:
432×21 = 432×(20+1) (4a)
= 432×20 + 432×1 (4b)
= 8640 + 432 (4c)
So at this point, there’s no more multiplying to be done. All that remains is a simple addition problem, as shown in figure 8. The multiplication problem shown in gray is exactly equivalent to the
addition problem shown in black.
Note that we can get by without writing the zero at the end of line d in the figure. The bottom-line result is the same, whether we write this zero or not, provided we keep things properly lined up
in columns, as shown in figure 9. In the number system we are using, the Hindu number system, we normally think of zeros like this as being necessary to indicate place value. However, that’s not
strictly true. You don’t need to write the zeros, provided you have some other way of keeping track of place value.
You are allowed to write the zero (as in figure 8) if that’s convenient, but you are also allowed to skip it (as in figure 9) if that’s more convenient.
Long multiplication depends on lining stuff up in columns, as a way of keeping track of place value. Therefore, when learning to do long multiplication, use graph paper. This will help you keep
things properly lined up. Tidiness counts for a lot when doing long multiplication.
After you’ve done hundreds of these things, and have learned to be super-tidy, you can do without the graph paper, but until then you’ll save yourself a lot of grief if you use graph paper. The best
thing is to get a lab book, which is a bound book containing blank graph paper on every page. Such things are cheap and readily available from office-supply stores. The second-best thing is to use
graph paper in pad form, or perhaps loose-leaf form. If you have a printer attached to your computer, there are apps that will print graph paper for you, in any imaginable style.
Note that numbers tend to be taller than they are wide. For this reason, rather than writing every row of numbers on the lines of the graph paper, it looks nicer if each row of numbers is 1½ lines
below the previous one. You can see how this works in the figures in this section.
The principle here is that the graph paper is supposed to help you, not hobble you. If using the vertical lines is helpful, use the vertical lines. If using the horizontal lines in the obvious
way is not optimal, use them in some less-obvious way.
There are always lots of people trying to make you color inside the lines, but any kind of originality, artistry, or creativity requires coloring outside the lines. Math, if it’s done right,
involves lots of creativity. Don’t be afraid to color outside the lines.
Let’s do one more example. Let’s multiply 432 by 211. We use the same trick as before: We pick 211 apart, digit by digit. That is, we write 211 = 200 + 10 + 1. Multiplying by 200 is easy; just
multiply by 2, and then shift the result two decimal places. The whole procedure can be carried out very efficiently, if you organize it properly, as shown in figure 10.
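The pick-apart trick can be cross-checked in a few lines of Python. This is just an illustration of the bookkeeping; the function name is invented for the sketch:

```python
# Multiply by picking apart the second factor digit by digit:
# 211 = 200 + 10 + 1, so 432*211 = 432*2 shifted two places,
# plus 432*1 shifted one place, plus 432*1.
def multiply_by_decomposition(a, b):
    total = 0
    shift = 0                      # place value of the current digit of b
    while b > 0:
        digit = b % 10             # low-order digit of b
        total += a * digit * 10**shift
        b //= 10                   # discard that digit
        shift += 1
    return total

parts = [432 * 2 * 100, 432 * 1 * 10, 432 * 1 * 1]   # the three pieces
```

Each element of parts corresponds to one row of figure 10, and their sum is the answer.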
4.3 The Recipe
The recipe for doing long multiplication is based on simple ideas:
1. Break the given factors down into their individual digits.
2. Multiply everything on a digit-by-digit basis. This requires doing a lot of short-multiplication problems.
3. Add up all the products, with due regard for place value.
We will mostly use the same approach as we used in section 4.2. The only difference is that this time we will need to pick apart both of the factors into their individual digits. (Previously we only
picked apart the second factor.)
As mentioned in section 4.2, the rationale for doing things this way is based on the distributive law and the axioms of the number system. This is discussed in more detail in section 4.7.
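The three steps of the recipe translate almost word for word into a short Python sketch. The document itself works everything by hand; this is just a machine check of the same idea:

```python
def long_multiply(a, b):
    """The recipe: (1) break both factors into digits, (2) form every
    single-digit product with its inherited place value, (3) add them up."""
    a_digits = [int(d) for d in str(a)]
    b_digits = [int(d) for d in str(b)]
    partials = []
    for i, bd in enumerate(reversed(b_digits)):       # i = place value of bd
        for j, ad in enumerate(reversed(a_digits)):   # j = place value of ad
            partials.append(bd * ad * 10**(i + j))    # one short multiplication
    return sum(partials)                              # all the adding, done last
```

Notice that all the multiplying happens before any of the adding, which matches the organization of the tableau below.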
There is an easy and reliable method for arranging the calculations. We illustrate the method using the question shown in figure 11.
The first step is to write down the question. It is important to line up the numbers as shown, so that the ones’ place of the top number lines up with the ones’ place of the bottom number, and so on
for all the other digits. Keeping things aligned in columns is crucial, since the columns represent place value. As mentioned in item 1, tidiness pays off.
If one of the factors is longer than the other, it will usually be more convenient to place the longer one on top. That’s not mandatory, but it makes the calculation slightly more compact.
You may omit the multiplication sign (×) if it is obvious from context that this is a multiplication problem (as opposed to an addition problem).
We proceed digit by digit, starting with the rightmost digit in the bottom factor, and proceeding systematically right-to-left. We do things in order of increasing place value, right to left. That
makes sense, especially when doing additions, because a carry can affect the next column (to the left), but never the previous column. This is opposite to the direction of reading English text, which
is unfortunate, but there’s nothing we can do about it.
So, let’s look at the rightmost digit in the bottom factor in our example. It’s a 1. We multiply this digit into each digit of the other factor, digit by digit, working systematically right-to-left
across the top factor.
We place these results in order in row c, with due regard for place value. The 1×7 result goes in the ones’ place, the 1×6 result goes in the tens’ place, and the 1×5 result goes in the hundreds’
place, and so forth, as shown in row c in figure 12. Actually, multiplication by 1 is so easy that you can just copy the whole number 4567 into row c without thinking about it very hard. So far, this
is just like what we did in section 4.2.
Note: You may wish to leave a little bit of space above the numbers in row c, for reasons that will become apparent later.
Also note: In all the figures in this section, anything shown in black is something you actually write down, whereas anything shown in color is just commentary, put there to help us get
through the explanation the first time.
That is all we need to do with the low-order digit of the bottom factor. We now move on to the next digit of the bottom factor. In this case it is a 2.
At this point in this example, things get interesting, because some of the short-multiplication results are two digits long. There is no way to write all of them in a single row. Fortunately, we are
allowed to use multiple rows. One possibility is to spread things out as shown in figure 13. We write 2×7=14 in row d, and then write 2×6=12 on row e, and so on.
It is theoretically possible to write the product 4567×2 on a single row, but this would require doing a bunch of additions along the way, so we choose not to do that. We are going to do all the short
multiplications first, and then do all the additions later. This requires slightly more paper, but it is vastly easier to do, easier to understand, and easier to check.
As previously mentioned, the procedure is to pick a digit from the lower factor, and multiply this digit into each digit of the upper factor. The result of the first such multiplication is 2×7=14,
which we place in row d. Alignment is crucial here. The 14 must be aligned under the 2 as shown. That’s because it “inherits” the place value of the 2.
Next we multiply 2×6=12. You might be tempted to write this in row d, but there is no room for it there, so it must go on row e, as shown in the figure. Again alignment is critical. This product is
shifted one place to the left of the previous product, because it inherits additional place value. It inherits some from the 2 (in the bottom factor) and some from the 6 (in the top factor). Since we
are working systematically right-to-left, you don’t need to think about this too hard; just remember that each of these short-multiplication products must be shifted one place to the left of the previous
one, as suggested by the diagonal dotted blue line in figure 13.
Let me explain the color-coded backgrounds. In row d, the 14 has a yellow background, because it is derived from the 7 in row a, which has a yellow background. Similarly the 12 in row e has a pink
background, because it is derived from the 6 in row a, which has a pink background.
Although figure 13 is good for illustrating the principles involved, it is unnecessarily spread out. It would be better to write things more compactly, as shown in figure 14, because this makes it
easier to keep things properly aligned. (The objective here is not to save paper. Remember, paper is cheap, and trying to save paper is almost always a bad idea. The advantage of the compact
representation is that it makes it easier to keep things lined up in columns.)
Note the pattern of placing successive short-multiplication results on alternating rows. There is guaranteed to be enough room to do this, because the product of two one-digit numbers can never have more
than two digits.
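The guarantee mentioned above is easy to verify exhaustively in one line of Python:

```python
# The worst case for a single-digit product is 9*9 = 81,
# which still fits in two digits.
max_product = max(i * j for i in range(10) for j in range(10))
```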
We are now more than half-way finished.
At this point (or perhaps earlier), if you are not using grid-ruled paper, you should lightly sketch in some vertical guide lines. The tableau has become large enough that there is some risk of
messing up the alignment, i.e. putting things into the wrong columns, if you don’t put in guide lines.
That’s all for the “2” digit in the bottom factor. We now proceed to the “3” digit. We multiply this by each of the digits in the top factor, in the now-familiar fashion. All the short-multiplication
results from this step are put in rows f and g.
At this point, all of the short multiplications are complete. In figure 15, you can see where everything comes from. The color-coded background indicates which digit of the upper factor was involved,
and the row indicates which digit of the bottom factor was involved.
All that remains is a big addition problem, adding up rows c through g inclusive. Draw a line under row g as shown in figure 16, and start adding.
• You can use the space above row c to keep track of carries, if you wish.
• On the other hand, writing the carries is not mandatory. There are never very many carries, and you don’t need to remember them for very long, so keeping track of them is easy, no matter how you
do it. Personally, I prefer not writing the carries. That’s because whenever I’m adding a tall column of numbers, I do it twice: once from top to bottom, and once from bottom to top. This is a
good way to double-check the results. When adding bottom-to-top, the carry gets added at the bottom, and having it written at the top creates more problems than it solves. Figure 17 shows an
example of addition with unwritten carries.
In any case, the result of the big addition is the result of the overall multiplication problem, as shown on row h of figure 16.
4.4 Keeping Things Properly Lined Up
Let’s be super-explicit about the rules for keeping things lined up in columns.
It pays to think of the short-multiplication products in groups. The groups are labeled in figure 16. Row c is the “by one” group. Rows d and e form the “by twenty” group. Rows f and g form the “by
three hundred” group.
The first rule is that each group is shifted over one place relative to the group above it. For example, the rightmost digit in row d is shifted one place relative to the rightmost digit on row c.
Similarly, the rightmost digit in row f is shifted one place relative to the rightmost digit in row d.
This rule has a simple explanation: The groups inherit place value from the digit in the lower factor that gave rise to the group.
As another way of expressing the same rule: Each group is lined up under the digit in the lower factor that gave rise to the group. For example, in figure 16, the group on lines f and g is lined up
under the 3 in the lower factor. Similarly, the 8 on line e is shifted over one place relative to the 10 on line d.
The second rule is that within each group, each short-multiplication product is shifted over one place relative to the previous product in the group. For example, in figure 16, the 12 on line e is
shifted over one place relative to the 14 on line d.
This rule also has a simple explanation: Each of the products inherits some place value from the digit in the upper factor that gave rise to the product.
4.5 Another Example
Let’s do another example, as shown in figure 17, which illustrates one more wrinkle.
This example calls attention to the situation where some of the short-multiplication products have one digit, while others have two. You can see this on row c of the figure, where we have 3×3=9 and
3×8=24. In most cases it is safer to pretend they all have two digits, which is what we have done in the figure, writing 9 as 09. Similarly on line d we write 6 as 06. This makes the work fall into a
nice consistent pattern. It helps you keep things lined up, and makes the work easier to check.
In some cases, such as 345×1 or 432×2, all the short-multiplication products have one digit, so you can write them all on a single line, if you wish. This is less work, and more compact. On the
other hand, remember the advice in item 3: paper is cheap. You may find it helpful to write the short-multiplication products as two digits even if you don’t have to.
If even one of the products (other than the leftmost one) requires two digits, you will almost certainly need two rows anyway, so you might as well get in the habit of using two rows.
Mathematically speaking, writing one-digit products as two digits is unconventional, but it is entirely correct. It has the advantage of making the algorithm more systematic, and therefore easier to check.
In any case, the result of the addition is the result of the overall multiplication problem, as shown on row g of figure 17. That’s all there is to it.
4.6 Discussion
This algorithm ordinarily uses two rows of intermediate results for each digit in the bottom factor. In special cases, we can get by with a single row.
Another hallmark of this algorithm is that we first do all the short multiplying, and then do all of the adding. Every time we do a short multiplication, we just write down the answer. This has
several major advantages:
1. It makes the short multiplications go faster. When you are multiplying, you are just multiplying. You don’t need to worry about adding. There are no carries to keep track of.
2. There are fewer things to go wrong.
3. It makes it easy to check your work. You can re-do each of the multiplications and check to see that you get the same answer.
Conversely, if any of the short-multiplication results looks suspicious, you can easily figure out where it came from, and re-check the multiplication.
This differs from the old-fashioned “textbook” approach, which uses only one row per digit, as shown below. The old-fashioned approach supposedly uses less paper – but the advantage is slight at
best, and if you allow room for keeping track of “carries” throughout the tableau, the advantage becomes even more dubious. Besides, paper is cheap, so it is penny-wise and pound-foolish to save paper at
the cost of making the algorithm more complex, more laborious, and less reliable.
The old-fashioned approach may look more compact, but it involves more work. You have to do the same number of short multiplications, and a greater number of additions. It requires you to do
additions (including carries) as you go along.
Last but not least, it makes it much harder to check your work.
Remember, paper is cheap (item 3) and it is important to be able to check your work (item 7).
4.7 Conceptual Foundation
The long-multiplication algorithm as presented here is guaranteed to work. That’s because it is quite directly based on the fundamental axioms of arithmetic, including the axioms of the number system.
Consider the numbers 4567 and 321 which appeared in section 4.3. We can expand 4567 as 4000+500+60+7. That’s what 4567 means, according to the rules of the Hindu number system. Similarly, we can
expand 321 as 300+20+1.
The reason for expanding things this way is that in the sum 4000+500+60+7, each term consists of a single non-zero digit, with some place value.
The next step is to expand the product 4567×321 as (4000+500+60+7)×(300+20+1). We can then carry out the product in the expanded form, term by term, by repeated application of the distributive law.
In our example, there are four terms in the first factor and three terms in the second factor, which give us 12 terms in the product, as shown in table 1. Each term in the product consists of a
single-digit multiplication – a short multiplication – with some place value.
So, the main thing the algorithm needs to do is to help you write down these 12 numbers in a convenient, systematic way.
One convenience is that the algorithm saves you from writing down all those zeros. You pay a small price for this. The price is that you absolutely must keep the numbers properly lined up in columns.
The rules about how to write the numbers in columns are not magic; they are just a way of keeping track of place value. They are a direct consequence of the axioms of arithmetic, including the axioms
of the number system.
In particular, the “12” that appears in the lower-left element of table 1 is the same “12” that appears at the left of row g in figure 16 ... and the latter is lined up in columns so as to give it
the correct place value.
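The expansion into 12 terms can be checked mechanically. Here is a Python sketch (the helper name is invented for illustration):

```python
def place_value_terms(n):
    """Expand a numeral into single-digit terms carrying their place value,
    e.g. 4567 -> [4000, 500, 60, 7]."""
    s = str(n)
    return [int(d) * 10**(len(s) - 1 - i) for i, d in enumerate(s)]

terms_a = place_value_terms(4567)    # four terms
terms_b = place_value_terms(321)     # three terms
# The distributive law gives 4 x 3 = 12 terms in the product:
table = [ta * tb for ta in terms_a for tb in terms_b]
```

Summing the 12 entries reproduces the long-multiplication answer, and 4000×300 = 1,200,000 is the “12” with its place value restored.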
4.8 Terminology: Product, Factor, Multiplier, and Multiplicand
The word “multiplicand” is more-or-less Latin for “the thing being multiplied”.
In figure 11, some people insist that the top number should be called the “multiplier” and the bottom number should be called the “multiplicand”. Meanwhile, other people insist on naming them the
other way around. Quite commonly, they use the words in ways inconsistent with the supposed definition.
All of this is nonsense. Multiplication is commutative. Pretending to see a distinction between the two numbers is the silliest thing since the holy war between the Big-Endians and the Little-Endians.
Furthermore, multiplication is associative, so we could easily have three or more things being multiplied, as in 12×32×65×99, in which case it is obviously hopeless to attempt to distinguish “the”
multiplier from “the” multiplicand.
The modern practice is to call both of the numbers in figure 11 the factors. You can equally well call both of them multipliers, or call both of them multiplicands. Along the same lines, modern
practice is to say that an expression such as 12×32×65×99 is a product of four factors.
4.9 Terminology: Times versus Multiplied By
There are also holy wars as to whether 4567×321 means 4567 “times” 321, or 4567 “multiplied by” 321. The distinction is meaningless. Don’t worry about it.
5 Long Division
The usual “textbook” instructions for how to do long division are both unnecessarily laborious and unnecessarily hard to understand. There’s another way to organize the calculation that is much less
mysterious and much less laborious (especially when long multi-digit numbers are involved).
Note that as discussed above, keeping things lined up in columns is critical. It may help to use grid-ruled paper, or at least to sketch in some guidelines.
5.1 The Algorithm: Using a Crib
Suppose the assignment is to divide 7527 by 13. After writing down the statement of the problem, and before doing any actual dividing, it helps to make a crib, as shown in figure 18. This is just a
multiplication table, showing all multiples of the assigned divisor, which is 13 in our example.
It is super-easy to construct such a table, since no multiplication is required. Successive addition will do the job. We need all the multiples from ×1 to ×9, but you might as well calculate the ×10
row, by adding 117+13. This serves as a valuable check on all the additions that have gone before.
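Building the crib really does require nothing but addition, as the following sketch shows (the function name is invented for illustration):

```python
def make_crib(divisor, rows=10):
    """Table of multiples built by successive addition -- no multiplying.
    Row 10 serves as a check on all the additions that went before."""
    crib = {0: 0}
    for k in range(1, rows + 1):
        crib[k] = crib[k - 1] + divisor   # each row is the previous row plus 13
    return crib

crib = make_crib(13)
```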
We now begin the main part of the algorithm. The first step is to consider the leading digit of the dividend (which in this case is a 7). This is less than the divisor (13). There is no way we can
subtract any nonzero multiple of 13 from 7. Therefore we leave a blank space above the 7 and proceed to the next digit. We could equally write a zero above the 7, as shown in gray in figure 20, but
that’s just extra work and confers no lasting advantage, since leading zeros are meaningless.
Now consider the first two digits together, namely the 7 and the 5. Look at the crib to find the largest entry less than or equal to 75. It is 65, as in 5×13=65. Copy this entry to the appropriate
row of the division problem, namely row b. Align it with the 75 on the previous row. Since this is 5 times the divisor, write a 5 on the answer line, aligned with the 75 and the 65, as shown. Check
the work, to see that 5 (on the answer line) times 13 (the divisor, on line a), equals 65 (on line b).
Now do the subtraction, namely 75−65=10, and write the result on line c as shown.
When we take place value into account, the 5 in the quotient is short for 500, and the 65 that we just subtracted is short for 6500. Normally we would need to write the zeros explicitly as part of
these numbers, to indicate place value. However, in this situation we are using columns to keep track of place value. If we keep everything properly lined up, we don’t need to write the zeros. In
fact, the zeros would just get in the way (especially in the quotient) so we are better off not writing them.
We now shift attention to the situation shown in figure 21. Bring down the 2 from the dividend, as shown by the red arrow. So now the number we are working on is 102, on line c.
The steps from now on are a repeat of earlier steps.
Look at the crib to find the largest entry less than or equal to 102. It is 91, as in 7×13=91. Copy this entry to the division problem, on row d, directly under the 102. Since this came from
row 7 of the crib, write a 7 on the answer line, aligned with the 102 and the 91, as shown. Check the work, to see that 7 (on the answer line) times 13 (the divisor, on line a), equals 91 (on
line d). Do the subtraction.
As a check on the work, when doing this subtraction, the result should never be less than zero, and should never be greater than or equal to the divisor. Otherwise you’ve used the wrong line from
the crib, or made an arithmetic error.
We now shift attention to the situation shown in figure 22. Bring down the 7, look in the crib to find the largest entry less than or equal to 117, which is in fact 117, as in 9×13=117. Since this
came from line 9 of the crib, write a 9 on the answer line, properly aligned.
The final subtraction yields the remainder on line g. The remainder is zero in this example, because 13 divides 7527 evenly.
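The whole crib-based procedure can be sketched in Python. This is a plain transcription of the steps above, not the author's notation:

```python
def long_divide(dividend, divisor):
    """Crib-based long division: at each step, look up the largest crib
    entry that fits, and record its row number as the next quotient digit."""
    crib = {k: k * divisor for k in range(10)}
    quotient_digits = []
    working = 0
    for ch in str(dividend):
        working = working * 10 + int(ch)                     # bring down a digit
        q = max(k for k in range(10) if crib[k] <= working)  # table lookup
        quotient_digits.append(q)
        working -= crib[q]                                   # the subtraction
        assert 0 <= working < divisor                        # built-in sanity check
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, working                                 # what's left is the remainder
```

The int() call discards any leading zeros in the quotient, matching the blank space left above the 7 in figure 20.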
5.2 Remarks
The crib plays several important roles.
Perhaps the crib’s most important advantage, especially when people are first learning long division, is that the crib removes the mystery and the guesswork from the long division process. This is in
contrast to the “trial” method, where you have to guess a quotient digit, and you might guess wrong. Using the crib means we never need to do a short division or trial division; all we need to do is
skim the table to find the desired row.
We have replaced trial division by multiplication and table-lookup. Actually we didn’t even need to do any multiplication, so it would be better to say we have replaced trial division by addition.
Another advantage is efficiency, especially when the dividend has many digits. That’s because you only need to construct the crib once (for any given divisor), but then you get to use it again and
again, once for each digit of the dividend. For long dividends, this saves a tremendous amount of work. (This is not a good selling point when kids are just learning long division, because they are
afraid of big multi-digit dividends.) Setting up the crib is so fast that you’ve got almost nothing to lose, even for small dividends.
Another advantage is that it is easy to check the correctness of the crib. It’s just sitting there begging to be checked.
When bringing down a digit, you may optionally bring down all the digits. For instance, in figure 21, if you bring down all the digits you get 1027 on row c. One advantage is that 1027 is a
meaningful number, formed by the expression 7527−13×5×100. This shows how the steps of the algorithm (and the intermediate results) actually have mathematical meaning; we are not blindly
following some mystical mumbo-jumbo incantation. I recommend that if you are trying to understand the algorithm, you should bring down all the digits a few times, at least until you see how
everything works.
A small disadvantage is that bringing down all the digits requires more copying. The countervailing small advantage is that it may help keep the digits lined up in their proper columns. Whether
the advantages outweigh the cost is open to question, and probably boils down to a question of personal preference.
Another remark: Division is the “inverse function” of multiplication. In a profound sense, for any function that can be tabulated, you can construct the inverse function – if it exists – by switching
columns in the table. That is, we interchange abscissa and ordinate: (x,y)↔(y,x). That’s why we are able to perform division using what looks like a multiplication table; we just use the table backwards.
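In Python terms, switching columns is just inverting a dictionary:

```python
# Tabulate y = 13*x, then invert by swapping columns: (x, y) -> (y, x).
multiples_of_13 = {k: 13 * k for k in range(10)}
division_table = {v: k for k, v in multiples_of_13.items()}
```

Looking up 91 in the inverted table answers 91 ÷ 13 directly.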
6 Hindu Numeral System
The modern numeral system is based on place value. As we understand it today, each numeral can be considered a polynomial in powers of b, where b is the base of the numeral system. For decimal
numerals, b=10. As an example:
2468 = 2 b^3 + 4 b^2 + 6 b^1 + 8 b^0 (5)
= 2000 + 400 + 60 + 8
This allows us to understand long multiplication (section 4) in terms of the more general rule for multiplying polynomials.
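Equation 5 can be checked mechanically. A minimal sketch (helper name invented for illustration):

```python
def digits_low_to_high(n, b=10):
    """Coefficients of the place-value polynomial, low order first."""
    coeffs = []
    while n > 0:
        coeffs.append(n % b)   # coefficient of b**k
        n //= b
    return coeffs

coeffs = digits_low_to_high(2468)                      # [8, 6, 4, 2]
value = sum(c * 10**k for k, c in enumerate(coeffs))   # evaluate the polynomial
```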
7 Multiplying Multi-Term Expressions
Given two expressions such as (a+b+c) and (x+y), each of which has one or more terms, the systematic way to multiply the expressions is to make a table, where the rows correspond to terms in the
first expression, and the columns correspond to terms in the second expression:
  |  x   y
a | ax  ay
b | bx  by (6)
c | cx  cy
(a+b+c) · (x+y) = ax + ay + bx + by + cx + cy
In the special case of multiplying a two-term expression by another two-term expression, the mnemonic FOIL applies. That stands for First, Outer, Inner, Last. As shown in figure 23, we start with the
First contribution, i.e. we multiply the first term in each of the factors. Then we add in the Outer contribution, i.e. the first term from the first factor times the last term from the second
factor. Then we add in the Inner contribution, i.e. the last term from the first factor times the first term from the second factor. Finally we add in the Last contribution, i.e. we multiply the last
terms from each of the factors.
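The table in equation 6 is just a double loop. Here is a numerical spot-check, with sample values chosen arbitrarily:

```python
from itertools import product

def expand(terms1, terms2):
    """Multiply two multi-term expressions term by term, as in the table."""
    return [t1 * t2 for t1, t2 in product(terms1, terms2)]

# FOIL on (a+b)(x+y), checked with sample numbers:
a, b, x, y = 2, 3, 5, 7
first, outer, inner, last = a * x, a * y, b * x, b * y
```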
8 Square Roots
8.1 Newton’s Method
If you can do long division, you can do square roots.
Most square roots are irrational, so they cannot be represented exactly in the decimal system. (Decimal numerals are, after all, rational numbers.) So the name of the game is to find a decimal
representation that is a sufficiently-accurate approximation.
We start with the following idea: For any nonzero x we know that x÷√x is equal to √x. Furthermore, if s[1] is greater than √x it means x/s[1] is less than √x. If we take the average of these two
things, s[1] and x/s[1], the average is very much closer to √x. So we set

s[n+1] = (s[n] + x/s[n]) / 2 (7)

and then iterate. The method is very powerful; the number of digits of precision doubles each time. It suffices to use a rough estimate for the starting point, s[1]. In particular, if you are seeking
the square root of an 8-digit number, choose some 4-digit number as the s[1]-value.
This is a special case of a more general technique called Newton’s method, but if that doesn’t mean anything to you, don’t worry about it.
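Here is the iteration in Python, started from a deliberately rough guess:

```python
def newton_sqrt(x, s, iterations=5):
    """Average s and x/s repeatedly; the number of correct digits
    roughly doubles on every pass."""
    for _ in range(iterations):
        s = (s + x / s) / 2
    return s

root = newton_sqrt(12345678, 3000.0)   # rough 4-digit starting estimate
```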
8.2 First-Order Expansion
Note that the square of 1.01 is very nearly 1.02. Similarly, the square of 1.02 is very nearly 1.04. Turning this around, we find the general rule that if x gets bigger by two percent, then √x gets
bigger by one percent ... to a good approximation.
We can illustrate this idea by finding the square root of 50. Since 50 is 2% bigger than 49, the square root of 50 is 1% bigger than 7 ... namely 7.07. This is a reasonably good result, accurate to
better than 0.02%.
If we double this result, we get 14.14, which is the square root of 200. That is hardly surprising, since we remember that the square root of 2 is 1.414, accurate to within roundoff error.
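A two-line check of the √50 example:

```python
import math

# 50 is 2% bigger than 49, so sqrt(50) is about 1% bigger than 7:
approx = 7 * 1.01                                    # = 7.07
rel_error = abs(approx - math.sqrt(50)) / math.sqrt(50)
```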
9 Simple Trig Functions
Sine and cosine are transcendental functions. Evaluating them will never be super-easy, but it can be done, with reasonably decent accuracy, with relatively little effort, without a calculator.
In particular:
• You can always start with a zeroth-order approximation: For angles near zero, the sine will be near zero. For angles near 90^∘, the sine will be near 1. For angles near 30^∘, the sine will be
near 0.5, et cetera.
• You can draw a graph, using the anchor points in equation 8 as a guide. You can then use graphical interpolation to obtain values for any angle.
• A simple Taylor series gives a result accurate to 2.1% or better using only a couple of multiplications. Remember to express the angle in radians before using these formulas.
The following facts serve to “anchor” our knowledge of the sine and cosine:
sin(0^∘) = 0 = cos(90^∘) (8a)
sin(30^∘) = ½ = 0.5 = cos(60^∘) (8b)
sin(45^∘) = √½ = 0.70711 = cos(45^∘) (8c)
sin(60^∘) = ½√3 = 0.86603 = cos(30^∘) (8d)
sin(90^∘) = 1 = cos(0^∘) (8e)
Actually, that hardly counts as “remembering” because if you ever forget any part of equation 8 you should be able to regenerate it from scratch. The 0^∘ and 90^∘ values are trivial. The 30^∘ value
comes from a simple geometric construction. Then the 60^∘ and 45^∘ values are obtained via the Pythagorean theorem. The value for 45^∘ should be particularly easy to remember, since √2 = 1.414 and √½ = ½√2.
The rest of this section is devoted to the Taylor series. A low-order expansion works well if the point of interest is not too far from the nearest anchor.
1. For angles between −10^∘ and +10^∘, the approximation sin(x)≈x is accurate to better than 0.51%. This is a one-term Taylor series. Let’s call it the Taylor[1] approximation. It is super-easy to
evaluate, since it involves no additions and no multiplications, or at most one multiplication if we need to convert to radians from degrees or some other unit of measure. See equation 9c and the
blue line near x=0 in figure 24.
2. For angles from 25^∘ to 65^∘, all the points are within a few degrees of one of the anchor points in equation 8. This means the first-order Taylor series is accurate to better than 1 percent in
this region. We can call this the Taylor[0,1] approximation. It requires knowing the sine and the cosine at the nearest anchor point ... which we do in fact know from equation 8. See equation 13
and the blue line in figure 24.
Figure 24: Piecewise 1st-Order Taylor Approximation to Sine
3. For a rather broad range of angles near the top of the sine, from 65^∘ to 115^∘, the approximation sin(x)≈1−x^2/2 is accurate to better than half a percent. This is a second-order Taylor series
with only two terms, because the linear term is zero. Let’s call this the Taylor[0,2] approximation. See equation 9e and the green line near x=90^∘ in figure 24.
4. This leaves us with a region from 10^∘ to 25^∘ that requires some special attention. Options include the following:
For most purposes, the best option is to use the Taylor[1,3] approximation anchored at zero. This requires a couple more multiplications, but the result is accurate to better than 0.07%.
If you really want to minimize the number of multiplications, we can start by noting that the Taylor[1] extrapolation coming up from zero is better than the Taylor[0,1] extrapolation coming down
from 30^∘, so rather than using the closest anchor we use the 0^∘ anchor all the way up to 20^∘ and use the 30^∘ anchor above that. This has the advantage of minimizing the number of
multiplications. Disadvantages include having to remember an obscure fact, namely the need to put the crossover at 20^∘ rather than halfway between the two anchors. The accuracy is better than
2.1%, which is not great, but good enough for some applications. The error is shown in figure 25.
There are many other options, but all the options I know of involve either more work or less accuracy.
The spreadsheet that produces these figures is given by reference 5.
Here are some additional facts that are needed in order to carry out the calculations discussed here.
1^∘ = 17.4532925 milliradian (9a)
1 radian = 57.2957795^∘ (9b)
sin(x) ≈ x for small x, measured in radians (9c)
cos(x) = sin(π/2 − x) for all x (9d)
≈ 1 − x^2/2 for small x, measured in radians (9e)
Last but not least, we have the Pythagorean trig identity:

sin^2(x) + cos^2(x) = 1 (10)

and the sum-of-angles formula:
sin(a + b) = sin(a)cos(b) + cos(a)sin(b) (11)
If you can maintain even a vague memory of the form of equation 11, you can easily reconstruct the exact details. Use the fact that it has to be symmetric under exchange of a and b (since addition is
commutative on the LHS). Also it has to behave correctly when b=0 and when b=π/2.
If we assume b is small and use the small-angle approximations from equation 9, then equation 11 reduces to the second-order Taylor series approximation to sin(a+b).
sin(a + b) ≈ sin(a) + cos(a) b − sin(a) b^2/2 for small b, measured in radians (12)
If we drop the second-order term, we are left with the first-order series, suitable for even smaller values of b:
sin(a + b) ≈ sin(a) + cos(a) b for smaller b, measured in radians (13)
You can use the Taylor series to interpolate between the values given in equation 8. Since every angle in the first quadrant is at least somewhat near one of these values, you can find the sine of
any angle, to a good approximation, as shown in figure 24.
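The anchored first-order scheme of equation 13 is easy to test numerically. Here is a sketch using the simple nearest-anchor rule (the 20° crossover refinement mentioned earlier is omitted):

```python
import math

ANCHORS_DEG = [0, 30, 45, 60, 90]   # the anchor points of equation 8

def sin_taylor01(x_deg):
    """First-order Taylor series anchored at the nearest anchor point:
    sin(a + b) ~ sin(a) + cos(a)*b, with b in radians (equation 13)."""
    a_deg = min(ANCHORS_DEG, key=lambda a: abs(a - x_deg))
    a = math.radians(a_deg)
    b = math.radians(x_deg - a_deg)
    return math.sin(a) + math.cos(a) * b

# Worst relative error over the 25..65 degree range discussed in the text:
worst = max(abs(sin_taylor01(d) - math.sin(math.radians(d)))
            / math.sin(math.radians(d)) for d in range(25, 66))
```

Checking whole-degree angles confirms the better-than-1-percent claim for this range.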
10 References
|
{"url":"http://www.av8n.com/physics/math-hints.htm","timestamp":"2014-04-20T23:28:02Z","content_type":null,"content_length":"111859","record_id":"<urn:uuid:7a93cd51-75f2-453d-af38-ff98aa05c8af>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Providence, RI
Providence, RI 02912
Mathematics Tutor
I recently completed my undergraduate studies in pure
ematics at Brown University. I am available as a tutor for pre-algebra, algebra I, algebra II, geometry, trigonometry, pre-calculus, calculus I, II, and III, SAT preparation, and various other
Offering 10+ subjects including algebra 1, algebra 2 and calculus
|
{"url":"http://www.wyzant.com/geo_Providence_RI_Math_tutors.aspx?d=20&pagesize=5&pagenum=1","timestamp":"2014-04-17T19:12:46Z","content_type":null,"content_length":"61288","record_id":"<urn:uuid:141202e0-fce4-4923-b764-629e54aa60b2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maximizing the Number of Broadcast Operations in Random Geometric Ad Hoc Wireless Networks
February 2011 (vol. 22 no. 2)
pp. 208-216
We consider static ad hoc wireless networks whose nodes, equipped with the same initial battery charge, may dynamically change their transmission range. When a node v transmits with range r(v), its battery charge is decreased by βr(v)², where β > 0 is a fixed constant. The goal is to provide a range assignment schedule that maximizes the number of broadcast operations from a given source (this number is called the length of the schedule). This maximization problem, denoted Max LifeTime, is known to be NP-hard, and the best known algorithm yields worst-case approximation ratio Θ(log n), where n is the number of nodes of the network. We consider random geometric instances formed by selecting n points independently and uniformly at random from a square of side length √n in the Euclidean plane. We present an efficient algorithm that constructs a range assignment schedule whose length is not smaller than 1/12 of the optimum with high probability. We then design an efficient distributed version of the above algorithm, in which nodes initially know only n and their own positions. The resulting schedule guarantees the same approximation ratio achieved by the centralized version, thus yielding the first distributed algorithm with provably good performance for this problem.
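As a rough sketch of the energy model described above (this is not the paper's algorithm; the function and the example schedule are hypothetical), a fixed range assignment can be repeated until some transmitting node exhausts its battery:

```python
def schedule_length(battery, beta, ranges):
    """How many broadcasts a fixed range assignment supports.
    Each round drains beta * r(v)**2 from every node v that
    transmits with range r(v) > 0; `ranges` maps node -> range."""
    costs = [beta * r ** 2 for r in ranges.values() if r > 0]
    if not costs:
        return 0
    # The schedule ends when the most expensive transmitter runs dry.
    return min(int(battery // c) for c in costs)

# Hypothetical 4-node example: nodes a, b, c relay; d only listens.
print(schedule_length(100.0, 1.0, {"a": 1.0, "b": 2.0, "c": 3.0, "d": 0.0}))  # -> 11
```

The paper's contribution is choosing (and varying) the assignment so that this count is provably within a constant factor of the optimum.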
[1] C. Ambuehl, "An Optimal Bound for the MST Algorithm to Compute Energy Efficient Broadcast Trees in Wireless Networks," Proc. Int'l Colloquium Automata, Languages and Programming (ICALP '05), pp.
1139-1150, 2005.
[2] R. Bar-Yehuda, O. Goldreich, and A. Itai, "On the Time-Complexity of Broadcast in Multi-Hop Radio Networks: An Exponential Gap between Determinism and Randomization," J. Computer and System
Sciences (JCSS), vol. 45, pp. 104-126, 1992.
[3] R. Bar-Yehuda, A. Israeli, and A. Itai, "Multiple Communication in Multi-Hop Radio Networks," SIAM J. Computing (SICOMP), vol. 22, no. 4, pp. 875-887, 1993.
[4] G. Calinescu, X.Y. Li, O. Frieder, and P.J. Wan, "Minimum-Energy Broadcast Routing in Static Ad Hoc Wireless Networks," Proc. IEEE INFOCOM, pp. 1162-1171, Apr. 2001.
[5] G. Calinescu, S. Kapoor, A. Olshevsky, and A. Zelikovsky, "Network Lifetime and Power Assignment in Ad Hoc Wireless Networks," Proc. European Symp. Algorithms (ESA '03), pp. 114-126, 2003.
[6] I. Caragiannis, M. Flammini, and L. Moscardelli, "An Exponential Improvement on the MST Heuristic for the Minimum Energy Broadcast Problem," Proc. Int'l Colloquium Automata, Languages and
Programming (ICALP '07), pp. 447-458, 2007.
[7] M. Cardei and D.Z. Du, "Improving Wireless Sensor Network Lifetime through Power Organization," Wireless Networks, vol. 11, pp. 333-340, 2005.
[8] M. Cardei, J. Wu, and M. Lu, "Improving Network Lifetime Using Sensors with Adjustable Sensing Ranges," Int'l J. Sensor Networks, vol. 1, nos. 1/2, pp. 41-49, 2006.
[9] T. Calamoneri, A. Clementi, A. Monti, G. Rossi, and R. Silvestri, "Minimum-Energy Broadcast in Random-Grid Ad-Hoc Networks: Approximation and Distributed Algorithms," Proc. 11th ACM Int'l Symp.
Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM), 2008.
[10] A. Clementi, P. Crescenzi, P. Penna, G. Rossi, and P. Vocca, "On the Complexity of Computing Minimum Energy Consumption Broadcast Subgraphs," Proc. 18th Ann. Symp. Theoretical Aspects of
Computer Science (STACS), pp. 121-131, www.dia.unisa.it~penna, Feb. 2001.
[11] A. Clementi, G. Huiban, P. Penna, G. Rossi, and Y.C. Verhoeven, "On the Approximation Ratio of the MST-Based Heuristic for the Energy-Efficient Broadcast Problem in Static Ad-Hoc Radio
Networks," Proc. Int'l Parallel and Distributed Processing Symp. (IPDPS '03), vol. 222, 2003.
[12] A.E.F. Clementi, A. Monti, and R. Silvestri, "Distributed Broadcast in Radio Networks of Unknown Topology," Theoretical Computer Science, vol. 302, nos. 1-3, pp. 337-364, 2003.
[13] M. Chrobak, L. Gasieniec, and W. Rytter, "Fast Broadcasting and Gossiping in Radio Networks," J. Algorithms, vol. 43, no. 2, pp. 177-189, 2002.
[14] W. Chu, C.J. Colbourn, and V.R. Syrotiuk, "The Effects of Synchronization on Topology Transparent Scheduling," Wireless Networks, vol. 12, pp. 681-690, 2006.
[15] A. Czumaj and W. Rytter, "Broadcasting Algorithms in Radio Networks with Unknown Topology," J. Algorithms, vol. 60, no. 2, pp. 115-143, 2006.
[16] A. Dessmark and A. Pelc, "Broadcasting in Geometric Radio Networks," J. Discrete Algorithms, vol. 5, no. 1, pp. 187-201, 2007.
[17] A. Ephremides, G.D. Nguyen, and J.E. Wieselthier, "On the Construction of Energy-Efficient Broadcast and Multicast Trees in Wireless Networks," Proc. IEEE INFOCOM, pp. 585-594, 2000.
[18] A.D. Flaxman, A.M. Frieze, and J.C. Vera, "On the Average Case Performance of Some Greedy Approximation Algorithms for the Uncapacitated Facility Location Problem," Proc. ACM Symp. Theory of
Computing (STOC '05), pp. 441-449, 2005.
[19] M. Flammini, A. Navarra, and S. Perennes, "The Real Approximation Factor of the MST Heuristic for the Minimum Energy Broadcast," Proc. Int'l Workshop Experimental and Efficient Algorithms (WEA
'05), pp. 22-31, 2005.
[20] P. Gupta and P.R. Kumar, "Critical Power for Asymptotic Connectivity in Wireless Networks," Stochastic Analysis, Control, Optimization and Applications, pp. 547-566, Birkhauser, 1999.
[21] I. Kang and R. Poovendran, "Maximizing Network Lifetime of Wireless Broadcast Ad Hoc Networks," J. ACM Mobile Networks and Applications, vol. 10, no. 6, pp. 879-896, 2005.
[22] L.M. Kirousis, E. Kranakis, D. Krizanc, and A. Pelc, "Power Consumption in Packet Radio Networks," Theoretical Computer Science, vol. 243, pp. 289-305, 2000.
[23] M. Mitzenmacher and E. Upfal, Probability and Computing. Cambridge Univ. Press, 2005.
[24] K. Pahlavan and A. Levesque, Wireless Information Networks. Wiley-Interscience, 1995.
[25] M. Penrose, Random Geometric Graphs. Oxford Univ. Press, 2003.
[26] P. Santi and D.M. Blough, "The Critical Transmitting Range for Connectivity in Sparse Wireless Ad Hoc Networks," IEEE Trans. Mobile Computing, vol. 2, no. 1, pp. 25-39, Jan.-Mar. 2003.
Index Terms:
Energy-aware systems, wireless communication, graph algorithms, network problems.
Tiziana Calamoneri, Andrea E.F. Clementi, Emanuele G. Fusco, Riccardo Silvestri, "Maximizing the Number of Broadcast Operations in Random Geometric Ad Hoc Wireless Networks," IEEE Transactions on
Parallel and Distributed Systems, vol. 22, no. 2, pp. 208-216, Feb. 2011, doi:10.1109/TPDS.2010.77
|
{"url":"http://www.computer.org/csdl/trans/td/2011/02/ttd2011020208-abs.html","timestamp":"2014-04-19T09:29:44Z","content_type":null,"content_length":"57834","record_id":"<urn:uuid:1b4c7979-5c4f-4aea-b2a9-fc02c2a2ed47>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
|
$1000 Solar Water Heating System
$1000 Solar Water Heating System Efficiency Test for December 4, 2008
This page uses the data collected on December 4, 2008 to do a rough estimate of the PEX Water Heating Collector efficiency. The results are about as predicted from the early small panel tests -- the
PEX collector shows about 15% less heat output per sqft of collector than a commercial collector would have under the same conditions, but the PEX collector's BTU output per dollar of cost is 6.4
times better than the commercial collector.
The plot below shows the supply and return water temperatures for the PEX collector for December 4, 2008. The sun intensity is also shown. This was a mostly sunny day with occasional light clouds
passing over. Ambient temp was low (about 18F).
Rough efficiency evaluated at around 11:45 AM.
T supply = 106.8F
T return = 113.5 F
Delta T = (113.5 - 106.8) = 6.7F
Flow rate = 1.73 gpm = 14.4 lb/min = 861.5 lb/hr
Sun = 1049 watts/m^2 = 332 BTU/sf-hr
Collector area = 48 sf
T ambient = 62F
Sun Energy in = (48 sf)(332 BTU/sf-hr) = 15936 BTU/hr
Energy Out = (861.5 lb/hr)(6.7F) (1 BTU/lb -F) = 5772 BTU/hr
Efficiency = (Energy Out) / (Energy In) = (5772 BTU/hr) / (15936 BTU/hr) = 36.2%
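The arithmetic above can be checked with a short script (a sketch; the function name is mine, the values are taken directly from the measurements above):

```python
def collector_efficiency(flow_lb_hr, delta_t_f, sun_btu_sf_hr, area_sf):
    """Thermal efficiency = heat carried away by the water / solar input.
    Water's specific heat is 1 BTU/(lb*F)."""
    energy_out = flow_lb_hr * delta_t_f   # BTU/hr removed by the water
    energy_in = sun_btu_sf_hr * area_sf   # BTU/hr of sun on the collector
    return energy_out / energy_in

eff = collector_efficiency(861.5, 6.7, 332.0, 48.0)
print(f"{eff:.1%}")  # about 36.2%
```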
Using the efficiency calculator, a Heliodyne Gobi flat plate collector with a black painted absorber would give 42.6% efficiency under the same conditions. So, the PEX collector has an output of 85% of the Heliodyne flat plate collector. Amazingly, this agrees almost exactly with the earlier small panel tests -- isn't science wonderful :)
If both of these efficiencies seem on the low side it is because the ambient temperature is quite low for a mid day winter temperature. Even in January, the average midday high in Bozeman is nearly
30F -- this day was 18F. So, think of these as close to worst case for a sunny day.
BTU per Dollar
If you do the more important calculation of heat output per dollar of collector cost, it comes out this way:
PEX Collector cost at $4 per sf = (48 sf)($4) = $192
Commercial collector at $30 per sf = (48 sf)($30) = $1440
BTU per dollar for PEX = (5772 BTU) / ($192) = 30.1 BTU/$
BTU per dollar for Commercial = (42.6%)(15936 BTU) / ($1440) = 4.7 BTU/$
So, the PEX collector is 6.4 times more cost effective.
The same numbers for the Copper/Aluminum Collector would be:
Copper/alum collector cost at $6/sf = $288
Output should be about 96% of the commercial collector, so (15936)(0.426)(0.96) = 6517 BTU
BTU per dollar for copper/alum = (6517 BTU) / ($288) = 22.6 BTU/$
So, about 4.8 times better than the commercial flat plate.
This is the screen copy for the efficiency calculation for the Heliodyne Flat Plate Collector
Gary December 5, 2008
|
{"url":"http://www.builditsolar.com/Experimental/PEXColDHW/Dec5Efic.htm","timestamp":"2014-04-19T19:57:08Z","content_type":null,"content_length":"14463","record_id":"<urn:uuid:c95fa20c-bc98-4a94-b256-3e3fbbccd5e1>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
|
5.3 Exploring Properties of Rectangles and Parallelograms Using Dynamic Software
In this example, properties of rectangles and parallelograms are examined. The emphasis is on identifying what distinguishes a rectangle from a more general parallelogram. Such tasks and the software
can help teachers address the Geometry Standard.
The following link goes to an updated version of the e-example.
Exploring Properties of Triangles and Quadrilaterals (5.3): This e-example allows students to observe the properties of triangles and quadrilaterals by manipulating the sides, angles, and type.
The e-example below contains the original applet, which conforms more closely to the pointers in the book.
Exploring Properties of Rectangles and Parallelograms Using Dynamic Software: Dynamic geometry software provides an environment in which students can explore geometric relationships and make and test conjectures.
Manipulate the dynamic rectangle and parallelogram below by dragging the vertices and the sides. You can rotate or stretch the shapes, but they will retain particular features. What is alike about
all the figures produced by the dynamic rectangle? What is alike about all the figures produced by the dynamic parallelogram? What common characteristics do parallelograms and rectangles share? How
do rectangles differ from other parallelograms?
How to Use the Interactive Figure
To move a shape, click inside of a shape and drag it.
To rotate a shape, click on a vertex of the shape and drag it.
To change the dimensions of a shape, click on a side of the shape and drag it.
Other Tasks
• Predict whether the dynamic rectangle can make each figure below, then check your prediction by trying to duplicate the shape using the dynamic rectangle. Predict whether the dynamic
parallelogram can make each figure below, then check your prediction by trying to duplicate the shape using the dynamic parallelogram.
• Can the dynamic rectangle make all the shapes that the dynamic parallelogram can make? Can the dynamic parallelogram make all the shapes that the dynamic rectangle can make?
• Describe how to decide if the dynamic rectangle can make a particular shape.
• Describe how to decide if the dynamic parallelogram can make a particular shape.
As students manipulate and analyze the shapes that can be made by the dynamic rectangle and dynamic parallelogram, they can make conjectures about the properties of the shapes. For instance, students
might initially say that both types of shapes have "two long and two short sides" or that parallelograms don't have right angles. Manipulating the dynamic rectangle and parallelogram can help
students check the validity of their conjectures. Students can determine that (a) neither shape must have two long sides and two short sides because both can make squares; and (b) rectangles always
have right angles and parallelograms sometimes have right angles. Subsequent investigations using Shape Makers (Battista 1998) software, which includes on-screen measurements for side lengths and
angles, can help students transform these intuitive notions into more-precise formal ideas about geometric properties. With these features students can verify that both rectangles and parallelograms
always have opposite sides congruent but rectangles must also have four right angles. They also see by measurement that a parallelogram can have right angles (in the special cases of a rectangle or square).
Research has shown that an important step in students' development of geometric thinking is to move away from intuitive, visual-holistic reasoning about geometric shapes to a more analytic conception
of the relationships between the parts of shapes (Battista 1998; Clements and Battista 1992). Conceptualizing and reasoning about the properties of shapes is a major step in this development.
Research further shows that dynamic geometry software is a powerful tool for helping students make the transition to property-based reasoning (Battista 1998).
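These property-based definitions can also be tested computationally. The sketch below (my own hypothetical helpers; vertices are assumed to be given in order around the shape) checks whether four vertices form a parallelogram (opposite sides congruent and parallel) and whether it is also a rectangle (a right angle between adjacent sides):

```python
def side_vectors(p):
    # p: four (x, y) vertices in order around the quadrilateral
    return [(p[(i + 1) % 4][0] - p[i][0], p[(i + 1) % 4][1] - p[i][1])
            for i in range(4)]

def is_parallelogram(p, tol=1e-9):
    v = side_vectors(p)
    # Opposite sides must be parallel and equal in length: v0 == -v2, v1 == -v3.
    return (abs(v[0][0] + v[2][0]) < tol and abs(v[0][1] + v[2][1]) < tol and
            abs(v[1][0] + v[3][0]) < tol and abs(v[1][1] + v[3][1]) < tol)

def is_rectangle(p, tol=1e-9):
    v = side_vectors(p)
    # A parallelogram with one right angle has four right angles.
    return is_parallelogram(p) and abs(v[0][0] * v[1][0] + v[0][1] * v[1][1]) < tol

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
sheared = [(0, 0), (2, 0), (3, 1), (1, 1)]
print(is_rectangle(square), is_parallelogram(sheared), is_rectangle(sheared))
```

This mirrors what students observe with the dynamic figures: every rectangle passes the parallelogram test, but not vice versa.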
Take Time to Reflect
• What new insights into the properties of parallelograms can students gain as they work on activities like this?
• What relationships between rectangles and parallelograms are important for students to note?
• What are the advantages and disadvantages of having the students work with existing dynamic figures compared with asking them to construct their own?
• What other pairs of dynamic figures would be interesting for students to consider in activities like this?
Battista, Michael T. Shape Makers: Developing Geometric Reasoning with The Geometer's Sketchpad. Berkeley, Calif.: Key Curriculum Press, 1998.
Clements, Douglas H. & Michael T. Battista. "Geometry and Spatial Reasoning." In Handbook of Research on Mathematics Teaching and Learning, edited by Douglas A. Grouws, pp. 420–64. New York: NCTM/
Macmillan Publishing Co., 1992.
|
{"url":"http://www.nctm.org/standards/content.aspx?id=25040","timestamp":"2014-04-20T21:58:35Z","content_type":null,"content_length":"45448","record_id":"<urn:uuid:c67a8ab5-5371-48d6-8fb4-26e0d3831195>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
|
flying capacitor clamped three level inverter ppt
Page / Author tags
Posted by: seminar
Created at: Saturday 30th of April 2011 03:43:42 AM
Last Edited Or Replied at: Saturday 30th of April 2011 03:43:42 AM
Tags: flying capacitor clamped three level inverter ppt, space vector modulation for flying capacitor, ppts on three level inveter using space vector modulation ppts, cascaded capacitor multilevel inverter ppt, capacitor clamped using multilevel inverter ppt, capacitor clamped multilevel inverter ppt, fcmli microcontroller
ptimized circuit layout and packaging are possible
3- Soft-switching techniques are used for reducing switching losses and device stresses.
Cascaded H-Bridge Multilevel Inverter
Needs separate DC sources for real power
conversion, thus its applications are limited
Cascaded H-Bridge Multilevel Inverter
PWM – Pulse Width Modulation
-> Also known as Pulse Duration Modulation
-> PWM: the width of the pulses of a carrier pulse train is varied in accordance with the modulating signal
Posted by: seminar class
Created at: Thursday 05th of May 2011 04:16:22 AM
Last Edited Or Replied at: Thursday 05th of May 2011 04:16:22 AM
Tags: two level inverter using sv pwm ppts, svpwm generation, vector control of induction motor using sine pwm two level inverter with doc, svpwm generation using, sector time calculation in svpwm, space vector generation ppt, svpwm implementation, pulse width modulation strategies in multi level inverter seminar report, space verctor pulse width modulation, a simple space vector pwm generation scheme for any general n level, details about simple space vector pulse width modulation only, a simple space vector pwm generation scheme for any general n level inverter ppt
to generate SVPWM signals for multilevel inverters. The proposed method uses sector identification only at the two-level. In the proposed method, the actual sector (where the tip of the instantaneous reference space vector lies) in the space vector diagram of a multilevel inverter is not required to be identified. A method using the principle of mapping is proposed for generating the switching vectors corresponding to the actual sector and the optimum switching sequence of a multilevel inverter from that of the two-level inverter. An algorithm is proposed for generating SVPWM for any n-level inverter. ...
Posted by: seminar class
Created at: Tuesday 03rd of May 2011 06:31:37 AM
Last Edited Or Replied at: Tuesday 03rd of May 2011 06:31:37 AM
Tags: multi carrier switching control multilevel inverter, hysteresis modulation of multilevel inverter ppt doc, phase shift in multi level inverter ppt, multicarrier pwm control method, cascaded multilevel inverter thesis, control of multilevel inverters, project report on multilevel inverter by fpga controller, constant switching frequency multicarrier pulse width modulation function, ppts on documentation of switching losses of multi level inverter, phd thesis multilevel inverters, introduction cascaded multilevel inverter five level ppt, switching characterization of cascaded multilevel inverter controlled systems, multi level inverter thesis wind turbine
for the determination
of the minimum amplitude of the triangular carrier
for smooth modulation at fixed switching frequency. It is shown
that the multilevel modulation based on the phase-shifted carriers
significantly reduces the ripple magnitude in the switching
function and allows the use of a smaller carrier amplitude under
closed loop. This increases the forward gain and, hence, improves
the tracking characteristics. The proposed cascaded multilevel
inverter control is implemented for the operation of a distribution
static compensator (DSTATCOM) in the voltage control mode.
The ...
Posted by: seminar
Created at: Saturday 30th of April 2011 03:43:42 AM
Last Edited Or Replied at: Saturday 30th of April 2011 03:43:42 AM
Tags: flying capacitor clamped three level inverter ppt, space vector modulation for flying capacitor, ppts on three level inveter using space vector modulation ppts, cascaded capacitor multilevel inverter ppt, capacitor clamped using multilevel inverter ppt, capacitor clamped multilevel inverter ppt, fcmli microcontroller
It allows the achievement of output waveforms at a reduced switching frequency
- used for high-power applications
Disadvantage – Its complicated hardware implementation.
Selective Harmonic Elimination-
Virtual Stage PWM
Virtual stage PWM = Unipolar Programmed + Fundamental Frequency
PWM Switching Scheme
- Used for low modulation indices
Produce output waveform with lower THD
No. of switching angle not equal to No. of dc source
Why Use PWM Techniques? ...
Posted by: seminar
Created at: Wednesday 30th of March 2011 04:38:15 AM
Last Edited Or Replied at: Wednesday 28th of November 2012 12:57:38
Tags: multi level inverter, diode clamped inverter ppt, use of multilevel inverter ppt, dode clamped multilevel inverter ppt, multilevel inverter, diode clamped multilevel inverter ppt, levels of diode clamped multilevel inverter ppt, multilevel inverter ppt download, contact outpit nl loc es
A DCMI with nl number of levels typically comprises (nl-1) capacitors on the DC bus
Voltage across each capacitor is VDC/(nl-1)
( nl nodes on DC bus, nl levels of output phase voltage , (2nl-1) levels of output line voltage)
Output phase voltage can assume any voltage level by selecting any of the nodes
DCMI is considered as a type of multiplexer that attaches the output to one of the available nodes
Consists of main power devices in series with their respective main diodes connected in parallel and clamping diodes
Main diodes conduct only when mo...
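The DCMI relationships in the fragment above are easy to tabulate. A small sketch (illustrative only; the function name is mine) for an nl-level diode-clamped inverter:

```python
def dcmi_params(nl, vdc):
    """Basic diode-clamped multilevel inverter (DCMI) relationships:
    nl-1 DC-bus capacitors, each charged to VDC/(nl-1),
    nl phase-voltage levels and 2*nl-1 line-voltage levels."""
    return {
        "capacitors": nl - 1,
        "volts_per_capacitor": vdc / (nl - 1),
        "phase_levels": nl,
        "line_levels": 2 * nl - 1,
    }

# Hypothetical example: a 5-level DCMI on a 400 V bus gives
# 4 capacitors at 100 V each and 9 line-voltage levels.
print(dcmi_params(5, 400.0))
```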
Posted by: seminar surveyer
Created at: Wednesday 20th of October 2010 12:06:20 AM
Last Edited Or Replied at: Saturday 26th of January 2013 11:11:24
Tags: Quasi, Operated, Inverter, Multilevel, Diode Clamped, Issues, Balance, Capacitor, diode clamped multilevel inverter, multi level inverter, multilevel inverters doc for download, what are clamping diodes used in converters used for, voltage balancing of diode clamped multilevel inverters, diode clamped multilevel inverter waveforms ppt, diode clamped multi level inverters, problems due capacitor unbalance in inverter, problems due to capacitor imbalance, capacitor imbalance problems, multi level inverters, five level diode clamped inverter ppt, diode clamped multilevel inverter operation, diode clamped multilevel inverter ppt, capacitor balancing issues of the diode clamped multilevel inverter operated in a, solution to voltage imbalance
n three levels and reduces the dc-link capacitance without introducing any significant voltage ripple at the dc-link nodes. The proposed operation can be generalized for any number of levels. The validity of the proposed multilevel inverter operational mode is confirmed by simulations in MATLAB v7.5.
The multilevel concept is based on a step approximation to a sinusoidal voltage. Multilevel inverters belong to the inverter circuit family, where the output voltage comprises more than two intermediate discrete voltage levels. The purpose of these circuits is to generate a h...
Posted by: project report helper
Created at: Thursday 14th of October 2010 12:16:49 AM
Last Edited Or Replied at: Thursday 14th of October 2010 12:16:49
Tags: INVERTERS, CLAMPED, POINT, NUETRAL, ppt on neutral point clamped inverters, neutral clamped inverter ppt, neutral point clamp, npc inverter, npc inverter operation, neutral point clamped multilevel inverter ppt, nuetral
vices – clamp output potential to neutral point with help of clamping diodes D10 D10’
All PWM techniques apply
NPC inverter operation
Consider HEPWM technique to eliminate 2 lowest significant harmonics (5th and 7th) and control fundamental voltage
Posted by: seminar topics
Created at: Tuesday 30th of March 2010 01:45:37 AM
Last Edited Or Replied at: Friday 28th of December 2012 11:52:10
Tags: multilevel inverter wiki, multilevel inverter mdl, multilevel inverter circuit, multilevel inverter topologies, multilevel inverter pdf, multilevel inverter ppt, multilevel inverter basics, Multilevel inverter for power system applications highlighting asymmetric design effects from a supply network point of view pdf, Multilevel inverter for power system applications highlighting asymmetric design effects from a supply network point of view ppt, Multilevel inverter for power system applications highlight, multilevel inverter applications for powersystem, multilevel converter dc dc applications ppt pdf, current multilevel inverter applications wikipedia, multilevel inverters doc ppt, multilevel inverter, multi level inverter operation animation, multi level inverter, what are the applications of multilevel inverters in power systems, applications of multilevel inverters, project on multi level inverter, multilevel invarter, ppt applications of multi level inverter, multilevel inverter for power system applications, use of multilevel inverter in power system, multi level inverter paper pdf, ppt on power system applications, power system applicatons
same resolution, flexibility for the dc-voltage feeding choice), a new solution is investigated: a symmetrical PT feeds an AMI.
Presented By:
Song-Manguelle, J. ; Rufer, A.
read full report
http://ieeexplore.ieee.org/iel5/8688/27523/01226433.pdf?arnumber=1226433
|
{"url":"http://seminarprojects.net/c/flying-capacitor-clamped-three-level-inverter-ppt","timestamp":"2014-04-18T19:26:09Z","content_type":null,"content_length":"36995","record_id":"<urn:uuid:a5bff56a-930b-4482-b7ae-150129cdcf66>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Geometry problem
May 18th 2006, 08:14 AM #1
Geometry problem
First, hello all. Great I found this forum.
Please help me solve this mistery
we have two vertices:
A(1, 1)
B(0, 2)
and there is also a circle K:
K: (x-5)^2 + (x-5)^2 = 16
Now I have to find a circle G: (x-a)^2 + (x-b)^2 = r^2
This circle (G) has to:
- have both vertices (A and B) on its edge
- "touch" the circle K in only one vertex
Here are the answers:
G1: (x-1)^2 + (y-2)^2 = 1
G2: (x-4)^2 + (y-5)^2 = 25
I have the answers all right, however I don't know how to calculate them.
Please help me find a way to calculate the answers.
And excuse my english, because I am not very familiar with english mathematical terminology.
Thank you,
Okay, let me change then some of your data.
--vertex == point.
--edge == circumference.
--Circle K should be: (x-5)^2 +(y-5)^2 = 16 ----(1)
--Circle G: (x-a)^2 +(y-b)^2 = r^2 ----------(2)
--"touch only in one vertex" == "is tangent to"
--mistery == problem only, :-)
If circle G passes through the points (1,1) and (0,2), then,
(1-a)^2 +(1-b)^2 = r^2 ----at (1,1)------(i)
(0-a)^2 +(2-b)^2 = r^2 ----at (0,2)------(ii)
Eliminate r, (i) minus (ii),
(1-a)^2 -(0-a)^2 +(1-b)^2 -(2-b)^2 = 0
(1-a)^2 -(-a)^2 = (2-b)^2 -(1-b)^2
1 -2a +a^2 -a^2 = 4 -4b +b^2 -(1 -2b +b^2)
1 -2a = 3 -2b
1 -2a -3 = -2b
-2a -2 = -2b
b = a+1 -----------**
That means the center of circle G is at point (a,b) == (a,a+1) --------**
Meaning, circle G is
(x -a)^2 +(y -(a+1))^2 = r^2
(x-a)^2 +(y-a-1)^2 = r^2 ---------(2.1)
We still have two unknowns: a and r.
Let us express r in terms of "a".
At point (1,1), using Eq.(2.1),
(1-a)^2 +(1-a-1)^2 = r^2
(1 -2a +a^2) +(a^2) = r^2
2a^2 -2a +1 = r^2 ----------------**
Now, if circle G is tangent to circle K, then the distance between their centers is minimum at the point of tangency. This minimum distance is a straight line joining the centers of the two
circles. It is the sum of the radius of K and the radius of G. Thus,
--radius of circle K = sqrt(16) = 4.
--radius of circle G = sqrt(r^2) = sqrt(2a^2 -2a +1).
--center of circle K = (5,5).
--center of cicle G = (a,a+1).
--distance between the two centers = distance between points (5,5) and (a,a+1).
4+r = sqrt[(5-a)^2 +(5 -(a+1))^2]
4+r = sqrt[(5-a)^2 +(5-a-1)^2]
4+r = sqrt[(25 -10a +a^2) +(16 -8a +a^2)]
4+r = sqrt[41 -18a +2a^2]
Plugging in the value of r in terms of "a",
4 +sqrt[2a^2 -2a +1] = sqrt[41 -18a +2a^2]
Rationalize one of the radicals, square both sides,
16 +8sqrt[2a^2 -2a +1] +(2a^2 -2a +1) = 41 -18a +2a^2
Isolate the radical,
8sqrt[2a^2 -2a +1] = 41 -18a +2a^2 -16 -2a^2 +2a -1
8sqrt[2a^2 -2a +1] = 24 -16a
Divide both sides by 8,
sqrt[2a^2 -2a +1] = 3 -2a
Square both sides,
2a^2 -2a +1 = 9 -12a +4a^2
Bring them all to the lefthand side,
2a^2 -2a +1 -9 +12a -4a^2 = 0
-2a^2 +10a -8 = 0
Divide both sides by -2,
a^2 -5a +4 = 0
(a-4)(a-1) = 0
a = 4 or 1 -----------***
When a=4,
b = a+1 = 5
Hence, center of circle G is (a,b) == (4,5)
r = sqrt[2a^2 -2a +1] = sqrt[2(4^2) -2(4) +1] = 5
Therefore, circle G is
(x-4)^2 +(y-5)^2 = 5^2
(x-4)^2 +(y-5)^2 = 25 ---------------------------------answer.
When a=1,
b = a+1 = 2
Hence, center of circle G is (a,b) == (1,2)
r = sqrt[2a^2 -2a +1] = sqrt[2(1^2) -2(1) +1] = 1
Therefore, circle G is
(x-1)^2 +(y-2)^2 = 1^2
(x-1)^2 +(y-2)^2 = 1 ---------------------------------answer.
I did the (5 -a -1)^2 on paper by "long" multiplication.
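A quick numerical check of both answers (a sketch in Python; the helper names are mine): each circle must pass through A(1,1) and B(0,2), and the distance between its center and K's center (5,5) must equal r+4 (external tangency) or |r-4| (internal tangency).

```python
from math import hypot, isclose

K_CENTER, K_RADIUS = (5.0, 5.0), 4.0
A, B = (1.0, 1.0), (0.0, 2.0)

def on_circle(p, center, r):
    return isclose(hypot(p[0] - center[0], p[1] - center[1]), r)

def tangent_to_K(center, r):
    d = hypot(center[0] - K_CENTER[0], center[1] - K_CENTER[1])
    # externally tangent: d = r + 4; internally tangent: d = |r - 4|
    return isclose(d, r + K_RADIUS) or isclose(d, abs(r - K_RADIUS))

for center, r in [((1.0, 2.0), 1.0), ((4.0, 5.0), 5.0)]:
    assert on_circle(A, center, r) and on_circle(B, center, r)
    assert tangent_to_K(center, r)
print("both answers check out")
```

G1 (center (1,2), r=1) is externally tangent to K, since the center distance is 5 = 1+4; G2 (center (4,5), r=5) is internally tangent, since the center distance is 1 = 5-4.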
To solve this problem more elegantly, I am going to state a test to determine whether two circles "touch", i.e., are tangent to each other.
Let us assume you have two circles with $R$ and $r$ with $R>r$.
There are two possibilities with tangent circles.
Internally tangent-meaning the smaller is inside the larger and touches it.
Externally tangent-meaning the smaller and larger are outside and touch at only one point.
In the first case the distance between the radii is,
$R-r$
In the second case the distance between the radii is,
$R+r$
As a result we have a test to determine whether two circles are tangent. You determine whether or not the distance between the two radii is $R\pm r$.
Returning to the problem.
You have a circle,
$(x-5)^2+(y-5)^2=4^2$ (1)
You need to find a circle(s),
$(x-a)^2+(y-b)^2=r^2$ (2)
Which satisfies 2 conditions:
1)Is tangent to (1)
2)Passes through $(1,1),(0,2)$
For condition 1) to be true we need that (1) is tangent to (2); using the test stated in the introduction, we need the distance between the centers to be $|r\pm 4|$. The reason for the absolute value is that we do not know whether $r \mbox{ or } 4$ is larger, thus we need the positive value of the difference, i.e. the absolute value.
Now, the center of (1) is at (5,5) and the center of (2) is at (a,b). Thus by the distance formula,
$\sqrt{(a-5)^2+(b-5)^2}=|r\pm 4|$
Square both sides, note that absolute value disappears,
$(a-5)^2+(b-5)^2=(r\pm 4)^2$
For condition 2), the points (1,1) and (0,2) are on (2), thus $(a-1)^2+(b-1)^2=r^2$ and $a^2+(b-2)^2=r^2$.
As a result we have a system of 3 equations in 3 unknowns,
$\left\{ \begin{array}{l} (a-5)^2+(b-5)^2=(r\pm 4)^2 \\ (a-1)^2+(b-1)^2=r^2 \\ a^2+(b-2)^2=r^2 \end{array} \right.$
Solving this should give you the results.
Note, you need to consider two cases for $\pm$ one for plus and other for minus.
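The two answers found in the first reply can be checked numerically against both conditions with a short script. A sketch in Python (using the tangency test stated above; note that the circle centered at (4,5) with r = 5 is the internal-tangency case, i.e. the minus sign, since its center is only 1 unit from (5,5), while the circle centered at (1,2) with r = 1 is externally tangent):

```python
import math

def is_tangent(c1, r1, c2, r2, tol=1e-9):
    # Tangency test: the distance between centers equals
    # r1 + r2 (externally tangent) or |r1 - r2| (internally tangent).
    d = math.dist(c1, c2)
    return (math.isclose(d, r1 + r2, abs_tol=tol)
            or math.isclose(d, abs(r1 - r2), abs_tol=tol))

def on_circle(p, center, r, tol=1e-9):
    return math.isclose(math.dist(p, center), r, abs_tol=tol)

K_center, K_r = (5.0, 5.0), 4.0              # the given circle K
answers = [((4.0, 5.0), 5.0),                # internally tangent solution
           ((1.0, 2.0), 1.0)]                # externally tangent solution

for center, r in answers:
    assert is_tangent(center, r, K_center, K_r)
    assert on_circle((1.0, 1.0), center, r)  # passes through (1,1)
    assert on_circle((0.0, 2.0), center, r)  # passes through (0,2)
print("both solution circles verified")
```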
|
{"url":"http://mathhelpforum.com/pre-calculus/3018-geometry-problem.html","timestamp":"2014-04-17T02:35:04Z","content_type":null,"content_length":"43978","record_id":"<urn:uuid:497cc19c-dad7-4066-899f-b21645888b88>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: [TowerTalk] 160m Vertical
As a matter of information I will pass along an equation for calculating the
'equivalent diameter' of triangular towers.
Somewhere I got the following formula for the cylindrical equivalent of a
triangular tower like Rohn 45.
Equivalent Cylindrical Diameter = 2 * CUBEROOT [(T * F^2)/2]
where T = tube diameter and F = face width.
For Rohn 45, T = 1.25" and F = 18", so the equivalent cylindrical diameter
works out to roughly 11.7 inches.
I would expect that Rohn 25 would have a smaller Equivalent Cylindrical
Diameter, and your estimate of 12" diameter may be high.
I have used this formula to calculate Equivalent Cylindrical Diameters for
Rohn/Spaulding HDX which I have used in modeling [HyTower] and it seems to
provide results that are close to what is measured using a General Radio
1606A RF Bridge at the base.
My experience with an MFJ259B to make 160m measurements has not been
encouraging. It may be that I am too close to BC stations that contribute to
the signal that is being read by the instrument.
I don't use it for 160 measurements any more, but rather lug out the 1606A
and signal generator [the MFJ259B is OK but somewhat frequency 'agile'] and
a detector such as an IC-706. For a 120 foot tall tower with an insulated
base a value of 10 -j29 ohms for 1.7 MHz at the base is probably within the
accuracy expectation of the MFJ instrument, but the R component seems low.
Measuring the frequency [or frequency range] at which the X drops to zero
should give you an estimate of the resonant frequency of the tower. <it may
be that you have to settle for determining what the frequency range is over
which the X is low, but not zero, and where the X value begins to increase.
This change from lowest value to higher value provides a clue as to where
the sign of the reactance is changing.>
I don't know how to assess the 'value' of the readings at the other
frequencies. A 120 foot tall tower certainly could have a 30 +j100 ohm input
at the top end of 75 meters since the 120' length is in the range of 1/2
wavelength on 75 meters. [depending on the effective cylindrical diameter of
the tower]. Frankly, I would have expected the input impedance on 75 meters
to be closer to 250 +j300 ohms if you are really close to a 1/2 wave.
The excitation methods suggested by Frank and others probably are the
quickest way to get a fix on things if you don't have a real RF bridge.
Tod, K0TO
> -----Original Message-----
> From: AB5MM [mailto:ab5mm@9plus.net]
> Sent: Monday, September 17, 2007 11:40 AM
> To: towertalk@contesting.com
> Subject: [TowerTalk] 160m Vertical
> Thank all of you for the answers and suggestions so far in
> this hair pulling project called "Matching at 160m Vertical".
> Larry N8KU, John KK9A, Carl KM1H, Bob W5AH, Rodger KI4NFQ,
> Frank W3LPL, Arlan N4OO, Jeff N8CC
> I need to answer a few questions:
> (1) The SWR analyzer(s) MFJ-249 and MFJ-259B.
> (2) We are also using a Eico 170 GDO.
> (3) The ground radials are located on top of the soil. I made
> a aluminum grounding buss bar as a central location for all
> ground radials to terminate at a common location. The buss
> bar is bolted directly to the tower base at around 6" above
> ground level.
> (4) Sweeping the tower directly at the feed point (short RG-8
> pig tail) I do not have a dip anywhere in the 1.8 to 2.0 MHz range.
> It does show slight dips at:
> 1.739 MHz with a swr of 3.7/1 with X=29 & R=10.
> 4.254 MHz, 4.6/1 swr X=100 R=30
> 4.734 MHz, 4.0/1 swr X=28 R=86
> 5.670 MHz, 3.3/1 swr X=29 R=80
> 8.665 MHz, 2.2/1 swr X=49 R=36
> The formula for a 1/4 wave length radiator is 234/fo. I
> really think that is for a reasonably sized wire antenna. The
> Rohn 25g is ~12" in diameter, which would make the Q a little
> lower (wider band width) as well as shift the resonance down
> a few kc. I think we are very close to the correct height, if
> not a little short at 120 ft. - 6 inches. Do any of you think
> the guy wires are adding a capacitive loading component? Is
> that what's throwing a wrench in the works?
> I'll sweep the tower this afternoon with the GDO to see if I
> can get any indication of resonance in or around the 160m
> band. The path from the shack to the tower is well worn now :-))
> Steve AB5MM
> _______________________________________________
> _______________________________________________
> TowerTalk mailing list
> TowerTalk@contesting.com
> http://lists.contesting.com/mailman/listinfo/towertalk
|
{"url":"http://lists.contesting.com/_towertalk/2007-09/msg00581.html?contestingsid=qpfhtk8pmjheohbger7tkkjol0","timestamp":"2014-04-23T17:05:25Z","content_type":null,"content_length":"12635","record_id":"<urn:uuid:1695ee8b-3781-4cb1-ae0e-5b902b7654be>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum Computers
The memory of a classical computer is a string of 0s and 1s, and a classical computer can do calculations on only one set of numbers at once. The memory of a quantum computer is a quantum state which
can be in a superposition of many different numbers at once. A classical computer is made up of bits, and a quantum computer is made up of quantum bits, or qubits. A quantum computer can do an
arbitrary reversible classical computation on all the numbers simultaneously, and also has some ability to produce interference, constructive or destructive, between various different numbers. By
doing a computation on many different numbers at once, then interfering the results to get a single answer, a quantum computer has the potential to be much more powerful than a classical computer of
the same size.
The most famous example of the extra power of a quantum computer is Peter Shor's algorithm for factoring large numbers. Factoring is an important problem in cryptography; for instance, the security
of RSA public key cryptography depends on factoring being a hard problem. Despite much research, no efficient classical factoring algorithm is known.
Shor actually solved a related problem, the discrete log. Suppose we take a number x to the power r and reduce the answer modulo n (i.e., find the remainder after dividing x^r by n). This is straightforward to calculate. It is much more difficult to find the inverse - given x, n, and y, find r such that x^r = y (mod n). For factoring, all we need to do is consider y=1 and find the smallest positive r such that x^r = 1 (mod n). Shor's quantum algorithm to do this calculates x^l for all l at once. Since x^(l+r) = x^l (mod n), this is a periodic function of l with period r. Then when we take the Fourier transform, we will get something that is peaked at multiples of 1/r. Luckily, there is an efficient quantum algorithm for the Fourier transform, so we can then find r.
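The reduction Shor exploits can be illustrated with a purely classical sketch: find the order r of x modulo n by brute force (the exponentially expensive step that the quantum Fourier transform replaces), then, whenever r is even and x^(r/2) ≠ -1 (mod n), read off factors of n from gcd(x^(r/2) ± 1, n). The numbers below are just a toy example:

```python
from math import gcd

def multiplicative_order(x, n):
    """Smallest positive r with x^r = 1 (mod n), by brute force.
    This order-finding step is what a quantum computer speeds up."""
    assert gcd(x, n) == 1
    r, y = 1, x % n
    while y != 1:
        y = (y * x) % n
        r += 1
    return r

def shor_factor(n, x):
    """Classical illustration of Shor's reduction from order-finding
    to factoring. Returns None when this x yields no useful factors."""
    r = multiplicative_order(x, n)
    if r % 2 == 1:
        return None                  # need an even order
    h = pow(x, r // 2, n)
    if h == n - 1:
        return None                  # trivial square root of 1 mod n
    return sorted((gcd(h - 1, n), gcd(h + 1, n)))

print(shor_factor(15, 7))  # [3, 5]
```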
There are many proposals for how to build a quantum computer, with more being made all the time. The 0 and 1 of a qubit might be the ground and excited states of an atom in a linear ion trap; they
might be polarizations of photons that interact in an optical cavity; they might even be the excess of one nuclear spin state over another in a liquid sample in an NMR machine. As long as there is a
way to put the system in a quantum superposition and there is a way to interact multiple qubits, a system can potentially be used as a quantum computer. In order for a system to be a good choice, it
is also important that we can do many operations before losing quantum coherence. It may not ultimately be possible to make a quantum computer that can do a useful calculation before decohering, but
if we can get the error rate low enough, we can use a quantum error-correcting code to protect the data even when the individual qubits in the computer decohere.
October 29, 1997
|
{"url":"http://www.perimeterinstitute.ca/personal/dgottesman/qcomp.html","timestamp":"2014-04-21T08:53:36Z","content_type":null,"content_length":"3858","record_id":"<urn:uuid:3b66fee4-494c-422e-95ca-647d2a37b55c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
|
-- Copyright (c) 2009, Bjoern B. Brandenburg <bbb [at] cs.unc.edu>
-- All rights reserved.
-- Redistribution and use in source and binary forms, with or without
-- modification, are permitted provided that the following conditions are met:
-- * Redistributions of source code must retain the above copyright
-- notice, this list of conditions and the following disclaimer.
-- * Redistributions in binary form must reproduce the above copyright
-- notice, this list of conditions and the following disclaimer in the
-- documentation and/or other materials provided with the distribution.
-- * Neither the name of the copyright holder nor the names of any
-- contributors may be used to endorse or promote products derived from
-- this software without specific prior written permission.
-- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
-- AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
-- IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
-- ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
-- LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
-- CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
-- SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
-- INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-- CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
-- ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-- POSSIBILITY OF SUCH DAMAGE.
{- |
This module implements a number of common bin-packing heuristics: 'FirstFit',
'LastFit', 'BestFit', 'WorstFit', and 'AlmostWorstFit'. In addition, the
not-so-common, but analytically superior (in terms of worst-case behavior),
'ModifiedFirstFit' heuristic is also supported. Further, the (slow)
'SumOfSquaresFit' heuristic, which has been considered in the context of online
bin-packing (Bender et al., 2008), is also supported.
Items can be packed in order of both 'Decreasing' and 'Increasing' size (and,
of course, in unmodified order; see 'AsGiven').
The module supports both the standard (textbook) minimization problem
(/"How many bins do I need to pack all items?"/; see 'minimizeBins' and
'countBins') and the more practical fitting problem
(/"I've got n bins; which items can I take?"/; see 'binpack').
The well-known heuristics are described online in many places and are not
further discussed here. For example, see
<http://www.cs.arizona.edu/icon/oddsends/bpack/bpack.htm> for an overview. A
description of the 'ModifiedFirstFit' algorithm is harder to come by online,
hence a brief description and references are provided below.
Note that most published analysis assumes items to be sorted in some specific
(mostly 'Decreasing') order. This module does not enforce such assumptions,
rather, any ordering can be combined with any placement heuristic.
If unsure what to pick, then try 'FirstFit' 'Decreasing' or 'BestFit'
'Decreasing' as a default. Use 'WorstFit' 'Decreasing' (in combination with
'binpack') if you want a pre-determined number of bins filled evenly.
A short overview of the 'ModifiedFirstFit' heuristic follows. This overview is
based on the description given in (Yue and Zhang, 1995).
Let @lst@ denote the list of items to be bin-packed, let @x@ denote the size of
the smallest element in @lst@, and let @cap@ denote the capacity of one
bin. @lst@ is split into the four sub-lists, @lA@, @lB@, @lC@, @lD@.
[@lA@] All items strictly larger than @cap\/2@.
[@lB@] All items of size at most @cap\/2@ and strictly larger than @cap\/3@.
[@lC@] All items of size at most @cap\/3@ and strictly larger than @(cap - x)\/5@.
[@lD@] The rest, /i.e./, all items of size at most @(cap - x)\/5@.
Items are placed as follows:
(1) Create a list of @length lA@ bins. Place each item in @lA@ into its own
bin (while maintaining relative item order with respect to @lst@). Note:
relevant published analysis assumes that @lst@ is sorted in order of
'decreasing' size.
(2) Take the list of bins created in Step 1 and reverse it.
(3) Sequentially consider each bin @b@. If the two smallest items in @lC@ do
NOT fit together into @b@ of if there a less than two items remaining in
@lC@, then pack nothing into @b@ and move on to the next bin (if any).
If they do fit together, then find the largest item @x1@ in @lC@ that
would fit together with the smallest item in @lC@ into @b@. Remove @x1@
from @lC@. Then find the largest item @x2@, @x2 \\= x1@, in @lC@ that will
now fit into @b@ /together/ with @x1@. Remove @x1@ from @lC@. Place both
@x1@ and @x2@ into @b@ and move on to the next item.
(4) Reverse the list of bins again.
(5) Use the 'FirstFit' heuristic to place all remaining items, /i.e./, @lB@,
@lD@, and any remaining items of @lC@.
* D.S. Johnson and M.R. Garey (1985). A 71/60 Theorem for Bin-Packing.
/Journal of Complexity/, 1:65-106.
* M. Yue and L. Zhang (1995). A Simple Proof of the Inequality MFFD(L) <= 71/60
OPT(L) + 1, L for the MFFD Bin-Packing Algorithm.
/Acta Mathematicae Applicatae Sinica/, 11(3):318-330.
* M.A. Bender, B. Bradley, G. Jagannathan, and K. Pillaipakkamnatt (2008).
Sum-of-Squares Heuristics for Bin Packing and Memory Allocation.
/ACM Journal of Experimental Algorithmics/, 12:1-19.
module Data.BinPack (
-- * Types
PlacementPolicy (FirstFit, ModifiedFirstFit, LastFit, BestFit, WorstFit, AlmostWorstFit, SumOfSquaresFit)
, OrderPolicy (AsGiven, Increasing, Decreasing)
, Measure
-- * Feature Enumeration
-- $features
, allOrders
, allPlacements
, allHeuristics
-- * Bin Abstraction
-- $bin
, Bin
, emptyBin
, emptyBins
, asBin
, tryAddItem
, addItem
, addItems
, items
, gap
-- * Bin-Packing Functions
, minimizeBins
, countBins
, binpack
) where
import Data.BinPack.Internals
import Data.BinPack.Internals.MFF (binpackMFF, minimizeMFF)
import Data.BinPack.Internals.SumOfSquares (sosfit, sosfitAnyFit)
-- | What placement heuristic should be used?
data PlacementPolicy = FirstFit -- ^ Traverse bin list from 'head' to
-- 'last' and place item in the first
-- bin that has sufficient capacity.
| ModifiedFirstFit -- ^ See above.
| LastFit -- ^ Traverse bin list from 'last' to
-- 'head' and place item in the first
-- bin that has sufficient capacity.
| BestFit -- ^ Place item in the bin with the
-- most capacity.
| WorstFit -- ^ Place item in the bin with the
-- least (but sufficient) capacity.
| AlmostWorstFit -- ^ Choose the 2nd to worst-fitting
-- bin.
| SumOfSquaresFit -- ^ Choose bin such that sum-of-squares
-- heuristic is minimized.
deriving (Show, Eq, Ord)
-- $features
-- Lists of all supported heuristics. Useful for benchmarking and testing.
-- | The list of all possible 'PlacementPolicy' choices.
allPlacements :: [PlacementPolicy]
allPlacements = [FirstFit, ModifiedFirstFit, LastFit, BestFit
, WorstFit, AlmostWorstFit, SumOfSquaresFit]
-- | The list of all possible 'OrderPolicy' choices.
allOrders :: [OrderPolicy]
allOrders = [Decreasing, Increasing, AsGiven]
-- | All supported ordering and placement choices.
allHeuristics :: [(PlacementPolicy, OrderPolicy)]
allHeuristics = [(p, o) | p <- allPlacements, o <- allOrders]
placement :: (Ord a, Num a) => PlacementPolicy -> Placement a b
placement WorstFit = worstfit
placement BestFit = bestfit
placement FirstFit = firstfit
placement LastFit = lastfit
placement AlmostWorstFit = almostWorstfit
placement SumOfSquaresFit = sosfitAnyFit
placement ModifiedFirstFit = error "Not a simple placement policy."
-- $bin
-- Conceptually, a bin is defined by its remaining capacity and the contained
-- items. Currently, it is just a tuple, but this may change in future
-- releases. Clients of this module should rely on the following accessor
-- functions.
{- | Bin-packing without a limit on the number of bins (minimization problem).
Assumption: The maximum item size is at most the size of one bin (this is not checked).
* Pack the words of the sentence /"Bin packing heuristics are a lot of fun!"/
into bins of size 11, assuming the size of a word is its length. The
'Increasing' ordering yields a sub-optimal result that leaves a lot of empty
space in the bins.
> minimizeBins FirstFit Increasing length 11 (words "Bin packing heuristics are a lot of fun!")
> ~~> [(2,["are","Bin","of","a"]),(4,["fun!","lot"]),(4,["packing"]),(1,["heuristics"])]
* Similarly, for 'Int'. Note that we use 'id' as a 'Measure' of the size of an 'Int'.
> minimizeBins FirstFit Decreasing id 11 [3,7,10,3,1,3,2,4]
> ~~> [(0,[1,10]),(0,[4,7]),(0,[2,3,3,3])]
minimizeBins :: (Num a, Ord a) =>
PlacementPolicy -- ^ The bin-packing heuristic to use.
-> OrderPolicy -- ^ How to order the items before placement.
-> Measure a b -- ^ How to size the items.
-> a -- ^ The size of one bin.
-> [b] -- ^ The items.
-> [Bin a b] -- ^ The result: a list of 'Bins'.
minimizeBins fitPol ordPol size capacity objects =
case fitPol of
-- special MFF: more complicated looping; no re-ordered items.
ModifiedFirstFit -> minimizeMFF ordPol size capacity objects
-- special SOS: not an any-fit heuristic.
SumOfSquaresFit -> minimize capacity size (sosfit capacity) [] items'
-- everything else can be handled by minimize+placement.
_ -> minimize capacity size (placement fitPol) [] items'
where items' = order ordPol size objects
{- |
Wrapper around 'minimizeBins'; useful if only the number of required
bins is of interest. See 'minimizeBins' for a description of the arguments.
* How many bins of size 11 characters each do we need to pack the words of the sentence
/"Bin packing heuristics are a lot of fun!"/?
> countBins FirstFit Increasing length 11 (words "Bin packing heuristics are a lot of fun!")
> ~~> 4
* Similarly, for 'Int'. As before, we use 'id' as a 'Measure' for the size of an 'Int'.
> countBins FirstFit Decreasing id 11 [3,7,10,3,1,3,2,4]
> ~~> 3
countBins :: (Num a, Ord a) =>
PlacementPolicy -> OrderPolicy -> Measure a b -> a -> [b] -> Int
countBins fitPol ordPol size cap = length
. minimizeBins fitPol ordPol size cap
{- | Bin-pack a list of items into a list of (possibly non-uniform) bins. If
an item cannot be placed, instead of creating a new bin, this version will
return a list of items that could not be packed (if any).
Example: We have two empty bins, one of size 10 and one of size 12.
Which words can we fit in there?
> binpack WorstFit Decreasing length [emptyBin 10, emptyBin 12] (words "Bin packing heuristics are a lot of fun!")
> ~~> ([(0,["Bin","packing"]),(0,["of","heuristics"])],["a","lot","are","fun!"])
Both bins were filled completely, and the words /"are a lot fun!"/ could not be
packed. -}
binpack :: (Num a, Ord a) =>
PlacementPolicy -- ^ The bin packing heuristic to use.
-> OrderPolicy -- ^ How to order the items before placement.
-> Measure a b -- ^ How to size the items.
-> [Bin a b] -- ^ The bins; may be non-uniform and pre-filled.
-> [b] -- ^ The items.
-> ([Bin a b], [b]) -- ^ The result; a list of bins
-- and a list of items that could not
-- be placed.
binpack fitPol ordPol size bins objects =
    let fit    = placement fitPol
        items' = order ordPol size objects
    in case fitPol of
         ModifiedFirstFit -> binpackMFF ordPol size bins items'
         _                -> binpack' (fit size) bins items' []
|
{"url":"http://hackage.haskell.org/package/Binpack-0.4/docs/src/Data-BinPack.html","timestamp":"2014-04-20T01:20:00Z","content_type":null,"content_length":"35600","record_id":"<urn:uuid:cc307b7b-0047-4f50-bd94-a79fe02ad779>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00066-ip-10-147-4-33.ec2.internal.warc.gz"}
|
java - how to calculate coefficients in a quadratic equation
Hi guys, I know it would be easy for most of you, but because I am new to Java and bad at mathematics I got stuck on this question. It is necessary because our teacher wants to see the output of it. So can you help me write the code for this? Any help would be greatly appreciated. Here is the question..
Write a console program to read in the real roots r1, r2 of a quadratic equation ax^2 + bx + c = 0 and print the coefficients of the equation, a, b, c.
java math quadratic
The Quadratic Formula is near the top of this page: en.wikipedia.org/wiki/Quadratic_equation Normally, questions such as 'Please do my homework for me' don't get a very positive response! – NickJ Feb 25 '13 at 10:04
I just don't know what to do and am asking for an idea .. nobody here is obliged to do my work of course, but since people share their minds here it encouraged me to ask about the problem, no matter the negative responses that you gave.. thanks anyway – regeme Feb 25 '13 at 10:15
closed as not a real question by Daniel Fischer, Sean Owen, home, ben75, Klas Lindbäck Feb 25 '13 at 15:47
1 Answer
A starting point for you. If the roots are r1 and r2, then the equation is (x-r1)(x-r2) = 0, since x = r1 makes the first factor zero, and x = r2 makes the second one zero.
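Expanding the factored form gives the coefficients directly: (x - r1)(x - r2) = x^2 - (r1 + r2)x + r1*r2, so one valid choice is a = 1, b = -(r1 + r2), c = r1*r2 (the coefficients are only determined up to a common scale factor). A sketch of the arithmetic (shown in Python for brevity; the Java version is a direct translation, reading r1 and r2 with Scanner):

```python
def coefficients_from_roots(r1, r2):
    # (x - r1)(x - r2) = x^2 - (r1 + r2)x + r1*r2,
    # so with a = 1: b = -(r1 + r2), c = r1 * r2.
    return 1.0, -(r1 + r2), r1 * r2

a, b, c = coefficients_from_roots(2.0, 3.0)
print(a, b, c)  # 1.0 -5.0 6.0  -> x^2 - 5x + 6 has roots 2 and 3
```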
upvoted for good answer but this question should really be on math.stackexchange.com – imulsion Apr 8 '13 at 14:37
|
{"url":"http://stackoverflow.com/questions/15064350/java-how-to-calculate-coefficient-in-an-quadratic-equation","timestamp":"2014-04-24T23:39:07Z","content_type":null,"content_length":"60117","record_id":"<urn:uuid:5759874b-003f-4aea-a1c2-fbc52c256ec0>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00096-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bounds of Solutions of Integrodifferential Equations
Abstract and Applied Analysis
Volume 2011 (2011), Article ID 571795, 7 pages
Research Article
Department of Mathematics, Faculty of Electrical Engineering and Communication, Technická 8, Brno University of Technology, 61600 Brno, Czech Republic
Received 20 January 2011; Accepted 24 February 2011
Academic Editor: Miroslava Růžičková
Copyright © 2011 Zdeněk Šmarda. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Some new integral inequalities are given, and bounds of solutions of the following integro-differential equation are determined: , , where , , are continuous functions, .
1. Introduction
Ou Yang [1] established and applied the following useful nonlinear integral inequality.
Theorem 1.1. Let and be nonnegative and continuous functions defined on and let be a constant. Then, the nonlinear integral inequality implies
This result has been frequently used by authors to obtain global existence, uniqueness, boundedness, and stability of solutions of various nonlinear integral, differential, and integrodifferential
equations. On the other hand, Theorem 1.1 has also been extended and generalized by many authors; see, for example, [2–19]. Like Gronwall-type inequalities, Theorem 1.1 is also used to obtain a
priori bounds to unknown functions. Therefore, integral inequalities of this type are usually known as Gronwall-Ou Yang type inequalities.
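As a sanity check of Theorem 1.1 in its commonly stated form — if u²(t) ≤ c² + 2∫₀ᵗ f(s)u(s) ds with u, f nonnegative and continuous and c a nonnegative constant, then u(t) ≤ c + ∫₀ᵗ f(s) ds — the sketch below numerically solves the extremal case in which the hypothesis holds with equality (the particular f is an arbitrary nonnegative choice for the demo) and verifies the conclusion at every grid point:

```python
import math

# Solve the equality case of the hypothesis on a grid,
#     u(t)^2 = c^2 + 2 * int_0^t f(s) u(s) ds,
# and check the Ou Yang conclusion
#     u(t) <= c + int_0^t f(s) ds
# at every step.
c = 1.0
f = lambda s: 0.5 * (1.0 + math.cos(s))  # an arbitrary nonnegative continuous f
h, n = 1e-3, 5000                        # grid on [0, 5]
u, acc, F = c, 0.0, 0.0                  # acc = 2*int f*u, F = int f
for k in range(n):
    s = k * h
    acc += 2.0 * h * f(s) * u            # left-rectangle quadrature
    F += h * f(s)
    u = math.sqrt(c * c + acc)           # extremal u from the integral equation
    assert u <= c + F + 1e-9             # the claimed bound holds
print("Ou Yang bound verified on [0, 5]")
```

In the equality case, differentiating u² = c² + 2∫fu gives u' = f, so u = c + ∫f exactly, which is why the bound in the conclusion is sharp; the discretized solution tracks it from below.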
In the last few years there have been a number of papers written on discrete versions of the Gronwall inequality and its nonlinear version of Bihari type; see [13, 16, 20]. Some applications of discrete versions of integral inequalities are given in the papers [21–23].
Pachpatte [11, 12, 14–16] and Salem [24] have given some new integral inequalities of the Gronwall-Ou Yang type involving functions and their derivatives. Lipovan [7] used the modified Gronwall-Ou Yang inequality with a logarithmic factor in the integrand in the study of the wave equation with logarithmic nonlinearity. Engler [5] used a slight variant of Haraux's inequality for the determination of global regular solutions of the dynamic antiplane shear problem in nonlinear viscoelasticity. Dragomir [3] applied his inequality to the stability, boundedness, and asymptotic behaviour of solutions of nonlinear Volterra integral equations.
classes of integrodifferential equations.
2. Integral Inequalities
Lemma 2.1. Let , , and be nonnegative continuous functions defined on . If the inequality holds where is a nonnegative constant, , then for .
Proof. Define a function by the right-hand side of (2.1) Then, , and Define a function by then , , Integrating (2.7) from 0 to , we have Using (2.8) in (2.6), we obtain Integrating from 0 to and
using , we get inequality (2.2). The proof is complete.
Lemma 2.2. Let , , and be nonnegative continuous functions defined on , be a positive nondecreasing continuous function defined on . If the inequality holds, where is a nonnegative constant, , then
where .
Proof. Since the function is positive and nondecreasing, we obtain from (2.10) Applying Lemma 2.1 to inequality (2.12), we obtain desired inequality (2.11).
Lemma 2.3. Let , , , and be nonnegative continuous functions defined on , and let be a nonnegative constant.
If the inequality holds for , then where
Proof. Define a function by the right-hand side of (2.13) Then , and Differentiating and using (2.17), we get Integrating inequality (2.18) from 0 to , we have where is defined by (2.15), is positive
and nondecreasing for . Now, applying Lemma 2.2 to inequality (2.19), we get Using (2.20) and the fact that , we obtain desired inequality (2.14).
3. Application of Integral Inequalities
Consider the following initial value problem where , , are continuous functions. We assume that a solution of (3.1) exists on .
Theorem 3.1. Suppose that where , are nonnegative continuous functions defined on . Then, for the solution of (3.1) the inequality holds on .
Proof. Multiplying both sides of (3.1) by and integrating from 0 to we obtain From (3.2) and (3.4), we get Using inequality (2.14) in Lemma 2.3, we have where which is the desired inequality (3.3).
Remark 3.2. It is obvious that inequality (3.3) gives the bound of the solution of (3.1) in terms of the known functions.
Acknowledgment
The author was supported by the Council of Czech Government grants MSM 00216 30503 and MSM 00216 30529 and by the Grant FEKT-S-11-2-921 of the Faculty of Electrical Engineering and Communication.
References
1. L. Ou Yang, "The boundedness of solutions of linear differential equations ${y}^{\prime }+A\left(t\right)y=0$," Advances in Mathematics, vol. 3, pp. 409–415, 1957.
2. D. Bainov and P. Simeonov, Integral Inequalities and Applications, vol. 57 of Mathematics and Its Applications (East European Series), Kluwer Academic Publishers, Dordrecht, The Netherlands, 1992.
3. S. S. Dragomir, "On Volterra integral equations with kernels of L-type," Analele Universității din Timișoara, Seria Științe Matematice, vol. 25, no. 2, pp. 21–41, 1987.
4. S. S. Dragomir and Y.-H. Kim, "On certain new integral inequalities and their applications," JIPAM: Journal of Inequalities in Pure and Applied Mathematics, vol. 3, no. 4, article 65, p. 8, 2002.
5. H. Engler, "Global regular solutions for the dynamic antiplane shear problem in nonlinear viscoelasticity," Mathematische Zeitschrift, vol. 202, no. 2, pp. 251–259, 1989.
6. A. Haraux, Nonlinear Evolution Equations. Global Behavior of Solutions, vol. 841 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 1981.
7. O. Lipovan, "A retarded integral inequality and its applications," Journal of Mathematical Analysis and Applications, vol. 285, no. 2, pp. 436–443, 2003.
8. Q. H. Ma and L. Debnath, "A more generalized Gronwall-like integral inequality with applications," International Journal of Mathematics and Mathematical Sciences, vol. 15, pp. 927–934, 2003.
9. Q.-H. Ma and E.-H. Yang, "On some new nonlinear delay integral inequalities," Journal of Mathematical Analysis and Applications, vol. 252, no. 2, pp. 864–878, 2000.
10. F. W. Meng and W. N. Li, "On some new integral inequalities and their applications," Applied Mathematics and Computation, vol. 148, no. 2, pp. 381–392, 2004.
11. B. G. Pachpatte, "On some new inequalities related to certain inequalities in the theory of differential equations," Journal of Mathematical Analysis and Applications, vol. 189, no. 1, pp. 128–144, 1995.
12. B. G. Pachpatte, "On some integral inequalities similar to Bellman-Bihari inequalities," Journal of Mathematical Analysis and Applications, vol. 49, pp. 794–802, 1975.
13. B. G. Pachpatte, "On certain nonlinear integral inequalities and their discrete analogues," Facta Universitatis. Series: Mathematics and Informatics, no. 8, pp. 21–34, 1993.
14. B. G. Pachpatte, "On some fundamental integral inequalities arising in the theory of differential equations," Chinese Journal of Contemporary Mathematics, vol. 22, pp. 261–273, 1994.
15. B. G. Pachpatte, "On a new inequality suggested by the study of certain epidemic models," Journal of Mathematical Analysis and Applications, vol. 195, no. 3, pp. 638–644, 1995.
16. B. G. Pachpatte, Inequalities for Differential and Integral Equations, Mathematics in Science and Engineering 197, Academic Press, San Diego, Calif, USA, 2006.
17. Z. Šmarda, "Generalization of certain integral inequalities," in Proceedings of the 8th International Conference on Applied Mathematics (APLIMAT '09), pp. 223–228, Bratislava, Slovakia, 2009.
18. E. H. Yang, "On asymptotic behaviour of certain integro-differential equations," Proceedings of the American Mathematical Society, vol. 90, no. 2, pp. 271–276, 1984.
19. C.-J. Chen, W.-S. Cheung, and D. Zhao, "Gronwall-Bellman-type integral inequalities and applications to BVPs," Journal of Inequalities and Applications, vol. 2009, Article ID 258569, 15 pages, 2009.
20. E. H. Yang, "On some new discrete generalizations of Gronwall's inequality," Journal of Mathematical Analysis and Applications, vol. 129, no. 2, pp. 505–516, 1988.
21. J. Baštinec and J. Diblík, "Asymptotic formulae for a particular solution of linear nonhomogeneous discrete equations," Computers & Mathematics with Applications, vol. 45, no. 6–9, pp. 1163–1169, 2003.
22. J. Diblík, E. Schmeidel, and M. Růžičková, "Existence of asymptotically periodic solutions of system of Volterra difference equations," Journal of Difference Equations and Applications, vol. 15, no. 11-12, pp. 1165–1177, 2009.
23. J. Diblík, E. Schmeidel, and M. Růžičková, "Asymptotically periodic solutions of Volterra system of difference equations," Computers & Mathematics with Applications, vol. 59, no. 8, pp. 2854–2867, 2010.
24. S. Salem, "On some systems of two discrete inequalities of Gronwall type," Journal of Mathematical Analysis and Applications, vol. 208, no. 2, pp. 553–566, 1997.
|
{"url":"http://www.hindawi.com/journals/aaa/2011/571795/","timestamp":"2014-04-20T08:42:49Z","content_type":null,"content_length":"223645","record_id":"<urn:uuid:c4f6bbbe-f285-42bc-8f4d-46499addc157>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Static pressure
• In the design and operation of aircraft, static pressure is the air pressure in the aircraft’s static pressure system.
• In fluid dynamics, static pressure is the pressure at a nominated point in a fluid. Many authors use the term static pressure in place of pressure to avoid ambiguity.
• The term static pressure is also used by some authors in fluid statics.
Static pressure in design and operation of aircraft
An aircraft’s altimeter is operated by the static pressure system. An aircraft’s airspeed indicator is operated by the static pressure system and the pitot pressure system.
The static pressure system is open to the exterior of the aircraft to sense the pressure of the atmosphere at the altitude at which the aircraft is flying. This small opening is called the static
port. In flight the air pressure is slightly different at different positions around the exterior of the aircraft. The aircraft designer must select the position of the static port carefully. There
is no position on the exterior of an aircraft at which the air pressure, for all angles of attack, is identical to the atmospheric pressure at the altitude at which the aircraft is flying. The
difference in pressure causes a small error in the altitude indicated on the altimeter, and the airspeed indicated on the airspeed indicator. This error in indicated altitude and airspeed is called
position error.
When selecting the position for the static port, the aircraft designer’s objective is to ensure the pressure in the aircraft’s static pressure system is as close as possible to the atmospheric
pressure at the altitude at which the aircraft is flying, across the operating range of weight and airspeed. Many authors describe the atmospheric pressure at the altitude at which the aircraft is
flying as the freestream static pressure. At least one author takes a different approach in order to avoid a need for the expression freestream static pressure. Gracey has written “The static
pressure is the atmospheric pressure at the flight level of the aircraft”. Gracey then refers to the air pressure at any point close to the aircraft as the local static pressure.
Static pressure in fluid dynamics
The concept of pressure is central to the study of fluids. A pressure can be identified for every point in a body of fluid, regardless of whether the fluid is in motion or not. Pressure can be
measured using an aneroid, Bourdon tube, mercury column, or various other methods.
The concepts of stagnation (or total) pressure and dynamic pressure arise from Bernoulli's equation and are significant in the study of all fluid flows. (These two pressures are not pressures in the
usual sense - they cannot be measured using an aneroid, Bourdon tube or mercury column.) To avoid potential ambiguity when referring to pressure in fluid dynamics, many authors use the term static
pressure to distinguish it from total pressure and dynamic pressure. Static pressure is identical to pressure and can be identified for every point in a fluid flow field.
In Aerodynamics, L.J. Clancy writes: "To distinguish it from the total and dynamic pressures, the actual pressure of the fluid, which is associated not with its motion but with its state, is often
referred to as the static pressure, but where the term pressure alone is used it refers to this static pressure."
Bernoulli's equation is fundamental to the dynamics of incompressible fluids. In many fluid flow situations of interest, changes in elevation are insignificant and can be ignored. With this
simplification, Bernoulli’s equation for incompressible flows can be expressed as:
$P + \frac{1}{2}\rho V^2 = P_0$
where:
$P$ is static pressure
$\frac{1}{2}\rho V^2$ is dynamic pressure, usually denoted by $q$
$P_0$ is total pressure, which is constant along any streamline
Every point in a steadily flowing fluid, regardless of the fluid speed at that point, has its own static pressure $P$, dynamic pressure $q$, and total pressure $P_0$. Static pressure and dynamic
pressure are likely to vary significantly throughout the fluid but total pressure is constant along each streamline. In irrotational flow, total pressure is the same on all streamlines and is
therefore constant throughout the flow.
The simplified form of Bernoulli's equation can be summarised in the following memorable word equation:
static pressure + dynamic pressure = total pressure
This simplified form of Bernoulli’s equation is fundamental to an understanding of the design and operation of ships, low speed aircraft, and airspeed indicators for low speed aircraft – aircraft
whose maximum speed will be less than about 30% of the speed of sound.
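The word equation above lends itself to a quick numerical check. The sketch below uses illustrative values (standard sea-level air density and pressure, a 50 m/s flow — assumptions, not figures from this article):

```python
# static pressure + dynamic pressure = total pressure
rho = 1.225          # air density in kg/m^3 (standard sea-level value, assumed)
V = 50.0             # flow speed in m/s (illustrative)
P_static = 101325.0  # static pressure in Pa (standard sea-level value, assumed)

q = 0.5 * rho * V ** 2    # dynamic pressure, (1/2) rho V^2, about 1531 Pa
P_total = P_static + q    # total (stagnation) pressure P0

print(q, P_total)
```

At 50 m/s the flow is well below 30% of the speed of sound, so the incompressible form of Bernoulli's equation applies.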
As a consequence of the widespread understanding of the term static pressure in relation to Bernoulli’s equation, many authors in the field of fluid dynamics also use static pressure rather than
pressure in applications not directly related to Bernoulli’s equation.
The British Standards Institution, in its Standard Glossary of Aeronautical Terms, gives the following definition:
4412 Static pressure The pressure at a point on a body moving with the fluid.
Static pressure in fluid statics
The term static pressure is sometimes used in fluid statics to refer to the pressure of a fluid at a nominated depth in the fluid. In fluid statics the fluid is stationary everywhere and the concepts of dynamic pressure and total pressure are not applicable. Consequently there is little risk of ambiguity in using the term pressure, but some authors choose to use static pressure in some contexts.
See also
Aircraft design and operation
• Kermode, A.C. (1972), Mechanics of Flight, Longman Group Limited, London. ISBN 0-582-23740-8
• Lombardo, D.A. (1999), Aircraft Systems, 2nd edition, McGraw-Hill, New York. ISBN 0-07-038605-6
Fluid dynamics
• Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London. ISBN 0-273-01120-0
• Streeter, V.L. (1966), Fluid Mechanics, McGraw-Hill, New York
Appendix B, Thirteenth Graphic Equation: Results from the 2004 National Survey on Drug Use and Health: National Findings
I(i) = 1 if the date of the interview minus the date of initiation is less than or equal to 365; I(i) = 0 otherwise.
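The rule above reads directly as an indicator function. A minimal sketch (the function name and the use of ordinal day counts are illustrative assumptions; the survey works with calendar dates):

```python
def past_year_indicator(interview_day: int, initiation_day: int) -> int:
    """Return 1 if initiation falls within 365 days of the interview, else 0."""
    return 1 if interview_day - initiation_day <= 365 else 0

print(past_year_indicator(1000, 700))  # 1  (300 days elapsed)
print(past_year_indicator(1000, 600))  # 0  (400 days elapsed)
```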
Back to Appendix B, Thirteenth Graphic Equation
This page was last updated on May 20, 2008.
Book value per share
Book value per share is a ratio calculated by subtracting all liabilities from all assets, then dividing the result by the total number of outstanding shares (or equivalents). The idea behind book value per share is that if a company's book value per share is higher than the current stock price, the company is undervalued. Conversely, if the stock price is substantially higher than the book value per share, the stock may be overvalued and prone to corrections. Investors using book value per share need to understand book value and its limitations, since limitations in book value also directly apply to book value per share.
To calculate book value per preferred share:
(Share capital of preferred and common stock + contributed surplus + retained earnings) / number of preferred shares outstanding.
General value guidelines are as follows:
• Utilities & industrials: Minimum equity value per preferred share in each of the last 5 fiscal years should be about 2 times the dollar value of assets that each preferred share is entitled to in
the event of liquidation
• Other industries: These numbers vary greatly and should be compared to companies that are about the same size and qualities (compare against the primary competitors)
• Book value per preferred share should also show a stable or increasing value over the last 5 year period.
To calculate book value per common share:
(share capital of common stock + contributed surplus + retained earnings) / number of common shares outstanding
General value guidelines are as follows:
• There are no generally accepted values for this ratio, and in practice most fundamentalists find there is generally no substantial relationship between the equity value per common share and the market value. Some companies (depending on industry) trade well above their equity value while others trade far below it. The difference between the equity value per common share and the market price is usually accounted for by the actual or future earning power of the company. Companies with higher actual or future earning power will generally have a market value above their equity value (example: technology companies).
Based on these guidelines, book value per common share is not regarded as an effective fundamental ratio to analyze market value.
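Both formulas above share the same shape; a minimal sketch (the function name and sample figures are illustrative assumptions):

```python
def book_value_per_share(share_capital, contributed_surplus,
                         retained_earnings, shares_outstanding):
    """(share capital + contributed surplus + retained earnings) / shares outstanding."""
    equity = share_capital + contributed_surplus + retained_earnings
    return equity / shares_outstanding

# Illustrative figures: $5M share capital, $1M contributed surplus,
# $4M retained earnings, 2M common shares outstanding.
print(book_value_per_share(5_000_000, 1_000_000, 4_000_000, 2_000_000))  # 5.0
```

The same function serves the preferred-share form by passing the number of preferred shares outstanding.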
Also see:
• Book value
• Debt to equity ratio
• EPS rank
• Price to book ratio (P/B)
• Price to cash flow (P/CF)
• Price to dividend (P/D)
• Price to earnings (P/E)
• Price to sales ratio (P/S)
• Times interest earned
Article added: 2 February 2011; last modified: 11 February 2011. © Copyright 2010-2014
Welcome to the Thermopedia A-to-Z Index!
This Index is organized alphabetically by subject areas and key words. Click on an item to the left in order to see the information associated with that item.
Each item will contain one of the following:
• A complete article
• A request from the Thermopedia Editorial Board to write an article on that topic
• If there is no article associated with an item, a link is provided for more information
Every item in the Index is connected via "RelatesLink", a visual/graphical mapping of each entry in Thermopedia, and its relationship to other articles and entries.
You can also search Thermopedia by entering a term in the Search box above.
On the Dynamics of the Theory of Relativity
On the Dynamics of the Theory of Relativity (1911)
by Max Laue, translated from German by Wikisource
On the Dynamics of the Theory of Relativity.
by M. Laue.
The dynamics of the mass point was already dealt with by A. Einstein^[1] in his first foundational work on the relativity principle, as well as shortly afterwards by M. Planck.^[2] The most important
results of their investigation were the known formulas for the dependence of longitudinal and transverse mass on velocity (since then experimentally confirmed on various occasions using the
electron). The assumption that Newton's dynamics remains in the limiting case of infinitely small velocities, was used as the starting point. Later, Planck^[3] extended the theory to the
thermodynamic side, and there he completely derived the mechanical inertia from energy (and pressure). There, he founded his consideration upon the principle of least action, however, he additionally
had to introduce an assumption about the transformation of forces.
Nevertheless, there are still unsolved problems in dynamics. For example, P. Ehrenfest^[4] asks whether the dynamics of the mass point is still valid for the electron, when one doesn't ascribe to it
– as it ordinarily happens – radial symmetry, but perhaps an elliptic shape. Einstein^[5] affirms this, because in the limiting case of infinitely small velocities, Newton's mechanics must be valid
under all circumstances. However, this assumption is surely not accurate in this generality, as we will see later. Also M. Born^[6] considers it necessary to ascribe spherical symmetry to the
electron, otherwise – in contradiction with experience – also a transverse component of force is connected to a longitudinal acceleration.
The Trouton-Noble experiment is in close relation with this. According to the electron theory, a uniformly moving charged condenser experiences a torque by the electromagnetic forces. Trouton and
Noble^[7] tried to confirm this (as the consequence of Earth's motion) by a bifilarly hung condenser, but they couldn't find any rotation out of its position at rest. The theory of relativity of
course can explain this result very easily by the fact, that Earth (relative to which the condenser is at rest) is a valid reference system. But how is the theory formed, when one chooses another
reference system? The electromagnetic torque is also existent according to the theory of relativity. Yet why is there no torque?
The answer to this is given by H. A. Lorentz.^[8] In the co-moving system the electrostatic forces are canceled by molecular cohesion, otherwise the condenser wouldn't be in equilibrium. The
resultant from electric and molecular forces is thus zero at any point. If both kinds of forces are transformed in the same way to other reference frames, then this resultant remains zero in all
systems, and there is no reason for a rotation. Although it's unquestionable that this answer hits the truth, it isn't entirely satisfactory in so far as it is connected to molecular theory, with
which this problem has nothing to do per se.
Already with respect to Newton's mechanics it has often been pointed out^[9] that it is more logical to place the dynamics of continua before that of the mass point. It seems to me that the two problems just mentioned hint that the advantages of this route over the opposite one (to derive the dynamics of continua from that of the mass point) are even much greater in the theory of relativity than in the old theory. Therefore, in the following we want to investigate the concept of elastic stresses in its connection with momentum and energy.^[10]
Introductory remarks.[edit]
In the classic theory of elasticity, the stresses form a symmetric tensor $\mathbf{p}$; i.e. the quantities

$\begin{array}{ccc} \mathbf{p}_{xx}, & \mathbf{p}_{xy}, & \mathbf{p}_{xz},\\ \mathbf{p}_{yx}, & \mathbf{p}_{yy}, & \mathbf{p}_{yz},\\ \mathbf{p}_{zx}, & \mathbf{p}_{zy}, & \mathbf{p}_{zz},\end{array}$

between which the three relations $\mathbf{p}_{jk}=\mathbf{p}_{kj}$ hold, are calculated (when the axis-cross $x, y, z$ is rotated) in the same way as the squares ($x^2$ etc.) and the products ($xy$ etc.) are calculated from the coordinates. At first, the property of symmetry vanishes in relativity theory. Thus we arrive at the concept of the unsymmetrical tensor, for which, at a rotation expressed by the scheme

$\begin{array}{c|c|c|c} & x' & y' & z'\\ \hline x & a_{1}^{(1)} & a_{2}^{(1)} & a_{3}^{(1)}\\ \hline y & a_{1}^{(2)} & a_{2}^{(2)} & a_{3}^{(2)}\\ \hline z & a_{1}^{(3)} & a_{2}^{(3)} & a_{3}^{(3)},\end{array}$

the following transformation formulas should apply:
$\begin{array}{lll} \mathbf{t}_{xx} & =a_{1}^{(1)^{2}}\mathbf{t}_{x'x'}+a_{1}^{(2)^{2}}\mathbf{t}_{y'y'} & +a_{1}^{(3)^{2}}\mathbf{t}_{z'z'}\\ & +a_{1}^{(2)}a_{1}^{(3)}\left(\mathbf{t}_{y'z'}+\
mathbf{t}_{z'y'}\right) & +a_{1}^{(3)}a_{1}^{(1)}\left(\mathbf{t}_{z'x'}+\mathbf{t}_{x'z'}\right)\\ & & +a_{1}^{(1)}a_{1}^{(2)}\left(\mathbf{t}_{x'y'}+\mathbf{t}_{y'x'}\right)\end{array}$
$\begin{array}{ll} \mathbf{t}_{yz} & =a_{2}^{(1)}a_{3}^{(1)}\mathbf{t}_{x'x'}+a_{2}^{(2)}a_{3}^{(2)}\mathbf{t}_{y'y'}+a_{2}^{(3)}a_{3}^{(3)}\mathbf{t}_{z'z'}\\ & +a_{2}^{(2)}a_{3}^{(3)}\mathbf{t}_
{y'z'}+a_{2}^{(3)}a_{3}^{(2)}\mathbf{t}_{z'y'}+a_{2}^{(3)}a_{3}^{(1)}\mathbf{t}_{z'x'}\\ & +a_{2}^{(1)}a_{3}^{(3)}\mathbf{t}_{x'z'}+a_{2}^{(1)}a_{3}^{(2)}\mathbf{t}_{x'y'}+a_{2}^{(2)}a_{3}^{(1)}\
mathbf{t}_{y'x'},\ \mathrm{etc}.\end{array}$
By that, one confirms that one can form a vector product $[\mathfrak{A}\mathbf{t}]$ from $\mathbf{t}$ and an arbitrary vector $\mathfrak{A}$, whose definition reads:^[11]
$[\mathfrak{A}\mathbf{t}]_{k}=\mathfrak{A}_{x}\mathbf{t}_{kx}+\mathfrak{A}_{y}\mathbf{t}_{ky}+\mathfrak{A}_{z}\mathbf{t}_{kz},\ k=x,y,z.$
It's known that one can interpret the operations
$\frac{\partial}{\partial x},\ \frac{\partial}{\partial y},\ \frac{\partial}{\partial z}.$
as components of a symbolic vector, whose vector product with $\mathbf{t}$ is denoted by us as the divergence of $\mathbf{t}[\mathfrak{div}\mathbf{t}]$; its components are to be formed according to
the scheme
$\mathfrak{div}_{k}\mathbf{t}=\frac{\partial\mathbf{t}_{kx}}{\partial x}+\frac{\partial\mathbf{t}_{ky}}{\partial y}+\frac{\partial\mathbf{t}_{kz}}{\partial z},\ k=x,y,z.$
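The component rule above maps directly onto matrix operations. As an editorial illustration (not part of the paper), a NumPy sketch forms the vector product $[\mathfrak{A}\mathbf{t}]$ of a vector with an unsymmetrical tensor:

```python
import numpy as np

def vector_product(A, t):
    """[A t]_k = A_x t_kx + A_y t_ky + A_z t_kz, i.e. row k of t dotted with A."""
    return t @ A

A = np.array([1.0, 2.0, 3.0])
t = np.arange(9.0).reshape(3, 3)  # an unsymmetrical tensor, t_jk != t_kj

print(vector_product(A, t))  # [ 8. 26. 44.]
```

The divergence obeys the same rule with the symbolic vector $(\partial/\partial x,\ \partial/\partial y,\ \partial/\partial z)$ in place of $\mathfrak{A}$.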
An unsymmetrical tensor is, for example, the tensor product $[[\mathfrak{AB}]]$ of two vectors $\mathfrak{A}$ and $\mathfrak{B}$. Its components are
$[[\mathfrak{AB}]]_{jk}=\mathfrak{A}_{j}\mathfrak{B}_{k},\ j,k=x,y,z,l.$
We call $K$ a valid space-time-system according to the relativity principle; several of that kind are distinguished by us by appended indices $\left(K',\ K^{0}\right)$. The passage from $K$ to $K'$
(Lorentz transformation) is represented by us – in accordance with Minkowski^[12] – as an imaginary rotation of the four-dimensional axis-cross $x, y, z, l$, where $l=ict$; that is, by the scheme:
$\begin{array}{c|c|c|c|c} & x' & y' & z' & l'\\ \hline x & \alpha_{1}^{(1)} & \alpha_{2}^{(1)} & \alpha_{3}^{(1)} & \alpha_{4}^{(1)}\\ \hline y & \alpha_{1}^{(2)} & \alpha_{2}^{(2)} & \alpha_{3}^{(2)} & \alpha_{4}^{(2)}\\ \hline z & \alpha_{1}^{(3)} & \alpha_{2}^{(3)} & \alpha_{3}^{(3)} & \alpha_{4}^{(3)}\\ \hline l & \alpha_{1}^{(4)} & \alpha_{2}^{(4)} & \alpha_{3}^{(4)} & \alpha_{4}^{(4)}\end{array}$
There, the coefficients $\alpha_{m}^{(n)}$ satisfy the known reality- and orthogonality conditions.
Furthermore it is known,^[13] that the concepts of vector and tensor can be transferred into a four-dimensional form. A four-vector $F$ – the six-vector is not needed in the following – has the
direction sense of a directed distance; i.e. its four components $F_{x},F_{y},F_{z},F_{l}$ are transformed with a Lorentz transformation like the corresponding coordinates. By a world tensor,
however, we understand the totality of 16 components $T_{jk}(j,k=x,y,z,l)$, which are transformed as the squares and products of $x, y, z, l$. Between them, there are always six symmetry relations
$T_{jk}=T_{kj}$. Analogous to the fact that (with respect to tensor $\mathbf{p}$ in three dimensions) the divergence $\mathfrak{div}\mathbf{p}$ becomes a space vector, here the divergence $\Delta
ivT$ is a four-vector with components
$\Delta iv_{k}T=\frac{\partial T_{kx}}{\partial x}+\frac{\partial T_{ky}}{\partial y}+\frac{\partial T_{kz}}{\partial z}+\frac{\partial T_{kl}}{\partial l},\ k=x,y,z,l$
§ 1. Transformation of force; energy- and momentum theorem.[edit]
From the investigations of Minkowski, Sommerfeld and Abraham^[14] we can see, that the ponderomotive force $\mathfrak{F}$ related to unit volume (force density) of electrodynamics, can be
supplemented to a four-vector $F$, when one adds $F_{l}=i/c(\mathfrak{qF})$ as fourth component to the three spatial components $F_{x}=\mathfrak{F}_{x}$ etc., where $\mathfrak{q}$ is the velocity of
the point of contact of $\mathfrak{F}$, thus $(\mathfrak{qF})$ denotes the work per volume- and time unit.^[15] Furthermore it was explained by the same authors, that the four-force $F$ defined in
this way, has the relation

(1) $F=-\Delta ivT$

to the world tensor $T$, where the components of $T$ have simple physical meanings. Namely the nine components, in which $l$ doesn't arise as index, give the three-dimensional tensor $\mathbf{p}$ of
Maxwell's stresses $\left(T_{jk}=\mathbf{p}_{jk},\ j,k=x,y,z\right)$; $-T_{ll}$ is the density $W$ of electromagnetic energy, and the other six components are in relation to the momentum density $\
mathfrak{g}$ and the energy current (Poynting vector) $\mathfrak{S}$ by:
(1a) $T_{lx}=\frac{i}{c}\mathfrak{S}_{x},\ T_{xl}=ic\mathfrak{g}_{x}\ \mathrm{etc}.$
In this interpretation of $T$, equation (1) indeed contains the momentum theorem when applied to the spatial components:
(2) $\mathfrak{F}=-\mathfrak{div}\mathbf{p}-\dot{\mathfrak{g}}$
though when applied to the temporal components, it contains the energy theorem:
(3) $(\mathfrak{qF})=-\mathrm{div}\mathfrak{S}-\dot{W}$
It connects both of them in a form which is invariant with respect to the Lorentz transformation.
Now, Planck and Einstein have already announced, that all ponderomotive forces must be transformed by the Lorentz transformation in the same way, as in electrodynamics. Thus it must be possible in
all areas of physics, to combine the force density with its power to a four-force. Now, with respect to any four-force $F$ defined as a function of the world-points, there are infinitely many world
tensors, which are related to it by relation (1).
We assume, that in any field of physics there is one of those world tensors, whose components have the corresponding meaning, as the components of the mentioned electrodynamic tensor; i.e., that for
example also in dynamics, the energy density is defined by $-T_{ll}$, the momentum density and the energy current according to (1a) by $T_{xl}$ and $T_{lx}$ etc., while $T_{xx}$ is related to the elastic
stresses. That we assume tensor $T$ as being symmetric, can appear to be arbitrary at this place; because also the divergence of an unsymmetrical world tensor would be a four-vector. Yet exactly this
part of the assumption can be explained later (see § 4.). The most important physical consequence from the symmetry relations
$T_{xl}=T_{lx},\ T_{yl}=T_{ly},\ T_{zl}=T_{lz}$
is the law of inertia of energy, which we find from (1a) immediately in its most general form^[16]
(3a) $\mathfrak{g}=\frac{1}{c^{2}}\mathfrak{S}$
In order to interpret the four-force, it is to be noticed, that in pure electromagnetic processes, no ponderomotive force $\mathfrak{F}$ arises, since otherwise processes of different kind must
necessarily arise in consequence of this. Thus also the four-force is given by

$F=-\Delta ivT=0$
Equation (2) then says, that the electromagnetic stress force $-\mathfrak{div}\mathbf{p}$ and the electromagnetic inertial force $-\dot{\mathfrak{g}}$ are in equilibrium. Also with respect to purely
dynamic processes, we will have to set the dynamic four-force equal to zero. Because also Newton's dynamics – maybe in the clearest way in the principle of d’Alembert – says that the inertial force
resisting the increase of mechanical momentum, exactly compensates the force exerted by the elastic stresses – other forces don't exist at all in pure dynamics. In this way, one has as the
fundamental equation of dynamics which is invariant against Lorentz transformation and encloses the momentum and energy theorem, the relation

(4) $F=-\Delta ivT=0$

where the components of $T$ have the given physical meaning.
If dynamic, electrodynamic or other processes are interacting, then (as it can be easily seen) the relation
(5) $\Sigma F=-\Delta iv(\Sigma T)=0\,$
is taking the place of (4). The summation is to be carried out over all four-forces, i.e. the dynamic, electrodynamic ones etc., in other words over all world tensors.
§ 2. Transformation of momentum, energy and stresses.[edit]
The transformation formulas for momentum, energy density and the stresses $\mathbf{p}$ in the passage from one valid reference system $K$ to another system $K'$, is definitely determined by deriving
them from the components of the world tensor $T$. However, we don't want to write them for the most general Lorentz transformation, but we assume that the spatial axes-cross $x, y, z$ in $K$ and $x',
y', z'$ in $K'$ are mutually parallel, and that velocity $\mathfrak{v}$ of $K'$ with respect to $K$, lies in the direction of the $x$-axis. In this case, the Lorentz transformation reads
$\begin{array}{c} x=\frac{x'-i\beta l'}{\sqrt{1-\beta^{2}}},\ y=y',\ z=z',\ l=\frac{l'+i\beta x'}{\sqrt{1-\beta^{2}}},\\ \\\beta=\frac{v}{c}.\end{array}$
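Since $l = ict$ is imaginary, this Lorentz transformation is formally a rotation and must leave $x^2 + l^2 = x^2 - c^2t^2$ unchanged. As an editorial numerical check (with $c = 1$ and the sign convention $x = (x' - i\beta l')/\sqrt{1-\beta^2}$, $l = (l' + i\beta x')/\sqrt{1-\beta^2}$; the signs depend on the assumed direction of the relative velocity):

```python
import math

beta = 0.6                                # v/c (illustrative)
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)  # 1/sqrt(1 - beta^2)

def lorentz(xp, lp):
    """Rotate the (x', l') pair into (x, l); l = i*c*t with c = 1."""
    x = gamma * (xp - 1j * beta * lp)
    l = gamma * (lp + 1j * beta * xp)
    return x, l

xp, lp = 2.0, 3.0j           # event at x' = 2, t' = 3 (so l' = i*t')
x, l = lorentz(xp, lp)

# the invariant of the rotation: x^2 + l^2 = x'^2 + l'^2
print(x ** 2 + l ** 2)       # (-5+0j)
print(xp ** 2 + lp ** 2)     # (-5+0j)
```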
If one transforms the components of $T$ as $x^2$ etc., then one finds, by introducing $W,\ \mathfrak{g},\ \mathfrak{S}$ and $\mathbf{p}$ according to § 1 (see (1a) and (3a)), instead of them the following formulas
(6) $\left\{ \begin{array}{rl} \mathfrak{g}_{x}= & \frac{1}{c^{2}}\mathfrak{S}_{x}=\frac{\left(c^{2}+v^{2}\right)\mathfrak{g}'_{x}+v\left(\mathbf{p}'_{xx}+W'\right)}{c^{2}-v^{2}},\\ \mathfrak{g}_{y}= & \frac{1}{c^{2}}\mathfrak{S}_{y}=\frac{c^{2}\mathfrak{g}'_{y}+v\mathbf{p}'_{xy}}{c\sqrt{c^{2}-v^{2}}},\\ W= & \frac{W'+\beta^{2}\mathbf{p}'_{xx}+2v\mathfrak{g}'_{x}}{1-\beta^{2}},\\ \mathbf{p}_{yy}= & \mathbf{p}'_{yy},\quad\mathbf{p}_{yz}=\mathbf{p}'_{yz},\\ \mathbf{p}_{xx}= & \frac{\mathbf{p}'_{xx}+2v\mathfrak{g}'_{x}+\beta^{2}W'}{1-\beta^{2}},\\ \mathbf{p}_{xy}= & \frac{\mathbf{p}'_{xy}+v\mathfrak{g}'_{y}}{\sqrt{1-\beta^{2}}}.\end{array}\right.$
The formulas (not written down here) for $\mathfrak{g}_{z},\ \mathbf{p}_{xz},\ \mathbf{p}_{zz},\ \mathbf{p}_{zx}$ and $\mathbf{p}_{zy}$ can be found by replacing index $y$ by $z$ at the suitable places.
The most important question now is, which energy forms we want to combine for the formation of density $W$. The answer is: all of those that show no current (and thus also no momentum) in a valid
system (the rest system $K^{0}$^[17]). To those belong: heat^[18], chemical energy, elastic energy, inner energy of atoms, and maybe also new and unknown energy forms. Electromagnetic energy has to
be excluded in general, since it can also flow in the rest system $K^{0}$. If this doesn't occur in a special case, then one can include it in $W$ as well; tensor $\mathbf{p}$ then encloses also
Maxwell's stresses besides the elastic ones, and vector $\mathfrak{g}$ also the electromagnetic momentum besides the mechanical momentum.
If we now put the rest system instead of system $K'$, then formulas (6) become simplified due to $\mathfrak{g}^{0}=0$ in the following way:
(7) $\left\{ \begin{array}{ll} \mathfrak{g}_{x}=\frac{q}{c^{2}-q^{2}}\left(\mathbf{p}_{xx}^{0}+W^{0}\right), & \mathfrak{g}_{y}=\frac{q}{c\sqrt{c^{2}-q^{2}}}\mathbf{p}_{xy}^{0},\\ W=\frac{c^{2}W^{0}+q^{2}\mathbf{p}_{xx}^{0}}{c^{2}-q^{2}}, & \mathbf{p}_{yy}=\mathbf{p}_{yy}^{0},\quad\mathbf{p}_{yz}=\mathbf{p}_{yz}^{0},\\ \mathbf{p}_{xx}=\frac{c^{2}\mathbf{p}_{xx}^{0}+q^{2}W^{0}}{c^{2}-q^{2}}, & \mathbf{p}_{xy}=\frac{c}{\sqrt{c^{2}-q^{2}}}\mathbf{p}_{xy}^{0}.\end{array}\right.$
Here, $v$ is replaced by $q$, since $\mathfrak{q}$ denotes the velocity of the body in system $K$. We can also write the equations for $W$ and $\mathfrak{g}$ in a vectorial way as follows:
(8) $\left\{ \begin{array}{l} W=\frac{c^{2}}{c^{2}-q^{2}}\left(W^{0}+\frac{1}{c^{2}}\left(\mathfrak{q}\left[\mathfrak{q}\mathbf{p}^{0}\right]\right)\right),\\ \mathfrak{g}=\frac{\mathfrak{q}}{c^{2}-q^{2}}\left\{ W^{0}+\frac{1}{q^{2}}\left(\mathfrak{q}\left[\mathfrak{q}\mathbf{p}^{0}\right]\right)\right\} +\frac{1}{c\sqrt{c^{2}-q^{2}}}\left\{ \left[\mathfrak{q}\mathbf{p}^{0}\right]-\frac{\mathfrak{q}}{q^{2}}\left(\mathfrak{q}\left[\mathfrak{q}\mathbf{p}^{0}\right]\right)\right\} .\end{array}\right.$
This can be easily proven, by assuming the $x$-axis as being parallel to $\mathfrak{q}$; the vector product $\left[\mathfrak{q}\mathbf{p}^{0}\right]$ then obtains the components
$\left[\mathfrak{q}\mathbf{p}^{0}\right]_{x}=q\mathbf{p}_{xx}^{0},\ \left[\mathfrak{q}\mathbf{p}^{0}\right]_{y}=q\mathbf{p}_{xy}^{0},$
so that one obtains the formulas under (7) again. Equations (8) contain in the most general way, the derivation of energy and momentum from velocity and the inner state of the body specified by $W^
{0}$ and $\mathbf{p}^{0}$. Especially it must be emphasized, that the second summand in the equation for $\mathfrak{g}$ represents a vector perpendicular to $\mathfrak{q}$; namely its scalar product
with $\mathfrak{q}$ is zero. Thus the momentum density is composed of a component parallel to $\mathfrak{q}$ and proportional to $q/c^{2}-q^{2}$, and a component perpendicular to $\mathfrak{q}$ and
proportional to $q/c\sqrt{c^{2}-q^{2}}$. The latter vanishes only then, when velocity $\mathfrak{q}$ lies in one of the axis-directions of the ellipsoid of the rest-stresses $\mathbf{p}^{0}$; then we
choose one of them as $x$-axis, and if $\mathfrak{q}$ is parallel to $x$, then $\left[\mathfrak{q}\mathbf{p}^{0}\right]=\mathfrak{q}_{x}\mathbf{p}_{xx}^{0}$, thus parallel to $\mathfrak{q}$.
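The stated perpendicularity of the second summand can be checked directly; the sketch below is a numerical illustration of our own (the sample velocity, the symmetric rest-stress tensor, and units with $c=1$ are arbitrary assumptions), using the component convention $\left[\mathfrak{q}\mathbf{p}^{0}\right]_{i}=\sum_{j}q_{j}\mathbf{p}_{ij}^{0}$ consistent with the formulas above:

```python
# Numerical check that the second summand of g in (8) is perpendicular to q.
# The velocity q and the symmetric rest-stress tensor p0 are arbitrary
# illustration values.
q = [0.3, 0.2, 0.1]
q2 = sum(x * x for x in q)                 # q^2

p0 = [[2.0, 0.5, -0.3],
      [0.5, 1.0,  0.7],
      [-0.3, 0.7, 3.0]]

# [q p0]_i = sum_j q_j * p0_ij
qp0 = [sum(q[j] * p0[i][j] for j in range(3)) for i in range(3)]
q_dot_qp0 = sum(q[i] * qp0[i] for i in range(3))

# Second summand of (8), up to its positive scalar prefactor:
perp = [qp0[i] - q[i] / q2 * q_dot_qp0 for i in range(3)]

# Scalar product with q vanishes (up to rounding), as stated in the text.
print(sum(q[i] * perp[i] for i in range(3)))
```

The vanishing is an algebraic identity, independent of the particular tensor chosen.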
If we integrate equations (8) over the volume
of a body, in which the velocity is spatially constant, then we find for its energy
$E=\int WdV=\frac{\sqrt{c^{2}-q^{2}}}{c}\int WdV^{0}$
and its momentum
$\mathfrak{G}=\int\mathfrak{g}dV=\frac{\sqrt{c^{2}-q^{2}}}{c}\int\mathfrak{g}dV^{0}$
the relations
(8a) $\left\{ \begin{array}{l} E=\frac{c}{\sqrt{c^{2}-q^{2}}}\left\{ E^{0}+\frac{1}{c^{2}}\left(\mathfrak{q}\left[\mathfrak{q},\ \int\mathbf{p}^{0}dV^{0}\right]\right)\right\} ,\\ \\\mathfrak{G}=\frac{\mathfrak{q}}{c\sqrt{c^{2}-q^{2}}}\left\{ E^{0}+\frac{1}{q^{2}}\left(\mathfrak{q}\left[\mathfrak{q},\ \int\mathbf{p}^{0}dV^{0}\right]\right)\right\} +\frac{1}{c^{2}}\left\{ \left[\mathfrak{q},\ \int\mathbf{p}^{0}dV^{0}\right]-\frac{\mathfrak{q}}{q^{2}}\left(\mathfrak{q}\left[\mathfrak{q},\ \int\mathbf{p}^{0}dV^{0}\right]\right)\right\} .\end{array}\right.$
There, $E^{0}$ is the rest energy; $\int\mathbf{p}^{0}dV^{0}$, like $\mathbf{p}^{0}$, is a symmetrical tensor.
When a body is accelerated in an adiabatic-isopiestic way (i.e. with constant $E^0$ and $\mathbf{p}^{0}$), say in the longitudinal direction, then by (8a) the momentum increase is in general not at all in the common direction of velocity and acceleration. Its transverse component is rather equal to
$\frac{1}{c^{2}}\left\{ \left[\mathfrak{\dot{q}},\ \int\mathbf{p}^{0}dV^{0}\right]-\frac{\mathfrak{\dot{q}}}{q^{2}}\left(\mathfrak{q}\left[\mathfrak{q},\ \int\mathbf{p}^{0}dV^{0}\right]\right)\right\} $
and does not vanish at all in the limiting case $q=0$. Already from this example one recognizes that Newton's dynamics is by no means generally valid in this limiting case.
This (at first perhaps strange) behavior becomes easily understandable in the sense of the inertia of energy (3a). The first summand in equation (8) for $\mathfrak{g}$, $\tfrac{\mathfrak{q}}{c^{2}-q^{2}}W^{0}$, represents the convection current of energy; the other ones represent the energy current also known in the classical theory of elasticity in the motion of stressed bodies. That the latter by no means must have the direction of velocity, but can also be perpendicular to it, is shown in a very illustrative way by the example of a rotating and torqued drive shaft, in which the energy transfer happens in a direction parallel to the rotation axis.
The equations become essentially simplified, when the stresses $\mathbf{p}^{0}$ form a pressure $p^{0}$ which is equal in all directions. Then according to (8)
(9) $W=\frac{c^{2}W^{0}+q^{2}p^{0}}{c^{2}-q^{2}},\ \mathfrak{g}=\frac{\mathfrak{q}}{c^{2}-q^{2}}\left(W^{0}+p^{0}\right)$
If besides $q$, also $p^{0}$ is spatially constant, then Planck's equations^[19] follow from (8a):
(10) $E=\frac{c^{2}E^{0}+q^{2}p^{0}V^{0}}{c\sqrt{c^{2}-q^{2}}},\ \mathfrak{G}=\frac{\mathfrak{q}}{c\sqrt{c^{2}-q^{2}}}\left(E^{0}+p^{0}V^{0}\right)$
§ 3. Absolute and relative (elastic) stresses.
Fundamental equation (4) says, when applied to spatial components, that
(11) $\mathfrak{\dot{g}}=-\mathfrak{div}\mathbf{p}$
$\mathfrak{\dot{g}}$ is the derivative of $\mathfrak{g}$ with respect to $t$, formed for a point fixed in space, i.e. at constant $x, y, z$. Therefore, $\mathbf{p}$ is not the tensor of elastic stresses, because the latter must (as in the previous theory) be connected with the change of momentum of a certain body element $\delta V$. If we denote this change for the time interval $dt$ by $\mathfrak{\underline{\dot{g}}}\delta Vdt$, then it is known that the relations hold:
$\mathfrak{\underline{\dot{g}}}_{x}=\mathfrak{\dot{g}}_{x}+\frac{\partial}{\partial x}\left(\mathfrak{g}_{x}\mathfrak{q}_{x}\right)+\frac{\partial}{\partial y}\left(\mathfrak{g}_{x}\mathfrak{q}_{y}\right)+\frac{\partial}{\partial z}\left(\mathfrak{g}_{x}\mathfrak{q}_{z}\right)$
etc., or written in a vectorial way (see the "Introductory remarks")
(11a) $\mathfrak{\underline{\dot{g}}}=\mathfrak{\dot{g}}+\mathfrak{div}[[\mathfrak{gq}]]$
If we now introduce the unsymmetrical tensor
(12) $\mathbf{t}=\mathbf{p}-[[\mathfrak{gq}]]$
then one finds from (11) and (11a)
(13) $\mathfrak{\underline{\dot{g}}}=-\mathfrak{div}\mathbf{t}$
By this equation, tensor $\mathbf{t}$ proves to be the tensor of elastic stresses. Because if one integrates (13) over a finite body, it follows for its momentum change
(13a) $\frac{d\mathfrak{G}}{dt}=\int\mathfrak{t}_{n}d\sigma$
($d\sigma$ is the surface element, $n$ its normal), where the components of vector $\mathfrak{t}_{n}$ have to be formed according to the scheme
(14) $\mathfrak{t}_{nx}=\mathbf{t}_{xx}\cos nx+\mathbf{t}_{xy}\cos ny+\mathbf{t}_{xz}\cos nz\ \mathrm{etc}.$
The momentum increase of the body is on the left-hand side of (13a), thus the surface integral on the right-hand side is the force exerted upon it by the stresses, i.e. $\mathfrak{t}_{n}d\sigma$ is
the force acting upon $d\sigma$.
From (12) and the transformation formulas for $\mathbf{p}$ and $\mathfrak{g}$ given in (7), one easily finds the following for $\mathbf{t}$:
(15) $\left\{ \begin{array}{c} \mathbf{t}_{xx}=\mathbf{p}_{xx}^{0},\ \mathbf{t}_{yy}=\mathbf{p}_{yy}^{0},\ \mathbf{t}_{yz}=\mathbf{p}_{yz}^{0},\\ \\\mathbf{t}_{xy}=\frac{c}{\sqrt{c^{2}-q^{2}}}\mathbf{p}_{xy}^{0},\ \mathbf{t}_{yx}=\frac{\sqrt{c^{2}-q^{2}}}{c}\mathbf{p}_{yx}^{0}.\end{array}\right.$
In contrast to (8), however, $\mathfrak{q}$ is here set parallel to the $x$-axis. If the inner stress state is a pressure $p^{0}$ equal in all directions, then
$\mathbf{p}_{xx}^{0}=\mathbf{p}_{yy}^{0}=\mathbf{p}_{zz}^{0}=p^{0},\ \mathbf{p}_{xy}^{0}=\mathbf{p}_{yz}^{0}=\mathbf{p}_{zx}^{0}=0$
According to (15), exactly the same relations hold for $\mathbf{t}$. The relative pressure (equal at all sides)
is thus an invariant of the Lorentz transformation (Planck).
That the relative stresses $\mathbf{t}$ must be denoted as elastic stresses, not the absolute stresses $\mathbf{p}$, is also demonstrated in the transformation formulas (15) and (7). Namely, $\mathbf
{t}_{jk}$ are only connected (except with $\mathfrak{q}$) with the rest stresses $\mathbf{p}_{jk}^{0}$, while $\mathbf{p}_{jk}$ are also connected with $W^{0}$. Thus the latter change their value and
meaning, when one for example would separate the heat from $W^{0}$, as it would become necessary under consideration of thermal conduction, while this is not the case with respect to the first ones.
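The consistency of definition (12) with the components in (15) is easy to spot-check numerically; the sketch below is our own illustration (the dyadic convention $[[\mathfrak{gq}]]_{jk}=\mathfrak{g}_{j}\mathfrak{q}_{k}$, units with $c=1$, and the sample values are assumptions), taking $\mathfrak{q}$ parallel to the $x$-axis as in (15):

```python
# Spot-check that t = p - [[g q]] (eq. 12), with [[g q]]_jk = g_j q_k,
# reproduces the shear components of (15) for q parallel to x.
# p_xy and g_y are taken from (7) and (8); sample values are arbitrary.
from math import sqrt, isclose

c = 1.0
q = 0.6            # speed, |q| < c
p0_xy = 2.0        # rest-frame shear stress, p^0_xy = p^0_yx

root = sqrt(c**2 - q**2)

p_xy = c / root * p0_xy            # absolute stress from (7); p_yx = p_xy
g_y = q * p0_xy / (c * root)       # momentum density from (8)

t_xy = p_xy                        # g_x * q_y term vanishes, since q_y = 0
t_yx = p_xy - g_y * q              # g_y * q_x term, with q_x = q

# Compare with (15): t_xy = c/root * p0_xy and t_yx = root/c * p0_xy.
print(isclose(t_xy, c / root * p0_xy), isclose(t_yx, root / c * p0_xy))  # True True
```

The check works for any sample values, since $p_{yx}-g_{y}q=p_{xy}^{0}(c^{2}-q^{2})/(c\sqrt{c^{2}-q^{2}})=p_{xy}^{0}\sqrt{c^{2}-q^{2}}/c$ identically.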
§ 4. The surface theorem.
In the previous theory of elasticity, the symmetry of the stress tensor is in closest connection with the so-called surface theorem, which is stating the conservation of angular momentum. If we want
here to search for the deeper meaning of the non-symmetry of tensor $\mathbf{t}$, we consequently have to transfer this theorem into relativity theory at first.
As in the previous theory we define as angular momentum, contained in a certain spatial area, the integral
(16) $\mathfrak{L}=\int[\mathfrak{rg}]dS$
extended over this area; $\mathfrak{r}$ is the radius vector, directing from an arbitrary fixed point to $dS$. If we ask after the change of $\mathfrak{L}$ with time, then the surface of this area is
to be considered as invariable during the differentiation. Consequently it is according to (11)
(17) $\frac{\partial\mathfrak{L}}{\partial t}=\int[\mathfrak{r\dot{g}}]dS=-\int[\mathfrak{r\mathfrak{div}}\mathbf{p}]dS=\int[\mathfrak{rp_{n}}]d\sigma,$
where $\mathfrak{p}_{n}$ is the vector with components
(17a) $\mathfrak{p}_{nx}=\mathbf{p}_{xx}\cos(nx)+\mathbf{p}_{xy}\cos(ny)+\mathbf{p}_{xz}\cos(nz)\ \mathrm{etc}.$
However, now we are asking after the angular momentum of a material volume element $dV$; it is
$[\mathfrak{rg}]dV=[\mathfrak{r,\ g}dV]$
however, if we calculate here the derivative with respect to time, then it is to be considered, that not only $\mathfrak{g}$, but also $V$ changes in quantity and location, thus vector $\mathfrak{r}$
as well. If we assume momentum $\mathfrak{g}dV$ as being constant, then this derivative is evidently equal to
$[\mathfrak{\dot{r},\ g}dV]=[\mathfrak{q,\ g}dV]$
since $\mathfrak{\dot{r}=q}$. In general, however, also
$\left[\mathfrak{r},\ \mathfrak{\underline{\dot{g}}}\right]dV$
enters as a summand. With respect to a moving body, it is consequently according to (13)
(18) $\frac{\partial\mathfrak{L}}{\partial t}=\int\left\{ [\mathfrak{r\dot{\underline{g}}}]+[\mathfrak{qg}]\right\} dV=\int\left\{ -[\mathfrak{r,\ \mathfrak{div}}\mathbf{t}]+[\mathfrak{qg}]\right\} dV$
The first part of the space integral on the right-hand side can only be transformed by partial integration, and one finds in this way:
$-\int[\mathfrak{r,\ \mathfrak{div}}\mathbf{t}]_{x}dV=\int\left[\mathfrak{rt}_{n}\right]_{x}d\sigma+\int\left(\mathbf{t}_{zy}-\mathbf{t}_{yz}\right)dV,$
thus under consideration of (12), according to which
$\mathbf{t}_{zy}-\mathbf{t}_{yz}=-\left[\mathfrak{qg}\right]_{x}\ \mathrm{etc}.,$
it is
$-\int[\mathfrak{r\ \mathfrak{div}}\mathbf{t}]dV=\int\left[\mathfrak{rt}_{n}\right]d\sigma-\int[\mathfrak{qg}]dV$
If one substitutes this value, then it follows from (18)
(19) $\frac{\partial\mathfrak{L}}{\partial t}=\int\left[\mathfrak{rt}_{n}\right]d\sigma$
The surface integral on the right-hand side is the torque exerted by the surrounding upon the body, because $\mathfrak{t}_{n}d\sigma$ is the force acting upon $d\sigma$. If we conversely calculate the torque exerted by the body upon its surrounding, then we also find $\int\left[\mathfrak{rt}_{n}\right]d\sigma$, but now the normal has the opposite direction than before; consequently $\mathfrak{t}_{n}$, as well as $\int\left[\mathfrak{rt}_{n}\right]d\sigma$, is oppositely equal to the previous values. If no other torque than the calculated one is acting upon the surrounding, then for the change of its angular momentum $\mathfrak{L}^{a}$ the relation holds
(20) $\mathfrak{L}+\mathfrak{L}^{a}=const.$
The surface theorem thus also holds in the theory of relativity. This can of course also be concluded from (17).
If we apply equation (19) upon an infinitesimal material parallelepiped, where we locate the coordinate axes parallel to its edges, then the previous theory concludes as follows: $\mathfrak{g}$ is
parallel to $\mathfrak{q}$, thus $[\mathfrak{qg}]=0$, and according to (18)
$\frac{\partial\mathfrak{L}}{\partial t}=\left[\mathfrak{r\underline{\dot{g}}}\right]dV.$
Since we are allowed to displace the origin of $\mathfrak{r}$ into the volume element $dV$, $\left[\mathfrak{r\underline{\dot{g}}}\right]dV$ becomes small of higher order than $dV$ in the limiting passage. The torque $\int\left[\mathfrak{rt}_{n}\right]d\sigma$, however, becomes in its $x$-component equal to $\left(\mathbf{t}_{yz}-\mathbf{t}_{zy}\right)dV$. Consequently, it must be $\mathbf{t}_{zy}=\mathbf{t}_{yz}$ etc.
Although in relativity theory, $[\mathfrak{r\mathfrak{\underline{\dot{g}}}}]dV$ is also to be neglected in the limit, yet it follows according to (18)
$\frac{\partial\mathfrak{L}}{\partial t}=\left[\mathfrak{qg}\right]dV$,
in agreement with (12). The stress tensor $\mathbf{t}$ is unsymmetrical because of the reason, that a stressed body element requires a torque to maintain its velocity.
In the rest system $K^{0}$, tensor $\mathbf{t}^{0}$ must be symmetric due to the surface theorem; at the same time, the momentum density $\mathfrak{g}^{0}$ and the energy current $\mathfrak{S}^{0}$
is zero in $K^{0}$. Consequently, in $K^{0}$ (see. (1a)) the symmetry equations hold
$T_{jk}=T_{kj}\ \left(j,k=x^{0},y^{0},z^{0},l^{0}\right)$
By that, however, the symmetry of world-tensor $T$ is proven per se; in every system the corresponding symmetry equations must hold.
With respect to processes not entirely dynamic, in which equation (5) replaces equation (4)
$\Sigma F=-\mathrm{Div}(\Sigma T)=0$
we conclude in analogy to (11), that
$\Sigma\mathfrak{\dot{g}}=-\mathfrak{div}\left(\Sigma\mathbf{p}\right),$
which is the equation containing the momentum theorem. If we then define the total angular momentum
$\Sigma\mathfrak{L}=\int\left[\mathfrak{r},\ \Sigma\mathfrak{g}\right]dS,$
then in analogy to (17)
$\frac{\partial}{\partial t}\Sigma\mathfrak{L}=\int[\mathfrak{r},\Sigma\mathfrak{p}_{n}]d\sigma$
From that, as above, the theorem of conservation of angular momentum $\Sigma\left(\mathfrak{L}+\mathfrak{L}^{a}\right)=const.$ follows. The summations are to be extended over all forms of momentum (mechanical, electromagnetic, etc.) and over all angular momenta. The surface theorem thus obtains a meaning which surpasses that of dynamics, and which is valid for the whole of physics.
§ 5. Completely static system.
As it followed from equation (8) and (8a), the dynamics of relativity theory is generally quite complicated. Yet the relations become simple again with respect to a complete static system. By that,
we understand such one, which is in static equilibrium in a valid reference system $K^{0}$, without interacting with other bodies;^[20] thus for example an electrostatic field including all carriers
of charge.^[21] In this field, the momentum density (related to its rest system) is everywhere zero and its energy is rigidly connected at its place. In every other reference system $K$, the total
energy (including the electromagnetic energy when it is present) shares the motion. Consequently we can understand in formulas (8a) under $E^{0}$ the total energy, under $\mathfrak{G}$ the total
momentum, and under $\mathbf{p}^{0}$ the sum of elastic stresses and Maxwell stresses.
First, we consider the state in $K^{0}$. Since $\mathfrak{g}^{0}=0$, it follows from (11) for an arbitrary limited space
(20a) $-\int\mathfrak{div}\mathbf{p}^{0}dS^{0}=\int\mathfrak{p}_{n}^{0}d\sigma^{0}=0$
Now we choose the boundary so that it consists of an arbitrary cross-section of the system and of an area lying completely outside of the system. Since it was presupposed that the system is not in interaction with other bodies, the outer area can be viewed as lying in vacuum; there $\mathfrak{p}_{n}^{0}=0$, and thus for the cross-section alone
(20b) $\int\mathfrak{p}_{n}^{0}d\sigma^{0}=0$
Now we choose a plane $x^{0}=const.$ as cross-section. Then it follows, by applying the vector equation (20b) upon the coordinate directions, and by expressing the components of $\mathfrak{p}_{n}^{0}
$ by $\mathbf{p}_{x^{0}x^{0}},\ \mathbf{p}_{x^{0}y^{0}}$ etc. according to (17a):
$\begin{array}{c} \int\mathfrak{p}_{nx^{0}}^{0}d\sigma^{0}=\int\mathbf{p}_{x^{0}x^{0}}^{0}dy^{0}dz^{0}=0,\\ \\\int\mathfrak{p}_{ny^{0}}^{0}d\sigma^{0}=\int\mathbf{p}_{x^{0}y^{0}}^{0}dy^{0}dz^{0}=0,\\ \\\int\mathfrak{p}_{nz^{0}}^{0}d\sigma^{0}=\int\mathbf{p}_{x^{0}z^{0}}^{0}dy^{0}dz^{0}=0.\end{array}$
If we multiply by $dx^{0}$ here, and integrate over the total volume $V^{0}$ of the system, then we find
$\int\mathbf{p}_{x^{0}x^{0}}^{0}dV^{0}=0,\ \int\mathbf{p}_{x^{0}y^{0}}^{0}dV^{0}=0,\ \int\mathbf{p}_{x^{0}z^{0}}^{0}dV^{0}=0.$
Six additional equations are obtained when we choose $y^{0}=const.$ or $z^{0}=const.$ as cross-section. All of them can be summarized in the tensor equation
(21) $\int\mathbf{p}^{0}dV^{0}=0$
For a completely static system, equations (8a) thus pass into:
(22) $E=\frac{c}{\sqrt{c^{2}-q^{2}}}E^{0},\ \mathfrak{G}=\frac{q}{c\sqrt{c^{2}-q^{2}}}E^{0}$
i.e., a completely static system behaves (when in uniform motion) as a mass point of rest mass
$m^{0}=\frac{E^{0}}{c^{2}}.$
The same, however, also holds for quasi-stationary (adiabatic-isopiestic) acceleration; for an acceleration is denoted as quasi-stationary when the inner state $\left(E^{0},\mathbf{p}^{0}\right)$ is not noticeably changed by it. However such a system may be formed, its longitudinal mass is always
$m_{l}=\frac{\partial\mathfrak{G}}{\partial q}=\frac{c^{3}m^{0}}{\left(\sqrt{c^{2}-q^{2}}\right)^{3}}$
its transverse mass
$m_{t}=\frac{\mathfrak{G}}{q}=\frac{cm^{0}}{\sqrt{c^{2}-q^{2}}};$
in the limiting case $q=0$, it satisfies Newton's mechanics.
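Both masses follow from (22) by direct differentiation; writing $m^{0}=E^{0}/c^{2}$ for the rest mass, a brief sketch of the computation reads:

```latex
\mathfrak{G}=\frac{q}{c\sqrt{c^{2}-q^{2}}}E^{0}
            =\frac{c\,m^{0}q}{\sqrt{c^{2}-q^{2}}},
\qquad
m_{l}=\frac{\partial\mathfrak{G}}{\partial q}
     =c\,m^{0}\left(\frac{1}{\sqrt{c^{2}-q^{2}}}
       +\frac{q^{2}}{\left(c^{2}-q^{2}\right)^{3/2}}\right)
     =\frac{c^{3}m^{0}}{\left(\sqrt{c^{2}-q^{2}}\right)^{3}},
\qquad
m_{t}=\frac{\mathfrak{G}}{q}=\frac{c\,m^{0}}{\sqrt{c^{2}-q^{2}}}.
```

In the limit $q\to0$ both expressions reduce to $m^{0}$, in accord with the statement above.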
One such system is formed, for example, by the electron with its field; however it may be constituted, in quasi-stationary motion it must satisfy the dynamics of the mass point, so that from experiments of this kind one cannot draw conclusions about its form, its charge distribution, and also not as to whether it has another momentum besides its electromagnetic momentum. In this way, Einstein actually answered Ehrenfest's question correctly. That Born felt compelled to assume spherical symmetry lies in the fact that he didn't consider the mechanical momentum of the electron, which can only be assumed to be zero under certain assumptions concerning the form and charge distribution, for example when one assumes spherical symmetry, but not generally. For in the electron there are, besides the electromagnetic stresses, stresses of other kinds^[22] (which can provisionally be denoted as elastic ones), and these will in general supply a transverse component to the mechanical momentum according to (8a), which cannot be made zero (unlike the longitudinal component) by a suitable choice of $E^{0}$. In the case of spherical symmetry, the transverse component of course vanishes.
A completely static system is also formed by the condenser of the Trouton-Noble experiment with its field. At uniform velocity, the total system requires a torque as little as a mass point does. The torque exerted by the electromagnetic forces upon the condenser itself is exactly the one which is required by this elastically stressed body according to § 4. Neither the electromagnetic momentum of the field nor the mechanical momentum of the body has in this case the direction of velocity, but the total momentum composed of both surely has that direction, as follows from (22).
(Received April 30, 1911)
1. ↑ A. Einstein, Ann. d. Phys. 17. p. 891. 1905.
2. ↑ M. Planck, Verh. d. Deutsch. Physik. Ges. 4. p. 136. 1906.
3. ↑ M. Planck, Berliner Ber. 1907. p. 542; Ann. d. Phys. 26. p. 1. 1908.
4. ↑ P. Ehrenfest, Ann. d. Phys. 23. p. 204. 1907.
5. ↑ A. Einstein, Ann. d. Phys. 23. p. 206. 1907.
6. ↑ M. Born, Ann. d. Phys. 30. p. 1. 1909.
7. ↑ Fr. T. Trouton a. H. R. Noble, Proc. Roy. Soc. 72. p. 132. 1903.
8. ↑ H. A. Lorentz, Proc. Amsterdam 1904. p. 805.
9. ↑ See for example G. Hamel, Mathem. Ann. 66. p. 350. 1908.
10. ↑ The same consideration can be found in a somewhat different form in my treatise "The Principle of Relativity", Braunschweig 1911, to be published soon.
11. ↑ W. Voigt, Göttinger Nachr. 1904. p. 495.
12. ↑ H. Minkowski, Göttinger Nachr. 1908. p. 1.
13. ↑ A. Sommerfeld, Ann. d. Phys. 32. p. 749; 33. p. 649. 1910.
14. ↑ In electrodynamics of ponderable bodies, it is
where $Q$ denotes the Joule-heat produced per volume- and time unit.
15. ↑ M. Planck, Physik. Zeitschr. 9. p. 828. 1908; Verh. d. Deutsch. Physik. Ges. 6. p. 728. 1908.
16. ↑ It is only presupposed here, that the immediate surrounding of the considered material point rests in $K^{0}$. Only later when the integration takes place, which leads to formulas (8a) and
(10), the whole body must be considered as at rest in $K^{0}$
17. ↑ We always think of uniformly tempered bodies, consequently we neglect thermal conduction.
18. ↑ M. Planck, Ann. d. Phys. 26. p. 1. 1908, equations (43) a. (46).
19. ↑ Instead of this condition, one can assume without essential change, that the surrounding shall exert an equal pressure from all sides. Then equations (10) take the place of (22), while (21) has to be changed into $\int\mathbf{p}^{0}dV^{0}=p^{0}V^{0}$.
20. ↑ One could even assume electrostatic-magnetostatic fields, although $\mathfrak{S}^{0}$ is different from zero in this case. This would have the consequence that in (7) and (8) certain summands proportional to $\mathfrak{g}^{0}$ would occur. However, they would vanish again in the integration over the body, since $\int\mathfrak{g}^{0}dV^{0}=0$; (8a) would thus remain unchanged, as well as (20a), because $\mathfrak{\dot{g}}^{0}=0$.
This is a translation and has a separate copyright status from the original text. The license for the translation applies to this edition only.
This work is in the public domain in the United States because it was published before January 1, 1923.
The author died in 1960, so this work is also in the public domain in countries and areas where the copyright term is the author's life plus 50 years or less. This work may also be in the public domain in countries and areas with longer native copyright terms that apply the rule of the shorter term to foreign works.
This work is released under the Creative Commons Attribution-ShareAlike 3.0 Unported license, which allows free use, distribution, and creation of derivatives, so long as the license is unchanged and clearly noted, and the original author is attributed.
East Los Angeles, CA Precalculus Tutor
Find an East Los Angeles, CA Precalculus Tutor
...I am a math major at Caltech currently doing research in graph theory and combinatorics with a professor at Caltech. I have taken several discrete math courses and I spent a summer solving
hard problems in discrete math with a friend. I began programming in high school, so the first advanced ma...
28 Subjects: including precalculus, chemistry, Spanish, physics
...I have taken the standard Caltech course in differential equations, the graduate course in differential equations, and my thesis is on differential equations of ions in an ion trap. I am a
graduating senior at Caltech in physics. I have taken the undergraduate course in Linear algebra.
26 Subjects: including precalculus, calculus, physics, algebra 2
...My tutoring experience is vast. I, for the last two years of my high school career, tutored in all levels of mathematics, biology, and language arts. In my senior year, I possessed the title
of -AVID Tutor- where I aided my former AP Language and Composition teacher in teaching essay-writing and critical reading and analysis because of my previous success in the course and 5 on the
AP test.
22 Subjects: including precalculus, reading, English, writing
...Happy Precalculusing! I love sine, cosine, tangent, and cotangent. I also like to challenge the identity problems.
11 Subjects: including precalculus, calculus, statistics, geometry
...I try to keep things simple. Even very difficult problems can be broken down into basic components, and applying the relevant formula becomes, well, formulaic. Being able to recognize the
right patterns in math and physics can turn a mind-bending problem into a step-by-step procedure.I have earned a Bachelor's degree in Engineering Physics from Case Western Reserve University.
11 Subjects: including precalculus, calculus, physics, algebra 2
The Uprise Books Project
Back in 2002, the U.S. Census Bureau released a study that compared the expected lifetime earnings of workers in the U.S. based on education level. The results were pretty much what you’d expect:
more education equals more income (see the chart on the right). A high school dropout could expect to earn about $1 million over a typical forty-year career, while finishing a bachelor's degree bumped that number up to roughly $2.1 million.
If you look at the bottom two entries on that chart, you’ll see that, over the course of a career, a person with a high school diploma will earn about $200,000 more than a person who fails to
graduate ($1.2 million vs. $1 million). In other words, an investment in a high school diploma today will be worth a total of $200,000 forty years from now.
Say we wanted to know how much money we’d need to invest in, oh, the S&P 500 to get those same results: a return on investment of $200,000 in 40 years. Well, it turns out that it’s pretty easy to
calculate. The equation looks like this:
• PV is the number we’re trying to figure out, the “present value” of our $200k diploma. In other words, how much do we need to invest today to end up with $200,000 forty years from now?
• FV is the “future value” of our investment. We already know that’s $200,000.
• n is the number of years we’ve had the investment (forty).
• And i is the interest rate…
Since we’re talking about the stock market here, we have to make some assumptions when it comes to figuring out what value to use for i in that equation. You probably know that stock prices can
fluctuate wildly from year to year, so there isn’t a single guaranteed interest rate to plug into the formula. We can, however, use something called the “compound annual growth rate” (CAGR, think of
it as the average annual rate of growth over a period of time). It’s not exactly the same thing as an interest rate, but it’s close enough for this exercise. A little bit of Googlin’ tells us that
the CAGR for the S&P 500 was 8.92% between 1971 and 2010.
Once we plug all the numbers into that equation, it just takes a little arithmetic to find…
…an investment of $6,557.30 in the S&P 500 in 1971 is worth $200,000 at the end of 2010.
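That figure is easy to double-check; the snippet below is our own illustration, plugging the article's numbers into the present-value formula:

```python
# Reproducing the article's present-value calculation.
# All inputs are the article's own figures.
FV = 200_000   # future value: the diploma's extra lifetime earnings ($)
n = 40         # years the money stays invested
i = 0.0892     # S&P 500 compound annual growth rate, 1971-2010

# PV = FV / (1 + i)^n
PV = FV / (1 + i) ** n
print(f"${PV:,.2f}")  # roughly $6,557, matching the article
```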
For perspective, The Uprise Books Project could ship over 260 books to underprivileged teens for that same $6557.30. If just ONE of those books made enough of a difference in a kid’s life to keep him
in school long enough to earn a high school diploma, we’ll have beaten the market.
Pretty amazing when you think about it… We only need to make an impact with 0.382% of the books we distribute to earn the same $200,000 return on investment as someone putting their money into the S&
P 500.
{"url":"http://uprisebooks.org/blog/2011/09/15/investing-in-banned-books/","timestamp":"2014-04-16T15:59:36Z","content_type":null,"content_length":"39244","record_id":"<urn:uuid:b9221841-5faf-4a08-ba3c-ad90cf4b9ad4>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pricing and Greeks of Simplex Options: An Extended Black-Scholes Formula
Modern option pricing theory began when Black and Scholes (1973) and Merton (1973) published their groundbreaking papers on pricing vanilla options. In the same year, the first organized options
exchange in the world, the Chicago Board of Options Exchange (CBOE), was established. In the ensuing decades, numerous papers have been published to extend the Black-Scholes model, and countless
exotic options have been designed, issued and traded in the markets.
Exotic options are tailor-made to satisfy investors with special requirements on risk and reward. For example, path-dependent options have payoffs that depend critically on the price history of the
underlying asset, and they are structured to address that risk. Rainbow options, in contrast, have payoffs that depend on multiple underlying assets, and they are designed to capture multi- factor
risks. Although each type of options has been studied intensively in the literature, the options that combine both features (i.e., path dependency and multiple underlying assets) have received few
analytical treatments.
This paper contributes to the option pricing theory by analyzing a broad class of path-dependent rainbow options called simplex options and pricing them. This notion is general and includes many
known options as special examples, like vanilla options (Black and Scholes 1973, Merton 1973), reset options (Gray and Whaley 1999), rainbow options (Johnson 1987), discrete lookback options (Heynen
and Kat 1995), and numerous other options not covered in this paper. Furthermore, it also inspires the design of new path-dependent rainbow options, such as forward-start rainbow options and discrete
lookback rainbow options. The building blocks of simplex options will be called the simplex expectations, which can be easily evaluated by our extended Black-Scholes (EBS) formula. These concepts are
new to the literature, to which we turn in the following paragraphs.
An expectation is simplex if it is of the form
E^Q[ S_p(t_q) · I{ S_{a_1}(t_{b_1}) ≥ S_{c_1}(t_{d_1}), …, S_{a_m}(t_{b_m}) ≥ S_{c_m}(t_{d_m}) } ],
where Q is the risk-neutral measure, S_p(t_q) is the price of the p-th underlying asset at monitoring time t_q, I is the indicator function, and inside the joint event are all pairwise price-comparisons. The EBS formula extends the celebrated Black-Scholes formula to evaluate simplex expectations.
To give a taste of what the EBS formula looks like, consider the following simplex expectation with three underlying assets:
Its EBS formula, similar to the famous Black-Scholes formula, is
Above, r is the risk-free rate, N is the standard normal cumulative distribution function (CDF), σ_a is the volatility of the a-th underlying asset's return process, and ρ_ab is the correlation between the a-th and b-th assets' returns. This paper offers a general theorem that easily yields the EBS formula for any simplex expectation.
An option is called simplex if its risk-neutral expected payoff is a linear sum of simplex expectations. For example, the vanilla call, with underlying asset S, maturity date T, strike price K and payoff function max(S(T)−K, 0), is simplex since its risk-neutral expected payoff,
E^Q[max(S(T)−K, 0)] = E^Q[S(T) · I{S(T) ≥ K}] − K · E^Q[I{S(T) ≥ K}],
is a linear sum of simplex expectations after we associate K, the strike price determined at the initial time 0, with S_0(0), the price of the riskless asset S_0 at time 0. Many path-dependent rainbow options turn out to be simplex. The primary focus of this paper is the pricing and Greeks of simplex options, which include a broad class of complex path-dependent rainbow options.
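To make the decomposition concrete, the sketch below (our own illustration, not code from the paper) evaluates the call's two simplex expectations in the Black-Scholes model, where, after discounting, they reduce to the familiar S0·N(d1) and K·e^{−rT}·N(d2) terms; the parameter names are ours:

```python
# Pricing the vanilla call as a linear sum of two simplex expectations
# under the Black-Scholes model.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF N(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_via_simplex(S0, K, r, sigma, T):
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    # Discounted simplex expectations:
    #   e^{-rT} E^Q[ S(T) I{S(T) >= K} ] = S0 * N(d1)
    #   e^{-rT} E^Q[ K    I{S(T) >= K} ] = K * e^{-rT} * N(d2)
    term1 = S0 * norm_cdf(d1)
    term2 = K * exp(-r * T) * norm_cdf(d2)
    return term1 - term2   # the familiar Black-Scholes call price

print(round(call_via_simplex(100, 100, 0.0, 0.2, 1.0), 4))  # 7.9656
```

The same bookkeeping, with multivariate normal CDFs in place of N, is what the EBS formula systematizes for options with many assets and monitoring dates.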
The underlying asset's prices of path-dependent options may be continuously monitored or discretely monitored. This paper focuses on the method of discrete monitoring, more widely adopted in
practice. (We shall drop the phrase “path-dependent” whenever the feature of discrete monitoring is included, which already implies path dependency.)
Numerous studies have been conducted on discretely monitored options and rainbow options. Nonetheless, few attempts are made to price options that combine both features by analytical approaches.
Table 1 summarizes the analytical results related to discretely monitored and rainbow options. Simplex options include the options in Table 1 and fill the vacuum in its lower-right quadrant.
We next go over each of the table's four quadrants in turn. For the upper-left quadrant of Table 1, vanilla options are priced by Black and Scholes (1973) and Merton (1973). The original
Black-Scholes model assumes that there is only one risky underlying asset in the market. The Black-Scholes-Merton pricing formulas for vanilla options are computationally simple and do not require
investors' risk preferences. These characteristics contribute greatly to their popularity.
An option on two or more risky assets is referred to as a rainbow option by Rubinstein (1991). Rainbow options occupy the lower-left quadrant of Table 1. The options that exchange one underlying
asset for another are first studied by Margrabe (1978). Stulz (1982) prices the options on the minimum or the maximum of two assets by solving partial differential equations. The options on the
maximum or the minimum of several assets are considered by Johnson (1987). Different from those in Stulz (1982), the pricing techniques in Johnson (1987) are from Margrabe (1978).
The upper-right quadrant of Table 1 contains discretely monitored options. One example is the discrete lookback option whose payoff depends on the discretely monitored maximum or minimum historical
price. Heynen and Kat (1995) value these kinds of options by the martingale pricing method and give closed-form formulas in terms of multinormal CDFs. Another example is the reset option whose strike
price can be reset based on the underlying asset's prices on the reset dates. Gray and Whaley (1999) derive a pricing formula for the reset put option with one reset date. Cheng and Zhang (2000)
price the reset option with multiple reset dates. Finally, Liao and Wang (2003) value the reset option with multiple strike resets and reset dates.
The lower-right quadrant of Table 1 features discretely monitored rainbow options, whose payoffs each may depend on every price of every underlying asset at every monitoring time. Due to their
complexity, analytical results for such options are rare. With the earlier unpublished works on discrete lookback rainbow options and forward-start rainbow options, this paper fills the vacuum in
this quadrant with new exotic options and offers their EBS solutions.
Despite their complex external features, simplex options have the simple essence of being linear on the events called simplex. Equipped with the proposed EBS formula, simplex options can be easily priced and their pricing formulas systematically formalized. Furthermore, analytical approaches to their Greeks and numerical issues in applying simplex options will be discussed.
We remark that the wonderful idea of pricing complex options by simpler ones appears as well in Ingersoll (2000), where the building blocks are digital options and only a single risky asset is taken into account. In contrast to the previous literature, simplex options feature a broad class of discretely monitored rainbow options with Black-Scholes-like pricing formulas. We also remark that an option with a Black-Scholes-like formula is not necessarily simplex. Both continuous geometric Asian options (Angus 1999) and geometric average trigger reset options (Dai et al. 2005) have Black-Scholes-like formulas, but neither is simplex.
In summary, the contributions of this paper are:
a new option class called simplex options with analytical formulas and Greeks;
the extended Black-Scholes (EBS) formula for simplex expectations;
the valuation of new exotic options that are discretely monitored rainbow options.
This paper is organized as follows. Section 2 formally investigates the settings, fundamental concepts, theoretical results and numerical issues of the theory of simplex options. Section 3 gives
concrete examples of pricing simplex options by the EBS formula. Section 4 concludes the paper.
Supplementary Background Knowledge
A financial option entitles its holder to buy or sell a risky asset for a certain price (the strike price) on a future date (the maturity date). For example, a vanilla call option on TSMC with a strike price of 60 NT dollars and a maturity date of November 17 gives its holder the right, but not the obligation, to buy 2,000 shares of TSMC at 60 NT dollars per share. Clearly, the call option on TSMC will be valuable if the stock price of TSMC is higher than the strike price (60 NT dollars) on the maturity date (November 17); however, the option will be worthless if the stock price falls below the strike price on the maturity date.
The Greeks of a financial option measure the sensitivities of the option value to changes in the factors on which it depends, such as the underlying asset's price, the time to maturity and the strike price. Because these sensitivity measures are conventionally denoted by Greek letters, they are called Greeks.
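As a concrete illustration of these sensitivities, the sketch below prices a vanilla European call with the standard Black-Scholes formula and computes its delta, the sensitivity of the price to the underlying asset's price. The contract parameters are hypothetical, and this minimal sketch is not part of the essay's simplex-option framework.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function (stdlib math.erf).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_price_and_delta(S, K, r, sigma, T):
    """Black-Scholes price and delta of a European call.

    S: spot price, K: strike, r: risk-free rate,
    sigma: volatility, T: time to maturity in years.
    """
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    price = S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    delta = norm_cdf(d1)  # dPrice/dS for a call
    return price, delta

# Hypothetical contract: spot 62, strike 60, 5% rate, 20% vol, half a year.
price, delta = bs_call_price_and_delta(62.0, 60.0, 0.05, 0.20, 0.5)
```

Since this call is in the money, its delta comes out above one half: a one-dollar move in the underlying moves the option price by more than fifty cents.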
[R] why doesn't as.character of this factor create a vector of characters?
Bert Gunter gunter.berton at gene.com
Tue Jul 10 19:53:44 CEST 2007
As you haven't received a reply yet ...
?factor, ?UseMethod, and "An Introduction to R" may help. But it's a bit
Factors are objects that are integer vectors (codes) with a levels attribute
that associates the codes with levels as character names. So
df[df$a=="Abraham",] is a data.frame in which the columns are still factors.
as.character() is a S3 generic function that calls the (internal) default
method on a data.frame. This obviously just turns the vector of integers
into characters and ignores the levels attribute.
t() is also a S3 generic with a data.frame method. This merely converts the
data.frame to a matrix via as.matrix and then applies t() to the matrix. The
as.matrix() method for data.frames captures the levels and converts the
data.frame to a character matrix with the level names, not their numeric
codes. So another, perhaps more intuitive but also more storage-intensive, way
(I think) of doing what you want that avoids the transpose and as.vector()
conversion would be:
mx <- as.matrix(df)
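The codes-versus-labels distinction described above is not specific to R. The hypothetical Python sketch below models a factor as an integer code vector plus a levels table, and shows that naively stringifying the raw codes (the analogue of the default as.character() acting on the underlying integers) loses the labels, while mapping codes back through the levels (the analogue of the as.matrix() route) recovers them.

```python
# A toy model of an R factor: integer codes plus a levels lookup table.
class Factor:
    def __init__(self, values):
        self.levels = sorted(set(values))               # level names
        index = {lvl: i + 1 for i, lvl in enumerate(self.levels)}
        self.codes = [index[v] for v in values]         # 1-based integer codes

    def as_character_default(self):
        # Analogue of stringifying the raw codes: labels are lost.
        return [str(c) for c in self.codes]

    def as_character_via_levels(self):
        # Analogue of mapping codes back through the levels attribute.
        return [self.levels[c - 1] for c in self.codes]

f = Factor(["Abraham", "Isaac", "Abraham"])
print(f.codes)                      # [1, 2, 1]
print(f.as_character_default())     # ['1', '2', '1'] -- the surprising result
print(f.as_character_via_levels())  # ['Abraham', 'Isaac', 'Abraham']
```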
Bert Gunter
Genentech Nonclinical Statistics
-----Original Message-----
From: r-help-bounces at stat.math.ethz.ch
[mailto:r-help-bounces at stat.math.ethz.ch] On Behalf Of Andrew Yee
Sent: Tuesday, July 10, 2007 8:57 AM
To: r-help at stat.math.ethz.ch
Subject: [R] why doesn't as.character of this factor create a vector
I'm trying to figure out why when I use as.character() on one row of a
data.frame, I get factor numbers instead of a character vector. Any
See the following code:
#Suppose I'm interested in one line of this data frame but as a vector
one.line <- df[df$a=="Abraham",]
#However the following illustrates the problem I'm having
one.line <- as.vector(df[df$a=="Abraham",]) #Creates a one row
data.frame instead of a vector!
#compare above to
one.line <- as.character(df[df$a=="Abraham",]) #Creates a vector of 1, 3, 1!
#In the end, this creates the output that I'd like:
one.line <- as.vector(t(df[df$a=="Abraham",])) #but it seems like a lot of
[Lapack] psuedo inverse algorithm
LAPACK Archives
From: adam W
Date: Sat, 2 Mar 2013 08:17:09 -0800 (PST)
I have recently published an algorithm that computes the Moore-Penrose
pseudoinverse, and does so in exact arithmetic. It accomplishes this with 2mn^2
multiplications (for dimensions with m > n), and thus is comparable in cost to a
single matrix multiply. I believe your function makes use of singular value
decomposition. Mine does not compute the singular values, which is why it works
quicker for just the pseudoinverse.
Here is the (thus far only) link to my algorithm:
I believe that this could enhance your wonderful software, how may I help this
happen? Do I need to submit publication into a journal, or maybe write some
code? (I have my code in python currently, but the algorithm is quite simple
really and could easily be ported-- at least it would be easy for me to do.)
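For comparison, the sketch below computes the Moore-Penrose pseudoinverse of a small full-column-rank matrix via the normal equations, pinv(A) = inv(AᵀA) Aᵀ — a textbook route that, like the algorithm described in the message, avoids a singular value decomposition. This is an illustrative stand-in, not the poster's algorithm; the normal-equations route is valid only when AᵀA is invertible, and it is less numerically robust than SVD-based routines such as LAPACK's.

```python
# Moore-Penrose pseudoinverse of a full-column-rank m x n matrix (m >= n)
# via the normal equations: pinv(A) = inv(A^T A) @ A^T.
# Pure-Python sketch for a 3x2 example; production code should prefer SVD.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def inv2(M):
    # Inverse of a 2x2 matrix.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]

At = transpose(A)
pinv = matmul(inv2(matmul(At, A)), At)   # 2x3 pseudoinverse

# For full column rank, pinv(A) @ A should be the 2x2 identity.
check = matmul(pinv, A)
```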
Need ray-sphere intersection code
This is going to sound incredibly lazy, but I really need code to compute the intersection of a ray and sphere. I can convert whatever structures it uses to my own, never fear.
There are a billion descriptions of how to do this on the 'net. Half of them describe an algebraic solution (solve a quadratic equation) which I suspect isn't as good for me as the geometric approach
which uses vectors, dot products and so on. Most of the references have no code; some seem to disagree on whether they need to check for certain special cases explicitly or not. I found a function
somewhere but my testing indicates there may be a problem.
I could write my own given a little more time and a little more sleep but I'm out of both of those. :-( So, please, debugged code or *robust* pseudo-code, anyone?
Or, I could post what I'm using and someone can tell me whether it looks reasonable or not.
Thanks! Just a few more hurdles and I'll finally be able to post an alpha version of Asteroid Rally for people to try out...
Measure twice, cut once, curse three or four times.
Check out the E3Ray3D_IntersectSphere() function in the Quesa library. It's in the E3Math.c file.
Quesa is here.
I'll give it a shot.
Basically, the dot product and cross product approach is all you need.
The center of the sphere is at C, its radius is r, your ray starts at V and has direction D
1. Map the (C-V) vector onto the D vector. (project (C-V) on D)=CV'
2. Find the vector perpendicular to D that points to C: ((C-V) - CV')=R
3. if the magnitude of R is less than the radius of the sphere, you have an intersection: if (abs(R) <= r) then intersection
I cannot remember how to find the exact points of intersection, but this should get you started. The point (C - R), the foot of the perpendicular from C to the ray, marks the middle of the 2 intersections, and I guess there is some trig to
figure out the rest.
Amazing! Both your answers are better than anything I found with Google. I've had pretty good luck getting hints via Google for most topics, but for some reason I came up short on this one. Too much
of a hurry, maybe.
I'll poke around with these (and any other answers I get) tonight, and hopefully this will get me past my hump! Fingers crossed...
Quote:Originally posted by DoooG
I cannot remember how to find the exact points of intersection, but this should get you started. The point (C+R) should mark the middle of the 2 intersections, and I guess there is some trig to
figure out the rest.
Yes, that much I had figured out. One method is that the distance d from the midpoint (C - R) to each intersection point can be computed from the Pythagorean theorem:

d = sqrt(r^2 - len(R)^2)
Maybe there are ways I could avoid taking the square root but for the number of collisions I expect each frame (usually 0, sometimes 1 or 2) it's not a problem.
It's weird, I had the worst time with this problem. I knew the pieces but I could not for the life of me put together the whole solution. "Mental block" is the only way to describe it. Now I can feel
the block falling away. I may or may not use the Quesa code, but either way I feel much better now that I understand how to do it myself. Thanks, Dooog and Jammer...
Glad you got a handle on it. But let me pile on with one more link (even though it's a little late) - <
>. Eberly's library has tons of collision/intersection code. If you find it useful make sure to buy his book!
The wild-magic license is clear-cut: I can use the code, period. I may yet do that. But using theirs would require more work. The Quesa code is very close to what I need, but their license doesn't
seem to talk about the terms under which I can use that one function, whether as is, or with modifications. They seem more concerned with the library as a whole...
Does anyone know whether I can crib this one function from Quesa, and under what terms? I want to do the right thing. I will be changing the data structures and adding a case for the end of my line
segment, but the logic would be mostly theirs.
Here is the code I use for sphere intersection, maybe it helps.
float ray_sphereIntersection(sphere_t* sphere, ray_t* ray) {
    /* Assumes ray->d is normalized. Returns the distance along the ray to
       the nearer intersection, or -1.0f if the ray misses the sphere. */
    float b = vdot(ray->d, vsub(ray->o, sphere->p));
    float c = vsquaredist(ray->o, sphere->p) - (sphere->r * sphere->r);
    float d = b * b - c;      /* discriminant of the quadratic */
    if (d < 0.0f) {
        return -1.0f;         /* no real root: the ray misses */
    }
    return -b - sqrt(d);      /* the nearer of the two roots */
}
Quote:Originally posted by MattDiamond
I will be changing the data structures and adding a case for the end of my line segment, but the logic would be mostly theirs.
Actually, "the logic" is not theirs as it comes from 'Real Time Rendering', section 10.3.2., as they note at the top of the function. Frankly, as long as you are making changes to the code, and you will have to in order to get your structures to fit, then I think you're fairly safe since the whole library is LGPL'd anyway.
magicsofware.com .... gee they aint lying
* MacFiend has been touched by God
Quote:Originally posted by MacFiend
magicsofware.com .... gee they aint lying
yeah - and his game physics book/cd/library comes out in December!
Codemattic (who is saving his pennies)
OK, I'll give the complete formulas here:
I'll Use DoooG's notation:
sphere with center C and radius R
line with point V and direction D
The vector from the sphere's center to the line point S = V - C
The distance along the line is
L = - (S.D)/(D.D) +/- sqrt((R^2-(S.S))/(D.D) + ((S.D)/(D.D))^2)
and the intersection point's position is V + L*D
The square-root term can be expressed in a way that may be more convenient for calculation, if the line's direction is close to its point's direction to the circle's center.
Let p = (S x D)^2 -- the absolute square of the cross product of S and D. Then
L = ( - (S.D) +/- sqrt((D.D)*R^2 - p) )/(D.D)
The discriminant (the term under the square root) can be used as a test for intersection.
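The formulas above translate directly into code. This hypothetical Python sketch solves the quadratic (D.D) L^2 + 2 (S.D) L + (S.S - R^2) = 0 for the line parameter L and returns both intersection parameters; D need not be normalized.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_sphere(V, D, C, R):
    """Intersection parameters L of the line V + L*D with the sphere (C, R).

    Returns (L1, L2) with L1 <= L2, or None if the line misses the sphere.
    """
    S = [v - c for v, c in zip(V, C)]          # S = V - C
    dd = dot(D, D)
    disc = dot(S, D) ** 2 - dd * (dot(S, S) - R * R)
    if disc < 0.0:
        return None                            # negative discriminant: no hit
    root = math.sqrt(disc)
    return ((-dot(S, D) - root) / dd, (-dot(S, D) + root) / dd)

# Unit sphere at the origin, ray from (0, 0, -5) along +z:
# entry at z = -1 (L = 4), exit at z = +1 (L = 6).
hits = ray_sphere((0.0, 0.0, -5.0), (0.0, 0.0, 1.0), (0.0, 0.0, 0.0), 1.0)
```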
I'm guessing that the sphere is being used as a bounding box. Other shapes have been used as bounding boxes, notably vertical cylinders and rectangular solids.
The circle part of a cylinder can be handled with the same mathematics as with the sphere, except that the cross-product is a scalar, not a vector quantity:
a x b = a[1]*b[2] - a[2]*b[1]
An object with a bounding box can have its bounding box follow its rotations by making its bounding box also be rotated (an oriented bounding box). One has to calculate a collision with each of its six bounding planes, though the calculation is somewhat simplified by those planes coming in parallel pairs.
One can simplify a bit further by constructing an axis-aligned one; find the minimum and maximum coordinate values of all the vertices of an oriented bounding box and use those to construct a new
bounding box with bounding planes along the coordinate axes.
Actually, the sphere is the very thing I'm intersecting with, it's not a bounding box. Aren't simplifying assumptions grand?
In the end I got the Quesa/Real Time Rendering version of the algorithm working very quickly. Thanks to everyone's help I've finally posted a playable version of Asteroid Rally (see thread in the uDG
2003 forum for info.)
Thanks, all!
As it happens, I am working on the subject, and have just finished a tidbit about intersection detection.
It has ray/triangle and ray/sphere collision details, outlining the steps to be taken, on a whole 3 pages. No code, but implementing it should be easy. I don't know if the paper contains the fastest
methods, but I think they are logically correct.
If there is interest, I am planning on implementing ray/cylinder, sphere/triangle, cylinder/sphere, cylinder/triangle, ray/box, box/cylinder, sphere/box, and I would document it too, in a similar
format, some graphs thrown in when I get the chance to make them.
Any feedback welcome, cheers
Physics Forums - What is the mass of the Earth?
guitarphysics Dec13-12 09:02 PM
What is the mass of the Earth?
This isn't for homework or anything, I was just trying to figure this out for fun. So what I tried to do to find the mass of the Earth was this:
I looked up the mass afterwards and it's apparently 5.97x10^24 kg. So I was off by a factor of about a million... Where did I mess up? Or is my whole process just completely screwed up? Don't be too harsh
on me, I just finished learning about forces in school, and had to look up the law of universal gravitation on wikipedia...
PS. Sorry if I posted this in the wrong category (I tried the homework category, but when I saw the template I felt like I was definitely in the wrong place).
The_Duck Dec13-12 09:10 PM
Re: What is the mass of the Earth?
You've mixed up your units. I recommend always keeping the units in your calculations; if you drop them and just write the numbers you're liable to mess up units.
Your value for G is in units of m^3 / (kg s^2). Your value for m1 is in units of kg. Your value for g is in units of m/s^2. But your value for r is in units of km. The units don't cancel out the way
you want them to, since you've switched from using meters to using kilometers. Convert r to meters and redo the calculation, and you'll get the right answer.
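Redoing the calculation with consistent SI units, as suggested (a hypothetical sketch; g and r are rounded textbook values):

```python
# M = g * r^2 / G, with every quantity in SI units.
G = 6.674e-11       # m^3 kg^-1 s^-2, gravitational constant
g = 9.81            # m s^-2, surface gravitational acceleration
r = 6.371e6         # m, mean radius of the Earth (6371 km converted to metres)

M = g * r ** 2 / G
print(M)            # close to the accepted 5.97e24 kg
```

Using r = 6371 (km) instead would shrink r^2, and hence M, by a factor of a million, which is exactly the error described in the first post.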
Re: What is the mass of the Earth?
I suggest you check the units on your numbers.
[added] Ah, I didn't quack fast enough.
guitarphysics Dec13-12 09:31 PM
Re: What is the mass of the Earth?
Wow, you're right. Very stupid of me, sorry.
Contact us
The CUNYMath Oversight Committee consists of mathematics faculty from across many CUNY schools:
• Terrence Blackman (committee chair; Medgar Evers College)
• Nikos Apostolakis (Bronx Community College)
• William Baker (Hostos Community College)
• Hendrick Delcham (LaGuardia Community College)
• Robert Holt (Queensborough Community College)
• Lewis Lasser (York College)
• Alla Morgulis (Borough of Manhattan Community College)
• Stanley Ocken (The City College of New York)
• Jarrod Pickens (Baruch College)
• Jonas Reitz (New York City College of Technology)
• Amir Togha (Bronx Community College)
• Max Tran (Kingsborough Community College)
• John Velling (Brooklyn College)
To contact the committee about technical issues or other comments on this site, email CUNYMath.
For issues regarding mathematics courses, professors, exams, homework, or other school-specific topics, please contact your school’s mathematics department.
One Response to Contact us
1. I am teaching a high school CUNY COMPASS math-prep class and I have been giving my students practice tests, but I have not been able to find information about how the assessment is graded. Is
there a scale? How much is each question worth in each section?
Thank you
Sufficient Conditions for Labelled 0-1 Laws
Stanley Burris, Karen Yeats
If F(x) = e^G(x), where F(x) = Σ f(n) x^n and G(x) = Σ g(n) x^n, with 0 ≤ g(n) = O(n^(θn)/n!), θ ∈ (0,1), and gcd(n : g(n) > 0) = 1, then f(n) = o(f(n-1)). This gives an answer to Compton's request in Question 8.3 [Compton 1987] for an "easily verifiable sufficient condition" to show that an adequate class of structures has a labelled first-order 0-1 law, namely it suffices to show that the labelled component count function is O(n^(θn)) for some θ ∈ (0,1). It also provides the means to recursively construct an adequate class of structures with a labelled 0-1 law but not an unlabelled 0-1 law, answering Compton's Question 8.4.
Posts by LS
Total # Posts: 9
An organ pipe is open at both ends. It is producing sound at its third harmonic, the frequency of which is 406 Hz. The speed of sound is 343 m/s. What is the length of the pipe?
Physics HELP!!
Three charges, q1=+45.9nC q2=+45.9nC and q3=+91.8nC are fixed at the corners of an equilateral triangle with a side length of 3.70cm. Find the magnitude of the electric force on q3.
chemistry... HELPP
Chemistry Equilibrium Constant PLEASE Help!? In an experiment, equal volumes of 0.00150 M FeCl3 and 0.00150 M NaSCN were mixed together and reacted according to the equation: Fe3+(aq) + SCN-(aq) <--> FeSCN2+(aq). The equilibrium concentration of FeSCN2+(aq) was...
Physics HELP!!
What is the acceleration due to gravity at 7220km above the Earth's surface? Take the radius of the Earth to be 6.37e6m and the mass of the Earth to be 5.972e24kg.
A 74.1 kg climber is using a rope to cross between two peaks of a mountain as shown in the figure below. He pauses to rest near the right peak. Assume that the right side rope and the left side rope
make angles of 48.0° and 11.16° with respect to the horizontal respect...
The mass of a robot is 5489.0 kg. This robot weighs 3646.0 N more on planet A than it does on planet B. Both planets have the same radius of 1.33 x 10^7 m. What is the difference M_A - M_B in the masses of these planets?
The distance the bicycle travels and the time taken are expressed by the formula d(t) = t^2 - 2t, where d(t) is in miles and t in hours. Find the time taken by the bicycle to cover a distance of 63
Business Law
Does anyone have the answer to these questions,Please?????
Business Law
Asking the same questions
Littleton City Offices, CO Algebra Tutor
Find a Littleton City Offices, CO Algebra Tutor
...If the cost is a barrier, we can work it out as finances should not limit a student's education.I'm a Chemical Engineer who graduated with high honors. Algebra 1 is the foundation to most math
and I am very familiar teaching this to students online. My student population for Algebra I (prior to college) ranges between 8 years old - 18 years old.
7 Subjects: including algebra 1, algebra 2, chemistry, ACT Math
...Those topics include: logarithmic and exponential functions, polynomial and rational functions, vectors, conic sections, complex numbers, trigonometry and trig identities, and limits. I have
been teaching trigonometry through honors geometry and Algebra 2 for over ten years as a public school te...
14 Subjects: including algebra 1, algebra 2, reading, geometry
...Study skills are somewhat subject specific and I would plan to present guidelines based on the particular academic area. I have taught Sunday School for more than 30 years. Having read through
the bible several times, I have written religious literature which has been used as a curriculum both in the U.S. and internationally.
43 Subjects: including algebra 1, algebra 2, English, chemistry
...I have tutored several High School students in Algebra 2. Before earning my PhD in mathematics and working as a mathematics instructor/lecturer at a University, I was trained as a High School
Mathematics teacher. I have tutored several students in calculus.
15 Subjects: including algebra 2, algebra 1, calculus, French
...My experience includes tutoring students of all ages and abilities, and I sincerely enjoy working with students to achieve their academic goals. I'm smart, fun, and know how to demystify
difficult concepts. Standardized tests and tricky subjects can be mastered--you just have to know how to attack them.
12 Subjects: including algebra 1, algebra 2, SAT math, precalculus
Related Littleton City Offices, CO Tutors
Littleton City Offices, CO Accounting Tutors
Littleton City Offices, CO ACT Tutors
Littleton City Offices, CO Algebra Tutors
Littleton City Offices, CO Algebra 2 Tutors
Littleton City Offices, CO Calculus Tutors
Littleton City Offices, CO Geometry Tutors
Littleton City Offices, CO Math Tutors
Littleton City Offices, CO Prealgebra Tutors
Littleton City Offices, CO Precalculus Tutors
Littleton City Offices, CO SAT Tutors
Littleton City Offices, CO SAT Math Tutors
Littleton City Offices, CO Science Tutors
Littleton City Offices, CO Statistics Tutors
Littleton City Offices, CO Trigonometry Tutors
Nearby Cities With algebra Tutor
Bow Mar, CO algebra Tutors
Columbine Valley, CO algebra Tutors
Conifer algebra Tutors
Foxfield, CO algebra Tutors
Franktown, CO algebra Tutors
Glendale, CO algebra Tutors
Henderson, CO algebra Tutors
Indian Hills algebra Tutors
Kittredge algebra Tutors
Littleton, CO algebra Tutors
Lonetree, CO algebra Tutors
Morrison, CO algebra Tutors
Pine, CO algebra Tutors
Sedalia, CO algebra Tutors
Sheridan, CO algebra Tutors
Ludwig Boltzmann
Ludwig Eduard Boltzmann (Vienna, Austrian Empire, February 20, 1844 – Duino near Trieste, September 5, 1906) was an Austrian physicist famous for his founding contributions in the fields of
statistical mechanics and statistical thermodynamics.
Childhood and Education
Ludwig Edward Boltzmann was born on February 20, 1844 in Vienna. His father, Ludwig Georg Boltzmann was a tax official. His grandfather, who had moved to Vienna from Berlin, was a clock manufacturer,
and Boltzmann’s mother, Katharina Pauernfeind, was originally from Salzburg. He received his primary education from a private tutor at the home of his parents. Boltzmann attended high school in Linz,
Upper Austria. At age 15, Boltzmann lost his father.
Boltzmann studied physics at the University of Vienna, starting in 1863. Among his teachers were Josef Loschmidt, Joseph Stefan, Andreas von Ettingshausen and Jozef Petzval. Boltzmann received his
PhD degree in 1866 working under the supervision of Stefan; his dissertation was on kinetic theory of gases. In 1867 he became a Privatdozent (lecturer). After obtaining his doctorate degree,
Boltzmann worked two more years as Stefan’s assistant. It was Stefan who introduced Boltzmann to Maxwell's work.
Academic Career
In 1869, at age 25, he was appointed full Professor of Mathematical Physics at the University of Graz in the province of Styria. In 1869 he spent several months in Heidelberg working with Robert
Bunsen and Leo Königsberger and then in 1871 he was with Gustav Kirchhoff and Hermann von Helmholtz in Berlin. In 1873 Boltzmann joined the University of Vienna as Professor of Mathematics and where
he stayed till 1876.
In 1872, long before women were admitted to Austrian universities, he met Henriette von Aigentler, an aspiring teacher of mathematics and physics in Graz. She was refused permission to unofficially
audit lectures, and Boltzmann advised her to appeal; she did, successfully. On July 17, 1876 Ludwig Boltzmann married Henriette von Aigentler; they had three daughters and two sons. Boltzmann went
back to Graz to take up the chair of Experimental Physics. Among his students in Graz were Svante Arrhenius, and Walther Nernst. He spent 14 happy years in Graz and it was there that he developed his
statistical concept of nature. In 1885 he became member of the Imperial Austrian Academy of Sciences and in 1887 he became the President of the University of Graz.
Boltzmann was appointed to the Chair of Theoretical Physics at the University of Munich in Bavaria, Germany in 1890. In 1893, Boltzmann succeeded his teacher Joseph Stefan as Professor of Theoretical
Physics at the University of Vienna. However, Boltzmann did not get along with some of his colleagues there; particularly when Ernst Mach became professor of philosophy and history of sciences in
1895. Thus in 1900 Boltzmann went to the University of Leipzig, on the invitation of Wilhelm Ostwald. After the retirement of Mach due to bad health, Boltzmann came back to Vienna in 1902, where he
stayed until his death. Among his students there were Karl Przibram, Paul Ehrenfest and Lise Meitner.
In Vienna, Boltzmann not only taught physics but also lectured on philosophy. Boltzmann’s lectures on natural philosophy were very popular, and received a considerable attention at that time. His
first lecture was an enormous success. Even though the largest lecture hall had been chosen for it, the people stood all the way down the staircase. Because of the great successes of Boltzmann’s
philosophical lectures, the Emperor invited him for a reception at the Palace.
Boltzmann's most important scientific contributions were in kinetic theory, including the Maxwell-Boltzmann distribution for molecular speeds in a gas. In addition, Maxwell-Boltzmann statistics and
the Boltzmann distribution over energies remain the foundations of classical statistical mechanics. They are applicable to the many phenomena that do not require quantum statistics and provide a
remarkable insight into the meaning of temperature.
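As an illustrative sketch of the Boltzmann distribution over energies mentioned above (not from the original article), the occupation probabilities p_i are proportional to exp(-E_i / kT); the two energy levels below are invented for the example:

```python
import math

K_B = 1.3806505e-23  # J/K, Boltzmann's constant

def boltzmann_distribution(energies, temperature):
    """Occupation probabilities p_i proportional to exp(-E_i / kT)."""
    weights = [math.exp(-e / (K_B * temperature)) for e in energies]
    z = sum(weights)  # the partition function, normalizing the weights
    return [w / z for w in weights]

# Hypothetical two-level system at room temperature
p = boltzmann_distribution([0.0, 4.0e-21], 300.0)
print(p[0] > p[1])  # True: the lower energy level is more populated
```

The probabilities always sum to one, and as the temperature rises the two levels approach equal population.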
Much of the physics establishment rejected his thesis about the reality of atoms and molecules, a belief shared, however, by Maxwell in Scotland and Gibbs in the United States, and by most chemists
since the discoveries of John Dalton in 1808. He had a long-running dispute with the editor of the preeminent German physics journal of his day, who refused to let Boltzmann refer to atoms and
molecules as anything other than convenient theoretical constructs. Only a couple of years after Boltzmann's death, Perrin's studies of colloidal suspensions (1908-1909) confirmed the values of
Avogadro's number and Boltzmann's constant, and convinced the world that the tiny particles really exist.
To quote Planck, "The logarithmic connection between entropy and probability was first stated by L. Boltzmann in his kinetic theory of gases."[1] This famous formula for entropy S is[2] [3]

S = k log W

where k = 1.3806505(24) × 10−23 J K−1 is Boltzmann's constant, and the logarithm is taken to the natural base e. W is the Wahrscheinlichkeit, or number of possible microstates corresponding to the macroscopic state of a system: the number of (unobservable) "ways" the (observable) thermodynamic state of a system can be realized by assigning different positions and momenta to the various molecules.

Boltzmann’s paradigm was an ideal gas of N identical particles, of which Ni are in the i-th microscopic condition (range) of position and momentum. W can be counted using the formula for permutations

W = N! / (N1! · N2! · N3! · ...)

where i ranges over all possible molecular conditions. (! denotes factorial.) The "correction" in the denominator is due to the fact that identical particles in the same condition are indistinguishable. W is called the "thermodynamic probability" since it is an integer greater than one, while mathematical probabilities are always numbers between zero and one.

The equation for S is engraved on Boltzmann's tombstone at the Vienna Zentralfriedhof, his second grave.
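A toy numerical sketch of this counting, in Python (the occupation numbers are invented for illustration):

```python
import math

K_B = 1.3806505e-23  # J/K, the value of Boltzmann's constant quoted above

def multiplicity(occupations):
    """W = N! / (N1! N2! ...), the number of microstates for a macrostate."""
    n = sum(occupations)
    w = math.factorial(n)
    for n_i in occupations:
        w //= math.factorial(n_i)
    return w

def entropy(occupations):
    """Boltzmann entropy S = k ln W (in J/K)."""
    return K_B * math.log(multiplicity(occupations))

# 4 particles split as (2, 1, 1) over three conditions: W = 4!/(2!·1!·1!) = 12
print(multiplicity([2, 1, 1]))  # 12
```

A macrostate with all particles in one condition has W = 1, and hence S = 0, the minimum possible entropy.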
Boltzmann's bust in the courtyard arcade of the main building, University of Vienna.
The Boltzmann equation
The Boltzmann equation was developed to describe the dynamics of an ideal gas:

∂f/∂t + v · ∂f/∂x + (F/m) · ∂f/∂v = (∂f/∂t)_collision

where f represents the distribution function of single-particle position and momentum at a given time (see the Maxwell-Boltzmann distribution), F is a force, m is the mass of a particle, t is the time and v is an average velocity of particles.
This equation describes the temporal and spatial variation of the probability distribution for the position and momentum of a density distribution of a cloud of points in single-particle phase space.
(See Hamiltonian mechanics.) The first term on the left-hand side represents the explicit time variation of the distribution function, while the second term gives the spatial variation, and the third
term describes the effect of any force acting on the particles. The right-hand side of the equation represents the effect of collisions.
In principle, the above equation completely describes the dynamics of an ensemble of gas particles, given appropriate boundary conditions. This first-order differential equation has a deceptively
simple appearance, since f can represent an arbitrary single-particle distribution function. Also, the force acting on the particles depends directly on the velocity distribution function f. The
Boltzmann equation is notoriously difficult to integrate. David Hilbert spent years trying to solve it without any real success.
The form of the collision term assumed by Boltzmann was approximate. However for an ideal gas the standard Chapman-Enskog solution of the Boltzmann equation is highly accurate. It is only expected to
lead to incorrect results for an ideal gas under shock-wave conditions.
Boltzmann tried for many years to "prove" the second law of thermodynamics using his gas-dynamical equation, his famous H-theorem.[4] However the key assumption he made in formulating the collision
term was "molecular chaos", an assumption which breaks time-reversal symmetry as is necessary for anything which could imply the second law. It was from the probabilistic assumption alone that
Boltzmann's apparent success emanated, so his long dispute with Loschmidt and others over Loschmidt's paradox ultimately ended in his failure.
For higher density fluids, the Enskog equation was proposed. For moderately dense gases this equation, which reduces to the Boltzmann equation at low densities, is fairly accurate. However the Enskog
equation is basically a heuristic approximation without any rigorous mathematical basis for extrapolating from low to higher densities.
Finally, in the 1970s E.G.D. Cohen and J.R. Dorfman proved that a systematic (power series) extension of the Boltzmann equation to high densities is mathematically impossible. Consequently
nonequilibrium statistical mechanics for dense gases and liquids focuses on the Green-Kubo relations, the fluctuation theorem, and other approaches instead.
Energetics of evolution
In 1922, Alfred J. Lotka [5] referred to Boltzmann as one of the first proponents of the proposition that available energy (also called exergy) can be understood as the fundamental object under
contention in the biological, or life-struggle and therefore also in the evolution of the organic world. Lotka interpreted Boltzmann's view to imply that available energy could be the central concept
that unified physics and biology as a quantitative physical principle of evolution. Howard T. Odum later developed this view as the maximum power principle.
Significant contributions
1872 - Boltzmann equation; H-theorem
1877 - Boltzmann distribution; relationship between thermodynamic entropy and probability.
1884 - Derivation of the Stefan-Boltzmann law
Closely associated with a particular interpretation of the second law of thermodynamics, he is also credited in some quarters with anticipating quantum mechanics.
For a detailed and technically informed account of Boltzmann's contributions to statistical mechanics, consult the article by E.G.D. Cohen.
Final years
Boltzmann was subject to rapid alternation of depressed moods with elevated, expansive or irritable moods. He himself jestingly attributed his rapid swings in temperament to the fact that he was born
during the night between Mardi Gras and Ash Wednesday; he had, almost certainly, bipolar disorder. Meitner relates that those who were close to Boltzmann were aware of his bouts of severe depression
and his suicide attempts. On September 5, 1906, while on a summer vacation in Duino near Trieste he committed suicide during an attack of depression.
|
{"url":"http://www.mlahanas.de/Physics/Bios/LudwigBoltzmann.html","timestamp":"2014-04-16T07:53:19Z","content_type":null,"content_length":"40224","record_id":"<urn:uuid:0fb12300-bd42-4c96-b3cb-959a630c00eb>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
|
search results
Expand all Collapse all Results 1 - 11 of 11
1. CMB 2012 (vol 57 pp. 289)
Closure of the Cone of Sums of $2d$-powers in Certain Weighted $\ell_1$-seminorm Topologies
In a paper from 1976, Berg, Christensen and Ressel prove that the closure of the cone of sums of squares $\sum \mathbb{R}[\underline{X}]^2$ in the polynomial ring $\mathbb{R}[\underline{X}] := \
mathbb{R}[X_1,\dots,X_n]$ in the topology induced by the $\ell_1$-norm is equal to $\operatorname{Pos}([-1,1]^n)$, the cone consisting of all polynomials which are non-negative on the hypercube $
[-1,1]^n$. The result is deduced as a corollary of a general result, established in the same paper, which is valid for any commutative semigroup. In later work, Berg and Maserick and Berg,
Christensen and Ressel establish an even more general result, for a commutative semigroup with involution, for the closure of the cone of sums of squares of symmetric elements in the weighted $\
ell_1$-seminorm topology associated to an absolute value. In the present paper we give a new proof of these results which is based on Jacobi's representation theorem from 2001. At the same time, we
use Jacobi's representation theorem to extend these results from sums of squares to sums of $2d$-powers, proving, in particular, that for any integer $d\ge 1$, the closure of the cone of sums of
$2d$-powers $\sum \mathbb{R}[\underline{X}]^{2d}$ in $\mathbb{R}[\underline{X}]$ in the topology induced by the $\ell_1$-norm is equal to $\operatorname{Pos}([-1,1]^n)$.
Keywords:positive definite, moments, sums of squares, involutive semigroups
Categories:43A35, 44A60, 13J25
2. CMB 2011 (vol 55 pp. 355)
Convolution Inequalities in $l_p$ Weighted Spaces
Various weighted $l_p$-norm inequalities in convolutions are derived by a simple and general principle whose $l_2$ version was obtained by using the theory of reproducing kernels. Applications to
the Riemann zeta function and a difference equation are also considered.
Keywords:inequalities for sums, convolution
Categories:26D15, 44A35
3. CMB 2011 (vol 55 pp. 815)
Restricted Radon Transforms and Projections of Planar Sets
We establish a mixed norm estimate for the Radon transform in $\mathbb{R}^2$ when the set of directions has fractional dimension. This estimate is used to prove a result about an exceptional set of
directions connected with projections of planar sets. That leads to a conjecture analogous to a well-known conjecture of Furstenberg.
Categories:44A12, 28A78
4. CMB 2007 (vol 50 pp. 547)
Inverse Laplace Transforms Encountered in Hyperbolic Problems of Non-Stationary Fluid-Structure Interaction
The paper offers a study of the inverse Laplace transforms of the functions $I_n(rs)\{sI_n^{'}(s)\}^{-1}$ where $I_n$ is the modified Bessel function of the first kind and $r$ is a parameter. The
present study is a continuation of the author's previous work (Canadian Mathematical Bulletin 45) on the singular behavior of the special case of the functions in question, $r$=1. The
general case of $r \in [0,1]$ is addressed, and it is shown that the inverse Laplace transforms for such $r$ exhibit significantly more complex behavior than their predecessors, even though they
still only have two different types of points of discontinuity: singularities and finite discontinuities. The functions studied originate from non-stationary fluid-structure interaction, and as
such are of interest to researchers working in the area.
Categories:44A10, 44A20, 33C10, 40A30, 74F10, 76Q05
5. CMB 2004 (vol 47 pp. 389)
An Inversion Formula of the Radon Transform on the Heisenberg Group
In this paper we give an inversion formula of the Radon transform on the Heisenberg group by using the wavelets defined in [3]. In addition, we characterize a space such that the inversion formula
of the Radon transform holds in the weak sense.
Keywords:wavelet transform, Radon transform, Heisenberg group
Categories:43A85, 44A15
6. CMB 2003 (vol 46 pp. 400)
Approximating Positive Polynomials Using Sums of Squares
The paper considers the relationship between positive polynomials, sums of squares and the multi-dimensional moment problem in the general context of basic closed semi-algebraic sets in real
$n$-space. The emphasis is on the non-compact case and on quadratic module representations as opposed to quadratic preordering presentations. The paper clarifies the relationship between known
results on the algebraic side and on the functional-analytic side and extends these results in a variety of ways.
Categories:14P10, 44A60
7. CMB 2002 (vol 45 pp. 399)
On the Singular Behavior of the Inverse Laplace Transforms of the Functions $\frac{I_n(s)}{s I_n^\prime(s)}$
Exact analytical expressions for the inverse Laplace transforms of the functions $\frac{I_n(s)}{s I_n^\prime(s)}$ are obtained in the form of trigonometric series. The convergence of the series is
analyzed theoretically, and it is proven that those diverge on an infinite denumerable set of points. Therefore it is shown that the inverse transforms have an infinite number of singular points.
This result, to the best of the author's knowledge, is new, as the inverse transforms of $\frac{I_n(s)}{s I_n^\prime(s)}$ have previously been considered to be piecewise smooth and continuous. It
is also found that the inverse transforms have an infinite number of points of finite discontinuity with different left- and right-side limits. The points of singularity and points of finite
discontinuity alternate, and the sign of the infinity at the singular points also alternates depending on the order $n$. The behavior of the inverse transforms in the proximity of the singular
points and the points of finite discontinuity is addressed as well.
Categories:65R32, 44A10, 44A20, 74F10
8. CMB 2001 (vol 44 pp. 223)
Extending the Archimedean Positivstellensatz to the Non-Compact Case
A generalization of Schmüdgen's Positivstellensatz is given which holds for any basic closed semialgebraic set in $\mathbb{R}^n$ (compact or not). The proof is an extension of Wörmann's proof.
Categories:12D15, 14P10, 44A60
9. CMB 2000 (vol 43 pp. 472)
An Estimate For a Restricted X-Ray Transform
This paper contains a geometric proof of an estimate for a restricted x-ray transform. The result complements one of A.~Greenleaf and A.~Seeger.
10. CMB 1999 (vol 42 pp. 354)
A Real Holomorphy Ring without the Schmüdgen Property
A preordering $T$ is constructed in the polynomial ring $A = \mathbb{R}[t_1,t_2, \dots]$ (countably many variables) with the following two properties: (1)~~For each $f \in A$ there exists an integer $N$ such that $-N \le f(P) \le N$ holds for all $P \in \Sper_T(A)$. (2)~~For all $f \in A$, if $N+f, N-f \in T$ for some integer $N$, then $f \in \mathbb{R}$. This is in sharp contrast with the Schmüdgen-Wörmann result that for any preordering $T$ in a finitely generated $\mathbb{R}$-algebra $A$, if property~(1) holds, then for any $f \in A$, $f > 0$ on $\Sper_T(A) \Rightarrow f \in T$. Also, adjoining to $A$ the square roots of the generators of $T$ yields a larger ring $C$ with these same two properties but with $\Sigma C^2$ (the set of sums of squares) as the preordering.
Categories:12D15, 14P10, 44A60
11. CMB 1998 (vol 41 pp. 392)
A note on $H^1$ multipliers for locally compact Vilenkin groups
Kitada and then Onneweer and Quek have investigated multiplier operators on Hardy spaces over locally compact Vilenkin groups. In this note, we provide an improvement to their results for the Hardy
space $H^1$ and provide examples showing that our result applies to a significantly larger group of multipliers.
Categories:43A70, 44A35
|
{"url":"http://cms.math.ca/cmb/msc/44","timestamp":"2014-04-19T06:54:15Z","content_type":null,"content_length":"42416","record_id":"<urn:uuid:699d28ee-3bb7-4860-b8da-b326c7044de8>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bensenville Math Tutor
Find a Bensenville Math Tutor
...I love math and helping students understand it. I first tutored math in college and have been tutoring for a couple years independently. My students' grades improve quickly, usually after only
a few sessions.
26 Subjects: including ACT Math, trigonometry, Spanish, precalculus
...I'm a patient person, good listener and communicator. I work well with students from middle school through college and I can tutor all K-12 math, including college Calculus, Probability,
Statistics, Discrete Math, Linear Algebra, and other subjects. I have flexible days and afternoons, and I can get around Chicago without difficulty.
22 Subjects: including prealgebra, computer science, PHP, Visual Basic
...Usually this strategy gives excellent results and typically a student improves his/her grades by 1-2 letters after 3-4 sessions. I hold a PhD in mathematics and physics and tutor a lot of high
school and college students during the last 10 years. The list of subjects includes various mathematical disciplines, in particular Algebra 2.
8 Subjects: including algebra 1, algebra 2, calculus, geometry
...My master's project was on Remote monitoring of engine health using non intrusive methods. Dear Students, I currently work as Sr. Technical Specialist at Case New Holland Industrial in Burr
Ridge, IL.
16 Subjects: including trigonometry, statistics, discrete math, differential equations
Hi! My name is Kishan P. I am currently a Senior in College at Northern Illinois University studying BioMedical Engineering.
26 Subjects: including algebra 1, calculus, algebra 2, SAT math
|
{"url":"http://www.purplemath.com/bensenville_il_math_tutors.php","timestamp":"2014-04-19T02:09:11Z","content_type":null,"content_length":"23616","record_id":"<urn:uuid:6ff5054d-0af0-4147-81aa-cb350cb7d7ca>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Total Asset Turnover Definition & Formula
Does the data in the above table mean anything to you? There are three key financial ratios listed: (1) net fixed asset turnover, (2) total asset turnover, and (3) equity turnover for Wal-Mart going
back to January 2009. Well, if none of these ring a bell, by the end of this study, you will understand the significance of the total asset turnover ratio.
Total Asset Turnover Definition
The total asset turnover represents the amount of revenue generated by a company as a result of its assets on hand. This equation is a basic formula for measuring how efficiently a company is utilizing its assets to generate sales.
Total Asset Turnover Formula
Total Asset Turnover = Sales/Total Assets
While this article does a lovely job of displaying how the total asset turnover can impact a company's stock price, if you really want more information around how to calculate the total asset turnover and some working examples, please visit total asset turnover examples.
Now that we have redirected all of the hardcore mathematicians, let's resume our breakdown of the total asset turnover ratio. The sales represents all the revenue generated by the company and is
disclosed on a company's income statement. The total assets represent the assets listed on the company's balance sheet. The higher the ratio of sales to total assets, the better. This implies that a
company is generating "x" number of sales for every dollar of assets on hand. For example if a company has sales of 1.5M and total assets of 5M, the company would have a total asset turnover ratio of
.3 (1.5/5). Now looking at this number alone means very little. An investor must look at the turnover ratio relative to a company's competitors to determine how the company measures up in a common
playing field. There are times when even comparing a company's total asset turnover to its peers is a fruitless activity as the company may be expanding its facilities which will drive future growth
but will hurt on the short-term. We will be discussing a real-life example of this later in the article.
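The arithmetic above is simple enough to script; a minimal sketch in Python using the article's own worked numbers:

```python
def total_asset_turnover(sales, total_assets):
    """Total Asset Turnover = Sales / Total Assets."""
    return sales / total_assets

# The worked example from the text: $1.5M in sales against $5M in assets
ratio = total_asset_turnover(1_500_000, 5_000_000)
print(ratio)  # 0.3
```

As the article stresses, the number only becomes meaningful when compared against the same ratio for a company's industry peers.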
Interpreting the Total Asset Turnover
The total asset turnover is one of those simple calculations that speak volumes about the health of a company. This number is reported quarterly and can give an insight on whether a company is
becoming more efficient at their core competencies over time. There is no set number that represents a good total asset turnover value because every industry has varying business models. One general
rule of thumb is that the higher a company's asset turnover, the lower the profit margins, since the company is able to sell more products at a cheaper rate.
Based on what we know so far regarding total asset turnover, what conclusions can we draw from the above graph? Stating that the blue line is above the red line isn't enough. This graph shows a
couple of key factors: (1) Wal-Mart is outperforming the consumer services industry in terms of getting its inventory off the shelf and (2) while the consumer services industry saw an uptick in 2009,
Wal-Mart was able to match this growth. As an investor this gives you confidence that not only has Wal-Mart been able to out sell its competitors in the past, but it is also making great strides in
the current year.
Why do Investors Care About the Total Asset Turnover?
What makes the total asset turnover a unique financial ratio is that it provides an investor some indication of the competitiveness in the market, as well as how efficiently a company is utilizing
its assets to generate new sales. For example, let's say that the industry standard total asset turnover ratio for steel producing companies is .75. A fictitious company, Blue Steel has a total asset
turnover ratio of 1.5. As an investor you can quickly see that Blue Steel is selling double its industry average for the amount of assets it has on hand. The investor would of course have to look into the
reason behind this success, but one could assume that Blue Steel has a product that competitors simply cannot offer, a great sales force, efficient production processes or all of the above.
Real Life Example of Using the Total Asset Turnover Ratio
As we mentioned in the earlier part of the article, looking at the total asset turnover ratio as a standalone indication of a company's financial health is not smart investing. Investors often look
for a smoking gun in technical analysis or financial ratios but these charting techniques and numbers in isolation do very little to paint the picture of a company's health and vitality. To this
point let's take a look at the total asset ratio of Human Genome Sciences; HGSI is a biotech stock which has struggled in recent years. The company is medium in size and has seen revenues increase in 4 of the last 5 years (2006: $25.76M, 2007: $41.85M, 2008: $48.42M, 2009: $275.8M, 2010: $157.3M). Now looking at the numbers you can clearly see that something happened between 2009 and 2010, as this went completely counter to the accelerating revenue growth from previous years.
HGSI's total asset turnover relative to its Peers
The first thing we will want to do is determine how HGSI's total asset turnover fairs against other biotechnology companies. In this example we will use the following three companies for our
analysis: Amgen (AMGN), Biogen Idec Inc (BIIB) and Emergent Biosolutions Inc, (EBS). Looking at the total asset turnover we see the below figures:
2010 Total Asset Turnover
HGSI: .12
AMGN: .35
BIIB: .58
EBS: .69
So, from reading above we know that HGSI took a significant hit in revenues from 2009 - 2010 (-118.5M), has a relatively low total asset turnover relative to its peers and does not make as much
money. The average investor at this point would probably pass on HGSI and call it a day. But we have to dig further. Why did HGSI's revenues drop so abruptly? Why is the total asset turnover much
smaller than its competitors? If you do a quick Google search on HGSI, you will quickly see the word Benlysta. Benlysta is the first new drug for Lupus patients in over 50 years. Of course developing
a new drug and completing the FDA approval process is not cheap. HGSI has had to invest millions in developing their commercial infrastructure and expanding their sales force team. However, on a
positive note, analysts project there is a billion dollar market for Benlysta with no competitors, so how do you think this will impact HGSI's total asset turnover?
Ford's Big Turnaround
Unless you have completely checked out of the financial world over the last 3 years, you will have heard about Ford's Cinderella story of going from debt to profits. Now this process for Ford
was painful. The company had to get better, cheaper and faster. To this point, Ford had to reduce the number of cars in its product line as well as shed thousands of jobs. Now to try and understand
all of these changes and their impact on the company's bottom-line on a micro level would have been challenging to say the least. Now before we get into the charts, let me say that this is a homerun
example for analyzing the total asset turnover of the company. In the vast majority of the cases, a company's turnaround story can take a decade and is pretty gradual. Ford did not have this luxury
as the company was on the brink of bankruptcy.
Ok. So, maybe the fonts above are a bit small to read. But if you care enough to strain your eyes, you will notice that the average asset turnover was increased from a range of .5 - .6 to over .7.
Now let's take it a step further and do a comparison of the total assets to revenue.
At times it is good to step away from the ratios and look at the hard data. Now remember, for total asset turnover we are focused on the company's sales and net total assets. In the above chart you
will notice that the blue line (revenue) is pretty much flat for the past 9 years. There was a recent dip since 2008, but the revenue is on the uptick from early 2009. Conversely, the orange line
(total assets) took a sharp nosedive in early 2008 as a result of the economic recession and needing to cut costs. So, what do you think happens when a company maintains revenue levels, while
drastically reducing its assets?
If you guessed the stock price rises, you are correct! The one thing to note about financial ratios is that they take time to have an impact on the price of the stock. Look at how Ford cut their
assets by a third, yet it took the stock months to have any sort of price reaction. So, if you are a trader looking to get in and out of the market quickly, you may have to wait it out for the rest
of the world to realize what you were able to discern from the financial numbers.
In summary the total asset turnover ratio can speak volumes about a company's ability to make money in terms of the total assets on the books. The key thing to note is that this number is a lagging
indicator and does not provide much in terms of potential for future earnings.
Related Articles
To learn more about various financial ratios and how they may impact your investment decisions, please visit the financial ratios hub page. Here you will find high quality articles on topics such as
accounts receivable turnover ratio, book to market ratio and liquidity ratios.
|
{"url":"http://www.mysmp.com/fundamental-analysis/total-asset-turnover.html","timestamp":"2014-04-18T08:01:41Z","content_type":null,"content_length":"89940","record_id":"<urn:uuid:cd961922-e5bb-4d0d-a266-835607083251>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kernel (algebra)
In mathematics, especially abstract algebra, the kernel of a homomorphism measures the degree to which the homomorphism fails to be injective.
In many cases, the kernel of a homomorphism is a subset of the domain of the homomorphism (specifically, those elements which are mapped to the identity element in the codomain). In more general
contexts, the kernel is instead interpreted as a congruence relation on the domain (specifically, the relation of being mapped to the same image by the homomorphism). In either situation, the kernel
is trivial if and only if the homomorphism is injective; in the first situation "trivial" means consisting of the identity element only, while in the second it means that the relation is equality.
In this article, we survey various definitions of kernel used for important types of homomorphisms.
Kernel of a group homomorphism
The kernel of a group homomorphism f from G to H consists of all those elements of G which are sent by f to the identity element e[H] of H. In formulas:
ker f := {x in G : f(x) = e[H]}.
The kernel is a normal subgroup of G.
One of the isomorphism theorems states that the factor group G/(ker f) is isomorphic to the image of f, the isomorphism being induced by f itself. A slightly more general statement is the fundamental
theorem on homomorphisms.
The group homomorphism f is injective iff the kernel of f consists of the identity element of G only.
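As an illustrative sketch (not from the original article), the kernel of a homomorphism between small finite groups can be found by brute force; here f : Z6 → Z3 is reduction mod 3:

```python
def kernel(domain, f, identity):
    """All elements of the domain that f sends to the identity of the codomain."""
    return {x for x in domain if f(x) == identity}

# f : Z6 -> Z3, f(x) = x mod 3; the identity element of Z3 is 0
Z6 = range(6)
ker_f = kernel(Z6, lambda x: x % 3, 0)
print(sorted(ker_f))  # [0, 3]

# The kernel is trivial iff f is injective; this f is not injective
print(ker_f == {0})  # False
```

Note that {0, 3} is indeed a (normal) subgroup of Z6, matching the statement above.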
If A is a linear transformation from a vector space V to a vector space W, then the kernel of A is defined as
ker A := {x in V : Ax = 0}.
The kernel is a subspace of the vector space V, and again the quotient space V/(ker A) is isomorphic to the image of A; in particular, we have for the dimensions

dim ker A = dim V - dim im A.

The operator A is injective if and only if ker A = {0}.
If V and W are finite-dimensional and bases have been chosen, then A can be described by a matrix M, and the kernel can be computed by solving the homogeneous system of linear equations Mx = 0. In
this representation, the kernel corresponds to the nullspace of M. The dimension of the nullspace, and hence of the kernel, is given by the number of columns of M minus the rank of M, a number also
known as the nullity of M.
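A minimal numerical sketch of this computation, assuming NumPy is available (the matrix below is an invented example):

```python
import numpy as np

def nullspace(M, tol=1e-10):
    """Orthonormal basis for ker(M): the solutions of Mx = 0, found via SVD."""
    _, s, vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vt[rank:].T  # rows of V^T past the rank span the nullspace

# A rank-1 2x3 matrix: rank-nullity gives nullity = 3 - 1 = 2
M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
basis = nullspace(M)
print(basis.shape)                # (3, 2): two basis vectors
print(np.allclose(M @ basis, 0))  # True: M maps each basis vector to zero
```

The returned dimension matches the formula in the text: number of columns of M minus the rank of M.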
Solving homogeneous differential equations often amounts to computing the kernel of certain differential operators[?]. For instance, in order to find all twice-differentiable functions f such that
xf''(x) + 3f'(x) = f(x),
one has to consider the kernel of a linear operator A : V → W, where V is the vector space of all twice differentiable functions, W is the vector space of all functions, and for f in V, we define Af by requiring that

(Af)(x) = xf''(x) + 3f'(x) - f(x)

for every x.
One can define kernels for homomorphisms between modules of a ring in an analogous manner, and this example captures the essence of kernels in general abelian categories.
The kernel of a ring homomorphism f from R to S consists of all those elements x of R for which f(x) = 0:
ker f := {x in R : f(x) = 0}.
Such a kernel is always an ideal of R.
The isomorphism theorem mentioned above for groups and vector spaces remains valid in the case of rings.
All the above cases are unified and generalized in universal algebra as follows: Given algebras A and B of the same type and a homomorphism f from A to B, the kernel of f is the congruence relation ~
on A defined as follows: Given elements x and y of A, let x ~ y iff f(x) = f(y). Clearly, this congruence degenerates to equality if and only if f is injective.
In the case of groups, if f is a group homomorphism from G to H, the two notions of kernel are related as follows: Given a and b in G, by definition a ~ b iff f(a) = f(b), which holds iff f(b)^-1f(a)
is the identity element e[H] of H. Since f is a homomorphism, this is true iff f(b^-1a) is e[H]. So to know whether a ~ b, it's enough to keep track of the preimage of the identity of H, and this
preimage is exactly the subgroup that we earlier called the kernel of f.
Essentially the same thing happens with vector spaces and rings and all other ideal supporting algebras[?], but in more general algebraic structures, kernels cannot be thought of as subsets but must
be thought of as congruences.
The notion of kernel of a morphism in category theory is a different generalization of the kernels of group and vector space homomorphisms. See kernel (category theory). The notion of kernel pair[?]
is a further generalisation of the kernel as a congruence relation. There is also the notion of difference kernel[?], or binary equalizer[?].
All Wikipedia text is available under the terms of the GNU Free Documentation License
|
{"url":"http://encyclopedia.kids.net.au/page/ke/Kernel_(algebra)","timestamp":"2014-04-18T13:25:35Z","content_type":null,"content_length":"25471","record_id":"<urn:uuid:aa41a8a6-2544-4efc-aed1-4b0645a9670f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Harmonic analysis and combinatorics: How much may they contribute to each other? IMU/Amer
Results 1 - 10 of 13
- Func. Anal
"... Abstract. Recently Wolff [28] obtained a sharp L^2 bilinear restriction theorem for bounded subsets of the cone in general dimension. Here we adapt the argument of Wolff to also handle subsets of “elliptic surfaces” such as paraboloids. Except for an endpoint, this answers a conjecture of Machedon ..."
Cited by 40 (7 self)
Abstract. Recently Wolff [28] obtained a sharp L^2 bilinear restriction theorem for bounded subsets of the cone in general dimension. Here we adapt the argument of Wolff to also handle subsets of “elliptic surfaces” such as paraboloids. Except for an endpoint, this answers a conjecture of Machedon and Klainerman, and also improves upon the known restriction theory for the paraboloid and
- DUKE MATH. J , 2004
"... The restriction and Kakeya problems in Euclidean space have received much attention in the last few decades, and they are related to many problems in harmonic analysis, partial differential
equations (PDEs), and number theory. In this paper we initiate the study of these problems on finite fields. I ..."
Cited by 25 (0 self)
The restriction and Kakeya problems in Euclidean space have received much attention in the last few decades, and they are related to many problems in harmonic analysis, partial differential equations
(PDEs), and number theory. In this paper we initiate the study of these problems on finite fields. In many cases the Euclidean arguments carry over easily to the finite setting (and are, in fact,
somewhat cleaner), but there
- J. AMS , 2008
"... Abstract. A Kakeya set is a subset of F^n, where F is a finite field of q elements, that contains a line in every direction. In this paper we show that the size of every Kakeya set is at least C_n · q^n, where C_n depends only on n. This answers a question of Wolff [Wol99]. 1. ..."
Cited by 25 (4 self)
Abstract. A Kakeya set is a subset of F^n, where F is a finite field of q elements, that contains a line in every direction. In this paper we show that the size of every Kakeya set is at least C_n · q^n, where C_n depends only on n. This answers a question of Wolff [Wol99]. 1.
- Notices Amer. Math. Soc , 2000
"... In 1917 S. Kakeya posed the Kakeya needle problem: What is the smallest area required to rotate a unit line segment (a “needle”) by 180 degrees in the plane? Rotating around the midpoint
requires ..."
Cited by 22 (5 self)
In 1917 S. Kakeya posed the Kakeya needle problem: What is the smallest area required to rotate a unit line segment (a “needle”) by 180 degrees in the plane? Rotating around the midpoint requires
"... We survey recent developments on the Kakeya problem. ..."
, 2003
"... Abstract. We survey recent developments on the Restriction conjecture. ..."
- INTERNAT. MATH. RESEARCH NOTICES
"... ..."
"... The literature of mathematics comprises millions of works, published ones as well as ones deposited in electronic archives. The number of papers and books included in the Mathematical Reviews
database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thous ..."
The literature of mathematics comprises millions of works, published ones as well as ones deposited in electronic archives. The number of papers and books included in the Mathematical Reviews
database since 1940 (the first year of operation of MR) is now more than 1.9 million, and more than 75 thousand items are added to the database each year [28]. The overwhelming majority of works in
this ocean contain new mathematical theorems and their proofs. In addition, many works also formulate unsolved problems, often in the form of precise conjectures. How essential is it for the
development of mathematical science to draw the readers’ attention unceasingly to open problems? Maybe it would suffice to publish only new results? The first-rank mathematicians of the present time
give a definitive answer to this question. In his preface to the first Russian edition [20] of the book under review, 1 V. I. Arnold reminisced: “I. G. Petrovskiĭ, who was one of my teachers in
Mathematics, taught me that the most important thing that a student should learn from his supervisor is that some question is still open. Further choice of the problem from
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=516871","timestamp":"2014-04-20T01:37:19Z","content_type":null,"content_length":"30731","record_id":"<urn:uuid:4487e441-6aa8-49aa-8ed7-d7ace4ae455f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Gap Series and Measures on Spheres
Daniel Gisriel Rider
University of Wisconsin--Madison, 1964 - Group theory - 142 pages
Gap Series 22
Function Algebras on Groups and Spheres 42
BIBLIOGRAPHY 70
Common terms and phrases
2n+l a e G Abel summable absolutely convergent algebra of continuous Banach algebra Banach space Banach-Steinhaus theorem Bohr set Borel measure bounded E-function bounded multiplicative linear
compact Abelian group compact group continuous functions countable defined denote dual group E-polynomial easily seen F is admissible F(or finite constant finite number Fourier series function f g e
G G is Abelian GAP SERIES given Haar measure Hadamard set Hence i e M(S implies integers irreducible characters irreducible unitary L2 G Laplace series LEMMA Let F Let G maximal ideal space
multiplicative linear functional n-oo normal subgroup normal subhypergroup orthogonal polynomial Proof satisfies Schwarz inequality sequence Sidon set spherical harmonics subgroup of G subhypergroup
of G subset summable to zero theorem thesis WpHI x e S2 yields z0 g
|
{"url":"http://books.google.com/books?id=QXnSAAAAMAAJ&q=maximal+ideal&dq=related:ISBN0471893900&source=gbs_word_cloud_r&cad=5","timestamp":"2014-04-16T19:45:21Z","content_type":null,"content_length":"99989","record_id":"<urn:uuid:78fab5a0-f0fa-4701-8a44-89fd2f95436a>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Haledon Trigonometry Tutor
...I am always available by phone or email for any questions. I will rarely just give the answer; I prefer to walk you toward the solution and assist you past your weak points. I am excellent at
finding the piece of information or logic that allows everything to flow for you.
9 Subjects: including trigonometry, chemistry, physics, calculus
...Having earned three master's degrees and working on a doctoral degree, all in different fields, I have become very aware of the importance of approaching material in a way that minimizes the
anxiety of what may seem an overwhelming task. This involves learning how to strategize learning. Let me...
50 Subjects: including trigonometry, chemistry, calculus, physics
...I have been tutoring for the last 10 years both professionally and as a volunteer. The use of multiple approaches to learning has been integral to my success in helping my students achieve their goals. Through varied approaches customized to your learning style, I will help you reach those breakthrough moments when topics that may have given you trouble suddenly become clear.
...The first lifesaving qualification I attained was in Boy Scouts, in 5th Grade. Since then, I became Water Survival Qualified (WSQ) by the U.S. Marine Corps, and then, Deep Water Environment
Survival Training (DWEST) certified, by the U.S.
53 Subjects: including trigonometry, chemistry, reading, biology
...I did my undergraduate in Physics and Astronomy at Vassar, and did an Engineering degree at Dartmouth. I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and
will be done in a year. I have a lot of experience tutoring physics and math at all levels.
11 Subjects: including trigonometry, Spanish, calculus, physics
|
{"url":"http://www.purplemath.com/haledon_trigonometry_tutors.php","timestamp":"2014-04-17T04:30:31Z","content_type":null,"content_length":"24158","record_id":"<urn:uuid:262becce-1eb2-42dc-ac05-6c34ddad8c0d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Convex sets with homothetic projections
Valeriu Soltan
Department of Mathematical Sciences, George Mason University, 4400 University Drive, Fairfax, VA 22030, USA, e-mail: vsoltan@gmu.edu
Abstract: Nonempty sets $X_1$ and $X_2$ in the Euclidean space $\R^n$ are called \textit{homothetic} provided $X_1 = z + \lambda X_2$ for a suitable point $z \in \R^n$ and a scalar $\lambda \ne 0$,
not necessarily positive. Extending results of Süss and Hadwiger (proved by them for the case of convex bodies and positive $\lambda$), we show that compact (respectively, closed) convex sets $K_1$
and $K_2$ in $\R^n$ are homothetic provided for any given integer $m$, $2 \le m \le n - 1$ (respectively, $3\le m\le n - 1$), the orthogonal projections of $K_1$ and $K_2$ on every $m$-dimensional
plane of $\R^n$ are homothetic, where the homothety ratio may depend on the projection plane. The proof uses a refined version of Straszewicz's theorem on exposed points of compact convex sets.
Keywords: antipodality, convex set, exposed points, homothety, line-free set, projection
Classification (MSC2000): 52A20
Full text of the article:
Electronic version published on: 27 Jan 2010. This page was last modified: 28 Jan 2013.
© 2010 Heldermann Verlag
© 2010–2013 FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition
|
{"url":"http://www.emis.de/journals/BAG/vol.51/no.1/17.html","timestamp":"2014-04-17T01:02:23Z","content_type":null,"content_length":"4329","record_id":"<urn:uuid:630fcb49-7eb0-417a-8c4f-f4f51a8972ed>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Five Number Summary
The five number summary is a descriptive statistic that provides information about a set of observations, and its quartiles help us understand the spread of the given data set. It is useful for comparing different sets of observations and is commonly represented with a boxplot. The five number summary consists of the minimum, the first quartile, the median, the third quartile, and the maximum.
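A minimal Python sketch of the five values (minimum, Q1, median, Q3, maximum), using the standard library's `statistics.quantiles`; note that the exact quartile convention varies between textbooks, and this uses Python's default "exclusive" method:

```python
from statistics import quantiles

def five_number_summary(data):
    """Return (min, Q1, median, Q3, max) for a list of observations."""
    q1, med, q3 = quantiles(data, n=4)  # default method='exclusive'
    return min(data), q1, med, q3, max(data)

obs = [1, 3, 5, 7, 9, 11, 13]
print(five_number_summary(obs))  # → (1, 3.0, 7.0, 11.0, 13)
```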
|
{"url":"http://www.mathcaptain.com/statistics/five-number-summary.html","timestamp":"2014-04-20T10:47:30Z","content_type":null,"content_length":"43372","record_id":"<urn:uuid:3b90edac-feda-4c48-b1e4-c80d0bfd17d0>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Automorphic Forms Online References
This page is an incomplete, but evolving, list of some online references for learning about automorphic forms, representations and related topics. It is focused on open-access notes and survey
papers, not research papers.
I may eventually add comments about each entry, and possibly will reorganize things by topic.
I'd like to make this into an open-edit database, but that may be a while in coming (or some energetic young person should volunteer to help with this), so if there's something you'd like to add, please let me know. Also, if you have suggestions for how the list (or future database) should be better organized, I'd be happy to hear them.
Notes and Papers (organized by author)
Classic Collections
Recent Collections
Kimball Martin
Tue Mar 4 20:05:51 CST 2014
|
{"url":"http://www2.math.ou.edu/~kmartin/afrefs.html","timestamp":"2014-04-20T05:42:39Z","content_type":null,"content_length":"16548","record_id":"<urn:uuid:ecea91cf-96c3-4227-9d5b-28fe644e3510>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How in the world do I hold all these equations? I'm having a little trouble with my physics class, mainly reconciling equations from lectures, recitations, and labs for instant recall. How do you
gather all of these equations in your head?
Rather than memorizing all the formulas given, try to memorize the concepts so you always know what variable you are solving for. Then plug all the numbers into the equation and solve it.
yeah it's not so much about memorizing them, unless your professor wants you to memorize them, but how to apply them and know what each value in the equations means, so that when you are given a problem you know you have these equations to help you find the solution, and if you don't have all the values you need, you know to figure those values out
You need to know where to plug in the values by figuring it out using the units. Sometimes you must convert the units to solve the problem. Sorry, I'm not good at explaining.
I appreciate the response. My high school professor decided to introduce the concepts to Physics I while keeping simplified formulas, and it's been a bit of a journey to expand for components,
scalars, etcetera. I'm just wondering how you usually reconcile what you learn, what techniques and mental relationships you establish. Thanks again.
Hey Idealist...makes a lot of sense, I just got a 100 on a physics quiz because I remembered what you said about units; thank you :)
No problem, friend. I'm so glad that I've helped.
|
{"url":"http://openstudy.com/updates/50b2a13ce4b09749ccaca56e","timestamp":"2014-04-19T04:44:52Z","content_type":null,"content_length":"40552","record_id":"<urn:uuid:4fd62666-55d5-4b74-9a03-cad9267e50d3>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Query languages for bags
- IN PROCEEDINGS OF 4TH INTERNATIONAL WORKSHOP ON DATABASE PROGRAMMING LANGUAGES , 1993
"... In this paper we study the expressive power of query languages for nested bags. We define the ambient bag language by generalizing the constructs of the relational language of Breazu-Tannen,
Buneman and Wong, which is known to have precisely the power of the nested relational algebra. Relative s ..."
Cited by 40 (27 self)
In this paper we study the expressive power of query languages for nested bags. We define the ambient bag language by generalizing the constructs of the relational language of Breazu-Tannen, Buneman
and Wong, which is known to have precisely the power of the nested relational algebra. Relative strength of additional polynomial constructs is studied, and the ambient language endowed with the
strongest combination of those constructs is chosen as a candidate for the basic bag language, which is called BQL (Bag Query Language). We prove that achieving the power of BQL in the relational language amounts to adding simple arithmetic to the latter. We show that BQL has shortcomings of the relational algebra: it cannot express recursive queries. In particular, the parity test is not
definable in BQL. We consider augmenting BQL with powerbag and structural recursion to overcome this deficiency. In contrast to the relational case, where powerset and structural recursion are
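As an illustrative aside (not code from any of the cited papers), the basic distinction the bag literature rests on — bags record multiplicities while sets do not — can be sketched with Python's `collections.Counter`:

```python
from collections import Counter

a = Counter(['x', 'x', 'y'])   # the bag {| x, x, y |}
b = Counter(['x', 'y', 'y'])   # the bag {| x, y, y |}

print(a + b)        # additive bag union keeps multiplicities: x→3, y→3
print(a - b)        # monus (truncated subtraction): only x survives, with count 1
print(set(a) == set(b))   # collapsed to sets, the two bags are equal → True
```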
- Fundamenta Informaticae , 1995
"... . Two lower bag domain constructions are introduced: the initial construction which gives free lower monoids, and the final construction which is defined explicitly in terms of second order
functions. The latter is analyzed closely. For sober dcpo's, the elements of the final lower bag domains can b ..."
Cited by 7 (3 self)
. Two lower bag domain constructions are introduced: the initial construction which gives free lower monoids, and the final construction which is defined explicitly in terms of second order
functions. The latter is analyzed closely. For sober dcpo's, the elements of the final lower bag domains can be described concretely as bags. For continuous domains, initial and final lower bag
domains coincide. They are continuous again and can be described via a basis which is constructed from a basis of the argument domain. The lower bag domain construction preserves algebraicity and the
properties I and M, but does not preserve bounded completeness, property L, or bifiniteness. 1 Introduction Power domain constructions [13, 15, 16] were introduced to describe the denotational
semantics of non-deterministic programming languages. A power domain construction is a domain constructor P , which maps domains to domains, together with some families of continuous operations. If X
is the semantic domain ...
, 1993
"... Powerdomains like mixes, sandwiches, snacks and scones are typically used to provide semantics of collections of descriptions of partial data. In particular, they were used to give semantics of
databases with partial information. In this paper we argue that to be able to put these constructions into ..."
Cited by 2 (2 self)
Powerdomains like mixes, sandwiches, snacks and scones are typically used to provide semantics of collections of descriptions of partial data. In particular, they were used to give semantics of
databases with partial information. In this paper we argue that to be able to put these constructions into the context of a programming language it is necessary to characterize them as free (ordered)
algebras. Two characterizations -- for mixes and snacks -- are already known, and in the first part of the paper we give characterizations for scones and sandwiches and provide an alternative
characterization of snacks. The algebras involved have binary and unary operations and relatively simple equational theories. We then define a new construction, which is in essence all others put
together (hence called salad) and give its algebraic characterization. It is also shown how all algebras considered in the paper are related in a natural way, that is, in a way that corresponds to
embeddings of their p...
"... This paper is a summary of the following six publications: (1) Stable Power Domains [Hec94d] (2) Product Operations in Strong Monads [Hec93b] (3) Power Domains Supporting Recursion and Failure
[Hec92] (4) Lower Bag Domains [Hec94a] (5) Probabilistic Domains [Hec94b] (6) Probabilistic Power Domains, ..."
This paper is a summary of the following six publications: (1) Stable Power Domains [Hec94d] (2) Product Operations in Strong Monads [Hec93b] (3) Power Domains Supporting Recursion and Failure
[Hec92] (4) Lower Bag Domains [Hec94a] (5) Probabilistic Domains [Hec94b] (6) Probabilistic Power Domains, Information Systems, and Locales [Hec94c] After a general introduction in Section 0, the
main results of these six publications are summarized in Sections 1 through 6. 0 Introduction In this section, we provide a common framework for the summarized papers. In Subsection 0.1, Moggi's
approach to specify denotational semantics by means of strong monads is introduced. In Subsection 0.2, we specialize this approach to languages with a binary choice construct. Strong monads can be
obtained in at least two ways: as free constructions w.r.t. algebraic theories (Subsection 0.3), and by using second order functions (Subsection 0.4). Finally, formal definitions of those concepts
which are used in all...
- In Proceedings of 4th International Workshop on Database Programming Languages , 1993
"... this report as a basis for investigating aggregate functions. ..."
- In Proceedings of 4th International Workshop on Database Programming Languages , 1993
"... In this paper we study the expressive power of query languages for nested bags. We define the ambient bag language by generalizing the constructs of the relational language of Breazu-Tannen,
Buneman and Wong, which is known to have precisely the power of the nested relational algebra. Relative s ..."
In this paper we study the expressive power of query languages for nested bags. We define the ambient bag language by generalizing the constructs of the relational language of Breazu-Tannen, Buneman
and Wong, which is known to have precisely the power of the nested relational algebra. Relative strength of additional polynomial constructs is studied, and the ambient language endowed with the
strongest combination of those constructs is chosen as a candidate for the basic bag language, which is called BQL (Bag Query Language). We prove that achieving the power of BQL in the relational language amounts to adding simple arithmetic to the latter. We show that BQL has shortcomings of the relational algebra: it cannot express recursive queries. In particular, the parity test is not
definable in BQL. We consider augmenting BQL with powerbag and structural recursion to overcome this deficiency. In contrast to the relational case, where powerset and structural recursion are
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2225064","timestamp":"2014-04-20T06:35:38Z","content_type":null,"content_length":"27855","record_id":"<urn:uuid:a82dcd2b-264a-4fae-9825-64dabec8480f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00172-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ac circuit non inductive resistor and a coil..
1. The problem statement, all variables and given/known data
A non inductive resistor takes 8A at 100 volts. What inductance of a coil of negligible resistance must be connected in series in order that this load can be supplied from a 220 volt 60 Hz mains.
2. Relevant equations
Z^2 = R^2 + XL^2
XL = 2pi(frequency) L(inductance)
3. The attempt at a solution
I have solved for the resistance before the coil is added: R = 12.5 ohms.
I am stuck on how to get the value of the coil, since no current is given for the 220 V supply.
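One way to proceed, assuming (as the problem implies) the load must still draw the same 8 A when run from the 220 V, 60 Hz mains — a sketch of the series-impedance reasoning, not an authoritative solution:

```python
import math

V_load, I = 100.0, 8.0        # non-inductive load: 8 A at 100 V
R = V_load / I                # 12.5 ohms, as found above

V_supply, f = 220.0, 60.0
Z = V_supply / I              # total impedance needed for the same 8 A
XL = math.sqrt(Z**2 - R**2)   # from Z^2 = R^2 + XL^2
L = XL / (2 * math.pi * f)    # from XL = 2*pi*f*L

print(f"Z = {Z} ohm, XL = {XL:.2f} ohm, L = {L*1000:.1f} mH")
# → Z = 27.5 ohm, XL = 24.49 ohm, L = 65.0 mH
```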
|
{"url":"http://www.physicsforums.com/showthread.php?t=467859","timestamp":"2014-04-18T23:20:46Z","content_type":null,"content_length":"23196","record_id":"<urn:uuid:fde3887f-2e11-45cd-a4e1-8243e1d4abe5>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
|
« Previous | Next »
Unit 2 begins with hash functions, which are useful for mapping large data sets. We will continue with a broad introduction to object-oriented programming languages (Python is an example), covering
objects, classes, subclasses, abstract data types, exceptions, and inheritance. Other algorithmic concepts covered are "Big O notation," divide and conquer, merge sort, orders of growth, and
amortized analysis.
The next several lectures introduce effective problem-solving methods which rely on probability, statistical thinking, and simulations to solve both random and non-random problems. A background in
probability is not assumed, and we will briefly cover basic concepts such as probability distributions, standard deviation, coefficient of variation, confidence intervals, linear regression, standard
error, and plotting techniques. This will include an introduction to curve fitting, and we introduce the Python libraries numpy and pylab to add tools to create simulations, graphs, and predictive
We will spend some time on random walks and Monte Carlo simulations, a very powerful class of algorithms which invoke random sampling to model and compute mathematical or physical systems. The Monty
Hall problem is used as an example of how to use simulations, and the knapsack problem introduces our discussion of optimization. Finally, we will begin looking at supervised and unsupervised machine
learning, and then turn to data clustering.
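For instance, a minimal Monte Carlo estimate of the Monty Hall switching advantage might look like the following (an illustrative sketch, not the course's own code):

```python
import random

def play(switch, trials=100_000):
    """Estimate the win probability of the switch/stay strategy by simulation."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a goat door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play(switch=True))   # ≈ 2/3
print(play(switch=False))  # ≈ 1/3
```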
At the end of Unit 2 there will be an exam covering all material (lectures, recitations, and problem sets) from the beginning of the course through More Optimization and Clustering.
« Previous | Next »
|
{"url":"http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-00sc-introduction-to-computer-science-and-programming-spring-2011/unit-2/","timestamp":"2014-04-18T18:12:59Z","content_type":null,"content_length":"43096","record_id":"<urn:uuid:64817e3f-881c-45ac-a209-c97baa759bc9>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Converting Fluid Velocity
Date: 09/08/2002 at 23:00:05
From: Lisa Brzezinski
Subject: Metric conversions
In a science practical writeup, I have to convert fluid velocity,
m^3/s, to linear velocity, m/s, but I don't know how to convert m^3
to m.
Hope you can help.
Date: 09/09/2002 at 10:50:49
From: Doctor Ian
Subject: Re: Metric conversions
Hi Lisa,
Suppose I have 6 cubic meters of stuff, and I arrange it into a cuboid
1 meter by 2 meters by 3 meters.
Now suppose I set the cuboid down on a table, make a mark at one end,
and move the cuboid until the other end is at the mark. Viewed from
above, this would look like
t=0 seconds   +------+
              |      |
              +------+

t=1 seconds          +------+
                     |      |
                     +------+
What's the linear velocity? It depends on the axis of the cuboid that
is parallel to the motion, right? And what's the length of that axis?
It's the volume divided by the area of the side that is perpendicular
to the motion.
Does that make sense?
- Doctor Ian, The Math Forum
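In code, the same idea for the hypothetical numbers above — a flow of 6 m^3/s through the 2 m × 3 m face perpendicular to the motion:

```python
Q = 6.0          # volumetric flow rate, m^3/s
A = 2.0 * 3.0    # cross-sectional area perpendicular to the motion, m^2
v = Q / A        # linear velocity, m/s
print(v)         # → 1.0
```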
|
{"url":"http://mathforum.org/library/drmath/view/61185.html","timestamp":"2014-04-17T21:49:57Z","content_type":null,"content_length":"6142","record_id":"<urn:uuid:8ecd6449-9fe1-4074-b6ff-6b80e6ef9ecd>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
|
North Brentwood, MD Algebra 2 Tutor
Find a North Brentwood, MD Algebra 2 Tutor
...I have taught multiple people how to swim and I personally tutored more than 20 people in stroke, turn, and start technique for all four strokes. I am a college graduate who will apply to
medical school in the coming year. I took over 20 MCAT practice tests, averaging a score of 35 with a high score of 37.
39 Subjects: including algebra 2, reading, chemistry, Spanish
...I was on Dean's List in Spring 2010 and hope to be on it again this semester. I would like to help students understand better course materials and what is integral in extracting information
from problems and solving them. I would like to see students try solving problems on their own first and treat me with respect so that it can be reciprocated.
17 Subjects: including algebra 2, chemistry, physics, calculus
...I completed a B.S. degree in Applied Mathematics from GWU, graduating summa cum laude, and also received the Ruggles Prize, an award given annually since 1866 for excellence in mathematics. I
minored in economics and went on to study it further in graduate school. My graduate work was completed...
16 Subjects: including algebra 2, calculus, statistics, geometry
...I am a professional interpreter and language tutor born and educated in Russia. The Russian school system puts a lot of focus on developing excellent math skills and teaching outstanding
abilities to use applied math in everyday life. I pride myself in having acquired excellent math skills, and I will be glad to help my students improve their math skills.
10 Subjects: including algebra 2, calculus, ESL/ESOL, algebra 1
...My coursework and previous experience suit me for working with kids of all ages, and my coaching experience (U3-U19) has allowed me to understand and see best practices of teaching and
coaching of all ages. I was a varsity swimmer and County All Star Senior year. I placed top 8 in county division in every individual event Junior and Senior year.
16 Subjects: including algebra 2, statistics, geometry, algebra 1
Related North Brentwood, MD Tutors
North Brentwood, MD Accounting Tutors
North Brentwood, MD ACT Tutors
North Brentwood, MD Algebra Tutors
North Brentwood, MD Algebra 2 Tutors
North Brentwood, MD Calculus Tutors
North Brentwood, MD Geometry Tutors
North Brentwood, MD Math Tutors
North Brentwood, MD Prealgebra Tutors
North Brentwood, MD Precalculus Tutors
North Brentwood, MD SAT Tutors
North Brentwood, MD SAT Math Tutors
North Brentwood, MD Science Tutors
North Brentwood, MD Statistics Tutors
North Brentwood, MD Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Berwyn Heights, MD algebra 2 Tutors
Bladensburg, MD algebra 2 Tutors
Brentwood, MD algebra 2 Tutors
Colmar Manor, MD algebra 2 Tutors
Cottage City, MD algebra 2 Tutors
Edmonston, MD algebra 2 Tutors
Fairmount Heights, MD algebra 2 Tutors
Hyattsville algebra 2 Tutors
Landover Hills, MD algebra 2 Tutors
Mount Rainier algebra 2 Tutors
Riverdale Park, MD algebra 2 Tutors
Riverdale Pk, MD algebra 2 Tutors
Riverdale, MD algebra 2 Tutors
University Park, MD algebra 2 Tutors
West Hyattsville, MD algebra 2 Tutors
|
{"url":"http://www.purplemath.com/North_Brentwood_MD_algebra_2_tutors.php","timestamp":"2014-04-17T01:00:48Z","content_type":null,"content_length":"24854","record_id":"<urn:uuid:c45cd6bf-6ab8-4c17-b8f7-c9a56c81856e>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum - Ask Dr. Math Archives: Middle School Factoring Numbers
Browse Middle School Factoring Numbers
Stars indicate particularly interesting answers or good places to begin browsing.
Working with variables?
Try Middle School Factoring Expressions.
Selected answers to common questions:
Finding a least common denominator (LCD).
LCM, GCF.
Prime factoring.
Table of factors 1-60.
Is there a statement that describes product-perfect numbers?
The product of digits in the number 234 is 24: 2*3*4 = 24. Can you describe a general procedure for figuring out how many x-digit numbers have a product equal to p, where x and p are counting
numbers? How many different three-digit numbers have a product equal to 12? to 18? Why?
How many rectangles can you make with 10 small squares? 5 small squares? 12 small squares?
How many rectangular solids can be made from "n" cube-shaped blocks?
How do you reduce fractions with different denominators?
What does the term relatively prime mean, and how can you determine if two numbers are relative primes?
Finding and eliminating (cancelling) prime factors.
Using factor trees to prime factor and simplify 188/240.
Doctor Rick, an eleven year-old, and her father apply least common multiples, modular arithmetic, and the Chinese Remainder Theorem to reason their way to the smallest number which when divided
by 3, 7, and 11 leaves remainders 1, 6, and 5, respectively.
Where can I find a factor sheet with the factors 1 - 100?
How do I figure out the next 2 numbers in the pattern 1, 8, 27, 64, ____, ____?
Can you give my some tips to help me simplify fractions?
Two mathematicians are each assigned a positive integer. They are told that the product of the two numbers is either 8 or 16. Neither knows the other's number...
The GCF of two numbers is 20 and the LCM is 840. One of the numbers is 120. Explain how to find the other number and use the Venn diagram method to illustrate.
How do I find the GCD of three integers using Euclid's Algorithm? I am not sure where you plug the third integer into the algorithm.
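The last two questions can be checked with a short sketch. Euclid's algorithm extends to three integers by applying it pairwise, and since GCF × LCM equals the product of the two numbers, the Venn-diagram question solves directly (the helper names below are mine):

```python
from functools import reduce

def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    while b:
        a, b = b, a % b
    return a

# For three integers, plug in pairwise: gcd(a, b, c) = gcd(gcd(a, b), c)
assert reduce(gcd, [24, 36, 60]) == 12

# GCF * LCM = product of the two numbers, so the other number is:
gcf, lcm, known = 20, 840, 120
other = gcf * lcm // known
print(other)  # 140
assert gcd(other, known) == gcf
```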
|
{"url":"http://mathforum.org/library/drmath/sets/mid_factornumb.html?s_keyid=39754982&f_keyid=39754984&start_at=81&num_to_see=40","timestamp":"2014-04-21T10:19:08Z","content_type":null,"content_length":"15738","record_id":"<urn:uuid:cc5b62f1-0c2e-4e15-9216-55f64c8be6e6>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Holliston Science Tutor
Find a Holliston Science Tutor
...My main enterprise is a management consulting firm. I am considered an expert on business innovation, business-model innovation, and entrepreneurship. Rather than being someone who just teaches
how to do better, I'm someone who has led a life of high accomplishment.
55 Subjects: including ACT Science, philosophy, geology, English
...I am a native Russian speaker. I studied Russian literature (in Russian) for state high school exams (like the AP exams in the US). Through the years, I taught reading and writing of the Russian alphabet (Cyrillic) to children, and worked on pronunciation and vocabulary with both adults and children.
23 Subjects: including genetics, microbiology, physical science, anatomy
Hello, I am a patient, friendly and knowledgeable chemistry tutor with six years of experience tutoring and teaching. I enjoy helping students develop the knowledge, skills, and concepts to
understand the world of chemistry and to succeed in their classes, exams, and careers. I have a bachelor’s ...
3 Subjects: including chemistry, physics, organic chemistry
...As I was a graduate student myself at the US school and have been a teacher at the US veterinary school, I understand an individual learning process and can tailor to fit students’ needs. I
have taught all age groups from kindergartner to graduate/professional students during my own teaching car...
11 Subjects: including zoology, calculus, geometry, precalculus
...As a 'Refresher's course" I am currently taking an on-line course given by Prof. Eric Lander of MIT, where he devotes 14 lectures altogether to Genetics, Molecular Biology, Recombinant DNA and
Genomics. During my Ph.D. and Post-doctoral research, I worked with Drosophila melanogaster, an organism amenable to genetic manipulations.
14 Subjects: including zoology, biology, geometry, GRE
|
{"url":"http://www.purplemath.com/Holliston_Science_tutors.php","timestamp":"2014-04-18T21:46:08Z","content_type":null,"content_length":"23827","record_id":"<urn:uuid:db69f805-07c2-4841-a67c-6fd1977bd2e3>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math 302
Spring 2008
Prof. Donnay
Course Play by Play
Wk 1: Wed Jan 23: Intro to course. Problems in the world. Real Analysis can help solve (some of) them. Dynamical systems worksheet. Iteration. What topic from Math 301 does this remind you of?
For Friday, finish worksheet up through example (4): f(x) = e^x.
Fri Jan 25: Review of worksheet. What happens to iteration of different initial values? Make conjectures about possible behaviors. Counter-example to conjecture. List of analysis terms related to
dynamical systems:
- sequence, bounded or unbounded sequence, (monotone) increasing or decreasing sequence, convergence, limit, continuous function, subsequences, Bolzano-Weierstrass Theorem (any bounded sequence has a convergent subsequence), Monotone Convergence Theorem (a bounded, monotone sequence converges – useful for proving existence of limits!), divergence (either to infinity or via oscillation), Cauchy sequences.
We did not cover all the topics that are on the hw sheet for next week so only do the following:
#1 using absolute value notation (not with distance notation); #4.
Wk 2: Monday Jan 28. Dr. Francl guest lecture on Quantum Mechanics.
Wed Jan 30: Review definition of limit of sequence in R using absolute value notation. Rewrite this definition using distance notation. Consider sequences in R^2, limit of sequence, definition using
notion of standard Euclidean distance in R^2 (R^n).
Metric space (B: Sect 11.4). Please finish homework worksheet 1 for Friday.
Friday Feb 1: Definition of limit in a metric space. Visualization of Epsilon nbd in R^2. Example of function space C[0,1]. Explorations of what might make a good distance between functions (metric);
worksheet. Formal definition of metric space.
Wk 3: Mon Feb 4. Using the definition of metric, proved that R^2 with the Euclidean distance is a metric. Proved theorem relating limit in R^2 to limits in R. Taxi cab metric, discrete metric.
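The three metrics on R^2 discussed in class can be compared side by side; a small Python sketch (the function names are mine):

```python
import math

def d_euclid(p, q):
    # standard Euclidean distance on R^2
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_taxicab(p, q):
    # sum of coordinate-wise distances
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_discrete(p, q):
    # 0 if the points coincide, 1 otherwise
    return 0 if p == q else 1

p, q = (0.0, 0.0), (3.0, 4.0)
print(d_euclid(p, q), d_taxicab(p, q), d_discrete(p, q))  # 5.0 7.0 1
```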
Wed Feb 6: Review of Cauchy sequences. Complete, non-complete metric spaces. Open, closed balls in metric spaces. Quiz.
Friday Feb 8: Open/Closed sets. Limit points. Sequences of functions using Mathematica. (Mathematica notebook, pdf copy of notebook). Pointwise limit of functions.
Wk 4: Mon Feb 11: Homework extension from Wed until Friday. Discussion of how to use triangle inequality from R to prove that triangle inequality holds for d_infinity and d_1 metrics. Limits of
sequences of functions. Group worksheet.
Wed Feb 13: Whole space, null set are both open and closed. Discussion of pointwise convergence of sequence of functions vs uniform convergence.
Friday Feb 15: Uniform convergence. How to prove non-uniform convergence.
Wk 5: Mon Feb 18: Review of continuity. Delta-epsilon cheer. Continuity of functions between metric spaces. Uniform limit of continuous functions is a continuous function.
Wed Feb 20: Equivalence of uniform convergence and convergence in the sup metric. Taylor polynomials as an example of sequence of functions. Taylor polynomial Mathematica notebook.
Friday Feb 22: More Taylor polynomials for e^x and the regions on which they are good approximations to the e^x.
Wk 6: Mon Feb 25: Taylor Remainder Theorem and application to prove pointwise and uniform convergence.
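The convergence of Taylor polynomials can also be explored numerically by estimating the sup-norm error on [-1, 1] with a grid; a Python sketch of the idea:

```python
import math

def taylor_exp(x, n):
    # degree-n Taylor polynomial of e^x about 0
    return sum(x**k / math.factorial(k) for k in range(n + 1))

xs = [i / 100 for i in range(-100, 101)]  # grid standing in for the sup over [-1, 1]
for n in (2, 5, 10):
    err = max(abs(math.exp(x) - taylor_exp(x, n)) for x in xs)
    print(n, err)  # the sup-norm error shrinks as n grows
```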
Wed Feb 27: Review for test. See exam review outline guide.
Friday Feb 29: Definition review with groups. Review of pointwise, uniform convergence with epsilon proofs.
Wk 7: Mon March 4: Introduction to infinite series. Limit of partial sums. Geometric series.
Wed March 5: Use Mathematica to evaluate partial sums of series. Conjecture whether the limit will exist or not (ie whether series converges). Mathematica commands for series, sums.
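The same partial-sum experiment can be reproduced without Mathematica; a minimal Python sketch for the geometric series:

```python
def partial_sum(r, N):
    # s_N = sum of r^n for n = 0..N; converges to 1/(1 - r) when |r| < 1
    return sum(r**n for n in range(N + 1))

for N in (5, 10, 20):
    print(N, partial_sum(0.5, N))  # approaches 1/(1 - 0.5) = 2

print(partial_sum(2, 10))  # 2047: the partial sums diverge when |r| >= 1
```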
Friday March 7: test due. Cauchy Criterion, Comparison Test.
Wk 8: Mon March 17: Alternating Harmonic Series as example of alternating series.
Wed March 19: Absolutely convergent series are convergent.
Friday March 21: review of key ideas about series and typical confusions. Favorite examples of series that illustrate various properties. Start mini-presentation process (see instructions).
Wk 9: Mon March 24
Wed March 26: Proof of convergence of Sum (1/n^2) using comparison with geometric series. Student presentations on various series tests. Root test.
Friday March 28: Ratio and Limit Comparison test.
Wk 10: Mon March 31: Integral test. Feedback questionnaire.
Wed April 2: Intuition behind conv/div of series
Fri April 4: Series of Functions. (Sect 9.4)
Wk 11: Mon April 7: M-test,
Wed April 9: Theorem of uniform convergence of sequence of functions, proofs
Fri April 11: radius of convergence of power series, interchanging integral and limit.
Wk 12: Mon April 14: Loose ends: limits of form n^1/n ; interchange limit and integral; radius of convergence of power series, uniform convergence.
Wed April 16: review of topics for midterm.
|
{"url":"http://www.brynmawr.edu/math/people/donnay/vjdwebpage/Teaching/vjdmath302webS08/302PlayByPlayS08.htm","timestamp":"2014-04-20T23:28:55Z","content_type":null,"content_length":"17376","record_id":"<urn:uuid:20620951-ae97-4e53-a48b-63038bc61353>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear Algebra - Determinant Properties
I hate to disagree with gabbagabbahey, but you haven't answered the questions at all!
1. The problem statement, all variables and given/known data
1. Give an example of a 2x2 real matrix A such that A^2 = -I
2. Prove that there is no real 3x3 matrix A with A^2 = -I
2. Relevant equations
I think these equations would apply here?
det(A^x) = (detA)^x
det(kA) = (k^n)detA (A being an nxn matrix)
det(I) = 1
3. The attempt at a solution
Would I use above equations with this question? This is what I did so far; I don't know if I'm off in answering this question...
I wrote:
It is a 2x2 matrix, so n = 2
det(A^2) = det(-I)
(detA)^2 = ((-1)^2)detI
(detA)^2 = detI (and detI = 1)
Therefore, detA * detA must = 1; so could I use the identity matrix itself as a matrix example for A:
A =
[1 0
0 1]
Then, detA * detA = 1 = detI
Does this make sense? Or am I not allowed to use the identity matrix here?
Have you forgotten what the question asked? You were asked to find A such that A^2 = -I. The example you give has A^2 = I, not -I.
I wrote:
It is a 3x3 matrix, so n = 3
det(A^2) = det(-I)
(detA)^2 = ((-1)^3)detI
(detA)^2 = -(detI)
(detA)^2 = -1
Then, can I just say that since (detA)^2 is always positive since it is squared... therefore, (detA)^2 can never equal -1, and there is no real 3x3 matrix A with A^2 = -I
Yes, this part is correct.
Thanks a lot for the help!
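For part 1 of the original question, one standard example (not given in this thread, so treat it as a suggestion) is the 90-degree rotation matrix. A quick check in plain Python:

```python
def matmul2(X, Y):
    # multiply two 2x2 matrices given as nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Rotation by 90 degrees: applying it twice rotates by 180 degrees, i.e. negates.
A = [[0, -1],
     [1,  0]]
A2 = matmul2(A, A)
print(A2)  # [[-1, 0], [0, -1]], which is -I
```

Note that det(A) = 1, so (detA)^2 = 1 = det(-I), consistent with the determinant argument above.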
|
{"url":"http://www.physicsforums.com/showthread.php?t=271038","timestamp":"2014-04-17T21:33:46Z","content_type":null,"content_length":"55605","record_id":"<urn:uuid:e29c7f50-c8ff-4897-95ee-0c7ef9625bac>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Volo, IL Precalculus Tutor
Find a Volo, IL Precalculus Tutor
...I have had many physics courses and much related subject matter in my under-graduate engineering coursework and graduate work in applied mathematics. I bring a diverse background to the
tutoring sessions. I thoroughly enjoy tutoring ACT Math due to the diversity of subject matter.
18 Subjects: including precalculus, physics, calculus, GRE
...Many students who hated math started liking it after my tutoring. That is my specialty. I provide guidance based on their attitude interest and ability.
12 Subjects: including precalculus, calculus, trigonometry, statistics
...Qualification: Masters in Computer Applications My Approach : I assess the child's learning ability in the first class and then prepare an individual lesson plan. I break down math problems
for the child, to make him/her understand in an easy way. I work with the child to develop his/her analytical skills.
8 Subjects: including precalculus, geometry, algebra 1, ACT Math
...In addition, I taught a class called "Introduction to the C Programming Language" at Daley College, one of the City Colleges of Chicago, for several years. Computer programming involves
designing, writing, testing and refining source code. The purpose of computer programming is to build a set o...
67 Subjects: including precalculus, chemistry, Spanish, English
...You know, when you are working on something and it finally clicks; when the light-bulb comes on. I am willing to work with groups, between two to seven people. Special rates may be available
(per individual) depending on what the subject matter is, and how often you are wanting to meet.
10 Subjects: including precalculus, geometry, algebra 2, algebra 1
|
{"url":"http://www.purplemath.com/volo_il_precalculus_tutors.php","timestamp":"2014-04-20T09:06:28Z","content_type":null,"content_length":"24051","record_id":"<urn:uuid:fb84063d-4e5d-4b1c-956f-bb35a1554e46>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sonnet primes in Python
A while back I wrote about sonnet primes, primes of the form ababcdcdefefgg where the letters a through g represent digits and a is not zero. The name comes from the rhyme scheme of an English
(Shakespearean) sonnet.
In the original post I gave Mathematica code to find all sonnet primes. This post shows how to do it in Python.
from sympy.ntheory import isprime
from itertools import permutations

def number(t):
    # turn a tuple of digits (a, b, c, d, e, f, g) into the pattern ababcdcdefefgg
    return (10100000000000*t[0] + 1010000000000*t[1]
            + 1010000000*t[2] + 101000000*t[3]
            + 101000*t[4] + 10100*t[5]
            + 11*t[6])

sonnet_numbers = (number(t) for t in
                  permutations(range(10), 7) if t[0] != 0)
sonnet_primes = filter(isprime, sonnet_numbers)
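As a sanity check on the digit pattern, the coefficient form can be compared against building the string "ababcdcdefefgg" directly (a sketch; both helper names are mine):

```python
def number_str(t):
    # build the 14-digit pattern ababcdcdefefgg from the digits directly
    a, b, c, d, e, f, g = t
    return int(f"{a}{b}{a}{b}{c}{d}{c}{d}{e}{f}{e}{f}{g}{g}")

def number_coeff(t):
    # the coefficient form used in the post
    return (10100000000000*t[0] + 1010000000000*t[1]
            + 1010000000*t[2] + 101000000*t[3]
            + 101000*t[4] + 10100*t[5] + 11*t[6])

for t in [(1, 0, 2, 0, 3, 4, 5), (9, 8, 7, 6, 5, 4, 3)]:
    assert number_str(t) == number_coeff(t)
print("pattern and coefficient forms agree")
```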
Rather than filter, I would have used
sonnet_primes = (number for number in sonnet_numbers if isprime(number))
It is a bit more verbose, but it looks more obvious to me, especially in the context of mathematics
Love the alignment of your code! Makes everything look beautiful, even poetic.
[...] I could rewrite them using Python. For example, I rewrote my code for finding sonnet primes in Python a few days ago. Next I wanted to try testing the Narcissus [...]
Tagged with: Python, SymPy
Posted in Python
|
{"url":"http://www.johndcook.com/blog/2013/01/08/sonnet-primes-in-python/","timestamp":"2014-04-18T08:40:14Z","content_type":null,"content_length":"29879","record_id":"<urn:uuid:4f9318a1-d391-46ea-bf51-6f2b7539703a>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Understanding Terrestrial Multipath Fading Phenomena
Signal-fading phenomena can drastically affect the performance of a terrestrial communications system. Often caused by multipath conditions, fading can degrade the bit-error-rate (BER) performance of
a digital communications system, resulting in lost data or dropped calls in a cellular system. A key to preventing loss of radio performance is to understand the nature of multipath fading phenomena
in terrestrial communications systems and how to anticipate when such phenomena may be a concern.
Fading can occur in many forms, including a phenomenon called flat fading. In flat fading, the same degree of fading takes place for all of the frequency components transmitted through a radio
channel and within the channel bandwidth. That is, all the frequency components of the transmitted signal rise and fall in unison.
In contrast, frequency-selective fading causes different frequencies of an input signal to be attenuated and phase shifted differently in a channel. Frequently, channels experiencing
frequency-selective fading may require an equalizer to achieve the desired performance. Frequency-selective fading gives rise to notches in the frequency response of the channel. Equalization
techniques attempt to restore the memory less (flat fading) nature of the channels. With the proper equalization, it is possible to transmit at higher data rates before the onset of intersymbol
interference (ISI) is apparent in the time domain.
Frequency-selective fading can be viewed in the frequency domain, although in the time domain, it is called multipath delay spread. The simplest measure of multipath is the overall time span of path
delays from the first pulse to arrive at the receiver (the bona fide direct signal) to the last pulse to arrive at the receiver (the multipath echo). This applies to a particular threshold (not
thermal) above which the last echo is significant (Fig. 1). This spread has also been referred to as the excess delay spread. The two parameters most often used as statistical designators of the
multipath channels are the average time delay and the delay spread. The first moment of the time-delay profile is the mean delay, and the square root of its central moment (about the mean) is defined
as the delay spread. The delay spread of different systems may be comparable, but the activity of signal components (the number of echoes and their amplitudes) may be much different. The first moment
of the time-delay profile is:

mean delay = Σ τ[k]P(τ[k]) / Σ P(τ[k])

and the second central moment (variance) is given as:

σ[τ]^2 = Σ (τ[k] − mean delay)^2 P(τ[k]) / Σ P(τ[k])

Therefore, the root-mean-square (RMS) delay spread is:

τ[rms] = [σ[τ]^2]^(1/2)
Figure 1a offers a simple example of delay spread. It shows the pulse for the direct path and for the indirect path (multipath). The first pulse arrives at τ[o] after the transmitted pulse and the
echo at τ[2]. The difference in time is the excess delay spread. In an actual system, there are multiple paths and in the interval of the delay spread there is a raft of echo pulses, and not just two
discrete paths as shown in the figure. This may also be a continuum and may not be uniform, but more concentrated near τ[o], and falling off in amplitude as the signals travel further in time. Of
course, it is possible that some farther-out pulses may be bigger than the original pulse caused, for example, by specular reflections (forward scatter).
Frequency fading due to time dispersion is also known as ISI. Delay spread in time causes ISI, in which there is time dispersion of the signal. The time dispersion sets a limit on the speed at which
modulated symbols can be transmitted in the channel. Because of the dispersion, symbols can collide and result in distorted output data. In this type of fading, the differences in delay between the
various reflections arriving at the receiver can be a significant fraction of the data symbol interval, establishing conditions for overlapping symbols.
If the time-delay spread equals zero, there is no selective fading. As a rule of thumb, a channel can be considered flat when τ[rms]/T is less than 0.1 where T is the symbol period. Statistical
analysis can determine the range of frequencies over which a channel can be considered flat—that is, all received signal levels are approximately comparable in magnitude and the phases are
approximately in unison (do not negate each other). Using the statistical approach, the coherence bandwidth is defined as the bandwidth over which the fading statistics are correlated to better than
90 percent (Fig. 2). Clearly, if two frequencies do not fall within the coherence bandwidth, they will fade independently.
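The delay-spread statistics and the τ[rms]/T < 0.1 rule of thumb can be illustrated numerically; the power-delay profile below is invented for the example:

```python
import math

# (delay in microseconds, relative power) pairs -- illustrative values only
profile = [(0.0, 1.0), (1.0, 0.4), (3.0, 0.2), (5.0, 0.1)]

total = sum(p for _, p in profile)
mean_delay = sum(t * p for t, p in profile) / total
variance = sum((t - mean_delay)**2 * p for t, p in profile) / total
rms_spread = math.sqrt(variance)

# Flat-fading rule of thumb: rms_spread / T < 0.1, so the symbol period
# should exceed roughly 10 * rms_spread for the channel to look flat.
min_symbol_period = 10 * rms_spread
print(mean_delay, rms_spread, min_symbol_period)
```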
Consider the multipath model depicted in Fig. 1b. It shows a single reflector of the transmitted signal. A pulse is propagated toward the receiver via a direct path and a single bounce path. The
figure portrays simple conditions; in fact, the echoes are numerous. The delay is indicated by τ[2] relative to the direct path where τ[o] is assumed to be the epoch point. Clearly, the direct path
delay is shorter than the echo delay. At the receiver, the two signals are combined, with the differential delay being τ[Δ] seconds in length. From signal theory, this is analogous to a delay-line
canceller (Fig. 2).
An ideal delay line will produce an output signal which is an exact replica of the input signal, but delayed in time by τ[2]. A network with the property that e[out](t) = e[in](t) + a·e[in](t − τ[2]) would have to possess exp(−jωτ) as a transcendental transfer-function term, producing the delayed signal at t = τ[2] in response to an input at τ[o] (the epoch). The similarity to the propagation model illustrated in Fig. 1b should be apparent.
To understand the nature of signal delays for a given network, it is necessary to find the frequency-domain response (the frequency selectivity) of that network. Since the delayed pulse is assumed to
fall with the direct pulse upon their arrival at the receiver, the end result will be frequency-selective fading. Mathematically, taking the Fourier transform of both sides of the time-domain relation (Eq. 1) gives the system function H(jω):

H(jω) = 1 + a·exp(−jωτ)

If the multipath signal, a, is equal to a(t), the multipath function is time dependent, as would be prevalent for a mobile communications receiver. The system function H(jω) is complex, but it is the amplitude part that is of interest, and the frequency-domain characteristic of the network follows from analysis. Applying Euler's identity to exp(−jωτ) results in:

|H(ω)| = [1 + a^2 + 2a·cos(ωτ)]^(1/2)

For a = 1 (Eq. 3):

|H(ω)| = 2|cos(ωτ/2)|
Equation 3 is the amplitude frequency response of the channel, which can be plotted as shown in Fig. 3. Equation 3 goes to zero ("fade notches") when ωτ is an odd multiple of π, that is, at frequencies f = (2n + 1)/(2τ) for n = 0, 1, 2, .... For example, at a frequency of f = 1/(2τ), there will be a null in the transmission, and the receiver will exhibit zero output (frequency-selective fading). Other frequencies will experience various degrees of fading.
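The two-ray response and its fade notches can be sketched numerically (the delay and echo amplitude below are assumed values):

```python
import cmath
import math

def mag_H(f, a, tau):
    # magnitude of the two-ray channel response H(f) = 1 + a*exp(-j*2*pi*f*tau)
    return abs(1 + a * cmath.exp(-2j * math.pi * f * tau))

tau = 1e-6                       # 1 microsecond differential delay
print(mag_H(0.5e6, 1.0, tau))    # first notch at f = 1/(2*tau): ~0
print(mag_H(1.0e6, 1.0, tau))    # constructive at f = 1/tau: ~2
print(mag_H(0.5e6, 0.5, tau))    # weaker echo (a < 1): the notch does not reach zero
```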
Coherence bandwidth has been defined as the bandwidth over which the fading statistics are correlated to better than 90 percent (Fig. 3). Going back to Eq. 3 for values of a < 1, there is no complete
cancellation at the notches. The transfer output is undulatory (dotted curve in Fig. 3). As the multipath content becomes highly attenuated, the multipath approaches zero and receiver input reverts
to a "flat fade" condition (|H(ω)|=1). Clearly, the output is equal to the transmitted signal, but attenuated by the free-space loss.
The multipath null locations are also a function of frequency and dynamic differential delays between the direct and multipath signal components. The time τ required for a radio wave to travel a
given distance d in free space is τ = d/c, where c is the speed of the electromagnetic (EM) wave (in a vacuum). The delay time of the multipath signal after the arrival of the direct signal is found
from the difference between the direct and multipath signal distances, τ = (d[m] − d[d])/c, where τ is the differential delay.
The multipath model described here is rather simplistic since only a single discrete signal path has been considered. Under real-world conditions, a network would exhibit a multitude of discrete
multipath signals and possibly even a continuum of multipath signals (Fig. 1). It should be noted that the echo signals need not be smaller than the direct signal wave. If the difference in path
length is large, the fading characteristics will vary greatly even with small frequency separations.
Frequency-selectivity fading in the time domain is manifested as ISI or smearing in the time domain. Multipath is not always a bad thing, since there would be no cellular industry without it. A
multiplicity of randomly reflected and diffracted signals, reaching the cellular handset, with random amplitude and uniform phase distribution, assumes Rayleigh statistics. For the rare occasion when
there is a line-of-sight signal path to the base station, the statistics are Ricean in nature.
Combating time-dependent fading in a dynamic channel could be daunting. One solution is the use of adaptive equalization.^6 In this approach, an adaptive transversal equalizer filter is used to track
the fading multipath signals. Another approach is the use of multicarrier modulation, where the spectrum of the frequency selective channel is divided into a large number of parallel, independent,
and approximately flat subchannels. One example of this is orthogonal frequency division multiplexing (OFDM). OFDM is a multitone system using a multiplicity of juxtaposed tones, transmitted at some
rate R, where M independent tones in parallel will result in information transmitted at a rate of MR. These contiguous tones can combat ISI since the fading is flat for each tone over the entire
ensemble of (independent) tones. In this case, the symbol transmission rate (R) is much smaller than the coherence bandwidth of the channel (the tones). Since a condition results in which the
narrowband tones are subject to flat fading, there is therefore no ISI.
Spread-spectrum signals can also effectively minimize the effects of multipath distortion. For example, Fig. 4 shows two received signals: one is a direct signal while the other is a multipath
signal. If the multipath signal is delayed by one chip, it will be rejected by the receiver since it is no longer in sync with the timing of the reference source in the receiver. It is thus seen as
"hash" by the receiver.
1. D.M.J. Devaservatham, "Multiple Time Delay Spread in the Digital Portable Radio Environment," IEEE Communication Magazine, June 1987.
2. P. Monsen, "Fading Channel Communications," IEEE Communications Magazine, January 1980.
3. W.G. Newhall, et al., "Using RF Channel Sounding Measurements to Determine Delay Spread and Path Loss," RF Design, January.
4. J. Shapira, "Channel Characteristics for Land Cellular Radio, and Their Systems Implications," IEEE Antenna & Propagation Magazine, August 1992.
5. G.L. Turin, "Error Probabilities for Binary Ideal Reception Through Nonselective Slow Fading and Noise," Proceedings of the IRE, September 1958.
6. R. Lucky, "Automatic Equalization for Digital Communication," Bell System Technical Journal, April 1965, pp. 547-588.
7. R.S. Burington, Handbook of Mathematical Tables & Formulas, Handbook Publishing Co., 1958.
|
{"url":"http://mwrf.com/print/systems/understanding-terrestrial-multipath-fading-phenomena","timestamp":"2014-04-21T08:33:35Z","content_type":null,"content_length":"26496","record_id":"<urn:uuid:d7c77d63-2369-4794-ba97-3f4eb50e46c5>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Educational attainment at age 16
• All of the statistics below include all vocational equivalents (e.g. GNVQs). The precise group of pupils included varies from graph to graph (and, in some cases, from year to year) but, in broad
terms, is those taking GCSEs for the year in question (i.e. those deemed to be at the end of Key Stage 4). Whether independent schools and pupil referral units are included also varies from graph
to graph.
• Looking at the trends for various thresholds:
□ At the lowest threshold (no qualifications): 1% (6,000) of pupils in England obtained no qualifications in 2009/10. This proportion is much lower than either five years previously (3%) or a
decade previously (6%). However, the comparability of these statistics over time is actually very unclear, given that: the scope of what counts as a qualification has been widened; the range
of possible qualifications has increased; and some pupils without any qualifications are no longer included in the statistics.
□ At the middle threshold (less than 5 GCSEs of any grade): 7% (47,000) of pupils in England obtained fewer than 5 GCSEs in 2009/10. This proportion has fallen in each year since 2004/05, when
it was 10%. By contrast, the proportion had remained broadly unchanged between the late 1990s and the early 2000s. Note that some changes in definition in the mid 2000s mean that the time
series before and after that this are not strictly comparable.
□ At the highest threshold (less than 5 GCSEs at grade C or above): 25% of pupils did not achieve the higher threshold of 5+ GCSEs at grade C or above in 2009/10. This proportion has fallen in
each year of the last decade and the 25% now compares with 50% a decade ago.
• The proportion with few GCSEs is similar in all of the English regions.
• 15% of boys eligible for free school meals do not obtain 5 or more GCSEs. This compares with 10% for girls eligible for free school meals and 5% for boys not in eligible for free school meals.
• 16% of White British pupils eligible for free school meals do not obtain 5 or more GCSEs. This is a much higher proportion than that for any other ethnic group.
• Combining gender and ethnic group, 19% of White British boys eligible for free school meals do not obtain 5 or more GCSEs. This is a much higher proportion than that for any other combination of
gender, ethnic group and eligibility for free school meals.
• See the equivalent analyses for Scotland, Wales and Northern Ireland.
In a competitive job market, academic and vocational qualifications are increasingly important. Those without qualifications are at a higher risk of being unemployed and having low incomes (Machin, S., in Exclusion, Employment and Opportunity, CASE Paper No. 4, Atkinson, A. and Hills, J. (eds), 1998, p. 61). More generally, success in acquiring formal qualifications bolsters children's self-esteem,
and enhances development of self-identity.
This indicator reflects the importance of children acquiring formal qualifications. This is by no means the same statistic as that in common recent usage, namely the number failing to obtain at least
5 GCSEs at grade C or above. In the context of a report about poverty and exclusion, using a statistic which covers around half of all children seems inappropriate. Furthermore, at least implicitly,
it places no direct value on obtaining a slightly lower set of grades, for example, 4 GCSEs at Grade C.
The first graph shows the proportion of pupils failing to obtain five or more GCSEs (or vocational equivalent) at grade C or above in England. The data is split between those who obtain no GCSE
grades at all (either because they do not enter for exams or achieve no passes), those who do obtain some GCSEs but less than five, and those who obtain 5 or more GCSEs but less than 5 at grade C or
Note that the data pre- and post- 2004/05 is not strictly comparable:
• The precise group of pupils included in the statistics changed in 2004/05 from "pupils aged 15 at 31 August in the calendar year prior to sitting the exams" to "pupils at the end of Key Stage 4"
and no data on the old basis is now available. This had the effect of reducing the proportions not achieving each of the three thresholds (by around half a percentage point in each case) by
effectively excluding some pupils who obtained no qualifications.
• The scope of what was counted as an 'equivalent' in England was widened in 2003/04. In principle, this again had the effect of reducing the proportions not achieving each of the three thresholds,
but, in practice, the impact is only thought to have been material for the lower two thresholds (but not the highest threshold).
• The 'no GCSEs' threshold was changed in 2004/05 to 'no qualifications' and appears to now include "entry level qualifications which do not contribute towards GCSE grade G thresholds". This had
the effect of reducing the proportions not achieving the lowest of the three thresholds (no qualifications) but not the other two thresholds.
In terms of monitoring trends over time, these changes in definition are extremely unfortunate (and also make comparisons between England and the rest of the United Kingdom very dubious). However,
discussions with the relevant government department (the Department for Education) have made it clear that no completely consistent time series exists.
The second graph shows the absolute number of students who obtain either no GCSE grades at all or who do obtain some GCSEs but less than five. The same caveats apply to the time series as for the
first graph. Note that the trends are somewhat different from the first graph because of the changing total numbers of pupils.
The data source for the first two graphs is the Department for Education (DfE) statistical releases entitled GCSE and equivalent results in England. The data relates to England only. It covers all
schools, including independent schools and pupil referral units. Note that this is a different coverage than that for the third to fifth graphs, which exclude independent schools and pupil referral
units. These differences in scope make a material difference to the results. For example, in 2009/10, the proportion of children who did not obtain five or more GCSEs (or vocational equivalent) was
7.2% if independent schools and pupil referral units are included (as in the first two graphs) but only 5.3% if these schools are excluded (as in the third to fifth graphs). The reason for the differences in coverage is simply the data that DfE happens to collect and make available.
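The effect of coverage on the headline figure can be sketched numerically. The pupil counts below are hypothetical, chosen only so that the two quoted 2009/10 percentages (7.2% for all schools, 5.3% for maintained schools only) come out of the arithmetic; they are not the actual DfE counts.

```python
# Illustrative sketch: how the coverage of the statistics changes the
# headline proportion. Counts are hypothetical, chosen only to reproduce
# the 2009/10 figures quoted in the text (7.2% vs 5.3%).

def proportion_below_threshold(below, total):
    """Proportion of pupils not reaching the threshold, as a percentage."""
    return 100.0 * below / total

# Hypothetical pupil counts (NOT official figures).
maintained_total, maintained_below = 570_000, 30_210  # maintained schools
other_total, other_below = 65_000, 15_510             # independent + PRUs

all_schools = proportion_below_threshold(
    maintained_below + other_below, maintained_total + other_total)
maintained_only = proportion_below_threshold(maintained_below, maintained_total)

print(f"all schools:     {all_schools:.1f}%")      # matches the 7.2% in the text
print(f"maintained only: {maintained_only:.1f}%")  # matches the 5.3% in the text
```

The point of the sketch is simply that excluding a relatively small group of schools with very different attainment (pupil referral units in particular) can move the headline percentage by almost two points.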
The third and fourth graphs show, for the latest year, how the proportion of students without five or more GCSEs (or vocational equivalent) varies by pupil characteristics. In the third graph, the
data is shown separately by gender and whether or not the pupil is eligible for free school meals. In the fourth graph, the data is shown separately by ethnicity and whether or not the pupil is
eligible for free school meals. Since entitlement to free school meals is essentially restricted to families in receipt of out-of-work benefits, this should be thought of as a proxy for worklessness
rather than low income.
The data source for the third and fourth graphs is the English National Pupil Database. As with the first two graphs, the data relates to pupils in England at the end of Key Stage 4. However, unlike
the first two graphs, it covers maintained schools only - and excludes both independent schools and pupil referral units - as data on free school meals is only collected for maintained schools.
The fifth graph shows how, in the latest year, the proportion of students without five or more GCSEs (or vocational equivalent) varies by English region.
The data source for the fifth graph is the DfE statistical releases entitled GCSE and equivalent results in England. The data is for maintained schools only, and excludes both independent schools and
pupil referral units. It comprises those pupils at the end of Key Stage 4.
Overall adequacy of the indicator: medium. While the data itself is sound enough, the choice of the particular level of exam success is a matter of judgement.
• For a description of the education system in England (and Wales), including what stages correspond to what ages, see Wikipedia.
• See the 2007 Joseph Rowntree Foundation report entitled Tackling low educational achievement and the related 2007 Centre for Analysis of Social Exclusion report entitled Understanding low
achievement in English schools.
• See a collection of articles on the links between poverty and exam results by the National Literacy Trust.
Overall aim: Raise the educational achievement of all children and young people
Lead department
Department for Children, Schools and Families.
Official national targets
Increase the proportion of young children achieving a total points score of at least 78 across all 13 Early Years Foundation Stage Profile (EYFSP) scales - with at least 6 in each of the communication, language and literacy (CLL) and personal, social and emotional development (PSED) scales - by an additional 4 percentage points from 2008 results, by 2011.
Increase the proportion achieving level 4 in both English and maths at Key Stage 2 to 78% by 2011.
Increase the proportion achieving level 5 in both English and maths at Key Stage 3 to 74% by 2011.
Increase the proportion achieving 5A*-C GCSEs (and equivalent), including GCSEs in both English and maths, at Key Stage 4 to 53% by 2011.
Increase the proportion of young people achieving Level 2 at age 19 to 82% by 2011.
Increase the proportion of young people achieving Level 3 at age 19 to 54% by 2011.
Previous 2004 targets
Improve children's communication, social and emotional development so that, by 2008, 50% of children reach a good level of development at the end of the Foundation Stage and reduce inequalities
between the level of development achieved by children in the 20% most disadvantaged areas and the rest of England.
Raise standards in English and maths so that:
• by 2006, 85% of 11 year olds achieve level 4 or above, with this level of performance sustained to 2008; and
• by 2008, the number of schools in which fewer than 65% of pupils achieve level 4 or above is reduced by 40%.
Raise standards in English, maths, ICT and science in secondary education so that:
• by 2007, 85% of 14 year olds achieve level 5 or above in English, maths and ICT (80% in science) nationally, with this level of performance sustained to 2008; and
• by 2008, in all schools at least 50% of pupils achieve level 5 or above in each of English, maths and science.
By 2008, 60% of those aged 16 to achieve the equivalent of 5 GCSEs at grades A* to C; and in all schools at least 20% of pupils to achieve this standard by 2004, rising to 25% by 2006 and 30% by 2008.
Increase the proportion of 19 year olds who achieve at least Level 2 by 3 percentage points between 2004 and 2006, and a further 2 percentage points between 2006 and 2008, and increase the proportion of young people who achieve level 3.
Overall aim: Narrow the gap in educational achievement between children from low income and disadvantaged backgrounds and their peers
Lead department
Department for Children, Schools and Families.
Official national targets
Improve the average (mean) score of the lowest 20% of the Early Years Foundation Stage Profile (EYFSP) results, so that the gap between that average score and the median score is reduced by an
additional 3 percentage points from 2008 results, by 2011.
Increase the proportion of pupils progressing by 2 levels in English and maths at each of Key Stages 2, 3 and 4 by 2011:
• KS2: English 9 percentage points, maths 11 percentage points.
• KS3: English 16 percentage points, maths 12 percentage points.
• KS4: English 15 percentage points, maths 13 percentage points.
Increase the proportion of children in care at Key Stage 2 achieving level 4 in English to 60% by 2011, and level 4 in mathematics to 55% by 2011.
Increase the proportion of children in care achieving 5A*-C GCSEs (and equivalent) at Key Stage 4 to 20% by 2011.
Other indicators of progress
Achievement gap between pupils eligible for Free School Meals and their peers at Key Stages 2 and 4.
Proportion of young people from low-income backgrounds progressing to higher education.
Graphs 1 and 2
Columns: (a) no GCSE passes or equivalent; (b) at least 1 but less than 5 GCSEs or equivalent; (c) 5+ GCSEs or equivalent but less than 5 at grade C or above. The first three columns are percentages of pupils; the last two are numbers of pupils in thousands.

Year      (a) %   (b) %   (c) %    (a) 000s   (b) 000s
Those aged 15 at the start of the school year
1995/96   7.8%    6.1%    41.6%    46K        36K
1996/97   7.7%    5.9%    41.3%    45K        35K
1997/98   6.6%    5.9%    41.2%    38K        34K
1998/99   6.0%    5.5%    40.6%    35K        32K
1999/00   5.6%    5.5%    39.7%    33K        32K
2000/01   5.5%    5.6%    38.9%    33K        34K
2001/02   5.4%    5.7%    37.3%    33K        35K
2002/03   5.2%    6.0%    35.9%    32K        37K
2003/04   4.1%    7.1%    35.1%    26K        46K
At the end of Key Stage 4
2004/05   3.0%    7.1%    33.1%    19K        45K
2005/06   2.7%    7.2%    31.1%    18K        47K
2006/07   2.0%    7.1%    29.5%    13K        47K
2007/08   1.4%    7.0%    26.3%    9K         46K
2008/09   1.1%    6.6%    22.3%    7K         42K
2009/10   1.0%    6.3%    17.4%    6K         40K
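The headline series in the first graph are simple sums of these components; the sketch below reconstructs them from the 2009/10 row. Note that summing the published rounded percentages gives 7.3% for the lower threshold, against the 7.2% quoted in the text, which is computed from unrounded data.

```python
# Sketch: reconstructing the headline proportions in the first graph
# from the three components in the 2009/10 row of the table above.
no_passes = 1.0            # no GCSE passes or equivalent (%)
under_five = 6.3           # at least 1 but less than 5 GCSEs or equivalent (%)
five_but_under_c = 17.4    # 5+ GCSEs but less than 5 at grade C or above (%)

# Lower threshold: failed to obtain any five GCSEs (or equivalent).
below_five_any_grade = no_passes + under_five
# Higher threshold: failed to obtain five GCSEs at grade C or above.
below_five_at_grade_c = below_five_any_grade + five_but_under_c

print(f"below 5 GCSEs (any grade):   {below_five_any_grade:.1f}%")
print(f"below 5 GCSEs at grade C+:   {below_five_at_grade_c:.1f}%")
```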
Graph 3
Group In receipt of free school meals Not in receipt of free school meals
Boys 15% 5%
Girls 10% 3%
Graph 4
Ethnic group In receipt of free school meals Not in receipt of free school meals
Bangladeshi 4% 4%
Black African 6% 3%
Black Caribbean 8% 5%
Indian 3% 2%
Pakistani 7% 4%
White British 16% 4%
White other 12% 7%
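A quantity of interest in graph 4 is the free-school-meals (FSM) gap: the percentage-point difference, within each ethnic group, between FSM and non-FSM pupils in the proportion without five or more GCSEs (or vocational equivalent). The sketch below computes it from the table above.

```python
# Sketch: the FSM attainment gap implied by graph 4, in percentage points,
# for each ethnic group (FSM proportion minus non-FSM proportion).
graph4 = {  # group: (FSM %, non-FSM %)
    "Bangladeshi":     (4, 4),
    "Black African":   (6, 3),
    "Black Caribbean": (8, 5),
    "Indian":          (3, 2),
    "Pakistani":       (7, 4),
    "White British":   (16, 4),
    "White other":     (12, 7),
}

gaps = {group: fsm - non_fsm for group, (fsm, non_fsm) in graph4.items()}
widest = max(gaps, key=gaps.get)
print(widest, gaps[widest])  # White British 12
```

The calculation makes the pattern in the graph explicit: the FSM gap is widest by far for White British pupils (12 percentage points) and close to zero for Bangladeshi pupils.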
Graph 5 (maintained schools only)
Region   No GCSE passes or equivalent   At least 1 but less than 5 GCSEs or equivalent
East 1.1% 4.4%
East Midlands 1.0% 4.4%
London 1.0% 3.7%
North East 1.0% 4.5%
North West 1.0% 4.5%
South East 1.0% 3.9%
South West 1.0% 4.0%
West Midlands 1.0% 4.1%
Yorkshire and The Humber 1.2% 4.5%