content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
A bijection between planar constellations and some colored
A bijection between planar constellations and some colored Lagrangian trees
Cedric Chauve
Constellations are colored planar maps that generalize different families of maps (planar maps, bipartite planar maps, bi-Eulerian planar maps, planar cacti, ...) and are strongly related to
factorizations of permutations. They were recently studied by Bousquet-Mélou and Schaeffer who describe a correspondence between these maps and a family of trees, called Eulerian trees. In this
paper, we derive from their result a relationship between planar constellations and another family of trees, called stellar trees. This correspondence generalizes a well known result for planar
cacti, and shows that planar constellations are colored Lagrangian objects (that is objects that can be enumerated by the Good-Lagrange formula). We then deduce from this result a new formula for the
number of planar constellations having a given face distribution, different from the formula one can derive from the results of Bousquet-Mélou and Schaeffer, along with systems of functional
equations for the generating functions of bipartite and bi-Eulerian planar maps enumerated according to the partition of faces and vertices.
Full Text:
GZIP Compressed PostScript PostScript PDF original HTML abstract page
|
{"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/dmtcs/article/viewArticle/174","timestamp":"2014-04-17T18:33:53Z","content_type":null,"content_length":"12805","record_id":"<urn:uuid:80cd68a7-70bc-4c4e-8f0d-59ef8e6fcf4e>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
|
South Colby Algebra Tutor
Find a South Colby Algebra Tutor
...I took the test twice and enrolled in the Kaplan prep class for the MCAT as well. I have been using MS Outlook since 1991. As a project manager, I used this program to send email communications
to my sponsors and other stakeholders on the project.
46 Subjects: including algebra 1, algebra 2, reading, English
...I tutored college students one-on-one in both French and math and worked with students to plan and revise essays in various subjects. I also wrote and helped distribute instructional materials
on writing English essays and structuring mathematical proofs. After graduating from Rice and moving t...
35 Subjects: including algebra 2, algebra 1, English, reading
...I have a master's degree in mathematics and an undergraduate physics minor. I have two years experience teaching college algebra at Eastern Washington University. While I love all things math,
I understand that it is not everyone's cup of tea, and I strive to make the material as approachable as any other subject for my students.
10 Subjects: including algebra 1, algebra 2, calculus, statistics
...I have been an educator for 15 years and have plenty of experience working with students of all ages and different educational environments. Since I was in college, I have worked as a math
tutor for college and high school students, a middle school math teacher, and an English teacher for all gr...
14 Subjects: including algebra 2, algebra 1, English, reading
...Though my degree is in English, and I certainly enjoy crafting essays to perfection, I also enjoy digging into history and science of all types, and other subjects as well. I'm also nearly
fluent in Spanish, and would be happy to converse with students taking Spanish classes. I like to communic...
39 Subjects: including algebra 2, algebra 1, English, reading
|
{"url":"http://www.purplemath.com/South_Colby_Algebra_tutors.php","timestamp":"2014-04-20T21:43:32Z","content_type":null,"content_length":"23868","record_id":"<urn:uuid:cb3ce1b4-f6e1-489c-ab1b-90d689e90431>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bus Service for Qubits
Spin-orbit qubits are right at home in electrical circuits
Qubit-based computing exploiting spooky quantum effects like entanglement and superposition will speed up factoring and searching calculations far above what can be done with mere zero-or-one bits.
To domesticate quantum weirdness, however, to make it a fit companion for mass-market electronic technology, many tricky bi-lateral and multi-lateral arrangements---among photons, electrons,
circuits, cavities, etc.---need to be negotiated.
A new milestone in this forward march: a Princeton-Joint Quantum Institute (JQI) collaboration announces the successful excitation of a spin qubit using a resonant cavity. The circuit, via the
cavity, senses the presence of the qubit as if it were a bit of capacitance. This result, published this week in Nature magazine (*), points toward the eventual movement of quantum information over
“bus” conduits much as digital information moves over buses in conventional computers.
Qubit Catalog
A qubit is an isolated system---a sort of toggle---which is maintained simultaneously in a mixture of two quantum states. Qubits come in many forms, such as photons in either of two polarization
states, or atoms oriented in either of two states, or superconducting circuits excited in either of two ways.
One promising qubit platform is the quantum dot, a tiny speck of semiconducting material in which the number of electrons allowed in or out can be severely controlled by nearby electrodes. In a
charge qubit the dot can co-exist in states where one or zero electrons are in the dot. In a spin qubit, two electrons (sitting in two neighboring dots), acting like a molecule, possess a composite
spin which can be up or down.
Quantum dots are easy to fabricate using established semiconductor technology. This gives them an advantage over many other qubit modes, especially if you want to build an integrated qubit chip with
many qubits in proximity. One drawback is that qubits in a dot are harder to protect against outside interference, which can ruin the delicate quantum processing of information. An example of this
unwanted decoherence is the magnetic interaction of the electron “molecule” with the nuclei of the very atoms that make up the dot.
Another challenge is to control or excite the qubit in the first place. That is, the qubit must be carefully incited into its two-states-at-once condition, the technical name for which is Rabi
oscillation. The Princeton-JQI (**) work artfully addresses all these issues.
Circuit QED
Indeed, the present work is another chapter in the progress of knowledge about electromagnetic force. In the 19th century James Clerk Maxwell set forth the study of electrodynamics in its classic
form. In the early and mid 20th century a quantum version of electrodynamics (QED) was established which also, incidentally, accommodated the revolutionary idea of antimatter and of the creation and
destruction of elementary particles.
More recently QED has been extended to the behavior of electromagnetic waves within metal circuits and resonant cavities. This cavity-QED or circuit-QED (cQED), provides a handy protocol for
facilitating a traffic between qubits and circuits: excitations, entanglement, input and readout, teleportation (movement of quantum information), computational logic, and protection against
Forty years ago this effort to marry coherent quantum effects with electronics was referred to as quantum electronics. The name used nowadays has shifted to quantum optics. “With the advent of the
laser, the focus moved into the optical domain,” says Jacob Taylor, a JQI fellow and NIST physicist. “It is only in the past few years that the microwave domain – where electronics really function –
has come into its own, returning quantum optics to its roots in electrical circuits.”
Cavities are essential for the transportation of quantum information. That’s because speed translates into distance. Qubits are ephemeral; the quantum information they encode can dissipate quickly
(over a time frame of nanoseconds to seconds) and all processing has to be done well before then. If, moreover, the information has to be moved, it should be done as quickly as possible. Nothing goes
faster than light, so transporting quantum information (at least for moving it from one place to another within a computer), or encoding the information, or entangling several qubits should be done
as quickly as possible. In this way information or processing in more distant nodes can take place.
Spin-Orbit Coupling
The JQI part of this spin-qubit collaboration, Jacob Taylor, earlier this year participated in research that established a method for using a resonant cavity to excite a qubit consisting of an ion
held simultaneously in two spin states. The problem there was a mismatch between the frequency at which the circuit operated and the characteristic frequency of the ion oscillating back and forth
between electrodes. The solution was to have the circuit and ion speak to each other through the intermediary of an acoustic device. (See related press release)
The corresponding obstacle in the JQI-Princeton experiment is that the circuit (in effect a microwave photon reflecting back and forth in a resonant cavity) exerts only a weak magnetic effect upon
the electron doublet in the quantum dot. The solution: have the cavity influence the physical movement of the electrons in the dot---a more robust form of interaction (electrical in nature)---rather
than the interact with the electrons’ spin (magnetic force).
Next this excitation is applied to the spin of the electron doublet (the aspect of the doublet which actually constitutes the quantum information) via a force called spin-orbit coupling. In this type
of interaction the physical circulation of the electrons in the dot (the “orbit” part) tangles magnetically with the spins of the nuclei (the “spin” part of the interaction) in the atoms composing
the dot itself.
It turns out this spin-orbit coupling is much stronger in indium-arsenide than in the typical quantum dot material of gallium-arsenide. This it was that material science was an important ingredient
in this work, in addition to contributions from the physics and electrical engineering departments at Princeton.
To recap: an electrical circuit excites the dot electrically but the effect is passed along magnetically to the qubit in the dot when the electrons in the dot move past and provoke an interaction
with the nuclei in the InAs atoms. Thus these qubits deserve to be called the first spin-orbit qubits in a quantum dot.
The influence works both ways. Much as the presence of a diver on the end of a diving board alters the resonant frequency of the board, so the presence of the spin qubit alters the resonant frequency
of the cavity, and so its presence can be senses. Conversely, the alteration, effected by the spin-orbit interaction, can be used to excite the qubit, at rates of a million per second or more.
Previously a quantum-dot-based charge qubit was excited by a cavity. So why was it important to make a quantum-dot qubit based on spin? "The spins couple weakly to the electrical field, making them
much harder to couple to than a charge,” said Taylor. “However, it is exactly this property which also makes them much better qubits since it is precisely undesired coupling to other systems, which
destroys quantum effects. With a spin-based qubit, this is greatly suppressed."
(*) See reference publication.
(**) The Joint Quantum Institute is operated jointly by the National Institute of Standards and Technology in Gaithersburg, MD and the University of Maryland in College Park.
"Circuit Quantum Electrodynamics with a Spin Qubit," K.D. Petersson, L.W. McFaul, M.D. Schroer, M. Jung, J.M. Taylor, A.A. Houck, J.R. Petta, Nature, 490, 380-383 (2012)
Quantum physics began with revolutionary discoveries in the early twentieth century and continues to be central in today’s physics research. Learn about quantum physics, bit by bit. From definitions
to the latest research, this is your portal. Subscribe to receive regular emails from the quantum world. Previous Issues...
Sign Up Now
Sign up to receive A Quantum Bit in your email!
|
{"url":"http://jqi.umd.edu/news/bus-service-qubits","timestamp":"2014-04-21T15:49:28Z","content_type":null,"content_length":"68629","record_id":"<urn:uuid:b2a4cdbb-a68e-43ea-a50a-1fc079b0eee3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Show that Z[i] is a PID.
March 29th 2011, 07:04 PM #1
Junior Member
Apr 2010
Show that Z[i] is a PID.
Show that $\mathbb{Z}[i]$ is a principal ideal domain.
I have already shown that $\mathbb{Z}[i]$ is an integral domain. How do I show that all ideals are principal ( $I=(r), r\in \mathbb{Z}[i]$)?
If you know, or can show by means of the norm, that that ring is an Euclidean one then you're done, otherwise I
can't see how to show directly that it is a PID...
Well, then you still can do something that is actually equivalent: show that the norm $N(a+bi):=a^2+b^2$
in $\mathbb{Z}[i]$ permits you to carry on "division with residue" just as the absolute value
allows us to do the same in the integers.
Once you have this proceed as with the integers to show the ring is a PID.
March 29th 2011, 07:08 PM #2
Oct 2009
March 29th 2011, 07:10 PM #3
Junior Member
Apr 2010
March 29th 2011, 07:14 PM #4
Oct 2009
|
{"url":"http://mathhelpforum.com/advanced-algebra/176265-show-z-i-pid.html","timestamp":"2014-04-21T09:09:25Z","content_type":null,"content_length":"40762","record_id":"<urn:uuid:3a9c22c7-4c69-47ce-9393-daaf9c8e92a9>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Programming with Algebraic Effects and Handlers
With Matija Pretnar.
Abstract: Eff is a programming language based on the algebraic approach to computational effects, in which effects are viewed as algebraic operations and effect handlers as homomorphisms from free
algebras. Eff supports first-class effects and handlers through which we may easily define new computational effects, seamlessly combine existing ones, and handle them in novel ways. We give a
denotational semantics of eff and discuss a prototype implementation based on it. Through examples we demonstrate how the standard effects are treated in eff, and how eff supports programming
techniques that use various forms of delimited continuations, such as backtracking, breadth-first search, selection functionals, cooperative multi-threading, and others.
Download paper: eff.pdf
ArXiv version: arXiv:1203.1539v1 [cs.PL]
To read more about eff, visit the eff page.
18 thoughts on “Programming with Algebraic Effects and Handlers”
1. This looks fascinating! Is it straightforward to explain why Eff could not simply be implemented within the syntax of an existing language, say Haskell or ML?
2. Well, ML has handlers just for exceptions and those cannot be used to get general handlers. I suppose you could emulate eff in ML with delimited continuations, but I think you would end up with a
“deep” embedding of eff.
People have tried to get something in Haskell, e.g., effects on Hackage. Judge for yourself.
3. I fail to understand why ressources have a different syntax than regular handler. Intuitively I would have thought they were just “handlers declared at toplevel for your convenience”, but it
seems like I missed something here.
4. Resources do not have access to the continuation, while ordinary handlers do.
It makes little sense for a “top-level handler” to be able to manipulate the continuation, since the continuation is open-ended in nature, i.e., it is not delimited. But this is not why we
defined resources. We defined resources as a direct implementation of the theoretical idea of co-models.
5. Well, you could have continuation that map to the empty type as toplevel continuations, but it still makes sense to consider these not too valuable.
On the other hand, I don’t see how comodels play in that picture. Am I missing something obvious?
6. A top-level continuation that maps to the empty type? Where would that come from? Or are you suggesting that there be a top-level “abort” operation (although that is an _operation_ whose return
type is empty). I’ll let Matija explain the comodels.
7. This is what I had in mind yes.
8. I have advocated a top-level abort operation, but was unable to convince Matija. So far.
9. For comodels, observe the “real-world” effectful behaviour of operations:
1. write takes a string, writes it out, and yields a unit value,
2. read takes a unit value, reads a string, and yields it,
3. random_int (say) takes integer bounds, generates a random integer according to the state of the pseudo-random generator, sets the new state, and yields the generated integer.
In general, an operation $op : A \to B$ takes a parameter of type $A$, interacts with the “real-world”, and yields a result of type $B$.
If you take $W$ to be the set of all possible worlds, operations correspond to maps $A \times W \to B \times W$ – they take the parameter and the current world state, and yield a result and the
new world state. And this is exactly what comodels for a given collection of operations are: sets $W$ equipped with a map $A \times W \to B \times W$ for each operation $op : A \to B$. As a
bonus, comodels are dual to models and thus a natural extension of the algebraic theory of effects.
Comodels are dubbed resources in eff and are implemented slightly differently. The main reason is that there is, of course, no datatype you can use to represent the set of all worlds $W$. For
this reason, I was at first inclined to drop any explicit mention of $W$. Instead, only special built-in effects (for example standard I/O channel) would have resources and those resources would
be implemented in effectful OCaml code, making $W$ implicit. Andrej argued that a programmer would still want to write her own resources, for example to implement a pseudo-random number
generator. Thus, we decided to equip each resource with a state only it can access, and provided a syntax with which a programmer defines the desired behaviour.
10. What about toplevel handlers or other alternatives to resources? First imagine how a handler should handle a toplevel (non-delimited) continuation. As soon as a handler applies it to some value,
the continuation never yields back control and whatever follows it in the handler is discarded. Furthermore, the continuation should be called at least once (I’ll discuss this later), otherwise
eff would abruptly exit.
Thus, each toplevel handler is more or less some computation followed by an application of the continuation to a given value. What if this computation triggers some operations? Since these
operations happen at the toplevel, we cannot handle them as they have escaped all handlers. We had an implementation that worked like this for some time, but there were no obvious advantages, all
while the implementation was hacked together, the behaviour was wildly unpredictable, and the (pretty neat, we probably agree) abstraction of effects that eff provides was broken.
In the end, we decided to allow only pure computations at the top-level. So, you have some pure computation that in the end passes a result to the toplevel continuation. But this is exactly what
a resource does, except that you only compute the result while the passing to the continuation is implicit.
What if you do not call the continuation, but instead call some special toplevel abort operation? What exactly should this operation be, if it is not a standard exception?
1. If it is a special extension of eff that can be used only in resources, why would we use it? One reason was that for some exceptions, you want to have a nicer error message than the usual
“Uncaught operation …#raise …”. For this reason, we have a built-in function exception (declared in the pervasives) that generates exceptions with resources that do just that. So we can get the
same benefit without any extensions.
2. If it is a special extension that can be used everywhere, it again breaks the abstraction of effects, as there is a way to perform effects without operations.
11. Thank you for the precisions.
12. Two comments:
1. It’s unnecessarily confusing that you use c for built-in constants and computations, esp. since there is only a short phrase mentioning the first. Can you choose another metavariable for
2. I’m confused about the typing of e#op, both in its own rule and in the handler rule. The only way to get an effect type E seems to be by introducing a computation with new, which is typed with
the $\vdash_c$ judgment, but the aforementioned rules with e#op require typing the effect e with $\vdash_e$ judgment.
13. @Sean: Thanks for the comments. Regarding constants, yes of course, that is an unecessary slip up on our part. Regarding your second point: you are right, so the typical way of getting a e#op is
to write something like let x = new E in .... x#op ... What did you expect? You could write (new E)#op in concrete syntax, but that would be useless and also it would be desugared into let x =
new E in x#op.
14. Ah, okay. So, you’re assuming a variable will be used as the expression e in e#op. I was missing where the coercion from computation to expression was taking place. It might be helpful to mention
that, since you already mention val for coercing an expression to a computation.
It appears that let is the only way to get an expression with an E type. Is it possible to have a non-variable expression in e? If not, perhaps it’s a simplification (and clarification) to
directly use a variable, as in x#op.
15. I am not “assuming” that a variable will be used in e#op. Rather, this is a consequence of the fact that the syntax does not have any exprssions of effect types, other than variables. I don’t
think anything would be simplified if we restricted e in e#op to variables. We would just break orthogonality, so this does not sound like a good idea to me.
16. I said “assuming” because you said “typical.” If it’s a consequence, then it’s not the typical way, it’s the only way. Since it’s the only way, I don’t see the orthogonality that is broken.
Anyway, this is all very nitpicky. We read your paper in our reading club at Utrecht, and one of the comments that came up was that it was fuzzy how computations and expressions were
distinguished. Now that I understand how you get from a computation to an expression, it’s more clear. But I think something could possibly be improved in the explanation.
17. I you have a suggestion, I’d be very happy to hear. Perhaps an example earlier in the paper? Video lecutures?
18. Pingback: Substructural Types in ABC | Awelon Blue
|
{"url":"http://math.andrej.com/2012/03/08/programming-with-algebraic-effects-and-handlers/","timestamp":"2014-04-17T04:22:29Z","content_type":null,"content_length":"46948","record_id":"<urn:uuid:c5bb9a49-0d8f-403d-b396-ecba332e9d83>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Asymptote Zoned
← hide menu
Home RSS Feeds
Funny Pictures Funny Videos
Funny GIFs YouTube Videos
Text/Links Channels
User Rankings • Image Boards >
NSFW Content
• NSFW Boards >
Asymptote Zoned
hay dim. you know flu ?
approaching infinit and all...
do dea; perus, l
2 3 ll 5
i love you like: one of my
How , derperson
thater nits: to know!
Views: 22017
Favorited: 35
Submitted: 01/22/2012
Share On Facebook Add to favorites Subscribe to andalitemadness E-mail to friend
Share image on facebook Share on twitter Share on StumbleUpon Share on Tumblr Share on Pinterest Share on Google Plus E-mail to friend
Anonymous commenting is allowed
|
{"url":"http://www.funnyjunk.com/funny_pictures/3214966/Asymptote+Zoned/","timestamp":"2014-04-21T07:31:44Z","content_type":null,"content_length":"129912","record_id":"<urn:uuid:003faf64-15e8-4242-aa81-fe82946af41c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On the de Rham cohomology of 1-forms in cotangent bundle.
up vote 1 down vote favorite
We know that a cotangent bundle $T^\star M$ has a canonical symplectic form and $M$ is a natural Lagrangian submanifold of it. A well known result is that any submanifold $X=\{(p,f(p)): p\in M\}$,
where $f$ is a closed one form is Lagrangian. Denote by $[f]$ the de Rham cohomology class of $f$. Assume that we flow $X$ in a Hamiltonian direction to $Y$, then $Y$ will be a Lagrangian submanifold
of $T^\star M$. My question is can we write it as $Y={(p,g(p)): p\in M}$ for some closed one form $g$.? If so, do we have $ [g]=[f]?$ Thanks in advance!
1 The "well known result" you refer to cannot be true if understood literally: for example, if $Y\subset X$ is a smooth submanifold, then the set of all covectors at points $y\in Y$ that annihilate
the tangent subspace $T_yY\subset T_yX$, is a Lagrangian submanifold as well. – Serge Lvovski Jan 9 '13 at 14:24
Tanks serge. I think I made a mistake here. I will edit my question. – Mathboy Jan 9 '13 at 15:37
2 Should "Hodge homology group" read "de Rham cohomology class"? – Tim Perutz Jan 9 '13 at 17:49
Yes, actually it should be called the de Rham cohomology. Thanks! – Mathboy Jan 10 '13 at 8:39
Did you mean de Rham cohomology clacc of $f$? – Serge Lvovski Jan 10 '13 at 9:31
show 2 more comments
2 Answers
active oldest votes
As has been noted in Peter Michor's answer, a Hamiltonian isotopy can certainly move the graph of a closed one-form to a Lagrangian submanifold that is not the graph of a one-form.
However, in the special case that the starting and ending submanifolds are both graphs of closed one-forms, the de Rham cohomology classes will in fact be equal. (It is important that we
are assuming here that the isotopy is Hamiltonian and not just symplectic.) This can be seen as follows:
Let $\lambda=\sum p_idq_i$ be the canonical one-form $T^{\ast}M$, so the symplectic form is $d\lambda$. In general, if $\iota:L\to T^{\ast}M$ is a Lagrangian embedding, then the Lagrangian
condition amounts to the statement that $\iota^{\ast}\lambda$ is closed. Consequently, there is a well-defined de Rham cohomology class $[\iota^{\ast}\lambda]\in H^1(L;\mathbb{R})$,
generally called the Liouville class. In the special case that $L=M$ and the Lagrangian embedding is a closed one-form $\sigma:M\to T^{\ast}M$ (viewed as a section), the definition of the
canonical one-form $\lambda$ is such that $\sigma^{\ast}\lambda=\sigma$. In particular, the Liouville class of a closed one-form is just the cohomology class of the one-form.
up vote 3
down vote I claim now that the Liouville class is invariant under Hamiltonian isotopies of the Lagrangian submanifold. In other words, if $\{\phi_t\}$ is a Hamiltonian isotopy, obtained as the flow
accepted of the Hamiltonian vector field $X_H$ of a function $H:T^{\ast}M\to \mathbb{R}$ (i.e. $i_{X_H}d\lambda=-dH$), and if $\iota$ is some Lagrangian embedding, I claim that the Liouville class
of $\phi_t\circ \iota$ is independent of $t$. (For ease of notation I'll assume $H$ is time-independent, but the time-dependent case is a straightforward modification.) Indeed this follows
fairly quickly from Cartan's magic formula, as follows: $$ \frac{d}{dt}(\phi_{t}\circ\iota)^{\ast}\lambda=\iota^{\ast}\left(\frac{d}{dt}\phi_{t}^{\ast}\lambda \right) =\iota^{\ast}\phi_{t}
^{\ast}\mathcal{L}_{X_H}\lambda=\iota^{\ast}\phi_{t}^{\ast}(di_{X_H}\lambda+i_{X_H}d\lambda) $$ $$=\iota^{\ast}\phi_{t}^{\ast}d\left(i_{X_H}\lambda-H\right) $$
which is exact; hence the cohomology class of $(\phi_{t}\circ\iota)^{\ast}\lambda$ is indeed independent of $t$. In particular if $\iota$ is equal to the section of $T^{*}M$ given by a
closed one-form $\sigma$ and if $\phi_1\circ\iota$ is equal to the section given by $\tau$ then looking at their respective Liouville classes shows that $\sigma$ and $\tau$ are
Thanks, Dear Mike. This is a perfect answer! – Mathboy Jan 11 '13 at 11:05
add comment
You can write $Y$ as the image of 1-form as long as $Y$ meets each fiber of $T^*M\to M$ exactly once and this transversally. For a short time this is so (locally on $M$ if you move the image
of a closed one-form by a Hamiltonian flow or even a symplectic flow (this is a smooth curve in the group of symplectic diffeomorphisms). If you look at $M=\mathbb R$, symplectic flows on $T
^*M$ are just volume preserving flows on $\mathbb R^2$, and you easily see that one can deform $X$ so that it becomes vertical or a curve meandering wildly, and you can do that faster as you
go to $\infty$ on $\mathbb R$, so that at no time $Y$ is still the graph of a 1-form (closed plays no role here, since all 1-forms are closed.
up vote
3 down Even if $Y$ stays the graph of a form, the de Rham cohomology class is not constant: Take $M=S^1$ then $T^*M$ is a cylinder, and you can move $X$ just up, which increases the integral, thus
vote the cohomology class.
Edit: Okay, take $T^*\mathbb R$. there every symplectiv flow is Hamiltonian since $H^1=0$. Here the vertical flow is a counterexample.
Thanks, Peter. This answer is helpful to me. – Mathboy Jan 10 '13 at 12:34
In your example on $S^1$, note that the vertical translation on the cylinder is a symplectic isotopy but is not Hamiltonian (i.e. is not given by the flow of the Hamiltonian vector field
of a time-dependent function--its flux is nonzero). So if one is restricting to Hamiltonian flows (as the OP seems to be doing) this example doesn't apply. – Mike Usher Jan 11 '13 at 1:05
Yes, I think if one moves $X$ by a Hamiltonian flow to $Y$, then the integral will be kept and so is the cohomology class. – Mathboy Jan 11 '13 at 8:57
add comment
Not the answer you're looking for? Browse other questions tagged symplectic-topology or ask your own question.
|
{"url":"http://mathoverflow.net/questions/118448/on-the-de-rham-cohomology-of-1-forms-in-cotangent-bundle?sort=newest","timestamp":"2014-04-17T10:16:16Z","content_type":null,"content_length":"67031","record_id":"<urn:uuid:173dc024-ae0e-43d0-b0a8-8b27269c593f>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Almost periodic mappings on topological spaces
Author: Pavel Dorin Status
Degree: doctor of physics and mathematics
Speciality: 01.01.04 - Geometry and topology The thesis was presented on the 19 November, 2009
Year: 2010 Approved by NCAA on the 11 February, 2010
Scientific adviser: Mitrofan Cioban
doctor habilitat, professor, Tiraspol State University
Scientific consultant: Laurenţiu Calmuţchi Abstract
doctor habilitat, professor, Tiraspol State University
Institution: Tiraspol State University – 0.32 Mb / in romanian
Scientific council: DH 01-01.01.04
Institute of Mathematics and Computer Science of the ASM
almost periodic functions, weakly almost periodic functions, lattice, compactification, algebraical compactification, universal algebra, semigroup of translations, stable pseudometrics
The thesis is dedicated to the study of the general problems of the theory of extensions of the topological universal algebras from the topological point of view. In particular, the concept of almost
periodicity on a universal algebra is investigated.
Objectives of the thesis are: the elaboration of the concept of the almost periodicity on a universal algebra; the investigation of the classes of almost periodic functions on a given universal
algebra; the construction of the compact algebraical extensions of topological universal algebras.
In the thesis there are solved the next problems: the elaboration of the methods of research of the spaces of almost periodic functions; the construction of the special compactifications of the n-ary
topological groupoids.
There are established: the space of (weakly) almost periodic functions is a Banach space; the space of (weakly) almost periodic mappings is complete; the space of weakly almost periodic functions
generates the Bohr compactification proposed by Alfsen and Holm; the space of almost periodic functions is determined by the stable totally bounded pseudometrics.
The basic results are new. The notion of complete compactity is essential. In the thesis, in particular, there are solved some concrete problems arised by J.E.Hart, K.Kunen and A.Cashu. Thee concept
of the almost periodicity is based on the notions of the semigroup of translations and the oriented semigroup of translations. This new point of view permits to establish the concepts of almost
periodicity and weakly almost periodicity on any universal algebra. For each universal algebra the Weil’s reduction is obtained.
|
{"url":"http://www.cnaa.md/en/thesis/14422/","timestamp":"2014-04-20T09:08:17Z","content_type":null,"content_length":"13019","record_id":"<urn:uuid:a6341aba-687d-4061-9299-e7526b3c0e56>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Geometric Photo Manipulation - The Algorithm Explained
The MATLAB programs perspective.m and orthographic.m are designed to produce the cube pictures of Figure 3, but the programs can be altered to generate an unlimited number of constructions, for
example, the one in Figure 1 and those to appear in the next section. Here is an outline for such programs:
1. Read the input image data into memory as an array. Let's call it inpic.
2. Assign values to the parameters required by the perspective projection method. (We limit this outline to the perspective case, but the orthographic method has a very similar outline.) It's a good
idea to express these values in terms of the dimensions of the input image so that one can experiment first with a very small input image and easily switch later to a larger image. This is
important because programs can take a long time to run if the input image is large.
3. Assign values to the estimated bounds on the calculated projection points, and initialize the output array called outpic by assigning the value "white" to each pixel. Also initialize the values
of the z-buffer array.
4. Loop just once through the pixels of the input image. For each pixel
-- there will be one set of formulas for each copy of the image that is to be included in the object (e.g., the three visible faces of the cube in Figure 3). This is the primary way in which one
construction differs from another. For each point
a. Calculate the projected point and the corresponding output array address;
b. Compute the distance from to the point , and proceed to (c) only if the distance is smaller than the one held in the z-buffer for that output array address;
c. Change the corresponding output array value (all three RGB levels) to inpic(i,j), and update the z-buffer value by assigning as the new value the distance computed in step b.
5. Save and/or print the output array as an image.
Tom Farmer, "Geometric Photo Manipulation - The Algorithm Explained," Loci (October 2005)
|
{"url":"http://www.maa.org/publications/periodicals/loci/joma/geometric-photo-manipulation-the-algorithm-explained","timestamp":"2014-04-25T00:29:22Z","content_type":null,"content_length":"99722","record_id":"<urn:uuid:06f2d31c-3e91-46a8-a6b7-16817d7d78b1>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Don't understand this summation problem
September 8th 2012, 10:53 AM #1
Don't understand this summation problem
$\sum\limits_{n = 1}^{2k} {{{\left( { - 1} \right)}^{n + 1}}n} = {\left( { - 1} \right)^{n + 1}}n$
$\sum\limits_{n = 1}^{2k} {{{\left( { - 1} \right)}^{n + 1}}} = {\left( { - 1} \right)^{n }}$
^Thanks to Plato for the Latex code
I just don't understand how to do these kind of things with a different letter on top. I know that I have to do something such as $(-1)^1 + (-1)^2 + ... + (-1)^{2k}$ for the second one but then
how do you solve that?
Do I just ignore all the iterations until k? For example in the second one should I only look at $(-1)^{2k}*2k$? If so, shouldn't the answer be -2k since the first part will give you negative 1?
Sorry for the failed latex. I looked at the latex crash course but I still managed to screw everything up. Please can someone clean up my latex so I can at least understand how to use it
Last edited by Mukilab; September 8th 2012 at 02:20 PM.
Re: Don't understand this summation problem
Could you write the indexes of the sums? Thanks!
Re: Don't understand this summation problem
Sorry, the top of the first one is 2k, the bottom is n=1.
Next to it is (-1)^(n+1) *n
The second one has the same top and bottom but next to it (on the right) is (-1)^n
Re: Don't understand this summation problem
Are these they?
$\sum\limits_{n = 1}^{2k} {{{\left( { - 1} \right)}^{n + 1}}n} = {\left( { - 1} \right)^{n + 1}}n$
$\sum\limits_{n = 1}^{2k} {{{\left( { - 1} \right)}^{n + 1}}} = {\left( { - 1} \right)^{n }}$?
You can click on "reply with quote" to see the LaTeX code.
Re: Don't understand this summation problem
Yes! Thank you very much, updated my OP.
Still waiting for an answer or help though
Last edited by Mukilab; September 9th 2012 at 12:23 AM.
September 8th 2012, 12:22 PM #2
September 8th 2012, 01:40 PM #3
September 8th 2012, 02:12 PM #4
September 8th 2012, 02:19 PM #5
|
{"url":"http://mathhelpforum.com/pre-calculus/203108-don-t-understand-summation-problem.html","timestamp":"2014-04-17T15:18:59Z","content_type":null,"content_length":"47242","record_id":"<urn:uuid:625bff5e-b3ef-4e54-847e-c6ecece160fa>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lynwood Statistics Tutor
Find a Lynwood Statistics Tutor
Hello! Thanks for taking the time to view my profile. I received my B.S in Psychology and am currently an MBA & M.S of Finance student at the University of Southern California (USC). I have more
than 7 years of experience and am confident that I can help you achieve your goals!
25 Subjects: including statistics, reading, English, SAT math
...I have taken classes in discrete mathematics and computer logic, which have much overlap with mathematical logic. I understand how to construct and use truth tables in order to find the answer
to logic and false logic puzzles. From my computer science background, I have an understanding of how ...
44 Subjects: including statistics, Spanish, chemistry, reading
Since scoring in the 99th percentile on the LSAT, I've been teaching the LSAT with one of the top prep courses in the country. Now, I am excited to help you reveal your potential and I guarantee
that you will have fun doing it. Born in the San Fernando Valley, I am a successful product of the LAUSD and California public school system.
9 Subjects: including statistics, reading, writing, trigonometry
...I can help you on unit circle, solving triangles, identities, and word problems. You will need these skills in real life problems, engineering, and calculus. They are essential for higher math.
11 Subjects: including statistics, calculus, geometry, algebra 1
...I finished the equivalent of a Minor in Chemistry as an undergraduate at USC, where I took Honors General Chemistry, Organic Chemistry I and II, Analytical Chemistry, Advanced Inorganic
Chemistry, and Geochemistry and Hydrogeology. At UC Irvine, I took the graduate level courses Atmospheric Chem...
73 Subjects: including statistics, reading, English, chemistry
|
{"url":"http://www.purplemath.com/lynwood_ca_statistics_tutors.php","timestamp":"2014-04-17T13:08:17Z","content_type":null,"content_length":"23933","record_id":"<urn:uuid:5dfd972d-2241-49cc-b869-d4e5769650a2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Comparing Two RV Arrays and Outputting Character Strings
So I'm very new to mathcad prime and I'm trying to compare resultant numbers from computing with range variables. I was hoping someone could show me how to figure out what I'm trying to accomplish.
Example (simplified):
I've defined two range variables
then applying some math with the RVs
I would then like to compare the first value of C to the three values in D and determine if it is greater than or less then the D value and then output an array with Yes if greater, or No is less
than. Then repeat this with the second and third values in C respectively.
I'm thing something like this...
for i=1:3
for j=1:3
if C(i)>D(j)
E(i+j) = "Yes" %I'd like the ouput to be a one dimensional array/vector and not a two dimensional matrix
E(i+j) = "No"
The red boxes are the two variables I'd like to compare, and display if alpha is greater than tau in a coulmn matrix/array.
|
{"url":"http://communities.ptc.com/thread/41830","timestamp":"2014-04-17T04:09:06Z","content_type":null,"content_length":"158517","record_id":"<urn:uuid:95a4b69b-7372-4e2f-bef7-5f64d2e183aa>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to tell how good a TV show is
(This is my last blog of the year. Lance will interrupt his blog sabbatical to do an END OF THE YEAR blog later.)
The TV show MONK recently finished its 8th and final season. My wife and I are big fans and have seasons 1-7 on DVD (and we will get 8). But this post is not about Monk. Its about the question:
How to determine how good a TV show is?
I am sure that whatever I say here may apply to other problems.
First assign to each episode a number between 1 and 10 depending on how much you liked it. (This could be the hardest part of the method.) Let t be a parameter to be picked later. t stands for
threshold. If your criteria is
How likely is it that an episode is OUTSTANDING?
then you would pick t large, perhaps 9. If your criteria is
How likely is it that an episode DOES NOT SUCK?
then you would pick t small, perhaps 2. Some of the methods use t, some do not.
There are many different ways to do this. We give a few of them:
1. The mean or median of all of the episodes.
2. The probability that a randomly chosen episode is rated above t. (Could also get into prob that it is within one standard deviation from t.)
3. The probability that a randomly chosen disc has an episode rated above t.
4. The probability that a randomly chosen disc has fraction f of its episodes rated above t.
5. Rate each disc in the DVD set for the entire season. The mean or median of all of these ratings.
6. The mean or median of the best season.
7. The mean or median of the worst season.
There are others as well. But the question really is, given a set of numbers grouped in a natural way (in this case roughly 8 sets of 16 numbers, and each set of 16 in groups of 4) how do you judge
the quality?
For those who are fans of the show MONK here are my choices for OUTSTANDING and UNWATCHABLE episodes:
14 comments:
1. fair enough11:51 AM, December 22, 2009
as for the post, fair enough. but everything aside, interupt lacks an "r".
the question now turns into, how do we conclude how good a post is ? Do we take typos into account ?
2. I know we do it all the time in theory PCs, but is there any actual meaning to taking averages when given input in the form of human rankings on a scale of 1 to 10? Is there any sense in which
this scale is linear? Or should we interpret it purely as an ordering, in which case the only thing that makes sense is the median?
3. These methods wouldn't work well for shows like The Wire where individual episodes are subsumed by the overarching narrative. I propose instead that you pick threads that run through all the
episodes and evaluate them. e.g.,
What is the probability that a randomly chosen character is compelling? You may want to assume you sample according to screen time or importance to the story.
What is the probability that a randomly chosen actor will deliver a good performance? Again, you probably want to sample according to screen time.
What is the probability that a randomly chosen writer will deliver a good script?
What is the probability that story elements brought up in earlier episodes will pay off in later episodes?
What is the probability that the story will have a satisfying conclusion?
etc. Then you can define value preferences: how much do I value good acting versus a good ending; how much do I value good characters versus a good story; and so on.
4. Aaron Sterling1:02 PM, December 22, 2009
Off topic -- I'd like to say thanks to Bill and Lance, and to all the commenters, for an informative, entertaining year on the blog. Happy holidays to everyone.
5. lance, don't let us down11:19 AM, December 23, 2009
Lance, I am disappointed by you making hype about wolframalpha. Please stick to open source alternatives. For someone in your position Advertising for a company with a notorious reputation is
somewhat of a let down ...
6. Talking of wolframalpha, wouldn't the bing iphone app actually suffice?
7. WolframAlpha isn't worth even mentioning. It's a black box that that is flawed as it is based on NKS work. If I remember correctly I once saw a slashdotted story on how flawed work in NKS is. A
mathematician disproved like 1000 claims. Now, assume that your output result you obtain via wolfram alpha are as reliable as claims made by nks.
no wonder how uncreative Microsoft's BING is, it relies on something as unreliable as Wolfram.
8. If I remember correctly I once saw a slashdotted story on how flawed work in NKS is. A mathematician disproved like 1000 claims.
I read it on the internet. It must be true.
Seriously dude. Even some near future version of Wolfram Alpha may be smart enough to know that "some of NKS is broken" and "alpha may have some connection to NKS" does not imply much about the
reliability of alpha.
In fact, I am not sure Wolfram Alpha has anything to do with NKS. It contains an online version of Mathematica, and has reasonably good NLP skills. It has access to several databases that it can
look up in a sensible way. It can be useful for certain things (e.g. type in "1,1,2,3,5,..." or "1+1/4+1/9+..." or "integrate .1 to .2 sin x/ x "). At least these parts have exactly zero
connection to NKS, as far as I can tell.
I look forward to your examples of the unreliability of Alpha, which you would no doubt put on slashdot soon.
9. Looking forward forward to lance's end of year wrap up! And his progress on his book ? Does anyone know why lance decided to write the book ?
10. I think the story referenced is
in which 44 claims (not conjectures!) in Wolfram's book are shown to be wrong.
11. > I look forward to your examples of the unreliability of Alpha, which you would no doubt put on slashdot soon.
I'm not the same anon, but
shows that W|A thinks that [0,1;1,-7] times its inverse [7,1;1,0] is *not* the identity matrix.
12. I'm not the same anon, but
shows that W|A thinks that [0,1;1,-7] times its inverse [7,1;1,0] is *not* the identity matrix.
This is indeed bad UI design... it comes from Mathematica where matrix product is '.' and '*' is the TIMES operator, which is a strange operator that multiplies point by point, so [a, b ; c, d] *
[x,y;z,w] is [ax,by; cz, dw]. Not sure why they made such a stupid design choice in mathematica, and why it continues in alpha. Nevertheless, one can't blame NKS for this design choice that goes
back to mathematica :)
13. i don't trust wolframalpha's output at all. particularly after seeing the publication.
14. (Coming back to choosing a TV show) To add to the analysis, one non-quantitative factor I think of is how badly I want to watch an episode even if someone gave me a jist of what's going to happen
So, conditional probability could be used to make the following a good measure of how good the TV show is:
Pr[Watch the show|Story] (Pr. that I will watch the show given that I know the story).
P(story) can be decided by the narrator who tells me as much portion of the story while Pr(watching the show) can be chosen using any of the ways you mentioned.
|
{"url":"http://blog.computationalcomplexity.org/2009/12/how-to-tell-how-good-tv-show-is.html","timestamp":"2014-04-18T23:16:40Z","content_type":null,"content_length":"184352","record_id":"<urn:uuid:e2bf22f9-d477-4b7e-94e7-0765d999180f>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
|
GLMs and GLMMs in the analysis of randomized experiments
Seminar Room 1, Newton Institute
The Normal linear model analysis is usually used as an approximation to the exact randomization analysis and extended to structures, such as nonorthogonal split-plot designs, as a natural
approximation. If the responses are counts or proportions a generalized linear model (GLM) is often used instead. It will be shown that GLMs do not in general arise as natural approximations to the
randomization model. Instead the natural approximation is a generalized linear mixed model (GLMM).
The video for this talk should appear here if JavaScript is enabled.
If it doesn't, something may have gone wrong with our embedded player.
We'll get it fixed as soon as possible.
|
{"url":"http://www.newton.ac.uk/programmes/DAE/seminars/2011080914001.html","timestamp":"2014-04-18T05:41:48Z","content_type":null,"content_length":"6230","record_id":"<urn:uuid:261ea1c5-5732-486f-9525-cbb1c99f801b>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Golden ratio (idea)
I believe the geometric proportion served the Creator as an idea when He introduced the continuous generation of similar objects from similar objects. -- Johannes Kepler
The Golden Ratio can be obtained easily by the proper division of a line segment, instead of constructing the geometrically pleasing rectangles. Simply divide the line segment into two parts of
uneven lengths, so that the ratio of the smaller part to the larger part equals the ratio of the larger part to the entire line segment.
A picture is worth a thousand words:
1 x
A B C
And thus:
AB BC 1 x
-- = -- or: - = ---
BC AC x 1+x
x^2 - x - 1 = 0
x = (1 ± √5)/2
|
{"url":"http://everything2.com/user/Professor+Pi/writeups/Golden+ratio","timestamp":"2014-04-20T09:53:15Z","content_type":null,"content_length":"20563","record_id":"<urn:uuid:cd34b0e3-24ed-4c42-9ffd-a6255de180bb>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How do i solve this trig problem - Polar equations.
January 12th 2011, 05:40 PM
How do i solve this trig problem - Polar equations.
Here is the problem that I cannot figure out how to do. Please explain.
r = (5csc(θ))/(6csc(θ) + 3)
a) (36x^2) + (27y^2) + (30x) -25 = 0
b) (27x^2) + (36y^2) +(30x) -25 = 0
c) (36x^2) + (27y^2) + (30y) -25 = 0
d) none of the above
Thankyou so much for any help
January 12th 2011, 05:52 PM
Chris L T521
Here is the problem that I cannot figure out how to do. Please explain.
r = (5csc(θ))/(6csc(θ) + 3)
a) (36x^2) + (27y^2) + (30x) -25 = 0
b) (27x^2) + (36y^2) +(30x) -25 = 0
c) (36x^2) + (27y^2) + (30y) -25 = 0
d) none of the above
Thankyou so much for any help
What have you done so far? (Is this from a homework assignment by any chance?)
January 12th 2011, 06:00 PM
Yes it is a hw problem.
So far:
6rcsc(θ) + 3r = 5cscθ
3r = 5cscθ - 6rcscθ
(3r)^2 = (5cscθ)^2 - (6rcscθ)^2
3x^2 + 3y^2 = ?
I dont know where to go from there
January 12th 2011, 06:17 PM
Chris L T521
Here's a suggestion. When you're here: $6r\csc\theta+3r=5\csc\theta$, multiply both sides by $\sin\theta$ to get $6r+3r\sin\theta=5$.
Now make the conversion to rectangular coordinates. Can you take from here?
January 12th 2011, 06:17 PM
Chris L T521
Here's a suggestion. When you're here: $6r\csc\theta+3r=5\csc\theta$, multiply both sides by $\sin\theta$ to get $6r+3r\sin\theta=5$.
Now make the conversion to rectangular coordinates. Can you take from here?
|
{"url":"http://mathhelpforum.com/pre-calculus/168183-how-do-i-solve-trig-problem-polar-equations-print.html","timestamp":"2014-04-20T19:50:26Z","content_type":null,"content_length":"8452","record_id":"<urn:uuid:09b1d61d-9315-4199-88c2-c6948dbd6ef0>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Need help with Geometry, 10th grade, I don't get it, Help needed. Explain to me, Im trying to learn it also. http://tinypic.com/r/2pophr9/6
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Which part specifically are you stuck on?
Best Response
You've already chosen the best response.
Basically, I don't get how to find the longest side (bookshelf) and the largest angle (computer), with only ONE thing (7 ft.)
Best Response
You've already chosen the best response.
Okay. So in the picture, that tiny square in the corner tells you that this triangle is a "right" triangle. Right triangles mean that one of the angles is 90 degrees and this is what that little
square is telling you. Next, the rule of triangle angles as that when you add up the 3 angles, it should equal 180 degrees. So with this right triangle, you already know that that corner with the
box is the "right" angle at 90 degrees. That means that the other two angles must add up to 90 degrees as well (180 degrees minus 90 degrees). Following so far?
Best Response
You've already chosen the best response.
Yeah, I have that, I wanted to copy and paste this... but you was typing. This is what i have written down. 33. Since you want to put the computer in the corner with the largest angle, and the
bookshelf on the largest side, then, you should put the computer on corner ... • A right triangle is shown in the figure. • Once you find the largest angle of a triangle, to find the longest
side, you get the other two sides by.... ---- But Yeah, I'm following so far.
Best Response
You've already chosen the best response.
Okay so the next rule in triangles is that the longest side is always opposite the largest angle and the shortest side is always opposite the smallest angle. So without even having to do a
calculation using the number given, you just know that since 90 degrees is the biggest angle, the opposite side is the longest side.
Best Response
You've already chosen the best response.
So.. 33. Since you want to put the computer in the corner with the largest angle, and the bookshelf on the largest side, then, you should put the computer on corner that measures 90 degrees, the
right angle. Correct? and thanks for that lesson
Best Response
You've already chosen the best response.
Yep the computer goes in that corner and your shelf goes on the opposite side. No problem. Glad I could help!
Best Response
You've already chosen the best response.
Genius... Thanks.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50ffd9fee4b00c5a3be667da","timestamp":"2014-04-18T19:10:20Z","content_type":null,"content_length":"45836","record_id":"<urn:uuid:7e70dc77-17b9-44b9-af24-7e0a4fb0ad35>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Enclose 5 points with a circle
Replies: 1 Last Post: Feb 13, 2014 9:45 AM
Messages: [ Previous | Next ]
LH Enclose 5 points with a circle
Posted: Feb 9, 2014 11:19 AM
Posts: 1
Registered: 2/9/14 Hi,
I'm trying to work out the algorithm to find the centroid and radius to allow me to draw the smallest circle that encloses all points (p[])
I've got an approx algorithm using the centroid and the radius equation but it's not quite accurate enough.
Is anyone aware of a better algorithm?
Date Subject Author
2/9/14 Enclose 5 points with a circle LH
2/13/14 Re: Enclose 5 points with a circle Mark Rickert
|
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2618906","timestamp":"2014-04-16T19:17:09Z","content_type":null,"content_length":"17206","record_id":"<urn:uuid:8d8ab683-ef87-47c4-bb18-5062017f09f0>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Modeling and Calibration for Crack Detection in Circular Shafts Supported on Bearings Using Lateral and Torsional Vibration Measurements
Advances in Mechanical Engineering
Volume 2012 (2012), Article ID 519471, 18 pages
Research Article
Modeling and Calibration for Crack Detection in Circular Shafts Supported on Bearings Using Lateral and Torsional Vibration Measurements
Faculty of Engineering and Applied Science, Memorial University of Newfoundland, St. John's, NL, Canada A1B 3X5
Received 10 June 2011; Revised 25 September 2011; Accepted 14 October 2011
Academic Editor: A. Seshadri Sekhar
Copyright © 2012 A. Tlaisi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
In this paper the requisite foundational numerical and experimental investigations that are carried out, to model the “uncracked and cracked” shaft and to identify its bending and torsional vibration
responses, are reported. The cylindrical shaft used in this experimental study is continuous over two spans (with a cantilever span carrying a propeller) with ball-bearing supports. During modal
tests the backward end of shaft (away from the propeller end and connecting it to an electric motor, required for online monitoring) is fixed to one of the test frame supports; later on this backward
end will be connected to an electric motor to carry out online modal monitoring for crack identification. In the numerical study, beam elements are used for modeling the bending and torsional
vibrations of the rotating shaft. The paper describes in detail the numerical “linear spring” models developed for representing the effects of “ball bearings and the (experimental test) frame
supports” on the vibration frequencies. Shaft response parameters are obtained using modal analysis software, LMS Test Lab, for bending vibrations monitored using accelerometers, and three “sets” of
shear strain gages fixed at three different shaft locations measure the torsional vibrations. Effects of different crack depths on bending and torsional frequencies and mode shapes are investigated
experimentally and numerically, and the results interpreted to give better comprehension of its vibratory behavior.
1. Introduction
Vibration and noise in industrial machines, or in the environment around them, occur when dynamic forces excite these machines. This industrial noise has direct and indirect effects on the health and
safety of those operating them. They can also have effects on buildings, machinery, equipment, and vehicles around them. These effects usually manifest themselves in the form of reduced performance,
wear and tear, faulty operation, and irreversible damage in the form of cracks.
In the last four decades [1, 2], the vibrational behavior of cracked shafts has received considerable attention in order to prevent significant rotor faults, which can lead to catastrophic failures
if undetected. The diagnosis of these cracked shafts remains problematic. In general cases, the behavior of the shaft is insensitive to a crack. Sometimes, it is difficult to find differences between
successive states of vibration, even if the crack is medium sized. Thus, it is of the utmost importance to discover the identifiable specific characteristics of the cracked shaft at the earlier
possible instance.
There are two stages in crack development: crack initiation and crack propagation. The former is caused by mechanical stress raisers, such as sharp keyways, abrupt cross-sectional changes, heavy
shrink fits, dents and grooves, and/or metallurgical factors, such as flows, fretting, and forging. The latter stage, namely, crack propagation, can accelerate the growth rate under different
conditions: operating faults generated during sustained surging in compressors’ negative sequence current or grounding faults in generators and coupled turbines, the presence of residual stresses in
the rotor material, thermal stresses, and environmental conditions, such as the presence of a corrosive medium. Cracks can be classified based on their geometry and orientation as follows: cracks
perpendicular to the shaft axis are known as transverse cracks; cracks parallel to the shaft axis are known as longitudinal cracks; cracks at an angle to the shaft axis are known as slant cracks;
cracks that open and close, when the affected part of the material is subjected to alternating stresses, are known as breathing cracks; cracks that primarily remain open are known as gaping cracks or
notches; cracks that appear open on the surface are known as surface cracks; cracks which are not visible on the surface are known as subsurface cracks [3].
1.1. Literature Review
Munoz et al. [4] applied a modal testing procedure to detect a crack on an offline rotor. The changes in rotating shaft frequencies gave a good indication of the presence of cracks. They stated that
the method can be used to detect cracks of areas greater than 2.5% of the rotor cross-sectional area, but this claim seems to be rather exaggerated from other studies published on the same.
Gounaris and Papadopoulos [5] performed experiments using a rotating cracked shaft. The shaft was excited at one end, and the response was measured at the other end. Hamidi et al. [6] developed two
mathematical models to determine the bending natural frequencies of a rotor. The analytical results were compared with the results of experiments. The following conclusions were made. (i) When crack
depth was more than 30% of the shaft radius, the rate of change of the natural frequency was high. (ii) The speed of the rotating shaft did not affect the values of the natural frequency: this was
probably due to the fact that stiffness of the rotating shaft system is governed by the shaft stiffness which is not influenced by the rotational speed of the shaft. Tsai and Wang [7] used transfer
matrix method on the basis of Timoshenko beam theory to obtain the dynamic characteristics and thereby identify the location and size of cracks based on monitoring the change in the mode shapes and
natural frequencies of a cracked shaft. Their method was validated by comparison with existing published experimental data. Zakhezin and Malysheva [8] used a numerical finite element- (FE-) based
crack detection technique and modal tests on a single span shaft. They included system damping in their model and calculated the system’s natural frequencies, eigenvalues, and eigenvectors up to a
frequency of 1100Hz. These values were calculated for a rotor with and without cracks at varying locations and depths. The method was tested and results verified to demonstrate the good quality of
results obtained.
Adewusi and Al-bedoor [9] applied neural networks techniques to detect the inception of cracks on rotors. They carried out experimental studies on a rotor (overhung arrangement and simply supported
arrangement) with and without a propagating crack. In this study, a two neuron network was used to detect the propagating crack and a three neuron network to detect the propagating and nonpropagating
cracks. Prabhakar et al. [10] investigated experimentally the influence of a transverse surface crack on the mechanical impedance of a rotor bearing system. This system consisted of rigid disks,
distributed parameter finite shaft elements, and discrete bearings. The experimental work was done to validate their previous numerical analysis results. They tried to use the concept of mobility for
detecting and monitoring the crack using different crack parameters and force locations. The authors did this experiment for an uncracked and a cracked shaft. They used different depths (20% and 40%
of diameter to represent the crack depth) at the location. Also, they measured the mobility in two directions, horizontal and vertical, at the bearing locations. This measurement was taken at
different rotor speeds. They found that the mobility was directly proportional to the depth of the crack, as well as to the rate of change of mobility at the running frequency. Moreover, since the
crack depth was assumed to grow vertically, the rate of change of mobility in the vertical direction was greater than that in the horizontal direction. There was considerable agreement between
experimental results and numerical simulations. Therefore, the authors suggest using this method to detect the crack and monitoring in a rotor-bearing system.
Pennacchi and Vania [11] presented the results of an experimental study concerning the diagnoses of a crack during the load coupling of a gas turbine; they compared the experimental and analytical
results of the shaft vibration using the model of the rotating shaft of a 100MW power plant. Dorfman and Trubelja [12] used a simplified model of the turbine generator system to examine the
influence of cracks in the shaft. This model showed the relationship between the shaft excitation forces which represented input to the model and the shaft torsional vibration response which
represented the output. This ratio (output to the input) is known as the transfer function. The transfer function is basically dependent on the mass, stiffness, and damping of the shaft. They found
that a properly designed data acquisition monitoring system, such as structural integrity associate’s transient torsional vibration monitor system (SI-TTVMS), would give a good signal and detect
rotor faults before failure. Cho et al. [13] measured the torsional wave in the rotating shafts by using a noncontact method (magnetostrictive patches and a solenoid). In this work, two problems were
noticed during vibration experiment, namely, (i) how to produce sufficient power to generate torsional waves; (ii) how to guarantee that there is no interference from the shaft rotational motion.
Magnetostrictive patches were fastened indirectly to the shaft axis for measuring the torsional motion. Furthermore, the configuration of an arrayed patch was employed for frequency localization and
sufficient power generation. In this paper, they assumed that the effect of the lateral vibrations was negligible because it was very small compared to the torsional motion measured by
magnetostrictive strips. Also the authors used the transduction method to detect a perimeter crack in a rotating shaft as well as to estimate the damage location (with small error) and compare them
with the exact crack size and location.
1.2. Scope of the Study
In the present paper, experimental and numerical investigations are carried out to identify the transverse crack existence in a cylindrical shaft with a cantilever overhang, using lateral and
torsional vibrations. LMS experimental setup, described below, has been used for measuring the cracked and uncracked shaft response parameters. Effects of different crack depths are investigated
experimentally and numerically. The shaft is fixed at one end to the test frame support and is continuous over the other frame support to end in a cantilevered end supporting the propeller; the shaft
is supported through ball bearings that are attached to the two test frame supports, as shown in Figures 1 and 2. Initially the cylindrical shaft, supported through roller bearings and test frame
supports, had to be properly modeled for carrying out analysis using finite element procedures. The beam element, BEAM4, available in ANSYS finite element program is used for the numerical prediction
of the dynamic response of uncracked and cracked shafts as well as to verify the experimental results. In this study, a linear “three-to-six-spring” model is used to represent the effects of each of
the two ball bearings, supporting the shaft, over the (fixed) end and the other support with cantilever end; these spring constants are determined to achieve the best agreement between uncracked
experimental and numerical results. Since the BEAM4 elements do not include the stress intensity effects present in cracks, an equivalent crack effect, as described by Petroski [14] and Rytter [15]
with the use of a short beam element, is used in the present study to include the stress intensity effects in cracks. In addition since the propeller end will generate torsional motions of the shaft,
detailed investigations were carried out to monitor the lower torsional frequency of the shaft and the crack influence at that frequency.
2. Theory and Modeling of the Bearing Support
One transverse open crack has been considered to be present in the shaft in this study. The uncracked shaft, shown in Figure 1, is modeled by replacing the bearing support effects by linear
translational and rotational springs shown in Figure 2; the actual bearing support used in the experimental study is shown in Figure 3 (see McMaster-Carr, 2011 [16]). In the ball bearing used during
these experiments, the flange of the housing bearing is fixed to the steel support frame; the inner ball bearing is fixed to the cylindrical shaft by tightening two screws positioned at 90° to one
another. The elasticity of these bearing connections of the test frame supports and the cylindrical shaft are replaced by orthogonal linear springs, located at the positions of the two orthogonal
tight screws. Hence the linear spring supports at right angles, used in this study, represent the effects of these tight screws of the cylindrical shaft (along with the flange mount and inner
bearing), on the vibration frequencies of the shaft. Figures 2(a[1]) to 2(e[1]) show the five models used for modeling the ball bearing and test frame supports used in the study, namely, six, eight,
ten, and twelve springs, respectively.
From the results obtained (shown later in the paper) it was observed that none of these five numerical models are fully sufficient to represent exactly the bearing support effects, but they do
reasonably represent the effects of the bearing supports; as such they give reasonably good results when compared with the measured experimental results. For each spring support location the
restoring forces increase or decrease depending on the deformation at that location, which in turn depend on the shaft and bearing influences at the same location. The best model that gives results
very close to the experiment is identified in the subsequent computations given in a later section.
3. Modal Testing and Analysis of Cracked Shaft
In this part, the characteristics of the vibrating uncracked and cracked shaft are investigated through modal testing. Manually made saw cuts (0.65mm wide) are used as cracks of different depths.
The objective of this experimental study is to study the effect of cracks on the lateral and torsional vibrations of the tested cylindrical shaft. The experimental results are used to validate the
most appropriate numerical model. The equipment system used to measure the two types of vibrations, namely, lateral and torsional modes of a cylindrical shaft system, is shown in Figure 4. For the
experimental portion of the study, the engineering innovation (LMS Test Lab) software package with two measurement channels is used. The first input channel records the time history output from the
modal hammer used in the study, shown in Figure 4(a). The number designation of the impact hammer type is 8206-002, and the maximum force (nondestructive) that it can deliver is 4448N.
The head of the hammer can use different tip materials, namely, aluminum, plastic, rubber, and stainless steel. In this study, plastic tip (DB-3991-002) material was used. The second channel records
the time history output from the accelerometer device shown in Figure 4(b); alternately a set of shear strain gages can also be used instead of an accelerometer.
As shown in Figure 5(a), in subsequent online monitoring studies, the backside end of the continuous shaft with the cantilever overhang (in the forward end) will be connected to an electric motor and
driven at a maximum speed of 4000rpm, but, in the present experimental modal testing, the backside connection to the electric motor is disconnected and modal testing done in a “static” configuration
of the cylindrical shaft. The propeller is attached to the overhanging end. During modal tests the shaft, with the overhang, is locked (or fixed) to the bearing support (bearing support 1) as shown
in Figure 5(b). The fixed rotor shaft of 16mm diameter and 1210mm length is supported on two bearings with greased fittings and deep-grooved ball-bearing inserts. Two set screws, separated by 90°,
are used to fix the bearings to shaft at each of the bearings 1 and 2. The experimental program is carried out to identify the dynamic shaft characteristics with and without the presence of crack; a
crack having different depths was made at 2.0cm to the right of bearing number 2, as shown in Figure 5(c).
Figure 6 shows the experimental setup used to measure torsional vibration of the cylindrical shaft. In the torsional vibration measurement system three strain gages are fixed at three locations, one
placed near the bearing support 1, the second placed at the middle of the supported span, and the last one placed near the propeller (in the overhanging end) as shown in Figure 6(a). Two sets of
Suzette type (K-XY3X) model strain gauges with connection cables (4-wire circuit), fixed at three locations, are used. They are assembled in half bridge configurations. These sets of strain gauges
are mounted 180° apart on the circumference of the shaft (along the neutral axis of the uncracked beam) at a given longitudinal location. The way they are oriented enables the measurement of
torsional strains while any incidental strains due to beam bending cancel each other. Figures 6(c) and 6(d) show the sets of strain gauges used during modal tests and locations along the shaft. In
Figure 6(d), c and d represent the size of strain gauge with high-endurance backing material (HBM); a, b[1], and b[2] the actual length and width of strain gauge wires. An aluminum arm is used to
apply various magnitudes of impact torque at various locations of shaft. Five (multiplexed) data acquisition channels are used, namely, three for torque gages, one accelerometer (±4g’s) channel, and
the fifth for impact load with the modal hammer.
Neither ANSYS software package [17] nor the LMS Test Lab system was able to indicate the presence of the first torsional frequency in the shaft-propeller system. The probable reasons are as follows:
(i) BEAM4 type beam element probably does not give the first torsional mode due to the improper lumped mass values used for torsional motions [17]; probably higher-order beam elements (such as
BEAM188 or BEAM189, having warping as an additional degree of freedom) would have given the first torsional frequency (this was not attempted since the use of warping as another variable, along with
the available six degrees of freedom at a point, looked superfluous for the shaft vibration); (ii) the LMS Test Lab does not give the torsional mode since the accelerometer used (for getting the
modal amplitudes) measured only the bending motions, and as well the LMS software used in the study does not attempt to extract from the torsional vibration features from the monitored vibration
signals. Hence a different procedure had to be devised to determine the torsional frequency(s) of the shaft-propeller system. For the analytical portion of the investigation to calculate the natural
frequency of torsional vibration, the rotational spring constant of the shaft and mass moment of inertia of a propeller (in addition to the aluminum plate used for generating sudden impact torsional
moments in the shaft) about the axis of rotation had to be determined. Figure 7 shows a standard trifilar suspension arrangement that was used to determine the platform and propeller properties. This
trifilar suspension structure is a circular, stiff, plywood platform attached and hooked to a hanger via stiffer ropes. The three ropes were fixed tight on the top to keep platform suspension as flat
as possible. Also in this experiment, a stop watch was used to record the time of torsional oscillations.
The device shown in Figure 7 was used to determine the frequency of oscillation of the objects placed on the platform. By using two equations given below to calculate mass moment of inertia and by
subtracting the mass moment of inertia of the platform from the mass moment of inertia of the combined object (propeller) and platform, the mass moment of inertia of the propeller can be determined [
18] where is the mass moment of inertia for platform and object, is the theoretical value of mass moment of inertia of the platform disk, is the torsional frequency of motion of the above device, for
platform, and is the length of the rope.
Now, the torsional natural frequency of the cylindrical shaft (with the propeller and the torsion impact device) is calculated by using this formula: where is the torsion stiffness of the shaft, is
total polar mass moment of inertia for shaft, propeller, and plate, is the polar area moment of inertia of the shaft, is the length of shaft, is the shear modulus of the shaft, modules of elasticity,
is the Poisson ratio of shaft material, is polar mass moment of inertia for shaft, is mass of shaft, is the shaft radius, polar mass moment of inertia for propeller, and polar mass moment of inertia
for the plate used for torsional impact.
4. Preparation for Modal Tests on Uncracked and Cracked Shafts
In the present investigation, the general aim is to identify the dynamic system characteristics when the damage (or crack) occurs on the cylindrical shaft. As mentioned above, the experimental
investigation is carried out for crack detection, using only one crack location. In the numerical study, a number of cases are considered; one corresponding to the uncracked shaft and the other the
cracked shaft having different crack depths. The crack is located at the maximum bending moment position, namely, on the right hand side of bearing support 2. Commercial ANSYS software is used to
determine the dynamic characteristics so as to correlate them with the experimental results. In the finite element model the shaft was continuous over one support, namely, support 2 (having the
overhanging span for propeller) with a ball bearing. The cylindrical shaft sketch is shown in Figure 8(a). Its right-hand side end (carrying the propeller) is free, while the left one (at support 1)
is clamped. The length and the diameter of the shaft are as given above, that is, 1210mm and 16mm, respectively. The moment of inertia of the uncracked cross-section is = 3.217 × 10^−9m^4 and the
polar moment of inertia for each element is = 6.434 × 10^−9m^4. The Young’s modulus is = 2 × 10^11N/m^2, Poisson’s ratio is 0.3, shear modulus of elasticity is = 7.69 × 10^10 N/m^2, and the density
is = 7667kg/m^3. Beam element (BEAM4) was used to model the shaft used for numerical analysis through ANSYS. This element is a uniaxial element with torsion, bending, tension, and compression
capabilities. The element has six degrees of freedom at each node: axial, transverse, and rotational motions are shown with numbering of its local degrees of freedom in Figure 7(b).
When modeling the shaft it is assumed that all the elements have the same material properties and geometrical profiles except at the cracked location, which has a different geometrical property
(reduced stiffness) due to the presence of crack at that location. In this modeling, it is assumed that neutral axis does not shift at the crack location, which is not proper.
5. Presentation of the Results and Discussion
In this part of the study the findings from results obtained experimentally and numerically are presented and discussed. For experimental and numerical studies, one crack position and various crack
ratios (from 0% to 70% ratio) are examined. From a detailed comparison of numerical results, obtained for six, eight, ten, and twelve springs modeling, with the experimental results for the uncracked
rotor shaft, it is found that the six springs (shown in Figure 2(b), with some area of contact near the screwed inner bearing contact with the cylindrical shaft) model gave the smallest difference
between the numerical and experimental frequency results. Hence the model with six springs (Figure 2(b)) is used as the proper model for subsequent studies; it is also observed from the ANSYS
numerical results (using BEAM4 shaft elements) that the output did not contain any first torsional frequency; it did contain a higher torsional frequency at 652.0Hz (which could not be the correct
frequency value, since the BEAM4 formulation used in ANSYS does not properly compute contribution to the torsional moment of inertias). Table 1 shows the comparison of the first eight natural
frequencies (four vertical and four horizontal) between the experimental and numerical values (uncracked and cracked), for the case of six springs (see Figure 2(b)). In this part of the study only
one element, having a width of 0.65mm (equal to the width of the saw-cut crack), is used to represent the crack, and all the other elements, around the crack region, are also similar to (but wider
than) this element. It can be seen from Table 1 that the experimental values show comparatively larger changes for the crack present in the shaft whereas the numerical analysis results show almost no
changes, as the crack depth increases from 0 to 70%. The numerical analysis seems to be insensitive to the presence of the crack. This is due to the fact that the flexibility introduced in the
experimental model by the presence of a crack seems to be much higher than that provided by the length size of 0.65mm for a single finite element used to represent the crack effect in the numerical
model, as shown in Figure 9(a). To improve the numerical results, the model shown in Figure 9(b) is used (where many elements of 0.65mm length are used) to represent the crack effect. It is observed
that even though a large number of smaller elements have been used to represent a single wider crack it has been found to give the same accuracy as the finite element model with a number of variable
length elements. Table 1 shows the comparison of the first eight natural frequencies between the experimental and numerical results (with six springs) with the simulated correction with a larger
number of small beam elements to represent crack. It shows from this comparison that the results of corrected numerical analysis are much closer to the experimental results.
Taking into consideration the representation of the crack by a modeled short beam element (having the same depth as the uncracked portion of shaft, at the cracked section, but with a larger element
width—see Figure 9(b)) in the studies of Petroski [14] and Rytter [15] and the crack influence zone cited by Yang [19], more studies were carried out by considering additional elements around the
crack location to have the same moment of inertia of the shaft as that of the cracked section (to represent the longer short beam element).The results of these studies are shown later in Figure 12.
Figures 10 and 11 show the changes that occur in the experimental frequencies as the crack depth ratios change from 0 to 70% (with second-order curve fit). As observed earlier by Hamidi et al. [6],
the rate of change in bending natural frequencies (shown in Figure 11) becomes noticeable when the crack depth ratio becomes greater than 20%, indicating that the rates of change in natural
frequencies (with respect to crack depth ratio) seem to be a better indicator of crack presence. When the rates of frequency change (with respect to crack depth ratio) are plotted as a function of
crack depth ratio it is observed that between 20% and 30% crack depth ratio, the rate of change variation is found to be 3% to 4%. Instead if frequency changes were used as the crack indicator, the
changes between 20% and 30% crack depth ratio are around 0.5% to 1.0%; this is much less than that shown by the rate of change of frequency (with respect to crack depth).
These numerical results plotted in Figure 12 were correlated by comparing them with the experimental results. The first three natural frequencies were calculated for several values of the crack depth
ratios [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, and 0.7] and for the presence of crack represented by different (twenty-nine) short shaft element lengths [0.65 (case I0), 6.65 (case I3), 12.65 (I5), 18.65
(I7), 24.65 (I9), 30.65 (I11), 36.65 (I13), 42.65 (I15), 54.65 (I19), 60.65 (I21), 66.65 (I23), 72.65 (I25), and 84.65 (case I29)] mm. Figures 12(a), 12(b), and 12(c) show the numerical and
experimental results for the first, second, and third natural (nondimensional) frequencies versus crack depth ratios, respectively. Clearly, it can be seen from these figures that the first natural
frequency needed a larger equivalent length shaft element to give a good agreement between numerical and experimental values. It can be seen from Figure 12(a) that the curve (given by Num. V.I[19])
determined from numerical calculations seems to be better coincident with the curve from the experimental test results (given by experimental values). For the second and third frequencies shown in
Figures 12(b) and 12(c), the curves (represented, resp., by Num. V.I[11] and Num. V.I[9]) seem to be better coincident with the curves from the experimental values (represented by Exp. values).
Therefore while the first frequency needs a longer equivalent shaft element, the second and third natural frequencies need shorter equivalent shaft element lengths to give good agreement with
experimental results. The above modeling of the cracked shaft (by an equivalent short shaft) gives the best fit with the experimental results as follows: (i) for the first natural frequency, the
equivalent cracked shaft length is around 54.65mm; (ii) for the second and third natural frequencies, the values are around 30.65 and 24.65mm.
The differences between numerical and experimental frequencies for various crack depth ratios, before and after the correction with a short shaft element for the cracked section has been made, are
also given in Table 1. For the first natural frequencies the numerical values used are obtained with a short shaft element length given by Num. V.I[19], for the second natural frequencies the
numerical values used are for those given by the short shaft element Num. V.I[11], and for the third natural frequencies the numerical values used are for those given by the short shaft element Num.
V.I[9]. It is clear from this table that the modeling of a cracked location by an equivalent short shaft element has considerably reduced the percentage differences, between the numerical and
experimental values, and keeps the numerical values close to experimental values. Table 2 gives the percentage differences between the experimental and the corrected frequencies shown in Table 1. It
is seen from Table 2 that the six springs’ model (shown in Figure 2(b)) does not seem to be a very good model for the second vertical bending frequency, since the differences are much larger than the
first and third frequencies.
Using the experimental mode shapes shown in Figures 13 to 16, the effective bending lengths (between points of contraflexures) can be taken as () for the first mode, () for the second mode, and ()
for the third mode, where is the length between the two bearing supports. Taking to be equal to 0.97m (from Figure 1, which gives the actual span length between the two test frame supports), the
effective bending lengths for the first three frequencies are obtained as 0.686m, 0.485m, and 0.343m, respectively. This leads to (effective crack length/effective bending length for the mode)
ratios of 1/12.55 for first bending mode, 1/15.83 for second bending mode, and 1/13.91 for third bending mode. Hence the value of 1/12 to 1/16 seems to give a better fit for the equivalent short
length shaft ratio (= effective crack length/effective bending length) for the different modes. The fourth mode shape (shown in Figure 16) is not considered in the analysis owing to the following
reasons. (i) The node over the bearing support seems to house shifted outside its proper location, probably due to the curve fitting procedure. (ii) Measurements are made only at fourteen locations
along the length of the shaft, and this has not provided enough plotting points to give the proper modal shape curve. (iii) The presence of crack seems to be indicated only by the 10% curve and
thereafter no appreciable change seems to occur in the plots.
The value of 1/12 to 1/16 for the equivalent short length shaft ratio can be given an alternate interpretation which will enable this ratio to be utilized in the first level crack identification
scheme for shaft. When a shaft cracks, the average crack velocity in the cracked portion and the uncracked portion should be the same. Hence where is the effective crack length, is the time taken by
the considered bending wave (first record or third frequency) to cover the distance , is the wave length of the considered wave (equal to twice the effective bending length), and is the cracked
frequency of the considered wave. Rearranging (7), Hence, According to Rao [20] in a time-domain numerical integration procedure using finite-difference schemes, the solution becomes unconditionally
stable and reasonably accurate when the ratio is smaller than 1/20 to 1/40. From Table 1, it can be seen that when crack depth ratio is around 40%, the is approximately 1.02 for the first mode and
1.01 for the second and third modes. Hence ratio can be expected to be smaller than 1/19.6 to 1/38.4. Hence (10) can be expressed as which is approximately the ratio obtained from the experimental
values. Consequently a finite element analysis could be carried out with the ratio of (effective crack length/effective bending length) of (1/12.0) to (1/16) and the cracked frequency of rotating
shafts be obtained for a crack depth ratio 40%. When the measured frequency of the rotating shaft reduces below this value for the first three modes, then one can be invariably sure to say that there
is a crack or damage in the rotating shaft and carry out a detailed inspection on the rotating shaft to locate the crack.
Figures 13 to 16 give the experimental data plots given by LMS Test Laboratory software, showing the experimental mode shapes for the various crack depth ratios (0.0 to 0.7), for the first four
frequencies (plotted as a function of modal amplitudes versus accelerometer location). Since only vertical frequencies are of concern, we consider only Figures 13(a), 14(a), and 15(a). It is seen
from these three figures that the identifier of the mode shape change due to crack is shown better by the third mode shape than the other two mode shapes; hence the crack presence can be best
detected by monitoring the third vertical bending mode of the rotor shaft. It should also be noticed that the changes in mode shapes shown in Figure 15(a) (for the third mode) are higher than the
frequency changes shown in Figure 12. This can be appreciated if it can be noticed that this case (third mode) is similar to the case of a fixed-simply supported case (or a cantilever case), where
the crack occurs around the fixed edge (bearing 2).
Figure 17(a) shows the plot of the depth of crack and percent change (decrease) in torsional natural frequencies for experimental measurements. Figure 17(a) shows that the changes in the first
torsional frequency give a much better indication of the crack presence even during the starting of the crack. This is better shown through Figure 17(b) which plots the rate of change of torsional
frequency (with respect to crack depth ratio) versus the crack depth ratio. It is seen that the rate of change in the first torsional frequency (with respect to crack depth ratio) versus crack depth
ratio is much higher (at a crack depth ratio of 10%, the rate of change of frequency with respect to crack depth ratio is nearly 10.0%) whereas the rate of change of bending frequencies during the
earlier stage of crack initiation and growth is much less (at a crack depth ratio of 10%, the rate of change of frequency with respect to crack depth ratio is only 1.0%); refer to Figures 12 and 17
(b). This could be easily understood since the influence of cracking on torsional inertia (due to its larger influence along the skin surface of the cylindrical shaft than its depth) will be much
higher than the bending inertia and the consequent changes in the rate of frequency change. Hence the rate of change of torsional frequency could very well be used as a very good indicator of the
presence of any small crack.
Considering Table 3 and value of torsional natural frequency of experimental measurements for uncracked shaft, it can be seen that the error between analytical and experimental values is less than
1.57%, indicating that the experimental measurements seem to have been done very carefully.
6. Conclusions
From the above experimental and numerical studies, the following contributions from this study can be presented.(1)In this paper, five spring models were developed to represent the ball-bearing
support effect, namely, six, eight, and twelve springs. It is seen that bearings with six springs, shown in Figure 2(a), give the best agreement between experimental and numerical results. This is
due to the fact that that this model closely represents the elasticity effects that exist between the two tight screws that connect the inner bearing to the cylindrical shaft and elasticity of the
support provided by the two frame supports 1 and 2.(2)The values of the experimental natural frequencies for vertical and horizontal transverse vibrations were not the same for all the different pair
of (vertical and horizontal) modes; consequently the difference in modeling the two orthogonal bearing support contacts by linear springs becomes very important so as to make the numerical values
closer to the measured experimental values; this has to be done very carefully.(3)From the modeling of a crack, in a cracked shaft, by an equivalent short beam, the best fit for the length of a shaft
element for first natural frequency is about 54.65mm while the best fit for second and third natural frequencies is between 30.65mm and 24.65mm, respectively. This gives an approximate ratio (=
effective crack length/effective bending length for the mode) of 1/12 to 1/16 for different modes. This also seems to be corroborated by the digitized time interval requirements for accuracy in
finite-difference-related numerical integration. The above relationship could be used as a first-level inspection scheme for determining the presence of cracking in a rotating shaft.(4)The third-mode
shape could be used as a good indicator of the presence of a crack in the shaft. This gives a much higher variation in mode shapes than the changes in frequencies that occur due to the presence of
the crack.(5)Vibration analysis for experimental results was successful in detecting the presence of a crack. These results show that it is possible to detect a crack, around the crack depth ratio of
20%, when the rates of frequency change (as a function of crack depth ratio) are plotted as a function of crack depth ratio (between 20% and 30% crack depth ratios, the rate of change variation is
found to be 3% to 4%). Instead if frequency changes were used as the crack indicator, then the changes are much smaller (between 20% and 30% crack depth ratios, the change in frequency ratio is
around 0.5% to 1.0%) than that shown by the rate of change of frequency (with respect to crack depth).(6)Monitoring the first torsional frequency [with regards to its rate of change (with respect to
crack depth ratio)] gives a much better indication of the crack presence (at a 10% crack depth ratio, the rate of change of frequency is around 10%) than the monitoring of bending frequencies for its
rate of change with respect to crack depth ratio (at a 10% crack depth ratio, the rate of change of frequency is around 1%).
The authors would like to express their sincere gratitude to the staff of the Structural Lab of the Faculty of Engineering and Applied Science at the Memorial University.
1. J. Wauer, “On the dynamics of cracked rotors: a literaturesurvey,” Applied Mechanics Reviews, vol. 43, no. 1, pp. 13–17, 1990.
2. A. D. Dimarogonas, “Vibration of cracked structures: a state of the art review,” Engineering Fracture Mechanics, vol. 55, no. 5, pp. 831–857, 1996. View at Publisher · View at Google Scholar ·
View at Scopus
3. G. Sabnavis, R. G. Kirk, M. Kasarda, and D. Quinn, “Cracked shaft detection and diagnostics: a literature review,” The Shock and Vibration Digest, vol. 36, no. 4, pp. 287–296, 2004. View at
4. R. Q. Munoz, J. A. S. Ramirez, and J. S. Kubiak, “Rotor modal analysis for a rotor crack detection,” in Proceedings of the 15th International Modal Analysis Conference (IMAC' 97), vol. 1, pp.
877–879, February 1997. View at Scopus
5. G. D. Gounaris and C. A. Papadopoulos, “Crack identification in rotating shafts by coupled response measurements,” Engineering Fracture Mechanics, vol. 69, no. 3, pp. 339–352, 2002. View at
Publisher · View at Google Scholar · View at Scopus
6. L. Hamidi, J. Piaud, and M. Massoud, “A study of crack influence on the modal characteristics of rotors,” in Proceedings of the International Conference on Vibrations in Rotating Machinery, pp.
283–288, Bath, UK, 1992, No. C432/066.
7. T. C. Tsai and Y. Z. Wang, “Vibration analysis and diagnosis of a cracked shaft,” Journal of Sound and Vibration, vol. 192, no. 3, pp. 607–620, 1996. View at Publisher · View at Google Scholar ·
View at Scopus
8. A. Zakhezin and T. Malysheva, “Modal analysis rotor system for diagnostic of the fatigue crack,” in Proceedings of the Condition Monitoring Conference, St. Catherine's College, Oxford, UK, 2001.
9. S. A. Adewusi and B. O. Al-Bedoor, “Detection of propagating cracks in rotors using neural networks,” in Proceedings of the American Society of Mechanical Engineers, Pressure Vessels and piping
Division Conference, vol. 447, pp. 71–78, Vancouver, Canada, August 2002. View at Scopus
10. S. Prabhakar, A. S. Sekhar, and A. R. Mohanty, “Detection and monitoring of cracks using mechanical impedance of rotor-bearing system,” Journal of the Acoustical Society of America, vol. 110, no.
5, pp. 2351–2359, 2001. View at Publisher · View at Google Scholar · View at Scopus
11. P. Pennacchi and A. Vania, “Diagnostics of a crack in a load coupling of a gas turbine using the machine model and the analysis of the shaft vibrations,” Mechanical Systems and Signal Processing,
vol. 22, no. 5, pp. 1157–1178, 2008. View at Publisher · View at Google Scholar · View at Scopus
12. L. S. Dorfman and M. Trubelja, “Torsional monitoring of turbine-generators for incipient failure detection,” in Proceedings of the 6th EPRI Steam Turbine/Generator Workshop, pp. 1–6, St. Louis,
Mo, USA, August 1999.
13. S. H. Cho, S. W. Han, C. I. Park, and Y. Y. Kim, “Noncontact torsional wave transduction in a rotating shaft using oblique magnetostrictive strips,” Journal of Applied Physics, vol. 100, no. 10,
pp. 104903–104906, 2006. View at Publisher · View at Google Scholar · View at Scopus
14. H. J. Petroski, “Simple static and dynamic models for the cracked elastic beams,” International Journal of Fracture, vol. 17, no. 4, pp. R71–R76, 1981. View at Publisher · View at Google Scholar
· View at Scopus
15. A. Rytter, Vibrational based inspection of civil engineering structures, Ph.D. thesis, Department of Building Technology and Structural Engineering, University of Aalborg, Aalborg, Denmark, 1993.
16. “McMaster-Carr (Princeton, New Jersey, USA) supplies products (including bearings) used to maintain manufacturing plants and large commercial facilities worldwide,” http://www.mcmaster.com/.
17. ANSYS 13, Section 3.12.7 on Limitations of Lumped Matrix Formulation with Beam, Pipe, or Shell Elements, SAS IP, Cheyenne, Wyo, USA, 13th edition, 2010.
18. Laboratory Handout, Determination of Moment of Inertia, Department of Mechanical Engineering, Dalhousie University, Halifax, NS, Canada, 2007.
19. X. Yang, Vibration based crack analysis and detection in beam using energy method, Ph.D. thesis, Faculty of Engineering, Memorial University, St. John's, NL, Canada, 2001.
20. S. Rao, Mechanical Vibrations, Addison Wesley, New York, NY, USA, 1995.
|
{"url":"http://www.hindawi.com/journals/ame/2012/519471/","timestamp":"2014-04-16T19:21:08Z","content_type":null,"content_length":"120623","record_id":"<urn:uuid:a03e6458-0795-4b64-908d-b6e25c55a9b6>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Written Exam Workshop Fall '09
Written Exam Workshop
Fall 2009
Will Perkins
office 1111
office hours: Mondays, 7 - 8 pm (note the change)
practical information
Week 1 (9/17) - Calculus
Topics: Limits and Series; tests of convergence
Week 2 (9/24) - Calculus
(there will be a subsitute instructor this week)
Topics: Multivariable calc; Green's / Stokes' / Divergence Theorem. Polar / spherical coordinates. Tangents, normals, Implicit Function Theorem. Optimization Problems.
Homework (for today): S07 #2, S06 #3, S05 #4, J05#4, S03 #5, S03 #4
Week 3 (10/1) - Linear Algebra
Topics: Solving Linear Equations; Det, Trace, Characteristic Poly; Diagonalizing; Block Matrices; Projections
Homework: S98 #5, J00 #3, J02 #1, S02 #2, S03 # 1, J07 #1
Week 4 (10/8) - Advanced Calculus
Topics: Improper Integrals, Limits of Integrals, Infinite Products.
90-second student presentations on special types of matrices
Homework: S03 #1, J07 #2, J04 #2, J99 #2, S98 #4, S98 #3, S96 #1
Week 5 (10/15) - Complex
Topics: Conformal Maps; Liouville's Theorem; Schwartz's Lemma; Rouche's Theorem
Homework: S07 #3, S07 #4, S07 #5, J07 #4, J06 #5, S05 #5
Week 6 (10/22) - Linear Algebra
Topics: Decompositions; Graham Schmidt; Rayleigh Quotient
Homework: J07 #4, J07 #5, S04 #3, J04 #3, J01 #4, J98 #5
Week 7 (10/29) - Advanced Calculus
Topics: Optimization, Functional Equations, Fourier Stuff,
Week 8 (11/5) - Complex
Topics: Analytic continuation, branch cuts, Schwartz-Christoffel mapping,
Homework: J98 #4, S98 #5, S99 #4, J01 #1, J02 #5, J02 #1
Week 9 (11/12) - Advanced Calculus
Topics: More Multivariable Calculus
Homework: No specific problems assigned - we'll be going over the basic methods. But look through the past problems and come in with any that you have questions about.
Week 10 (11/19) - Linear Algebra
Topics: Jordan form, least squares, + assorted problems
Homework: J 98 #2, J 96 #4, S 96 #3, J 97 #2, J 99 #5, J 02 #2, J 06 #5
Thanksgiving (11/26)
Week 11 (12/3) - Complex
Topics: Harmonic functions, Poisson Integral Formula, Infinite Products,
Homework: S95 #4, S99 #5, J03 #4, J05 #5, S05 #4, J07 #2
Week 12 (12/10) - Review
Topics: Review of all 3 subjects + practice problems of crucial types.
Homework: Prepare and bring to class an outline of each subject, dividing it into different categories and types of problems. For the types of problems you still need to review, add references and
practice problems.
practical information
1. The most important thing for you to do is pick up a copy of the past written exams from Tamar's office!
2. Each week I'll assign 6 or so past written exam problems to do for the next week. I strongly enocurage you to do/try them. We'll go over the key points of the solutions in class , but if you'd
like to see the details flushed out, just stop by my office.
3. Please print out the worksheet and any handouts before each class. I'll bring a few copies in case you forget, but not enough for the whole class.
4. Give me lots of feedback about what you want covered, how fast we're going, etc. The mix of the topics covered should depend on what you guys want to learn.
5. The best advice I can give you? Do lots and lots of problems!
1. Evan Chou's Writtens Wiki
2. Andrew Suk's Writtens Solutions
3. Miranda Holmes' Fall 07 Written Workshop page
Linear Algebra
|
{"url":"http://www.cims.nyu.edu/~perkins/writtenF09/index.html","timestamp":"2014-04-20T16:09:59Z","content_type":null,"content_length":"12882","record_id":"<urn:uuid:fd9710ce-4bc9-478d-b4ca-4c05bbb35ed8>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00656-ip-10-147-4-33.ec2.internal.warc.gz"}
|
hello everyone
August 9th 2007, 03:32 AM #1
Aug 2007
hello everyone
hi all, am just new to this forum.. and am hope that this forum will help me out in my maths problems.actually a math question brought me here...and i really need to get a solution for that
question.. wish everyone will help me out..
here is the question
when the selling price of a packet of 20 cigarettes is $2.5, the weekly demand in the UK is 11 million and when the selling price is $4 the weekly demand is 7.15million.
a) find the demand function stating q, the weekly demand in millions, in terms of p, the selling price in $, assuming that this is a linear relationship.
b) suppose that the selling price of a packet of cigarettes is made up of two elements, the sellers revenue per packet and the tax paid to the exchequer per packet . you are given that the
sellers revenue is fixed at $1.25 per packet although the tax paid to the exchequer per packet can be changed. find an expression for each of the following in terms of p, the selling price in $
(1) C- the total combined weekly revenue measured in $millions.
(2) S- the total seller's weekly revenue measured in $millions.
(3) E- the total exchequer's weekly revenue measure in $millions
c) (1) suppose now that packets of 20 cigarettes are sold for $4.75 what is the resulting total exchequer's weekly revenue?
(2) if the tax per packet is increased to $4.50, find the new total exchequer's weekly revenue and comment on the result including a recommendation to the exchequer.
so i will be very thankful if someone find me the answers. thanks a lot
I have analyzed this last night. I got stalled at the
"...find an expression for each of the following in terms of p, the selling price in $
(1) C- the total combined weekly revenue measured in $millions.
(2) S- the total seller's weekly revenue measured in $millions.
(3) E- the total exchequer's weekly revenue measure in $millions"
Why in terms of p only? When p = 1.25 +t? Should it not be in terms of t?
t = tax component of p.
Anyway, to start the ball rolling, here's what I started last night.
a) find the demand function stating q, the weekly demand in millions, in terms of p, theselling price in $, assuming that this is a linear relationship.
"Assuming... a linear relationship." So the graph is a straight line. Two points determine a straight line.
"when the selling price of a packet of 20 cigarettes is $2.5, the weekly demand in the UK is 11 million"
Using (p,q) ordered pair,
(2.5,11) ---------------------one point
"and when the selling price is $4 the weekly demand is 7.15million."
(4,7.15) ------------other point.
We can get the equation of the line by using the point-slope form. Say, we use the point (2.5,11),
(q -11) = [(11 -7.15)/(2.5 -4)](p -2.5)
q -11 = (-2.566667)(p -2.5)
q = -2.566667p -6.416667 +11 <----wrong. Should be +6.416667
q = -2.566667p +6.416667 +11
q = -2.566667p +17.416667 --------the demand function. (corrected)
Let's see if I can go around the confusion re the C,S,E tonight.
Last edited by ticbol; August 9th 2007 at 11:44 PM.
thank you very much for your help ,, and would u please try for other parts.. thanks again. take care
Okay, to continue, ....
b) suppose that the selling price of a packet of cigarettes is made up of two elements, the sellers revenue per packet and the tax paid to the exchequer per packet . you are given that the
sellers revenue is fixed at $1.25 per packet although the tax paid to the exchequer per packet can be changed. find an expression for each of the following in terms of p, the selling price in $
(1) C- the total combined weekly revenue measured in $millions.
(2) S- the total seller's weekly revenue measured in $millions.
(3) E- the total exchequer's weekly revenue measure in $millions
So, p = $(1.25 +t), where t = tax component of the price.
Revenue = demand*price -----**
demand = q = -2.566667p +17.416667 -----from previous reply.
price = p
C = [-2.566667p +17.416667](p) ----------answer.
S = [-2.566667p +17.416667](1.25) -------answer.
E = [-2.566667p +17.416667](p -1.25) ----answer.
c) (1) suppose now that packets of 20 cigarettes are sold for $4.75 what is the resulting total exchequer's weekly revenue?
E = [-2.566667p +17.416667](p -1.25)
E = [-2.566667(4.75) +17.416667](4.75 -1.25)
E = [5.224999](3.50)
E = 18.287496 million dollars ---------------answer.
(2) if the tax per packet is increased to $4.50, find the new total exchequer's weekly revenue and comment on the result including a recommendation to the exchequer.
then, p = 1.25 +4.50 = $5.75
E = [-2.566667p +17.416667](p -1.25)
E = [-2.566667(5.75) +17.416667](5.75 -1.25)
E = [2.658332](4.50)
E = 11.962494 million dollars ---------------answer.
The 11.962494 million dollars is much less than the 18.287496 million dollars when the tax was $3.50 per pack only. The increase of $1.00 in tax resulted in a loss of (18.287496 -11.962494 =
....) 6.325 million dollars. Because the demand after the $1.00 increase in tax was almost half only of the demand when there was no tax inctrease yet.
(2.658332 / 5.224999) = 0.50877 -----about half.
Less demand, less revenue.
Recommendation to the exhequer?
What the heck, "You are not good at Math, Echequer! You increase the tax so that you can collect more? Huh!! Why not hire one who can show you simple Math?"
Now that I am sober, I'd recommend to Taxman to:
a) If I am against smoking (I am!)...go ahead, keep on increasing the tax. Until the smokies are so expensive the demand would come to a thousand packs per week only. Less demand, less
production, less smoke.
b) If I am for smoking, or if I am included in those benefitting from sales of cigarettes like the Taxman and the manufacturers....reduce the tax!
Say the tax is reduced $1.00 from the original $3.50. So p = 2.50 +1.25 = $3.75 per pack.
E = [-2.566667(3.75) +17.416667](2.50) = 19.48 million dollars.
Umm, an increase of about 1.2 million dollars only for the Taxman. But for the manufacturers, a gain of about 3.2 millions dollars. And for the smokers, an increase of about 2.6 million packs per
week. More than enough to smoke out the cobwebs inside the lungs and the linings of the throat.
Last edited by ticbol; August 11th 2007 at 01:05 PM.
August 9th 2007, 12:05 PM #2
MHF Contributor
Apr 2005
August 9th 2007, 12:22 PM #3
Aug 2007
August 10th 2007, 03:25 AM #4
MHF Contributor
Apr 2005
|
{"url":"http://mathhelpforum.com/calculus/17633-hello-everyone.html","timestamp":"2014-04-16T17:05:30Z","content_type":null,"content_length":"38448","record_id":"<urn:uuid:a2ca6b6b-e5ce-4892-9833-0b9a7ec65047>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rolling Meadows Algebra Tutor
...I have also developed an efficient and effective approach to significantly expanding vocabulary knowledge that I used for my own test preparation and that I recommend to everyone I tutor. MY
RECENT TEST SCORES: GMAT: 760 GRE: Quantitative 168/170 (perfect 800 on earlier version) Verbal 168/170...
38 Subjects: including algebra 1, reading, algebra 2, statistics
...Exponential and Logarithmic Models. Trigonometric Functions OF ANGLES. Angle Measure.
17 Subjects: including algebra 1, algebra 2, reading, geometry
...My prior work background and experience for 30+ years consists of providing structural engineering services for large and small firms in the private sector (commercial, institutional,
educational, and residential). I prepared organized, systematic and detailed structural building calculations in ...
10 Subjects: including algebra 1, algebra 2, geometry, GED
...As an undergraduate math major, I volunteered with a program that tutored local middle school students in math. I worked in small groups or individually with mostly 7th and 8th graders to help
them keep up with the pace of the class. I also assisted them with homework problems.
5 Subjects: including algebra 1, statistics, prealgebra, probability
...I have my bachelors in engineering and I can help you improve your grades and even score better in an exam. I have worked with high school students as well as college students. I have helped
students excel in various exams.
14 Subjects: including algebra 1, algebra 2, geometry, GRE
|
{"url":"http://www.purplemath.com/Rolling_Meadows_Algebra_tutors.php","timestamp":"2014-04-21T15:08:32Z","content_type":null,"content_length":"23970","record_id":"<urn:uuid:6c936d8a-7ec1-4270-bdad-12e1b586ad71>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00207-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Symbol of pseudodiff operator
up vote 5 down vote favorite
I am trying to understand the calculus of pseudodifferential operators on manifolds. All the textbooks I could put my hand on define the principal symbol of a pseudodifferential operator locally,
then prove that it transforms well, hence becomes a "global" object. Is there any good way to define the the principal symbol without coordinate patches? Am I asking too much here :)?
First, do you know how to define a pseudodifferential operator without using local co-ordinates? That seems like the hardest step to me. I haven't tried to work out details, but it seems possible
to me that you can then define the symbol by studying how the operator acts on functions that vanish at a point to the appropriate order modulo those that vanish one order higher. – Deane Yang Sep
20 '11 at 18:44
I don't think what I suggest above works for a pseudodifferential operator, but it does work for a differential operator. But if you know how to define what a pesudodifferential operator is without
using co-ordinates, that might provide a hint on how to isolate the symbol from the operator. – Deane Yang Sep 20 '11 at 18:54
add comment
4 Answers
active oldest votes
There is an invariant way of defining pseudodifferential operators, and a (much simpler and quite classical) invariant way of defining symbols.
The latter appears already in the old Atiyah-Singer volume from the early '60's. Choose any point $(x_0, \xi_0)$ in the cotangent bundle. Choose a function $\phi \in \mathcal C^\infty(M)
$ such that $d\phi(x_0) = \xi_0$, and then set $\sigma_m(A)(x_0,\xi_0) = \lim_{\lambda\to\infty} \lambda)^{-m} e^{-i\lambda \phi}A( e^{i\lambda \phi})$ (perhaps I am missing a factor of
$i$). Here $A$ is a psido of order $m$. This is pretty direct and ``natural''.
As for the coordinate-free definition of pseudodifferential operators, the first step is to define the notion of a conormal (or polyhomogeneous conormal) distribution on a manifold $X$
relative to a (closed, embedded) submanifold $Y$. Such a distribution $u$ lies in some fixed (Banach or Hilbert) space $H$ -- for example, a weighted $L^\infty$ space, $r^s L^\infty$,
where $r$ is the distance to $Y$ in $X$ and $s$ is any fixed real number -- and is stably regular in this space, i.e. $V_1 \ldots V_k u \in r^s L^\infty$ for all positive integers $k$
and for all vector fields on $X$ which are smooth and unconstrained away from $Y$, but which are tangent to $Y$.
Finally, a linear operator $A$ on a smooth manifold $M$ (which satisfies some weak continuity requirements) has a Schwartz kernel $K_A$, which is a distribution on $M \times M$. The
operator $A$ is a pseudodifferential operator if $K_A$ is conormal with respect to the diagonal in $M \times M$.
up vote 10
down vote A classical, or polyhomogeneous, distribution is conormal and also has an expansion in ascending powers of $r$ and positive integer powers of $\log r$.
If $K_A$ satisfies this condition, then one can transfer it to a distribution on the normal bundle of the diagonal in $M \times M$, supported near the zero section (it is smooth
elsewhere anyway). Then its Fourier transofrm in the fibres of the normal bundle is a symbol in the usual sense, and vice versa, any symbol on these fibres has F.T. which is conormal to
the zero section and hence, by transferal, to the diagonal in $M \times M$.
The one unsatisfactory thing about this definition is that it is not apparent that if $A$ and $B$ are psido's, then so is their composition $A \circ B$, nor does one ``immediately'' get
a symbol calculus, i.e. the fact that the symbol mapping is a homomorphism.
Anyway, this is a down-to-earth and very useful definition of pseudodifferential operators which allows for all sorts of interesting generalizations. This definition, or certainly the
emphasis on this formulation, is due to Melrose, but appears already in Vol. 3 of H\"ormander.
Rafe Mazzeo
Great answer, Rafe. – Deane Yang Sep 21 '11 at 1:08
add comment
There are several global approaches to pseudo-diffops. All of them seem to need some additional geometric objects, a connection. So suppose that you want to have $\Psi$DO's on some vector
bundle $E \longrightarrow M$ then you need a linear connection on $E$ as well as one on the tangent bundle. Out of this you can build via adaptions of the usual integral formulas for the Weyl
up vote quantization on flat $\mathbb{R}^n$ a symbol calculus, intrinically global and also allowing for a total symbol and not a leading one only. You can find these kind of approaches in the works
6 down of Widom in the 80's if I remember correctly. There are more recent approaches by Pflaum as well as by Safarov. If you're interested in the relation to star products and quantization of
vote cotangent bundles (which is essentially the pullback of the operator product to the symbols) then you may want to take a look at the work of Bordemann, Neumaier, Pflaum and myself :)
add comment
I think that Hormander beautiful short paper
Pseudo-differential operators, Comm. Pure Appl. Math. 18 1965, 501–517
up vote 2 down vote
is always a good place to start. In this paper he gives a coordinate free definition of a (scalar) pseudo-differential operator. This paper did it for me.
add comment
It's possible to define a global symbol (not just principal symbol) on a manifold endowed with a linear connection (for example a Levi-Civita connection in the case of Riemannian geometry).
Essentially, the linear connection provides enough 'geometry' on the manifold to replicate all the results that one has on flat space, and we can define the symbol as a function on the
up vote 1 cotangent bundle T*M. The details can be found in Yuri Safarov's 1995 paper here: http://plms.oxfordjournals.org/content/74/2/379.short.
down vote
add comment
Not the answer you're looking for? Browse other questions tagged differential-operators or ask your own question.
|
{"url":"http://mathoverflow.net/questions/75976/symbol-of-pseudodiff-operator/75983","timestamp":"2014-04-18T18:21:03Z","content_type":null,"content_length":"67388","record_id":"<urn:uuid:08e91ae1-5dfa-4d56-8d82-f7478f6cc7ef>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
|
tomic orbitals
Atomic orbitals: 6p
The shape of the three 6p orbitals. From left to right: 6p[z], 6p[x], and 6p[y]. For each, the blue zones are where the wave functions have negative values and the gold zones denote positive values.
For any atom, there are three 6p orbitals. These orbitals have the same shape but are aligned differently in space. The three 6p orbitals normally used are labelled 6p[x], 6p[y], and 6p[z] since the
functions are "aligned" along the x, y, and z axes respectively.
Each 6p orbital has ten lobes. There is a planar node normal to the axis of the orbital (so the 6p[x] orbital has a yz nodal plane, for instance). Apart from the planar node there are also four
spherical node that partition off the small inner lobes. The 7p) orbital is more complex still since it has even more spherical nodes.
The origin of the planar node becomes clear if we examine the wave equation which, for instance, includes an x term in the case of the 6p[x] orbital. Clearly When x = 0, then we must have a node, and
this by definition is the yz plane.
The origin of the spherical nodes becomes clearer if we examine the wave equations, which include a (840 - 840ρ + 252ρ^2 - 28ρ^3 + ρ^4) term. When (840 - 840ρ + 252ρ^2 - 28ρ^3 + ρ^4) = 0, then we
must have nodes. While not trivial, we can solve this on a case-by-case basis to determine the position of the nodes.
|
{"url":"http://winter.group.shef.ac.uk/orbitron/AOs/6p/index.html","timestamp":"2014-04-19T01:49:01Z","content_type":null,"content_length":"17262","record_id":"<urn:uuid:287148de-63f1-437f-be2d-62c8574e5a56>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding Taylor series for f
November 17th 2011, 10:40 PM
Finding Taylor series for f
Hello there, I was having some trouble with this problem:
Find the Taylor series for f centered at 4 if:
$\[f^n(0) = \frac{(-1)^nn!}{3^n(n+1)}\]$
What is the radius of convergence of the Taylor series?
Help appreciates.
November 17th 2011, 11:37 PM
Re: Finding Taylor series for f
To find the Taylor series for f centered at 4, you need the values of $f^n(4)$.
November 18th 2011, 03:44 AM
Re: Finding Taylor series for f
The Taylor expansion of f(*) around z=0 is...
$f(z)= \sum_{n=0}^{\infty} \frac{(-1)^{n}\ z^{n}}{3^{n}\ (n+1)}$ (1)
... and the series (1) has radius of convergence 3 because the singularity of the function in z=-3. Therefore the series of f(*) centered in z=4 has radius of convergence 7...
Kind regards
November 18th 2011, 09:38 PM
Re: Finding Taylor series for f
The problem is ill defined, we have no sufficient information about $f$ . For example
$f(x)=\begin{Bmatrix} \displaystyle \sum_{n=0}^{\infty}\dfrac{(-1)^n}{3^n(n+1)}x^n& \mbox{if}& |x|<3\\x-4 & \mbox{if}& |x|\geq 3\end{matrix}\quad g(x)=\begin{Bmatrix} \displaystyle \sum_{n=0}^{\
infty}\dfrac{(-1)^n}{3^n(n+1)}x^n& \mbox{if}& |x|<3\\|x-4| & \mbox{if}& |x|\geq 3\end{matrix}$
We have $f^{(n)}(0)=g^{(n)}(0)=\dfrac{(-1)^nn!}{3^n(n+1)}$ . However: a) The Taylor series of $f$ centered at $4$ has radius of convergence $+\infty$ . b) The Taylor series of $g$ centered at $4$
does not exist .
November 19th 2011, 01:07 AM
Re: Finding Taylor series for f
The Taylor expansion of f(*) around z=0 is...
$f(z)= \sum_{n=0}^{\infty} \frac{(-1)^{n}\ z^{n}}{3^{n}\ (n+1)}$ (1)
... and the series (1) has radius of convergence 3 because the singularity of the function in z=-3. Therefore the series of f(*) centered in z=4 has radius of convergence 7...
May be that a further explanation from me is necessary... if we consider the series expansion...
$g(x)= \sum_{n=0}^{\infty} \frac{(-1)^{n}\ x^{n}}{3^{n}}= \frac{1}{1+\frac{x}{3}}$ (1)
...it is not too difficult to obtain...
$f(x)= \sum_{n=0}^{\infty} \frac{(-1)^{n}\ x^{n}}{3^{n}\ (n+1)}= \frac{1}{x}\ \int_{0}^{x} g(\xi)\ \d \xi = \frac{3\ \ln (1+\frac{x}{3})}{x}$ (2)
Now the f(*) defined in (2) is analytic for $x>-3$ and that means that its series expansion centered in $x_{0}>-3$ has radius of convergence $x_{0}+3$ ...
Kind regards
November 19th 2011, 01:48 AM
Re: Finding Taylor series for f
Well, independently of your explanation I would insist that the only solution to the problem is: The radius of convergence of the Taylor series for $f$ centered at $x_0=4$ (if exists) can be any
$\rho\in[0,+\infty]$ depending on the expression of $f^{(n)}(4)$.
|
{"url":"http://mathhelpforum.com/calculus/192159-finding-taylor-series-f-print.html","timestamp":"2014-04-17T04:43:36Z","content_type":null,"content_length":"14742","record_id":"<urn:uuid:19967ae2-66e6-40f8-9be7-bc820002984e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Differentiable FUnctions
September 29th 2010, 01:12 AM #1
Junior Member
Sep 2010
Differentiable FUnctions
a) Prove from definition that if
f, g : R -->R are differentiable functions, then the product function fg is also differentiable.
(b) Prove that if h : R --->R is a differentiable function that satisties
|h(x)-h(y)|< or = |x-y|^1.5
for all x; y elements of R, then h(x)=0 for all x.
(c) Give an example of a function k : R --->R such that k is differentiable but not twice differentiable.
For functions on the real numbers, the definition of "differentiable" at a point is that the derivative exist at that point. So to prove that "if f and g are differentiable at x= a, then fg is
differentiable there", the best thing to do is to follow the derivation of the product rule.
The derivative of fg at x= a is $\displaystyle\lim_{h\to 0}\frac{f(a+h)(g(a+h)- f(a)g(a)}{h}= \lim_{h\to 0}\frac{f(a+h)g(a+h)- f(a)g(a+h)+ f(a)g(a+h)- f(a)g(a)}{h}$ and now separate into two
fractions. Use the fact that f and g, separately are differentiable. Also, you will need to use the fact that, since f is differentiable, f is continuous.
For (b), use the mean value theorem: for any x and y, there exist c between x and y, such that h(x)- h(y)= h'(c)(x- y). Take the absolute value of each side and use the fact that $|h(x)- h(y)|\le
|x- y|^{1.5}$.
For (c), start with a function that is continuous but not differentiable (there is a well known example) and integrate it!
For functions on the real numbers, the definition of "differentiable" at a point is that the derivative exist at that point. So to prove that "if f and g are differentiable at x= a, then fg is
differentiable there", the best thing to do is to follow the derivation of the product rule.
The derivative of fg at x= a is $\displaystyle\lim_{h\to 0}\frac{f(a+h)(g(a+h)- f(a)g(a)}{h}= \lim_{h\to 0}\frac{f(a+h)g(a+h)- f(a)g(a+h)+ f(a)g(a+h)- f(a)g(a)}{h}$ and now separate into two
fractions. Use the fact that f and g, separately are differentiable. Also, you will need to use the fact that, since f is differentiable, f is continuous.
For (b), use the mean value theorem: for any x and y, there exist c between x and y, such that h(x)- h(y)= h'(c)(x- y). Take the absolute value of each side and use the fact that $|h(x)- h(y)|\le
|x- y|^{1.5}$.
For (c), start with a function that is continuous but not differentiable (there is a well known example) and integrate it!
For (b) i obtain $|h'(c)(x- y)|=|h(x)- h(y)|\le |x- y|^{1.5}$.
therefore $|h'(c)(x- y)|\le |x- y|^{1.5}$. Is there any clue how I can get h(x)=0 for all x?
Dividing both sides by $|x- y|$, $|h'(c)|\le |x- y|^{.5}$ since that is a positive power, if we take x and y very close together, the right side can be as close as we please to 0. That implies
that h'(c) is 0 for all c- that is, that h(x) is a constant. Now show that that constant must be 0.
Dividing both sides by $|x- y|$, $|h'(c)|\le |x- y|^{.5}$ since that is a positive power, if we take x and y very close together, the right side can be as close as we please to 0. That implies
that h'(c) is 0 for all c- that is, that h(x) is a constant. Now show that that constant must be 0.
So in order to show that the constant must be 0, do we show that h(c) passes through the origin at (0,0)?
September 29th 2010, 05:01 AM #2
MHF Contributor
Apr 2005
September 29th 2010, 10:31 PM #3
Junior Member
Sep 2010
September 30th 2010, 04:07 AM #4
MHF Contributor
Apr 2005
September 30th 2010, 04:20 AM #5
Junior Member
Sep 2010
|
{"url":"http://mathhelpforum.com/calculus/157807-differentiable-functions.html","timestamp":"2014-04-20T06:56:14Z","content_type":null,"content_length":"47246","record_id":"<urn:uuid:dc795dd3-add3-4bfd-90d2-e7aa27aa5750>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Isn't Algebra Necessary?
A recent New York Times article ponders if we should downgrade mathematics taught to high school and college students, and in particular, cut basic algebra.
Seriously? A horizontal line may represent an unknown word in those fill-in-the-blank primary school comprehension tests ("The dog’s name is __."), but a letter should never represent an unknown
number lest it cause undue mental stress?
Among my first thoughts was that the article was a professional troll posting. After all, The New York Times is sadly going through a rough patch, and I sympathize if they must occasionally stoop
lower to catch some extra cash. (If it is a troll posting, hats off! You got me.)
But the truth is probably mundane; it seems the author genuinely believes that algebra should be dropped.
On the one hand, this benefits me. If the article is taken seriously, and algebra is withheld from the masses, then those of us who know it possess formidable advantages. (The conspiracy theorist in
me wonders if the author actually finds elementary algebra, well, elementary, and the true intent is to get ahead by encouraging everyone else to dumb down.)
On the other hand, the piece smacks of ignorance-is-strength propaganda, and thus is worth smacking down.
The article suggests that, instead of algebra, classes should perhaps focus on how the Consumer Price Index is computed. I agree studying this is important: for example, I feel more attention should
be drawn to the 1996 recommendations of the Boskin commission. If the Fed did indeed repeat the mistakes of the 1970s, then I should bump up the official US inflation rate when analyizing my
finances. However, this stuff belongs to disciplines outside mathematics.
More importantly, what use is the CPI without algebra? Take a simple example: say I owe you $1000, and the inflation rate is 5%. If all you care about is keeping up with inflation, is it fair if I
pay you back $120 annually for 10 years? If not, what is the right amount?
Without algebra, you might be able to figure that $1000 today is the same as 1000×(1.05)^10 = $1628.89 in 10 years. But how are you going to figure out that the yearly payment should be 0.05×1628.9/
(1.05^10 - 1)? The easiest way to arrive here is to temporarily treat 1.05 as an abstract symbol. In other words, elementary algebra. One does need to play this ballgame for personal finance after
You might counter that an amortized loan calculator can work out the answer for you; there’s no need to understand how it works, right?
Ignorance begets fraud
In the above calculation, do I make my first payment today, or a year from now? Don’t worry, I’ll figure it out for you. Or perhaps I’ll claim you’re using the wrong mode on the calculator and
helpfully retrieve the "right" formula for you.
Maybe you’d avoid these shenanigans by entrusting an accountant to oversee deals like this. Okay, but what if it’s not a loan? Say you’re making a policy recommendation and I’m an disingenuous
lobbyist: can you tell if I’m fudging my figures?
I heard a story about Reagan’s SDI program. Scientists estimated a space laser required 10^20 units of energy, and current technology could generate 10^10 units. They got funding by saying they were
halfway there.
I hope this tale is apocryphal. Nevertheless, one can gouge the mathematically challenged just as unscrupulous salesmen rip off unwitting buyers. Unfortunately, with finance and government policy,
damage caused by bad decisions can be far worse and longer lasting.
Fermat’s Last … Dilemma?
One bright spot in the article was the mention of "the history and philosophy of [mathematics], as well as its applications in early cultures". While not required to solve problems, knowing the
background to famous discoveries makes a subject more fun.
It is inspiring that within a few short school years we enjoy the fruits of thousands of years of labour. Perhaps a student struggling with negative numbers would feel better knowing that it took
many generations for them to be socially acceptable. For instance, the Babylonians were forced to divide the quadratic equation into different cases because they rejected negative numbers on
philosophical grounds.
But at the same time, we see a mention of "Fermat’s dilemma", which charitably is a creative renaming of "Fermat’s Last Theorem" (though more likely there was some confusion with the "Prisoner’s
Dilemma" from game theory). The author chose this example poorly, because the history of Fermat’s Last Theorem actually bolsters the case for algebra. It shows how a little notation goes a long way.
For Fermat did not use symbolic algebra to state his famous conjecture. Instead, he wrote:
Cubum autem in duos cubos, aut quadrato-quadratum in duos quadrato-quadratos, et generaliter nullam in infinitum ultra quadratum potestatem in duos eiusdem nominis fas est dividere cuius rei
demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet.
(If it took him that many words to state the theorem, no wonder he had no space for a proof!)
We have it easy today. Mathematics would be considerably harder if you had to compute amortized loan payments with Latin sentences instead of algebra.
How could a writer fail to appreciate algebra? Strunk taught that "vigorous writing is concise." Which is more concise: the above, or "x^n + y^n = z^n has no positive integer solutions for n > 2"?
What should we learn?
Some time ago, I arrived at the opposite conclusion of the author, after reading confessions of professional academic ghostwriters. Algebra is fine; the courses that need reform are those far removed
from mathematics.
According to "Ed Dante", who is hopefully exaggerating, you can pass such courses so long as you have Amazon, Google, Wikipedia, and a decent writing ability. You get the same results and save money
by paying for an internet connection instead of university tuition.
I suppose I should also end on a positive note: I propose introducing ghostwriting courses, where the goal is to bluff your way through another course in the manner "Ed Dante" describes. The library
would be off-limits, and you must not have previously studied the target subject. Perhaps the first 3 assignments can be admissions essays: one each for undergraduate, master’s and doctoral programs.
Grading would be easy: if they fall for it, you get a good score.
With luck, universities would be forced to either beef up the victim degrees (perhaps by assessing students with something besides essays, or by teaching something that cannot be immediately learned
from the web), or withdraw them. Additionally, the students would learn the importance of writing, and be harder to fool.
|
{"url":"http://benlynn.blogspot.com/2012/08/isn-algebra-necessary_19.html","timestamp":"2014-04-17T15:26:42Z","content_type":null,"content_length":"67552","record_id":"<urn:uuid:c34401c0-f62a-4b63-bfa9-f0a9d528599f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Introduction to Polynomials - Problem 1
Polynomials; so in this particular problem we are just going to deal a little bit with some language and some specifics by polynomials. So up here I have g(x) which is a polynomial and the first
thing we are asked is to state the degree of each term.
So basically that’s just the power on x of these term. So just looking at here we have a degree 2 degree 1, x is just x to the first. Here we have x² and x so we can actually combine those. Remember
when we multiply our bases we can add our exponents so this actually can be combined to be x³ which is the same thing as a third degree.
Here is a constant term, so there is no x, x to 0 so the degree on this term is a 0 and lastly x² is a degree here is 2, okay easy enough. Let us take the degree of g(x) the entire polynomial. So for
this one we just look at the degree of the largest term. So for that one we have a 2, 1, 3, 0 and 2, our largest degree is 3. So the degree of the whole polynomial is 3.
Common mistake is to wanting to add up all these numbers together. Don’t just look at the term of the highest power.
Then it says write the degree in descending order, so descending order is when we write it from highest degree to lowest degree, so looking at it our third degree is going to go first. So we'll just
have g(x) is equal to 8x³.
Now we have a second degree we have a second degree 7x² and 4x² we can combine like terms so this becomes 11x² then we go to our first degree 5x and lastly our constant term minus 8. So just looking
at our polynomial talking a little bit about degrees and descending order how we organize everything.
polynomial descending order degree
|
{"url":"https://www.brightstorm.com/math/algebra-2/polynomials/introduction-to-polynomials-problem-1/","timestamp":"2014-04-21T05:08:06Z","content_type":null,"content_length":"57836","record_id":"<urn:uuid:48e177a6-9222-48c5-ae5c-9e03178218db>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to calculate Gross Profit Ratio (Formula and Example)
In order to calculate gross profit ratio we require cost of good sold and the net sales of a company. In other words we require the revenue and the cost of goods sold figures of a company. Revenue of
the company is also called its net sales. So the formula to calculate gross profit is as under
Net Sales= Gross Sales
Now gross profit will b calculated by deducting the cost of goods sold
Gross Profit = Gross Sales – Cost of goods sold
Again gross profit must not be messed up with the operating income. In order to calculate operating income we require net income that is the different between the gross profit and operating expenses
including taxes and interest payments.
Net Income = Gross Profit – operating expense – taxes – interest payments
Operating Profit = Gross Profit – Total operating expenses
In order to understand gross profit in a better way lets have an example of gross profit calculation. Suppose a company XYZ has revenue or net sales amount equals to $30,000. Now assume that cost of
goods sold is equal to the $20,000. Now to calculate gross profit we simply need to put the formula as mentioned above:-
Net Sales/ Revenue = $30,000
Cost of goods sole = $20,000
Gross Profit= 30,000 – 20,000
Gross Profit= $10,000
Now we can simple use gross profit to calculate gross profit ratio. In order to calculate gross profit ratio we will divide the gross profit by the total revenue or the net sales and will multiply
the answer with hundred.
Gross Profit Ratio= Gross Profit/ Revenue x 100
Gross Profit Ratio=10,000/30,000x100
Gross Profit Ratio= 30 percent
All the figures required to calculate gross profit and gross profit ratio are generated from the income statement of the company’s financial profile.
Gross profit between the two companies can be calculated to measure the degree of efficiency of a company to produce same level and number of goods. For example company A and company B have sales of
$ 1 million each. Assume that the cost of goods sold of company A is 90,000 dollars where as the cost of goods sold of company B is 80,000. The gross profit of company A is $100,000 where as the
gross profit of company B is $200,000. This means company B uses same resources more efficiently to produce same number of products as compared to company A. in other words Company B spends less in
producing the same amount of goods as produced by the company A.
|
{"url":"http://www.grossprofitformula.com/2012/05/how-to-calculate-gross-profit-formula.html","timestamp":"2014-04-20T15:51:09Z","content_type":null,"content_length":"97820","record_id":"<urn:uuid:883a3f37-cd12-4c8f-8d5f-2dda93a96cbb>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jarque-Bera test
The Jarque-Bera test is used to check hypothesis about the fact that a given sample x[S ] is a sample of normal random variable with unknown mean and dispersion. As a rule, this test is applied
before using methods of parametric statistics which require distribution normality.
This test is based on the fact that skewness and kurtosis of normal distribution equal zero. Therefore, the absolute value of these parameters could be a measure of deviation of the distribution from
normal. Using the sample Jarque-Bera statistic is calculated:
(here n is a size of sample), then p-value is computed using a table of distribution quantiles. It should be noted that as n increases, JB-statistic converges to chi-square distribution with two
degrees of freedom, so sometimes in practice table of chi-square distribution quantiles is used. However, this is a mistake - convergence is too slow and irregular.
For example, even if n = 70 (which is rather big value) and having JB = 5 chi-square distribution quantiles gives us p-value p = 0.08, whereas real p-value equals 0.045. So, we can accept the wrong
hypothesis. Therefore it's better to use the specially created table of Jarque-Bera distribution quantiles.
Distribution quantiles
To create this table, the Monte-Carlo method was used. The program in C++ had generated 3600000 samples of n normal numbers (at that, a high-quality random number generator was used). Having these
samples 3600000 values JB(x[S ]) were calculated. These values were used to construct tables of quantiles for given n. This was done for each n from {5, 6, 7, ..., 198, 199, 200, 201, 251, 301, 351,
..., 1901, 1951} . Total calculation time was several tens of machine hours.
The table created was too big (2.5 Mb in binary format), so the following step was to compress it: JB(x[S ]) distribution for the key n was saved using piecewise-polynomial approximation,
intermediate values are found using interpolation. For n > 1401 asymptotic approximation is used.
Quality of the table
We think that the approximation table is good enough for practice needs. You can find relative errors for different p-values in the following table:
p-value relative error (5≤N≤1951)
[1, 0.1] < 1%
[0.1, 0.01] < 2%
[0.01, 0.001] < 6%
[0.001, 0] wasn't measured
We should note that the most accurate p-values belong to interval [0, 0.01]. This interval is used to make a decision most often. Accuracy decreasing in [0.01, 0.001] is determined by the fact that
the less p-value, the less probability to get it if a null hypothesis is accepted, and the more tests are required to find the corresponding distribution quantile.
To calculate p-values in interval [0.001, 0] asymptotic approximation is used. The author believes that this method allows us to get credible results in a reasonable interval. The quality of such
approximation wasn't measured because of the considerable machine time required to perform such measurement.
Manual entries
C++ jarquebera subpackage
C# jarquebera subpackage
This article is intended for personal use only.
Download ALGLIB
C# source.
C++ source.
C++, multiple precision arithmetic
C++ source. MPFR/GMP is used.
GMP source is available from gmplib.org. MPFR source is available from www.mpfr.org.
FreePascal version.
Delphi version.
VB.NET version.
VBA version.
Python version (CPython and IronPython are supported).
|
{"url":"http://www.alglib.net/hypothesistesting/jarqueberatest.php","timestamp":"2014-04-18T08:02:27Z","content_type":null,"content_length":"12775","record_id":"<urn:uuid:8466e1f5-6bfd-4865-b4f6-3f6baa8faaea>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] ANN: NumPy 1.7.0b2 release
Frédéric Bastien nouiz@nouiz....
Mon Sep 24 15:25:44 CDT 2012
I tested this new beta on Theano and discovered an interface change
that was not there in the beta 1.
New behavior:
Old behavior:
This break some Theano code that look like this:
import numpy
out = numpy.zeros(out_shape, int)
for i in numpy.ndindex(*shape):
out[i] = random_state.permutation(5)
I suppose this is an regression as the only mention of ndindex in the
first email of this change is that it is faster.
There is a second "regression" in ndindex This was working in the
past, but it raise an ValueError now:
But If I call numpy.ndindex(2,1,1,1)
The documentation[1] do not talk about receiving a tuple as input. I
already make a commit to change Theano code to make it work. But this
could break other people code. It is up to you to decide if you want
this, but a warning in the release note would be great to help people
know that the old not documented behavior changed.
Do you know if the first change is expected? This will probably cause
bad results in some people code if you intended this change.
[1] http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndindex.html
On Thu, Sep 20, 2012 at 3:51 PM, Ondřej Čertík <ondrej.certik@gmail.com> wrote:
> On Thu, Sep 20, 2012 at 12:00 PM, Nathaniel Smith <njs@pobox.com> wrote:
>> On Thu, Sep 20, 2012 at 3:33 PM, Ondřej Čertík <ondrej.certik@gmail.com> wrote:
>>> On Thu, Sep 20, 2012 at 4:50 AM, Richard Hattersley
>>> <rhattersley@gmail.com> wrote:
>>>> Hi,
>>>> [First of all - thanks to everyone involved in the 1.7 release. Especially
>>>> Ondřej - it takes a lot of time & energy to coordinate something like this.]
>>>> Is there an up to date release schedule anywhere? The trac milestone still
>>>> references June.
>>> Well, originally we were supposed to release about a month ago, but it
>>> turned out there are more things to fix.
>>> Currently, we just need to fix all the issues here:
>>> https://github.com/numpy/numpy/issues/396
>>> it looks like a lot, but many of them are really easy to fix, so my
>>> hope is that it will not take long. The hardest one is this:
>>> http://projects.scipy.org/numpy/ticket/2108
>>> if anyone wants to help with this one, that'd be very much appreciated.
>> This particular bug should actually be pretty trivial to fix if anyone
>> is looking for something to do (esp. if you have a working win32 build
>> environment to test your work):
>> http://thread.gmane.org/gmane.comp.python.numeric.general/50950/focus=50980
> Ah, that looks easy. I'll try to give it a shot. See my repo here how
> to get a working win32 environment:
> https://github.com/certik/numpy-vendor
> However, I don't have access to MSVC, but I am sure somebody else can
> test it there, once the PR is ready.
> Ondrej
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
More information about the NumPy-Discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-September/064019.html","timestamp":"2014-04-19T12:33:19Z","content_type":null,"content_length":"7137","record_id":"<urn:uuid:c2fe8068-0866-446c-8ff7-dc0f4128ee58>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Existence and Uniqueness of the Positive Definite Solution for the Matrix Equation
Abstract and Applied Analysis
Volume 2013 (2013), Article ID 216035, 4 pages
Research Article
Existence and Uniqueness of the Positive Definite Solution for the Matrix Equation
Department of Mathematics, Heze University, Heze, Shandong 274015, China
Received 15 May 2013; Revised 23 June 2013; Accepted 4 July 2013
Academic Editor: Vejdi I. Hasanov
Copyright © 2013 Dongjie Gao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
|
{"url":"http://www.hindawi.com/journals/aaa/2013/216035/ref/","timestamp":"2014-04-18T13:40:41Z","content_type":null,"content_length":"38284","record_id":"<urn:uuid:af96b102-a017-4a4b-b88b-1797e2679bf4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sumneytown Math Tutor
Find a Sumneytown Math Tutor
...I prefer the Phonics/Linguistic Method, so, unless parents have other materials that they want used, I always use an old Lippincott primer I have for children to practice phonics, along with
home-made flash cards. I find phonology interesting and I have done some study of that, which has given m...
16 Subjects: including algebra 1, algebra 2, English, geometry
...Currently, she is pursuing her Master's in Physician Assistance. Her goal is to practice pediatric medicine in inner city poverty stricken communities. Latoya has been heavily involved in
advocating education and excellence in young people through various avenues.
13 Subjects: including algebra 2, trigonometry, psychology, biochemistry
...Whether you need help with your college or graduate school applications, improving your SAT grades, or writing your graduate thesis, I can provide the support necessary to succeed.As an adjunct
professor at Gwynedd Mercy University, I taught a course in Public Speaking. I can help with controlli...
15 Subjects: including ACT Math, English, reading, grammar
I am a youthful high school Latin teacher. I have been tutoring both Latin & Math to high school students for the past six years. I hold a teaching certificate for Latin, Mathematics, and English,
and I am in the finishing stages of my master's program at Villanova.
7 Subjects: including geometry, SAT math, differential equations, linear algebra
...I am looking forward to working with you! Thank you for your timeI have personally taught several classes in Calculus AB and BC, where differential equations is a single part of that course. I
have also tutored students (not through WyzAnt, but through other programs) Calculus and Differential Equations.
35 Subjects: including prealgebra, anatomy, botany, nursing
|
{"url":"http://www.purplemath.com/Sumneytown_Math_tutors.php","timestamp":"2014-04-19T07:15:51Z","content_type":null,"content_length":"23726","record_id":"<urn:uuid:814744ac-0023-48cc-a581-3de204342e57>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An Artificial Intelligence Approach for Modeling and Prediction of Water Diffusion Inside a Carbon Nanotube
Modeling of water flow in carbon nanotubes is still a challenge for the classic models of fluid dynamics. In this investigation, an adaptive-network-based fuzzy inference system (ANFIS) is presented
to solve this problem. The proposed ANFIS approach can construct an input–output mapping based on both human knowledge in the form of fuzzy if-then rules and stipulated input–output data pairs. Good
performance of the designed ANFIS ensures its capability as a promising tool for modeling and prediction of fluid flow at nanoscale where the continuum models of fluid dynamics tend to break down.
Carbon nanotube; Water diffusion; Artificial intelligence; Modeling and prediction
Carbon nanotubes (CNTs) have drawn much attention, not only for their exceptional mechanical and electrical properties, but also for their application in the new emerging area of nanofluidics since
they can transport fluids at an extraordinarily fast flow rate. This property has diverse applications, such as in charge storage devices [1], membrane industry [2], drug-delivery devices [3], and
understanding the transport processes in biological channels [4].
In the past few years, a significant number of works have been devoted to the study of fluid flow through CNTs [5-8]. Fast pressure-driven flow of fluids in membranes of CNTs 1.6 and 7 nm in diameter
has been measured by Majumder et al. [5] and Holt et al. [6], respectively. They indicated measured values of 2 to 5 orders of magnitude larger than those calculated by the continuum-based no-slip
Hagen–Poiseuille equation. Recently, Thomas et al. [7] have re-evaluated water transport through CNTs having diameters ranging from 1.66 to 4.99 nm. They found that the measured flow rates exceeded
those predicted by the no-slip Hagen–Poiseuille relation. Interestingly, new experimental results for the flow of water, ethanol, and decane through carbon nanopipes with relatively large inner
diameters (i.e., 43 ± 3 nm) have demonstrated that transport is enhanced up to 45 times that of theoretical predictions [8]. Extraordinarily fast flow rate of fluids in nanopores other than CNTs has
also been observed [9,10]. As can be seen, the classic models of fluid dynamics start to break down while we diminish the working length scale. As a result, new approaches for modeling of fluid flow
at nanoscale dimensions are needed. The present work is an attempt to introduce an alternative methodology, namely, the fuzzy logic approach, to explain the behavior of fluids at nanoscale. As a case
study, we applied this method for modeling and prediction of water diffusion inside a CNT (6,6).
Modeling of phenomena based on conventional mathematical tools (e.g., differential equations) is not appropriate for dealing with ill-defined and uncertain problems. By contrast, a fuzzy logic
approach employing fuzzy if-then rules can model the qualitative aspects of human knowledge and reasoning processes without employing precise quantitative analyses. The fuzzy modeling or fuzzy
identification, first explored systematically by Takagi et al. [11]. The aim of this paper is to suggest an architecture called adaptive-network-based fuzzy inference system (ANFIS) for modeling and
prediction of fluid flow at nanoscale dimensions since it has been suggested to be universal approximator of any continuous function [12]. Furthermore, it has been shown that the obtained results by
the ANFIS approach in estimation of non-linear functions outperform the auto-regressive models and other connectionist approaches, such as neural networks [12]. ANFIS can serve as a basis for
constructing a set of fuzzy if-then rules with appropriate membership functions to generate the stipulated input–output pairs. This architecture was proposed by Jang in 1991 [13,14]. More information
regarding the architecture and the performance of ANFIS can be found in the literature [12]. In what follows, first, performance of an MD simulation of water diffusion through a CNT (6,6) is
described. An ANFIS technique is then employed for modeling and prediction of this phenomenon. Finally, some benefits of the designed ANFIS are detailed.
Model and MD Simulation
To show the diffusion of water molecules in CNTs, a CNT (6,6) (13.4 Å long and 8.1 Å in diameter) was solvated in a cubic box (Box length L = 32.06 Å) of 1,034 TIP3P water molecules [15]. The MD
simulation was performed using Discover, which is a molecular modeling software package implemented with Materials Studio 4.2 [16]. In this investigation, the force field used to model the
interatomic interactions was the consistent-valence force field (CVFF). The MD simulation was done at the NVT statistical ensemble (i.e., a constant number of particles, volume and temperature). The
temperature was kept constant at 300 K using a Nosé-Hoover thermostat [17,18]. The cell multipole method [19] was employed for the non-bond summation method. A time step of 2 fs was used and
structures were sampled every 1 ps. The overall time of the simulation was set to be 50 ns.
Initially, the CNT was in the center of the bath of water molecules. However, the nanotube was free and could be displaced. During the simulation time (i.e., 50 ns), it was observed that water
molecules penetrated into the CNT and passed through it in such a way that the CNT remained occupied by an average of about five water molecules during the whole period of 50 ns. During the
simulation time, an average of about 17 water molecules per nanosecond entered the nanotube and left the other side. It yields an average volumetric flow rate of about 50.4 × 10^−14 cm^3 s^−1, which
is comparable to the reported water diffusion rate through a channel of the transmembrane protein aquaporin-1 [20]. As a result, our MD simulation showed good agreement with experimental results.
Results and Discussion
Now, let us define the flow rate of water molecules as the number of water molecules entering the CNT on one side and leaving the other side per nanosecond. Using the simulation described earlier,
the flow rate of water molecules as a function of time was recorded. This correlation is shown in Fig. 1. In the following section, the applicability of the ANFIS approach for modeling and prediction
of the flow rate of water molecules as a function of time, which is demonstrated in Fig. 1, is put to test. In other words, we attempted to find the unknown function Y = F(X) with the aid of the
ANFIS approach, where Y is defined as the values of the flow rate of water molecules and X stands for the corresponding time values. The Fuzzy Logic Toolbox embedded in MATLAB 7.0 [21] is used for
modeling and prediction of the flow rate of water molecules as a function of time. The input layer of the ANFIS consists of the time while the output layer of the ANFIS corresponds to the flow rate.
The known values of the function up to the point Y = X are used to predict the value at some point in the future Y = X + P. The standard method for this type of prediction is to create a mapping from
D points of the function spaced Δ apart, that is, Y(X + P). To this end, all recorded data set was divided into 6 parts. The first 5 parts of pairs (i.e., training data set) were used for training
the ANFIS while the remaining 1 part of pairs (i.e., checking data set) were used for validating the identified model. Here, the value Δ was selected to be 1. Therefore, we had 4 inputs. In order to
achieve an effective ANFIS, all data sets are needed to be normally preprocessed using an appropriate transformation method. It has been reported that the ANFIS systems trained on transformed data
sets achieve better performance and faster convergence in general. There are many transformation procedures that can be applied to a data set [22]. In this study, the all data sets (i.e., training
and checking data sets) were individually transformed with the log function, which has the following equation [23]:
where z[trn] and z[chk] are the transformed values of the training and checking data sets, a is an arbitrary constant (Here a = 4), and b is set to 1 to avoid the entry of zero in the log functions.
The number of membership functions assigned to each input of the ANFIS was arbitrarily set to 2, therefore the rule number is 16. The ANFIS used here contains a total of 104 fitting parameters, of
which 24 are premise parameters and 80 are consequent parameters. Notice that the number of membership functions, which was chosen to be 2 is the maximum number to obtain the maximum performance of
the designed ANFIS since we should take care of overtraining. In a sense, the so-called “overtraining” term indicates that a given ANFIS adapts itself too well to the training data set in such a way
that further improvement based on the training data set not only impairs more accurate predictions of the checking data set but may also have adverse effects on those predictions. Note that in the
case of overtraining, usually the total number of fitting parameters in the ANFIS is more than the number of pairs in the training data set. The root mean squared error (RMSE) was used in order to
assess the accuracy of the actual output in comparison with the one predicted by the ANFIS. Indeed, this statistical parameter does measure the correlation between the target values (i.e., the flow
rate of water molecules resulting from the MD simulation) and the corresponding values predicted by the ANFIS. Note that for a perfect correlation, RMSE should be 0. After 200 epochs, we had RMSE
[trn], and RMSE[chk] equal zero (note that the designed ANFIS yielded RMSE[trn] = 0.4289 ns^−1 and RMSE[chk] = 0.4840 ns^−1. Since these values have not a real physical meaning, they were reported
with only one significant digit). It should be noted that the so-called “epoch” term means the presentation of each data set to the ANFIS and the receipt of output values. After 200 epochs, we
obtained 200 RMSE values for both training and checking data sets. In order to check whether the difference between two RMSE values is significant, we did use the t-test. As you know, the t-test
assesses whether the means of two data sets are statistically different from each other. The result showed that with a probability more than 95%, the difference between two RMSE values is not
significant. During repeated epochs, it was observed that the RMSE monotonically declines for both data sets (i.e., training, and checking data sets). Eventually, it reaches a value less than 0.5
after 200 epochs. The correlated function, namely, Y = F(X), using the ANFIS approach is also showed in Fig. 1. As can be seen, the designed ANFIS has a very good performance to depict the behavior
of water inside the CNT. In addition, since both RMSEs are very small, we conclude that the proposed ANFIS has captured the essential components of the underlying dynamics. In other words, the
designed ANFIS can successfully model and predict the flow rate of water molecules through the CNT as a function of time, which has been derived by the MD simulation. The resulting 16 fuzzy if-then
rules are listed below.
If input1 is MF1 and input2 is MF1 and input3 is MF1 and input4 is MF1, then output =
If input1 is MF1 and input2 is MF1 and input3 is MF1 and input4 is MF2, then output =
If input1 is MF1 and input2 is MF1 and input3 is MF2 and input4 is MF1, then output =
If input1 is MF1 and input2 is MF1 and input3 is MF2 and input4 is MF2, then output =
If input1 is MF1 and input2 is MF2 and input3 is MF1 and input4 is MF1, then output =
If input1 is MF1 and input2 is MF2 and input3 is MF1 and input4 is MF2, then output =
If input1 is MF1 and input2 is MF2 and input3 is MF2 and input4 is MF1, then output =
If input1 is MF1 and input2 is MF2 and input3 is MF2 and input4 is MF2, then output =
If input1 is MF2 and input2 is MF1 and input3 is MF1 and input4 is MF1, then output =
If input1 is MF2 and input2 is MF1 and input3 is MF1 and input4 is MF2, then output =
If input1 is MF2 and input2 is MF1 and input3 is MF2 and input4 is MF1, then output =
If input1 is MF2 and input2 is MF1 and input3 is MF2 and input4 is MF2, then output =
If input1 is MF2 and input2 is MF2 and input3 is MF1 and input4 is MF1, then output =
If input1 is MF2 and input2 is MF2 and input3 is MF1 and input4 is MF2, then output =
If input1 is MF2 and input2 is MF2 and input3 is MF2 and input4 is MF1, then output =
If input1 is MF2 and input2 is MF2 and input3 is MF2 and input4 is MF2, then output =
Figure 1. Plot of the flow rate of water molecules through a CNT (6,6) as a function of time resulting from the molecular dynamics (MD) simulation
where ith row of the following consequent parameter matrixC:
The linguistic labels MF1[i]and MF2[i](i = l to 4) are defined by the bell membership function (with different parametersa,b, andc):
Table 1also lists the linguistic labels and the corresponding consequent parameters in Eq. 1. Each of these parameters has a physical meaning:cdetermines the center of the membership function,ais the
half width of the membership function andb(together witha) controls the slopes at the crossover points (where the membership function value is 0.5). An example of the bell membership function is
showed in Fig. 2.
Table 1. Membership functions and consequent parameters of the designed adaptive-network-based fuzzy inference system (ANFIS)
Figure 2. An example of the bell membership function (Here,a = 2,b = 4, andc = 6)
Indeed, in this case study, the MD simulation can explain the observed phenomenon (i.e., the water flow inside the CNT). However, it would be so difficult to explain and interpret this phenomenon, if
we consider other parameters such as temperature, pressure, shape, length, etc., as a variable parameter. In the latter case, the MD simulation is just able to draw a picture of the phenomenon and it
is not able to tell us about the effect of each parameter on the phenomenon, the effect of combined parameters (e.g., pressure and temperature together), etc. On the other hand, modeling of this
phenomenon using the ANFIS approach would provide us invaluable information in the analysis of the process (e.g., the underlying physical relationships among influencing parameters) and therefore
design of new applications. Therefore, this methodology holds potential of becoming a useful tool in modeling and predicting the behavior of molecular flows through the CNTs (or generally nanoscale
dimensions). In addition, the proposed ANFIS has the following advantages:
1) If the human expertise is not available, we can still set up intuitively reasonable initial membership functions and start the learning process to generate a set of fuzzy if-then rules to
approximate a desired data set.
2) An ANFIS is able to learn and therefore generalization. Generalization refers to the production by the ANFIS of reasonable outputs for inputs not encountered during the training process. The
generalization ability comes from this fact that while the training process is occurring, the checking data set is used to assess the generalization ability of the ANFIS. As a result, a well-designed
ANFIS is capable of producing reliable output(s) for unseen input(s). Just imagine that the ANFIS approach could give us the reliable results for unseen cases, which are so difficult to perform by
the computer simulations and to do by experiments. In addition, in those cases, which we can perform computer simulations and/or experiments, using a designed ANFIS is much faster and easier than
doing computer simulations and/or experiments since doing a corresponding experimental work is a difficult task and takes much time and cost and using computer simulations also take much time in the
order of several days with the aid of a supercomputer. However, a designed ANFIS can be run on a normal personal computer.
3) An ANFIS ignores a relatively large amount of noise or variations in solving problems while it derives principle rules of a given problem. Statistical errors in reported data either from
experiments or from computer simulations can always be expected. Generally, experimentally measured values include statistical errors since by repeating an experiment the same result is not achieved.
Interestingly, such errors can also be observed in computer simulations [24]. Since the obtained results using computer simulations and/or experiments bear statistical errors, we should repeat those
tasks several times to ensure the accuracy of the results and therefore they are time and cost consuming. As a result, a model describing a phenomenon, which is capable of removing such undesirable
errors, is needed.
4) A designed ANFIS is able to predict the reasonable output(s) for future unseen data sets. In other words, the predictive ability of a designed ANFIS is not restricted to the training data set.
This property could be an asset in modeling of fluid flow at nanoscale dimensions since experimental reports on the dynamics of fluids inside nanotubes are less abundant than the static case. The
main reason is the high complexity of experimental setups to perform such experiments. Therefore, available experiments have been performed, mostly on multi-wall nanotubes. On the other hand, a vast
literature exists on computer simulations of liquid flow, liquid molecular structure under confinement, and interaction of liquid with the tube walls. Computing power requirements have, thus far,
limited these findings to small single-wall nanotubes, creating a gap in terms of nanotube size between experimental and simulation results [25]. As a result, computer simulations should be extended
beyond these computational limits. This can only be achieved with new algorithms that allow for the coupling of different simulation methods on different scales, both in time and space. The need to
make such extrapolations has practical applications, such as in determination of the osmotic permeability of nanochannels [26].
In summary, by employing an ANFIS approach, we succeeded to derive fuzzy if-then rules to describe the input–output behavior of water diffusing through a nanochannel. Some advantages of this approach
over conventional methods for modeling and prediction of this phenomenon were mentioned. The application shown here is only the tip of an iceberg. We hope that this tool could provide us new insights
in the field of nanofluidics where the continuum models of fluid dynamics tend to break down.
The authors sincerely appreciate the staff of the Center for Computational Materials Science of the Institute for Materials Research (IMR), Tohoku University, for its continuous support of the
supercomputing facilities. This work was supported (in part) by the Japan Society for the Promotion of Science (JSPS).
1. Phys. Rev. Lett.. 2003, 90:195503.
Bibcode number [2003PhRvL..90s5503M]
Publisher Full Text
2. Small. 2007, 3:1996.
COI number [1:CAS:528:DC%2BD2sXhsVOrtrjE]
Publisher Full Text
3. Curr. Med. Chem.. 2006, 13:1789.
COI number [1:CAS:528:DC%2BD28XmtVaisL0%3D]
Publisher Full Text
4. J. Am. Chem. Soc.. 2005, 127:7166.
COI number [1:CAS:528:DC%2BD2MXjtl2ntb0%3D]
Publisher Full Text
5. Majumder M, Chopra N, Andrews R, Hinds BJ:
Nature. 2005, 438:44.
COI number [1:CAS:528:DC%2BD2MXhtFOjur%2FI]; Bibcode number [2005Natur.438...44M]
Publisher Full Text
6. Holt JK, Park HG, Wang Y, Stadermann M, Artyukhin AB, Grigoropoulos CP, Noy A, Bakajin O:
Science. 2006, 312:1034.
COI number [1:CAS:528:DC%2BD28Xks1Wqtbg%3D]; Bibcode number [2006Sci...312.1034H]
Publisher Full Text
7. Nano Lett.. 2008, 8:2788.
COI number [1:CAS:528:DC%2BD1cXptVOmtLw%3D]; Bibcode number [2008NanoL...8.2788T]
Publisher Full Text
8. Withby M, Cagnon L, Thanou M, Quirke N:
Nano Lett.. 2008, 8:2632.
Bibcode number [2008NanoL...8.2632W]
Publisher Full Text
9. Nano Lett.. 2008, 8:452.
COI number [1:CAS:528:DC%2BD1cXks12msw%3D%3D]; Bibcode number [2008NanoL...8..452J]
Publisher Full Text
10. J. Am. Chem. Soc.. 2007, 129:2748.
COI number [1:CAS:528:DC%2BD2sXhvVWjurY%3D]
Publisher Full Text
11. Roger Jang J-S, Sun C-T, Mizutani E: Neuro-fuzzy and soft computing: a computational approach to learning and machine intelligence. Prentice Hall, Upper Saddle River, NJ; 1997.
12. Jorgensen WL, Chandrasekhar J, Madura JD, Impey RW, Klein ML:
J. Chem. Phys.. 1983, 79:926.
COI number [1:CAS:528:DyaL3sXksF2htL4%3D]; Bibcode number [1983JChPh..79..926J]
Publisher Full Text
13. J. Chem. Phys.. 1984, 81:511.
Bibcode number [1984JChPh..81..511N]
Publisher Full Text
14. Phys. Rev. A.. 1986, 34:2499.
Bibcode number [1986PhRvA..34.2499H]
Publisher Full Text
15. Zheng J, Balasundaram R, Gehrke SH, Heffelfinger GS, Goddard WA, Jiang S:
J. Chem. Phys.. 2003, 118:5347.
COI number [1:CAS:528:DC%2BD3sXhvV2qt74%3D]; Bibcode number [2003JChPh.118.5347Z]
Publisher Full Text
16. Zeidel ML, Ambudkar SV, Smith BL, Agre P:
Biochemistry. 1992, 31:7436.
COI number [1:CAS:528:DyaK38XltVKnuro%3D]
Publisher Full Text
17. Salas JD: Analysis and modeling of hydrologic time series. In Handbook of hydrology. Edited by Maidment DR. McGraw-Hill, New York; 1993.
18. Aqil M, Kita I, Yano A, Nishiyama S:
J. Hydrol. (Amst). 2007, 337:22.
Bibcode number [2007JHyd..337...22A]
Publisher Full Text
19. Frenkel D, Smit B: Understanding molecular simulation: from algorithms to applications. Academic Press, London; 2002.
20. Microfluid. Nanofluid.. 2008, 5:289.
COI number [1:CAS:528:DC%2BD1cXhtFSks7bJ]
Publisher Full Text
21. Zhu F, Tajkhorshid E, Schulten K:
Phys. Rev. Lett.. 2004, 93:224501.
Bibcode number [2004PhRvL..93v4501Z]
Publisher Full Text
Sign up to receive new article alerts from Nanoscale Research Letters
|
{"url":"http://www.nanoscalereslett.com/content/4/9/1054?fmt_view=mobile","timestamp":"2014-04-19T12:50:24Z","content_type":null,"content_length":"89779","record_id":"<urn:uuid:943a99c2-8d49-47b3-9cd5-94d524202012>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Finding standard deviation or error from normalized data.
Hello All,
I am trying to figure out how to find the standard deviation or error in sets of data. So lets say I have sets x1, x2, x3 and x4 with various values and I found the average and standard deviations
for it. Now I have to take the averages, lets say a1, a2, a3, a4, and normalize a2,a3,a4 to a1. Now how do I find the standard deviation or error in the normalized sets? Forgive my ignorance, but I
am suppose to do this for a project and I have never taken any stats course before..
|
{"url":"http://www.physicsforums.com/showthread.php?s=abaaeb5f3b3e2d0778193ccf2708bf5b&p=3527367","timestamp":"2014-04-19T19:39:03Z","content_type":null,"content_length":"23631","record_id":"<urn:uuid:5a8dd2e9-99c2-49a1-bb12-2adcc1efe6cd>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A Computer Evolution in Teaching Undergraduate Time Series.
Erin M. Hodgess
University of Houston - Downtown
Journal of Statistics Education Volume 12, Number 3 (2004), www.amstat.org/publications/jse/v12n3/hodgess.html
Copyright © 2004 by Erin Hodgess, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors
and advance notification of the editor.
Key Words: Statistical packages; Time series analysis
In teaching undergraduate time series courses, we have used a mixture of various statistical packages. We have finally been able to teach all of the applied concepts within one statistical package; R
. This article describes the process that we use to conduct a thorough analysis of a time series. An example with a data set is provided. We compare these results to an identical analysis performed
on Minitab.
1. Introduction
In a typical undergraduate time series class, we spend a considerable amount of time discussing analysis via multiplicative decomposition and Box-Jenkins models. Over the years, we have used various
computer packages and programs to illustrate the analysis cycle. Until recently, we could never do all of these calculations in one package. We might use a combination of SAS and FORTRAN programs, or
a combination of Excel, FORTRAN, and SAS. Now we can use just one statistical package, R, to produce all of the preliminary plots, transformations, decompositions, Box-Jenkins models, and forecasts.
The R langauage is is extension of a language called S, which was developed in the mid 1980s at then-Bell Labs. S was originally developed for use on UNIX mainframe systems. The language is similar
to the C programming language. As S evolved, there became a need for a version for personal computers. The Mathsoft Corporation (now Insightful Corporation) produced Windows and Mac based versions
called S-Plus. Finally, Ihaka and Gentleman (1996) put together a free ware version of S/S-Plus that they called R.
The help facilities within R are impressive. There are manuals available online, along with documentation for functions (R Development Core Team 2003). One of the manuals is ``An Introduction to R" (
Venables, Smith, and the R Core Development Team 2003) which has a tutorial. Finally, there is a search engine to find existing functions. We use that feature quite often when looking for a function
name. This feature allows a user to enter a phrase into the engine. Options are revealed that contain the phrase.
There are many excellent books for S and S-Plus, and nearly all of these are appropriate for R users. For beginning users, the original S book by Becker, Chambers and Wilks (1988) is still quite
useful. Also, the books by Spector (1994), and Krause and Olsen (2000) contain tutorials for novices. The classic Modern Applied Statistics with S, by Venables and Ripley (2002) shows both beginners
and seasoned users needed skills in tandem with statistical constructs. There are many other books available for advanced programming and statistical modeling, such as Venables and Ripley (2000);
Chambers and Hastie (1992); Chambers (1998), Pinheiro and Bates (2000), and Zivot and Wang (2002).
There are now a few books developed in conjunction with R. Dalgaard (2002) has a book for introductory statistics, and he does not assume any familiarity with the language. Fox (2002) has written a
book for applied regression with both S-Plus and R.
The major advantages of R are quite easy to see. The first, and most meaningful, is that R is free. It can be downloaded from the Internet at any time from several locations around the world. The
site that we typically use is lib.stat.cmu.edu/R/CRAN, which is at the Statistics Department at Carnegie Mellon University. There is a list of mirror sites at this location.
Next, R has a fast learning curve. We have used this package in our time series class as well as other upper level classes, and the students acquire the skills needed to manipulate R readily. We
present a tutorial during the first class session, and assign problems. These problems let the student learn about existing capabilities initially. As the students learn the R nomenclature, they are
also given functions to write themselves. After one or two sessions, they feel very comfortable with this package. They learn to write their own functions in order to eliminate repetitive tasks (and
to impress their classmates).
Finally, R has impressive graphics capabilities. Not only do the graphs appear beautifully on the screen, they are easily saved in the proper formats for reports or Power Point presentations.
The students can use these functions to do a full analysis on their own data set. Part of the course grade depends on an extensive data analysis project. The students must select a topic, investigate
its history, and produce an analysis of past behavior along with forecasts of future behavior. Often, students select stock price data for a particular company. They must do research on that company,
while considering world events during the same time frame as the series. The students are frequently surprised at the role that a company might have played in the global market over time. Finally,
they run the time series procedures and make forecasts. They can then determine which method yielded the best forecasts. Students have an appreciation of the model building process on real data,
combined with user-friendly software.
In this paper, we will show the necessary processes in the development of a good time series analysis. Section 2 is devoted to the data preparation and input stage. Also, we stress the need for
graphics. In Section 3, we discuss transformations and the commands for those transformations. Section 4 shows the multiplicative forecasting procedure. In Section 5, we present the Box-Jenkins
analysis. We use a real data set for our analysis in Section 6. In Sections 2 - 6, all of our material is shown via the R program. We present a comparison between R and Minitab in Section 7. We would
like to demonstrate that for undergraduate time series, R will be the package of choice. R gives the instructor options for more sophisticated material. Finally, in Section 8, we finish with a brief
2. Initial Data Set-Up and Plots
When students bring their data sets, they enter them as numeric vectors, but then they change them to time series in the following fashion:
> y.ts <- ts(y,start=c(1997,1),freq=12)
This command sets up a time series, y.ts, which is a monthly series that begins in January, 1997. Data must be converted to a time series object for some of the functions.
We used most of the data for the model building purposes, but left a few data points out at the end of the series for comparison for forecasting. A common example is the selection of a data set which
ran from January, 1990, to December, 2001. Then January, February, and March were set aside as ``actuals" for the forecast comparisons.
Once a time series is ready, we need to see a plot of the data. But that command is quite simple:
> plot(y.ts)
This will produce a time series plot with the observed values of y.ts on the vertical axis, and the time component on the horizontal axis. By converting to a time series object, the plot command
extracts the time component automatically and produces a line chart.
With tremendous computing speed so freely available, students are occasionally tempted to omit the plotting step. However, we insist on viewing a plot of the data. Often, simply looking at the graph
of the observed values can give insight in terms of historical events.
We saw an interesting example from looking at the Bank of America Monthly Closing Stock Price, as shown below.
Figure 1
Figure 1. Bank of America monthly closing stock price
The recession in the early 1990s appears immediately, in which the business cycle trough occurs in March 1991. The crash of the Asian markets and the financial difficulties in the US markets in late
1997 and 1998 are readily apparent. The recent economic slowdown with its fluctuations finishes the picture. Students were then motivated to investigate some of these events, including Russia's
default on its international debt, and incorporate them into their own projects.
3. Transformations
We are concerned with nonstationarity in time series, along with variance stabilization. We use the method as described in Box and Cox (1964):
At first glance, this formula seems quite formidable. Originally, we had our own function to derive this. But during the last semester, we found a more efficient function from Venables and Ripley
(2002) in the MASS library which is a part of the R package. Here are the commands:
> #This command attaches the MASS library, which contains the Box-Cox command
> library(MASS)
> #We set up a sequence
> t1 <- 1:length(y.ts)
> #We create a linear model
> y.lm <- lm(y.ts~t1)
> #We run the Box-Cox process
> boxcox(y.lm,plotit=T)
We first load the MASS library. Then we must construct a linear model object. This object is essentially a least squares regression model:
where the boxcox function calculates the log-likelihood of the residual sum of squares (part of the variance estimate) for various values of the
The boxcox command as shown above uses -2
Once the trans1, to complete this calculation. An example might appear as:
> y16.ts <- trans1(y.ts,0.16)
The y16.ts is a time series object which contains the transformed values.
4. Multiplicative Decomposition and Forecasting
Once the data have been plotted and transformed, we can begin the modeling process in depth. For the multiplicative decomposition method, we have the following equation:
where y[t] is the observed value of the time series at time t, T[t] is the trend factor at time t, S[t] is the seasonal factor at time t, C[t] is the cyclical factor at time t, and I[t] is the
irregular factor at time t. This method is described in some detail in Bowerman and O'Connell (1993) and Kvanli, Pavur, and Guynes (2000).
This process is designed for annual, quarterly, and monthly data sets. There must be a minimum of 3 years of data, regardless of the frequency. If annual data is used, the seasonal component is set
to 1 for all values. When the seasonal component is requested, and the data frequency is not 1, the number of data points must be a multiple of the frequency. For instance, if monthly data is used,
then the total number of data points must be a multiple of 12. The data can start at any time of the year, but the number of data points must maintain the annual integrity. However, if the seasonal
component is not required, then the number of data points is arbitrary.
We have written our own function for the decomposition process, decom1. Here again, we exploit the time series object within the function. The only required input to the function is the name of the
time series object. An optional input is an indication if the seasonal component is required. The default value is 1, which means that the seasonal values are requested.
The function returns a list of outputs. These include the deseasonalized values of the series, the trend values, the seasonal values, the cyclical values, and the irregular values. For monthly and
quarterly data, an abbreviated list of the seasonal factors is produced as well. Also, the slope and intercept terms are shown as part of the trend calculation. Finally, if forecasts are requested,
the function produces predicted values, and 95% prediction intervals for those values. We use the method suggested by Bowerman and O'Connell (1993) to produce the prediction intervals. If we had a
time series, y16.ts, and we needed a 3 period forecast, the function would appear as:
> decom1(y16.ts,fore1=3)
Once the forecast values have been obtained, we can restore them to the original order of magnitude by using the trans1 function again. If the transformation parameter was 0.16, we can enter (1/.16),
to return the forecasts and the upper and lower prediction levels.
Finally, we can determine the accuracy of the forecast. We created a function, forerr, that uses the actual data and the forecast values to calculate the mean absolute devation and the mean square
error. Statements for this function appear in the Appendix. The command would appear as:
> forerr(yact.ts, y16u.ts$fore)
We will repeat this process with the Box-Jenkins method, and ascertain if one method outperforms the other.
5. The Box-Jenkins Approach and Forecasting
Many books have been written on the elegant Box-Jenkins (Box and Jenkins 1976) methodology and these procedures are wonderful to teach. With the aid of powerful computers, the equations can be
appreciated by all students. We will consider the basic autoregressive integrated moving average model:
where p, (1-B)^d is the differencing operator, q, and the a[t] are a Gaussian white noise sequence with mean zero and a variance y[t] are the series values. The B is the backshift operator such that
B^j y[t] = y[t-j]. We expand the autoregressive polynomial:
such that all of the roots of the polynomial are outside of the unit circle. Similarly, we can expand the moving average polynomial such that:
where all of the roots of the moving average polynomial are outside of the unit circle. We also assume that the autoregressive and moving average polynomials have no roots in common. Finally, the (1
- B)^d polynomial reflects the order of differencing needed to achieve stationarity. Such an expression is referred to as an autoregressive integrated moving average model, or ARIMA(p,d,q) model.
There is a library in R called ts which contains the arima function that will estimate the parameters of the ARIMA(p,d,q) model. The students test several models, and select the model which has the
minimum value of the Akaike Information Criterion (AIC). In real world data, we must be concerned with problems of stationarity and differencing. We experiment with several levels of differencing in
order to get seasonal values for our autoregressive parameters. For an ARIMA(1,1,1) model, the command would appear as:
> #We attach the ts(Time Series) library
> library(ts)
> #This is the command for ARIMA models
> arima(y16.ts,order=c(1,1,1))
As we found in the previous section, seasonal factors can play a major role in the model building process. Fortunately, there are seasonal (SARIMA) models that can be constructed:
with the usual assumption about roots outside of the unit circle. The (1- B^s)^D is the order of seasonal differencing. The notation for these models is ARIMA(p,d,q) x (P,D,Q)[s], in which P is the
order of the seasonal AR polynomial, D is the order of seasonal differencing, Q is the order of the seasonal MA polynomial, and s is the seasonal period. We found these models to be extremely
effective in our learning process. For a test ARIMA(1,1,1) x (1,1,1)[12] model, the command would be:
> arima(y16.ts,order=c(1,1,1),seasonal=list(order=c(1,1,1),period=12))
Once the appropriate model has been selected, an object must be saved to pass to the prediction function. If the previous model is selected, the command would appear as:
> y16.lm <- arima(y16.ts,order=c(1,1,1),seasonal=list(order=c(1,1,1),period=12))
Next, R has its own prediction function for time series objects, which is predict.Arima. Here is a sample of the command, with a 3 period forecast:
> predict.Arima(y16.lm,n.ahead=3)
We calculate our own intervals, as the predict.Arima command returns the predictions and the standard errors. These intervals are merely se for each predicted value, where se is the standard error.
Finally, we transform the predicted data back to its original form with the help of trans1, and run the error comparison via forerr. The students can then make an informed decision on which model
provides the most meaningful forecasts.
6. An Example
We downloaded closing stock prices from the Home Depot Corporation. These data can be found at www.yahoo.com. We have monthly data from January, 1985 until December, 2001. We kept the January through
March 2002 data separate for forecasting comparisons. Here are the set-up commands:
> #We take a data vector and make it a time series object
> y.ts <- ts(hd1,start=c(1985,1),frequency=12)
> #We put the Jan - Mar data into its own time series object
> ya.ts <- ts(c(49.98,49.89,48.55),start=c(2002,1),frequency=12)
We will use the ya.ts later in the process. The y.ts series is the necessary time series object. We now consider a plot of the historical data:
> plot(y.ts,ylab="$",main="Home Depot Monthly\nClosing Stock Price")
Figure 2
Figure 2. Home Depot monthly clossing stock price.
We see the not unexpected patterns; the fluctuations of the market in the late 1990s, and the serious downturn in the late part of 2001. Now we must check to determine if a transformation is needed:
> library(MASS)
> t1 <- 1:length(y.ts)
> yhd.lm <- lm(y.ts ~ t1)
> boxcox(yhd.lm,plotit=T)
Figure 3
Figure 3. Finding
After a bit more fine-tuning, we find that the
> #After we determine the proper lambda value, we transform the series
> y16.ts <- trans1(y.ts,0.16)
Now we can turn to the multiplicative decomposition method. We return to the decom1 method:
> #We run the mult. decomposition function on the transformed data
> y16d <- decom1(y16.ts,fore1=3)
> #These are the predictions and confidence intervals for the transformed data
> y16d$pred
1.920953 1.933335 1.948505
> y16d$predlow
1.790454 1.802817 1.817969
> y16d$predup
2.051452 2.063852 2.079041
We only reproduced the prediction values and intervals here. More of the y16d object appears in the Appendix. We will reverse the transformation procedure to return the forecast values to the correct
order of magnitude:
> #We return our forecasts back to the original order
> y16u <- trans1(y16d$pred,(1/0.16))
> y16u
59.15335 61.57712 64.65985
> #We return the intervals back
> trans1(y16d$predlow,(1/0.16)
38.10832 39.78309 41.91942
> trans1(y16d$predup,(1/0.16))
89.20372 92.62781 96.97159
Finally, we would like to have some measures of accuracy for our forecasts. We will use the forerr function as follows:
> #We calculate MAD, MSE on Forecasts
> forerr(ya.ts,y16u)
[1] 12.32344
[1] 160.0887
We will compare these measures to those produced by the Box-Jenkins method to determine the best fit for this particular data set.
Here is a table with the actual values, the forecast values, and the intervals for January through March of 2002. As we can see, the actual values are decreasing, while the forecasts are increasing.
The intervals are quite wide, and would not be of great value to a trader.
Table 1. Multiplicative Decomposition Forecasts
Date Actual Forecast Lower Bound Upper Bound
Jan 49.98 59.15 38.11 89.20
Feb 49.89 61.58 39.78 92.63
Mar 48.55 64.66 41.92 96.97
We always begin the Box-Jenkins method by checking for an AR(1) model:
> library(ts)
> #We run different ARIMA/SARIMA models on the transformed data
> #We select the best model based on the minimum AIC
> arima(y16.ts,order=c(1,0,0))
> arima(y16.ts,order=c(1,1,0))
> arima(y16.ts,order=c(1,1,1))
> arima(y16.ts,order=c(2,1,0))
> arima(y16.ts,order=c(2,1,1))
> y161.lm <- arima(y16.ts,order=c(1,1,1))
> y161s.lm <- arima(y16.ts,order=c(1,1,1),seasonal=list(order=c(1,1,1),period=12))
We sometimes use both the ARIMA and SARIMA models, simply for extra practice. The models that we have obtained are ARIMA(1,1,1) and ARIMA(1,1,1) x (1,1,1)[12], respectively. The equations would be:
We obtain forecasts from each model as:
> y161p <- predict.Arima(y161.lm,n.ahead=3)
> #Regular ARIMA - calculate Forecast and se for transformed data
> y161p
Jan Feb Mar
2002 1.871792 1.874719 1.872248
Jan Feb Mar
2002 0.02069411 0.03057785 0.03708152
> y161pl <- y161p$pred - 1.96*y161p$se
> y161pu <- y161p$pred + 1.96*y161p$se
> y161s <- predict.Arima(y161s.lm,n.ahead=3)
> #Seasonal ARIMA - calculate Forecast and se for transformed data
> y161s
Jan Feb Mar
2002 1.869099 1.869018 1.880240
Jan Feb Mar
2002 0.02067993 0.02889495 0.03515230
> y161sl <- y161s$pred - 1.96*y161s$se
> y161su <- y161s$pred + 1.96*y161s$se
We must undo the transformation for each Box-Jenkins model:
> #Regular ARIMA - return to original data
> y161u <- trans1(y161p$pred,(1/0.16))
> #Forecasts
> y161u
Jan Feb Mar
2002 50.30490 50.79854 50.38148
> #Lower limit for CI
> trans1(y161pl,(1/0.16))
Jan Feb Mar
2002 43.86780 41.46290 39.33705
> #Upper limit for CI
> trans1(y161pu,(1/0.16))
Jan Feb Mar
2002 57.51750 61.83970 63.92145
> #Seasonal ARIMA - return to original data
> y161uu <- trans1(y161s$pred,(1/0.16))
> #Forecasts
> y161uu
Jan Feb Mar
2002 49.85414 49.84070 51.74074
> #Lower limit for CI
> trans1(y161sl,(1/0.16))
Jan Feb Mar
2002 43.47017 41.12099 40.97337
> #Upper limit for CI
> trans1(y161su,(1/0.16))
Jan Feb Mar
2002 57.00781 60.06359 64.79127
Finally, we calculate the error measures for each model:
> #Regular ARIMA
> forerr(ya.ts,y161u)
[1] 1.021643
[1] 1.428448
> #Seasonal ARIMA
> forerr(ya.ts,y161uu)
[1] 1.121969
[1] 3.399703
Here are the tables with the actual values, the forecast values, and the intervals for January through March of 2002 for each of the Box-Jenkins models. For the regular ARIMA, the intervals are
considerably more narrow. The March forecast value decreases, which matches the pattern of the actual value. In the second table, the SARIMA model is excellent in the first two months, but is
disappointing in March. We see narrow intervals yet again, which can provide useful information to real-world investors.
Table 2. Regular ARIMA Forecasts
Date Actual Forecast Lower Bound Upper Bound
Jan 49.98 50.30 43.87 57.52
Feb 49.89 50.80 41.46 61.84
Mar 48.55 50.38 39.34 63.92
Table 3. Seasonal ARIMA Forecasts
Date Actual Forecast Lower Bound Upper Bound
Jan 49.98 49.85 43.47 57.01
Feb 49.89 49.84 41.12 60.06
Mar 48.55 51.74 40.97 64.79
The Box-Jenkins models far outperform the multiplicative decomposition method. In all of the ARIMA models, the MAD and MSE values are less than 5, while the other method struggles. For the best model
in the Box-Jenkins, we would go with the regular ARIMA for the full 3 month period. The SARIMA was most impressive in January and February, but the upward blip in March disturbed the performance.
Overall, however, we can see for this data set, that the Box-Jenkins methods are the most effective.
7. Minitab Comparison
We used the same data set with the Minitab program. Minitab is used in several of our references. A 30-day demonstration version is available for free downloading from www.minitab.com. Even though we
had not used Minitab in several years, we found the processes that we needed easily, with the aid of the online help files. We wil use the Box-Cox transformation, the multiplicative decomposition,
and the ARIMA process.
The Box-Cox transformation is located on the Control charts menu. Here we came upon an interesting divergence. The optimal R. We use both the 0.113 transformed series and calculated a 0.16
transformed series, simply for comparison purposes.
On the time series menu, we used the decomposition option. We used the trend and seasonal option. We noted that there were no options for cyclical and irregular components. We produced the forecast
values for each series. There is also no option for confidence interval. We were not entirely surprised by this, because the confidence intervals described by Bowerman and O'Connell (1993) is an
empirical method rather than a theoretical one. When we transformed the forecasts back, our decomposition values were quite removed from the actuals. Using the 0.113 transformed series, we found a
MAD value of 17.3767, and a MSE value of 313.2327. With the 0.16 transformed series, we obtained a MAD value of 12.2967, with a MSE of 160.5583. The 0.16 series has values comparable to those
calculated by R.
For the ARIMA functions, the menu is quite easy to use. The forecasts are calculated by an option on the menu. We did notice that the AIC is not available. We use the AIC to determine the best model,
so this could be a potential problem. For forecast errors, the regular ARIMA(1,1,1) yielded a 1.40 MAD value and a MSE value of 3.3006 for the 0.113 series, while the 0.16 series yielded a 1.4067 MAD
with a 4.3843 MSE. Finally, the seasonal model produced a MAD 2.5967 with a MSE 8.2023 for the 0.113 series. The 0.16 series had a MAD of 1.0133 and a MSE of 1.4122. Here again, the 0.16 results are
in line with those from R.
Our comparison study is quite interesting. We really do have a mixture of results. For ease of use, Minitab would be the winner. It is menu driven, and students are accustomed to that scenario.
Students are familiar with worksheet functions. Minitab, too, also has beautiful plots that can be put into other packages. However, we did see more accurate forecasting results with the R package.
We obtained quite different results for the transformation, which impacted the rest of the process. The main concern was the lack of the AIC. This statistic is much more effective than looking at
autocorrelation functions.
The R package has extensions for more advanced time series methods. There are functions to generate simulated ARIMA series for students to practice model identification. Large scale simulation
studies can be carried out easily via the R BATCH command. Many of the standard time series data sets, such as the Canadian lynx data, and sunspot data are part of the R package. Also, data sets from
both SAS and SPSS can be imported into R. Minitab supports SPSS but not SAS data files. Topics such as the Kalman filter, fractional differencing and the ARCH/GARCH models are available in R, but not
in Minitab.
A final consideration may be the intended audience for the course. If the students do not have extensive computer experience, Minitab may be preferable. For more sophisticated users, R may be the
language of choice.
8. Conclusion
In a comparison with Minitab and R, we should consider cost and ease of use. If students do not have access to Minitab on campus, they can ``rent" it for a semester. Minitab is user-friendly and menu
driven. At many universities, the cost factor is not a problem, since most packages are supported in computer labs. But with budget constraints, smaller universities can still have access to an
excellent statistics package.
We have found that the students learned and enjoyed their time series course by the concentration on one software package. Since R is free and is easily downloaded, students do not need to be
concerned with access. With only one package to learn, we could spend more time refining concepts, and developing better models. Also, we could use both regular and seasonal models to their full
advantages. The R package is a most effective learning tool for the undergraduate time series experience. We can employ R for concepts at all levels to supplement students' knowledge base.
Here is the source code for the R functions that we wrote:
> trans1
function(x,lam=1) {
p1 <- abs(lam)
#Check for negative values
if(min(x) <=0 && p1 != Inf)stop("Data values must be positive")
n1 <- length(x)
wa <- numeric(length=n1)
#Set up for exp
if(p1 == Inf) {
wa <- exp(x)
#Set up for log
if(p1 == 0) {
wa <- log(x)
#Set up for regular power
else {
wa <- exp(p1*log(x))
#Set up for negative power
if(lam < 0)wa <- 1/wa
> decom1
function(x,fore1=0,se1=1) {
if(is.ts(x) != T)stop("Data must be a time series")
n1 <- length(x)
f1 <- tsp(x)[3]
f21 <- f1
ck1 <- n1/f1
if(se1 != 1)f21 <- 1
if(ck1 != floor(ck1))stop("Need exact values for a year")
if(fore1 < 0)stop("Forecast value must be positive")
if(fore1 > n1)stop("Forecast value must be less than series length")
#Load ts library
#Now start the seasonal process
#This is NOT done for annual data
if(f21 != 1) {
y <- filter(x,rep(1,f1))/f1
z <- filter(y,rep(1,2))/2
xx <- as.vector(z)
z1 <- c(NA,xx[-n1])
w1 <- x/z1
w2 <- matrix(w1,nrow=f1)
w3 <- apply(w2,1,function(x)mean(x,na.rm=T))
w4 <- sum(w3)/f1
w3 <- w3/w4
sea1 <- rep(w3,length=n1)
sea1 <- ts(sea1,start=start(x),freq=f1)
ab <- f1 - start(x)[2] +2
sea2 <- sea1[ab:(ab+f1-1)]
dy <- x/sea1
else {
sea1 <- rep(1,length=n1)
sea2 <- 1
dy <- x
#Begin fitting the trend
t1 <- 1:n1
trend.lm <- lm(dy ~ t1)
#Obtain Final Fitted series
yhat <- trend.lm$fitted.values*sea1
#We will get cyclical and irregular values
cr1 <- x/yhat
cy1 <- as.vector(filter(cr1,rep(1,3))/3)
ir1 <- cr1/cy1
#Calculate forecasts if needed
if(fore1 != 0) {
new1 <- data.frame(t1=(n1+1):(n1+fore1))
pred1 <- predict(trend.lm,newdata=new1,interval="prediction")
pred2 <- (pred1[,3] - pred1[,2])/2
xs1 <- sea1[1:fore1]
pred4 <- pred1[,1]*xs1
pred5 <- pred4 - pred2
pred6 <- pred4 + pred2
zz <- list(int=trend.lm$coef[1],slope=trend.lm$coef[2],deas=dy,
else {
zz <- list(int=trend.lm$coef[1],slope=trend.lm$coef[2],deas=dy,
function(act1,fore1) {
#Check input lengths
if(length(act1) != length(fore1))stop("Length of actual and forecast not equal")
#Calculate Mean Absolute deviation
mad1 <- sum(abs(act1-fore1))/length(act1)
#Calculate Mean Square Error
mse1 <- sum( (act1-fore1)^2 )/length(act1)
zz <- list(mad=mad1,mse=mse1)
We will print a sample of the y16d object here:
> y16d
1.920953 1.933335 1.948505
1.790454 1.802817 1.817969
2.051452 2.063852 2.079041
The authors wishes to thank the editor and two referees for their very helpful comments and suggestions.
Becker, R.A., Chambers, J.M., and Wilks, A.R. (1988), The New S Language: A Programming Environment for Data Analysis and Graphics, Pacific Grove, CA: Wadsworth and Brooks Cole.
Bowerman, B.L., and O'Connell, R.T. (1993), Forecasting and Time Series: An Applied Approach (3^rd ed.), Pacific Grove, CA: Duxbury.
Box, G.E.P., and Cox, D.R.(1964), ``An analysis of transformations," Journal of the Royal Statistical Society, Series B, 26, 211 - 252.
Box, G.E.P., and Jenkins, G.M. (1976), Time Series Analysis: Forecasting and Control, San Francisco, CA: Holden-Day.
Chambers, J.M. (1998), Programming with Data: A Guide to the S Language, New York: Springer.
Chambers, J.M, and Hastie, T.J. (1992), Statistical Models in S, Pacific Grove, CA: Wadsworth and Brooks Cole.
Dalgaard, P. (2002), Introductory Statistics with R, New York: Springer.
Fox, J. (2002), An R and S-Plus Companion to Applied Regression, Thousand Oaks, CA: Sage Publications.
Ihaka, R., and Gentleman, R. (1996), ``R: A Language for Data Analysis and Graphics," Journal of Computational and Graphical Statistics, 5, 299-314.
Krause, A., and Olson, M. (2000), The Basics of S and S-Plus, New York: Springer.
Kvanli, A.H., Pavur, R.J., and Guynes, C.S. (2000), Introduction to Business Statistics (5^th ed.), Cincinnati, OH: South-Western.
Pinheiro, J.C., and Bates, D.M. (2000), Mixed-Effects Models in S and S-Plus, New York: Springer.
R Development Core Team (2003), R Environment for Statistical Computing and Graphics, CRAN.R-project.org/manuals.html
Spector, P.C. (1994), An Introduction to S and S-Plus, Belmont, CA: Duxbury.
Venables, W.N., and Ripley, B.D. (2000), S Programming, New York: Springer.
Venables, W.N., and Ripley, B.D. (2002), Modern Applied Statistics with S-Plus (4^th ed.), New York: Springer.
Venables, W.N., Smith, D.M. and the R Development Core Team (2003), An Introduction to R, London: Network Theory Limited.
Zivot, E., and Wang J. (2002), Modeling Financial Time Series With S-Plus, New York: Springer-Verlag.
Erin M. Hodgess
Department of Computer and Mathematical Sciences
University of Houston - Downtown
One Main Street
Houston, TX 77002
U. S. A.
Volume 12 (2004) | Archive | Index | Data Archive | Information Service | Editorial Board | Guidelines for Authors | Guidelines for Data Contributors | Home Page | Contact JSE | ASA Publications
|
{"url":"http://www.amstat.org/publications/jse/v12n3/hodgess.html","timestamp":"2014-04-20T05:44:52Z","content_type":null,"content_length":"48788","record_id":"<urn:uuid:c1effc9f-1ab5-4dc9-a47d-755dadff457f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The weak field approximation
Looks like I really don't have a feel for it. So I was working on this the other day.
(arranged in order)
http://img218.imageshack.us/img218/9613/1gx9.jpg http://img68.imageshack.us/img68/4677/2te4.jpg http://img68.imageshack.us/img68/7853/3jg9.jpg http://img68.imageshack.us/img68/6273/4qw8.jpg http://
It's fairly straightforward, but I think I'm just not used to the style. For example in 17.19 they took only the spatial equations because the metric doesn't change with time. Well just going by the
math, I don't see any constraints on n. I see the constraints on k,j,p though. Do they translate to n as well? Same thing happens at 17.25. I figured you can choose to consider any parts of your
system for whatever reason.
Then at 17.36 when they just dropped that entire term, but chose not to do the same with the 17.35 term. It works, though. The solution at the end is correct.
So I got to thinking .. What role do these derivations really play? Does it really matter how you show that GR reduces to newtonian mechanics? GR is correct whether you do or not, right? I even saw a
place where the author started with the metric for the newtonian limit and 'derived' f=ma. It just seems like so much handwaving smoke and mirrors.
Of course I'm still new to all this so it's possible I didn't pay attention a few pages back. Thoughts?
|
{"url":"http://www.physicsforums.com/showthread.php?t=128005","timestamp":"2014-04-18T10:44:55Z","content_type":null,"content_length":"29564","record_id":"<urn:uuid:d6454271-da93-4c39-9d10-b6456835a467>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Two stationary positive point charges, charge 1 of
View the step-by-step solution to:
Two stationary positive point charges, charge 1 of magnitude 4.00nC and...
Home Tutors Physics Two stationary positive point ...
This question has been answered by Expert on Sep 21, 2010. View Solution
Two stationary positive point charges, charge 1 of magnitude 4.00nC and charge 2 of magnitude 1.95nC , are separated by a distance of 54.0cm . An electron is released from rest at the point midway
between the two charges, and it moves along the line connecting the two charges.
Part A
What is the speed vfinal of the electron when it is 10.0 from charge 1?
Express your answer in meters per second.
|
{"url":"http://www.coursehero.com/tutors-problems/Physics/6366238-Two-stationary-positive-point-charges-charge-1-of-magnitude-400nC-a/","timestamp":"2014-04-18T13:11:28Z","content_type":null,"content_length":"41650","record_id":"<urn:uuid:c97739cc-414b-4465-bba3-ff33518ca66b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Wolfram Demonstrations Project
Simple Graphs and Their Binomial Edge Ideals
This Demonstration illustrates the relationship between combinatorial properties of a simple graph and its binomial edge ideal (see the Details section for definitions). In particular, it can be
used to verify that a graph is closed (for a given ordering of vertices) if and only if the Groebner basis of its edge ideal consists of quadratic polynomials. By starting with a random graph that
is not closed and adding suitable edges until the Groebner basis consists only of quadratic polynomials, you can find the closure of the graph, that is, the minimal closed graph containing the given
graph. Alternatively, you can start with a complete graph (which is always closed) and remove edges (or vertices) to obtain non-closed graphs.
To add/delete a vertex, choose the vertex number from the third setter bar. To add/delete an edge, choose the first and second vertex of the edge from the first two setter bars.
Let be a simple graph on the vertex set We say that the graph is closed with respect to the given ordering of vertices if satisfies the condition that for any pair of edges and with and , the edge
is an edge of and for any pair of edges and with and the edge is an edge of .
For a field , let be the ring of polynomials in variables. The binomial edge ideal is the ideal generated by the elements where and is an edge of Binomial edge ideals of graphs were introduced in
[1] and play a role in the study of conditional independence statements and the subject of algebraic statistics [2]. In this Demonstration we illustrate theorem 1.1 of (1), which states that a
simple graph is closed (for a given ordering) if and only if the reduced Gröbner basis of its binomial edge ideal with respect to the lexicographic ordering on induced by is quadratic (and generated
by ).
[1] J. Herzog, T. Hibi, F. Hreinsdóttir, F. Kahle, and T. Rauh, "Binomial Edge Ideals and Conditional Independence Statements,"
, 2009.
[2] M. Drton, B. Sturmfels, and S. Sullivant,
Lectures on Algebraic Statistics
, Vol. 39, Berlin: Springer, 2009.
|
{"url":"http://demonstrations.wolfram.com/SimpleGraphsAndTheirBinomialEdgeIdeals/","timestamp":"2014-04-18T18:12:28Z","content_type":null,"content_length":"45561","record_id":"<urn:uuid:e7f8eb57-47de-436d-9662-0ef615a2ef5d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
|
K.: Messy genetic algorithms: Motivation, analysis, and first results
Results 1 - 10 of 270
, 2000
"... Multi-objective evolutionary algorithms which use non-dominated sorting and sharing have been mainly criticized for their (i) O(MN computational complexity (where M is the number of objectives
and N is the population size), (ii) non-elitism approach, and (iii) the need for specifying a sharing param ..."
Cited by 807 (28 self)
Add to MetaCart
Multi-objective evolutionary algorithms which use non-dominated sorting and sharing have been mainly criticized for their (i) O(MN computational complexity (where M is the number of objectives and N
is the population size), (ii) non-elitism approach, and (iii) the need for specifying a sharing parameter. In this paper, we suggest a non-dominated sorting based multi-objective evolutionary
algorithm (we called it the Non-dominated Sorting GA-II or NSGA-II) which alleviates all the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN ) computational
complexity is presented. Second, a selection operator is presented which creates a mating pool by combining the parent and child populations and selecting the best (with respect to fitness and
spread) N solutions. Simulation results on a number of difficult test problems show that the proposed NSGA-II, in most problems, is able to find much better spread of solutions and better convergence
near the true Pareto-optimal front compared to PAES and SPEA - two other elitist multi-objective EAs which pay special attention towards creating a diverse Pareto-optimal front. Moreover, we modify
the definition of dominance in order to solve constrained multi-objective problems eciently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective,
seven-constraint non-linear problem, are compared with another constrained multi-objective optimizer and much better performance of NSGA-II is observed. Because of NSGA-II's low computational
requirements, elitist approach, parameter-less niching approach, and simple constraint-handling strategy, NSGA-II should find increasing applications in the coming years.
- Foundations of Genetic Algorithms , 1991
"... This paper considers a number of selection schemes commonly used in modern genetic algorithms. Specifically, proportionate reproduction, ranking selection, tournament selection, and Genitor (or
«steady state") selection are compared on the basis of solutions to deterministic difference or diffe ..."
Cited by 389 (32 self)
Add to MetaCart
This paper considers a number of selection schemes commonly used in modern genetic algorithms. Specifically, proportionate reproduction, ranking selection, tournament selection, and Genitor (or
«steady state") selection are compared on the basis of solutions to deterministic difference or differential equations, which are verified through computer simulations. The analysis provides
convenient approximate or exact solutions as well as useful convergence time and growth ratio estimates. The paper recommends practical application of the analyses and suggests a number of paths for
more detailed analytical investigation of selection techniques. Keywords: proportionate selection, ranking selection, tournament selection, Genitor, takeover time, time complexity, growth ratio. 1
, 2001
"... The paper describes the hierarchical Bayesian optimization algorithm which combines the Bayesian optimization algorithm, local structures in Bayesian networks, and a powerful niching technique.
The proposed algorithm is able to solve hierarchical traps and other difficult problems very efficiently. ..."
Cited by 255 (63 self)
Add to MetaCart
The paper describes the hierarchical Bayesian optimization algorithm which combines the Bayesian optimization algorithm, local structures in Bayesian networks, and a powerful niching technique. The
proposed algorithm is able to solve hierarchical traps and other difficult problems very efficiently.
- COMPLEX SYSTEMS , 1991
"... This paper considers the effect of stochasticity on the quality of convergence of genetic algorithms (GAs). In many problems, the variance of building-block fitness or so-called collateral noise
is the major source of variance, and a population-sizing equation is derived to ensure that average sig ..."
Cited by 239 (85 self)
Add to MetaCart
This paper considers the effect of stochasticity on the quality of convergence of genetic algorithms (GAs). In many problems, the variance of building-block fitness or so-called collateral noise is
the major source of variance, and a population-sizing equation is derived to ensure that average signal-to-collateral-noise ratios are favorable to the discrimination of the best building blocks
required to solve a problem of bounded deception. The sizing relation is modified to permit the inclusion of other sources of stochasticity, such as the noise of selection, the noise of genetic
operators, and the explicit noise or nondeterminism of the objective function. In a test suite of five functions, the sizing relation proves to be a conservative predictor of average correct
convergence, as long as all major sources of noise are considered in the sizing calculation. These results suggest how the sizing equation may be viewed as a coarse delineation of a boundary between
what a physicist might call two distinct phases of GA behavior. At low population sizes the GA makes many errors of decision, and the quality of convergence is largely left to the vagaries of chance
or the serial fixup of flawed results through mutation or other serial injection of diversity. At large population sizes, GAs can reliably discriminate between good and bad building blocks, and
parallel processing and recombination of building blocks lead to quick solution of even difficult deceptive problems. Additionally, the paper outlines a number of extensions to this work, including
the development of more refined models of the relation between generational average error and ultimate convergence quality, the development of online methods for sizing populations via the estimation
of population-s...
, 1999
"... Parallel implementations of genetic algorithms (GAs) are common, and, in most cases, they succeed to reduce the time required to find acceptable solutions. However, the effect of the parameters
of parallel GAs on the quality of their search and on their efficiency are not well understood. This insuf ..."
Cited by 222 (5 self)
Add to MetaCart
Parallel implementations of genetic algorithms (GAs) are common, and, in most cases, they succeed to reduce the time required to find acceptable solutions. However, the effect of the parameters of
parallel GAs on the quality of their search and on their efficiency are not well understood. This insufficient knowledge limits our ability to design fast and accurate parallel GAs that reach the
desired solutions in the shortest time possible. The goal of this dissertation is to advance the understanding of parallel GAs and to provide rational guidelines for their design. The research
reported here considered three major types of parallel GAs: simple master-slave algorithms with one population, more sophisticated algorithms with multiple populations, and a hierarchical combination
of the first two types. The investigation formulated simple models that predict accurately the quality of the solutions with different parameter settings. The quality predictors were transformed into
population-sizing equations, which in turn were used to estimate the execution time of the algorithms.
, 1997
"... This paper presents a model for predicting the convergence quality of genetic algorithms. The model incorporates previous knowledge about decision making in genetic algorithms and the initial
supply of building blocks in a novel way. The result is an equation that accurately predicts the quality of ..."
Cited by 210 (88 self)
Add to MetaCart
This paper presents a model for predicting the convergence quality of genetic algorithms. The model incorporates previous knowledge about decision making in genetic algorithms and the initial supply
of building blocks in a novel way. The result is an equation that accurately predicts the quality of the solution found by a GA using a given population size. Adjustments for different selection
intensities are considered and computational experiments demonstrate the effectiveness of the model. I. Introduction The size of the population in a genetic algorithm (GA) is a major factor in
determining the quality of convergence. The question of how to choose an adequate population size for a particular domain is difficult and has puzzled GA practitioners for a long time. Hard questions
are better approached using a divide-and-conquer strategy and the population sizing issue is no exception. In this case, we can identify two factors that influence convergence quality: the initial
supply of build...
- IEEE Transactions on Evolutionary Computation , 1997
"... Abstract — Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950’s. This article surveys the
history as well as the current state of this rapidly growing field. We describe the purpose, the general struc ..."
Cited by 207 (0 self)
Add to MetaCart
Abstract — Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950’s. This article surveys the history
as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) [with
links to genetic programming (GP) and classifier systems (CS)], evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e.,
representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview on the manifold of application domains, although this necessarily must remain
incomplete. Index Terms — Classifier systems, evolution strategies, evolutionary computation, evolutionary programming, genetic algorithms,
- Proceedings of the Sixth International Conference on Genetic Algorithms , 1995
"... A measure of search difficulty, fitness distance correlation (FDC), is introduced and examined in relation to genetic algorithm (GA) performance. In many cases, this correlation can be used to
predict the performance of a GA on problems with known global maxima. It correctly classifies easy deceptiv ..."
Cited by 204 (5 self)
Add to MetaCart
A measure of search difficulty, fitness distance correlation (FDC), is introduced and examined in relation to genetic algorithm (GA) performance. In many cases, this correlation can be used to
predict the performance of a GA on problems with known global maxima. It correctly classifies easy deceptive problems as easy and difficult non-deceptive problems as difficult, indicates when Gray
coding will prove better than binary coding, and is consistent with the surprises encountered when GAs were used on the Tanese and royal road functions. The FDC measure is a consequence of an
investigation into the connection between GAs and heuristic search. 1 INTRODUCTION A correspondence between evolutionary algorithms and heuristic state space search is developed in (Jones, 1995b).
This is based on a model of fitness landscapes as directed, labeled graphs that are closely related to the state spaces employed in heuristic search. We examine one aspect of this correspondence, the
relationship between...
, 1995
"... Niching methods extend genetic algorithms to domains that require the location and maintenance of multiple solutions. Such domains include classification and machine learning, multimodal
function optimization, multiobjective function optimization, and simulation of complex and adaptive systems. This ..."
Cited by 191 (1 self)
Add to MetaCart
Niching methods extend genetic algorithms to domains that require the location and maintenance of multiple solutions. Such domains include classification and machine learning, multimodal function
optimization, multiobjective function optimization, and simulation of complex and adaptive systems. This study presents a comprehensive treatment of niching methods and the related topic of
population diversity. Its purpose is to analyze existing niching methods and to design improved niching methods. To achieve this purpose, it first develops a general framework for the modelling of
niching methods, and then applies this framework to construct models of individual niching methods, specifically crowding and sharing methods. Using a constructed model of crowding, this study
determines why crowding methods over the last two decades have not made effective niching methods. A series of tests and design modifications results in the development of a highly effective form of
crowding, called determin...
, 1999
"... The goal of linkage learning, or building block identification, is the creation of a more effective genetic algorithm (GA). This paper explores the relationship between the linkage-learning
problem and that of learning probability distributions over multi-variate spaces. Herein, it is argued that th ..."
Cited by 190 (4 self)
Add to MetaCart
The goal of linkage learning, or building block identification, is the creation of a more effective genetic algorithm (GA). This paper explores the relationship between the linkage-learning problem
and that of learning probability distributions over multi-variate spaces. Herein, it is argued that these problems are equivalent. Using a simple but effective approach to learning distributions, and
by implication linkage, this paper reveals the existence of GA-like algorithms that are potentially orders of magnitude faster and more accurate than the simple GA. I. Introduction Linkage learning
in genetic algorithms (GAs) is the identification of building blocks to be conserved under crossover. Theoretical studies have shown that if an effective linkage-learning GA were developed, it would
hold significant advantages over the simple GA (2). Therefore, the task of developing such an algorithm has drawn significant attention. Past approaches to developing such an algorithm have focused
on ev...
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=10262","timestamp":"2014-04-18T21:05:29Z","content_type":null,"content_length":"41084","record_id":"<urn:uuid:3bfa002b-5fc7-4bc7-bc41-fca383b32546>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Chester Township, PA Algebra 1 Tutor
Find a Chester Township, PA Algebra 1 Tutor
...I have a Master of Science degree in math, over three years' experience as an actuary, and am a member of MENSA. I am highly committed to students' performances and to improve their
comprehension of all areas of mathematics.I have excelled in courses in Ordinary Differential Equations in both un...
19 Subjects: including algebra 1, calculus, geometry, statistics
...I have planned and executed numerous lessons for classes of high school students, as well as tutored many independently. I have a bachelor's degree in secondary math education. During my time
in college, I took one 3-credit course in Discrete Math.
11 Subjects: including algebra 1, calculus, algebra 2, geometry
I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and
Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University.
9 Subjects: including algebra 1, geometry, algebra 2, GRE
...I have excelled in and taken many elementary ed. classes, such as Differentiated Instruction, Curriculum and Assessment, Teaching Learning Communities 1 & 2, and many more. I have also had a
great deal experience in the Elementary schools during my practicums, and have taught a variety of lesson...
15 Subjects: including algebra 1, reading, English, grammar
...This lets me know how to teach them and it helps them understand how they process information in a classroom. I have been working with computers and math since the late seventies and I kept up
with the mathematical changes over many decades. My former students range from the inner city schools through the corporate computer executives.
14 Subjects: including algebra 1, geometry, SAT math, ACT Math
Related Chester Township, PA Tutors
Chester Township, PA Accounting Tutors
Chester Township, PA ACT Tutors
Chester Township, PA Algebra Tutors
Chester Township, PA Algebra 2 Tutors
Chester Township, PA Calculus Tutors
Chester Township, PA Geometry Tutors
Chester Township, PA Math Tutors
Chester Township, PA Prealgebra Tutors
Chester Township, PA Precalculus Tutors
Chester Township, PA SAT Tutors
Chester Township, PA SAT Math Tutors
Chester Township, PA Science Tutors
Chester Township, PA Statistics Tutors
Chester Township, PA Trigonometry Tutors
Nearby Cities With algebra 1 Tutor
Aston algebra 1 Tutors
Brookhaven, PA algebra 1 Tutors
Chester, PA algebra 1 Tutors
Crum Lynne algebra 1 Tutors
Drexel Hill algebra 1 Tutors
Eddystone, PA algebra 1 Tutors
Feltonville, PA algebra 1 Tutors
Garnet Valley, PA algebra 1 Tutors
Logan Township, NJ algebra 1 Tutors
Marcus Hook algebra 1 Tutors
Parkside, PA algebra 1 Tutors
Springfield, PA algebra 1 Tutors
Trainer, PA algebra 1 Tutors
Upland, PA algebra 1 Tutors
Woodlyn algebra 1 Tutors
|
{"url":"http://www.purplemath.com/chester_township_pa_algebra_1_tutors.php","timestamp":"2014-04-18T05:32:08Z","content_type":null,"content_length":"24602","record_id":"<urn:uuid:3c6ca2b6-6e62-4827-a8d8-c7d38b4a979d>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00090-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Suppose the unit square is divided into four parts by two perpen- dicular lines, each parallel to an edge. Show that at least two of the parts have area no larger than 1/4.
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/4f0d995be4b084a815fcd960","timestamp":"2014-04-17T12:51:15Z","content_type":null,"content_length":"72278","record_id":"<urn:uuid:5b163f5f-ca13-4bbb-a0f9-11826b9dfae7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Friday, March 30, 2012 Game Recap & Discussion [SPOILERS]
dhkendall wrote: I've heard of "Fermat's Last Theorem", but that's almost all I know about it. The only other thing I know about it is that it's some impossible math thing that wasn't proven
until recently. That is it. Have no idea what it is about, and the Wikipedia page soon bored me, couldn't make heads or tails of it
It's actually pretty simple. I assume you know the Pythagorean Theorem? a^2+b^2=c^2. Well, mathematicians spent years trying to find another exponent where that worked, but couldn't. The digits can
only be "squared," nothing else. But they could not
it. "Fermat's last theorem" supposedly did, but he didn't show it. He merely noted in the margin of another paper that he'd found a "remarkable proof" for the problem, but didn't publish it before he
died. So for centuries, mathematicians tried to find "Fermat's last theorem." They had so much difficulty doing so, most believed Fermat was jerking their chains.
Re: Friday, March 30, 2012 Game Recap & Discussion (SPOILERS)
John Boy wrote:
dhkendall wrote:Judges: I said "kubasa" (KOO-ba-sah) in K $800 (chuckled at the "Krossword Klues 'K'" title) - originally I came on to say "I know that's not the requisite number of letters
and therefore wrong, but could "kielbasa" be concievably pronounced that way to give me credit?" Then a quick Wikipedia check tells me that Canadians generally call "kielbasa" "kubasa", so I
could blame it on my ethnicity. Still fall short in the requisite number of letters, but perhaps lenience in pronunciation? (ie if I said "kubasa", would they think I'm trying to pronounce
"kielbasa" and credit me?)
I don't know what the correct pronunciation is. But around Cleveland (where the word gets said A LOT!) I've heard that first syllable uttered as "kie," "kuh," pretty much everything BUT
"KEEL-basa." Almost as if everyone had a, well, you know, lodged in his throat....
For what it's worth, the L in the original Polish is actually Ł, which is pronounced like the W in "water." At the end of a syllable as in kiełbasa, it sort of sounds like the preceding vowel is
swallowed. So there's a pretty solid precedent for not pronouncing kielbasa with an L, since it's not an L in the language it comes from.
TenPoundHammer wrote:Gujarat didn't sound remotely Indian to me, and "Gir" had me thinking about that hyperactive little robot thing from Invader Zim.
Non-English, multisyllabic, with a J in the middle pronounced as it is in English is a pretty strong indicator of an Indian word/name. Think Punjab, or Vijay.
Remember, you can't spell "harass" without Sarah
Vanya wrote:Several of my ancestors were named Sarah.
What about the women?
Re: Friday, March 30, 2012 Game Recap & Discussion (SPOILERS)
LeFlaneur wrote:No less than 9 lach trash pickups in J alone for my wife and me.
As a shoutout to Mark Barrett, I'd say that there were plenty of clues tonight that showed that there was no tournament player on stage, so I found the result somewhat satisfying as a fan. Nothing
against Beau, he was just sloppy out there. I would have liked to see him come out there focused, knowing that a likely tourney spot was on the line... It may have been partly that he hit a bad run
of clues that he didn't know. For some reason, at a point where the middle player looked particularly frazzled to me, I just had this feeling that she was going to win somehow. It felt like a
scenario I had seen play out before.
Re: Friday, March 30, 2012 Game Recap & Discussion (SPOILERS)
I said earlier in the week he had too many bad misses and it would come back to bite him. He was $2,700 short of a lock, with seven wrong responses, including a $2,500 DD miss. A little clamming here
and there and he'd have won the game before DD.
Poster formerly known as LifelongJeopFan
Re: Subcontinental divide
Bamaman wrote:
goforthetie wrote:Or you could ask yourself why so many Argentines have names like Ginobili, Messi, Sabatini...
Why a large number of them have names like Schultz and Mengele is a more pertinent question.
Actually, when he said German was the #4 language spoken there, I did think of the Nazi migration.
Fwiw, German immigration to South America had been going on for over a century before WWII. Only a small number fled there after the war. Not that that makes your reasoning invalid - South America
might have been a more attractive destination for fleeing Nazis because there were already ethnic Germans there.
Re: Friday, March 30, 2012 Game Recap & Discussion (SPOILERS)
I'm still trying to understand this clue (from the "EX" category):
Fair market value for the owners of the little store of the route of our highway;
Who do we this?
Can anyone explain this? The credited response was "expropriate," if that helps. tia
Re: Friday, March 30, 2012 Game Recap & Discussion (SPOILERS)
Magna wrote:I'm still trying to understand this clue (from the "EX" category):
Fair market value for the owners of the little store of the route of our highway;
Who do we this?
Can anyone explain this? The credited response was "expropriate," if that helps. tia
The clue refers to eminent domain, the power of a government to take private property for public projects, while giving the owner "fair market value." But expropriation usually refers to taking
without compensation, so it's a bad clue.
Re: Friday, March 30, 2012 Game Recap & Discussion (SPOILERS)
As for the so-called humour in the clue, it's meant to sound like the old-time (really old-time) cheer: "Two, four, six, eight, who do we appreciate (instead of expropriate)?" Kind lame, and it
confused me, but there you have it.
Re: Friday, March 30, 2012 Game Recap & Discussion (SPOILERS)
Thanks for the explanation. I agree - bad clue.
Other problems are that the grammar is distorted. It's the property, not the people, that is being expropriated. But replacing "this" in the clue with "expropriate," we get "Who do we expropriate?"
It sounds like the owners of the store are being taken captive or something. But worse, the clue doesn't even refer to taking (expropriating) anything. It just mentions a "little store of [on?] the
route of our highway." I guess we're supposed to infer that the land the store sits on is needed for a planned highway and is going to be taken for that purpose.
2-4-6-8! Whom do we appreciate?
Sage on the Hudson wrote:Theodore Roosevelt came to mind instantly, but I just couldn't think of whom the second New Yorker (despite having grown up in New York) might be until the memory of what
"Rocky" supposedly remarked when Gerald Ford asked him to be Veep: "I never wanted to be vice president of anything."
That's "who", not "whom". (Sorry, I couldn't resist.)
Re: Subcontinental divide
Magna wrote:
Bamaman wrote:
goforthetie wrote:Or you could ask yourself why so many Argentines have names like Ginobili, Messi, Sabatini...
Why a large number of them have names like Schultz and Mengele is a more pertinent question.
Actually, when he said German was the #4 language spoken there, I did think of the Nazi migration.
Fwiw, German immigration to South America had been going on for over a century before WWII. Only a small number fled there after the war. Not that that makes your reasoning invalid - South
America might have been a more attractive destination for fleeing Nazis because there were already ethnic Germans there.
Indeed, I come across Mennonites regularly in my day-to-day life (my work is full of them, my church is full of them, my sister lives in a Mennonite-heavy area of the province, you can't swing an
apple cobbler around here without hitting a Mennonite, really), and it seems that every last one of them has family in and/or comes from Bolivia or Paraguay. I've gotten to the point where I've
associated the landlocked countries of South America with Mennonite culture more than Spanish culture ...
"Jeopardy! is two parts luck and one part luck"
Re: 2-4-6-8! Whom do we appreciate?
plasticene wrote:
Sage on the Hudson wrote:Theodore Roosevelt came to mind instantly, but I just couldn't think of whom the second New Yorker (despite having grown up in New York) might be until the memory of
what "Rocky" supposedly remarked when Gerald Ford asked him to be Veep: "I never wanted to be vice president of anything."
That's "who", not "whom". (Sorry, I couldn't resist.)
Re: Subcontinental divide
Magna wrote:
Bamaman wrote:
goforthetie wrote:Or you could ask yourself why so many Argentines have names like Ginobili, Messi, Sabatini...
Why a large number of them have names like Schultz and Mengele is a more pertinent question.
Actually, when he said German was the #4 language spoken there, I did think of the Nazi migration.
Fwiw, German immigration to South America had been going on for over a century before WWII. Only a small number fled there after the war. Not that that makes your reasoning invalid - South
America might have been a more attractive destination for fleeing Nazis because there were already ethnic Germans there.
My Nazi comment was a bit of a joke, but it did come to mind when seeing the clue. Thank you, though for the historical information, that does make sense why Nazis may have fled there if there was a
significant German culture in the country.
Poster formerly known as LifelongJeopFan
|
{"url":"http://jboard.tv/viewtopic.php?p=26523","timestamp":"2014-04-17T21:51:07Z","content_type":null,"content_length":"44579","record_id":"<urn:uuid:a5901784-85cc-41de-a534-68a9265dbd18>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Silicon Raman amplifiers, lasers, and their applications
Conference Proceeding
Silicon Raman amplifiers, lasers, and their applications
Dept. of Electr. Eng., California Univ., Los Angeles, CA, USA
10/2005; DOI:10.1109/GROUP4.2005.1516397 ISBN: 0-7803-9070-9 In proceeding of: Group IV Photonics, 2005. 2nd IEEE International Conference on
ABSTRACT This paper presents recent breakthroughs and applications of Raman based silicon photonics such as silicon Raman amplifiers and lasers. These lasers would extend the wavelength range of
III-V laser to mid-IR where important applications such as laser medicine, biochemical sensing, and free space optical communication await the emergence of a practical and low cost laser.
[show abstract] [hide abstract]
ABSTRACT: The nonlinear process of stimulated Raman scattering is important for silicon photonics as it enables optical amplification and lasing. To understand the dynamics of silicon Raman
amplifiers (SRAs), a numerical approach is generally employed, even though it provides little insight into the contribution of different SRA parameters to the signal amplification process. In
this paper, we solve the coupled pump-signal equations analytically under realistic conditions, and derive an exact formula for the envelope of a signal pulse when picosecond optical pulses are
amplified inside a SRA pumped by a continuous-wave laser beam. Our solution is valid for an arbitrary pulse shape and fully accounts for the Raman gain-dispersion effects, including temporal
broadening and group-velocity reduction (a slow-light effect). It can be applied to any pumping scenario and leads to a simple analytic expression for the maximum optical delay produced by the
Raman dispersion in a unidirectionally pumped SRA. We employ our analytical formulation to study the evolution of optical pulses with Gaussian, exponential, and Lorentzian shapes. The ability of
a Gaussian pulse to maintain its shape through the amplifier makes it possible to realize soliton-like propagation of chirped Gaussian pulses in SRAs. We obtain analytical expressions for the
required linear chirp and temporal width of a soliton-like pulse in terms of the net signal gain and the Raman-dispersion parameter. Our results are useful for optimizing the performance of SRAs
and for engineering controllable signal delays.
Optics Express 08/2010; 18(17):18324-38. · 3.55 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: Silicon-based plasmonic waveguides can be used to simultaneously transmit electrical signals and guide optical energy with deep subwavelength localization, thus providing us with a well
needed connecting link between contemporary nanoelectronics and silicon photonics. In this paper, we examine the possibility of employing the large third-order nonlinearity of silicon to create
active and passive photonic devices with silicon-based plasmonic waveguides. We unambiguously demonstrate that the relatively weak dependance of the Kerr effect, two-photon absorption (TPA), and
stimulated Raman scattering on optical intensity, prevents them from being useful in μm-long plasmonic waveguides. On the other hand, the TPA-initiated free-carrier effects of absorption and
dispersion are much more vigorous, and have strong potential for a variety of practical applications. Our work aims to guide research efforts towards the most promising nonlinear optical
phenomena in the thriving new field of silicon-based plasmonics.
Optics Express 01/2011; 19(1):206-17. · 3.55 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: Since the recent demonstration of chip-scale, silicon-based, photonic devices, silicon photonics provides a viable and promising platform for modern nonlinear optics. The development
and improvement of such devices are helped considerably by theoretical predictions based on the solution of the underlying nonlinear propagation equations. In this paper, we review the
approximate analytical tools that have been developed for analyzing active and passive silicon waveguides. These analytical tools provide the much needed physical insight that is often lost
during numerical simulations. Our starting point is the coupled-amplitude equations that govern the nonlinear dynamics of two optical waves interacting inside a silicon-on-insulator waveguide. In
their most general form, these equations take into account not only linear losses, dispersion, and the free-carrier and Raman effects, but also allow for the tapering of the waveguide. Employing
approximations based on physical insights, we simplify the equations in a number of situations of practical interest and outline techniques that can be used to examine the influence of intricate
nonlinear phenomena as light propagates through a silicon waveguide. In particular, propagation of single pulse through a waveguide of constant cross section is described with a perturbation
approach. The process of Raman amplification is analyzed using both purely analytical and semianalytical methods. The former avoids the undepleted-pump approximation and provides approximate
expressions that can be used to discuss intensity noise transfer from the pump to the signal in silicon Raman amplifiers. The latter utilizes a variational formalism that leads to a system of
nonlinear equations that governs the evolution of signal parameters under the continuous-wave pumping. It can also be used to find an optimum tapering profile of a silicon Raman amplifier that
provides the highest net gain for a given pump power.
IEEE Journal of Selected Topics in Quantum Electronics 03/2010; · 4.08 Impact Factor
Data provided are for informational purposes only. Although carefully collected, accuracy cannot be guaranteed. The impact factor represents a rough estimation of the journal's impact factor and does
not reflect the actual current impact factor. Publisher conditions are provided by RoMEO. Differing provisions from the publisher's actual policy or licence agreement may be applicable.
27 Downloads
Available from
Dec 11, 2012
|
{"url":"http://www.researchgate.net/publication/4179088_Silicon_Raman_amplifiers_lasers_and_their_applications","timestamp":"2014-04-23T08:06:52Z","content_type":null,"content_length":"209296","record_id":"<urn:uuid:bed96c4c-3cdf-45a9-aaa2-a53fbca10e81>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SPOJ.com - Problem CODESPTC
SPOJ Problem Set (classical)
9723. Card Shuffling
Problem code: CODESPTC
Here is an algorithm for shuffling N cards:
1) The cards are divided into K equal piles, where K is a factor of N.
2) The bottom N / K cards belong to pile 1 in the same order (so the bottom card of the initial pile is the bottom card of pile 1).
3) The next N / K cards from the bottom belong to pile 2, and so on.
4) Now the top card of the shuffled pile is the top card of pile 1. The next card is the top card of pile 2,..., the Kth card of the shuffled pile is the top card of pile K. Then (K + 1)th card is
the card which is now at the top of pile 1, the (K + 2)nd is the card which is now at the top of pile 2 and so on.
For example, if N = 6 and K = 3, the order of a deck of cards "ABCDEF" (top to bottom) when shuffled once would change to "ECAFDB".
Given N and K, what is the least number of shuffles needed after which the pile is restored to its original order?
The first line contains the number of test cases T. The next T lines contain two integers each N and K.
Output T lines, one for each test case containing the minimum number of shuffles needed. If the deck never comes back to its original order, output -1.
T <= 10000
2 <= K <= N <= 10^9
K will be a factor of N.
Sample Input:
Sample Output:
Here is an algorithm for shuffling N cards:
1) The cards are divided into K equal piles, where K is a factor of N.
2) The bottom N / K cards belong to pile 1 in the same order (so the bottom card of the initial pile is the bottom card of pile 1).
3) The next N / K cards from the bottom belong to pile 2, and so on.
4) Now the top card of the shuffled pile is the top card of pile 1. The next card is the top card of pile 2,..., the Kth card of the shuffled pile is the top card of pile K. Then (K + 1)th card is
the card which is now at the top of pile 1, the (K + 2)nd is the card which is now at the top of pile 2 and so on.
For example, if N = 6 and K = 3, the order of a deck of cards "ABCDEF" (top to bottom) when shuffled once would change to "ECAFDB".
Given N and K, what is the least number of shuffles needed after which the pile is restored to its original order?
The first line contains the number of test cases T. The next T lines contain two integers each N and K.
Output T lines, one for each test case containing the minimum number of shuffles needed. If the deck never comes back to its original order, output -1.
T <= 10000
2 <= K <= N <= 10^9
K will be a factor of N.
Sample Input:
Sample Output:
Added by: Varun Jalan
Date: 2011-10-15
Time limit: 4s
Source limit: 50000B
Memory limit: 256MB
Cluster: Pyramid (Intel Pentium III 733 MHz)
Languages: All
Resource: own problem used for CodeSprint - InterviewStreet Contest
hide comments
2013-04-16 19:37:13 (Tjandra Satria Gunawan)(曾毅昆)
Why my memory usage is 2.6MB? I didn't precompute the value that much. Is SPOJ change the system?
@Francky: Thanks for your recomendation that was great ;-) Imho, it's better to put all your recommended problem in one page like TJANDRA page. :-)
edit: 4.27s is my best, I can't beat your record, congratulations..
--ans(francky)--> I'll try to take some time to make my Francky page. You made a good chrono too, congratulation to you too. Concerning memory usage, I confirm a 'spoj change' as my same code have
now more memory usage. I don't know yet the reason, please let us continue speaking about that in the forum section. ;-)
Last edit: 2013-04-17 17:34:44
2013-04-13 11:03:31 Francky
My 200th problem, AC at first try, and first place. I highly recommend this one to all great solvers ;-)
Many thanks to Varun Jalan for the quality of his tasks.
|
{"url":"http://www.spoj.com/problems/CODESPTC/","timestamp":"2014-04-16T22:00:53Z","content_type":null,"content_length":"26687","record_id":"<urn:uuid:8d846ca5-da78-4b38-8bd7-7ad524f938d2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole.
Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material,
please consider the following text as a useful but insufficient proxy for the authoritative book pages.
Do not use for reproduction, copying, pasting, or reading; exclusively for search engines.
OCR for page 59
59 CHAPTER 6 Examination of LTPP Database for Indications of an Endurance Limit Introduction 2. Design procedures that use the equivalent temperature concept but the axle load distribution for each
axle type-- Flexible pavements have traditionally been designed to These procedures also use the cumulative damage con- limit load-related cracking. The more traffic, the thicker the cept to
determine the amount of fracture damage for each HMA layer to limit the load-related cracks to some design structure. The PerRoad Program would fall within this limit. As noted, however, industry has
been proposing the category (63). use of an endurance limit as a mixture property for HMA 3. Design procedures that calculate and use pavement tem- layers. The endurance limit is defined as the
tensile strain peratures at specific depths over some time interval, gener- below which no fracture or fatigue damage occurs and applies ally less than a month--These procedures typically use the to
cracks that initiate at the bottom of the HMA layer. Almost incremental damage concept to determine the amount of all design and analysis procedures that use the endurance fracture damage within
specific time intervals and at spe- limit concept assume that one value applies to all HMA mix- cific depths within the pavement structure. The MEPDG tures and temperatures. Values that have been
used vary from would fall within this category (64). 65 to 120 ms. This section of the report has three objectives: discuss the The equivalent temperature concept simply defines one incorporation of
the endurance limit design premise into temperature for which the annual or seasonal damage equals mechanistic-empirical based pavement design procedures, the cumulative damage determined at monthly
or more fre- confirm the reality and values suggested for the endurance quent intervals. The equivalent temperature is used to estimate limit, and recommend field studies to support use of this con-
the dynamic modulus for calculating the tensile strain at the cept in the MEPDG software. bottom of the HMA layer on an annual or seasonal basis. All M-E based design procedures, regardless of the
group, use Miner's hypothesis to calculate fracture damage, and Including the Endurance assume that wheel-load-related alligator cracks initiate at the Limit Design Premise into bottom of the HMA
layer and propagate to the surface with Mechanistic-Empirical-Based continued truck loadings, with the exception of the MEPDG. Pavement Design Procedures In addition, all M-E based design procedures
use the max- All mechanistic-empirical pavement design procedures can imum tensile strain at the bottom of the HMA layer as the be grouped into three types relative to wheel-load-induced pavement
response parameter for calculating fracture damage cracking. These are as follows: and predicting the amount of alligator cracks. Those design procedures apply the endurance limit design premise in
one 1. Design procedures that use the equivalent axle load and of three methods, which are summarized as follows: equivalent temperature concepts--The equivalent tem- perature is determined based on
an annual or monthly 1. The introduction of the endurance limit design premise basis. These procedures typically use the cumulative dam- into those design procedures that use the equivalent tem- age
concept to determine the amount of fracture damage perature and equivalent axle load concepts is straight for- over the design period for each structure. The DAMA Pro- ward. Stated simply, the
maximum tensile strain is cal- gram would fall within this category (62). culated at the equivalent temperature and axle load and
OCR for page 59
60 Standard Mix; Conv. Low Modulus, Conv. High Modulus, Conv. Standard Mix, Full-Depth Endurance Limit 180 Tensile Strain, Bottom of HMA, 160 140 120 micro-strains 100 80 60 40 20 0 7 8 9 10 11 12 13
14 15 16 17 18 19 HMA Thickness, inches Figure 6.1. Tensile strains calculated for an 18-kip single-axle load for the equivalent annual temperature for different HMA mixtures. compared to the
endurance limit. The HMA layer thickness with this method is that the higher loads result in signifi- is simply determined for which the maximum tensile strain cantly higher damage indices; an
increase in axle load will equals or is less than the endurance limit. Figure 6.1 illus- result in an increase in damage to a power of about four. trates the use of the endurance limit within this
method. Thus, the probability of cracking is much higher than the 2. The introduction of the endurance limit into those design probability of a specific tensile strain being exceeded. procedures that
use the equivalent temperature concept 3. Those design procedures that use the incremental dam- but use the actual axle load distribution is also fairly age concept establish a threshold value for
the tensile straight forward. The maximum tensile strain is calculated strain, below which the fracture damage is assumed to at the equivalent temperature for each axle load within the be zero. In
other words, the procedure simply ignores axle load distribution. The axle load distribution for each calculated tensile strains that are equal to or less than the axle type is used to determine the
probability of the tensile value set as the endurance limit for determining the incre- strain exceeding the endurance limit. The designer then mental damage within a specific time period and depth.
considers that probability of exceeding that critical value Successive runs have been made with the MEPDG to deter- in designing an HMA layer for which no fatigue damage mine the difference in
calculated fracture damage with and would accumulate over time. Figure 6.2 illustrates the use without using the endurance limit as an HMA mixture of the endurance limit within this method. One
concern property. Figures 6.3 and 6.4 illustrate the increasing Standard Mix, Conv. Low Modulus, Conv. High Modulus, Conv. Standard Mix, Full-Depth 105 Endurance Limit (65 micro- Probability of
Exceeding 100 95 strains), % 90 85 80 75 70 7 8 9 10 11 12 13 14 15 16 17 18 19 HMA Thickness, inches Figure 6.2. Probability of exceeding the endurance limit for different HMA mixtures using typical
axle load distributions and seasonal temperatures.
OCR for page 59
61 Summer, 250 ksi E=450 ksi E=650 ksi Winter, 900 ksi Endurance Limit 160 Tensile Strain, Bottom of HMA 140 Layer, micro-strains 120 100 80 60 40 20 0 6 16 26 36 Single Axle Load, kips Figure 6.3.
Increasing tensile strains for varying single-axle loads for different seasons or dynamic modulus within those seasons (HMA thickness equals 15 in.). maximum tensile strains for varying single-axle
loads for ing in the wheel paths is assumed to initiate at the surface and different dynamic modulus values and HMA thicknesses, propagate downward. The MEPDG assumes that both types respectively. of
cracking are caused by load-induced tensile strains. That hypothesis, however, has yet to be confirmed. Version 0.9 of the MEPDG did not include the endurance As noted above, the new MEPDG uses an
incremental dam- limit design premise in the recalibration process of the design age index. Fracture damage is computed on a grid basis with methodology or software. In other words, Version 0.9
assumes depth for each month within the analysis or design period. that any tensile strain in the HMA layer induces some fac- Temperatures are computed with the Integrated Climatic ture damage. Two
types of load-related cracking are pre- Model at specific depth intervals for each hour of the day. dicted for designing flexible pavements in accordance with These temperatures are then grouped into
five averages at the MEPDG--alligator cracking and longitudinal cracking in each depth interval for each month. The fatigue cracking the wheel path. Alligator cracking, the more common crack-
(alligator cracking) equation is used to calculate the amount ing distress used in design, is assumed to initiate at the bottom of fracture damage for each depth interval and month. The of the HMA
layer. These cracks propagate to the surface with monthly damage indices are then summed over time to predict continued fracture damage accumulation. Longitudinal crack- the area of fatigue cracking
at each depth interval. 8-kip Load 14-kip Load 18-kip Load 22-kip Load 28-kip Load 34-kip Load Endurance Limit 300 Maximum Tensile Strain, 250 micro-strain 200 150 100 50 0 6 8 10 12 14 16 18 20 HMA
Thickness, inches Figure 6.4. Increasing maximum tensile strains for varying single- axle loads for different HMA thicknesses (HMA dynamic modulus equals 450 ksi; equivalent annual modulus).
|
{"url":"http://www.nap.edu/openbook.php?record_id=14360&page=59","timestamp":"2014-04-19T07:58:05Z","content_type":null,"content_length":"54437","record_id":"<urn:uuid:4b6d42f5-b232-4bef-b9ea-02f6ea71989e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts about math on Gas station without pumps
Sam Shah and other math bloggers have started a challenge to encourage more math-teacher blogging Mission #1: The Power of The Blog | Exploring the MathTwitterBlogosphere:
You are going to write a blog post on one of the following two prompts:
□ What is one of your favorite open-ended/rich problems? How do you use it in your classroom? (If you have a problem you have been wanting to try, but haven’t had the courage or opportunity to
try it out yet, write about how you would or will use the problem in your classroom.)
□ What is one thing that happens in your classroom that makes it distinctly yours? It can be something you do that is unique in your school… It can be something more amorphous… However you want
to interpret the question! Whatever!
I’m not a math teacher blogger—looking back over my posts for the past couple of years, I only see a few that are really about math education:
I use math all the time in my classes (complex numbers, trigonometry, and calculus in the Applied Circuits class; probability and Bayesian statistics in the bioinformatics classes), and I do reteach
the math the students need, as I find that few students have retained working knowledge of the math that they need. But it has been quite a while since I taught a class in which math education was
the primary goal (Applied Discrete Math, in winter 1998).
So I fell a little like an imposter participating in this blogging exercise with the math teacher bloggers.
I don’t have any “favorite” open-ended or rich problems. Most of the problems that I given in my classes have a heavy engineering design component, in either the circuits course or the
bioinformatics courses. Any good engineering design problem is an open-ended, rich problem. If I had to pick a favorite right now, it would be from my circuits class: either the EKG lab (look for
many posts about the design of that lab in the Circuits Course Table of Contents) or the class-D power amplifier (see Class-D power amp lab went smoothly and other posts). But these are not the sort
of “open-ended” problems that the MathTwitterBlogosphere seem to be interested in—the engineering design constraints that make the problems interesting are too restrictive for them, and a lot of them
prefer videos to text (for reasons that seem to me to be based mainly on assumptions of the functional illiteracy of their students, though a few times a sounder justification is given). In any
event, I doubt that any of the problems that I give to students would be appealing to math teachers, so they are not really germane to the MathTwitterBlogosphere challenge that Sam Shah put out.
It is hard to say what I do as a teacher that is “unique”. It is not a goal for me to be a unique teacher—I’d like to see more teachers doing some of the things I do, like reading student work
closely and providing detailed feedback, or designing engineering courses around doing engineering design.
I may be unique in the School of Engineering in how much emphasis I put on students writing well, and how much effort I put into trying to get them to do so. I created a tech writing course for the
computer engineers and scientists back in 1987 and taught it until 2000. More recently, I have provided many bioengineering students feedback on their senior theses, reading and giving detailed
feedback on five drafts from each student in 10 weeks. In my bioinformatics classes, I read the students’ programs very closely, commenting on programming style and the details of the in-program
documentation—these things matter, but students get very little feedback on them in other classes. In the circuits course, I require detailed design reports for each of the 10 weekly assignments
(though I encourage students to work in pairs for the labs and reports). I evaluate the students almost as much on their writing as on their designs—engineers who can’t write up their design
decisions clearly is pretty useless in the real world.
I’ve not done much about math writing, though a good class on mathematical writing (using Halmos’s How to Write Mathematics) would be a great thing for the university to teach. I have blogged before
about writing in math classes, in my post Out In Left Field: Two ways to ensure learning, which is a response to a post by Katherine Beals: Two ways to ensure learning. In my post, I distinguished
between writing mathematics and the sort of mushy writing about mathematics that many high school teachers favor these days.
Centering engineering courses on doing engineering design is a very important thing, but it is not a unique contribution—I’m not the only professor in the School of Engineering who puts the lab
experience at the center of a course design. Gabriel Elkaim’s Mechatronics course is a good example, as are most (all?) of the lab courses that Steve Petersen teaches. In think that, in general, the
Computer Engineering department does a good job of highlighting design in their courses, as does the Game Design major. I just wish that more of the engineering classes did—especially those where it
is much easier just to teach the underlying science and hope that students pick up the engineering later.
At the end of this post, I’m feeling the lack of a good conclusion—I don’t have any open-ended problems to share with math teachers, and I don’t have anything really unique about my teaching that
will make math teachers want to emulate me. I just hope that even a weak contribution to “Mission 1″ is useful, if only to make other participants feel better about their contributions.
Hard Math for Elementary School
Hard Math for Elementary School by Glenn Ellison: (ISBN 9781489507174) looks like a book I could have used with my son about 8 years ago (too bad it was just published a couple of months ago).
The premise is simple: it is a math enrichment textbook, intended to take the topics of elementary school math deeper for gifted students.
The presentation is good, but the students will have to be reading at the 4th grade level as well as doing math at that level to get much out of the book. This is not a flashy book with lots of
illustrations and stories—it is just cool math, presented as cool math.
Disclaimer: I don’t have a copy of the book, and I haven’t read much of it. I used Amazon’s “Look Inside” feature to look at the table of contents and a couple of pages, and saw such gems as
calculation tricks for computing some squares (like 55^2=3025) quickly. (The trick is based on $(x+y)(x-y)=x^2-y^2$, but the author wisely avoids algebraic notation.)
Reviews from people who have looked at it in more detail can be found at http://albanyareamathcircle.blogspot.com/2013/05/recommended-summer-reading-for-young.html and http://
Glenn also has a book for middle school students: Hard Math for Middle School
Conjecture about gnuplot difficulties for students
A number of the students in my circuits class have been having difficulty with gnuplot, but did not turn in their gnuplot scripts with their lab reports, so I could not examine the scripts to look
for misunderstandings.
Debugging class lessons is much harder than debugging programs, because the computers are much more deterministic, and I can usually add extra print statements to narrow down where things are going
wrong. I can’t seem to get the students who are having difficulty to show me what they can and can’t do, which makes figuring out what to explain to them difficult. There are dozens of things that
they might be hung up on, and shotgun techniques where I tried to cover lots of them have not been very successful.
I have a conjecture about what is causing the students greater difficulty than I expected: scope of variables. These students have had lots of math classes, but few or no programming classes, and
their math skills seem to be mainly formal manipulation. They have probably not been exposed much to the notion that the same variable may have different values in different contexts, or that
variables need to be given different names if they are to be used in the same context. Math classes either always use the same variable (all derivatives with respect to x, for example) or pick new
variables for each equation. It is rare for math classes to use the same function repeatedly with different parameters, and plot that function several times on the same graph. It shouldn’t be rare,
but it seems to be.
I was taking scope of variables for granted in my presentations of gnuplot, since all my previous students had already been well exposed to the idea. (Well, maybe not the 5th graders I taught Scratch
to, but Scratch has a simple scope notion, tying variables to sprites that are the visible objects the students are manipulating, so the problem is finessed.)
The scope rules for gnuplot are actually fairly subtle, as there are expressions that are only applicable in connection with a particular data file (like the $1 and $2 column variables), there are
variables whose scope is just the “plot” or “fit” statement (like “x”), there are variables whose scope is a function definition (the parameters of the function), and there are variables whose scope
is global (the parameters being fit in a “fit” statement, for example). I never explained these scope rules to the students, and the tutorials on gnuplot don’t do so either, since they are all
written by programmers for whom the scope of variables is “obvious” and doesn’t need much explanation.
How will I test whether this is indeed the sticking point for the students who are having trouble? What exercises or explanations can I give them to get them to understand scope well enough? The
course is not a programming course, so the usual approaches of programming courses are probably not appropriate. This pedagogical problem requires more thought—suggestions are welcome.
A probability question
Sam Shah, one of the math teacher bloggers that I read, posted a bioinformatics-related question on A biology question that is actually a probability question « Continuous Everywhere but
Differentiable Nowhere:
Let’s say you have a sequence of 3 billion nucleotides. What is the probability that there is a sequence of 20 nucleotides that repeats somewhere in the sequence? You may assume that there are 4
nucleotides (A, C, T, G) and when coming up with the 3 billion nucleotide sequence, they are all equally likely to appear.
This is the sort of combinatorics question that comes up a lot in building null models for bioinformatics, when we want to know just how weird something we’ve found really is.
Of course, we usually end up asking for the expected number of occurrences of a particular event, rather than the probability of the event, since expected values are additive even when the events
aren’t independent. So let me change the problem to
In a sequence of N bases (independent, uniformly distributed), what is the expected number of k-mers (k≪N). Plug in N=3E9 and k=20.
The probability that any particular k-mer occurs in a particular position is 4^-k, so the expected number of occurrences of that k-mer is N/4^k, or about 2.7E-3 for the values of N and k given. Oops,
we should count both strands, so double that to 5.46E-3.
When the expected number is that small, we can use it equally well as the probability of there being one or more such k-mers. (Note: this assumes 4^k ≫ N.)
Now let’s look at each k-mer that actually occurs (all 2N of them), and estimate how many other k-mers match. There are roughly 2N/4^k for each (we can ignore little differences like N vs. N-1), so
there are 4 N^2/4^k total pairs. But we’ve counted each pair twice, so the expected number of pairs is only 2 N^2/4^k, which is 16E6 for N=3E9 and k=20.
We have to take k up to about 32 before we get expected numbers below 1, and up to about 36 before having a repetition is surprising in a uniform random stream.
Math joke
My son explained to me why so few high schools teach complex numbers nowadays. They’re focused on preparing kids for careers, and the field is closed.
|
{"url":"http://gasstationwithoutpumps.wordpress.com/tag/math/","timestamp":"2014-04-19T02:10:57Z","content_type":null,"content_length":"92542","record_id":"<urn:uuid:8aaafcab-9379-485c-a899-0ec3f675e826>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fast polynomial factorization and modular composition
when quoting this document, please refer to the following
URN: urn:nbn:de:0030-drops-17771
URL: http://drops.dagstuhl.de/opus/volltexte/2008/1777/ Kedlaya, Kiran ; Umans, Christopher
Fast polynomial factorization and modular composition
We obtain randomized algorithms for factoring degree $n$ univariate polynomials over $F_q$ requiring $O(n^{1.5 + o(1)} log^{1+o(1)} q+ n^{1 + o(1)}log^{2+o(1)} q)$ bit operations. When $log q < n$,
this is asymptotically faster than the best previous algorithms (von zur Gathen & Shoup (1992) and Kaltofen & Shoup (1998)); for $log q ge n$, it matches the asymptotic running time of the best known
algorithms. The improvements come from new algorithms for modular composition of degree $n$ univariate polynomials, which is the asymptotic bottleneck in fast algorithms for factoring polynomials
over finite fields. The best previous algorithms for modular composition use $O(n^{(omega + 1)/2})$ field operations, where $omega$ is the exponent of matrix multiplication (Brent & Kung (1978)),
with a slight improvement in the exponent achieved by employing fast rectangular matrix multiplication (Huang & Pan (1997)). We show that modular composition and multipoint evaluation of multivariate
polynomials are essentially equivalent, in the sense that an algorithm for one achieving exponent $alpha$ implies an algorithm for the other with exponent $alpha + o(1)$, and vice versa. We then give
two new algorithms that solve the problem optimally (up to lower order terms): an algebraic algorithm for fields of characteristic at most $n^{o(1)}$, and a nonalgebraic algorithm that works in
arbitrary characteristic. The latter algorithm works by lifting to characteristic 0, applying a small number of rounds of {em multimodular reduction}, and finishing with a small number of
multidimensional FFTs. The final evaluations are reconstructed using the Chinese Remainder Theorem. As a bonus, this algorithm produces a very efficient data structure supporting polynomial
evaluation queries, which is of independent interest. Our algorithms use techniques which are commonly employed in practice, so they may be competitive for real problem sizes. This contrasts with all
previous subquadratic algorithsm for these problems, which rely on fast matrix multiplication. This is joint work with Kiran Kedlaya.
BibTeX - Entry
author = {Kiran Kedlaya and Christopher Umans},
title = {Fast polynomial factorization and modular composition},
booktitle = {Computational Complexity of Discrete Problems },
year = {2008},
editor = {Peter Bro Miltersen and R{\"u}diger Reischuk and Georg Schnitger and Dieter van Melkebeek},
number = {08381},
series = {Dagstuhl Seminar Proceedings},
ISSN = {1862-4405},
publisher = {Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Germany},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2008/1777},
annote = {Keywords: Modular composition; polynomial factorization; multipoint evaluation; Chinese Remaindering}
Keywords: Modular composition; polynomial factorization; multipoint evaluation; Chinese Remaindering
Seminar: 08381 - Computational Complexity of Discrete Problems
Issue date: 2008
Date of publication: 11.12.2008
DROPS-Home | Fulltext Search | Imprint
|
{"url":"http://drops.dagstuhl.de/opus/volltexte/2008/1777/","timestamp":"2014-04-17T06:47:45Z","content_type":null,"content_length":"10212","record_id":"<urn:uuid:7dce8798-bb20-4548-9d5f-f140fed89e76>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Jersey Village, TX ACT Tutor
Find a Jersey Village, TX ACT Tutor
...As a violin and viola teacher, I have my students use ear training all the time in their lessons. As a performing musician, I have to use my ear training skills on a daily basis. I have been a
musician since I started playing violin when I was four years old.
35 Subjects: including ACT Math, reading, English, accounting
...I'm a recent graduate of Texas A&M and received my degree in industrial engineering. I'm an experienced tutor and will effectively teach all subjects in a way that is easily understood. I
specialize in tutoring math (elementary math, geometry, prealgebra, algebra 1 & 2, trigonometry, precalculus, etc.), Microsoft Word, Excel, PowerPoint, and VBA programming.
17 Subjects: including ACT Math, reading, calculus, geometry
...My approach in working with you on algebra 1 and algebra 2 is first to assess your familiarity and comfort with basic concepts, and explain and clarify the ones you need some improvement on;
and then to work on the specific areas of your assignments, such as solving equations with radicals or gra...
20 Subjects: including ACT Math, writing, algebra 1, algebra 2
...Tests are boring, but working with me, you'll never be bored. I studied English and Political Science as an undergraduate at University of San Diego and English as a graduate student at Rice
University, making me well qualified to tutor English, reading, writing, AP Literature, AP Language, AP G...
22 Subjects: including ACT Math, English, college counseling, ADD/ADHD
...From geology to meteorology, geology, physics, biology, chemistry, cosmology, etc. I have a grasp of the basics of most all elementary sciences. I have a bachelors and a masters degree in civil
engineering from Texas A&M, the school that educates the majority of the civil engineers in the state of Texas.
37 Subjects: including ACT Math, chemistry, geometry, physics
|
{"url":"http://www.purplemath.com/Jersey_Village_TX_ACT_tutors.php","timestamp":"2014-04-20T13:39:43Z","content_type":null,"content_length":"24129","record_id":"<urn:uuid:fc2fcf2e-b3fb-4c3f-a22c-7e191d97dfe9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00565-ip-10-147-4-33.ec2.internal.warc.gz"}
|
TankSpot Calculator
07-24-2007, 07:06 PM
TankSpot Calculator
The TankSpot Calculator is a quick way of getting information that relates to Effective Health and Shield Slam damage. It is a simple input/output calculator. It is best used to help make gear
choices when you are unsure whether Stamina or Armor would help you more; an example might be comparing your stats with Azure Shield of Coldarra against your stats with Crest of Sha'Tar (hint:
Sha'tar loses in almost all comparisons). In the future, it may be used to index mobs and bosses to make gearing choices easier.
How do I determine my Block Value?
There are three ways to do this.
First, go fight a nearby creature and use Shield Block. Then, check your Combat Log for a specific amount of damage blocked. If no number is shown, find a creature that hits harder -- such as a
creature in an instance or a Fel Reaver in Hellfire Peninsula
Second, add up your Shield Block Value via your in-game Character Pane or your Armory entry. If you choose to do this, do not forget both the base value and bonus value of your shield itself.
Then, divide your Strength by 20 and add that number to the total. Finally, multiply your Block Value by 1.3 and subtract 1 from the total. Enter this number.
Third, download Whitetooth's Tankpoints mod, which will display it conveniently on-screen.
What about Enemy Levels?
Effective Health is most useful as a tool for dealing with raid bosses. Nearly all raid bosses are considered to be level 73 Elite.
You may adjust your Effective Health for any creature down to level 60. However, the calculator is not designed to deal with levels 1-59, as Armor calculations in that range use a different
What is Effective Health?
Read the Effective Health article here at TankSpot. It explains in detail the Effective Health theory.
Is this my full Effective Health?
No! You gain additional Effective Health via Shield Block. Unfortunately, this can't be quantified without further information. Read the addendum in Effective Health.
What is CCS Rating?
This is a quick rating system intended to help determine benchmarks for raid content. For instance, a CCS rating of 3.5 might be good to reach prior to raiding Karazhan.
How is Average Damage determined?
This is a very simple calculation for average Shield Slam damage upon a successful strike. This average is only an average of the range of a normal Shield Slam hit, not an average that includes
Crit or Miss calculations. All calculations are done before Armor is factored.
What all counts in Additional TPS?
Additional Threat-Per-Second, or TPS, is determined under the assumption that you will use Shield Slam every 6 seconds. It factors in both Defiance and Defensive Stance to produce a TPS average.
It includes the full damage of an attack, including base damage. Like Average Damage, it does not include Crit or Miss calculations.
What is Percent of Record?
The Percent of Record score is a percentage of the current known record for Shield Slam, 7318. Find more information at shieldslam.com.
08-17-2007, 10:27 PM
CCS Rating Question
For the CCS Rating, is the calculation assuming the stats given are unbuffed or with full raid buffs?
08-17-2007, 11:08 PM
Assuming unbuffed.
08-18-2007, 05:55 AM
Is there a CSS chart or some basic values that have been calculated for entering SSC, TK, Hyjal, BT? Think it would be interesting if a CSS value was established for each boss.
|
{"url":"http://www.tankspot.com/printthread.php?t=31863&pp=20&page=1","timestamp":"2014-04-21T01:00:44Z","content_type":null,"content_length":"7481","record_id":"<urn:uuid:7d0e6d6a-ac11-4256-b2b8-77ecc706ce8d>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This Article
Bibliographic References
Add to:
Space-Variant Fourier Analysis: The Exponential Chirp Transform
October 1997 (vol. 19 no. 10)
pp. 1080-1089
ASCII Text x
Giorgio Bonmassar, Eric L. Schwartz, "Space-Variant Fourier Analysis: The Exponential Chirp Transform," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 10, pp.
1080-1089, October, 1997.
BibTex x
@article{ 10.1109/34.625108,
author = {Giorgio Bonmassar and Eric L. Schwartz},
title = {Space-Variant Fourier Analysis: The Exponential Chirp Transform},
journal ={IEEE Transactions on Pattern Analysis and Machine Intelligence},
volume = {19},
number = {10},
issn = {0162-8828},
year = {1997},
pages = {1080-1089},
doi = {http://doi.ieeecomputersociety.org/10.1109/34.625108},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
RefWorks Procite/RefMan/Endnote x
TY - JOUR
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
TI - Space-Variant Fourier Analysis: The Exponential Chirp Transform
IS - 10
SN - 0162-8828
EPD - 1080-1089
A1 - Giorgio Bonmassar,
A1 - Eric L. Schwartz,
PY - 1997
KW - Logpolar mapping
KW - rotation scale and shift invariance
KW - attention
KW - space-variant image processing
KW - Fourier analysis
KW - nonuniform sampling
KW - real-time imaging
KW - warped template matching.
VL - 19
JA - IEEE Transactions on Pattern Analysis and Machine Intelligence
ER -
Abstract—Space-variant, or foveating, vision architectures are of importance in both machine and biological vision. In this paper, we focus on a particular space-variant map, the log-polar map, which
approximates the primate visual map, and which has been applied in machine vision by a number of investigators during the past two decades. Associated with the log-polar map, we define a new linear
integral transform, which we call the exponential chirp transform. This transform provides frequency domain image processing for space-variant image formats, while preserving the major aspects of the
shift-invariant properties of the usual Fourier transform. We then show that a log-polar coordinate transform in frequency (similar to the Mellin-Transform) provides a fast exponential chirp
transform. This provides size and rotation, in addition to shift, invariant properties in the transformed space. Finally, we demonstrate the use of the fast exponential chirp algorithm on a database
of images in a template matching task, and also demonstrate its uses for spatial filtering. Given the general lack of algorithms in space-variant image processing, we expect that the fast exponential
chirp transform will provide a fundamental tool for applications in this area.
[1] E.L. Schwartz, "Computational Studies of the Spatial Architecture of Primate Visual Cortex: Columns, Maps, and Protomaps," Primary Visual Cortex in Primates, A. Peters and K. Rocklund, eds., vol.
10, Cerebral Cortex. Plenum Press, 1994.
[2] C.F. Weiman and G. Chaikin, "Logarithmic Spiral Grids for Image-Processing and Display," Computer Graphics and Image Processing, vol. 11, pp. 197-226, 1979.
[3] D. Asselin and H.H. Arsenault, "Rotation and Scale Invariance With Polar and Log-Polar Coordinate Transformations," Optics Comm., vol. 104, pp. 391-404, Jan. 1994.
[4] J.K. Brousil and D.R. Smith, "A Threshold-Logic Network for Shape Invariance," IEEE Trans. Computers, vol. 16, pp. 818-828, 1967.
[5] D. Casasent and D. Psaltis, "Position, Rotation and Scale-Invariant Optical Correlation," Applied Optics, vol. 15, pp. 1,793-1,799, 1976.
[6] B.R. Frieden and C. Oh, "Integral Logarithmic Transform: Theory and Applications," Applied Optics, vol. 31, no. 8, pp. 1,138-1,145, Mar. 1992.
[7] A.S. Rojer and E.L. Schwartz, "Design Considerations for a Space-Variant Visual Sensor With Complex-Logarithmic Geometry," Proc. Int'l Conf. Pattern Recognition, ICPR-10, vol. 2, pp. 278-285,
[8] G. Sandini, F. Bosero, F. Bottino, and A. Ceccherini, "The Use of an Antropomorphic Visual Sensor for Motion Estimation and Object Tracking," Proc. OSA Topical Meeting Image Understanding and
Machine Vision, 1989.
[9] J. van der Spiegel, F. Kreider, C. Claiys, I. Debusschere, G. Sandini, P. Dario, F. Fantini, P. Belluti, and G. Soncini, "A Foveated Retina-Like Sensor Using CCD Technology," Analog VLSI
Implementations of Neural Networks, C. Mead and M. Ismail, eds. Boston: Kluwer, 1989.
[10] R.D. Juday, "Log-Polar Dappled Target," Optics Letters, vol. 20, no. 21, pp. 2,234-2,236, Nov. 1995.
[11] G. Engel, D. Greve, J. Lubin, and E. Schwartz, "Space-Variant Active Vision and Visually Guided Robotics: Design and Construction of a High-Performance Miniature Vehicle," Proc. Int'l Conf.
Pattern Recognition, ICPR-12, 1994.
[12] G. Bonmassar and E. Schwartz, "Geometric Invariance in Space-Variant Vision," Proc. Int'l Conf. Pattern Recognition, ICPR-12, 1994.
[13] D.F. Elliot and K.R. Rao, Fast Transforms: Algorithms, Analyses, Applications.New York: Academic Press, 1982.
[14] P. Cavanagh, "Size and Position Invariance in the Visual System," Perception, vol. 7, pp. 167-177, 1978.
[15] E.L. Schwartz, "Spatial Mapping in Primate Sensory Projection: Analytic Structure and Relevance to Perception," Biological Cybernetics, vol. 25, pp. 181-194, 1977.
[16] P. Kellman and J.W. Goodman, "Coherent Optical Implementation of 1-D Mellin Transforms," Applied Optics, vol. 16, pp. 2,609-2,610, 1977.
[17] Y. Sheng and H.H. Arsenault, "Experiments on Pattern Recognition Using Invariant Fourier-Mellin Descriptors," J. Optical Soc. Am., A, vol. 3, pp. 771-776, 1986.
[18] A. Papoulis, "Error Analysis in Sampling Theory," Proc. IEEE, vol. 54, pp. 947-955, July 1966.
[19] D.C. Stickler, "An Upper Bound on Aliasing Error," Proc. IEEE, vol. 55, pp. 418-419, 1967.
[20] J.L. Brown Jr., "A Least Upper Bound for Aliasing Error," IEEE Trans. Automatic Control, vol. 13, pp. 754-755, 1968.
[21] A.W. Splettstosser, "Error Estimates for Sampling Approximation of Non-Bandlimited Functions," Math. Methods in the Applied Sciences, vol. 1, pp. 127-137, 1979.
[22] H.S. Shapiro and R.A. Silverman, "Alias-Free Sampling of Random Noise," J. Soc. Industrial and Applied Math., vol. 8, no. 2, pp. 225-249, June 1960.
[23] F.J. Beutler, "Alias-Free Randomly Timed Sampling of Stochastic Processes," IEEE Trans. Information Theory, vol. 16, no. 2, pp. 147-152, Mar. 1970.
[24] A.J. Jerry, "The Shannon Sampling Theorem—Its Various Extensions and Applications: A Tutorial Review," Proc. IEEE, vol. 65, no. 11, pp. 1,565-1,596, Nov. 1977.
[25] B. Van Der Pol, "The Fundamental Principles of Frequency Modulation," J. IEE (London), vol. 93, pt. 3, no. 23, pp. 153-158, May 1946.
[26] A. Papoulis, Signal Analysis. McGraw-Hill, 1977.
[27] H.P. Kramer, "A Generalized Sampling Theorem," J. Math. Physics, vol. 38, pp. 68-72, 1959.
[28] J.J. Clark, M.R. Palmer, and P.D. Lawrence, "A Transformation Method for the Reconstruction of Functions From Nonuniformly Spaced Samples," IEEE Trans. Acoustics, Speech, and Signal Processing,
vol. 33, no. 4, p. 1,151, Oct. 1985.
[29] H. Stark, "Sampling Theorems in Polar Coordinates," J. Optical Soc. Am., vol. 69, no. 11, pp. 1,519-1,525, Nov. 1979.
[30] C.F.R. Weiman, "Video Compression via Log-Polar Mapping," Proc. SPIE Symp. OE-Aereospace Sensing, Apr. 1990.
[31] B. Fischl, M. Cohen, and E.L. Schwartz, "The Local Structure of Space-Variant Images," Neural Networks, vol. 10, no. 5, pp. 815-831, 1997.
[32] G. Bonmassar and E. Schwartz, "Lie Groups, Space-Variant Fourier Analysis and the Exponential Chirp Transform," Proc. Computer Vision and Pattern Recognition 96, pp. 229-237, 1996.
[33] P.J. Burt and E.H. Adelson, “The Laplacian Pyramid as a Compact Image Code,” IEEE Trans. Comm., vol. 31, no. 4, pp. 532-540, 1983.
[34] S.L. Tanimoto, "Template Matching in Pyramids," Computer Vision, Graphics, and Image Processing, vol. 16, pp. 356-369, 1981.
[35] S.G. Mallat,“A theory for multiresolution signal decomposition: The wavelet representation,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674-693, 1989.
[36] J.G. Daugman, "Two-Dimensional Spectral Analysis of Cortical Receptive Field Profile," Vision Research, vol. 20, pp. 847-856, 1980.
[37] J.G. Daugman, “Complete Discrete 2D Gabor Transforms by Neural Networks for Image Analysis and Compression,” IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 36, no. 7, 1988.
[38] R. Wallace, P.-W. Ong, B. Bederson, and E. Schwartz, "Space Variant Image Processing," Int'l J. Computer Vision, vol. 13, no. 1, pp. 71-90, 1994.
[39] E.L. Schwartz, "Image Processing Simulations of the Functional Architecture of Primate Striate Cortex," Investigative Ophthalmic and Vision Research (Supplement), vol. 26, no. 3, pp. 164, 1985.
[40] G. Bonmassar, "The Exponential Chirp Transform," PhD Thesis, Biomedical Eng. Dept., Boston Univ., 1997.
[41] R.L. DeValois and K.K. DeValois, Spatial Vision. Oxford Univ. Press, 1988.
[42] J.A. Nelder and R. Mead, "A Simplex Method for Function Minimization," Computer J., vol. 7, pp. 308-313.
[43] J.-C. Liu and H.-C. Chiang, "Fast High-Resolution Approximation of the Hartley Transform at Arbitrary Frequency Estimator," Signal Processing, vol. 44, no. 2, pp. 211, June 1995.
[44] Jose A. Ferrari, "Fast Hankel Transform of Order Zero," J. Optical Soc. Am., A, vol. 12, no. 8, pp. 1,812, Aug. 1995.
[45] J. Strain, "A Fast Laplace Transform Based on Laguerre Functions," Math. Computation, vol. 58, no. 197, pp. 275-283, Jan. 1992.
[46] L. Greengard and J. Strain, "The Fast Gauss Transform," SIAM J. Scientific Statistical Computing, vol. 12, no. 1, pp. 79-94, Jan. 1991.
[47] J.F. Yang, S.C. Shaih, and B.L. Bai, "Fast Two-Dimensional Inverse Discrete Cosine Transform for HDTV or Videophone Systems," IEEE Trans. Consumer Electronics, vol. 39, no. 4, pp. 934-940, Nov.
[48] B.T. Kelly and V.K. Madisetti, "The Fast Discrete Radon Transform—I: Theory," IEEE Trans. Image Processing, vol. 2, no. 3, pp. 382-400, July 1993.
[49] C.M. Rader, "Discrete Fourier Transform When the Number of Data Samples is Prime," Proc. IEEE, vol. 56, no. 6, pp. 1,107-1,108, June 1968.
[50] J.H. McClellan and C.M. Rader, Number Theory in Digital Signal Processing.Englewood Cliffs, N.J.: Prentice-Hall, 1979.
Index Terms:
Logpolar mapping, rotation scale and shift invariance, attention, space-variant image processing, Fourier analysis, nonuniform sampling, real-time imaging, warped template matching.
Giorgio Bonmassar, Eric L. Schwartz, "Space-Variant Fourier Analysis: The Exponential Chirp Transform," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 10, pp. 1080-1089,
Oct. 1997, doi:10.1109/34.625108
Usage of this product signifies your acceptance of the
Terms of Use
|
{"url":"http://www.computer.org/csdl/trans/tp/1997/10/i1080-abs.html","timestamp":"2014-04-19T03:15:50Z","content_type":null,"content_length":"62464","record_id":"<urn:uuid:82903b60-92a3-4b1d-9f53-7312a1daeb5c>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cryptology ePrint Archive: Report 2009/460
Asymptotic enumeration of correlation-immune boolean functionsE. Rodney Canfield and Zhicheng Gao and Catherine Greenhill and Brendan D. McKay and Robert W. RobinsonAbstract: A boolean function of n
boolean variables is correlation-immune of order k if the function value is uncorrelated with the values of any k of the arguments. Such functions are of considerable interest due to their
cryptographic properties, and are also related to the orthogonal arrays of statistics and the balanced hypercube colourings of combinatorics. The weight of a boolean function is the number of
argument values that produce a function value of 1. If this is exactly half the argument values, that is, 2^{n-1} values, a correlation-immune function is called resilient.
An asymptotic estimate of the number N(n,k) of n-variable correlation-immune boolean functions of order k was obtained in 1992 by Denisov for constant k. Denisov repudiated that estimate in 2000, but
we will show that the repudiation was a mistake.
The main contribution of this paper is an asymptotic estimate of N(n,k) which holds if k increases with n within generous limits and specialises to functions with a given weight, including the
resilient functions. In the case of k=1, our estimates are valid for all weights. ~
Category / Keywords: boolean functionsDate: received 17 Sep 2009Contact author: csg at unsw edu auAvailable format(s): PDF | BibTeX Citation Version: 20090920:173218 (All versions of this report)
Discussion forum: Show discussion | Start new discussion[ Cryptology ePrint archive ]
|
{"url":"http://eprint.iacr.org/2009/460","timestamp":"2014-04-17T13:13:06Z","content_type":null,"content_length":"2877","record_id":"<urn:uuid:4da2a9ae-f9c3-4b5a-a223-b58e42b8de62>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
|
FOM: "Was sind und was sollen die Zahlen"
Vladimir Sazonov sazonov at logic.botik.ru
Tue Nov 11 16:38:23 EST 1997
Let me finish description of some aspects of formalizing
Feasible Arithmetic. (This is a third message on this subject to
FOM list.) I hope that after reading the following part the idea
of this formalization will become sufficiently clear.
In a previous posting I presented argumentation for restricting
abbreviation mechanisms which are usually involved in
mathematical proofs. Otherwise it is hardly possible to
formalize feasible numbers. Note again, that all the details are
contained in my paper "On Feasible Numbers", LNCS, Vol.960
(http://www.botik.ru/~logic/SAZONOV/papers.html). Therefore now
I will give only some very short and schematic notes.
First, I claim that the ordinary FOL (formulated either as
Hilbert or Gentzen style Calculus or as Natural Deduction)
actually involves some abbreviating tool (besides those which we
may and actually use additionally). Best of all this could be
seen in Natural Deduction rules for existential quantification.
Specifically, the corresponding introduction rule
\exists x A(x)
actually serves as a rule of introduction of a name
(abbreviation) for a term t. This name is just the quantified
variable x. Elimination rule
[A(x)] | If (\exists x A(x)) is proved,
. | we may temporarily assume A(x)
. | and continue proving the goal C.
. |
\exists x A(x) C |
-------------------, |
C |
allows to *use* the abbreviation x introduced in previous rule
when the second follows, say, immediately after the first. Let
us consider this situation as *non-normal* and try to impose a
requirement that all the inferences should be *normal*. The
simplest way to do this is using *traditional notion of normal
inference* which is well known in proof theory. We also may
consider some more liberal notions of normality. (Details are
omitted.) The main idea is to abandon using (or to use only in a
properly restricted way) abbreviations of terms.
This is required because terms are dealing with *objects* of the
`intended' (possibly very vague) model such as Feasible Numbers.
Therefore such abbreviations may have too strong and undesirable
influence on the nature of (objects of) this model. More
properly speaking, abbreviations may actually play the role of
axioms (on existence of big, imaginary numbers like 2^1000)
which we did not assume to impose on the model. (E.g., if your
intention is to consider a theory of Hereditarily-Finite sets,
why to include non-deliberately Infinity Axiom, may be in some
hidden form?)
On the other hand, abbreviations of formulas and of proofs (if
they do not involve someway abbreviations of terms) are still
allowed and even seem desirable as in the traditional
mathematics. As I mentioned above, some kind of term
abbreviations may be also quite safe.
Unfortunately (or may be quite happily?) the above mentioned
normality restrictions (which not necessary coincide with the
traditional notion of normality) simultaneously restrict using
Modus Ponens rule because it can make inference to involve
*implicit* abbreviations. (Existentiality introduction rule may
become followed in a reasonable sense by corresponding
existentiality elimination.) So, it is possible that modus
ponens applied to normal proofs will give non-normal one. As
usual, we may prove (by some complicated induction argument)
normalizability *meta*theorem. However, we know that the cost
of normalization may be extremely high (just of non-elementary
complexity as it was shown by Orevkov and Statman). We cannot
guarantee that the resulting inference will be feasible. I would
say differently: After normalization we will possibly get only
an *imaginary inference*. All of this means that modus ponens is
applicable, but with some cost. Moreover, sometimes the cost is
(practically) infinitely high. We have the same problems with
*transitivity of implication*. (I remember some notes of
Poincare on postulating transitivity *despite* it fails in the
real world.)
Now I present several *provable facts* in Feasible Arithmetic to
demonstrate some unexpected effects.
It is possible to *formally define* two initial parts S and M of
the `set' F of all numbers of Feasible Arithmetic. Intuitively,
S are `small numbers'and M are `middle numbers' or numbers
laying before a `horizon' (Vopenka's informal notion; also note,
that there was a popular lecture by Kolmogorov where he
discussed very informally notions of small, middle, and big
numbers). Further, 0\in S, 0\in M and there are *proper*
S \subset {0,...,10} \subset M \subset {0,...,1000} \subset F
\subset {0,...,2^1000}.
There exists the biggest small number. It is 5 or 6 or 7 and we
cannot decide which one exactly, because the underlying logic is
(essentially) classical.
Moreover, *there exists no biggest number* in M (M is *closed
under successor*), and this is not a contradiction!.
Any attempt to prove that 1000 is in M (say, by applying modus
ponens 1000 times) will not succeed because the resulting proof
will be non-normal and its normal form is non-feasible. The
bigger is a numeral n, the harder is to prove that n \in M and
this is practically impossible to prove for n=1000.
It looks very strange, but simultaneously we can prove
(non-constructively) that M *has the last number*!
This is not a contradiction with the fact that M is closed under
successor because we can get a contradiction from any A and ~A
(defined as A -> falsity) only by modus ponens which is too
difficult to `apply' in our case.
We may define *feasible real numbers* between 0 and 1 in binary
notation as sequences like 0.011001111100100101... containing
the `middle' number of digits and defined as maps of the type
M -> {0,1}. (We may represent such maps as restrictions to M of
maps {0,...,1000} -> {0,1}.) This way we are getting the notion
of *feasible continuum* which is BOTH *continuous* (because
there is no last digit in such real numbers) AND *discrete*
(because simultaneously there should be the last digit).
This seems to me promising. Is not this a possibility for a new
*Feasible Non-Standard Continuous&Discrete Analysis* which would
be `good' for computers and probably for physics? Note, that in
contrast to the ordinary Robinson's approach `non-standard'
features arise here at the very beginning by the very nature of
(feasible) natural numbers considered.
Of course, this is a simplest possible version of formal
feasibility theory. It may be considered as something like a
curious or just as an exercise for estimating the cost of
cut-elimination/normalization. It is necessary (and hopefully
possible) to do much-much more work to get a theory which will
be technically more useful and interesting. Nevertheless, even
this theory seems to illuminate something essential on
feasibility. At least, instead of fantasizing on feasibility or
Heap Paradox, etc. we can try to *prove* some theorems or
metatheorems. I think this is more fruitful because it is governed by
some kind of formal logic. Finally note, that this approach to
vagueness differs essentially from the well-known considerations
on fuzzy logic, etc. It also may be considered as some kind of
feasible complexity theory.
Vladimir Sazonov
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/1997-November/000218.html","timestamp":"2014-04-20T01:10:20Z","content_type":null,"content_length":"9810","record_id":"<urn:uuid:1fbeebb7-63ef-4171-b001-f88fd65715ef>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00152-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Users Manual for LS-DYNA Concrete Material Model 159
PDF Version
(1.49 KB)
PDF files can be viewed with the Acrobat® Reader®
Chapter 4. Examples Manual
This chapter contains example problems that help users become familiar with set up and usage of the concrete material model. These are single element simulations in tension and compression. These
simulations demonstrate two methods of setting up the concrete material property input. The fast and easy method is to use default material properties selected as a function of concrete compressive
strength and maximum aggregate size. The more detailed method is to specify all material properties individually. In addition to analyzing plain concrete, the user may wish to analyze reinforced
concrete. Modeling steel reinforcement is discussed in appendix B. Numerous other example problems for plain and reinforced concrete are given in the companion concrete model evaluation report.^(1)
Concrete material model input is given in Figure 105 for default concrete parameters and in Figure 106 for user-specified properties. A complete input file, with nodes and elements, is given in
appendix C. This file is for tensile loading in uniaxial stress of a single element. To convert to compressive loading, change the sign of the ordinate under *DEFINE CURVE at the bottom of the file.
Figure 105. Computer printout. Example concrete model input for default material property input (option mat_CSCM_concrete).
Single element stress-strain results are shown in Figure 107 for concrete with a compressive strength of 30 MPa (4,351 psi) and a maximum aggregate size of 19 mm (0.75 inches). These results can be
achieved using either the default input shown in Figure 105 or the user-specified input shown in Figure 106. Note that the peak strength attained in compression matches the specified strength listed
in Figure 105, which is 30 MPa (4,351 psi). Results are plotted with LS-POST as cross-plots of element z-stress versus z-strain. As additional exercises, the user can vary the unconfined compressive
strength, aggregate size, and rate effects to examine the variation in concrete behavior with these quantities.
Note that the concrete tensile strength is less than 10 percent of the compressive strength. Because of concrete's low tensile strength, unintended tensile damage may occur in the vicinity of contact
surfaces, as discussed in the concrete evaluation report.^(1)
Figure 106. Computer printout. Example concrete model input for user-specified material property input (option MAT_CSCM).
psi = 145.05 MPa
Figure 107. Graph. Example single element stress-strain results for 30 MPa (4,351 psi) concrete with 19-mm (0.75-inch) maximum aggregate size.
Previous | Table of Contents | Next
|
{"url":"http://www.fhwa.dot.gov/publications/research/infrastructure/pavements/05062/chapt4.cfm","timestamp":"2014-04-19T03:00:13Z","content_type":null,"content_length":"11393","record_id":"<urn:uuid:62673b7a-13ef-4fc0-afb5-72819f2cdff4>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On Langlands Pairing and transfer factors
up vote 5 down vote favorite
In the paper "On the definition of transfer factors" Langlands and Shelstad define a certain number of factors $\Delta_{I}$, $\Delta_{II}$,$\Delta_{III,1}$,$\Delta_{III,2}$, which are roots of unity.
Let consider the easy case where $G=SL_{2}$ over a field $F$ and as a endoscopic group I take $H$ a one dimensional anisotropic torus splitted by a quadratic extension $E/F$.
My question concerns the computation of the transfer factor $\Delta_{III,2}$ in this particular case.
First, Langlands talk about a pairing between $T(F)$ and $H^{1}(W,\hat{T})$, where $\hat{T}$ is the dual torus, how can we compute explicitely this pairing?
Second, if I choose a $\chi$-data fix admissible embeddings $j:\hat{H}\rightarrow\hat{G}$ and $\gamma_{H}\in H(F)$ $G$-regular semisimple, I can define a cocycle $a\in H^{1}(W,\hat{T})$ (LS, sect.
3.4 p44)
and $\Delta_{III,2}$ is the pairing of $a$ and $\gamma:=j(\gamma_{H})$.
Can we compute it explicitely?
algebraic-groups algebraic-number-theory trace-formula automorphic-forms
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged algebraic-groups algebraic-number-theory trace-formula automorphic-forms or ask your own question.
|
{"url":"https://mathoverflow.net/questions/115341/on-langlands-pairing-and-transfer-factors","timestamp":"2014-04-17T18:41:15Z","content_type":null,"content_length":"46603","record_id":"<urn:uuid:30b3cfb3-7c8f-41de-b37d-300dd11bc212>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] Formalization Thesis vs Formal nature of mathematics
Vladimir Sazonov Vladimir.Sazonov at liverpool.ac.uk
Sat Dec 29 20:24:30 EST 2007
The question discussed so far is whether any mathematical theorem and
its proof can be formalized, say, in ZFC. It appears as if
"mathematical" is something which exists and makes sense independently
of any formalization and can be (optionally?) formalized. I understand
things differently.
The main definitive and distinctive attribute of mathematics is that it
is rigorous. But what means rigorous needs to be explained. I take
rigorous = formal and understand formal in sufficiently general sense
of this word. The contemporary concept of formal system (FOL, PA, ZFC,
etc.) is only a limited version of 'formal'.
Mathematical definitions, constructions and proofs were always
sufficiently formal (except may be for some periods like the heroic
time of invention of Analysis). In a rigorous/formal proof (or
construction or definition) we deliberately distract from the content
(intuition) and check only correctness of the form of the proof
according to some standards - depending on the historical time, of
course. These standards can be known either explicitly, like in
contemporary formal systems where even computer can perform this
checking, or invented and used instinctively and studied just by the
way of training in doing/reading/repeating mathematical proofs under
supervision of experienced mathematician/teacher as it was always the
case, and is also in our time. Of course, we additionally adapt these
standards to our intuition, and vice versa. (Not a meaningless formal
play!) The 'vice versa' is also important because our intuition can be
considerably changed by formalization. (E.g. the counterintuitive and
non-intended to be deliberately introduced by the epsilon-delta
formalization continuous and nowhere differentiable curves become quite
intuitive after some experience in doing formal proofs in Analysis.)
Thus, when we say that a formalization of our intuition is adequate or
faithful, we should realize that this is understood in not so
straightforward way.
Anyway, in contemporary mathematics the highest standard of rigour or
formality is known explicitly, at least to the mathematical community
if not to each separate mathematician. This is the result of the
progress done in foundations of mathematics in the previous century.
Nowadays it is impossible to speak on mathematical theorems and proofs
which are not (potentially) formalized yet. A "proof" which is doubtful
to be formalizable (in some formal system) is not considered as
mathematical. It is only a draft of a possible mathematical proof, if
any. Of course, mathematics (unlike programming or software
engineering) does not require completed formalization (like a computer
program). We need only potential formalizability. The point is that
mathematicians can usually clearly distinguish potential
formalizability from irresponsible speculations and are highly
sensitive in this respect.
Now, on Formalization Thesis. I understand that only intuition, or
preliminary considerations or drafts of proofs can be formalizaed, that
is, made mathematical. Mathematical proofs (if they are really
confirmed by mathematical community) are already formal(izable) in a
formal system. This formal system may be not mentioned explicitly in
the proof. But understanding and confirming a proof assumes convincing
ourselves that some formalization is possible, and this is possible
only if the author of the proof acts according to the standards of
rigour, i.e. sufficiently formally.
Then, what is the content of Formalization Thesis? To show
formalizability of what is already shown to be formalizable? Or to
transform formalizable proof to explicitly/absolutely formal (like a
computer program)? The last task is quite interesting in itself, but
potential formalizability is usually quite sufficient for mathematical
rigour. It would be awful for mathematicians to write absolutely formal
proofs. But it is compulsory to be convincing that the proof is
potentially foprmalizable.
I think the real question should be about the formal nature of
mathematics. Why formal side or rigour is so important in mathematics?
How it woks for strengthening our intuition and thought because formal
systems for our thought are like mechanical and other engineering tools
and devices making us stronger faster, etc.
BEZIAU Jean-Yves wrote:
> Formalization, what a nasty world !
Cars, air planes, computers -- what a nasty world?
Can you imagine mathematics without rigour?
S. S. Kutateladze wrote:
> the defenders of FT imply seemingly
> that mathematics reduces to the texts that are meaningful
> without human beings. I disagree
I am defender of a formalist view on mathematics - not of FT as it was
stated. Anyway, it is difficult to understand what do you mean. Say,
does the fact that a computer program (an absolutely formal text)
created by a human being can work autonomously from its creator make
this program meaningless or something defective just because it is
formal and autonomous? (Note that such a program can involve a lot of
original ideas probably also from mathematics and do something very
Happy New Year to everybody!
Vladimir Sazonov
This message was sent using IMP, the Internet Messaging Program.
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2007-December/012399.html","timestamp":"2014-04-17T12:29:45Z","content_type":null,"content_length":"7874","record_id":"<urn:uuid:5b6b84f2-94d1-4cb3-9fc4-b95127646716>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posts Tagged with 'Calculus'—Wolfram|Alpha Blog
Another year has flown by here at Wolfram|Alpha, and the gears are really turning! New data and features are flowing at a rapid rate. To celebrate, Wolfram|Alpha’s creator, Stephen Wolfram, will
share what we’ve been working on and take your questions in a live Q&A.
Please join us on Facebook or Wolfram|Alpha’s Livestream on Wednesday, May 18, 2011, at 10am PDT/12pm CDT/1pm EDT/6pm BST.
If you have a question you’d like to ask, please send it as a comment to this blog post or tweet to @Wolfram_Alpha and include the hashtag #WAChat. We’ll also be taking questions live on Facebook and
Livestream chat during the webcast.
We’re looking forward to chatting with you on May 18!
Do you need some help navigating your chemistry or precalculus classes? Or maybe you’re still trying to decide which classes to take this fall. Good news! Today, we’re releasing the Wolfram General
Chemistry and Precalculus Course Assistant Apps, two more Wolfram|Alpha-powered course assistants that will help you better understand the concepts addressed in these classes.
If you’re taking chemistry, download the Wolfram General Chemistry Course Assistant App for everything from looking up simple properties like electron configurations to computing the stoichiometric
amounts of solutes that are present in solutions of different concentrations. This app is handy for lab researchers, too!
The specialized keyboard allows you to enter chemicals by using formulas or by spelling out their names.
More »
Today we are releasing Wolfram Multivariable Calculus and Wolfram Astronomy, the next two apps on a growing list of Wolfram Course Assistant Apps. These course assistants will help students learn
their course material using the power of Wolfram|Alpha.
The Wolfram Astronomy Course Assistant allows you to easily look up information on constellations and planets, but it can also calculate anything from the next lunar eclipse to the solar interior.
Today we’re releasing the first three of a planned series of “course assistant” apps, built using Wolfram|Alpha technology.
The long-term goal is to have an assistant app for every major course, from elementary school to graduate school. And the good news is that Wolfram|Alpha has the breadth and depth of capabilities to
make this possible—and not only in traditionally “computational” kinds of courses.
The concept of these apps is to make it as quick and easy as possible to access the particular capabilities of Wolfram|Alpha relevant for specific courses. Each app is organized according to the
major curriculum units of a course. Then within each section of the app, there are parts that cover each of the particular types of problems relevant to that unit.
More »
A new school year is here, and many students are diving into new levels of math. Fortunately, this year, you have Wolfram|Alpha to help you work through math problems and understand new concepts.
Wolfram|Alpha contains information from the most basic math problems to advanced and even research-level mathematics. If you are not yet aware of Wolfram|Alpha’s math capabilities, you are about to
have a “wow” moment. For the Wolfram|Alpha veterans, we have added many math features since the end of the last school year. In this post, we’re highlighting some existing Wolfram|Alpha math
essentials, such as adding fractions, solving equations, statistics, and examples from new topics areas like cusps and corners, stationary points, asymptotes, and geometry.
You can access the computational power of Wolfram|Alpha through the free website, via Wolfram|Alpha Widgets, with the Wolfram|Alpha App for iPhone, iPod touch, and the iPad! Even better, the Wolfram|
Alpha Apps for iPhone, and iPod touch, and the iPad are now on sale in the App Store for $0.99 though September 12.
If you need to brush up on adding fractions, solving equations, or finding a derivative, Wolfram|Alpha is the place to go. Wolfram|Alpha not only has the ability to find the solutions to these math
problems, but also to show one way of reaching the solution with the “Show Steps” button. Check out the post “Step-by-Step Math” for more on this feature.
You can find this widget, and many others, in the Wolfram|Alpha Widget Gallery. Customize or build your own to help you work through common math problems. Then add these widgets to your website or
blog, and share them with friends on Facebook and other social networks.
Of course, Wolfram|Alpha also covers statistics and probability. For example, Wolfram|Alpha can compute coin tossing probabilities such as “probability of 21 coin tosses“, and provides information on
normal distribution: More »
Wolfram|Alpha computes things. While the use of computations to predict the outcomes of scientific experiments, natural processes, and mathematical operations is by no means new (it has become a
ubiquitous tool over the last few hundred years), the ease of use and accessibility of a large, powerful, and ever-expanding collection of such computations provided by Wolfram|Alpha is.
Virtually all known processes occur in such a way that certain functionals that describe them become extremal. Typically this happens with the action for time dependent processes and quantities such
as the free energy for static configurations. The equations describing the extremality condition of a functional are frequently low-order ordinary and/or partial differential equations and their
solutions. For example, for a pendulum: Frechet derivative of Integrate[x'[t]^2/2 – Cos[x[t]], {t, -inf, inf}] wrt x[tau]. Unfortunately, if one uses a sufficiently realistic physical model that
incorporates all potentially relevant variables (including things like friction, temperature dependence, deformation, and so forth), the resulting equations typically become complicated—so much so
that in most cases, no exact closed-form solution can be found, meaning the equations must be solved using numerical techniques. A simple example is provided by free fall from large heights:
On the other hand, some systems, such as the force of a simple spring, can be described by formulas involving simple low-order polynomial or rational relations between the relevant problem variables
(in this case, Hooke’s law, F = k x):
Over the last 200+ years, mathematicians and physicists have found a large, fascinating, and insightful world of phenomena that can be described exactly using these so-called special functions (also
commonly known as “the special functions of mathematical physics”), the class of functions that describe phenomena between being difficult and complicated. It includes a few hundred members, and can
be viewed as an extension of the so-called elementary functions such as exp(z), log(z), the trigonometric functions, their inverses, and related functions.
Special functions turn up in diverse areas ranging from the spherical pendulum in mechanics to inequivalent representations in quantum field theory, and most of them are solutions of first- or
second-order ordinary differential equations. Textbooks often contain simple formulas that correspond to a simplified version of a general physical system—sometimes even without explicitly stating
the implicit simplifying assumptions! However, it is often possible to give a more precise and correct result in terms of special functions. For instance, many physics textbooks offer a simple
formula for the inductance of a circular coil with a small radius:
While Wolfram|Alpha knows (and allows you to compute with) this simple formula, it also knows the correct general result. In fact, if you just ask Wolfram|Alpha for inductance circular coil, you will
be simultaneously presented with two calculators: the one you know from your electromagnetics textbook (small-radius approximation) and the fully correct one. And not only can you compute the results
both ways (and see that the results do differ slightly for the chosen parameters, but that the difference can get arbitrarily large), you can also click on the second “Show formula” link (near the
bottom of the page on the right side) to see the exact result—which, as can be seen, contains two sorts of special functions, denoted E(m) and K(m) and known as elliptic integrals: More »
Exciting new math features have arrived in Wolfram|Alpha! Our programmers have spent the past two months developing new capabilities in optimization, probability, number theory, and a host of other
mathematical disciplines. Searching for elusive extrema? Look no further! Just feed your function(s) into Wolfram|Alpha and ask for their maxima, minima, or both. You can find global maxima and
minima, optimize a function subject to constraints, or simply hunt for local extrema.
We’ve also added support for a wide variety of combinatorics and probability queries. Counting combinations and generating binomial coefficients has been simplified with syntax like 30 choose 18.
Want to spend less time crunching numbers and more time practicing your poker face? You can ask directly for the probability of a full house or other common hands, as well as the probabilities of
various outcomes when you play Powerball, roll two 12-sided dice, or repeat any sequence of trials with a 20% chance 4 times.
The pursuit of primes has never been so simple. Imagine yourself walking the streets of an infinite city in search of “prime real estate.” You can find the nearest one simply by requesting (for
example) the prime closest to 100854; alternatively, you could scope out the entire neighborhood by asking Wolfram|Alpha to list primes between 100,000 and 101,000. Would you prefer the greatest
prime number with 10 digits, or will you be satisfied with any random prime between 100,000,000 and 200,000,000? The aspiring real estate agent—er, number theoretician—can also tinker with quantities
like the sum of the first hundred primes or the product of primes between 900 and 1000. If your explorations take you to the realm of the composites (the addresses of houses with “sub-prime”
mortgages, perhaps), you can identify numbers with shared factors by querying Wolfram|Alpha for, say, multiples of 5, 17, 21.
Other additions have brought everything from Archimedes’ axiom to semiaxes and square pyramid syntax into our body of computable knowledge and functions. Wolfram|Alpha grows daily, so stay tuned to
this blog for further updates. Better yet, apply to become a Wolfram|Alpha tester for privileged access to the newest features before they go public!
Valentine’s Day is special to sweethearts around the world. While Wolfram|Alpha can’t come close to replacing a thoughtful card or gourmet box of chocolates, there are a surprisingly large number of
things related to Valentine’s Day (and in particular, to its central icon) that Wolfram|Alpha can compute.
Let’s start with the holiday itself. Just typing in “valentine’s day” gives the expected calendrical information, from which we learn that Valentine’s Day falls on a Sunday this year. For the
procrastinators among us, we can also find out how many days we have remaining to acquire an appropriate token of affection for our loved one (or by how many days we’ve already blown our chance).
Wolfram|Alpha also shows various other useful data, including the interesting fact that Valentine’s Day coincides with Chinese New Year this year.
While Wolfram|Alpha can’t (yet) tell you how many calories are in your box of holiday chocolates or package of Valentine’s Day Sweethearts candy, there are plenty of computational objects related to
that most-famous Valentine’s Day icon—the heart—that it can tell you something interesting and/or useful about. For instance, do you know the average weight of a human heart? The typical resting
heart rate? The Unicode point for the heart symbol character? Or perhaps you’ve forgotten the ASCII keystrokes needed to insert a love emoticon at the end of an email to your Sweet Baboo?
On the mathematical side, typing in “heart curve” gives you a number of mathematical curves resembling the heart shape. The default (and probably most famous) of these is the cardioid, whose name
after all means “heart-shaped” in Latin (and about which we all have fond memories dating back to our introductory calculus courses):
A curve more closely resembling the conventional schematic (if not physiological) heart shape is the so-called “first heart curve“, which is an algebraic curve described by a beautifully simple
sextic Cartesian equation:
If you don’t care for any of the heart curves Wolfram|Alpha knows about (or even if you do), you’re also of course also free to experiment with your own. For example, a particularly attractive curve
can be obtained using the relatively simple input “polar plot 2 – 2 sin t + sin t sqrt (abs(cos t))/(sin t + 1.4)“. More »
Version 1.1 of the Wolfram|Alpha App for the iPhone & iPod is now available in the App Store. The new version includes a number of new features that continue to improve the app’s unique mobile
Wolfram|Alpha experience. Perhaps its most iconic feature, the specialized keyboards that greet you when you first open the Wolfram|Alpha App, have been painstakingly constructed to ease the burden
of entering queries, whether you’re converting from pounds p-function right-shift key left-shift key left-shift key.
To determine the optimal keyboard layout, we scoured Wolfram|Alpha’s server logs for the most commonly entered phrases that have characters with meaning in Wolfram|Alpha. Given that Wolfram|Alpha is
built on Mathematica, one of its core strengths is advanced mathematics. True to form most of the commonly typed characters are related to math. For example, you would generally type the word
“integrate” to compute an integral on the Wolfram|Alpha website. In the Wolfram|Alpha App you could simply type the
Prior to releasing Wolfram|Alpha into the world this past May, we launched the Wolfram|Alpha Blog. Since our welcome message on April 28, we’ve made 133 additional posts covering Wolfram|Alpha news,
team member introductions, and “how-to’s” in a wide variety of areas, including finance, nutrition, chemistry, astronomy, math, travel, and even solving crossword puzzles.
As 2009 draws to a close we thought we’d reach into the archives to share with you some of this year’s most popular blog posts.
Rack ’n’ Roll
Take a peek at our system administration team hard at work on one of the
many pre-launch projects. Continue reading…
The Secret Behind the Computational Engine in Wolfram|Alpha
Although it’s tempting to think of Wolfram|Alpha as a place to look up facts, that’s only part of the story. The thing that truly sets Wolfram|Alpha apart is that it is able to do sophisticated
computations for you, both pure computations involving numbers or formulas you enter, and computations applied automatically to data called up from its repositories.
Why does computation matter? Because computation is what turns generic information into specific answers. Continue reading…
Live, from Champaign!
Wolfram|Alpha just went live for the very first time, running all clusters.
This first run at testing Wolfram|Alpha in the real world is off to an auspicious start, although not surprisingly, we’re still working on some kinks, especially around logging.
While we’re still in the early stages of this long-term project, it is really gratifying to finally have the opportunity to invite you to participate in this project with us. Continue reading…
Wolfram|Alpha Q&A Webcast
Stephen Wolfram shared the latest news and updates about Wolfram|Alpha and answered several users’ questions in a live webcast yesterday.
If you missed it, you can watch the recording here. Continue reading… More »
We’re really catching the holiday spirit here at Wolfram|Alpha.
We recently announced our special holiday sale for the Wolfram|Alpha app. Now we are launching our first-ever Wolfram|Alpha “Holiday Tweet-a-Day” contest.
Here’s how it works.
From tomorrow, Tuesday, December 22, through Saturday, January 2, we’ll use Twitter to give away a gift a day. Be the first to retweet our “Holiday Tweet-a-Day” tweet and you get the prize! You can
double your chances to win by following and playing along with Wolfram Research.
Start following us today so you don’t miss your chance to win with our Wolfram|Alpha “Holiday Tweet-a-Day” contest.
When we launched Wolfram|Alpha in May 2009, it already contained trillions of pieces of information—the result of nearly five years of sustained data-gathering, on top of more than two decades of
formula and algorithm development in Mathematica. Since then, we’ve successfully released a new build of Wolfram|Alpha’s codebase each week, incorporating not only hundreds of minor behind-the-scenes
enhancements and bug fixes, but also a steady stream of major new features and datasets.
We’ve highlighted some of these new additions in this blog, but many more have entered the system with little fanfare. As we near the end of 2009, we wanted to look back at seven months of new
Wolfram|Alpha features and functionality.
(January 15, 2014 Update: Step-by-step solutions has been updated! Learn more.)
Have you ever given up working on a math problem because you couldn’t figure out the next step? Wolfram|Alpha can guide you step by step through the process of solving many mathematical problems,
from solving a simple quadratic equation to taking the integral of a complex function.
When trying to find the roots of 3x^2+x–7=4x, Wolfram|Alpha can break down the steps for you if you click the “Show steps” button in the Result pod.
As you can see, Wolfram|Alpha can find the roots of quadratic equations. Wolfram|Alpha shows how to solve this equation by completing the square and then solving for x. Of course, there are other
ways to solve this problem! More »
We know college is hard. So we’re highlighting examples of how Wolfram|Alpha can make subjects and concepts a bit easier to learn. Wolfram|Alpha is a free computational knowledge engine that can help
you tackle everything from calculus, to computing the number of pages for a double-spaced 1000-word essay, to comparing the flash points of methane, butane, and octane, to figuring just how much
money it’s going to cost you to drive home to do your laundry. Check out a quick introduction to Wolfram|Alpha from its creator, Stephen Wolfram.
We want to help you take full advantage of this resource. Over the next term, we’ll be highlighting helpful computations and information here on the blog, and even providing ways you can get involved
with our company. (Would you like to be a part of the Wolfram|Alpha Team on your campus? Stay tuned to find out how you can be involved.) For this post we selected several of our favorite examples to
help you start thinking about how you can use Wolfram|Alpha in your courses, and in your always-changing college life. More »
We use this blog to provide helpful tips on using Wolfram|Alpha. So when a relevant screencast caught our eye on Twitter—”Wolfram|Alpha for Calculus Students,” produced by Robert Talbert, PhD, an
associate professor of mathematics and computing science at Franklin College—we wanted share it with you. We think his straightforward video is a great demonstration of just how valuable Wolfram|
Alpha is for students. In the screencast, Professor Talbert discusses the concept of Wolfram|Alpha, and illustrates how it solves problems such as factoring or expanding expressions, solving
quadratic equations, and more.
The screencast covers just a few of the ways educators and students are using Wolfram|Alpha. Are you an instructor who has found innovative ways to incorporate Wolfram|Alpha into your lesson plans?
Or are you a student using Wolfram|Alpha to assist in your studies? You can join others having these conversations on the Wolfram|Alpha Community site.
We’ve updated another entry thanks to feedback sent to Wolfram|Alpha. We’ve now changed linguistic priority settings so that “blog” is no longer interpreted as the math expression b log(x) by
Some of you have asked whether you’ll be able to use Wolfram|Alpha for challenging math. Of course!
Remember your old friend pi?
Bitcoins have been heavily debated of late, but the currency's popularity makes it worth attention. Wolfram|Alpha gives values, conversions, and more.
Some of the more bizarre answers you can find in Wolfram|Alpha: movie runtimes for a trip to the bottom of the ocean, weight of national debt in pennies…
Usually I just answer questions. But maybe you'd like to get to know me a bit, too. So I thought I'd talk about myself, and start to tweet. Here goes!
Wolfram|Alpha's Pokémon data generates neat data of its own. Which countries view it most? Which are the most-viewed Pokémon?
Search large database of reactions, classes of chemical reactions – such as combustion or oxidation. See how to balance chemical reactions step-by-step.
|
{"url":"http://blog.wolframalpha.com/tag/calculus/","timestamp":"2014-04-21T09:36:47Z","content_type":null,"content_length":"102432","record_id":"<urn:uuid:f64a9f6b-8ae7-4285-85f3-435f81b86e44>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Operations Research/Graphical LP solution
We will now attempt to find an optimal solution to the linear programming model we introduced in the previous section. The method we will employ is known as the graphical method and can be applied to
any problem with two decision variables. It basically consists of two steps: Finding the feasible region or the feasible space (which is the region in the plane where all the feasible solutions to
the problems lie) and then identifying the optimal solution among all the feasible ones.
To begin the procedure we first graph the lines
$6x_1+4x_2= 24$,
$x_1+2x_2= 6$,
$x_2-x_1= 1$,
$x_2= 2$,
in the first quadrant. Note that for our purpose $x_1=x$ and $x_2=y$ on the graph.
We now will shade the feasible region. To do that consider the constraints one by one. The first one is $6x_1+4x_2\le 24$. To determine the region it represents, choose any point which does not pass
through the line $6x_1+4x_2= 24$ say (0,0). Substitute it in the constraint $6x_1+4x_2\le 24$ to get $0\le 24$. Since this is true we conclude that (0,0) lies in the region represented by $6x_1+4x_2\
le 24$. We conclude that all the points on the side of $6x_1+4x_2= 24$ containing (0,0) actually represent $6x_1+4x_2\le 24$. This is suggested by the fact that the line $6x_1+4x_2 = 24$ divides the
plane in 2 distinct halfs: One of points satisfying the inequality and one of those which don't.
In this way all the inequalities can be shaded. The region which is shaded under all inequalities is the feasible region of the whole problem. Clearly in this region all the constraints of the
problem hold. (The non negativity restrictions hold since we are working in the first quadrant.)
We are now ready to find out the optimal solutions. To do this graph the line $5x_1+4x_2=10$. Since $z=5x_1+4x_2$ represents the objective function so $5x_1+4x_2=10$ represents the points where the
objective function has value 10 (i.e. the total profit is 10). Now plot the line $5x_1+4x_2=15$ which represents the points where the objective function has value 15. This gives us an idea of the
direction of increase in z. The optimal solution occurs at the point X which is the point beyond which any further increase will put z outside the boundaries of the feasible region. The coordiantes
of X can be found by solving $6x_1+4x_2= 24$ and $x_1+2x_2= 6$ so that $x_1=3$ and $x_2=1.5$. This is the optimal solution to the problem and indicates that the amounts of salts X and Y should be 3
and 1.5 respectively. This will give the maximum profit of 21 which is the optimal value.
A point to note is that the optimal solution in a LP model always occurs at a corner point of a feasible region. This is true even if the line z=c comes out to be parallel to one of the constraints.
Although a mathematical proof of this fact would involve considerable linear algebra we will satisfy ourselves of it by noting that that any objective function in the feasible region would have
glided out of the region just after touching one of the corner points.
A minimization exampleEdit
Let us look at a minimization problem. It can occur in actual practice when instead of the profits associated with the salts X and Y we are given the costs of their production. All we have to do is
now move the line z=c in the direction of its decrease and we have the optimal solution at the point ((0,0) in our example) where any further decrease will take z outside the feasible region. Another
way to solve the problem is to convert the min problem into a max one. To do that simple consider the negative of the objective function.
Last modified on 8 April 2012, at 02:52
|
{"url":"http://en.m.wikibooks.org/wiki/Operations_Research/Graphical_LP_solution","timestamp":"2014-04-20T15:54:29Z","content_type":null,"content_length":"20204","record_id":"<urn:uuid:ecef4423-1253-4ff8-bac1-e6da9f2e1c20>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Richlandtown Math Tutor
...I also tutored a fifth-grade student in phonics for several weeks this past year. I am familiar with the different stages of reading and writing, and I have many resources to help me teach
phonics. I played soccer for my high school team and for a travel team for nine years.
12 Subjects: including prealgebra, reading, English, writing
...I have over 7 years of tutoring experience. I started volunteering as an AVID tutor at Sultana High School as a senior there. While at the University of California, San Diego, I was a paid AVID
tutor at Washington Middle School in Vista, Ca.
43 Subjects: including calculus, ACT Math, SAT math, geometry
...Sincerely,Miss MelissaI have a Bachelor of Science degree in Elementary Education. One of my student teaching experiences took place in 6th grade. The course study was English and Math, which
allowed my to gain experience in teaching vocabulary.
14 Subjects: including prealgebra, reading, writing, spelling
...I also took the AP exam in European history in high school and scored a 4 on the exam. I have a great interest in European history and have traveled to Europe several times specifically to see
things I have learned in European history. I enjoy reading books on the World wars and how European relations affected the initiation and outcome.
14 Subjects: including trigonometry, linear algebra, algebra 1, algebra 2
...I have substituted at North Penn School District, and also at Perkiomen Valley School District. I am currently working as an Educational Assistant at Upper Dublin School District in an Autistic
Support classroom. As a teacher, I aim to perpetuate knowledge and inspire learning.
18 Subjects: including algebra 2, special needs, study skills, discrete math
|
{"url":"http://www.purplemath.com/richlandtown_math_tutors.php","timestamp":"2014-04-19T23:11:35Z","content_type":null,"content_length":"23806","record_id":"<urn:uuid:4bc641be-2a58-41d5-bc6d-c0dd2d9b1897>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Erik D. Demaine
Paper by Erik D. Demaine
Erik D. Demaine and MohammadTaghi Hajiaghayi, “Bidimensionality: New Connections between FPT Algorithms and PTASs”, in Proceedings of the 16th Annual ACM-SIAM Symposium on Discrete Algorithms (
SODA 2005), Vancouver, British Columbia, Canada, January 23–25, 2005, pages 590–601.
We demonstrate a new connection between fixed-parameter tractability and approximation algorithms for combinatorial optimization problems on planar graphs and their generalizations. Specifically,
we extend the theory of so-called “bidimensional” problems to show that essentially all such problems have both subexponential fixed-parameter algorithms and PTASs. Bidimensional problems include
e.g. feedback vertex set, vertex cover, minimum maximal matching, face cover, a series of vertex-removal problems, dominating set, edge dominating set, r-dominating set, diameter, connected
dominating set, connected edge dominating set, and connected r-dominating set. We obtain PTASs for all of these problems in planar graphs and certain generalizations; of particular interest are
our results for the two well-known problems of connected dominating set and general feedback vertex set for planar graphs and their generalizations, for which PTASs were not known to exist. Our
techniques generalize and in some sense unify the two main previous approaches for designing PTASs in planar graphs, namely, the Lipton-Tarjan separator approach [FOCS'77] and the Baker layerwise
decomposition approach [FOCS'83]. In particular, we replace the notion of separators with a more powerful tool from the bidimensionality theory, enabling the first approach to apply to a much
broader class of minimization problems than previously possible; and through the use of a structural backbone and thickening of layers we demonstrate how the second approach can be applied to
problems with a “nonlocal” structure.
The paper is 12 pages.
The paper is available in PostScript (449k), gzipped PostScript (166k), and PDF (256k).
See also other papers by Erik Demaine. These pages are generated automagically from a BibTeX file.
Last updated April 8, 2014 by Erik Demaine.
|
{"url":"http://erikdemaine.org/papers/GenApprox_SODA2005/","timestamp":"2014-04-21T07:43:49Z","content_type":null,"content_length":"6116","record_id":"<urn:uuid:bf4a79a2-d338-43c2-8799-1b1f26ac7c06>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Milltown, NJ Geometry Tutor
Find a Milltown, NJ Geometry Tutor
...Anatomy, taphonomy, taxonomy and systematics are of particular interest as a paleontologist. Thus they are my strongest talking points. As an undergraduate I have had two years of Biology,
paleontology, and even Dinosaurs.
13 Subjects: including geometry, chemistry, reading, algebra 1
Pre-Algebra, Algebra 1, Geometry, Algebra II, Pre-calculus, and Calculus. I am a certified K-12 Math Teacher with over 10 years of teaching experience and I can help your child succeed with
individualized, private tutoring. In addition to having 10 years experience as a high school and middle scho...
9 Subjects: including geometry, algebra 1, algebra 2, SAT math
...I can tutor both high school and college level math classes. I am available afternoons, evenings, and most weekends. I have good communication skills as well as teaching skills by building up
through my graduate program as a Research Assistant as well as Teaching Assistant.
8 Subjects: including geometry, calculus, algebra 1, algebra 2
...I have been a private tutor since 2005 and have guided many students through the often stressful process of standardized testing. At Harvard, I concentrated in Visual and Environmental Studies
and Literature. I'm currently an MFA candidate at the Graduate Film Program at NYU Tisch, and my short...
36 Subjects: including geometry, English, chemistry, calculus
...DanDiscrete Math is often coined as "finite mathematics". It does not deal with the real numbers and it's continuity. I have studied discrete math as I obtained my BS in mathematics from Ohio
14 Subjects: including geometry, calculus, ASVAB, algebra 1
|
{"url":"http://www.purplemath.com/Milltown_NJ_geometry_tutors.php","timestamp":"2014-04-18T11:31:34Z","content_type":null,"content_length":"23961","record_id":"<urn:uuid:1869e7fe-8b66-4db1-a8e6-157d483e91ff>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Pasadena, TX SAT Math Tutor
Find a Pasadena, TX SAT Math Tutor
...More advanced topics such as vectors, polar coordinates, parametric equations, matrix algebra, conic sections, sequences and series, and mathematical induction can be covered. I can reteach
lessons, help with homework, or guide you through a more rigorous treatment of these topics. As needed, we can reinforce prerequisite topics from algebra and pre-algebra.
30 Subjects: including SAT math, calculus, physics, geometry
...I have the national "E" coaching license. This involves an 18 hour course which covers the principles of coaching. I have also coached two middle school teams.
22 Subjects: including SAT math, chemistry, physics, calculus
...One of my best qualities is that I'm a patient teacher, and my goal is to always make sure that my students come away with a clear understanding of the fundamental elements of the subject being
taught; this serves as a foundation for tackling more complex concepts down the line. Not only do I en...
10 Subjects: including SAT math, geometry, algebra 1, algebra 2
...I have extensive experience prepping tutoring students for their MCAT Verbal Reasoning section. The MCAT contains four sections: physical sciences, verbal reasoning, biological sciences and a
trial section (which is optional). The MCAT is a computer-based examination for prospective medical s...
42 Subjects: including SAT math, reading, English, writing
I have been a private math tutor for over ten(10) years and am a certified secondary math instructor in the state of Texas. I have taught middle and high-school math for over ten (10) years. I am
available to travel all over the greater Houston area, including as far south as Pearland, as far north as Spring, as far west as Katy and as far east as the Galena Park/Pasadena area.
9 Subjects: including SAT math, calculus, geometry, algebra 1
|
{"url":"http://www.purplemath.com/pasadena_tx_sat_math_tutors.php","timestamp":"2014-04-16T05:06:35Z","content_type":null,"content_length":"23971","record_id":"<urn:uuid:d79e08ab-ab4c-406d-ae03-db94cd94c135>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-user] NNLS in scipy
Christian Meesters meesters@uni-mainz...
Sun Jun 15 14:07:23 CDT 2008
I'd like to perform some "non-negatively constraint least squares"
algorithm to fit my data, like this:
Signal = SUM a_i C_i
where C_i is some simulated signal and a_i the amplitude contributed by
that simulated signal. Or in terms of arrays, I will have one reference
array and several arrays of simulated signals. How can find the
(non-negative) coefficients a_i for each simulated signal array? (All
negative contributions should be discarded.)
Is there anything like that in scipy (which I couldn't find)? Or any
other code doing that?
Else I could write it myself and contribute, but having some working
code would be nice, of course.
More information about the SciPy-user mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-user/2008-June/017203.html","timestamp":"2014-04-17T18:40:04Z","content_type":null,"content_length":"3031","record_id":"<urn:uuid:95bbf3b6-b2b0-4613-8abe-ed354ae26050>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Elimination of Arbitrary Constants
Introduction to elimination of arbitrary constants
The Differential Equations which have only one independent variable are called Ordinary. In the above equations x is the independent variable and y is the dependent variable. The Equations which have
two or more independent variables and partial differential coefficients with respect to them are called Partial.
I like to share this Coterminal Angles in Radians with you all through my article.
The degree of an equation is defined as the degree of the highest differential coefficient while the equation has been made rational and integral regarding the differential coefficients are required
in this equation.
The Benefits of Elimination of Arbitrary Constants
It encourages the optimization of the whole process, not yet individual elements. Local optimization could lead to imbalance .It force a user listen the recognition of unnecessary complexity: which
involves steps to add little value
It recognizes the role of suppliers in a derived area and the importance of up-stream can be prevention while down-stream cannot be detection
They promotes standardization which around best practice
Having problem with Supplimentary Angles keep reading my upcoming posts, i will try to help you.
Elimination of Arbitrary Constants in Differential Equations
To Eliminate the arbitrary constants from the given equation:
y = Ce^x + De^-x + E
differentiating both sides y ' = a E1 e ax cos bx - E1b e ax sin bx + E2ae ax sin bx + E2b e ax cos bx
= a { E1 e ax cos bx + E2 e ax sin bx } - E1 b ea x sin bx + E2 b e ax cos bx
= ay - E1 b ea x sin bx + E2 b e ax cos bx ------(1)
Also, y ' - ay = b { -E1 ea x sin bx + E2 e ax cos bx}- E1 ea x sin bx + E2 e ax cos bx = (1/b) {y ' - ay} -----(2)
differentiating again , y " = a y ' - E1 ab e ax sin bx - E1 b2 e ax sin bx + E2 ab e ax cos bx - E2 b2 e ax sin bx
= a y ' + ab { - E1 e ax sin bx + E2 e ax cos bx } - b2 { E1 e ax sin bx +E2 e ax sin bx }
= a y ' + ab { ( 1/b)( y ' - ay) } - a2 y - b2 y
= 2a y ' - a2 y - b2 y
Arbitrary constants are eliminated from the equation: y = Ce^x + De^-x + E The solution is detailed and well presented as above.
Understanding What are Angles is always challenging for me but thanks to all math help websites to help me out.
• 0 comments
• 0 comments
|
{"url":"http://basicmath.livejournal.com/9414.html","timestamp":"2014-04-20T00:56:57Z","content_type":null,"content_length":"59783","record_id":"<urn:uuid:3f1177c5-f486-4704-9913-13cb993c407c>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Further Musings on
Further Musings on "Multiplicand" and "Multiplier"
Date: 06/09/2011 at 22:12:56
From: Carter
Subject: clarification of math terms of multiplicand and multiplier
I have seen Dr. Math's answer to the definition of multiplicand and
multiplier, and would like to share my thoughts.
Consider the possible multiplicand and multiplier in
(9 x 4) = 36
I believe these designations become clearer when the objective is written
or spoken, such as "What is your age times 4?" If your age is 4, then four
is the multiplicand and the multiplier is 9. If on the other hand your age
is 9, then the multiplicand is nine and the factor is 4.
The distinction between multiplicand and multiplier is less clear with
questions about the total of contributions if, to continue the example,
four individuals each gives nine dollars. In my opinion, the multiplicand
is the number that has the same units as the product. For example, I would
say that the multiplicand is the dollar amount, because it is a four
dollar contribution that is magnified by the number of contributors.
Date: 06/09/2011 at 22:56:07
From: Doctor Peterson
Subject: Re: clarification of math terms of multiplicand and multiplier
Hi, Carter.
I'm not sure which page you are responding to; I'll suppose it's this:
Multiplicand, Multiplier
But you might also have seen this:
Defining Multiplication
Or this:
Groups in Multiplication
In any APPLICATION of multiplication, the multiplicand is the number to be
multiplied (or scaled up, or repeated, or whatever), and the multiplier is
the number by which it is to be multiplied (aka, the scale factor, repeat
count, etc.). As you say, that is really unrelated to the way it happens
to be written.
The equation 9 x 4 = 36 need not represent "What is your age times 4?" It
might just as well be what you'd write for "What is 9 times your age?" In
either case, it is clear that the age (9 or 4 years, respectively) is the
multiplicand, because it is the number you start with and modify. But it
is written as the first number in one case, and the second in the other.
I agree with you that dollar amounts (unit prices) are multiplicands,
while numbers or quantities are multipliers. But I would not consider it a
good general principle to say that the multiplicand has the same units as
the product. In the case of 9 pounds at 4 dollars per pound, the product
is in dollars; no two numbers have the same unit! What you say would apply
only when the multiplier is a dimensionless quantity (a mere number of
times, or items).
So in simple problems that require multiplication, it's fairly easy to
identify the multiplier and multiplicand based on the application. The
distinction, however, becomes less and less meaningful as you do more
complex things. (For example, when calculating the force of gravity using
F = GMm/d^2, which of the two masses is the multiplicand?) In the
abstract, however, just given as A x B with no connection to an
application, they are both just "factors" and play an equal role.
- Doctor Peterson, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/76738.html","timestamp":"2014-04-20T21:21:58Z","content_type":null,"content_length":"8559","record_id":"<urn:uuid:7d83ca37-7b68-4547-a9ab-d03f128572e3>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Sum of first 100 Natural numbers
If the qustion of sum of first 100 natural numbers is asked most people will readily answer as 5050 using n(n+1)/2 formula. When thesame question was asked to Carl Frederick Gauss, the greatest
mathematician ever lived, when he was barely 5 years old, he solved it in a most natural and ingenious way for a 5 year old. This method is a basic idea for deriving n(n+1)/2. Can you just imagine
how he did it?
First he (Gauss) imagined that he had written all the first 100 Natural numbers and started adding in the following way: 1+2+3+4+5+................................97+98+99+100 = (1+100)+(2+99)+(3+98)
+(4+97)+..................(50+51) =101 + 101 + 101 + 101 +.................. 101 = 101 summed 50 times, (because 100 numbers are paired into 50) = 101 x 50 = 5050. Truly remarkable for a tender 5
year old to conceive and devise such a marvellous and wonderful method.
If the qustion of sum of first 100 natural numbers is asked, most people will readily answer as 5050 using n(n+1)/2 formula. When the same question was asked to Carl Frederick Gauss, the greatest
mathematician ever lived, when he was barely 5 years old, he solved it in a most natural and ingenious way for a 5 year old. This method is a basic idea for deriving n(n+1)/2. Can you just imagine
how he did it?
First he (Gauss) imagined that he had written all the first 100 Natural numbers and started adding in the following way: 1+2+3+4+5+................................97+98+99+100 = (1+100)+(2+99)+(3+98)
+(4+97)+................(50+51) =101 + 101 + 101 + 101 +.................. 101 =101 summed 50 times, ( because 100 numbers are paired into 50) = 101 x 50 = 5050. Truly remarkable for a tender 5 year
old to conceive and devise such a marvellous and splendid method.
|
{"url":"http://www.kesavan8.blogspot.com/","timestamp":"2014-04-20T15:51:37Z","content_type":null,"content_length":"17789","record_id":"<urn:uuid:55c19db1-88c2-4f7d-95bb-cd9850d753e3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SI unit prefixes: a plea for sanity
Computer programmers can skip to the next sub-heading.
Computers work in binary. They measure things in binary numbers internally, at the hardware level. While you are seeing letters and decimal numbers on this page, the computer just sees streams of 1s
and 0s. The successive digits in binary (moving left) count 1s, 2s, 4s, 8s, and so on, just like the digits in a regular base 10 number are worth 1, 10, 100, 1000. Powers of 10, powers of 2.
Way back in the mists of primordial computer history—or the 60s, as we also know it—someone decided it would be a good idea to define a kilobyte as 1024 bytes, rather than 1000, because 1024 is the
closest power of 2 to 1000. That is, a 10 bit binary number can count from 0 to 1023, just like a 3 digit decimal number counts from 0 to 999.
The problem is, this unusual definition of “kilo” wasn’t given its own symbol at the time; they just used the same “kilo” prefix used for the metric system. Nor was the unusual definition of “kilo”
universally used, even in the computer industry. For instance, while the memory of the computer was measured in binary kilobytes, the speed was always measured in decimal kilohertz.
As the years went on, computers got more memory, and got faster, and their network connections got faster. We ended up with more and more ways to store data. And people kept on randomly choosing
whether to use base 2 ‘computer’ units, or normal base 10 units, when measuring computer stuff.
Welcome back, programmers!
Right now, CDs are measured in base 2, as per the official CD standard—your 650MB CD contains 650×1024×1024 bytes. On the other hand, DVDs are always measured in base 10 units—your 4.7GB writeable
DVD has the normal 4,700,000,000 bytes.
The size of the memory in your computer is always measured in base 2 units (megabytes). However, the bus speed of the memory in your computer is always measured in base 10 units (megahertz).
The file you download has a size, almost certainly reported in base 2 units by your web browser. But, you’ve guessed it, the speed of your modem connection is always measured in base 10 units. Your
1Mbps cable modem gives you 1,000,000 bits per second, not 1,048,576.
The hard disk in a Macintosh? Always specified in base 10 units. If you get a 40GB disk, you get 40,000,000,000 bytes. The disk space reported by Mac OS X? Always binary units. Even Apple aren’t
Let me be blunt: This is a mess.
There is no logic to it. There is no consistency to it. You can’t work out whether a given measurement is base 10 or base 2, you just have to magically know—or guess, and hope that if you’re wrong
the difference isn’t too important.
The solution
There is a solution to this problem. The IEC has a set of official binary prefixes. When you want to refer to something in base 2 units, you can use the appropriate binary prefix instead of using the
closest base 10 metric prefix, and your meaning will be crystal clear. (That still leaves the problem of what to do if you’re measuring one of the many computer-related things that are measured in
base 10, but if we get everyone using binary prefixes it won’t be a problem any more, will it?
And that brings me to the thing I actually want to write about: knee-jerk conservatism.
It turns out that there are a lot of computer programmers who really get pissed off by the idea of having to write MiB for base-2 megabytes. “Megabytes have always been base 2, and always been
written as MB”, they sneer. “Everyone knows that 1MB is 1024KB, unless you’re talking about DVDs, or reading manufacturer specs for a hard drive, and that’s just the hard drive manufacturers being
stupid. Everyone knows that ‘K’ on a computer means 1024; except for speeds, where it means 1000, except for file download speeds where it means 1024, except when it’s the speed of your modem, when
it’s 1000. Everyone knows that. What, are you stupid?”
I find it quite fascinating, really. Engineers generally pride themselves on consistency and clarity, yet when it comes to being consistent and clear in their use of measurements, well, you’d think
you were asking them to drink decaf or something.
Change which makes things easier, more consistent, and less ambiguous is good change. It should be embraced. Clinging to confusing and inconsistent ways of working, just because it’s what you’re used
to, doesn’t make you look superior—it makes you look like an ass. You’re not clinging to consistency with the past, because the past usage was not consistent. The computer industry has never been
consistent in its use of units, it’s not being consistent now—but it’s time for it to start. And there’s only one way to do that.
If you measure in base 2 units, report in base 2 units using the base 2 prefixes.
If you measure in base 10 units, report in base 10 units using the base 10 prefixes.
This is not a big plot to make you report disk sizes in base 10 if you don’t want to. Go on measuring your hard disk in GiB and whining about the hard drive manufacturer conspiracy to defraud you, if
you want; I don’t care. I just want you and your software to be clear, correct and unambiguous when you provide information to me. Leaving me to guess the meaning of ‘K’ and ‘G’ based on context is
not good enough. It is not unambiguous; see above.
Now, get with the program already. All of you. Tell your friends. If anyone whines, point them at this article. And someone get Steve Jobs to lay down the law at Apple, their continuing inconsistency
is really annoying me…
6 thoughts on “SI unit prefixes: a plea for sanity”
1. Someone got Steve Jobs to lay down the law. :D
2. Possibly a result of my reporting the issue via ADC!
3. And now it’s filtered into the world of Ubuntu. Naturally, there is some resistance…
4. Pingback: dakoSpace » 1billion 1billion
6. Let’s just admit it. The SI names are just too embarrassing to use. Let’s start a campain to abandon the current standard. Change to something obvious, simple and less embarrasing like:
1bMB, 12bPb
one binary megabyte, twelve binary petabits
|
{"url":"http://lpar.ath0.com/2008/07/15/si-unit-prefixes-a-plea-for-sanity/","timestamp":"2014-04-20T13:51:43Z","content_type":null,"content_length":"37561","record_id":"<urn:uuid:1a168452-11cb-472f-8eac-8ceb5f0451a5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A model of the universe is a mathematical description of how the scale factor R(t) evolves with time. In this chapter we develop some models of the universe.
As a first approximation, consider the analogy of the Newtonian ball of self-gravitating particles. Gravity acts to try to pull the ball together. If the ball is expanding with sufficient velocity it
can resist this collapse. We obtain a simple equation to describe the evolution of this Newtonian ball. One of the most important consequences of this analysis is the realization that gravity permits
three possibilities for the evolution of the universe: it could expand forever; it could stop expanding at infinite time; or it could stop expanding at some finite point in time and recollapse.
Remarkably, the fully general relativistic solution for a universe consisting of smoothly distributed matter has the same form as the Newtonian solution. The equations that describe the evolution of
the universe under the influence of its self-gravity are called the Friedmann equations. Models of the universe derived from this equation are called Friedmann-Robertson-Walker models, or FRW models.
The three possible fate of a universe containing only ordinary mass density correspond to the three basic geometry types studied in Chapter 8. The hyperbolic universe expands forever; the flat
universe expands but ever more slowly, until it ceases expanding at infinite time; and the spherical universe reverses its expansion and collapses in a "big crunch."
These cosmological models assume zero cosmological constant (Lambda). The only force acting is gravity. They can be summarized thusly:
Standard Model Summary Table
┃ Model │ Geometry │k │Omega│q[o] │ Age │ Fate │┃
┃ Closed │Spherical │+1│ >1 │> 1/2│ t[o] < 2/3 t[H] │ Recollapse │┃
┃Einstein- deSitter│ Flat │0 │ =1 │= 1/2│ t[o] = 2/3 t[H] │Expand forever│┃
┃ Open │Hyperbolic│-1│ <1 │< 1/2│2/3t[H] < t[o] < t[H] │Expand forever│┃
The special case of the flat (k = 0, Omega = 1), matter-only universe is called the Einstein-deSitter model. Various numerical parameters such as the age of the universe, the lookback time to distant
objects, and so forth, are easiest to compute in the Einstein-deSitter model, so it provides a convenient guide for estimation of some cosmological quantities. For example, the age of the universe in
the Einstein-deSitter model is 2/3rd the Hubble time.
Adding a nonzero cosmological constant provides a new possibilities. The cosmological constant acts as an additional force, either attractive (negative lambda) or repulsive (positive lambda). Instead
of decreasing in strength with distance like gravity, the "lambda force" increases with scale factor. This means that any nonzero cosmological constant will ultimately dominate the universe. An
attractive "lambda force" will cause a recollapse and big crunch regardless of the model's geometry. However, the possibility of an attractive lambda in the physical universe is ruled out by
observations. A repulsive lambda force has more interesting possible effects. The details depend upon the model, but eventually all such models expand exponentially.
|
{"url":"http://www.astro.virginia.edu/~jh8h/Foundations/chapter11/chapter11.html","timestamp":"2014-04-17T13:15:19Z","content_type":null,"content_length":"10026","record_id":"<urn:uuid:dc19d383-6ed5-4513-b9e4-5f30badc8af1>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Help
September 7th 2009, 08:47 AM #1
Feb 2009
i am graphing objects by using level sets (setting z=a constant) and sections (setting x and y = a constant)
i have to sketch..
1.) f (x,y)= -2
2.) g(x,y)= cosy
3.) h(x,y)= 1-x
any pointers/hints on how to do any of these? the first one doesn't have x or y so i wasn't sure what to do..thanks
The first one is just a plane that intersects z at -2. Think about it, whatever x and y are, the point lies at z = -2, so it is a plane:
The 2nd and third one must be dealt with using the following:
z = g(x,y) = cos(y)
So that:
a = cos(y)
z = h(x,y) = 1-x
a = 1 - x
September 7th 2009, 09:16 AM #2
|
{"url":"http://mathhelpforum.com/calculus/100941-vectors.html","timestamp":"2014-04-19T01:56:32Z","content_type":null,"content_length":"32851","record_id":"<urn:uuid:90c9142c-5495-4a45-a7af-3673a5617203>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Guidelines for Convergence Authors
MAA Convergence (where mathematics, history, and teaching converge!) publishes articles about the history of mathematics and its use in teaching. It is aimed at teachers of mathematics at both the
secondary and collegiate levels. Preference is given to topics from grades 8-16 mathematics, with special emphasis on topics from grades 8-14: algebra, combinatorics, synthetic and analytic geometry,
trigonometry, probability and statistics, elementary functions, calculus, differential equations, and linear algebra.
We invite you to submit for publication in Convergence articles of the following types:
• Expository articles on the history of topics in the grades 8-16 mathematics curriculum ideally would contain interactive components, animations, colorful graphics, and/or links that take
advantage of the online setting, together with ideas for using the material in the classroom. We invite you to share your expertise or to take the opportunity to learn more about a topic by
writing an article about it!
□ Math historians: Consider sharing with Convergence readers your latest mathematics history research, taking advantage of our online format and making suggestions for grades 8-16 classroom
□ Math educators: Share your latest research on the role of mathematics history in mathematics education or your latest history-based instructional materials (see "Classroom activities" below).
• Translations of original sources, accompanied by commentary explaining the work and its context, show Convergence readers how mathematical ideas were developed in various cultures and how
knowledge of these developments can be used in teaching the same ideas to today's students.
• Classroom activities, projects, or modules may be designed for a few minutes, days, or weeks of instruction in grades 8-16 classes. Although most will be self-contained articles showing how to
use history in the teaching of a particular topic, these products also may serve as companion pieces to articles published in Convergence or other MAA journals, providing instructions and/or
tools for using information from those articles in classroom settings. Authors should give potential users as much direction as possible about when and how to use the activity, project, or module
(e.g. in which courses? to introduce, develop, and/or review a topic? to replace or supplement traditional instruction? in class and/or homework? how much time for each? individual or group
work?) We invite you to share with our readers how you are using the history of mathematics in your classroom!
• Classroom testimonials describe your experiences using a particular teaching aid, article, book, or website in the classroom. They may range from informal to formal evaluation, and the outcome
may be adoption, adaption, or rejection.
• Reviews of new and old books, articles, teaching aids, and websites should focus on evaluation of the item's utility in teaching.
We also welcome you to submit items for the following features:
• "Problems from Another Time" highlights historical problems.
• "On This Day" is a listing of three or four historic mathematical events that happened on each date.
• "Today's Quotation" is a quotation about mathematics from a historical figure selected from a searchable database of quotations.
• The "Calendar" is an up-to-date guide to conferences and events around the world that feature or include the history of mathematics and its use in teaching.
Submissions should be sent electronically to Janet Beery (see below for e-mail links). Articles sent in LaTeX, Word, pdf, or html formats are welcome, as is a temporary URL for a posted version of
your article with all images, applets, etc. in place.
For your final submission of an accepted article, please plan to submit:
• For an article with very little mathematical notation, a Word (or any text) file.
• For an article with much mathematical notation, a LaTeX file or an html file incorporating MathJax. Please use "backslash"( ... "backslash") in place of $…$ (single dollar signs), "backslash"[
... "backslash"] instead of double dollar signs, and arrays rather than tables.
• Images in separate files in jpg format and applets, etc. as separate files as well.
We have a definite preference for applets created using the free software GeoGebra, because these applets can be hosted by the MAA channel on GeoGebraTube. (Similarly, videos will be hosted by the
MAA channel on YouTube.) If you have an idea for animation or interactivity in an article, but do not know how to produce applets for it, we suggest you contact an expert on your own campus for help.
If that fails, please contact the editor and she will attempt to help you.
If you would be willing to serve as a referee for articles submitted to Convergence, please let the editor know which topics and types of articles you would prefer to review.
Convergence editor:
Janet Beery, University of Redlands
Convergence founding editors:
Victor Katz, University of the District of Columbia
Frank Swetz, Pennsylvania State University
Convergence associate editors:
Amy Ackerberg-Hastings, University of Maryland University College
Janet Barnett, Colorado State University, Pueblo
Kathleen Clark, Florida State University, Tallahassee
Lawrence D’Antonio, Ramapo College of New Jersey
Doug Ensley, Shippensburg State University, Pennsylvania
Victor Katz, University of the District of Columbia
Daniel Otero, Xavier University, Cincinnati, Ohio
Randy Schwartz, Schoolcraft College, Livonia, Michigan
Lee Stemkoski, Adelphi University, Garden City, New York
Frank Swetz, The Pennsylvania State University
|
{"url":"http://www.maa.org/publications/periodicals/convergence/guidelines-for-convergence-authors?device=desktop","timestamp":"2014-04-17T13:23:37Z","content_type":null,"content_length":"101450","record_id":"<urn:uuid:48c63efa-21fa-4358-9fdb-b045e2a347ed>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Number of config. of a binary string invariant under cyclic permutation.
up vote 0 down vote favorite
The following combinatorial problem has bothered me quite a bit. I guess people smarter than me have given the problem some taught as the problem has obvious applications (e.g. to the Ising model),
but I have not found any solution on the web (this might be because I don't know the proper terminology).
Anyways, here is the problem:
Consider a string of $N$ binary variables, $\uparrow$ and $\downarrow$. The string will have $2^N$ different configurations. Now impose a symmetry to the system; two configurations are equal if you
can get from one to the other by cyclic permutation or by reversal of the string (or a combination of these two symmetries). How many unique configurations will the string have?
For 1 $\uparrow$ and $N-1$ $\downarrow$ there will only be 1 unique configuration. For 2 $\uparrow$ and $N-2$ $\downarrow$ there will be $N/2$ configurations if $N$ is even and $(N-1)/2$
configurations if $N$ is odd. But if you take 3 $\uparrow$ and $N-3$ $\downarrow$, it is no longer clear (at least not to me), how one efficiently should count the number of possible configurations.
I would really appreciated some help, or references on relevant literature.
3 You are counting bracelets. en.wikipedia.org/wiki/Necklace_(combinatorics) – Gjergji Zaimi May 30 '10 at 12:58
As your question implies, you can solve the 1D spin-1/2 Ising model this way, and more generally short-ranged spin models. In two dimensions the problem is that there's not a suitable
generalization of the matrix-tree theorem. See also mathoverflow.net/questions/12214/… – Steve Huntsman May 30 '10 at 13:06
And for 2D: mathoverflow.net/questions/10752 – Steve Huntsman May 30 '10 at 13:09
Thank you for the input. @Streve: do you know of a reference for the solution of the 1D Ising model related to Necklace combinatorics. – jonalm Jun 1 '10 at 9:33
add comment
1 Answer
active oldest votes
http://en.wikipedia.org/wiki/Necklace_(combinatorics) will get you started.
up vote 1 down vote accepted
The final parenthesis should be part of the address, but MathOverflow doesn't know it. – Kevin O'Bryant May 30 '10 at 13:46
add comment
Not the answer you're looking for? Browse other questions tagged co.combinatorics or ask your own question.
|
{"url":"https://mathoverflow.net/questions/26443/number-of-config-of-a-binary-string-invariant-under-cyclic-permutation","timestamp":"2014-04-23T20:13:06Z","content_type":null,"content_length":"56696","record_id":"<urn:uuid:80260148-b645-43f6-8884-289811b3f227>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Northridge SAT Math Tutor
Find a Northridge SAT Math Tutor
...I have been a math tutor for 3 years. I have been trained how to be a good tutor, that is, I will try not to write anything when I explain concepts to my tutee because according to the tutoring
training, students do not learn from me solving the problem and writing down everything. They learn when they are writing down everything.
10 Subjects: including SAT math, calculus, statistics, geometry
I am a very experienced tutor and I will help you with difficult concepts in an easy to understand manner for students at all levels: Middle School, Junior High, High School, college, university,
etc. I have a B.S. degree from The National University of Engineering, and took many courses at Cal Sta...
26 Subjects: including SAT math, Spanish, algebra 1, chemistry
...While I help students develop tricks for memorization, I emphasize understanding the theory and the "why" behind particularly challenging concepts so that the student can apply that
understanding to new problems he or she has not confronted before in the past. Professionally, I have founded a ve...
30 Subjects: including SAT math, Spanish, English, writing
...At age of 15 I started tutor math to my siblings, cousins, neighbors and my fellow students. I have always been fascinated by mathematical studies because of its focus on thought processes and
problem solving techniques. There is always a way to solve a math problem just like in real life.
7 Subjects: including SAT math, calculus, Chinese, algebra 1
...However, when more than three equations need to be solved simultaneously, one needs to begin using arrays and matrices. Thus, central to linear algebra is the study of matrices and how to
perform basic operation such as matrix multiplication. The notion of vector space and subspace becomes important and Eigenvalue problems will be introduced in more advanced linear algebra courses.
22 Subjects: including SAT math, calculus, physics, algebra 1
|
{"url":"http://www.purplemath.com/Northridge_SAT_math_tutors.php","timestamp":"2014-04-17T04:03:39Z","content_type":null,"content_length":"24143","record_id":"<urn:uuid:3e9188ec-03da-4258-8d89-0e5693fcf1f8>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Standard Cosmological Model
2.1 The Cosmological Principle
Pietronero [13] argues that the evidence from redshift catalogs and deep galaxy counts is that the galaxy distribution is best described as a scale-invariant fractal with dimension D ~ 2. Others
disagree ([14], [15]). I am heavily influenced by another line of argument: it is difficult to reconcile a fractal universe with the isotropy observed in deep surveys (examples of which are
illustrated in Figs. 3.7 to 3.11 in [11] and are discussed in connection with the fractal universe in pp. 209 - 224 in [11]).
Figure 1. Angular distributions of particles in a realization of a fractal with dimension D = 2 viewed from one of the particles in the realization. The fraction of particles plotted in each distance
bin has been scaled so the expected number of particles plotted is the same in each bin.
Fig. 1 shows angular positions of particles in three ranges of distance from a particle in a fractal realization with dimension D = 2 in three dimensions. At D = 2 the expected number of neighbors
scales with distance R as N (< R) R^2, and I have scaled the fraction of particles plotted as R^-2 to get about the same number in each plot. The fractal is constructed by placing a stick of length L
, placing on either end the centers of sticks of length L / ^1/D, with random orientation, and iterating to smaller and larger scales. The particles are placed on the ends of the shortest sticks in
the clustering hierarchy. This construction with D = 1.23 (and some adjustments to fit the galaxy three- and four-point correlation functions) gives a good description of the small-scale galaxy
clustering [16]. The fractal in Fig. 1, with D = 2, the dimension Pietronero proposes, does not look at all like deep sky maps of galaxy distributions, which show an approach to isotropy with
increasing depth. This cannot happen in a scale-invariant fractal: it has no characteristic length.
A characteristic clustering length for galaxies may be expressed in terms of the dimensionless two-point correlation function defined by the joint probability of finding galaxies centered in the
volume elements dV[1] and dV[2] at separation r,
The galaxy two-point function is quite close to a power law,
where the clustering length is
and the Hubble parameter is
The rms fluctuation in galaxy counts in a randomly placed sphere is N/N = 1 at sphere radius r = 1.4r[0] ~ 6h^-1 Mpc, to be compared to the Hubble distance (at which the recession velocity approaches
the velocity of light), cH[0]^-1 = 3000h^-1 Mpc.
The isotropy observed in deep sky maps is consistent with a universe that is inhomogeneous but spherically symmetric about our position. There are tests, as discussed by Paczynski and Piran [17]. For
example, we have a successful theory for the origin of the light elements as remnants of the expansion and cooling of the universe through kT ~ 1 MeV [18]. If there were a strong radial matter
density gradient out to the Hubble length we could be using the wrong local entropy per baryon, based on conditions at the Hubble length where the CBR came from, yet the theory seems to be
successful. But to most people the compelling argument is that distant galaxies look like equally good homes for observers like us: it would be startling if we lived in one of the very few close to
the center of symmetry.
Mandelbrot [19] points out that other fractal constructions could do better than the one in Fig. 1. His example does have more particles in the voids defined by the strongest concentrations in the
sky, but it seems to me to share the distinctly clumpy character of Fig. 1. It would be interesting to see a statistical test. A common one expands the angular distribution in a given range of
distances in spherical harmonics,
In the approximation of the sum as an integral e[l] is the contribution to the variance of the angular distribution per logarithmic interval of l. It will be recalled that the zeros of the real and
imaginary parts of Y[l]^m are at separation l in the shorter direction, except where the zeros crowd together near the poles and Y[l]^m is close to zero. Thus e[l] is the variance of the fractional
fluctuation in density across the sky on the angular scale l and in the chosen range of distances from the observer.
I can think of two ways to define the dimension of a fractal that produces a close to isotropic sky. First, each octant of a full sky sample has half the diameter of the full sample, so one might
define D by the fractional departure of the mean density within each octant from the mean in the full sample,
Thus in Fig. 1, with D = 2, the quadrupole anisotropy e[2] is on the order of unity. Second, one can see the idea that the mean particle density varies with distance r from a particle as r^-(3-D).
Then the small angle (large l) Limber approximation to the angular correlation function w ([20]
To find e[l] differentiate with respect to l. At D = 2 this gives e[l] ~ 1: the surface density fluctuations are independent of scale. At 0 < 3 - D << 1, e[l] ~ (3 - D) / l. The X-ray background
fluctuates by about f / f ~ 0.05 at l ~ 30. This is equivalent to D ~ 3 - l (f / f)^2 ~ 2.9 in the fractal model in Eq. (9).
The universe is not exactly homogeneous, but it seems to be remarkably close to it on the scale of the Hubble length. It would be interesting to know whether there is a fractal construction that
allows a significantly larger value of 3 - D for given e[l] than in this calculation.
|
{"url":"http://ned.ipac.caltech.edu/level5/Peebles1/Peeb2_1.html","timestamp":"2014-04-18T10:40:32Z","content_type":null,"content_length":"10313","record_id":"<urn:uuid:2c0c566c-e063-4ee2-b4b9-d66ca462c9db>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Umesh Vazirani, and Vijay Vazirani. Matching is as easy as matrix inversion
, 2012
"... Let G be a directed graph with n vertices and non-negative weights in its directed edges, embedded on a surface of genus g, and let f be an arbitrary face of G. We describe an algorithm to
preprocess the graph in O(gn log n) time, so that the shortest-path distance from any vertex on the boundary of ..."
Cited by 7 (5 self)
Add to MetaCart
Let G be a directed graph with n vertices and non-negative weights in its directed edges, embedded on a surface of genus g, and let f be an arbitrary face of G. We describe an algorithm to preprocess
the graph in O(gn log n) time, so that the shortest-path distance from any vertex on the boundary of f to any other vertex in G can be retrieved in O(log n) time. Our result directly generalizes the
O(n log n)-time algorithm of Klein [Multiple-source shortest paths in planar graphs. In Proc. 16th Ann. ACM-SIAM Symp. Discrete Algorithms, 2005] for multiple-source shortest paths in planar graphs.
Intuitively, our preprocessing algorithm maintains a shortest-path tree as its source point moves continuously around the boundary of f. As an application of our algorithm, we describe algorithms to
compute a shortest non-contractible or non-separating cycle in embedded, undirected graphs in O(g² n log n) time.
"... Abstract. We present a deterministic way of assigning small (log bit) weights to the edges of a bipartite planar graph so that the minimum weight perfect matching becomes unique. The isolation
lemma as described in [MVV87] achieves the same for general graphs using a randomized weighting scheme, whe ..."
Cited by 3 (3 self)
Add to MetaCart
Abstract. We present a deterministic way of assigning small (log bit) weights to the edges of a bipartite planar graph so that the minimum weight perfect matching becomes unique. The isolation lemma
as described in [MVV87] achieves the same for general graphs using a randomized weighting scheme, whereas we can do it deterministically when restricted to bipartite planar graphs. As a consequence,
we reduce both decision and construction versions of the matching problem to testing whether a matrix is singular, under the promise that its determinant is 0 or 1, thus obtaining a highly parallel
SPL algorithm for bipartite planar graphs. This improves the earlier known bounds of non-uniform SPL by [ARZ99] and NC 2 by [MN95, MV00]. It also rekindles the hope of obtaining a deterministic
parallel algorithm for constructing a perfect matching in non-bipartite planar graphs, which has been open for a long time. Our techniques are elementary and simple. 1.
"... Abstract. We present a deterministic way of assigning small (log bit) weights to the edges of a bipartite planar graph so that the minimum weight perfect matching becomes unique. The isolation
lemma as described in [MVV87] achieves the same for general graphs using a randomized weighting scheme, whe ..."
Add to MetaCart
Abstract. We present a deterministic way of assigning small (log bit) weights to the edges of a bipartite planar graph so that the minimum weight perfect matching becomes unique. The isolation lemma
as described in [MVV87] achieves the same for general graphs using a randomized weighting scheme, whereas we can do it deterministically when restricted to bipartite planar graphs. As a consequence,
we reduce both decision and construction versions of the matching problem to testing whether a matrix is singular, under the promise that its determinant is 0 or 1, thus obtaining a highly parallel
SPL algorithm for bipartite planar graphs. This improves the earlier known bounds of non-uniform SPL by [ARZ99] and NC 2 by [MN95, MV00]. It also rekindles the hope of obtaining a deterministic
parallel algorithm for constructing a perfect matching in non-bipartite planar graphs, which has been open for a long time. Our techniques are elementary and simple. 1.
"... We survey algorithms and hardness results for two important classes of topology optimization problems: computing minimum-weight cycles in a given homotopy or homology class, and computing
minimum-weight cycle bases for the fundamental group or various homology groups. ..."
Add to MetaCart
We survey algorithms and hardness results for two important classes of topology optimization problems: computing minimum-weight cycles in a given homotopy or homology class, and computing
minimum-weight cycle bases for the fundamental group or various homology groups.
, 2012
"... Let G be a directed graph with n vertices and non-negative weights in its directed edges, embedded on a surface of genus g, and let f be an arbitrary face of G. We describe an algorithm to
preprocess the graph in O(gn log n) time, so that the shortest-path distance from any vertex on the boundary of ..."
Add to MetaCart
Let G be a directed graph with n vertices and non-negative weights in its directed edges, embedded on a surface of genus g, and let f be an arbitrary face of G. We describe an algorithm to preprocess
the graph in O(gn log n) time, so that the shortest-path distance from any vertex on the boundary of f to any other vertex in G can be retrieved in O(log n) time. Our result directly generalizes the
O(n log n)-time algorithm of Klein [Multiple-source shortest paths in planar graphs. In Proc. 16th Ann. ACM-SIAM Symp. Discrete Algorithms, 2005] for multiple-source shortest paths in planar graphs.
Intuitively, our preprocessing algorithm maintains a shortest-path tree as its source point moves continuously around the boundary of f. As an application of our algorithm, we describe algorithms to
compute a shortest non-contractible or non-separating cycle in embedded, undirected graphs in O(g 2 n log n) time.
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=6999892","timestamp":"2014-04-18T17:37:00Z","content_type":null,"content_length":"23251","record_id":"<urn:uuid:ea499c93-5303-47b3-9c2f-f84cb3b23582>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
|
When do 0-preserving isometries have to be linear?
up vote 2 down vote favorite
Let $\langle \mathbf{V},+,\cdot,||.|| \rangle$ be a normed vector space over $\mathbb{R}$.
Let $f : \mathbf{V} \to \mathbf{V}$ be an isometry that satisfies $f(\mathbf{0}) = \mathbf{0}$ .
What conditions on the vector space would or would not force $f$ to be linear?
examples: finite dimensional, complete, norm induced by an inner product, strictly convex
vector-spaces norms isometries
1 en.wikipedia.org/wiki/Mazur%2DUlam_theorem – Theo Buehler Apr 20 '11 at 7:08
I feel ... somewhat silly, although I would not have been able to find that on my own. I'll accept that if you post it as an answer. – Ricky Demer Apr 20 '11 at 7:12
1 Ricky, I wouldn't worry too much. One of the goals of MO, in my opinion, is to match up people with natural (and good!) questions to people who happen to know the answer. – Yemon Choi Apr 20 '11
at 7:16
add comment
1 Answer
active oldest votes
If you assume $f$ to be surjective then $f$ has to be linear without any assumptions on $V$ by the Mazur-Ulam theorem. Wikipedia doesn't offer much more information than a link to the
up vote 8 down beautiful recent proof by J. Väisälä.
vote accepted
3 There is a nice generalization of the Mazur-Ulam theorem due to Figiel, T. Figiel, On nonlinear isometric embedding of normed linear spaces, Bull. Acad. Polon. Sei. Ser. Sei. Math.
Astronom. Phys. 16 (1968), 185-188. If $f$ is an isometric embedding of the Banach space $X$ into the Banach space $Y$ and $f(0)=0$, then $X$ embeds isometrically isomorphically as
a norm one complemented subspace of the closed linear span of $f[X]$. – Bill Johnson Apr 20 '11 at 10:06
Thank you Bill, this is indeed very nice! I'll have a closer look at it later. – Theo Buehler Apr 20 '11 at 10:11
add comment
Not the answer you're looking for? Browse other questions tagged vector-spaces norms isometries or ask your own question.
|
{"url":"http://mathoverflow.net/questions/62380/when-do-0-preserving-isometries-have-to-be-linear/62382","timestamp":"2014-04-18T06:03:10Z","content_type":null,"content_length":"56821","record_id":"<urn:uuid:7e402035-d53a-4959-9fc4-08fe420b0f6a>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
|
solving for exteriors in a triangle
December 1st 2010, 06:22 PM #1
solving for exteriors in a triangle
I have very basic question about Alt. Ext
if a have a triangle with the hypothenus facing right, and te to legs make a 90degree angle i have one say is X and the exterior is Y how do i find this?
Huh? What does this even mean? What is X? Is it the length of one of the legs? Is it the 90degree angle? I assume an exterior angle is Y, but which one? Is it the one that forms a linear pair
with the right angle?
Sorry i understand it now, The Ext angle is the sum of the two non adjacent Int. Angles
December 1st 2010, 06:52 PM #2
Junior Member
Mar 2009
December 2nd 2010, 02:54 PM #3
December 2nd 2010, 06:04 PM #4
|
{"url":"http://mathhelpforum.com/geometry/165013-solving-exteriors-triangle.html","timestamp":"2014-04-20T09:38:11Z","content_type":null,"content_length":"38067","record_id":"<urn:uuid:43c3dd42-de3e-4b09-86e8-65a3193e50f3>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Borrego Springs Geometry Tutors
...I enjoy the challenge of finding these shortcomings and correcting them. Usually students find that once these are taken care of, they are free to soar. Oh the wonderfully weird trig!
11 Subjects: including geometry, calculus, ASVAB, algebra 1
...I used to hold events where I would cook healthy meals for up to fifty people every other week. I have trained cooking experience in Santa Cruz where I worked full time in a kitchen preparing a
wide variety of foods. I have spent much of my time at UCSD helping other UCSD students with their st...
10 Subjects: including geometry, calculus, algebra 1, algebra 2
...For instance, last year, 3 of my kids went from D or worse to B or better in less than 6 months. I have a BS in Finance, which was a math intensive major. I have 13 years experience tutoring a
variety of Mathematics to elementary through Junior College students.
14 Subjects: including geometry, reading, ASVAB, algebra 1
-Experienced math tutor in all levels of math.-Bachelor degree from University of California, Irvine-Specialize in Algebra, Calculus, SAT I Math, and SAT II Math.-Worked at Palomar College Math
Center for more than a year.-Worked in Upward Bound Program, which help college-bound high school students by providing after school tutoring.-Worked at Anna Little's Learning center for two years
11 Subjects: including geometry, calculus, statistics, Chinese
...I have a B.A. degree in Mathematics and a Ph.D. in theoretical physics. Also, many years' experience in academia, government and industry. I have a BA in mathematics and a PhD in physics.
29 Subjects: including geometry, chemistry, calculus, biology
|
{"url":"http://www.algebrahelp.com/Borrego_Springs_geometry_tutors.jsp","timestamp":"2014-04-18T18:23:27Z","content_type":null,"content_length":"24993","record_id":"<urn:uuid:f63bbd9f-a792-4034-ac71-4d42479849ce>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
|
All in one Explorer
All in one Explorer for Amibroker (AFL)
Here i uploaded a afl which is a combination of some indicator. This afl was modified by Manish.
Go to auto-analyser, In formula file pick the afl and click explore. It will explore for signals using many different indicators.
Here is a screenshot of how the indicator looks:
Tags: amibroker, exploration
Submitted by
over 3 years ago
Similar Formulas
Sponsor Links
9 Comments
_*dear mithun
its geting all the counters and giving comment,
can i get a filter so that 4 bullish or 4 berish ,then that stock only i have to get in exploration,is that posible*_
sir i have not found any result after copy and paste in formula editor
NOTHING IS SEEN.PLS EXPLAIN HOW TO SEE.
YOU WRITE – Go to auto-analyser, In formula file pick the afl and click explore. It will explore for signals using many different indicators.
SO WHERE IS AUTO ANALYSER? PLS CLEAR ALL.
Missing byu/sell variable assignments…anybody help?
Regretfully to let you know that it is not giving any scan results while I have followed your instructions. I have tried to analyse the top 100 Nifty F&O scrips for the entire day of 11th Feb. 2014
where I had witnessed buy/sell signals from other AFL but not your exploratory AFL. May be you may need to guide us in this regard. I am using Ami Pro 5.50. Thanks,
How to add OHLC columns to this explorer
Please login here to leave a comment.
|
{"url":"http://www.wisestocktrader.com/indicators/1106-all-in-one-explorer","timestamp":"2014-04-16T13:16:59Z","content_type":null,"content_length":"23391","record_id":"<urn:uuid:90c21360-7a87-4a87-9c2a-1b660fb89e2a>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] untenable matrix behavior in SVN
Stéfan van der Walt stefan@sun.ac...
Tue Apr 29 08:24:58 CDT 2008
Hi Charles
2008/4/29 Charles R Harris <charlesr.harris@gmail.com>:
> May I add that if I edit defmatrix.py to act like an array for scalar
> indexing, then the following works.
> In [1]: a = matrix(eye(2))
> In [2]: array([a,a])
> Out[2]:
> array([[[ 1., 0.],
> [ 0., 1.]],
> [[ 1., 0.],
> [ 0., 1.]]])
> This generates an error with the current version of matrix and, frankly, I
> am not going to be bothered going all through the numpy c sources to special
> case matrices to fix that. Someone else can do it if they wish. There are
> recursive routines that expect the dimensions to decrease on each call.
I'd also like to see matrices become proper hierarchical containers --
the question is just how to do that. Thus far, I'm most convinced by
the arguments for RowVectors/Columns, which leaves us with a sane
model for doing linear algebra, while providing the enhancements you
mentioned here and in comments to another ticket.
We were thinking of raising a warning on scalar indexing for 1.1, but
given the above, would that be sensical?
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-April/033308.html","timestamp":"2014-04-16T17:01:30Z","content_type":null,"content_length":"4000","record_id":"<urn:uuid:fa9e7082-3e36-405d-9266-c78b3fb90e29>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematical Proof/Methods of Proof/Proof by Induction
The beauty of induction is that it allows a theorem to be proven true where an infinite number of cases exist without exploring each case individually. Induction is analogous to an infinite row of
dominoes with each domino standing on its end. If you want to make all the dominoes fall, you can either:
1. push on the first one, wait to see what happens, and then check each domino afterwards (which may take a long time if there's an infinite number of dominoes!)
2. or you can prove that if any domino falls, then it will cause the domino after it to fall. (i.e. if the first one falls then the second one will fall, and if the second one falls then the third
one will fall, etc.)
Induction, essentially, is the methodology outlined in point 2.
Parts of InductionEdit
Induction is composed of three parts:
1. The Base Case (in the domino analogy, this shows the first domino will fall)
2. The Induction Hypothesis (in the domino analogy, we assume that a particular domino will fall)
3. The Inductive Step (in the domino analogy, we prove that the domino we assume will fall will cause the next domino to fall)
Weak InductionEdit
Weak induction is used to show that a given property holds for all members of a countable inductive set, this usually is used for the set of natural numbers.
Weak induction for proving a statement $P(n)$ (that depends on $n$) relies on two steps:
• $P(n)$ is true for a certain base step. Usually the base case is $n=1$ or $n=0$
• $P(k)\Rightarrow P(k+1)$. That is, given that $P(k)$ is true, $P(k+1)$ is also true.
If these two properties hold, one may induce that the property holds for all elements in the set in question. Returning to the example, if you are sure that you called your neighbor, and you knew
that everyone who was called in turn called his/her neighbor, then you would be guaranteed that everyone on the block had been called (assuming you had a linear block, or that it curved around
The first example of a proof by induction is always 'the sum of the first n terms:'
Theorem 2.4.1. For any fixed $n\in \mathbb N,$$\sum_{i=1}^{n}{i}=\frac{n(n+1)}{2}$
• Base step: $1=\frac{1\cdot 2}{2}$, therefore the base case holds.
• Inductive step: Assume that $\sum_{i=0}^{n}{i}=\frac{n(n+1)}{2}$. Consider $\sum_{i=1}^{n+1}{i}$.
$\sum_{i=1}^{n+1}{i}=\sum_{i=1}^n i + (n+1) = \frac{n(n+1)}{2}+n+1$
So the inductive case holds. Now by induction we see that the theorem is true.
Reverse InductionEdit
Reverse induction is a seldom-used method of using an inductive step that uses a negative in the inductive step. It is a minor variant of weak induction. The process still applies only to countable
sets, generally the set of whole numbers or integers, and will frequently stop at 1 or 0, rather than working for all positive numbers.
Reverse induction works in the following case.
• The property holds for a given value, say $M$.
• Given that the property holds for a given case, say $n=k+1$, Show that the property holds for $n=k$.
Then the property holds for all values $n\le M$.
Strong InductionEdit
In weak induction, for the inductive step, we only required that for a given $n$, its immediate predecessor ($n-1$) satisfies the theorem (i.e., $P(n-1)$ is true). In strong induction, we require
that not only the immediate predecessor, but all predecessors of $n$ satisfy the theorem. The variation in the inductive step is:
• If $P(k)$ is true for all $k<n,$ then $P(n)$ is true.
The reason this is called strong induction is fairly obvious--the hypothesis in the inductive step is much stronger than the hypothesis is in the case of weak induction. Of course, for finite
induction it turns out to be the same hypothesis, but in the case of transfinite sets, weak induction is not even well-defined, since some sets have elements that do not have an immediate
Transfinite InductionEdit
Used in proving theorems involving transfinite cardinals. This technique is used in set theory to prove properties of cardinals, since there is rarely another way to go about it.
Inductive SetEdit
We first define the notion of a well-ordered set. A set $X$ is well-ordered if there is a total order < on $X$ and that whenever $Y\subset X$ is non-empty, there is a least-element in $Y$. That is, $
\exists p\in Y$ such that $p<q \ \forall q\in Y$.
An inductive set is a set $A\subset X$ such that the following hold:
1. $\alpha\in A$ (where $\alpha$ is the least element of $X$)
2. If $\beta\in A$ then $\forall\gamma\in X$ such that $\beta < \gamma, \gamma\in A$
Of course, you look at that and say "Wait a minute. That means that $A=X$!" And, of course you'd be right. That's exactly why induction works. The principle of induction is the theorem that says:
Theorem 2.4.2. If $X$ is a non-empty well-ordered set and $A\subset X$ is an inductive subset of $X$ then $A=X$.
The proof of this theorem is left as a very simple exercise. Here we note that the set of natural numbers is clearly well-ordered with the normal order that you are familiar with, so $\mathbb{N}$ is
an inductive set. If you accept the axiom of choice, then it follows that every set can be well-ordered.
Last modified on 8 December 2012, at 18:46
|
{"url":"http://en.m.wikibooks.org/wiki/Mathematical_Proof/Methods_of_Proof/Proof_by_Induction","timestamp":"2014-04-19T14:35:02Z","content_type":null,"content_length":"27648","record_id":"<urn:uuid:80ec7e82-fcf9-442d-959f-56ae046b0a9d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Classical Electrodynamics, 3rd Edition
Classical Electrodynamics, 3rd Edition
ISBN: 978-0-471-30932-1
832 pages
August 1998, ©1999
A revision of the defining book covering the physics and classical mathematics necessary to understand electromagnetic fields in materials and at surfaces and interfaces. The third edition has been
revised to address the changes in emphasis and applications that have occurred in the past twenty years.
See More
Introduction to Electrostatics.
Boundary-Value Problems in Electrostatics: I.
Boundary-Value Problems in Electrostatics: II.
Multipoles, Electrostatics of Macroscopic Media, Dielectrics.
Magnetostatics, Faraday's Law, Quasi-Static Fields.
Maxwell Equations, Macroscopic Electromagnetism, Conservation Laws.
Plane Electromagnetic Waves and Wave Propagation.
Waveguides, Resonant Cavities, and Optical Fibers.
Radiating Systems, Multipole Fields and Radiation.
Scattering and Diffraction.
Special Theory of Relativity.
Dynamics of Relativistic Particles and Electromagnetic Fields.
Collisions, Energy Loss, and Scattering of Charged Particles, Cherenkov and Transition Radiation.
Radiation by Moving Charges.
Bremsstrahlung, Method of Virtual Quanta, Radiative Beta Processes.
Radiation Damping, Classical Models of Charged Particles.
See More
• SI units used in the first 10 chapters. Gaussian units are retained in the later chapters.
• Over 110 new problems.
• New sections on the principles of numerical techniques for electrostatics and magnetostatics, as well as some elementary problems.
• Faraday's Law and quasi-static fields are now in Chapter 5 with magnetostatics, permitting a more logical discussion of energy and inductances.
• Discussion of radiation by charge-current sources, in both elementary and exact multipole forms, has been consolidated in Chapter 9.
• Applications to scattering and diffraction are now in Chapter 10.
• Two new sections in Chapter 8 discuss the principles of optical fibers and dielectric waveguides.
• The treatment of energy loss (Chapter 13) has been shortened and strengthened.
• The discussion of synchrotron radiation as a research tool in Chapter 14 has been augmented by a detailed section on the physics of wigglers and undulators for synchrotron light sources.
• New material in Chapter 16 on radiation reaction and models of classical charged particles.
See More
|
{"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-047130932X.html","timestamp":"2014-04-16T04:21:20Z","content_type":null,"content_length":"42768","record_id":"<urn:uuid:73ff1f49-beed-4879-9ac5-59111de5c9b0>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Smooth graphs
L. Soukup:
Smooth graphs
A graph G on omega_1 is called <omega-smooth if for each uncountable subset W of omega_1, G is isomorphic to G[W-W'] for some finite W'. We show that in various models of ZFC if a graph G is
<omega-smooth then G is necessarily trivial, i.e, either complete or empty. On the other hand, we prove that the existence of a non-trivial, <omega-smooth graph is also consistent with ZFC.
Downloading the paper
|
{"url":"http://www.renyi.hu/pub/setop/sme.html","timestamp":"2014-04-18T08:41:46Z","content_type":null,"content_length":"1119","record_id":"<urn:uuid:d5fe39a1-9dac-4be4-a12c-18246f140764>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Can anyone tell me about how to use the local density approximation in
How do you mean 'use'? LDA is not a functional in itself, it's an ansatz, assuming the exchange-correlation energy for each point in the density can be described by each point in the density. Hence
[tex]E_{xc}[\rho] = \int \rho(r)\epsilon(\rho(r))dr[/tex]
Typically you also assume that exchange and correlation contributions are separable, working from the homogeneous electron gas, you can get an analytical expression for the exchange energy, but not
the correlation.
Parr and Yang's well-known book has the details.
If you're asking whether or not applying an LDA method can be done analytically, that'd depend on your system. You probably could for a homogeneous electronic gas, but not much else.
|
{"url":"http://www.physicsforums.com/showthread.php?t=409959","timestamp":"2014-04-21T12:10:53Z","content_type":null,"content_length":"23452","record_id":"<urn:uuid:ac4e6537-bd7c-4792-a7b0-37f9109e0fd9>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] tensor dot ?
Travis Oliphant oliphant.travis at ieee.org
Mon Aug 28 22:03:29 CDT 2006
Simon Burton wrote:
>>>> numpy.dot.__doc__
> matrixproduct(a,b)
> Returns the dot product of a and b for arrays of floating point types.
> Like the generic numpy equivalent the product sum is over
> the last dimension of a and the second-to-last dimension of b.
> NB: The first argument is not conjugated.
> Does numpy support summing over arbitrary dimensions,
> as in tensor calculus ?
> I could cook up something that uses transpose and dot, but it's
> reasonably tricky i think :)
I've just added tensordot to NumPy (adapted and enhanced from
numarray). It allows you to sum over an arbitrary number of axes. It
uses a 2-d dot-product internally as that is optimized if you have a
fast blas installed.
If a.shape is (3,4,5)
and b.shape is (4,3,2)
tensordot(a, b, axes=([1,0],[0,1]))
returns a (5,2) array which is equivalent to the code:
c = zeros((5,2))
for i in range(5):
for j in range(2):
for k in range(3):
for l in range(4):
c[i,j] += a[k,l,i]*b[l,k,j]
More information about the Numpy-discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-August/010361.html","timestamp":"2014-04-16T16:12:30Z","content_type":null,"content_length":"3658","record_id":"<urn:uuid:fe149fc3-a76e-46fe-89b1-2585dae73a8b>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
|
erimeter of a rhombus
Perimeter of a rhombus
Definition: The total distance around the outside of a
Try this Drag any orange dot to resize the rhombus. The perimeter is calculated as you drag.
How to find the perimeter of a rhombus
Like any polygon, the perimeter is the total distance around the outside, which can be found by adding together the length of each side. In the case of a rhombus, all four sides are the same length
by definition, so the perimeter is four times the length of a side. Or as a formula:
perimeter = 4S where:
S is the length of any one side
In the figure above, drag any orange dot to resize the rhombus. From the side length shown, calculate the perimeter and verify your result matches the formula at the top of the diagram.
Related polygon topics
Types of polygon
Area of various polygon types
Perimeter of various polygon types
Angles associated with polygons
Named polygons
(C) 2009 Copyright Math Open Reference. All rights reserved
|
{"url":"http://www.mathopenref.com/rhombusperimeter.html","timestamp":"2014-04-17T04:25:21Z","content_type":null,"content_length":"13415","record_id":"<urn:uuid:048da19c-10c7-4d9a-9135-168b1b87eb62>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Homomorphisms and Factor Rings
February 21st 2013, 02:12 AM
Homomorphisms and Factor Rings
Let R and R' be rings and let N and N' be ideals of R and R' respectively.
Let $\phi$ be a homomorphism of R into R'.
Show that $\phi$ induces a natural homomorphism $\phi_* : R/N \rightarrow R'/N'$ if $\ \ \phi [N] \subseteq N'$
February 21st 2013, 10:34 AM
Re: Homomorphisms and Factor Rings
well, what do we have to work with?
we are given the homomorphism φ:R-->R', the ideal N of R and the ideal N' of R', and that φ(N) is a subset of N'.
so the natural thing to do is define: φ[[*]](r+N) = φ(r)+N'.
whenEVER you define things on cosets, it is imperative that you verify that the definition depends ONLY on the coset r+N, and not on "r".
so we must check that if r'+N = r+N, that φ[[*]](r+N) = φ[[*]](r'+N).
if r+N = r'+N, this means r-r' is in N. since φ maps N inside N', φ(r-r') is in N'. since φ is a homomorphism, φ(r-r') = φ(r)-φ(r').
so we have φ(r)-φ(r') is in N', hence φ(r)+N' = φ(r')+N', that is: φ[[*]](r+N) = φ[[*]](r'+N), as desired.
now all that is left to do is verify that φ[[*]] is a ring homomorphism. you can do this.
February 21st 2013, 11:27 AM
Re: Homomorphisms and Factor Rings
Thanks Deveno ... appreciate your help.
WIll now work through the post
|
{"url":"http://mathhelpforum.com/advanced-algebra/213536-homomorphisms-factor-rings-print.html","timestamp":"2014-04-17T15:45:14Z","content_type":null,"content_length":"5767","record_id":"<urn:uuid:b50f6010-1b9f-47fc-ae90-04c6c3676dc1>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Real Analysis - Sequences 3
d.) Assuming this sequence is in the metric space $(M,d)$, if $\{b_n\}$ converges, then it is bounded. ( $\exists x_0\in M, R\in\mathbb{R}$ such that $d(x_0,b_n)<R~\forall n$) So $d(a_n,x_0)-R<d
(a_n,b_n)~\forall n$. (Triangle Inequality) But since $\{a_n\}$ is unbounded, so is $d(a_n,x_0)$, because $x_0$ is a fixed point. Therefore $d(a_n,b_n)$ is unbounded as well. @ Drexel: $x_n$ needs to
|
{"url":"http://mathhelpforum.com/differential-geometry/129128-real-analysis-sequences-3-a.html","timestamp":"2014-04-17T08:56:34Z","content_type":null,"content_length":"47077","record_id":"<urn:uuid:c86839f7-7e01-4389-bd2b-07f848949e27>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
|
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/users/allamiro/medals","timestamp":"2014-04-19T15:04:53Z","content_type":null,"content_length":"73223","record_id":"<urn:uuid:ff54e5f3-19c4-4249-a6db-844a052b0159>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00101-ip-10-147-4-33.ec2.internal.warc.gz"}
|
differentiable function
May 18th 2010, 12:52 PM #1
differentiable function
suppose that f is a differentiable function with the derivative f'(x)=(x-1)(x+2)(x+3). determine the values of x for which the function f is increasing and the values for x for which the function
is decreasing
Should just be a simple case of determining f'(x) near 1,-2 and -3.
I imagine there's an easier or probably a more rigorous approach but just test a value in each of the intervals $(-\infty, -3)$, $(-3,-2)$, $(-2,1)$, $(1,\infty)$
May 18th 2010, 01:17 PM #2
|
{"url":"http://mathhelpforum.com/calculus/145368-differentiable-function.html","timestamp":"2014-04-18T11:52:12Z","content_type":null,"content_length":"32630","record_id":"<urn:uuid:a71b6955-aba3-4150-b127-f1221bea42d4>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Natural numbers
So, regarding the inclusion of the zero or not, I prefer to include it, because:
(1) If you construct the naturals from set theory, then you start from the empty set, which is taken as 0, and obtain the others by applying successive applications of the successor function.
however ultimately irrelevant it may be (since we can always "shut up and compute"), I am inclined to refuse to call the set constructed starting with the empty set as a set of natural numbers
(although it's all fine with the peano axioms)... "no elements" isn't really a number of elements, definitely not what we (or should I say "I" instead) intuitively think about when we ponder about
I like to say it's a set of ordinals (as you also pointed out), [tex]\omega[/tex]
but again, perhaps this is ultimately pointless convention on my part (the inverse may also be true).
(3) When you construct ([tex]\mathbb Z[/tex]) from [tex]\mathbb N[/tex], it's easier (and neater) if you already have the 0.
consider this construction
on [tex]\mathbb N \times \mathbb N[/tex] we define the equivalence relation [tex]\equiv[/tex] as:
[tex](m,n)\equiv(m',n') \Leftrightarrow m+n'=n+m'[/tex]
[tex]\mathbb Z[/tex] will be [tex]\frac{\mathbb N\times\mathbb N}{\equiv}[/tex] with addition and multiplication defined as:
with [tex]\hat{(1,1)}\equiv 0[/tex] and [tex]\hat{(2,1)}\equiv 1[/tex]
I think this is a pretty straightforward and neat construction
0 as a natural number (appearing only for the integers)...
|
{"url":"http://www.physicsforums.com/showthread.php?t=367572","timestamp":"2014-04-18T21:22:07Z","content_type":null,"content_length":"57817","record_id":"<urn:uuid:79273050-e9a8-4698-ba30-f82f180e21ba>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vector equation of a plane
March 28th 2011, 03:35 PM
Vector equation of a plane
Write a vector and parametric equation for a plane that:
b) contains (-5,9,-3) and is parallel to [x,y,z] = [1, -2, 7] + s[4, -1, -3] and
[x,y,z] = [7,-2,15] + t[1,6,-8]
I'm not sure where to start. Usually, when asking for parallel lines, I find if the direction vectors are scalar multiples of each other, then I find out if s and t have the same value for all x
y and z.
I'm confused about planes.
March 29th 2011, 06:11 AM
Write a vector and parametric equation for a plane that:
b) contains (-5,9,-3) and is parallel to [x,y,z] = [1, -2, 7] + s[4, -1, -3] and
[x,y,z] = [7,-2,15] + t[1,6,-8]
I'm not sure where to start. Usually, when asking for parallel lines, I find if the direction vectors are scalar multiples of each other, then I find out if s and t have the same value for all x
y and z.
I'm confused about planes.
One way to find the equation of a plane is to know a normal vector to the plane and a point in the plane. Since we have the point we need to find a normal vector to the plane. Since the plane
must be parallel to both of the above lines we can find a vector perpendicular to both of them by taking the cross product of the direction vectors.
$\begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ 1 & -2 & 7 \\ 4 & -1 & -3\end{vmatrix}=13\mathbf{i} +31\mathbf{j}+ 7\mathbf{k}$
Can you finish from here?
|
{"url":"http://mathhelpforum.com/advanced-algebra/176112-vector-equation-plane-print.html","timestamp":"2014-04-16T16:08:34Z","content_type":null,"content_length":"5413","record_id":"<urn:uuid:5acad14f-eb35-4aba-9da4-5089d19e919e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Orange, NJ Precalculus Tutor
Find an Orange, NJ Precalculus Tutor
...I have tutored in all these subjects at one point or another, all achieving a high success rate with each student: Mathematics (Algebra, Geometry, Trigonometry, Pre-Calculus, Calculus, SAT II,
etc); SAT Prep (Primarily English & Math); ACT Prep (Primarily English, Math, & Science); Physics (Regen...
33 Subjects: including precalculus, physics, calculus, GRE
...My students have seen dramatic improvement in their scores from my tutoring. One of my most recent students in SAT prep earned 800CR/780M, and another earned a combined score of 2290! I'd love
to teach you in any of my listed academic subjects.
26 Subjects: including precalculus, English, calculus, physics
...Hello,I've been tutoring all aspects of the ASVAB for over 2 years. I have found my knowledge of advanced mathematics, English and other standardized tests can be directly applied to help
potential students achieve their goals in this test. I break down the exam into efficient and effective tes...
55 Subjects: including precalculus, English, calculus, reading
...I am currently working on my business part time so I took up tutoring again because I enjoy it and it comes easy to me. I have references available upon request. I am available during the
week, afternoons or evenings.
10 Subjects: including precalculus, geometry, algebra 1, GED
...I am senior student majoring in electrical engineering in December 2011. I am currently a campus tutor at a local college on the subject of Math (from Algebra to calculus), General physics 1&
2, electrical engineering and computer engineering courses. I have been tutoring at the college of Staten island for 4 years.
23 Subjects: including precalculus, French, calculus, physics
Related Orange, NJ Tutors
Orange, NJ Accounting Tutors
Orange, NJ ACT Tutors
Orange, NJ Algebra Tutors
Orange, NJ Algebra 2 Tutors
Orange, NJ Calculus Tutors
Orange, NJ Geometry Tutors
Orange, NJ Math Tutors
Orange, NJ Prealgebra Tutors
Orange, NJ Precalculus Tutors
Orange, NJ SAT Tutors
Orange, NJ SAT Math Tutors
Orange, NJ Science Tutors
Orange, NJ Statistics Tutors
Orange, NJ Trigonometry Tutors
Nearby Cities With precalculus Tutor
Belleville, NJ precalculus Tutors
Bloomfield, NJ precalculus Tutors
East Orange precalculus Tutors
Glen Ridge precalculus Tutors
Harrison, NJ precalculus Tutors
Irvington, NJ precalculus Tutors
Kearny, NJ precalculus Tutors
Livingston, NJ precalculus Tutors
Maplewood, NJ precalculus Tutors
Montclair, NJ precalculus Tutors
Newark, NJ precalculus Tutors
Nutley precalculus Tutors
South Kearny, NJ precalculus Tutors
South Orange precalculus Tutors
West Orange precalculus Tutors
|
{"url":"http://www.purplemath.com/Orange_NJ_precalculus_tutors.php","timestamp":"2014-04-19T02:40:39Z","content_type":null,"content_length":"24094","record_id":"<urn:uuid:856bad3d-de54-4039-9d8d-bf315b36aed6>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Dacono SAT Tutor
Find a Dacono SAT Tutor
...My promise: your student will get great grades/scores and be less stressed while doing it. Here's what I offer to each student: -Immediate results & customized individual feedback -Keep track
of what your student needs to focus on daily, weekly, and for the whole semester. -The insights, tips, ...
41 Subjects: including SAT math, SAT reading, English, Spanish
...Thanks! Although I am an English native speaker, born in New York City, I was an Exchange Student at Heidelberg University as an undergraduate, a German minor at Yale, and an instructor later
on at the University of Maryland in Germany. During the 43 years of my marriage to a German national, moreover, we spoke mainly German at home.
15 Subjects: including SAT writing, SAT reading, English, writing
...If you are looking for a tutor in physics, I look forward to getting you back on the right track. I have been a PowerPoint user since 1995. I have constructed presentations for simple group
meetings, executive presentations and over 25 conference presentations.
14 Subjects: including SAT math, physics, calculus, Microsoft Excel
...My goal is to empower you to test yourself and monitor your own progress. I am confident that I can provide an approach which will improve performance for test takers in many different subjects
including Algebra, Psychology, Statistics, Research Methods, Biology, American/World History, and Engl...
31 Subjects: including SAT writing, SAT math, SAT reading, English
...After working with my 9th graders for six years, I can tell you THESE are imperative to learn. One of the hardest things to figure out is how you best learn so when you do study it is
effective. I normally introduce a few core study skills to my students.
52 Subjects: including SAT reading, SAT math, SAT writing, reading
|
{"url":"http://www.purplemath.com/dacono_sat_tutors.php","timestamp":"2014-04-19T10:11:44Z","content_type":null,"content_length":"23431","record_id":"<urn:uuid:cf037350-39c6-426d-b2b9-01d93e6d3c3f>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Frontiers in Science Colloquium
Date/Time/Room: Friday (3/2/2001) at 12:00 noon in 114 Chemistry Research Building
Speaker: Hal L. Smith, Department of Mathematics, Arizona State University
``How Many Species Can a Given Number of Resources Support?''
Abstract: Given k essential resources, how many species can be supported in purely exploitative competition for the given resources in a spatially and temporally homogeneous environment? It has long
been known that generically speaking, equilibrium coexistence is impossible if n>k and, as competition rarely generates oscillations, one might expect that not more than k species can be supported by
k (non-reproducing) resources. However, numerical simulations of the standard mathematical model in a recent Nature article of Huisman and Weissing strongly suggest that up to six species can be
supported by three resources and as many as twelve species can be supported on five resources. Their simulations show that certain solutions of the standard model of k species with k resources
oscillate (periodically if k=3 and chaotically if k=5) and that these oscillatory communities (solutions) can be successively invaded by one species after another up to the total numbers given above.
These results have important implications for the planktonic paradox-Why can so many plankton species seemingly coexist on so few limiting resources?
Without going into great mathematical detail, we review what can actually be proved mathematically, focusing on the case of two resources and n>=2 species, and 3 resources and 3 species.
|
{"url":"http://www.uta.edu/math/pages/main/abstracts/smith_3_2_01.html","timestamp":"2014-04-23T16:43:28Z","content_type":null,"content_length":"2715","record_id":"<urn:uuid:0c90f9de-71db-4d59-88de-08b4f2170c42>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
|
uniform continuity theorem
May 20th 2010, 09:34 AM #1
Junior Member
Apr 2010
uniform continuity theorem
The uniform continuity theorem says if f: A->N is continuous and Kin A is compact, then f is uniformly continuous on K.
But if I take 1/n on [0,1] it is not uniformly continuous because as we approach 0 from the left the y values get further and further apart. Is tehre something I'm missing in this theorem?
The uniform continuity theorem says if f: A->N is continuous and Kin A is compact, then f is uniformly continuous on K.
But if I take 1/n on [0,1] it is not uniformly continuous because as we approach 0 from the left the y values get further and further apart. Is tehre something I'm missing in this theorem?
This makes no sense whatsoever. What's your function? Are you saying that $f\left(\frac{1}{n}\right)$ isn't Cauchy?
I meant f=1/x, f:[0,1] -> R
f is not defined at 0, therefore not continuous there. That theorem requires (although you didn't state it) that f is continuous in K.
May 20th 2010, 09:41 AM #2
May 20th 2010, 10:03 AM #3
Junior Member
Apr 2010
May 20th 2010, 11:03 AM #4
Super Member
Aug 2009
|
{"url":"http://mathhelpforum.com/differential-geometry/145722-uniform-continuity-theorem.html","timestamp":"2014-04-20T21:08:27Z","content_type":null,"content_length":"38611","record_id":"<urn:uuid:bfb33f20-6772-47cc-8311-7b2444b92c56>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00033-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Artificial Intelligence: Problem Set 1
Assigned: Jan. 18
Due: Feb. 8 (NOte: Due date changed because class was cancelled for snow on Jan. 25)
The BIN-PACKING PROBLEM problem is defined as follows: You are given and a collection of N objects with sizes S[1] ... S[N]. The problem is to pack these objects into bins of size B, using as few
bins as possible.
For example, suppose that N=8, that B=102, and that the objects have the following sizes:
S[1] = 42; S[2] = 40; S[3] = 30; S[4] = 31; S[5] = 72; S[6] = 70; S[7] = 60; S[8] = 61.
Then it is possible to pack these into 4 bins by packing objects 1 and 7 into bin 1; objects 2 and 8 into bin 2; objects 3 and 5 into bin 3; and objects 4 and 6 into bin 4. (Note: In this particular
problem the optimal solution happens to have two full bins, and all bins containing the same number of objects, but neither of those are necessary conditions.)
Problem 1
Consider the following non-deterministic algorithm:
/* BIN-PACK goes through the objects in order and non-deterministically
chooses a bin to put them in; either a partially filled bin or
a new bin. It returns, non-deterministically, all possible assignments. */
function BIN-PACK(in N, B : integer;
S[1 .. N] : array of integers --- sizes)
out A[1 .. N] : array of integers --- assignments)
/* Backtraking variables */
var FREE[1 .. N] : the amount of space left in each bin.
P : integer --- number of processors used + 1;
/* Initialization */
P := 1;
FREE[P] := B;
/* Main loop */
for I := 1 to N do
choose processor Q in {1 .. P} such that FREE[Q] >= S[I];
A[I] := Q;
FREE[Q] := FREE[Q] - S[I];
if Q = P then begin P := P+1; FREE[P] := B end; /* starting new bin */
end BIN-PACK
A. Suppose that BIN-PACK is implemented using depth-first search, and that at each choice point it tests the possible values of Q in increasing order. What are the first five solutions found? (Note:
this algorithm returns all consistent assignments, not just the optimal ones.)
B. Describe the state space used in this algorithm. What is a state? What is an operator? What is the start state? What is a goal state?
C. What is the maximum branching factor in this space? What is the depth of the space? Give an upper bound on the size of the space.
Problem 2
A useful general technique in searches for optimal solutions to this kind of problem is called ``branch and bound''. It says that, if you can be sure that your current path in the space cannot
possibly lead to a better solution than one you have already found, then you can abandon the current path. In the BIN-PACKING problem, this is interpreted as follows: If you have already found a
solution that uses K bins, and your current partial assignment uses K bins, then there is no need to pursue this branch of the state space any further, because it cannot lead to a solution better
than the one you already have found.
To express this kind of rule in the language of non-deterministic algorithms requires some new constructs that allow communication between different branches of the search tree; I am not going to go
into these.
If the BIN-PACK algorithm is implemented as in problem 1 and this technique is used, what is the first state encountered where pruning takes place?
Problem 3:
The bin-packing problem can be viewed as a constrained AND/OR tree. It is a three-level tree, where the root is an AND node, the second level nodes are OR nodes, and the third level nodes are leaves.
The leaves of the tree are assignments of objects to bins; e.g. ``Object 3 goes in bin 2''. The OR nodes correspond to objects: each object can either go in bin 1, or in bin 2 ... or in bin N. The
constraint is that no bin contains more than its capacity.
The figure shows the AND/OR tree for 3 objects being assigned to up to 3 bins:
We may now consider two different heuristics for improving the behavior of the BIN-PACK algorithm:
1. Go through the objects in decreasing order of size.
2. When deciding which bin to place the object in, try bins in increasing order of FREE. That is, try it in the fullest bin first; then in the next fullest, and so on.
A. Which of these corresponds to choosing an order of the children of the AND node? Which corresponds to choosing an order of the children of the OR nodes?
B. In our example, what is the effect of applying heuristic 1? Heuristic 2? Both heuristics?
C. (Extra credit) Construct an example in which, when both heuristics 1 and 2 are applied, the first solution found by BIN-PACK is not, in fact, the optimal solution.
|
{"url":"http://cs.nyu.edu/courses/spring00/G22.2560-001/hwk1.html","timestamp":"2014-04-21T14:40:57Z","content_type":null,"content_length":"5316","record_id":"<urn:uuid:f7178828-332d-4c65-9f38-a3d83828e23b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
|
What is the distance from the point (1,-3,8) to the x-axis? - WyzAnt Answers
What is the distance from the point (1,-3,8) to the x-axis?
What is the distance from the point (1,-3,8) to the x-axis? This is a calculus 3 question.
Tutors, please
to answer this question.
Drop perpendicular to the x-axis, it intersects x-axis at the point (1,0,0). The vector from the point (1,0,0) to the point (1,-3, 8) is perpendicular to the x-axis and its length gives you the
distance from the point (1,-3,8) to the x-axis. The coordinates of a vector are (0,-3,8). Its length is √(0^2+(-3)^2+8^2)=√73. This is your answer.
Distance between 2 points P1 to P2 in XYZ or 3-space is |P1P2| =
square root of [(X[2] - X[1])^2 + (Y[2] -Y[1])^2 + (Z[2]-Z[1])^2]
|
{"url":"http://www.wyzant.com/resources/answers/13517/what_is_the_distance_from_the_point_1_3_8_to_the_x_axis","timestamp":"2014-04-21T00:47:52Z","content_type":null,"content_length":"45417","record_id":"<urn:uuid:ec763bc8-d243-4d4e-81d5-19dc6161e266>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00562-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Kids.Net.Au - Encyclopedia > User:Egil Sandbox
are 100 m^2
liter 1000 cc The liter was defined in 1901 as the
oldliter[?] 1.000028 dm^3 space occupied by 1 kg of pure water at
l liter the temperature of its maximum density
under a pressure of 1 atm. This was supposed to be 1000 cubic cm, but it was discovered that the original measurement was off. In 1964, the liter was redefined to be exactly 1000 cubic centimeters.
mho siemens Inverse of ohm, hence ohm spelled backward
galvat[?] ampere Named after Luigi Galvani
angstrom 1e-10 m Convenient for describing molecular sizes
xunit[?] 1.00202e-13 meter Used for measuring wavelengths
siegbahn[?] xunit of X-rays. It is defined to be
1|3029.45 of the spacing of calcite planes at 18 degC. It was intended to be exactly 1e-13 m, but was later found to be off slightly.
fermi 1e-15 m Convenient for describing nuclear sizes
Nuclear radius is from 1 to 10 fermis
barn 1e-28 m^2 Used to measure cross section for
particle physics collision, said to have originated in the phrase "big as a barn".
shed[?] 1e-24 barn Defined to be a smaller companion to the
barn, but it's too small to be of much use.
brewster micron^2/N measures stress-optical coef
diopter /m measures reciprocal of lens focal length
fresnel[?] 1e12 Hz occasionally used in spectroscopy
shake[?] 1e-8 sec
svedberg 1e-13 s Used for measuring the sedimentation
coefficient for centrifuging.
gamma microgram
lambda microliter
spat[?] 1e12 m Rarely used for astronomical measurements
preece[?] 1e13 ohm m resistivity
planck[?] J s action of one joule over one second
sturgeon /henry magnetic reluctance
daraf[?] 1/farad elastance (farad spelled backwards)
leo 10 m/s^2
poiseuille[?] N s / m^2 viscosity
mayer[?] J/g K specific heat
mired[?] / microK reciprocal color temperature. The name
abbreviates micro reciprocal degree.
metricounce[?] 25 g
mounce[?] metricounce
finsenunit[?] 1e5 W/m^2 Measures intensity of ultraviolet light
with wavelength 296.7 nm.
fluxunit[?] 1e-26 W/m^2 Hz Used in radio astronomy to measure
the energy incident on the receiving body across a specified frequency bandwidth. [12]
jansky fluxunit K. G. Jansky identified radio waves coming
Jy[?] jansky from outer space in 1931.
pfu[?] / cm^2 sr s particle flux unit -- Used to measure
rate at which particles are received by a spacecraft as particles per solid angle per detector area per second. [18]
katal mol/sec Measure of the amount of a catalyst. One
kat[?] katal katal of catalyst enables the reaction
to consume or produce on mol/sec.
minute 60 s
min minute
hour 60 min
hr hour
day 24 hr
d day
da day
week 7 day
wk[?] week
sennight[?] 7 day
fortnight 14 day
blink[?] 1e-5 day Actual human blink takes 1|3 second
ce[?] 1e-2 day
cron[?] 1e6 years
watch 4 hours time a sentry stands watch or a ship's
crew is on duty.
bell 1|8 watch Bell would be sounded every 30 minutes.
circle 2 pi radian
degree 1|360 circle
arcdeg[?] degree
arcmin[?] 1|60 degree
arcminute arcmin
arcsec 1|60 arcmin
arcsecond arcsec
quadrant[?] 1|4 circle
quintant[?] 1|5 circle
sextant 1|6 circle
pulsatance[?] radian / sec
gon 1|100 rightangle measure of grade
grade gon
centesimalminute[?] 1|100 grade
centesimalsecond[?] 1|100 centesimalminute
milangle[?] 1|6400 circle Official NIST definition.
Another choice is 1e-3 radian.
pointangle[?] 1|32 circle Used for reporting compass readings
centrad[?] 0.01 radian Used for angular deviation of light
through a prism.
mas[?] milli-arcsec Used by astronomers
seclongitude[?] circle (seconds/day) Astronomers measure longitude
(which they call right ascension) in time units by dividing the equator into 24 hours instead of 360 degrees.
Solid angle measure
sphere 4 pi sr
squaredegree[?] 1|180^2 pi^2 sr
squareminute[?] 1|60^2 squaredegree
squaresecond[?] 1|60^2 squareminute
squarearcmin[?] squareminute
squarearcsec[?] squaresecond
sphericalrightangle[?] 0.5 pi sr
octant[?] 0.5 pi sr
Concentration measures
percent 0.01
mill 0.001 Originally established by Congress in 1791
as a unit of money equal to 0.001 dollars, it has come to refer to 0.001 in general. Used by some towns to set their property tax rate, and written with a symbol similar to the % symbol but with two
0's in the denominator. [18]
proof 1|200 Alcohol content measured by volume at
60 degrees Fahrenheit. This is a USA measure. In Europe proof=percent.
ppm 1e-6
partspermillion[?] ppm
ppb[?] 1e-9
partsperbillion[?] ppb USA billion
ppt[?] 1e-12
partspertrillion[?] ppt USA trillion
karat 1|24 measure of gold purity
caratgold[?] karat
gammil[?] mg/l
basispoint[?] 0.01 % Used in finance
fine 1|1000 Measure of gold purity
The pH scale is used to measure the concentration of hydronium (H3O+) ions in
a solution. A neutral solution has a pH of 7 as a result of dissociated
water molecules.
pH pH(x) [;mol/liter] 10^(-x) mol/liter ; (-log(pH liters/mol))
Two types of units are defined: units for computing temperature differences
and functions for converting absolute temperatures. Conversions for
differences start with "deg" and conversions for absolute temperature start
with "temp".
°F tempF(x) [;K] (x+(-32)) degF + stdtemp ; (tempF+(-stdtemp))/degF + 32
°K[?] tempC(x) [;K] x K + stdtemp ; (tempC +(-stdtemp))/K In 1741 Anders Celsius
°C tempcelsius(x) [;K] tempC(x); ~tempC(tempcelsius) introduced a temperature
degcelsius[?] K scale with water boiling at 0 degrees and
K freezing at 100 degrees at standard
pressure. After his death the fixed points were reversed and the scale was called the centigrade scale. Due to the difficulty of accurately measuring the temperature of melting ice at standard
pressure, the centigrade scale was replaced in 1954 by the Celsius scale which is defined by subtracting 273.15 from the temperature in Kelvins. This definition differed slightly from the old
centigrade definition, but the Kelvin scale depends on the triple point of water rather than a melting point, so it can be measured accurately.
fahrenheit 5|9 degC Fahrenheit defined his temperature scale
5|9 degC by setting 0 to the coldest temperature
he could produce in his lab with a salt water solution and by setting 96 degrees to body heat. In Fahrenheit's words:
Placing the thermometer in a mixture of sal ammoniac or sea salt, ice, and water a point on the scale will be found which is denoted as zero. A second point is obtained if the same mixture is used
without salt. Denote this position as 30. A third point, designated as 96, is obtained if the thermometer is placed in the mouth so as to acquire the heat of a healthy man." (D. G. Fahrenheit, Phil.
Trans. (London) 33, 78, 1724)
rankine degF The Rankine scale has the
degreesrankine Fahrenheit degree, but it's zero
degF is at absolute zero.
reaumur 10|8 degC The Reaumur scale was used in Europe and
particularly in France. It is defined to be 0 at the freezing point of water and 80 at the boiling point. Reaumur apparently selected 80 because it is divisible by many numbers.
Units cannot handle wind chill or heat index because they are two variable
functions, but they are included here for your edification. Clearly these
equations are the result of a model fitting operation.
wind chill index (WCI) a measurement of the combined cooling effect of low
air temperature and wind on the human body. The index was first defined
by the American Antarctic explorer Paul Siple in 1939. As currently used
by U.S. meteorologists, the wind chill index is computed from the
temperature T (in °F) and wind speed V (in mi/hr) using the formula:
WCI = 0.0817(3.71 sqrt(V) + 5.81 - 0.25V)(T - 91.4) + 91.4.
For very low wind speeds, below 4 mi/hr, the WCI is actually higher than
the air temperature, but for higher wind speeds it is lower than the air
heat index (HI or HX) a measure of the combined effect of heat and
humidity on the human body. U.S. meteorologists compute the index
from the temperature T (in °F) and the relative humidity H (as a
value from 0 to 1).
HI = -42.379 + 2.04901523 T + 1014.333127 H - 22.475541 TH - .00683783 T^2 - 548.1717 H^2 + 0.122874 T^2 H + 8.5282 T H^2 - 0.0199 T^2 H^2.
Physical constants
Basic constants
pi 3.14159265358979323846
c 2.99792458e8 m/s speed of light in vacuum (exact)
light c
mu0[?] 4 pi 1e-7 H/m permeability of vacuum (exact)
epsilon0[?] 1/mu0 c^2 permittivity of vacuum (exact)
energy c^2 convert mass to energy
e 1.602176462e-19 C electron charge
h 6.62606876e-34 J s Planck constant
hbar[?] h / 2 pi
spin hbar
coulombconst[?] 1/4 pi epsilon0 listed as "k" sometimes
Physico-chemical constants
atomicmassunit[?] 1.66053873e-27 kg atomic mass unit (defined to be
u atomicmassunit 1|12 of the mass of carbon 12)
amu atomicmassunit
amu_chem[?] 1.66026e-27 kg 1|16 of the weighted average mass of
the 3 naturally occuring neutral
isotopes of oxygen
amu_phys[?] 1.65981e-27 kg 1|16 of the mass of a neutral
oxygen 16 atom
dalton u Maybe this should be amu_chem?
avogadro grams/amu mol size of a mole
N_A[?] avogadro
gasconstant[?] 8.314472 J / mol K molar gas constant
R gasconstant
boltzmann R / N_A Boltzmann constant
k boltzmann
molarvolume[?] mol R stdtemp / atm Volume occupied by one mole of an
ideal gas at STP.
loschmidt[?] avogadro mol / molarvolume Molecules per cubic meter of an
ideal gas at STP. Loschmidt did work similar to Avogadro.
stefanboltzmann[?] pi^2 k^4 / 60 hbar^3 c^2 The power per area radiated by a
sigma stefanboltzmann blackbody at temperature T is
given by sigma T^4.
wiendisplacement[?] 2.8977686e-3 m K Wien's Displacement Law gives the
frequency at which the the Planck spectrum has maximum intensity. The relation is lambda T = b where lambda is wavelength, T is temperature and b is the Wien displacement. This relation is used to
determine the temperature of stars.
K_J[?] 483597.9 GHz/V Direct measurement of the volt is difficult. Until
recently, laboratories kept Weston cadmium cells as a reference, but they could drift. In 1987 the CGPM officially recommended the use of the Josephson effect as a laboratory representation of the
volt. The Josephson effect occurs when two superconductors are separated by a thin insulating layer. A "supercurrent" flows across the insulator with a frequency that depends on the potential applied
across the superconductors. This frequency can be very accurately measured. The Josephson constant K_J, which is equal to 2e/h, relates the measured frequency to the potential. The value given here
is the officially specified value for use beginning in 1990. The 1998 recommended value of the constant is 483597.898 GHz/V.
R_K[?] 25812.807 ohm Measurement of the ohm also presents difficulties.
The old approach involved maintaining resistances that were subject to drift. The new standard is based on the Hall effect. When a current carrying ribbon is placed in a magnetic field, a potential
difference develops across the ribbon. The ratio of the potential difference to the current is called the Hall resistance. Klaus von Klitzing discovered in 1980 that the Hall resistance varies in
discrete jumps when the magnetic field is very large and the temperature very low. This enables accurate realization of the resistance h/e^2 in the lab. The value given here is the officially
specified value for use beginning in 1990.
Various conventional values
Hg 13.5951 gram force / cm^3 Standard weight of mercury (exact)
water gram force/cm^3 Standard weight of water (exact)
waterdensity[?] gram / cm^3 Density of water
mach 331.46 m/s speed of sound in dry air at STP
Atomic constants
Rinfinity[?] 10973731.568 /m The wavelengths of a spectral series
R_H[?] 10967760 /m can be expressed as
1/lambda = R (1/m^2 - 1/n^2). where R is a number that various slightly from element to element. For hydrogen, R_H is the value, and for heavy elements, the value approaches Rinfinity, which can be
computed from m_e c alpha^2 / 2 h with a loss of 5 digits of precision.
alpha 7.297352533e-3 The fine structure constant was
introduced to explain fine structure visible in spectral lines. It can be computed from mu0 c e^2 / 2 h with a loss of 3 digits precision and loss of precision in derived values which use alpha.
bohrradius[?] alpha / 4 pi Rinfinity
prout[?] 185.5 keV nuclear binding energy equal to 1|12
binding energy of the deuteron
Planck[?] constants
planckmass[?] 2.1767e-8 kg sqrt(hbar c / G)
m_P[?] planckmass
plancktime[?] hbar / planckmass c^2
t_P[?] plancktime
plancklength[?] plancktime c
l_P[?] plancklength
Masses of elementary particles
electron 5.485799110e-4 u
proton 1.00727646688 u
neutron 1.00866491578 u
muon 0.1134289168 u
deuteron 2.01355321271 u
alpha particle 4.0015061747 u
particle wavelengths: the compton wavelength of a particle is
defined as h / m c where m is the mass of the particle.
Magnetic moments
bohrmagneton[?] e hbar / 2 electronmass
mu_B[?] bohrmagneton
nuclearmagneton[?] e hbar / 2 protonmass
mu_N[?] nuclearmagneton
Units derived from physical constants
kgf[?] kg force
technicalatmosphere[?] kgf / cm^2
at technicalatmosphere
hyl[?] kgf s^2 / m Also gram-force s^2/m according to [15]
mmHg[?] mm Hg
torr mmHg These units, both named after Evangelista
tor Pa Torricelli, should not be confused.
Acording to [15] the torr is actually atm/760 which is slightly different.
inHg[?] inch Hg
inH2O[?] inch water
mmH2O[?] mm water
eV e V Energy acquired by a particle with charge e
electronvolt eV when it is accelerated through 1 V
lightyear c julianyear The 365.25 day year is specified in
NIST publication 811
lightsecond[?] c s
lightminute[?] c min
parsec au / tan(arcsec) Unit of length equal to distance
pc[?] parsec from the sun to a point having
heliocentric parallax of 1 arcsec (derived from parallax second). A distant object with paralax theta will be about (arcsec/theta) parsecs from the sun (using the approximation that tan(theta) =
rydberg[?] h c Rinfinity Rydberg energy
crith[?] 0.089885 gram The crith is the mass of one
liter of hydrogen at standard temperature and pressure.
amagatvolume[?] molarvolume
amagat[?] mol/amagatvolume Used to measure gas densities
lorentz[?] bohrmagneton / h c Used to measure the extent
that the frequency of light
is shifted by a magnetic field.
cminv[?] h c / cm Unit of energy used in infrared
invcm[?] cminv spectroscopy.
wavenumber cminv
kcal_mol[?] kcal / mol N_A kcal/mol is used as a unit of
energy by physical chemists.
CGS system based on centimeter, gram and second
dyne cm gram / s^2 force
dyn[?] dyne
erg cm dyne energy
poise[?] gram / cm s viscosity, honors Jean Poiseuille
P poise
rhe[?] /poise reciprocal viscosity
stokes[?] cm^2 / s kinematic viscosity
St[?] stokes
stoke stokes
lentor[?] stokes old name
Gal[?] cm / s^2 acceleration, used in geophysics
galileo Gal for earth's gravitational field
(note that "gal" is for gallon
but "Gal" is the standard symbol for the gal which is evidently a shortened form of "galileo".)
barye[?] dyne/cm^2 pressure
barad[?] barye old name
kayser[?] 1/cm Proposed as a unit for wavenumber
balmer[?] kayser Even less common name than "kayser"
kine[?] cm/s velocity
bole[?] g cm / s momentum
pond gram force
glug[?] gram force s^2 / cm Mass which is accelerated at
1 cm/s^2 by 1 gram force
darcy[?] centipoise cm^2 / s atm Measures permeability to fluid flow.
One darcy is the permeability of a medium that allows a flow of cc/s of a liquid of centipoise viscosity under a pressure gradient of atm/cm. Named for H. Darcy.
mohm[?] cm / dyn s mobile ohm, measure of mechanical
mobileohm[?] mohm mobility
mechanicalohm[?] dyn s / cm mechanical resistance
acousticalohm[?] dyn s / cm^5 ratio of the sound pressure of
1 dyn/cm^2 to a source of strength 1 cm^3/s
ray acousticalohm
rayl[?] dyn s / cm^3 Specific acoustical resistance
eotvos[?] 1e-9 Gal/cm Change in gravitational acceleration
over horizontal distance
Electromagnetic units derived from the abampere
abampere[?] 10 A Current which produces a force of
abamp[?] abampere 2 dyne/cm between two infinitely
aA abampere long wires that are 1 cm apart
biot[?] aA alternative name for abamp
Bi biot
abcoulomb[?] abamp sec
abcoul[?] abcoulomb
abfarad[?] abampere sec / abvolt
abhenry[?] abvolt sec / abamp
abvolt[?] dyne cm / abamp sec
abohm[?] abvolt / abamp
abmho[?] /abohm
gauss abvolt sec / cm^2
Gs[?] gauss
maxwell abvolt sec Also called the "line"
Mx[?] maxwell
oersted gauss / mu0
Oe oersted
gilbert gauss cm / mu0
Gb gilbert
Gi[?] gilbert
unitpole[?] 4 pi maxwell
emu erg/gauss "electro-magnetic unit", a measure of
magnetic moment, often used as emu/cm^3
to specify magnetic moment density.
Gaussian system: electromagnetic units derived from statampere.
Note that the Gaussian units are often used in such a way that Coulomb's law
has the form F= q1 * q2 / r^2. The constant 1|4*pi*epsilon0 is incorporated
into the units. From this, we can get the relation force=charge^2/dist^2.
This means that the simplification esu^2 = dyne cm^2 can be used to simplify
units in the Gaussian system, with the curious result that capacitance can be
measured in cm, resistance in sec/cm, and inductance in sec^2/cm. These
units are given the names statfarad, statohm and stathenry below.
statampere[?] 10 A cm / s c statamp[?] statampere statvolt dyne cm / statamp sec statcoulomb statamp s esu statcoulomb statcoul[?] statcoulomb statfarad[?] statamp sec / statvolt cmcapacitance[?]
statfarad stathenry[?] statvolt sec / statamp statohm[?] statvolt / statamp statmho[?] /statohm statmaxwell[?] statvolt sec franklin statcoulomb debye[?] 1e-18 statcoul cm unit of electrical dipole
moment helmholtz[?] debye/angstrom^2 Dipole moment per area jar[?] 1000 statfarad approx capacitance of Leyden jar
Some historical eletromagnetic units
intampere[?] 0.999835 A Defined as the current which in one intamp[?] intampere second deposits .001118 gram of silver from an aqueous solution of silver nitrate. intfarad[?] 0.999505 F intvolt[?]
1.00033 V intohm[?] 1.000495 ohm Defined as the resistance of a uniform column of mercury containing 14.4521 gram in a column 1.063 m long and maintained at 0 degC. daniell[?] 1.042 V Meant to be
electromotive force of a Daniell cell, but in error by .04 V faraday[?] N_A e mol Charge that must flow to deposit or faraday_phys[?] 96521.9 C liberate one gram equivalent of any faraday_chem[?]
96495.7 C element. (The chemical and physical values are off slightly from what is obtained by multiplying by amu_chem or amu_phys. These values are from a 1991 NIST publication.) Note that there is
a Faraday constant which is equal to N_A e and hence has units of C/mol. kappline[?] 6000 maxwell Named by and for Gisbert Kapp siemensunit[?] 0.9534 ohm Resistance of a meter long column of mercury
with a 1 mm cross section.
Photometric units
candle 1.02 candela Standard unit for luminous intensity hefnerunit[?] 0.9 candle in use before candela hefnercandle[?] hefnerunit violle[?] 20.17 cd luminous intensity of 1 cm^2 of platinum at its
temperature of solidification (2045 K)
lumen cd sr Luminous flux (luminous energy per lm[?] lumen time unit)
talbot[?] lumen s Luminous energy lumberg[?] talbot
lux lm/m^2 Illuminance or exitance (luminous lx[?] lux flux incident on or coming from phot[?] lumen / cm^2 a surface) ph[?] phot footcandle[?] lumen/ft^2 Illuminance from a 1 candela source at a
distance of one foot metercandle[?] lumen/m^2 Illuminance from a 1 candela source at a distance of one meter
mcs[?] metercandle s luminous energy per area, used to measure photographic exposure
nox 1e-3 lux These two units were proposed for skot[?] 1e-3 apostilb measurements relating to dark adapted eyes.
Luminance measures
nit cd/m^2 Luminance: the intensity per projected stilb[?] cd / cm^2 area of an extended luminous source. sb[?] stilb (nit is from latin nitere = to shine.)
apostilb[?] cd/pi m^2 asb[?] apostilb blondel[?] apostilb Named after a French scientist.
Equivalent luminance measures. These units are units which measure
the luminance of a surface with a specified exitance which obeys
Lambert's law. (Lambert's law specifies that luminous intensity of
a perfectly diffuse luminous surface is proportional to the cosine
of the angle at which you view the luminous surface.)
equivalentlux[?] cd / pi m^2 luminance of a 1 lux surface equivalentphot[?] cd / pi cm^2 luminance of a 1 phot surface lambert[?] cd / pi cm^2 footlambert[?] cd / pi ft^2
The bril is used to express "brilliance" of a source of light on a
logarithmic scale to correspond to subjective perception. An increase of 1
bril means doubling the luminance. A luminance of 1 lambert is defined to
have a brilliance of 1 bril.
bril[?] bril(x) [;lambert] 2^(x+-100) lamberts ;log2(bril/lambert)+100
Some luminance data from the IES Lighting Handbook, 8th ed, 1993
sunlum[?] 1.6e9 cd/m^2 at zenith sunillum[?] 100e3 lux clear sky sunillum_o[?] 10e3 lux overcast sky sunlum_h[?] 6e6 cd/m^2 value at horizon skylum[?] 8000 cd/m^2 average, clear sky skylum_o[?] 2000
cd/m^2 average, overcast sky moonlum[?] 2500 cd/m^2
Photographic Exposure Value
The Additive Photographic EXposure (APEX) system developed in Germany in
the 1960s was an attempt to simplify exposure determination for people
who relied on exposure tables rather than exposure meters. Shortly
thereafter, nearly all cameras incorporated exposure meters, so the APEX
system never caught on, but the concept of Exposure Value (EV) given by
A^2 LS ES
2^EV = --- = -- = --
T K C
A = Relative aperture (f-number)
T = Shutter time in seconds
L = Scene luminance in cd/m2
E = Scene illuminance in lux
S = Arithmetic ISO film speed
K = Reflected-light meter calibration constant
C = Incident-light meter calibration constant
remains in use. Strictly speaking, an Exposure Value is a combination
of aperture and shutter time, but it's also commonly used to indicate
luminance (or illuminance). Conversion to luminance or illuminance
units depends on the ISO film speed and the meter calibration constant.
Common practice is to use an ISO film speed of 100 (because film speeds
are in even 1/3-step increments, the exact value is 64 * 2^(2|3)).
Calibration constants vary among camera and meter manufacturers: Canon,
Nikon, and Sekonic use a value of 12.5 for reflected-light meters, while
Minolta and Pentax use a value of 14. Minolta and Sekonic use a value
of 250 for incident-light meters with flat receptors.
s100 64 * 2^(2|3) / lx s exact speed for ISO 100 film
Reflected-light meter calibration constant with ISO 100 film
k1250[?] 12.5 (cd/m2) / lx s For Canon, Nikon, and Sekonic k1400[?] 14 (cd/m2) / lx s For Minolta and Pentax
Incident-light meter calibration constant with ISO 100 film
c250[?] 250 lx / lx s flat-disc receptor
Exposure value to scene luminance with ISO 100 film
For Minolta or Pentax
ev100(x) [;cd/m^2] 2^x k1400 / s100; log2(ev100 s100 / k1400)
For Canon, Nikon or Sekonic
ev100(x) [;cd/m^2] 2^x k1250 / s100; log2(ev100 s100 / k1250)
Exposure value to scene illuminance with ISO 100 film
iv100[?] iv100(x) [1;lx] 2^x c250 / s100; log2(iv100 s100 / c250)
Astronomical time measurements
Astronmical time measurement is a complicated matter. The rotation of the
earth and motion of the planets is not uniform. Originally the second was
defined relative to the "mean solar day". It is necessary to use the mean
day because the earth's orbit is elliptical so the length of the day varies
throughout the year. Simon Newcomb discovered that there were significant
irregularities in the rotation of the earth and he came up with equations
using the location of a fictitious mean sun. The length of the second was
determined from the tropical year obtained from Newcomb's equations. This
second was officially used from 1960 to 1967, at which point atomic clocks
replaced astronomical measurements for a standard of time.
The measures that appear below are probably obtained from an "ephemeris"
which is a set of equations that predicts the locations of the planets over
anomalisticyear[?] 365.2596 days The time between successive perihelion passages of the earth. siderealyear[?] 365.256360417 day The time for the earth to make one revolution around the sun relative
to the stars. tropicalyear[?] 365.242198781 day The mean interval between vernal equinoxes. Differs from the sidereal year by 1 part in 26000 due to precession of the earth about its rotational axis
combined with precession of the perihelion of the earth's orbit. gaussianyear[?] 365.2690 days The orbital period of a body in circular orbit at a distance of 1 au from the sun. Calculated from
Kepler's third law. elipseyear[?] 346.62 days The line of nodes is the intersection of the plane of Earth's orbit around the sun with the plane of the moon's orbit around earth. Eclipses can only
occur when the moon and sun are close to this line. The line rotates and appearances of the sun on the line of nodes occur every eclipse year. saros 223 synodicmonth The earth, moon and sun appear in
the same arrangement every saros, so if an eclipse occurs, then one saros later, a similar eclipse will occur. (The saros is close to 19 eclipse years.) The eclipse will occur about 120 degrees west
of the preceeding one because the saros is not an even number of days. After 3 saros, an eclipse will occur at approximately the same place. siderealday[?] 23.934469444 hour The sidereal day is the
interval siderealhour[?] 1|24 siderealday between two successive transits siderealminute[?] 1|60 siderealhour of a star over the meridian, siderealsecond[?] 1|60 siderealminute or the time required
for the earth to make one rotation relative to the stars. The more usual solar day is the time required to make a rotation relative to the sun. Because the earth moves in its orbit, it has to turn a
bit extra to face the sun again, hence the solar day is slightly longer. anomalisticmonth[?] 27.55454977 day Time for the moon to travel from perigee to perigee nodicalmonth[?] 27.2122199 day The
nodes are the points where draconicmonth[?] nodicalmonth an orbit crosses the ecliptic. draconiticmonth[?] nodicalmonth This is the time required to travel from the ascending node to the next
ascending node. siderealmonth[?] 27.321661 day Time required for the moon to orbit the earth lunarmonth[?] 29 days+12 hours+44 minutes+2.8 seconds Time between full moons. Full synodicmonth[?]
lunarmonth moon occur when the sun and lunation[?] synodicmonth moon are on opposite sides of lune[?] 1|30 lunation the earth. Since the earth lunour[?] 1|24 lune moves around the sun, the moon has
to revolve a bit farther to get into the full moon configuration. year tropicalyear yr[?] year month 1|12 year mo month lustrum[?] 5 years The Lustrum was a Roman purification ceremony that took
place every five years. Classically educated Englishmen used this term. decade 10 years century 100 years millennium 1000 years millennia millennium solaryear[?] year lunaryear[?] 12 lunarmonth
calendaryear[?] 365 day commonyear[?] 365 day leapyear[?] 366 day julianyear[?] 365.25 day gregorianyear[?] 365.2425 day islamicyear[?] 354 day A year of 12 lunar months. They islamicleapyear[?] 355
day began counting on July 16, AD 622 when Muhammad emigrated to Medina (the year of the Hegira). They need 11 leap days in 30 years to stay in sync with the lunar year which is a bit longer than the
29.5 days of the average month. The months do not keep to the same seasons, but regress through the seasons every 32.5 years. islamicmonth[?] 1|12 islamicyear They have 29 day and 30 day months.
The Hewbrew year is also based on lunar months, but synchronized to the solar
calendar. The months vary irregularly between 29 and 30 days in length, and
the years likewise vary. The regular year is 353, 354, or 355 days long. To
keep up with the solar calendar, a leap month of 30 days is inserted every
3rd, 6th, 8th, 11th, 14th, 17th, and 19th years of a 19 year cycle. This
gives leap years that last 383, 384, or 385 days.
The Hartree system of atomic units, derived from fundamental units
of mass (of electron), action (planck's constant), charge, and
the coulomb constant.
Fundamental units
atomicmass[?] electronmass atomiccharge[?] e atomicaction[?] hbar
derived units (Warning: accuracy is lost from deriving them this way)
atomiclength[?] bohrradius atomictime[?] hbar^3/coulombconst^2 atomicmass e^4 Period of first bohr orbit atomicvelocity[?] atomiclength / atomictime atomicenergy[?] hbar / atomictime hartree[?]
atomicenergy Hartree[?] hartree
These thermal units treat entropy as charge, from [5]
thermalcoulomb[?] J/K entropy thermalampere[?] W/K entropy flow thermalfarad[?] J/K^2 thermalohm[?] K^2/W thermal resistance fourier thermalohm thermalhenry[?] J K^2/W^2 thermal inductance
thermalvolt[?] K thermal potential difference
United States units
linear measure
The US Metric Law of 1866 gave the exact relation 1 meter = 39.37 inches.
From 1893 until 1959, the foot was exactly 1200|3937 meters. In 1959
the definition was changed to bring the US into agreement with other
countries. Since then, the foot has been exactly 0.3048 meters. At the
same time it was decided that any data expressed in feet derived from
geodetic surveys within the US would continue to use the old definition.
US 1200|3937 m/ft These four values will convert US- US international measures to survey- US US Survey measures geodetic- US int[?] 3937|1200 ft/m Convert US Survey measures to int- int international
inch 2.54 cm in[?] inch foot 12 inch feet foot ft[?] foot yard 3 ft yd yard mile 5280 ft The mile was enlarged from 5000 ft to this number in order to make it an even number of furlongs. (The Roman
mile is 5000 romanfeet.) line 1|12 inch Also defined as '.1 in' or as '1e-8 Wb' rod 5.5 USyard perch rod furlong 40 rod From "furrow long" statutemile[?] USmile league 3 USmile Intended to be an an
hour's walk
surveyor's measure
surveyorschain[?] 66 surveyft surveyorspole[?] 1|4 surveyorschain surveyorslink[?] 1|100 surveyorschain chain surveyorschain surveychain[?] chain ch[?] chain link surveyorslink acre 10 chain^2
intacre[?] 43560 ft^2 Acre based on international ft acrefoot[?] acre surveyfoot section[?] USmile^2 township 36 section homestead 160 acre Area of land granted by the 1862 Homestead Act of the
United States Congress gunterschain[?] surveyorschain
engineerschain[?] 100 ft engineerslink[?] 1|100 engineerschain ramsdenschain[?] engineerschain ramsdenslink[?] engineerslink
nautical measure
fathom 6 USft Originally defined as the distance from fingertip to fingertip with arms fully extended. nauticalmile[?] 1852 m Supposed to be one minute of latitude at the equator. That value is about
1855 m. Early estimates of the earth's circumference were a bit off. The value of 1852 m was made the international standard in 1929. The US did not accept this value until 1954. The UK switched in
cable 1|10 nauticalmile intcable[?] cable international cable cablelength cable UScable[?] 100 fathom navycablelength[?] 720 USft used for depth in water marineleague[?] 3 nauticalmile
geographicalmile[?] brnauticalmile knot nauticalmile / hr click[?] km
Avoirdupois weight
pound 0.45359237 kg The one normally used lb pound From the latin libra grain 1|7000 pound The grain is the same in all three weight systems. It was originally defined as the weight of a barley corn
taken from the middle of the ear. ounce 1|16 pound oz ounce dram[?] 1|16 ounce dr[?] dram ushundredweight[?] 100 pounds cwt[?] hundredweight shorthundredweight[?] ushundredweight uston[?] shortton
shortton[?] 2000 lb quarterweight[?] 1|4 uston shortquarterweight[?] 1|4 shortton shortquarter[?] shortquarterweight
Troy Weight. In 1828 the troy pound was made the first United States
standard weight. It was to be used to regulate coinage.
troypound[?] 5760 grain troyounce[?] 1|12 troypound ozt[?] troyounce pennyweight[?] 1|20 troyounce Abbreviated "d" in reference to a dwt[?] pennyweight Frankish coin called the "denier" minted in the
late 700's. There were 240 deniers to the pound. assayton[?] mg ton / troyounce mg / assayton = troyounce / ton usassayton[?] mg uston / troyounce brassayton[?] mg brton / troyounce
Some other jewelers units
metriccarat[?] 0.2 gram Defined in 1907 metricgrain[?] 50 mg carat metriccarat ct[?] carat jewelerspoint[?] 1|100 carat silversmithpoint[?] 1|4000 inch
Apothecaries' weight
appound[?] troypound apounce[?] troyounce apdram[?] 1|8 apounce apscruple[?] 1|3 apdram
Liquid measure
gal[?] gallon quart 1|4 gallon pint 1|2 quart gill 1|4 pint usgallon[?] 231 in^3 usquart[?] 1|4 usgallon uspint[?] 1|2 usquart usgill[?] 1|4 uspint usfluidounce[?] 1|16 uspint fluiddram[?] 1|8 usfloz
minimvolume[?] 1|60 fluiddram qt quart pt[?] pint floz[?] fluidounce usfloz[?] usfluidounce fldr[?] fluiddram liquidbarrel[?] 31.5 usgallon usbeerbarrel[?] 2 beerkegs beerkeg[?] 15.5 usgallon Various
among brewers
ponykeg[?] 1|2 beerkeg winekeg[?] 12 usgallon petroleumbarrel[?] 42 usgallon Originated in Pennsylvania oil barrel petroleumbarrel fields, from the winetierce bbl[?] barrel hogshead 2 liquidbarrel
usfirkin[?] 9 gallon
Dry measures: The Winchester Bushel was defined by William III in 1702 and
legally adopted in the US in 1836.
usbushel[?] 2150.42 in^3 Volume of 8 inch cylinder with 18.5 bu[?] bushel inch diameter (rounded) peck 1|4 bushel uspeck[?] 1|4 usbushel brpeck[?] 1|4 brbushel pk[?] peck drygallon[?] 1|2 uspeck
dryquart[?] 1|4 drygallon drypint[?] 1|2 dryquart drybarrel[?] 7056 in^3 Used in US for fruits, vegetables, and other dry commodities except for cranberries. cranberrybarrel[?] 5826 in^3 US cranberry
barrel heapedbushel[?] 1.278 usbushel Why this particular value? Often rounded to 1.25 bushels.
Grain measures. The bushel as it is used by farmers in the USA is actually
a measure of mass which varies for different commodities. Canada uses the
same bushel masses for most commodities, but not for oats.
wheatbushel[?] 60 lb soybeanbushel[?] 60 lb cornbushel[?] 56 lb ryebushel[?] 56 lb barleybushel[?] 48 lb oatbushel[?] 32 lb ricebushel[?] 45 lb canada_oatbushel[?] 34 lb
Wine and Spirits measure
ponyvolume[?] 1 usfloz jigger 1.5 usfloz Can vary between 1 and 2 usfloz shot jigger Sometimes 1 usfloz eushot[?] 25 ml EU standard spirits measure fifth[?] 1|5 usgallon winebottle[?] 750 ml US
industry standard, 1979 winesplit[?] 1|4 winebottle wineglass[?] 4 usfloz magnum[?] 1.5 liter Standardized in 1979, but given as 2 qt in some references metrictenth[?] 375 ml metricfifth[?] 750 ml
metricquart[?] 1 liter
French champagne bottle sizes
split 200 ml jeroboam 2 magnum rehoboam 3 magnum methuselah 4 magnum salmanazar[?] 6 magnum balthazar[?] 8 magnum nebuchadnezzar 10 magnum
Water is "hard" if it contains various minerals, expecially calcium
clarkdegree[?] 1|70000 Content by weigh of calcium carbonate gpg[?] grains/gallon Divide by water's density to convert to a dimensionless concentration measure
Shoe measures
shoeiron[?] 1|48 inch Used to measure leather in soles shoeounce[?] 1|64 inch Used to measure non-sole shoe leather
USA shoe sizes. These express the length of the shoe or the length
of the "last", the form that the shoe is made on.
shoesize_delta[?] 1|3 inch USA shoe sizes differ by this amount shoe_men0[?] 8.25 inch shoe_women0[?] (7+11|12) inch shoe_boys0[?] (3+11|12) inch shoe_girls0[?] (3+7|12) inch
European shoe size. According to
http://www.shoeline.com/footnotes/shoeterm.shtmlparis points
sizes in Europe are measured with Paris points which simply measure
the length of the shoe.
europeshoesize[?] 2|3 cm
USA slang units
buck[?] US$ fin 5 US$ sawbuck[?] 10 US$ grand 1000 US$ greenback[?] US$ key kg usually of marijuana, 60's lid[?] 1 oz Another 60's weed unit footballfield[?] 100 yards marathon 26 miles + 385 yards
UK 1200000|3937014 m/ft The UK lengths were defined by british- UK a bronze bar manufactured in UK- UK 1844. Measurement of that bar revealed the dimensions given here.
brnauticalmile[?] 6080 ft Used until 1970 when the UK brknot[?] brnauticalmile / hr switched to the international brcable[?] 1|10 brnauticalmile nautical mile. admiraltymile[?] brnauticalmile
admiraltyknot[?] brknot admiraltycable[?] brcable seamile[?] 6000 ft shackle[?] 15 fathoms Adopted 1949 by British navy
British Imperial weight is mostly the same as US weight. A few extra
units are added here.
clove 7 lb stone 14 lb tod[?] 28 lb brquarterweight[?] 1|4 brhundredweight brhundredweight[?] 8 stone longhundredweight[?] brhundredweight longton[?] 20 brhundredweight brton[?] longton
British Imperial volume measures
brminim[?] 1|60 brdram brscruple[?] 1|3 brdram fluidscruple[?] brscruple brdram[?] 1|8 brfloz brfluidounce[?] 1|20 brpint brfloz[?] brfluidounce brgill[?] 1|4 brpint brpint[?] 1|2 brquart brquart[?]
1|4 brgallon brgallon[?] 4.54609 l The British Imperial gallon was defined in 1824 to be the volume of water which weighed 10 pounds at 62 deg F with a pressure of 30 inHg. In 1963 it was defined to
be the volume occupied by 10 pounds of distilled water of density 0.998859 g/ml weighed in air of density 0.001217 g/ml against weights of density 8.136 g/ml. This gives a value of approximately
4.5459645 liters, but the old liter was in force at this time. In 1976 the definition was changed to exactly 4.54609 liters using the new definition of the liter (1 dm^3). brbarrel[?] 36 brgallon
Used for beer brbushel[?] 8 brgallon brheapedbushel[?] 1.278 brbushel brquarter[?] 8 brbushel brchaldron[?] 36 brbushel
Units derived from imperial system
ouncedal[?] oz ft / s^2 force which accelerates an ounce at 1 ft/s^2 poundal[?] lb ft / s^2 same thing for a pound tondal[?] ton ft / s^2 and for a ton pdl[?] poundal psi pound force / inch^2 psia[?]
psi absolute pressure tsi[?] ton force / inch^2 reyn[?] psi sec slug lbf s^2 / ft slugf[?] slug force slinch[?] lbf s^2 / inch Mass unit derived from inch second slinchf[?] slinch force pound-force
system. Used in space applications where in/sec^2 was a natural acceleration measure. geepound[?] slug lbf lb force tonf[?] ton force lbm[?] lb kip[?] 1000 lbf from kilopound ksi[?] kip / in^2 mil
0.001 inch thou 0.001 inch circularinch[?] 1|4 pi in^2 area of a one-inch diameter circle circularmil[?] 1|4 pi mil^2 area of one-mil diameter circle cmil[?] circularmil cental[?] 100 pound centner
[?] cental caliber 0.01 inch for measuring bullets duty ft lbf celo[?] ft / s^2 jerk ft / s^3 australiapoint[?] 0.01 inch The "point" is used to measure rainfall in Australia sabin[?] ft^2 Measure of
sound absorption equal to the absorbing power of one square foot of a perfectly absorbing material. The sound absorptivity of an object is the area times a dimensionless absorptivity coefficient.
standardgauge[?] 4 ft + 8.5 in Standard width between railroad track flag 5 ft^2 Construction term referring to sidewalk. rollwallpaper[?] 30 ft^2 Area of roll of wall paper fillpower[?] in^3 / ounce
Density of down at standard pressure. The best down has 750-800 fillpower. pinlength[?] 1|16 inch A 17 pin is 17/16 in long in the USA. buttonline[?] 1|40 inch The line was used in 19th century USA
to measure width of buttons. scoopnumber[?] /quart Ice cream scoops are labeled with a number specifying how many scoops fill a quart. beespace[?] 1|4 inch Bees will fill any space that is smaller
than the bee space and leave open spaces that are larger. The size of the space varies with species. diamond 8|5 ft Marking on US tape measures that is useful to carpenters who wish to place five
studs in an 8 ft distance. Note that the numbers appear in red every 16 inches as well, giving six divisions in 8 feet. retmaunit[?] 1.75 in Height of rack mountable equipment. U retmaunit Equipment
should be 1|32 inch narrower than its U measurement indicates to allow for clearance, so 4U=(6+31|32)in
Other units of work, energy, power, etc
Calories: energy to raise a gram of water one degree celsius
cal_IT[?] 4.1868 J International Table calorie cal_th[?] 4.184 J Thermochemical calorie cal_fifteen[?] 4.18580 J Energy to go from 14.5 to 15.5 degC cal_twenty[?] 4.18190 J Energy to go from 19.5 to
20.5 degC cal_mean[?] 4.19002 J 1|100 energy to go from 0 to 100 degC calorie cal_IT cal calorie calorie_IT[?] cal_IT thermcalorie[?] cal_th calorie_th[?] thermcalorie Calorie kilocalorie the food
Calorie thermie[?] 1e6 cal_fifteen Heat required to raise the temperature of a tonne of water from 14.5 to 15.5 degC.
btu definitions: energy to raise a pound of water 1 degF
btu[?] cal lb degF / gram K international table BTU britishthermalunit[?] btu btu_IT[?] btu btu_th[?] cal_th lb degF / gram K btu_mean[?] cal_mean lb degF / gram K quad[?] quadrillion btu
ECtherm[?] 1.05506e8 J Exact definition, close to 1e5 btu UStherm[?] 1.054804e8 J Exact definition therm[?] UStherm toe 1e10 cal_IT ton oil equivalent. Energy released by burning one metric ton of
oil. [18] tonscoal[?] 1|2.3 toe Energy in metric ton coal from [18]. naturalgas[?] toe / 1270 m^3 Energy released from natural gas from [18]. (At what pressure?)
Celsius heat unit: energy to raise a pound of water 1 degC
celsiusheatunit[?] cal lb degC / gram K chu[?] celsiusheatunit
The horsepower is supposedly the power of one horse pulling. Obviously
different people had different horses.
ushorsepower[?] 550 foot pound force / sec Invented by James Watt hp[?] horsepower metrichorsepower[?] 75 kilogram force meter / sec electrichorsepower[?] 746 W boilerhorsepower[?] 9809.50 W
waterhorsepower[?] 746.043 W brhorsepower[?] 745.70 W donkeypower[?] 250 W
Thermal insulance: Thermal conductivity has dimension power per area per
(temperature difference per length thickness) which comes out to W / K m. If
the thickness is fixed, then the conductance will have units of W / K m^2.
Thermal insulance is the reciprocal.
Rvalue[?] degF ft^2 hr / btu Uvalue[?] 1/Rvalue europeanUvalue[?] watt / m^2 K RSI degC m^2 / W clo[?] 0.155 degC m^2 / W Supposed to be the insulance required to keep a resting person comfortable
indoors. The value given is from NIST and the CRC, but [5] gives a slightly different value of 0.875 ft^2 degF hr / btu. tog[?] 0.1 degC m^2 / W Also used for clothing.
Misc other measures
ENTROPY ENERGY / TEMPERATURE clausius[?] 1e3 cal/K A unit of physical entropy langley thermcalorie/cm^2 Used in radiation theory poncelet[?] 100 kg force m / s tonrefrigeration[?] ton 144 btu / lb
day One ton refrigeration is the rate of heat extraction required turn one ton of water to ice in a day. Ice is defined to have a latent heat of 144 btu/lb. tonref[?] tonrefrigeration refrigeration
tonref / ton frigorie[?] 1000 cal_fifteen Used in refrigeration engineering. tnt[?] 1e9 cal_th / ton So you can write tons-tnt. This is a defined, not measured, value. airwatt[?] 8.5 (ft^3/min) inH2O
Measure of vacuum power as pressure times air flow.
Permeability: The permeability or permeance, n, of a substance determines
how fast vapor flows through the substance. The formula W = n A dP
holds where W is the rate of flow (in mass/time), n is the permeability,
A is the area of the flow path, and dP is the vapor pressure difference.
perm_0C[?] grain / hr ft^2 inHg perm_zero[?] perm_0C perm_0[?] perm_0C perm[?] perm_0C perm_23C[?] grain / hr ft^2 in Hg23C perm_twentythree[?] perm_23C
Counting measures
pair[?] 2 brace[?] 2 nest 3 often used for items like bowls that nest together hattrick[?] 3 Used in sports, especially cricket and ice hockey to report the number of goals. dicker[?] 10 dozen 12
bakersdozen[?] 13 score 20 flock[?] 40 timer[?] 40 shock 60 gross 144 greatgross[?] 12 gross tithe 1|10 From Anglo-Saxon word for tenth
Paper counting measure
shortquire[?] 24 quire[?] 25 shortream[?] 480 ream[?] 500 perfectream[?] 516 bundle[?] 2 reams bale[?] 5 bundles
Paper measures
The metric paper sizes are defined so that if a sheet is cut in half
along the short direction, the result is two sheets which are
similar to the original sheet. This means that for any metric size,
the long side is close to sqrt(2) times the length of the short
side. Each series of sizes is generated by repeated cuts in half,
with the values rounded down to the nearest millimeter.
A6paper[?] 105 mm 148 mm A7paper[?] 74 mm 105 mm A8paper[?] 52 mm 74 mm A9paper[?] 37 mm 52 mm A10paper[?] 26 mm 37 mm
B0paper[?] 1000 mm 1414 mm The basic B size has an area B1paper[?] 707 mm 1000 mm of sqrt(2) square meters. B2paper[?] 500 mm 707 mm B3paper[?] 353 mm 500 mm B4paper[?] 250 mm 353 mm B5paper[?] 176
mm 250 mm B6paper[?] 125 mm 176 mm B7paper[?] 88 mm 125 mm B8paper[?] 62 mm 88 mm B9paper[?] 44 mm 62 mm B10paper[?] 31 mm 44 mm
C0paper[?] 917 mm 1297 mm The basic C size has an area C1paper[?] 648 mm 917 mm of sqrt(sqrt(2)) square meters. C2paper[?] 458 mm 648 mm C3paper[?] 324 mm 458 mm Intended for envelope sizes C4paper
[?] 229 mm 324 mm C5paper[?] 162 mm 229 mm C6paper[?] 114 mm 162 mm C7paper[?] 81 mm 114 mm C8paper[?] 57 mm 81 mm C9paper[?] 40 mm 57 mm C10paper[?] 28 mm 40 mm
gsm (Grams per Square Meter), a sane, metric paper weight measure
gsm[?] grams / meter^2
In the USA, a collection of crazy historical paper measures are used. Paper
is measured as a weight of a ream of that particular type of paper. This is
sometimes called the "substance" or "basis" (as in "substance 20" paper).
The standard sheet size or "basis size" varies depending on the type of
paper. As a result, 20 pound bond paper and 50 pound text paper are actually
about the same weight. The different sheet sizes were historically the most
convenient for printing or folding in the different applications. These
different basis weights are standards maintained by American Society for
Testing Materials (ASTM) and the American Forest and Paper Association
poundbookpaper[?] lb / 25 inch 38 inch ream lbbook[?] poundbookpaper poundtextpaper[?] poundbookpaper lbtext[?] poundtextpaper poundoffsetpaper[?] poundbookpaper For offset printing lboffset[?]
poundoffsetpaper poundbiblepaper[?] poundbookpaper Designed to be lightweight, thin, lbbible[?] poundbiblepaper strong and opaque. poundtagpaper[?] lb / 24 inch 36 inch ream lbtag[?] poundtagpaper
poundbagpaper[?] poundtagpaper lbbag[?] poundbagpaper poundnewsprintpaper[?] poundtagpaper lbnewsprint[?] poundnewsprintpaper poundposterpaper[?] poundtagpaper lbposter[?] poundposterpaper
poundtissuepaper[?] poundtagpaper lbtissue[?] poundtissuepaper poundwrappingpaper[?] poundtagpaper lbwrapping[?] poundwrappingpaper poundwaxingpaper[?] poundtagpaper lbwaxing[?] poundwaxingpaper
poundglassinepaper[?] poundtagpaper lbglassine[?] poundglassinepaper poundcoverpaper[?] lb / 20 inch 26 inch ream lbcover[?] poundcoverpaper poundindexpaper[?] lb / 25.5 inch 30.5 inch ream lbindex
[?] poundindexpaper poundbondpaper[?] lb / 17 inch 22 inch ream Bond paper is stiff and lbbond[?] poundbondpaper durable for repeated poundwritingpaper[?] poundbondpaper filing, and it resists
lbwriting[?] poundwritingpaper ink penetration. poundledgerpaper[?] poundbondpaper lbledger[?] poundledgerpaper poundcopypaper[?] poundbondpaper lbcopy[?] poundcopypaper poundblottingpaper[?] lb / 19
inch 24 inch ream lbblotting[?] poundblottingpaper poundblankspaper[?] lb / 22 inch 28 inch ream lbblanks[?] poundblankspaper poundpostcardpaper[?] lb / 22.5 inch 28.5 inch ream lbpostcard[?]
poundpostcardpaper poundweddingbristol[?] poundpostcardpaper lbweddingbristol[?] poundweddingbristol poundbristolpaper[?] poundweddingbristol lbbristol[?] poundbristolpaper poundboxboard[?] lb / 1000
ft^2 lbboxboard[?] poundboxboard poundpaperboard[?] poundboxboard lbpaperboard[?] poundpaperboard
When paper is marked in units of M, it means the weight of 1000 sheets of the
given size of paper. To convert this to paper weight, divide by the size of
the paper in question.
paperM[?] lb / 1000
fournierpoint[?] 0.1648 inch / 12 First definition of the printers point made by Pierre Fournier who defined it in 1737 as 1|12 of a cicero which was 0.1648 inches. olddidotpoint[?] 1|72 frenchinch
François Ambroise Didot, one of a family of printers, changed Fournier's definition around 1770 to fit to the French units then in use. bertholdpoint[?] 1|2660 m H. Berthold tried to create a metric
version of the didot point in 1878. INpoint[?] 0.4 mm This point was created by a group directed by Fermin Didot in 1881 and is associated with the imprimerie nationale. It doesn't seem to have been
used much. germandidotpoint[?] 0.376065 mm Exact definition appears in DIN 16507, a German standards document of 1954. Adopted more broadly in 1966 by ??? metricpoint[?] 3|8 mm Proposed in 1977 by
Eurograf point 1|72.27 inch The American point was invented printerspoint[?] point by Nelson Hawks in 1879 and dominates USA publishing. It was standardized by the American Typefounders Association
at the value of 0.013837 inches exactly. Knuth uses the approximation given here (which is very close). The comp.fonts FAQ claims that this value is supposed to be 1|12 of a pica where 83 picas is
equal to 35 cm. But this value differs from the standard. texscaledpoint[?] 1|65536 point The TeX typesetting system uses texsp[?] texscaledpoint this for all computations. computerpoint[?] 1|72 inch
The American point was rounded computerpica[?] 12 computerpoint to an even 1|72 inch by computer postscriptpoint[?] computerpoint people at some point. pspoint[?] postscriptpoint Q 1|4 mm Used in
Japanese phototypesetting Q is for quarter frenchprinterspoint[?] olddidotpoint didotpoint[?] germandidotpoint This seems to be the dominant value europeanpoint[?] didotpoint for the point used in
Europe cicero 12 didotpoint
stick[?] 2 inches
Type sizes
excelsior 3 point brilliant[?] 3.5 point diamondtype[?] 4 point pearl 5 point agate 5.5 point Originally agate type was 14 lines per inch, giving a value of 1|14 in. ruby agate British nonpareil[?] 6
point mignonette[?] 6.5 point emerald mignonette British minion[?] 7 point brevier[?] 8 point bourgeois 9 point longprimer[?] 10 point smallpica[?] 11 point pica 12 point english 14 point columbian
16 point greatprimer[?] 18 point paragon[?] 20 point meridian 44 point canon 48 point
German type sizes
nonplusultra[?] 2 didotpoint brillant[?] 3 didotpoint diamant 4 didotpoint perl 5 didotpoint nonpareille[?] 6 didotpoint kolonel[?] 7 didotpoint petit[?] 8 didotpoint borgis[?] 9 didotpoint korpus[?]
10 didotpoint corpus korpus garamond korpus mittel[?] 14 didotpoint tertia[?] 16 didotpoint text 18 didotpoint kleine_kanon[?] 32 didotpoint kanon[?] 36 didotpoint grobe_kanon[?] 42 didotpoint missal
[?] 48 didotpoint kleine_sabon[?] 72 didotpoint grobe_sabon[?] 84 didotpoint
Information theory units. Note that the name "entropy" is used both
to measure information and as a physical quantity.
nat[?] ln(2) bits Entropy measured base e
hartley[?] log2(10) bits Entropy of a uniformly distributed random variable over 10 symbols.
bps bit/sec Sometimes the term "baud" is incorrectly used to refer to bits per second. Baud refers to symbols per second. Modern modems transmit several bits per symbol. byte 8 bit Not all machines
had 8 bit B byte bytes, but these days most of them do. But beware: for transmission over modems, a few extra bits are used so there are actually 10 bits per byte. nybble[?] 4 bits Half of a byte.
Sometimes equal to different lengths such as 3 bits. nibble nybble meg megabyte Some people consider these units along with the kilobyte gig[?] gigabyte to be defined according to powers of 2 with
the kilobyte equal to 2^10 bytes, the megabyte equal to 2^20 bytes and the gigabyte equal to 2^30 bytes but these usages are forbidden by SI. Binary prefixes have been defined by IEC to replace the
SI prefixes. Use them to get the binary values: KiB, MiB, and GiB. jiffy 0.01 sec This is defined in the Jargon File jiffies[?] jiffy (http://www.jargon.org) as being the duration of a clock tick for
measuring wall-clock time. Supposedly the value used to be 1|60 sec or 1|50 sec depending on the frequency of AC power, but then 1|100 sec became more common. On linux systems, this term is used and
for the Intel based chips, it does have the value of .01 sec. The Jargon File also lists two other definitions: millisecond, and the time taken for light to travel one foot.
Musical measures. Musical intervals expressed as ratios. Multiply
two intervals together to get the sum of the interval. The function
musicalcent can be used to convert ratios to cents.
Perfect intervals
octave 2 majorsecond[?] musicalfifth^2 / octave majorthird[?] 5|4 minorthird[?] 6|5 musicalfourth[?] 4|3 musicalfifth[?] 3|2 majorsixth[?] musicalfourth majorthird minorsixth[?] musicalfourth
minorthird majorseventh[?] musicalfifth majorthird minorseventh[?] musicalfifth minorthird
pythagoreanthird[?] majorsecond musicalfifth^2 / octave syntoniccomma[?] pythagoreanthird / majorthird pythagoreancomma[?] musicalfifth^12 / octave^7
Equal tempered definitions
semitone octave^(1|12) musicalcent[?] (x) [1;1] semitone^(x/100) ; 100 log(musicalcent)/log(semitone)
Musical note lengths.
wholenote[?] ! halfnote[?] 1|2 wholenote quarternote[?] 1|4 wholenote eighthnote[?] 1|8 wholenote sixteenthnote[?] 1|16 wholenote thirtysecondnote[?] 1|32 wholenote sixtyfourthnote[?] 1|64 wholenote
dotted[?] 3|2 doubledotted[?] 7|4 breve doublewholenote semibreve wholenote minimnote[?] halfnote crochet quarternote quaver eighthnote semiquaver sixteenthnote demisemiquaver[?] thirtysecondnote
hemidemisemiquaver[?] sixtyfourthnote semidemisemiquaver[?] hemidemisemiquaver
yarn and cloth measures
yarn linear density
woolyarnrun[?] 1600 yard/pound 1600 yds of "number 1 yarn" weighs a pound. yarncut[?] 300 yard/pound Less common system used in Pennsylvania for wool yarn cottonyarncount[?] 840 yard/pound
linenyarncount[?] 300 yard/pound Also used for hemp and ramie worstedyarncount[?] 1680 ft/pound metricyarncount[?] meter/gram denier[?] 1|9 tex used for silk and rayon manchesteryarnnumber[?] drams/
1000 yards old system used for silk pli[?] lb/in typp[?] 1000 yd/lb asbestoscut[?] 100 yd/lb used for glass and asbestos yarn
tex gram / km rational metric yarn measure, meant drex[?] 0.1 tex to be used for any kind of yarn poumar[?] lb / 1e6 yard
yarn and cloth length
skeincotton[?] 80*54 inch 80 turns of thread on a reel with a 54 in circumference (varies for other kinds of thread) cottonbolt[?] 120 ft cloth measurement woolbolt[?] 210 ft
bolt cottonbolt heer[?] 600 yards cut[?] 300 yards used for wet-spun linen yarn lea 300 yards
drug dosage
mcg[?] microgram Frequently used for vitamins iudiptheria[?] 62.8 microgram IU is for international unit iupenicillin[?] 0.6 microgram iuinsulin[?] 41.67 microgram drop 1|20 ml The drop was an old
"unit" that was replaced by the minim. But I was told by a pharmacist that in his profession, the conversion of 20 drops per ml is actually used. bloodunit[?] 450 ml For whole blood. For blood
components, a blood unit is the quanity of the component found in a blood unit of whole blood. The human body contains about 12 blood units of whole blood.
fixup units for times when prefix handling doesn't do the job
hectare hectoare megohm[?] megaohm kilohm[?] kiloohm microhm[?] microohm megalerg[?] megaerg 'L' added to make it pronounceable [18].
Exchange rates from the New York Times, 27 July 1999
Some European currencies have permanent fixed exchange rates with
the Euro. These rates were taken from the EC's web site:
[[$]] dollar mark germanymark bolivar venezuelabolivar peseta spainpeseta rand southafricarand escudo[?] portugalescudo sol perunewsol guilder netherlandsguilder hollandguilder[?] netherlandsguilder
peso mexicopeso yen japanyen lira italylira rupee indiarupee drachma greecedrachma franc francefranc markka finlandmarkka sucre ecuadorsucre poundsterling[?] britainpound
ISO currency codes
AED[?] unitedarabdirham ATS[?] austriaschilling AUD australiadollar BEF[?] belgiumfranc
BRR[?] brazilreal CAD canadadollar CHF[?] switzerlandfranc CLP[?] chilepeso COP[?] colombiapeso CZK[?] czechkoruna DEM[?] germanymark DKK[?] denmarkkrone ECS ecuadorsucre EGP[?] egyptpound ESP
spainpeseta EUR euro FIM[?] finlandmarkka FRF[?] francefranc GBP britainpound GRD greecedrachma HKD[?] hongkongdollar HUF[?] hungaryforint IDR[?] indonesiarupiah IEP[?] irelandpunt ILS israelshekel
IND[?] indiarupee ITL[?] italylira JOD[?] jordandinar JPY japanyen KRW[?] southkoreawon LBP[?] lebanonpound LUF[?] luxemburgfranc MYR[?] malaysiaringgit MXP[?] mexicopeso NLG[?] netherlandsguilder
NOK norwaykrone NZD newzealanddollar PEN[?] perunewsol PHP philippinespeso PLZ[?] polandzloty PTE[?] portugalescudo RUR[?] russiaruble SAR saudiarabiariyal SEK swedenkrona SGD[?] singaporedollar SKK
[?] slovakiakoruna THB thailandbaht TRL turkeylira TWD[?] taiwandollar USD US$ VEB venezuelabolivar XEU[?] euro ZAR[?] southafricarand
UKP GBP Not an ISO code, but looks like one, and sometimes used on usenet.
Money on the gold standard, used in the late 19th century and early
20th century.
olddollargold[?] 23.22 grains goldprice Used until 1934 newdollargold[?] 96|7 grains goldprice After Jan 31, 1934 dollargold[?] newdollargold poundgold[?] 113 grains goldprice
Nominal masses of US coins. Note that dimes, quarters and half dollars
have weight proportional to value. Before 1965 it was $40 / kg.
USpennyweight[?] 2.5 grams Since 1982, 48 grains before USnickelweight[?] 5 grams USdimeweight[?] 10 cents / (20 US$ / lb) Since 1965 USquarterweight[?] 25 cents / (20 US$ / lb) Since 1965
UShalfdollarweight[?] 50 cents / (20 US$ / lb) Since 1971 USdollarmass[?] 8.1 grams
British currency
quid britainpound Slang names fiver 5 quid tenner[?] 10 quid
shilling 1|20 britainpound Before decimalisation, there oldpence[?] 1|12 shilling were 20 shillings to a pound, farthing 1|4 oldpence each of twelve old pence crown 5 shilling brpenny[?] 0.01
britainpound pence penny tuppence[?] 2 pence tuppenny[?] tuppence oldpenny[?] oldpence oldtuppence[?] 2 oldpence oldtuppenny[?] oldtuppence threepence[?] 3 oldpence threepence never refers to new
money threepenny[?] threepence oldthreepence[?] threepence oldthreepenny[?] threepence oldhalfpenny[?] halfoldpenny oldhapenny[?] oldha'penny brpony[?] 25 britainpound
Canadian currency
loony 1 canadadollar This coin depicts a loon toony[?] 2 canadadollar
Oceanographic flow
sverdrup 1e6 m^3 / sec Used to express flow of ocean currents. Named after Norwegian oceanographer H. Sverdrup.
In vacuum science and some other applications, gas flow is measured
as the product of volumetric flow and pressure. This is useful
because it makes it easy to compare with the flow at standard
pressure (one atmosphere). It also directly relates to the number
of gas molecules per unit time, and hence to the mass flow if the
molecular mass is known.
sccm[?] atm cc/min 's' is for "standard" to indicate sccs[?] atm cc/sec flow at standard pressure scfh[?] atm ft^3/hour scfm[?] atm ft^3/min slpm[?] atm liter/min slph[?] atm liter/hour lusec[?]
liter micron Hg / s Used in vacuum science
Wire Gauge
This area is a nightmare with huge charts of wire gauge diameters
that usually have no clear origin. There are at least 5 competing wire gauge
systems to add to the confusion. The use of wire gauge is related to the
manufacturing method: a metal rod is heated and drawn through a hole. The
size change can't be too big. To get smaller wires, the process is repeated
with a series of smaller holes. Generally larger gauges mean smaller wires.
The gauges often have values such as "00" and "000" which are larger sizes
than simply "0" gauge. In the tables that appear below, these gauges must be
specified as negative numbers (e.g. "00" is -1, "000" is -2, etc).
Alternatively, you can use the following units:
g00[?] (-1) g000[?] (-2) g0000[?] (-3) g00000[?] (-4) g000000[?] (-5) g0000000[?] (-6)
American Wire Gauge (AWG) or Brown & Sharpe Gauge appears to be the most
important gauge. ASTM B-258 specifies that this gauge is based on geometric
interpolation between gauge 0000, which is 0.46 inches exactly, and gauge 36
which is 0.005 inches exactly. Therefore, the diameter in inches of a wire
is given by the formula 1|200 92^((36-g)/39). Note that 92^(1/39) is close
to 2^(1/6), so diameter is approximately halved for every 6 gauges. For the
repeated zero values, use negative numbers in the formula. The same document
also specifies rounding rules which seem to be ignored by makers of tables.
Gauges up to 44 are to be specified with up to 4 significant figures, but no
closer than 0.0001 inch. Gauges from 44 to 56 are to be rounded to the
nearest 0.00001 inch.
In addition to being used to measure wire thickness, this gauge is used to
measure the thickness of sheets of aluminum, copper, and most metals other
than steel, iron and zinc.
wiregauge(g) [;m] 1|200 92^((36+(-g))/39) in;36+(-39)ln(200 wiregauge/in)/ln(92)
Next we have the SWG, the Imperial or British Standard Wire Gauge. This one
is piecewise linear. It was used for aluminum sheets.
The following is from the Appendix to ASTM B 258
For example, in U.S. gage, the standard for sheet metal is based on the
weight of the metal, not on the thickness. 16-gage is listed as
approximately .0625 inch thick and 40 ounces per square foot (the original
standard was based on wrought iron at .2778 pounds per cubic inch; steel
has almost entirely superseded wrought iron for sheet use, at .2833 pounds
per cubic inch). Smaller numbers refer to greater thickness. There is no
formula for converting gage to thickness or weight.
It's rather unclear from the passage above whether the plate gauge values are
therefore wrong if steel is being used. Reference [15] states that steel is
in fact measured using this gauge (under the name Manufacturers' Standard
Gauge) with a density of 501.84 lb/ft3 = 0.2904 lb/in3 used for steel.
But this doesn't seem to be the correct density of steel (.2833 lb/in3 is
This gauge was established in 1893 for purposes of taxation.
Old plate gauge for iron
Manufacturers Standard Gage
A special gauge is used for zinc sheet metal. Notice that larger gauges
indicate thicker sheets.
Screw sizes
In the USA, screw diameters are reported using a gauge number.
Metric screws are reported as Mxx where xx is the diameter in mm.
Ring size. All ring sizes are given as the circumference of the ring.
USA ring sizes. Several slightly different definitions seem to be in
circulation. According to [15], the interior diameter of size n ring in
inches is 0.32 n + 0.458 for n ranging from 3 to 13.5 by steps of 0.5. The
size 2 ring is inconsistently 0.538in and no 2.5 size is listed.
However, other sources list 0.455 + 0.0326 n and 0.4525 + 0.0324 n as the
diameter and list no special case for size 2. (Or alternatively they are
1.43 + .102 n and 1.4216+.1018 n for measuring circumference in inches.) One
reference claimed that the original system was that each size was 1|10 inch
circumference, but that source doesn't have an explanation for the modern
system which is somewhat different.
Old practice in the UK measured rings using the "Wheatsheaf gauge" with sizes
specified alphabetically and based on the ring inside diameter in steps of
1|64 inch. This system was replaced in 1987 by British Standard 6820 which
specifies sizes based on circumference. Each size is 1.25 mm different from
the preceding size. The baseline is size C which is 40 mm circumference.
The new sizes are close to the old ones. Sometimes it's necessary to go
beyond size Z to Z+1, Z+2, etc.
Japanese sizes start with size 1 at a 13mm inside diameter and each size is
1|3 mm larger in diameter than the previous one. They are multiplied by pi
to give circumference.
The European ring sizes are the length of the circumference in mm minus 40.
mph mile/hr mpg[?] mile/gal kph[?] km/hr fL footlambert fpm[?] ft/min fps ft/s rpm rev/min rps[?] rev/sec mi[?] mile mbh[?] 1e3 btu/hour mcm[?] 1e3 circularmil ipy[?] inch/year used for corrosion
rates ccf[?] 100 ft^3 used for selling water [18] Mcf[?] 1000 ft^3 not million cubic feet [18] kp[?] kilopond kpm[?] kp meter kWh kW hour hph[?] hp hour
Radioactivity units
becquerel /s Activity of radioactive source Bq[?] becquerel curie 3.7e10 Bq Defined in 1910 as the radioactivity Ci[?] curie emitted by the amount of radon that is in equilibrium with 1 gram of
radium. rutherford[?] 1e6 Bq
gray J/kg Absorbed dose of radiation Gy[?] gray rad 1e-2 Gy From Radiation Absorbed Dose rep[?] 8.38 mGy Roentgen Equivalent Physical, the amount of radiation which , absorbed in the body, would
liberate the same amount of energy as 1 roentgen of X rays would, or 97 ergs.
sievert J/kg Dose equivalent: dosage that has the Sv sievert same effect on human tissues as 200 rem 1e-2 Sv keV X-rays. Different types of radiation are weighted by the Relative Biological
Effectiveness (RBE).
Radiation type RBE X-ray, gamma ray 1 beta rays, > 1 MeV 1 beta rays, < 1 MeV 1.08 neutrons, < 1 MeV 4-5 neutrons, 1-10 MeV 10 protons, 1 MeV 8.5 protons, .1 MeV 10 alpha, 5 MeV 15 alpha, 1 MeV 20
The energies are the kinetic energy of the particles. Slower particles interact more, so they are more effective ionizers, and hence have higher RBE values.
rem stands for Roentgen Equivalent Mammal
roentgen 2.58e-4 C / kg Ionizing radiation that produces 1 statcoulomb of charge in 1 cc of dry air at stp. rontgen roentgen Sometimes it appears spelled this way sievertunit[?] 8.38 rontgen Unit of
gamma ray dose delivered in one hour at a distance of 1 cm from a point source of 1 mg of radium enclosed in platinum .5 mm thick.
eman[?] 1e-7 Ci/m^3 radioactive concentration mache[?] 3.7e-7 Ci/m^3
A few German units as currently in use.
zentner[?] 50 kg doppelzentner[?] 2 zentner pfund[?] 500 g
Some definitions using ISO 8859-1 characters
¢ cent £ britainpound ¥ japanyen ångström angstrom Å angstrom röntgen roentgen
The following units were in the unix units database but do not appear in
this file:
wey[?] used for cheese, salt and other goods. Measured mass or
waymass[?] volume depending on what was measured and where the measuring
took place. A wey of cheese ranged from 200 to 324 pounds.
sack No precise definition
spindle[?] The length depends on the type of yarn
block Defined variously on different computer systems
erlang A unit of telephone traffic defined variously.
Omitted because there are no other units for this dimension. Is this true? What about CCS = 1/36 erlang? Erlang is supposed to be dimensionless. One erlang means a single channel occupied for one
All Wikipedia text is available under the terms of the GNU Free Documentation License
|
{"url":"http://encyclopedia.kids.net.au/page/us/User:Egil___Sandbox?title=Usfluidounce","timestamp":"2014-04-20T00:46:11Z","content_type":null,"content_length":"194838","record_id":"<urn:uuid:f5e5f6e9-f935-4785-a519-9ee226c7a5df>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
|
st: RE: -infix- problem?
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
st: RE: -infix- problem?
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject st: RE: -infix- problem?
Date Tue, 13 May 2008 14:56:22 +0100
This is arguable. The help of -infix- does indicate that if you want a -double- you need to specify that, so Stata is putting the onus on you to think about variable types.
Otherwise put, your punishment is that you got what you asked for.
Despite that, the idea that Stata should be smart on your behalf is naturally attractive. Quite what that would mean with -infix- is not clear except to Stata developers who know the exact algorithm. In particular, a decision on optimal variable types presumably implies two passes through the data, i.e. the field width is not enough to decide.
Hau Chyi
I've downloaded several variables from the SIPP (Study of Income and
Program Participation), and realized there seems to be a problem with
the -infix- command, which I hope can be illustrated by the following
Here is only one observation with one variable, which looks like below
in the asc file.
This is the SSUID, the survey unit id of each individual.
If you save this into a asc file as, say "d:\documents\test\test.asc"
, and run the following lines:
infix SSUID 1-16 using "d:\documents\test\test.asc";
format SSUID %16.0f;
and then:
-list SSUID-
The variable Stata reads is:
| SSUID |
1. | 1234567948140544 |
It's completely wrong! I realize this after discovering some families
I generated from SSUID (and other family identifyers) have more than
100 kids!!
The problem disappears when I do
-infix double SSUID 1-16 using ... -
In other words, the precision -infix- chooses automatically is wrong.
Is this a bug of infix or some memory allocation error of my computer?
No matter what, I recommend if you are infixing variables with more
than 10 digits, you'd better check the ascii file to see if it's truly
Thanks for calrifying. I'd also like to add that reading all the
questions and answers posted on the list has improved my own knowledge
to Stata tremendously.
> test.asc is a file of 10 individuals; and the .do file is machine generated file by the SIPP to infix them into Stata.
> The last variable of each individual is SSUID (at position 12654 - 12666) , the survey unit id.
> Tthe raw file indicates SSUID for the first six individuals to be:
> 019003754630
> 019003754630
> 019003754630
> 019003754630
> 019003754630
> 019033358630
> But after using the do file provided by SIPP website to -infix- them, the first 6 observations of the SSUID variable becomes:
> SSUID
> 19003754496
> 19003754496
> 19003754496
> 19003754496
> 19003754496
> 19033358336
> I've noticed that if I do -infix double SSUID- rather than -infix SSUID-, the problem will be fixed. Is this a bug of Stata or my computer? I've checked this problem on Stata 9 and Stata 10MP on PC.
> Thanks for calrifying. I'd also like to add that reading all the questions has improved my own knowledge to Stata tremendously!
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2008-05/msg00475.html","timestamp":"2014-04-18T22:12:36Z","content_type":null,"content_length":"8230","record_id":"<urn:uuid:01567c5a-f062-4498-bceb-77f73c942ae0>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
|
HowStuffWorks "Someone told me that if there are 20 people in a room, there's a 50/50 chance that two of them will have the same birthday. How can that be?"
This phenomenon actually has a name -- it is called the birthday paradox, and it turns out it is useful in several different areas (for example, cryptography and hashing algorithms). You can try it
yourself -- the next time you are at a gathering of 20 or 30 people, ask everyone for their birth date. It is likely that two people in the group will have the same birthday. It always surprises
The reason this is so surprising is because we are used to comparing our particular birthdays with others. For example, if you meet someone randomly and ask him what his birthday is, the chance of
the two of you having the same birthday is only 1/365 (0.27%). In other words, the probability of any two individuals having the same birthday is extremely low. Even if you ask 20 people, the
probability is still low -- less than 5%. So we feel like it is very rare to meet anyone with the same birthday as our own.
When you put 20 people in a room, however, the thing that changes is the fact that each of the 20 people is now asking each of the other 19 people about their birthdays. Each individual person only
has a small (less than 5%) chance of success, but each person is trying it 19 times. That increases the probability dramatically.
If you want to calculate the exact probability, one way to look at it is like this. Let's say you have a big wall calendar with all 365 days on it. You walk in and put a big X on your birthday. The
next person who walks in has only a 364 possible open days available, so the probability of the two dates not colliding is 364/365. The next person has only 363 open days, so the probability of not
colliding is 363/365. If you multiply the probabilities for all 20 people not colliding, then you get:
364/365 × 363/365 × … 365-20+1/365 = Chances of no collisions
That's the probability of no collisions, so the probability of collisions is 1 minus that number.
The next time you are with a group of 30 people, try it!
|
{"url":"http://www.howstuffworks.com/question261.htm","timestamp":"2014-04-16T13:03:37Z","content_type":null,"content_length":"119122","record_id":"<urn:uuid:ce5985f2-a517-4e98-931e-ba26a489f58c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|